Consider X ∼ N(0,1). Also consider −X, which is identically distributed (by symmetry of the standard normal density about 0).
So we have that −X∼N(0,1).
But this tells us nothing about the relationship between X and −X as random variables! Equality in distribution does not mean the two are close (here |X − (−X)| = 2|X|), so this kind of "convergence in distribution" statement is very weak.
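A quick numerical sanity check (a sketch in numpy; the seed and sample size are arbitrary choices, not from the notes): samples of X and −X have the same empirical distribution, yet the two random variables are never close to each other.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)   # samples of X ~ N(0, 1)
neg_x = -x                         # corresponding samples of -X

# Same distribution: quantiles agree up to sampling noise.
print(np.quantile(x, [0.1, 0.5, 0.9]))
print(np.quantile(neg_x, [0.1, 0.5, 0.9]))

# But as random variables they are far apart: |X - (-X)| = 2|X|, which is not small.
print(np.mean(np.abs(x - neg_x)))  # ~ 2 * E|X| ≈ 1.6, nowhere near 0
```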
Strongest notion of convergence (#1): almost surely. Tn →a.s. T iff P({ω : Tn(ω) → T(ω)}) = 1. Consider a snowball left out in the sun. In a couple of hours, it'll have a random shape, random volume, and so on. But the ball itself is a definite thing --- the ω. Almost sure convergence says that for almost all of the balls ω, Tn(ω) converges to T(ω).
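A minimal sketch of what "almost every ω" looks like numerically, using the running mean of i.i.d. Uniform(0,1) draws as Tn (my choice of example; by the strong law of large numbers it converges a.s. to 1/2). Each simulated row plays the role of one ω.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 5, 10_000

# Each row is one ω: a path of running means of i.i.d. Uniform(0, 1) draws.
# SLLN: T_n(ω) = mean of the first n draws -> 1/2 for almost every ω.
draws = rng.uniform(size=(n_paths, n_steps))
running_means = np.cumsum(draws, axis=1) / np.arange(1, n_steps + 1)

# For every simulated ω, the tail of the path is already close to 1/2.
print(running_means[:, -1])                       # each entry ≈ 0.5
print(np.abs(running_means[:, -1] - 0.5).max())   # small for all paths
```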
#2 notion of convergence: convergence in probability. Tn →P T iff P(|Tn − T| ≥ ε) → 0 as n → ∞, for all ε > 0. This allows us to sweep a small-probability set of bad outcomes (where Tn is far from T) under the rug at each n.
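To see the definition in action, here is a sketch that Monte Carlo estimates P(|Tn − T| ≥ ε) for growing n, again with Tn = sample mean of Uniform(0,1) draws and T = 1/2 (an illustrative choice of sequence, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
eps, n_reps = 0.05, 1_000

# T_n = sample mean of n i.i.d. Uniform(0, 1) draws, T = 1/2 (a constant here).
for n in [10, 100, 1_000, 10_000]:
    t_n = rng.uniform(size=(n_reps, n)).mean(axis=1)
    prob = np.mean(np.abs(t_n - 0.5) >= eps)   # estimate of P(|T_n - T| >= eps)
    print(n, prob)                             # decreases toward 0 as n grows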
Convergence in Lp: Tn →Lp T iff E[|Tn − T|^p] → 0 as n → ∞. E.g., for p = 2, think of a Gaussian estimator whose variance shrinks to 0.
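For the p = 2 case, a small sketch where the L2 error is known exactly: Tn = sample mean of n i.i.d. N(0,1) draws, T = 0, so E[|Tn − T|^2] = Var(Tn) = 1/n (my choice of example for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n_reps = 20_000

# T_n = sample mean of n i.i.d. N(0, 1) draws, T = 0.  For p = 2,
# E|T_n - T|^2 = Var(T_n) = 1/n, which goes to 0 like 1/n.
for n in [10, 100, 1_000]:
    t_n = rng.standard_normal(size=(n_reps, n)).mean(axis=1)
    print(n, np.mean(t_n ** 2), 1 / n)   # Monte Carlo estimate vs. exact 1/n
```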
Convergence in distribution (weakest): Tn →d T iff P[Tn ≤ x] → P[T ≤ x] as n → ∞, for all x at which the CDF of T is continuous.
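A sketch of this CDF-pointwise definition using the CLT as the example (standardized means of Uniform(0,1) draws converging in distribution to N(0,1)); the use of scipy for the normal CDF is my choice here:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n_reps = 50_000

# CLT: T_n = sqrt(n) * (mean of n Uniform(0,1) draws - 1/2) / sqrt(1/12)  ->d  N(0, 1).
for n in [2, 10, 100]:
    means = rng.uniform(size=(n_reps, n)).mean(axis=1)
    t_n = np.sqrt(n) * (means - 0.5) / np.sqrt(1 / 12)
    for x in [-1.0, 0.0, 1.0]:
        print(n, x, np.mean(t_n <= x), norm.cdf(x))   # P[T_n <= x] vs. P[T <= x]
```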
If Xn →d X and Yn →P c, where c is a constant (so the Yn are asymptotically deterministic), then (Xn, Yn) →d (X, c). In particular, we get that Xn + Yn →d X + c and XnYn →d Xc. (This is Slutsky's theorem.)
This is important because, in general, convergence in distribution tells us nothing about the random variables themselves or their joint behavior, so we can't just add or multiply limits; but in this special case, where one limit is a constant, we can.
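A sketch of the standard use of this fact (my choice of illustration): by the CLT, Xn = √n(X̄ − μ) →d N(0, σ²), and the sample standard deviation Yn →P σ, a constant; Slutsky then gives Xn/Yn →d N(0, 1), which is why the t-statistic is asymptotically standard normal.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n, n_reps = 500, 20_000
mu, sigma = 3.0, 2.0

samples = rng.normal(mu, sigma, size=(n_reps, n))
x_n = np.sqrt(n) * (samples.mean(axis=1) - mu)   # X_n ->d N(0, sigma^2) by the CLT
y_n = samples.std(axis=1, ddof=1)                # Y_n ->P sigma (a constant)

# Slutsky: X_n / Y_n ->d N(0, 1); compare a few CDF values.
t_stat = x_n / y_n
for x in [-1.0, 0.0, 1.0]:
    print(x, np.mean(t_stat <= x), norm.cdf(x))
```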