Strong Convergence of the Empirical Distribution of Eigenvalues of Large Dimensional Random Matrices

by Jack W. Silverstein*

Department of Mathematics, Box 8205, North Carolina State University, Raleigh, North Carolina 27695-8205

Summary

Let $X$ be $n \times N$ containing i.i.d. complex entries with $E|X_{11} - EX_{11}|^2 = 1$, and let $T$ be an $n \times n$ random Hermitian non-negative definite matrix, independent of $X$. Assume that, almost surely, as $n \to \infty$, the empirical distribution function (e.d.f.) of the eigenvalues of $T$ converges in distribution, and that the ratio $n/N$ tends to a positive number. Then it is shown that, almost surely, the e.d.f. of the eigenvalues of $(1/N)XX^*T$ converges in distribution. The limit is nonrandom and is characterized in terms of its Stieltjes transform, which satisfies a certain equation.

* Supported by the National Science Foundation under grant DMS-9404047.

AMS 1991 subject classifications. Primary 60F15; Secondary 62H99.

Key Words and Phrases. Random matrix, empirical distribution function of eigenvalues, Stieltjes transform.
1. Introduction. For any square matrix $A$ with only real eigenvalues, let $F^A$ denote the empirical distribution function (e.d.f.) of the eigenvalues of $A$ (that is, $F^A(x)$ is the proportion of eigenvalues of $A$ that are $\le x$). This paper continues the work on the e.d.f. of the eigenvalues of matrices of the form $(1/N)XX^*T$, where $X$ is $n \times N$ containing i.i.d. complex entries with $E|X_{11} - EX_{11}|^2 = 1$, $T$ is $n \times n$ random Hermitian non-negative definite, independent of $X$, and $n$ and $N$ are large but on the same order of magnitude. Assuming the entries of $X$ and $T$ to be real, it is shown in Yin [4] that, if $n$ and $N$ both converge to infinity while their ratio $n/N$ converges to a positive quantity $c$, and the moments of $F^T$ converge almost surely to those of a nonrandom probability distribution function (p.d.f.) $H$ satisfying the Carleman sufficiency condition, then, with probability one, $F^{(1/N)XX^*T}$ converges in distribution to a nonrandom p.d.f. $F$.

The aim of this paper is to extend the limit theorem to the complex case, with arbitrary $H$ having mass on $[0, \infty)$, assuming convergence in distribution of $F^T$ to $H$ almost surely. Obviously the method of moments, used in the proof of the limit theorem in Yin [4], cannot be further relied on. As will be seen, the key tool in understanding both the limiting behavior of $F^{(1/N)XX^*T}$ and analytic properties of $F$ is the Stieltjes transform, defined for any p.d.f. $G$ as the analytic function

    $m_G(z) \equiv \int \frac{1}{\lambda - z}\, dG(\lambda), \qquad z \in \mathbb{C}^+ \equiv \{z \in \mathbb{C} : \Im z > 0\}.$

Due to the inversion formula

    $G\{[a, b]\} = \frac{1}{\pi} \lim_{\eta \to 0^+} \int_a^b \Im\, m_G(\xi + i\eta)\, d\xi$

($a$, $b$ continuity points of $G$), convergence in distribution of a tight sequence of p.d.f.'s is guaranteed once convergence of the corresponding Stieltjes transforms on a countable subset of $\mathbb{C}^+$ possessing at least one accumulation point is verified. One reason for using the Stieltjes transform to characterize spectral e.d.f.'s is the simple way it can be expressed in terms of the resolvent of the matrix. Indeed, for $p \times p$ $A$ having real eigenvalues $\lambda_1, \ldots, \lambda_p$,

    $m_{F^A}(z) = \frac{1}{p} \sum_{j=1}^p \frac{1}{\lambda_j - z} = \frac{1}{p} \operatorname{tr}(A - zI)^{-1}$

($\operatorname{tr}$ denoting trace, and $I$ the identity matrix).

Stieltjes transform methods are used in Marčenko and Pastur [1] on matrices of the form $A + (1/N)X^*TX$, where the entries of $X$ have finite absolute fourth moments (independence is replaced by a mild dependency condition reflected in their mixed second and fourth moments), $T = \operatorname{diag}(\tau_1, \ldots, \tau_n)$ with the $\tau_i$'s i.i.d. having p.d.f. $H$, where $H$ is an arbitrary p.d.f. on $[0, \infty)$, and $A$ is nonrandom Hermitian with $F^A$ converging vaguely to a (possibly degenerate) distribution function. For ease of exposition, we confine our discussion only to the case $A = 0$. It is proven that $F^{(1/N)X^*TX}(x) \to \underline{F}(x)$ in probability for all $x \ne 0$ (it can be shown that $\underline{F}$ is absolutely continuous away from 0; see Silverstein and Choi [3] for results on the analytic behavior of $\underline{F}$ and $F$), where $\underline{m}(z) = m_{\underline{F}}(z)$ is the solution to

    (1.1)    $\underline{m}(z) = \left( -z + c \int \frac{\tau\, dH(\tau)}{1 + \tau \underline{m}(z)} \right)^{-1}$

in the sense that, for every $z \in \mathbb{C}^+$, $\underline{m} = \underline{m}(z)$ is the unique solution in $\mathbb{C}^+$ to (1.1). Under the assumptions on $X$ originally given in this paper, strong convergence for diagonal $T$ is proven in Silverstein and Bai [2] with no restriction on $\tau_1, \ldots, \tau_n$ other than that their e.d.f. converges in distribution to $H$ almost surely. The proof takes an approach more direct than in Marčenko and Pastur [1] (which involves the construction of a certain partial
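The resolvent expression for the Stieltjes transform is easy to exercise numerically. The sketch below (hypothetical sizes and test matrix, not from the paper) checks that the eigenvalue form and the trace form of $m_{F^A}(z)$ agree:

```python
import numpy as np

# Numerical sanity check (hypothetical example): for Hermitian A,
# m_{F^A}(z) = (1/p) sum_j 1/(lambda_j - z) = (1/p) tr (A - zI)^{-1}.
rng = np.random.default_rng(0)
p = 200
G = rng.standard_normal((p, p))
A = (G + G.T) / 2                  # real symmetric (hence Hermitian) test matrix
z = 0.3 + 1.0j                     # a point in the upper half plane C+

eigs = np.linalg.eigvalsh(A)
m_eig = np.mean(1.0 / (eigs - z))                        # eigenvalue form
m_res = np.trace(np.linalg.inv(A - z * np.eye(p))) / p   # resolvent form

assert abs(m_eig - m_res) < 1e-10
assert m_eig.imag > 0              # Stieltjes transforms map C+ into C+
```

The second assertion reflects the property used throughout the paper: the Stieltjes transform of a p.d.f. maps $\mathbb{C}^+$ into $\mathbb{C}^+$.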

differential equation), providing a clear understanding of why $\underline{m}$ satisfies (1.1), at the same time displaying where random behavior primarily comes into play (basically from Lemma 2.1 given below).

The difference between the spectra of $(1/N)X^*TX$ and $(1/N)XX^*T$ is $|N - n|$ zero eigenvalues, expressed via their e.d.f.'s by the relation

    $F^{(1/N)X^*TX} = \left(1 - \frac{n}{N}\right) 1_{[0,\infty)} + \frac{n}{N}\, F^{(1/N)XX^*T}$

($1_A$ denoting the indicator function of the set $A$). It follows that their Stieltjes transforms satisfy

    (1.2)    $m_{F^{(1/N)X^*TX}}(z) = -\frac{1 - n/N}{z} + \frac{n}{N}\, m_{F^{(1/N)XX^*T}}(z).$

Therefore, in the limit (when $\underline{F}$ and $F$ are known to exist)

    $\underline{F} = (1 - c)\, 1_{[0,\infty)} + c\, F,$

and

    (1.3)    $\underline{m}(z) = -\frac{1 - c}{z} + c\, m_F(z).$

From (1.1) and (1.3) it is straightforward to conclude that, for each $z \in \mathbb{C}^+$, $m = m_F(z)$ is a solution to

    (1.4)    $m = \int \frac{dH(\tau)}{\tau(1 - c - czm) - z}.$
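Equation (1.4) can be solved numerically. In the hedged sketch below (the two-point $H$, the ratio $c$, and the evaluation point $z$ are illustrative choices, not from the paper), we iterate the fixed-point map suggested by (1.1), which sends the upper half plane into itself, recover $m$ through (1.3), and check the result directly against (1.4):

```python
import numpy as np

# Hypothetical illustration: solve (1.4) at one z in C+ for a two-point H
# (mass 1/2 each at tau = 1 and tau = 3) and c = 0.5. We iterate the map
# m_ <- 1/(-z + c * int tau dH(tau)/(1 + tau m_)) from (1.1), which maps
# C+ into itself, then recover m via (1.3).
c = 0.5
taus = np.array([1.0, 3.0])
w = np.array([0.5, 0.5])       # weights of H
z = 2.0 + 1.0j

m_u = 1j                       # start the companion transform in C+
for _ in range(5000):
    m_u = 1.0 / (-z + c * np.sum(w * taus / (1.0 + taus * m_u)))

m = (m_u + (1.0 - c) / z) / c                                  # invert (1.3)
residual = m - np.sum(w / (taus * (1.0 - c - c * z * m) - z))  # plug into (1.4)
assert abs(residual) < 1e-8    # m solves (1.4)
assert m_u.imag > 0            # -(1-c)/z + c m lies in C+, i.e. m is in D
```

The final assertion verifies membership in the uniqueness set described next.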
It is unique in the set

    $D \equiv \left\{ m \in \mathbb{C} : -\frac{1-c}{z} + cm \in \mathbb{C}^+ \right\}.$

It is remarked here that (1.1) reveals much of the analytic behavior of $\underline{F}$, and consequently of $F$ (Silverstein and Choi [3]), and should be viewed as another indication of the importance of Stieltjes transforms to these types of matrices. Even when $H$ has all moments, it seems unlikely that much information about $F$ can be extracted from the explicit expressions for the moments of $F$ given in Yin [4].

Using again the Stieltjes transform as the essential tool in analyzing convergence, this paper will establish strong convergence of $F^{(1/N)XX^*T}$ to $F$ under the weakest assumptions on non-negative definite $T$. In order to keep notation to a minimum, the statement of the result and its proof will be expressed in terms of the Hermitian matrix $(1/N)T^{1/2}XX^*T^{1/2}$, $T^{1/2}$ denoting a Hermitian square root of $T$. The following theorem will be proven.

Theorem 1.1. Assume on a common probability space:

a) For $n = 1, 2, \ldots$, $X = X_n = (X^n_{ij})$, $n \times N$, $X^n_{ij} \in \mathbb{C}$, identically distributed for all $n, i, j$, independent across $i, j$ for each $n$, $E|X^1_{11} - EX^1_{11}|^2 = 1$.

b) $N = N(n)$ with $n/N \to c > 0$ as $n \to \infty$.

c) $T = T_n$, $n \times n$ random Hermitian non-negative definite, with $F^{T_n}$ converging almost surely in distribution to a p.d.f. $H$ on $[0, \infty)$ as $n \to \infty$.

d) $X$ and $T$ are independent.

Let $T^{1/2}$ be the Hermitian non-negative square root of $T$, and let $B = B_n = (1/N)T^{1/2}XX^*T^{1/2}$ (obviously $F^B = F^{(1/N)XX^*T}$). Then, almost surely, $F^B$ converges in distribution, as $n \to \infty$, to a (nonrandom) p.d.f. $F$, whose Stieltjes transform $m(z)$ ($z \in \mathbb{C}^+$) satisfies (1.4), in the sense that, for each $z \in \mathbb{C}^+$, $m = m(z)$ is the unique solution to (1.4) in $D \equiv \{m \in \mathbb{C} : -(1-c)/z + cm \in \mathbb{C}^+\}$.

The proof will be given in the next section. Much of the groundwork has already been laid out in Silverstein and Bai [2], in particular the first step, which is to truncate and centralize the entries of $X$, and Lemma 2.1. Therefore we will, on occasion, refer the reader to the latter paper for further details.
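Theorem 1.1 can be probed by simulation. In the hypothetical sketch below (the sizes, the two-point spectrum of $T$, and the point $z$ are illustrative choices, not from the paper), the empirical Stieltjes transform $(1/n)\operatorname{tr}(B - zI)^{-1}$ of $B = (1/N)T^{1/2}XX^*T^{1/2}$ is compared with the $m(z)$ obtained by solving (1.4) through the fixed-point iteration of (1.1):

```python
import numpy as np

# Hypothetical end-to-end check of Theorem 1.1: T has eigenvalues 1 and 3 in
# equal proportion (so F^T -> H exactly), real standardized entries in X.
rng = np.random.default_rng(2)
n, N = 1000, 2000
c = n / N
taus = np.repeat([1.0, 3.0], n // 2)       # spectrum of (diagonal) T
X = rng.standard_normal((n, N))            # mean 0, variance 1 entries
TX = np.sqrt(taus)[:, None] * X            # T^{1/2} X
B = TX @ TX.T / N                          # (1/N) T^{1/2} X X^* T^{1/2}

z = 2.0 + 1.0j
m_emp = np.mean(1.0 / (np.linalg.eigvalsh(B) - z))   # (1/n) tr (B - zI)^{-1}

m_u = 1j                                   # solve (1.4) via the map from (1.1)
for _ in range(2000):
    m_u = 1.0 / (-z + c * np.mean(taus / (1.0 + taus * m_u)))
m_lim = (m_u + (1.0 - c) / z) / c          # recover m(z) through (1.3)

assert abs(m_emp - m_lim) < 0.02           # close already for n = 1000
assert m_emp.imag > 0
```

The tolerance is generous; the fluctuation of the trace statistic is of smaller order than $1/\sqrt{n}$ here.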
2. Proof of Theorem 1.1. As in Silverstein and Bai [2], the dependency of the variables on $n$ will occasionally be dropped. Through two successive stages of truncation and centralization (truncate and centralize, then truncate at a multiple of $\ln n$ and centralize again) and a final scaling, the main part of Section 3 in Silverstein and Bai [2] argues that the assumptions on the entries of $X$ can be replaced by standardized variables bounded in absolute value by a fixed multiple of $\ln n$. Write $T$ in its spectral decomposition: $T = U(\operatorname{diag}(\tau_1, \ldots, \tau_n))U^*$. By replacing the diagonal matrix in that paper with $U(\operatorname{diag}(\tau_1, \ldots, \tau_n))U^*$, exactly the same argument applies in the present case. However, the following proof requires truncation of the eigenvalues of $T$. It is shown in Silverstein and Bai [2] (and used in Section 3 of that paper) that for any $n \times n$ matrix $Q$, if $\widehat{T}$ denotes $T$ with its eigenvalues exceeding $\log n$ replaced by zero, then

    $\| F^{QTQ^*} - F^{Q\widehat{T}Q^*} \| \longrightarrow 0 \quad \text{a.s. as } n \to \infty,$

where $\|\cdot\|$ here denotes the sup norm on functions. Therefore, we may assume, along with the conditions in Theorem 1.1:

1) $|X_{11}| \le \log n$, where $\log n$ denotes the logarithm of $n$ with a certain base (defined in Silverstein and Bai [2]);

2) $EX_{11} = 0$, $E|X_{11}|^2 = 1$;

3) $\|T\| \le \log n$, where here and throughout the

following, $\|\cdot\|$ denotes the spectral norm on matrices.

The following two results are derived in Silverstein and Bai [2]. The first accounts for much of the truth of Theorem 1.1 due to random behavior. The second relies on the following fact (which contributes much to the form of equation (1.4)): for $n \times n$ $B$, $\tau \in \mathbb{R}$, and $q \in \mathbb{C}^n$ for which $B - zI$ and $B + \tau qq^* - zI$ are invertible,

    (2.1)    $q^*(B + \tau qq^* - zI)^{-1} = \frac{q^*(B - zI)^{-1}}{1 + \tau q^*(B - zI)^{-1}q}$

(follows from $q^*(B - zI)^{-1}(B + \tau qq^* - zI) = (1 + \tau q^*(B - zI)^{-1}q)\, q^*$).

Lemma 2.1 (Lemma 3.1 of Silverstein and Bai [2]). Let $C$ be an $n \times n$ matrix with $\|C\| \le 1$, and $Y = (X_1, \ldots, X_n)^T$, where the $X_i$'s are i.i.d. satisfying conditions 1) and 2) above. Then

    $E|Y^*CY - \operatorname{tr} C|^6 \le K n^3 \log^{12} n,$

where the constant $K$ does not depend on $n$, $C$, nor on the distribution of $X_1$.

Lemma 2.2 (Lemma 2.6 of Silverstein and Bai [2]). Let $z \in \mathbb{C}^+$ with $v = \Im z$, $A$ and $B$ $n \times n$ with $B$ Hermitian, $\tau \in \mathbb{R}$, and $q \in \mathbb{C}^n$. Then

    $\left| \operatorname{tr}\left[ \left( (B - zI)^{-1} - (B + \tau qq^* - zI)^{-1} \right) A \right] \right| \le \frac{\|A\|}{v}.$
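The rank-one identity (2.1) is exact, so it can be verified directly on a random instance. A quick hedged check (hypothetical inputs, diagonalizable real symmetric $B$):

```python
import numpy as np

# Numerical check of (2.1):
# q*(B + tau q q* - zI)^{-1} = q*(B - zI)^{-1} / (1 + tau q*(B - zI)^{-1} q).
rng = np.random.default_rng(3)
n = 50
G = rng.standard_normal((n, n))
B = (G + G.T) / 2                  # Hermitian
q = rng.standard_normal(n)
tau = 1.7
z = 0.5 + 1.0j
I = np.eye(n)

R = np.linalg.inv(B - z * I)                       # (B - zI)^{-1}
lhs = q @ np.linalg.inv(B + tau * np.outer(q, q) - z * I)
rhs = (q @ R) / (1.0 + tau * (q @ R @ q))
assert np.allclose(lhs, rhs, atol=1e-10)
```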
The next lemma contains some additional inequalities.

Lemma 2.3. For $z = u + iv$, $v > 0$, let $m_1(z)$, $m_2(z)$ be Stieltjes transforms of any two p.d.f.'s, $A$ and $B$ $n \times n$ with $A$ Hermitian non-negative definite, and $r \in \mathbb{C}^n$. Then

a) $\|(m_1(z)A + I)^{-1}\| \le \max(4\|A\|/v,\ 2)$;

b) $|\operatorname{tr}\, B((m_1(z)A + I)^{-1} - (m_2(z)A + I)^{-1})| \le |m_2(z) - m_1(z)|\, n\, \|B\|\, \|A\|\, (\max(4\|A\|/v,\ 2))^2$;

c) $|r^*B(m_1(z)A + I)^{-1}r - r^*B(m_2(z)A + I)^{-1}r| \le |m_2(z) - m_1(z)|\, \|r\|^2\, \|B\|\, \|A\|\, (\max(4\|A\|/v,\ 2))^2$

($\|r\|$ denoting the Euclidean norm of $r$).

Proof: Notice b) and c) follow easily from a) using basic matrix properties. We have, for any Stieltjes transform $m$ and any positive $x$,

    $|mx + 1|^2 = (x\, \Re m + 1)^2 + (x\, \Im m)^2.$

Using the Cauchy–Schwarz inequality it is easy to show $|\Re m| \le \sqrt{\Im m / v}$. This leads us to consider minimizing, over $y = x\, \Re m$, the expression $(v y^2 / x)^2 + (y + 1)^2$, that is, the function

    $f(y) = a y^4 + (y + 1)^2, \qquad a = (v/x)^2.$

Upon considering $|y|$ on either side of $1/2$ we find $\min_y f \ge \min(a/16,\ 1/4)$, from which a) follows.

We proceed with the proof of Theorem 1.1. Fix $z = u + iv \in \mathbb{C}^+$. Let $\underline{B} = \underline{B}_n = (1/N)X^*TX$, $\underline{m}_n = m_{F^{\underline{B}_n}}$, and $m_n = m_{F^{B_n}}$. Let $c_n = n/N$. In Silverstein and Bai [2] it is argued that, almost surely, the sequence $\{F^{\underline{B}_n}\}$, for diagonal $T$, is tight. The argument carries directly over to the present case. Thus the quantity

    $\delta \equiv \inf_n \Im\, \underline{m}_n(z) = \inf_n \int \frac{v\, dF^{\underline{B}_n}(\lambda)}{(\lambda - u)^2 + v^2} \ge \inf_n \int \frac{v\, dF^{\underline{B}_n}(\lambda)}{2(\lambda^2 + u^2) + v^2}$

is positive almost surely.

For $j = 1, \ldots, N$, let $q_j = (1/\sqrt{N})X_{\cdot j}$ ($X_{\cdot j}$ denoting the $j$th column of $X$), $r_j = (1/\sqrt{N})T^{1/2}X_{\cdot j}$, and $B_{(j)} = B - r_j r_j^*$. Write

    $B - zI + zI = \sum_{j=1}^N r_j r_j^*.$

Taking the inverse of $B - zI$ on the right on both sides and using (2.1) we find

    $I + z(B - zI)^{-1} = \sum_{j=1}^N \frac{r_j r_j^*(B_{(j)} - zI)^{-1}}{1 + r_j^*(B_{(j)} - zI)^{-1} r_j}.$

Taking the trace on both sides and dividing by $N$ we have

    $c_n + z c_n m_n(z) = \frac{1}{N} \sum_{j=1}^N \frac{r_j^*(B_{(j)} - zI)^{-1} r_j}{1 + r_j^*(B_{(j)} - zI)^{-1} r_j} = 1 - \frac{1}{N} \sum_{j=1}^N \frac{1}{1 + r_j^*(B_{(j)} - zI)^{-1} r_j}.$

From (1.2) we see that

    (2.2)    $\underline{m}_n(z) = -\frac{1}{N} \sum_{j=1}^N \frac{1}{z\left(1 + r_j^*(B_{(j)} - zI)^{-1} r_j\right)}.$

For each $j$ we have

    $\Im\, r_j^*((1/z)B_{(j)} - I)^{-1} r_j = r_j^*((1/\bar{z})B_{(j)} - I)^{-1} \left( \frac{v}{|z|^2} B_{(j)} \right) ((1/z)B_{(j)} - I)^{-1} r_j \ge 0.$

Since $z(B_{(j)} - zI)^{-1} = ((1/z)B_{(j)} - I)^{-1}$, it follows that $\Im\, z(1 + r_j^*(B_{(j)} - zI)^{-1} r_j) \ge v$. Therefore

    (2.3)    $\left| \frac{1}{z\left(1 + r_j^*(B_{(j)} - zI)^{-1} r_j\right)} \right| \le \frac{1}{v}.$

Write

    $B - zI - (-z\underline{m}_n(z)T - zI) = \sum_{j=1}^N r_j r_j^* - (-z\underline{m}_n(z))T.$

Taking inverses and using (2.1), (2.2) we have

    $(-z\underline{m}_n(z)T - zI)^{-1} - (B - zI)^{-1}$
    $= \sum_{j=1}^N \frac{1}{1 + r_j^*(B_{(j)} - zI)^{-1} r_j} \left[ (-z\underline{m}_n(z)T - zI)^{-1} r_j r_j^*(B_{(j)} - zI)^{-1} - \frac{1}{N} (-z\underline{m}_n(z)T - zI)^{-1} T (B - zI)^{-1} \right].$

Taking the trace and dividing by $n$ we find

    (2.4)    $\frac{1}{n} \operatorname{tr}(-z\underline{m}_n(z)T - zI)^{-1} - m_n(z) = \frac{1}{n} \sum_{j=1}^N \frac{d_j}{1 + r_j^*(B_{(j)} - zI)^{-1} r_j},$

where

    $d_j = r_j^*(B_{(j)} - zI)^{-1}(-z\underline{m}_n(z)T - zI)^{-1} r_j - \frac{1}{N} \operatorname{tr}\left[ (-z\underline{m}_n(z)T - zI)^{-1} T (B - zI)^{-1} \right].$

From Lemma 2.2 we see that

    $\max_{j \le N} \left| \frac{1}{N} \operatorname{tr}\left[ (-z\underline{m}_n(z)T - zI)^{-1} T \left( (B - zI)^{-1} - (B_{(j)} - zI)^{-1} \right) \right] \right| \le \frac{\|(-z\underline{m}_n(z)T - zI)^{-1} T\|}{N v}.$
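Identity (2.2) is an exact algebraic fact for each fixed $n$, so it can be verified directly. A hedged small-scale check (diagonal $T$ and real entries, purely for convenience; not from the paper):

```python
import numpy as np

# Check (2.2): m of F^{(1/N)X*TX} equals
# -(1/N) sum_j 1 / ( z (1 + r_j^* (B_(j) - zI)^{-1} r_j) ).
rng = np.random.default_rng(4)
n, N = 30, 50
X = rng.standard_normal((n, N))
taus = rng.uniform(0.5, 2.0, n)
r = np.sqrt(taus)[:, None] * X / np.sqrt(N)   # columns r_j = (1/sqrt N) T^{1/2} X_.j
B = r @ r.T                                   # (1/N) T^{1/2} X X^* T^{1/2}
z = 1.0 + 1.0j
I = np.eye(n)

B_under = X.T @ (taus[:, None] * X) / N       # (1/N) X^* T X
m_under = np.mean(1.0 / (np.linalg.eigvalsh(B_under) - z))

s = 0.0
for j in range(N):
    rj = r[:, j]
    Rj = np.linalg.inv(B - np.outer(rj, rj) - z * I)   # (B_(j) - zI)^{-1}
    s += 1.0 / (z * (1.0 + rj @ Rj @ rj))

assert abs(m_under + s / N) < 1e-10           # (2.2), up to round-off
```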
Let, for each $j$, $\underline{m}_{(j)}(z) = -(1 - c_n)/z + c_n m_{F^{B_{(j)}}}(z)$. From (1.2) and Lemma 2.2 we have, for any positive $\alpha$,

    (2.5)    $\max_{j \le N} \log^\alpha n\, \left| \underline{m}_{(j)}(z) - \underline{m}_n(z) \right| \to 0 \quad \text{as } n \to \infty.$

Also, by writing

    $\underline{m}_{(j)}(z) = -\frac{1}{Nz} + \frac{N-1}{N} \left( -\frac{1 - n/(N-1)}{z} + \frac{n}{N-1}\, m_{F^{B_{(j)}}}(z) \right),$

from (1.2) we see that $\underline{m}_{(j)}$ is the Stieltjes transform of a p.d.f.

Using condition 3), Lemma 2.1, Lemma 2.3 a), the fact that $q_j$ is independent of both $B_{(j)}$ and $\underline{m}_{(j)}(z)$, and the bound $\|(B_{(j)} - zI)^{-1}\| \le 1/v$, valid for any Hermitian $B_{(j)}$, we find

    $E\left| r_j^*(B_{(j)} - zI)^{-1} r_j - \frac{1}{N} \operatorname{tr}\, T(B_{(j)} - zI)^{-1} \right|^6 \le \frac{K \|T\|^6 \log^{12} n}{n^3 v^6}$

and, for $n$ sufficiently large,

    $E\left| r_j^*(B_{(j)} - zI)^{-1}(\underline{m}_{(j)}(z)T + I)^{-1} r_j - \frac{1}{N} \operatorname{tr}\, T(B_{(j)} - zI)^{-1}(\underline{m}_{(j)}(z)T + I)^{-1} \right|^6 \le \frac{K \log^{24} n}{n^3 v^{12}}.$

Therefore we have, almost surely, as $n \to \infty$,

    (2.6)    $\max_{j \le N} \max\left[ \left| r_j^*(B_{(j)} - zI)^{-1} r_j - \frac{1}{N} \operatorname{tr}\, T(B_{(j)} - zI)^{-1} \right|,\ \left| r_j^*(B_{(j)} - zI)^{-1}(\underline{m}_{(j)}(z)T + I)^{-1} r_j - \frac{1}{N} \operatorname{tr}\, T(B_{(j)} - zI)^{-1}(\underline{m}_{(j)}(z)T + I)^{-1} \right| \right] \to 0.$

We concentrate on a realization for which (2.6) holds, $\{F^{\underline{B}_n}\}$ is tight (implying $\delta > 0$), and $F^{T_n}$ converges in distribution to $H$. From condition 3), Lemma 2.2, Lemma 2.3 b), c), (2.5), and (2.6) we find that

    $\max_{j \le N} |d_j| \to 0 \quad \text{as } n \to \infty.$

Therefore, from (2.3), (2.4),

    $\frac{1}{n} \operatorname{tr}(-z\underline{m}_n(z)T - zI)^{-1} - m_n(z) \to 0 \quad \text{as } n \to \infty.$

Consider a subsequence $\{n_i\}$ on which $\underline{m}_n(z)$ (bounded in absolute value by $1/v$) converges to a number $\underline{m}$. Let $m$, satisfying $\underline{m} = -(1-c)/z + cm$, be the corresponding limit of $m_n(z)$. We have $\Im\, \underline{m} \ge \delta$, so that $\underline{m} \in \mathbb{C}^+$. We use the facts that, for $\tau \ge 0$, $|1/(\tau\underline{m} + 1)| \le |\underline{m}|/\Im\, \underline{m}$ and $|\tau/(\tau\underline{m} + 1)| \le 1/\Im\, \underline{m}$, to conclude that the function $\tau \mapsto 1/(\tau\underline{m} + 1)$
is bounded and satisfies

    $\frac{1}{\tau \underline{m}_{n_i}(z) + 1} \to \frac{1}{\tau \underline{m} + 1} \quad \text{uniformly in } \tau \ge 0.$

Therefore

    $\frac{1}{n_i} \operatorname{tr}(-z\underline{m}_{n_i}(z)T - zI)^{-1} = -\frac{1}{z} \int \frac{dF^{T_{n_i}}(\tau)}{\tau \underline{m}_{n_i}(z) + 1} \to -\frac{1}{z} \int \frac{dH(\tau)}{\tau \underline{m} + 1} \quad \text{as } i \to \infty.$

Thus

    $m = -\frac{1}{z} \int \frac{dH(\tau)}{\tau \underline{m} + 1},$

and, in view of $\underline{m} = -(1-c)/z + cm$, $m$ satisfies (1.4); moreover, $\underline{m} \in \mathbb{C}^+$ means $m \in D$. Since the solution to (1.4) in $D$ is unique, we conclude that $m_n(z)$ converges, for every $z \in \mathbb{C}^+$, to the unique solution $m(z)$ of (1.4) in $D$. Thus, with probability one, $F^{B_n}$ converges in distribution to $F$ having Stieltjes transform defined through (1.4). This completes the proof of Theorem 1.1.
REFERENCES

[1] V. A. Marčenko and L. A. Pastur, Distribution of eigenvalues for some sets of random matrices, Math. USSR-Sb. 1 (1967), pp. 457-483.

[2] J. W. Silverstein and Z. D. Bai, On the empirical distribution of eigenvalues of a class of large dimensional random matrices, J. Multivariate Anal. 54 (1995), pp. 175-192.

[3] J. W. Silverstein and S. I. Choi, Analysis of the limiting spectral distribution of large dimensional random matrices, J. Multivariate Anal. 54 (1995), pp. 295-309.

[4] Y. Q. Yin, Limiting spectral distribution for a class of random matrices, J. Multivariate Anal. 20 (1986), pp. 50-68.