
In the mathematical theory of random matrices, the Marchenko–Pastur distribution, or Marchenko–Pastur law, describes the asymptotic behavior of singular values of large rectangular random matrices. The theorem is named after Ukrainian mathematicians Vladimir Marchenko and Leonid Pastur who proved this result in 1967.

Figure: plot of the Marchenko–Pastur distribution for various values of \( \lambda \).

If X denotes an \( m\times n \) random matrix whose entries are independent identically distributed random variables with mean 0 and variance \( \sigma ^{2}<\infty \), let

\( {\displaystyle Y_{n}={\frac {1}{n}}XX^{T}} \)

and let \( {\displaystyle \lambda _{1},\,\lambda _{2},\,\dots ,\,\lambda _{m}} \) be the eigenvalues of \( Y_{n} \) (viewed as random variables). Finally, consider the random measure

\( {\displaystyle \mu _{m}(A)={\frac {1}{m}}\#\left\{\lambda _{j}\in A\right\},\quad A\subset \mathbb {R} .} \)

Theorem. Assume that \( {\displaystyle m,\,n\,\to \,\infty } \) so that the ratio \( {\displaystyle m/n\,\to \,\lambda \in (0,+\infty )} \). Then \( {\displaystyle \mu _{m}\,\to \,\mu } \) (in weak* topology in distribution), where

\( \mu(A) =\begin{cases} (1-\frac{1}{\lambda}) \mathbf{1}_{0\in A} + \nu(A),& \text{if } \lambda >1\\ \nu(A),& \text{if } 0\leq \lambda \leq 1, \end{cases} \)

and

\( {\displaystyle d\nu (x)={\frac {1}{2\pi \sigma ^{2}}}{\frac {\sqrt {(\lambda _{+}-x)(x-\lambda _{-})}}{\lambda x}}\,\mathbf {1} _{x\in [\lambda _{-},\lambda _{+}]}\,dx} \)

with

\( {\displaystyle \lambda _{\pm }=\sigma ^{2}(1\pm {\sqrt {\lambda }})^{2}.} \)
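As a concrete numerical illustration of the theorem, the following sketch (assuming NumPy; the sizes \( m=1000 \), \( n=2000 \), the Gaussian entries and \( \sigma ^{2}=1 \) are illustrative choices, not part of the statement) samples a random matrix, forms \( Y_{n} \), and compares the empirical eigenvalues with the limiting support \( [\lambda _{-},\lambda _{+}] \) and the first two moments of the limit law:

    import numpy as np

    # Minimal Monte Carlo check of the Marchenko-Pastur law (sigma^2 = 1).
    # The sizes m, n and the Gaussian entries are illustrative assumptions.
    m, n = 1000, 2000                      # aspect ratio lambda = m/n = 0.5
    lam = m / n
    X = np.random.standard_normal((m, n))  # i.i.d. entries, mean 0, variance 1
    Y = X @ X.T / n
    eigs = np.linalg.eigvalsh(Y)

    # Edges of the limiting support and low moments of the limiting law
    lam_minus, lam_plus = (1 - np.sqrt(lam)) ** 2, (1 + np.sqrt(lam)) ** 2
    print("limit support [%.3f, %.3f], empirical range [%.3f, %.3f]"
          % (lam_minus, lam_plus, eigs.min(), eigs.max()))
    print("empirical mean %.3f (limit 1.000), second moment %.3f (limit %.3f)"
          % (eigs.mean(), (eigs ** 2).mean(), 1 + lam))

For large \( m,\,n \) the empirical range and moments land close to their limiting values, in line with the theorem.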

The Marchenko–Pastur law also arises as the free Poisson law in free probability theory, having rate \( 1/\lambda \) and jump size \( \lambda \sigma ^{2} \).

Some transforms of this law

The Cauchy transform (which is the negative of the Stieltjes transformation), when \( {\displaystyle \sigma ^{2}=1} \), is given by

\( {\displaystyle G_{\mu }(z)={\frac {z+\lambda -1-{\sqrt {(z-\lambda -1)^{2}-4\lambda }}}{2\lambda z}}} \)
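As a hedged numerical sanity check (not part of the original text; it assumes NumPy and \( \sigma ^{2}=1 \)), the density of \( \nu \) can be recovered from this Cauchy transform through the Stieltjes inversion formula \( -{\tfrac {1}{\pi }}\lim _{\varepsilon \to 0^{+}}\operatorname {Im} G_{\mu }(x+i\varepsilon ) \); the only implementation subtlety is choosing the square-root branch for which \( \operatorname {Im} G_{\mu }\leq 0 \) in the upper half-plane:

    import numpy as np

    def mp_cauchy(z, lam):
        # Cauchy transform of the Marchenko-Pastur law with sigma^2 = 1.
        # Pick the square-root branch giving Im G(z) <= 0 for Im z > 0,
        # the defining property of a Cauchy transform of a positive measure.
        s = np.sqrt((z - lam - 1) ** 2 - 4 * lam)
        g_minus = (z + lam - 1 - s) / (2 * lam * z)
        g_plus = (z + lam - 1 + s) / (2 * lam * z)
        return np.where(g_minus.imag <= 0, g_minus, g_plus)

    lam, eps = 0.5, 1e-8
    lam_minus, lam_plus = (1 - np.sqrt(lam)) ** 2, (1 + np.sqrt(lam)) ** 2
    x = np.linspace(lam_minus + 1e-3, lam_plus - 1e-3, 7)

    recovered = -mp_cauchy(x + 1j * eps, lam).imag / np.pi         # Stieltjes inversion
    closed_form = np.sqrt((lam_plus - x) * (x - lam_minus)) / (2 * np.pi * lam * x)
    print(np.max(np.abs(recovered - closed_form)))                 # ~0 up to O(eps) effects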

This gives an \( R \)-transform of:

\( {\displaystyle R_{\mu }(z)={\frac {1}{1-\lambda z}}} \)

Application to correlation matrices

When applied to correlation matrices, \( \sigma ^{2}=1 \) and \( {\displaystyle \lambda =m/n} \), which leads to the bounds

\( {\displaystyle \lambda _{\pm }=\left(1\pm {\sqrt {\frac {m}{n}}}\right)^{2}.} \)

Hence, eigenvalues of a correlation matrix below \( \lambda_+ \) are often attributed to chance, while those above \( \lambda_+ \) are taken to represent significant common factors. For instance, the correlation matrix of a year-long series (i.e. 252 trading days) of 10 stock returns gives \( {\displaystyle \lambda _{+}=\left(1+{\sqrt {\frac {10}{252}}}\right)^{2}\approx 1.43} \). Of the 10 eigenvalues of the correlation matrix, only those above 1.43 would be considered significant.
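The following minimal sketch illustrates this recipe (it assumes NumPy and uses purely synthetic, uncorrelated returns in place of real market data, so typically no eigenvalue exceeds \( \lambda_+ \) by more than finite-size fluctuations):

    import numpy as np

    # Random-matrix threshold for correlation-matrix eigenvalues.
    # T trading days of m synthetic, uncorrelated "returns"; real data would be loaded instead.
    T, m = 252, 10
    lam_plus = (1 + np.sqrt(m / T)) ** 2       # the upper edge lambda_+ discussed above

    returns = np.random.standard_normal((T, m))
    corr = np.corrcoef(returns, rowvar=False)  # m x m sample correlation matrix
    eigs = np.linalg.eigvalsh(corr)

    print("threshold lambda_+ = %.3f" % lam_plus)
    print("eigenvalues above the threshold:", eigs[eigs > lam_plus])  # usually empty for pure noise

With real return data, any eigenvalues printed by the last line would be the candidates for significant common factors.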
See also

Wigner semicircle distribution
Tracy–Widom distribution

References

Götze, F.; Tikhomirov, A. (2004). "Rate of convergence in probability to the Marchenko–Pastur law". Bernoulli. 10 (3): 503–548. doi:10.3150/bj/1089206408.
Marchenko, V. A.; Pastur, L. A. (1967). "Распределение собственных значений в некоторых ансамблях случайных матриц" [Distribution of eigenvalues for some sets of random matrices]. Mat. Sb. N.S. (in Russian). 72 (114:4): 507–536. doi:10.1070/SM1967v001n04ABEH001994.
Nica, A.; Speicher, R. (2006). Lectures on the Combinatorics of Free Probability. Cambridge Univ. Press. pp. 204, 368. ISBN 0-521-85852-6.
