In mathematics, in particular linear algebra, the Sherman–Morrison formula,[1][2][3] named after Jack Sherman and Winifred J. Morrison, computes the inverse of the sum of an invertible matrix A and the outer product, \( {\displaystyle uv^{\textsf {T}}} \), of vectors u and v. The Sherman–Morrison formula is a special case of the Woodbury formula. Though named after Sherman and Morrison, it appeared already in earlier publications.[4]
Statement
Suppose \( {\displaystyle A\in \mathbb {R} ^{n\times n}} \) is an invertible square matrix and \( {\displaystyle u,v\in \mathbb {R} ^{n}} \) are column vectors. Then \( {\displaystyle A+uv^{\textsf {T}}} \) is invertible if and only if \( {\displaystyle 1+v^{\textsf {T}}A^{-1}u\neq 0} \). In this case,
\( {\displaystyle \left(A+uv^{\textsf {T}}\right)^{-1}=A^{-1}-{A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}.} \)
Here, \( {\displaystyle uv^{\textsf {T}}} \) is the outer product of the vectors u and v. The general form shown here is the one published by Bartlett.[5]
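The statement above can be checked numerically. The following NumPy sketch (variable names are illustrative, not from the source) builds a random well-conditioned matrix, applies the formula, and compares the result with a direct inversion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant, hence invertible
u = rng.standard_normal(n)
v = rng.standard_normal(n)

Ainv = np.linalg.inv(A)
denom = 1.0 + v @ Ainv @ u
assert abs(denom) > 1e-12  # the invertibility condition of the statement

# Sherman–Morrison: (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
B_inv = Ainv - np.outer(Ainv @ u, v @ Ainv) / denom

# Compare against inverting A + u v^T from scratch
direct = np.linalg.inv(A + np.outer(u, v))
print(np.allclose(B_inv, direct))  # True
```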
Proof
( \( \Leftarrow \) ) To prove the backward direction ( \( {\displaystyle 1+v^{\textsf {T}}A^{-1}u\neq 0\Rightarrow A+uv^{\textsf {T}}} \) is invertible with inverse given as above), we verify the properties of the inverse. A matrix Y (in this case the right-hand side of the Sherman–Morrison formula) is the inverse of a matrix X (in this case \( {\displaystyle A+uv^{\textsf {T}}} \)) if and only if XY = YX = I.
We first verify that the right-hand side (Y) satisfies XY = I.
\( {\displaystyle {\begin{aligned}XY&=\left(A+uv^{\textsf {T}}\right)\left(A^{-1}-{A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\right)\\[6pt]&=AA^{-1}+uv^{\textsf {T}}A^{-1}-{AA^{-1}uv^{\textsf {T}}A^{-1}+uv^{\textsf {T}}A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\\[6pt]&=I+uv^{\textsf {T}}A^{-1}-{uv^{\textsf {T}}A^{-1}+uv^{\textsf {T}}A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\\[6pt]&=I+uv^{\textsf {T}}A^{-1}-{u\left(1+v^{\textsf {T}}A^{-1}u\right)v^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\\[6pt]&=I+uv^{\textsf {T}}A^{-1}-uv^{\textsf {T}}A^{-1}\\[6pt]&=I.\end{aligned}}} \)
To end the proof of this direction, we need to show that \( {\displaystyle YX=I} \) in a similar way as above:
\( {\displaystyle YX=\left(A^{-1}-{A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\right)\left(A+uv^{\textsf {T}}\right)=I.} \)
( \( \Rightarrow \) ) Conversely, if \( {\displaystyle 1+v^{\textsf {T}}A^{-1}u=0} \), then letting \( {\displaystyle x=A^{-1}u} \) (which is nonzero, since \( {\displaystyle v^{\textsf {T}}A^{-1}u=-1} \) forces \( {\displaystyle u\neq 0} \)), we have \( {\displaystyle \left(A+uv^{\textsf {T}}\right)x=u+u\left(v^{\textsf {T}}A^{-1}u\right)=u\left(1+v^{\textsf {T}}A^{-1}u\right)=0} \). Hence \( {\displaystyle \left(A+uv^{\textsf {T}}\right)} \) has a nontrivial kernel and is therefore not invertible.
Application
If the inverse of A is already known, the formula provides a numerically cheap way to compute the inverse of A corrected by the matrix \( {\displaystyle uv^{\textsf {T}}} \) (depending on the point of view, the correction may be seen as a perturbation or as a rank-1 update). The computation is relatively cheap because the inverse of \( {\displaystyle A+uv^{\textsf {T}}} \) does not have to be computed from scratch (which in general is expensive), but can be computed by correcting (or perturbing) \( A^{-1}. \)
Using unit columns (columns from the identity matrix) for u or v, individual columns or rows of A may be manipulated and a correspondingly updated inverse computed relatively cheaply in this way.[6] In the general case, where \( A^{-1} \) is an n-by-n matrix and u and v are arbitrary vectors of dimension n, the whole matrix is updated[5] and the computation takes \( 3n^{2} \) scalar multiplications.[7] If u is a unit column, the computation takes only \( 2n^{2} \) scalar multiplications, and the same holds if v is a unit column. If both u and v are unit columns, the computation takes only \( n^{2} \) scalar multiplications.
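The column-update use case described above can be sketched as follows. This is a minimal illustration (the helper name `sm_update` is ours, not from the source): replacing column j of A corresponds to the rank-1 update with \( {\displaystyle u={\text{new column}}-A_{:,j}} \) and \( {\displaystyle v=e_{j}} \), and the known inverse is corrected in \( O(n^{2}) \) work instead of a fresh \( O(n^{3}) \) inversion:

```python
import numpy as np

def sm_update(Ainv, u, v):
    """Sherman-Morrison rank-1 update of a known inverse.

    Each of the three products below costs about n^2 scalar
    multiplications; if v is a unit column e_j, the row v @ Ainv
    is just Ainv[j, :] and one of them can be skipped.
    """
    Au = Ainv @ u
    vA = v @ Ainv
    denom = 1.0 + v @ Au
    if abs(denom) < 1e-12:
        raise np.linalg.LinAlgError("updated matrix is (numerically) singular")
    return Ainv - np.outer(Au, vA) / denom

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
Ainv = np.linalg.inv(A)

# Replace column j of A: u = new_col - A[:, j], v = e_j (a unit column)
j = 2
new_col = rng.standard_normal(n)
u = new_col - A[:, j]
e_j = np.zeros(n)
e_j[j] = 1.0

Ainv_new = sm_update(Ainv, u, e_j)
A_new = A.copy()
A_new[:, j] = new_col
print(np.allclose(Ainv_new, np.linalg.inv(A_new)))  # True
```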
This formula also has application in theoretical physics. Namely, in quantum field theory, one uses this formula to calculate the propagator of a spin-1 field.[8][circular reference] The inverse propagator (as it appears in the Lagrangian) has the form \( {\displaystyle \left(A+uv^{\textsf {T}}\right)} \). One uses the Sherman-Morrison formula to calculate the inverse (satisfying certain time-ordering boundary conditions) of the inverse propagator—or simply the (Feynman) propagator—which is needed to perform any perturbative calculation[9] involving the spin-1 field.
Alternative verification
Following is an alternate verification of the Sherman–Morrison formula using the easily verifiable identity
\( {\displaystyle \left(I+wv^{\textsf {T}}\right)^{-1}=I-{\frac {wv^{\textsf {T}}}{1+v^{\textsf {T}}w}}}. \)
Let
\( {\displaystyle u=Aw,\quad {\text{and}}\quad A+uv^{\textsf {T}}=A\left(I+wv^{\textsf {T}}\right),} \)
then
\( {\displaystyle \left(A+uv^{\textsf {T}}\right)^{-1}=\left(I+wv^{\textsf {T}}\right)^{-1}A^{-1}=\left(I-{\frac {wv^{\textsf {T}}}{1+v^{\textsf {T}}w}}\right)A^{-1}}. \)
Substituting \( {\displaystyle w=A^{-1}u} \) gives
\( {\displaystyle \left(A+uv^{\textsf {T}}\right)^{-1}=\left(I-{\frac {A^{-1}uv^{\textsf {T}}}{1+v^{\textsf {T}}A^{-1}u}}\right)A^{-1}=A^{-1}-{\frac {A^{-1}uv^{\textsf {T}}A^{-1}}{1+v^{\textsf {T}}A^{-1}u}}} \)
Generalization (Woodbury matrix identity)
Given a square invertible \( n\times n \) matrix A, an \( n\times k \) matrix U, and a \( k\times n \) matrix V, let B be an \( n\times n \) matrix such that \( {\displaystyle B=A+UV}. \) Then, assuming \( {\displaystyle \left(I_{k}+VA^{-1}U\right)} \) is invertible, we have
\( {\displaystyle B^{-1}=A^{-1}-A^{-1}U\left(I_{k}+VA^{-1}U\right)^{-1}VA^{-1}.} \)
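The rank-k identity can likewise be checked numerically. In this sketch (shapes and names chosen for illustration), the only matrix inverted from scratch is the small \( k\times k \) factor \( {\displaystyle I_{k}+VA^{-1}U} \), which is the source of the computational savings when \( k\ll n \):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned A
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))

Ainv = np.linalg.inv(A)
S = np.eye(k) + V @ Ainv @ U  # small k-by-k matrix; only this is "inverted"
B_inv = Ainv - Ainv @ U @ np.linalg.solve(S, V @ Ainv)

# Compare against inverting B = A + U V directly
print(np.allclose(B_inv, np.linalg.inv(A + U @ V)))  # True
```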
See also
The matrix determinant lemma performs a rank-1 update to a determinant.
Woodbury matrix identity
Quasi-Newton method
Binomial inverse theorem
Bunch–Nielsen–Sorensen formula
Maxwell stress tensor contains an application of the Sherman-Morrison formula.
References
Sherman, Jack; Morrison, Winifred J. (1949). "Adjustment of an Inverse Matrix Corresponding to Changes in the Elements of a Given Column or a Given Row of the Original Matrix (abstract)". The Annals of Mathematical Statistics. 20: 621. doi:10.1214/aoms/1177729959.
Sherman, Jack; Morrison, Winifred J. (1950). "Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix". Annals of Mathematical Statistics. 21 (1): 124–127. doi:10.1214/aoms/1177729893. MR 0035118. Zbl 0037.00901.
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), "Section 2.7.1 Sherman–Morrison Formula", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
Hager, William W. (1989). "Updating the inverse of a matrix" (PDF). SIAM Review. 31 (2): 221–239. doi:10.1137/1031049. JSTOR 2030425. MR 0997457.
Bartlett, Maurice S. (1951). "An Inverse Matrix Adjustment Arising in Discriminant Analysis". The Annals of Mathematical Statistics. 22 (1): 107–111. doi:10.1214/aoms/1177729698. MR 0040068. Zbl 0042.38203.
Langville, Amy N.; Meyer, Carl D. (2006). Google's PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press. p. 156.
Update of the inverse matrix by the Sherman–Morrison formula
Propagator#Spin 1
[1]