Power Method

Basics

  • A nonzero vector $\vec{x}$ is an eigenvector of a square matrix $\mathbf{A}$ if there exists a scalar $\lambda$, called an eigenvalue, such that $\mathbf{A}\vec{x} = \lambda\vec{x}$.
  • Every nonzero vector in the eigenspace of the matrix $\mathbf{A}$ for the eigenvalue $\lambda$ is an eigenvector of $\mathbf{A}$.
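  • For example, for $\mathbf{A} = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$ and $\vec{x} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, we have $\mathbf{A}\vec{x} = \begin{pmatrix} 3 \\ 3 \end{pmatrix} = 3\vec{x}$, so $\vec{x}$ is an eigenvector of $\mathbf{A}$ with eigenvalue $\lambda = 3$.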

Properties

  • The power method is used to find a dominant eigenvalue (the one with the largest absolute value), if one exists, and a corresponding eigenvector.
  • It is not guaranteed to converge; in particular, convergence may fail when the given matrix is non-diagonalizable or has no single dominant eigenvalue.
  • The inverse power method (if it converges) finds the eigenvalue with the smallest absolute value; see the sketch after this list.
  • With an appropriate implementation, it is effective even for very large sparse matrices.
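
As a concrete illustration of the inverse power method mentioned above, here is a minimal Python/NumPy sketch (the name `inverse_power_method` and the fixed iteration count are illustrative choices, not part of these notes). Each step applies $\mathbf{A}^{-1}$ by solving a linear system instead of forming the inverse:

```python
import numpy as np

def inverse_power_method(A, u0, num_iters=50):
    """Sketch of inverse power iteration: applying A^{-1} at each step
    drives the iterate toward the eigenvector belonging to the
    eigenvalue of A with the smallest absolute value."""
    u = u0 / np.linalg.norm(u0)
    for _ in range(num_iters):
        y = np.linalg.solve(A, u)        # y = A^{-1} u, via a linear solve
        u = y / np.linalg.norm(y)        # normalize to avoid over/underflow
    lam = u @ A @ u / (u @ u)            # Rayleigh quotient gives the eigenvalue of A
    return lam, u
```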

Facts

  • This method is also known as the von Mises iteration.
  • Especially suitable for sparse matrices
  • It is used to calculate the Google page rank.

Algorithm

  1. To apply this method to a square matrix $\mathbf{A}$, start with a nonzero initial guess $\vec{u}_{0}$ for an eigenvector of the dominant eigenvalue.
  2. Multiply the most recently obtained vector on the left by $\mathbf{A}$, i.e. for $i \ge 1$, $(\vec{u}_{i})_{\text{un-normalized}} = \mathbf{A} \vec{u}_{i - 1}$

  3. Normalize the result, i.e. $\vec{u}_{i} = \frac{(\vec{u}_{i})_{\text{un-normalized}}}{|| \mathbf{A} \vec{u}_{i-1}||} = \frac{ \mathbf{A} \vec{u}_{i - 1}}{|| \mathbf{A} \vec{u}_{i-1}||}$

    to avoid underflow and overflow.
  4. Repeat until consecutive vectors $\vec{u}_{i}$ are (nearly) identical or differ only in sign.
  5. If $\vec{u}_{k}$ denotes the last vector calculated in this process, then $\vec{u}_{k}$ is an approximate eigenvector of $\mathbf{A}$, and $|| \mathbf{A}\vec{u}_{k} ||$ approximates the absolute value of the dominant eigenvalue of $\mathbf{A}$.

  6. If $\vec{u}_{k}$ is an eigenvector of $\mathbf{A}$, then its eigenvalue is given by the Rayleigh quotient, i.e. $\lambda = \frac{\vec{u}_{k}^{\text{T}} \mathbf{A} \vec{u}_{k}}{\vec{u}_{k}^{\text{T}} \vec{u}_{k}}$. Note that the normalization above does not affect the Rayleigh quotient; it is just a scaling to avoid numerical issues.

It is easy to prove point (6): $\frac{\vec{u}_{k}^{\text{T}} \mathbf{A} \vec{u}_{k}}{\vec{u}_{k}^{\text{T}} \vec{u}_{k}} = \frac{\vec{u}_{k}^{\text{T}} (\lambda \vec{u}_{k})}{\vec{u}_{k}^{\text{T}} \vec{u}_{k}} = \lambda \frac{\vec{u}_{k}^{\text{T}} \vec{u}_{k}}{\vec{u}_{k}^{\text{T}} \vec{u}_{k}} = \lambda$.
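
The steps above translate directly into code. Below is a minimal Python/NumPy sketch; the function name `power_method`, the tolerance `tol`, and the iteration cap are illustrative choices rather than part of these notes.

```python
import numpy as np

def power_method(A, u0, tol=1e-10, max_iters=1000):
    """Minimal power-method sketch: multiply by A, normalize, repeat,
    then recover the dominant eigenvalue from the Rayleigh quotient."""
    u = u0 / np.linalg.norm(u0)               # step 1: initial guess, normalized
    for _ in range(max_iters):
        v = A @ u                             # step 2: multiply by A
        v = v / np.linalg.norm(v)             # step 3: normalize
        # step 4: stop when consecutive vectors agree up to sign
        if min(np.linalg.norm(v - u), np.linalg.norm(v + u)) < tol:
            u = v
            break
        u = v
    lam = u @ A @ u / (u @ u)                 # step 6: Rayleigh quotient
    return lam, u
```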

Example

  • Given a matrix $\mathbf{A}$ defined as

    $$ \mathbf{A} = \begin{pmatrix} 1 & 3 & 0 \\ 2 & 5 & 1 \\ -1 & 2 & 3 \end{pmatrix}. $$

    Find the largest eigenvalue and its corresponding eigenvector.
  • Take an initial guess, say

    $$ \vec{u}_{0} = \begin{pmatrix} 1 \\ 1 \\ 1 \\ \end{pmatrix} $$

  • Let $k = 1$,

    $$ \vec{u}_{1} = \mathbf{A} \vec{u}_{0} = \begin{pmatrix} 1 & 3 & 0 \\ 2 & 5 & 1 \\ -1 & 2 & 3 \end{pmatrix}. \begin{pmatrix} 1 \\ 1 \\ 1 \\ \end{pmatrix} = \begin{pmatrix} 4 \\ 8 \\ 4 \end{pmatrix} $$

    So, we normalize $\vec{u}_{1}$ as

    $$ \vec{u}_{1} = \frac{\vec{u}_{1}}{|| \vec{u}_{1} ||} = \begin{pmatrix} 0.4082 \\ 0.8165 \\ 0.4082 \end{pmatrix} $$

    where $|| \vec{u}_{1} || = \sqrt{4^{2} + 8^{2} + 4^{2}} = \sqrt{96} \approx 9.798$, i.e. the Euclidean norm.
  • Similarly, $k = 2$,

    $$ \vec{u}_{2} = \mathbf{A} \vec{u}_{1} = \begin{pmatrix} 2.8577 \\ 5.3072 \\ 2.4495 \end{pmatrix} $$

    So,

    $$ \vec{u}_{2} = \frac{\vec{u}_{2}}{|| \vec{u}_{2} ||} = \begin{pmatrix} 0.4392\\ 0.8157 \\ 0.3765 \end{pmatrix} $$

  • For $k = 3$,

    $$ \vec{u}_{3} = \mathbf{A} \vec{u}_{2} = \begin{pmatrix} 2.8863 \\ 5.3334 \\ 2.3216 \end{pmatrix} $$

    So,

    $$ \vec{u}_{3} = \frac{\vec{u}_{3}}{|| \vec{u}_{3} ||} = \begin{pmatrix} 0.4445\\ 0.8213 \\ 0.3575 \end{pmatrix} $$

  • For $k = 4$,

    $$ \vec{u}_{4} = \mathbf{A} \vec{u}_{3} = \begin{pmatrix} 2.9085 \\ 5.3532 \\ 2.2708 \end{pmatrix} $$

    So,

    $$ \vec{u}_{4} = \frac{\vec{u}_{4}}{|| \vec{u}_{4} ||} = \begin{pmatrix} 0.4473\\ 0.8233 \\ 0.3493 \end{pmatrix} $$

  • For two-digit accuracy of the eigenvector, we can stop the iteration at $k = 4$. For higher accuracy, we keep iterating until the desired number of digits stops changing.

  • Therefore, the dominant eigenvalue is approximately $\lambda = \frac{\vec{u}_{4}^{\text{T}} \mathbf{A} \vec{u}_{4}}{\vec{u}_{4}^{\text{T}} \vec{u}_{4}} = 6.5036$.

Implementation

A Jupyter notebook with the full implementation can be viewed and downloaded on the original page.
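
As a rough usage sketch (not the notebook itself), the `power_method` function from the Algorithm section can be applied to the example matrix and cross-checked against `numpy.linalg.eig`; the printed dominant eigenvalue should be close to the 6.5036 obtained above. This assumes the earlier sketch is in scope.

```python
import numpy as np

# Example matrix and initial guess from the section above
A = np.array([[ 1.0, 3.0, 0.0],
              [ 2.0, 5.0, 1.0],
              [-1.0, 2.0, 3.0]])
u0 = np.ones(3)

lam, u = power_method(A, u0)          # sketch from the Algorithm section
print(lam, u)                         # dominant eigenvalue, close to 6.5036 above

# Cross-check against a direct eigendecomposition
w, _ = np.linalg.eig(A)
print(w[np.argmax(np.abs(w))])        # largest-magnitude eigenvalue of A
```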

Theorems

  • If $\mathbf{A}$ is diagonalizable and has a dominant eigenvalue, then the normalized power-iteration sequence $\mathbf{A}\vec{u}, \mathbf{A}^{2}\vec{u}, \mathbf{A}^{3}\vec{u}, \ldots$ (known as a Krylov sequence) converges to the dominant eigenvector, provided the initial vector $\vec{u}$ has a nonzero component along it. Notice that $\mathbf{A}$ is raised to successively higher powers; this is why the method is called the power iteration method, or simply the power method.
  • If $\mathbf{A}$ is a symmetric (more generally, normal) matrix with dominant eigenvalue $\lambda_{\text{max}}$ and $\vec{x}$ is any vector, then $|| \mathbf{A} \vec{x}|| \leq |\lambda_{\text{max}} |~|| \vec{x} ||$, where $|| \cdot ||$ is the Euclidean norm. This bound is very useful in estimates, which is one reason the dominant eigenvalue is of particular interest; a quick numerical check is sketched below.
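
A quick numerical sanity check of the second bound, using a randomly generated symmetric matrix (the size, seed, and number of trials are arbitrary, illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                                  # symmetric test matrix
lam_max = np.max(np.abs(np.linalg.eigvalsh(A)))    # dominant eigenvalue magnitude

for _ in range(1000):
    x = rng.standard_normal(5)
    # For symmetric A, ||Ax|| <= |lambda_max| * ||x|| holds for every x.
    assert np.linalg.norm(A @ x) <= lam_max * np.linalg.norm(x) + 1e-12
print("bound verified on 1000 random vectors")
```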

Permalink at https://www.physicslog.com/cs-notes/nm/power-method

Published on Jul 17, 2021

Last revised on Feb 17, 2023
