Section 7.2 Eigenvalues and Eigenvectors
We define an important class of invariant subspaces. We refer the reader to Section A.2 for a definition and properties of the determinant.
Definition 7.2.1. (Eigenvector and Eigenvalue of a linear map).
Let \(V\) be a finite-dimensional vector space over a field \(F\) and let \(T\colon V\to V\) be an \(F\)-linear map. An eigenvector \(v\) of \(T\) is a nonzero vector such that
\begin{equation*}
T(v)=\lambda v
\end{equation*}
for some scalar \(\lambda\in F\text{.}\) The scalar \(\lambda\) is called the eigenvalue associated to the eigenvector \(v\text{.}\)

Similarly, we define eigenvectors and eigenvalues of a matrix.
Definition 7.2.2. (Eigenvector and Eigenvalue of a square matrix).
Let \(A\in M_n(F)\) be an \(n\times n\) matrix over a field \(F\text{.}\) A nonzero column vector is an eigenvector of \(A\) if it is an eigenvector of the linear map \(\ell_A\colon F^n\to F^n\) defined by left multiplication by \(A\text{.}\) The eigenvalue of \(\ell_A\) corresponding to such an eigenvector is called an eigenvalue of \(A\text{.}\)

Using the above remark and thinking geometrically, it is clear that the anticlockwise rotation of the real plane by \(90^\circ\) has no eigenvectors. Indeed, algebraically we have the following.
Example 7.2.4. (Rotation in the real plane).
The anticlockwise rotation by \(90^\circ\) is the map \(T\colon\R^2\to\R^2\) given by
\begin{equation*}
e_1\mapsto e_2\quad\text{and}\quad e_2\mapsto -e_1.
\end{equation*}
Suppose that \(\lambda\in\R\) and \((\alpha,\beta)\neq (0,0)\) are such that
\begin{align*}
\lambda\alpha e_1+\lambda\beta e_2\amp= T(\alpha e_1+\beta e_2)\\
\amp=\alpha e_2-\beta e_1.
\end{align*}
Therefore,
\begin{equation*}
\lambda\alpha=-\beta\quad\text{and}\quad\lambda\beta=\alpha.
\end{equation*}
Suppose that \(\beta\neq 0\) (the case \(\alpha\neq 0\) can be dealt with similarly). Substituting \(\alpha=\lambda\beta\) into \(\lambda\alpha=-\beta\) gives \(\lambda^2\beta=-\beta\text{,}\) so \(\lambda^2=-1\text{,}\) a contradiction since \(\lambda\in\R\text{.}\)
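This absence of real eigenvalues can also be observed numerically. The following sketch (using NumPy, which is an assumption outside the text) computes the eigenvalues of the rotation matrix and confirms that none of them is real.

```python
import numpy as np

# Matrix of the anticlockwise rotation by 90 degrees:
# e1 -> e2 and e2 -> -e1, written column by column.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues = np.linalg.eigvals(R)  # the complex values i and -i

# Every eigenvalue has nonzero imaginary part, so R has no real
# eigenvalues and hence no eigenvectors over the reals.
assert all(abs(ev.imag) > 0.5 for ev in eigenvalues)
```

The computed eigenvalues are \(\pm i\text{,}\) foreshadowing the complex case treated next.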
In the above example the underlying field plays an important role. If we consider \(\C^2\) as a vector space over \(\C\text{,}\) then the “rotation” has eigenvectors, as the following calculations show.
Example 7.2.5. (Rotation in the complex plane).
Consider the map \(T\colon\C^2\to\C^2\) given by
\begin{equation*}
e_1\mapsto e_2\quad\text{and}\quad e_2\mapsto -e_1.
\end{equation*}
We consider \(\C^2\) as a vector space over \(\C\text{.}\) Suppose that \(\lambda\in\C\) and \((\alpha,\beta)\neq (0,0)\) are such that
\begin{align*}
\lambda\alpha e_1+\lambda\beta e_2\amp= T(\alpha e_1+\beta e_2)\\
\amp=\alpha e_2-\beta e_1.
\end{align*}
Therefore,
\begin{equation*}
\lambda\alpha=-\beta\quad\text{and}\quad\lambda\beta=\alpha.
\end{equation*}
Suppose that \(\beta\neq 0\) (the case \(\alpha\neq 0\) can be dealt with similarly). Substituting \(\alpha=\lambda\beta\) into \(\lambda\alpha=-\beta\) gives \(\lambda^2\beta=-\beta\text{,}\) so \(\lambda^2=-1\text{,}\) i.e., \(\lambda=\pm i\text{.}\) Take \(\lambda=i\) (the case \(\lambda=-i\) can be dealt with similarly) and \(v=ie_1+e_2\neq 0\text{.}\) Thus
\begin{equation*}
T(ie_1+e_2)=ie_2-e_1=i(ie_1+e_2)\text{,}
\end{equation*}
i.e., \(v\) is an eigenvector of \(T\) with eigenvalue \(i\text{.}\) Corresponding to \(-i\) we get the eigenvector \(-ie_1+e_2\text{,}\) which is linearly independent of \(ie_1+e_2\text{.}\)
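The verification that \(ie_1+e_2\) and \(-ie_1+e_2\) are eigenvectors can be replayed directly with Python's built-in complex numbers; the snippet below merely re-runs the computation above.

```python
# The "rotation" as a function on C^2, with a pair (a, b)
# standing for the vector a*e1 + b*e2.
def T(v):
    a, b = v
    return (-b, a)    # T(a*e1 + b*e2) = a*e2 - b*e1

# v = i*e1 + e2 is an eigenvector with eigenvalue i.
v = (1j, 1)
assert T(v) == (1j * v[0], 1j * v[1])     # T(v) = i*v

# w = -i*e1 + e2 is an eigenvector with eigenvalue -i.
w = (-1j, 1)
assert T(w) == (-1j * w[0], -1j * w[1])   # T(w) = -i*w
```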
As similar matrices represent the same linear map, we obtain the following.
Proposition 7.2.6.
Similar matrices have the same eigenvalues.

The following simple observations will be useful.
Proposition 7.2.7.
Let \(V\) be a finite-dimensional vector space over a field \(F\) and let \(T\colon V\to V\) be an \(F\)-linear map. The matrix of \(T\) with respect to an ordered basis \((v_1,v_2,\ldots,v_n)\) is a diagonal matrix if and only if each \(v_i\) is an eigenvector of \(T\text{.}\)
Proposition 7.2.8.
Let \(V\) be a finite-dimensional vector space over a field \(F\) and let \(T\colon V\to V\) be an \(F\)-linear map. A nonzero vector is an eigenvector with eigenvalue \(\lambda\) if and only if it is in the kernel of \(T-\lambda\unit_V\text{.}\)
Corollary 7.2.9.
The following are equivalent.

1. \(T\) is not invertible.
2. \(T\) has an eigenvalue equal to \(0\text{.}\)
3. If \(A\) is a matrix of \(T\) with respect to any basis, then \(\det(A)=0\text{.}\)
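As a numerical illustration of the corollary (a sketch using NumPy, which is not part of the text), a visibly singular matrix has determinant zero and \(0\) among its eigenvalues.

```python
import numpy as np

# A visibly singular matrix: the second row is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# det(A) = 0 ...
assert abs(np.linalg.det(A)) < 1e-9

# ... and 0 appears among the eigenvalues of A.
eigenvalues = np.linalg.eigvals(A)
assert min(abs(ev) for ev in eigenvalues) < 1e-9
```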
Definition 7.2.11. (Eigenspace).
Let \(V\) be a finite-dimensional vector space over a field \(F\) and let \(T\colon V\to V\) be an \(F\)-linear map. Suppose that \(\lambda\in F\) is an eigenvalue of \(T\text{.}\) The subspace \(\ker(T-\lambda\unit_V)\) is called the eigenspace corresponding to the eigenvalue \(\lambda\text{.}\)

If \(A\in M_n(F)\) is a matrix, then the expansion of the determinant \(\det(tI_n-A)\) shows that it is a polynomial of degree \(n\) in \(t\) with coefficients in \(F\text{.}\) We define the characteristic polynomial of a linear map.
Definition 7.2.13. (Characteristic polynomial).
Let \(V\) be a vector space of dimension \(n\) over a field \(F\) and let \(T\colon V\to V\) be an \(F\)-linear map. Suppose that \(A\) is a matrix representation of \(T\text{.}\) The characteristic polynomial of \(T\) is the polynomial
\begin{equation*}
\chi_T(t)=\det(tI_n-A)\in F[t].
\end{equation*}
Let \(P\in M_n(F)\text{.}\) The characteristic polynomial of \(P\) is the polynomial
\begin{equation*}
\chi_P(t)=\det(tI_n-P)\in F[t].
\end{equation*}
Corollary 7.2.14.
The eigenvalues of \(T\) are precisely the roots in \(F\) of its characteristic polynomial.
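The corollary can be illustrated numerically. In the sketch below (using NumPy, an assumption outside the text), `np.poly` returns the coefficients of \(\det(tI_n-A)\) from the highest degree down, and its roots agree with the eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# np.poly(A) gives the coefficients of det(t*I - A):
# here t^2 - 5t + 6 = (t - 2)(t - 3).
coeffs = np.poly(A)
assert np.allclose(coeffs, [1.0, -5.0, 6.0])

# The roots of the characteristic polynomial coincide with
# the eigenvalues of A.
roots = np.roots(coeffs)
eigenvalues = np.linalg.eigvals(A)
assert np.allclose(sorted(roots), sorted(eigenvalues))
```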
Proposition 7.2.15.
Let \(T\colon V\to V\) be an \(F\)-linear map on a vector space \(V\) of dimension \(n<\infty\text{.}\)
1. The linear map \(T\) has at most \(n\) eigenvalues.
2. If \(F=\C\) and \(V\neq\{0\}\text{,}\) then \(T\) has at least one eigenvalue.
Proof.
By Corollary 7.2.14 the eigenvalues of \(T\) are roots of the characteristic polynomial \(\chi_T\text{,}\) which has degree \(n\) and hence at most \(n\) roots in \(F\text{.}\) If \(F=\C\) and \(V\neq\{0\}\text{,}\) then \(n\geq 1\) and, by the fundamental theorem of algebra, \(\chi_T\) has a root in \(\C\text{,}\) which is an eigenvalue of \(T\text{.}\)

Note that the characteristic polynomial of the anticlockwise rotation by \(90^\circ\) of the real plane is \(t^2+1\text{.}\) Hence it has no real roots and thus no eigenvalues. However, \(t^2+1\) has roots in \(\C\) (refer to Example 7.2.4 and Example 7.2.5).
The following result shows that the characteristic polynomial of \(T\) does not depend on a particular matrix representation.
Proposition 7.2.16.
The characteristic polynomial of \(T\) does not depend on the choice of a basis.

Proof.
Suppose that \(A\) and \(B\) are matrix representations of \(T\) with respect to some bases of \(V\text{.}\) Then there exists an invertible matrix \(C\) such that \(B=C^{-1}AC\) (see Corollary 5.3.6). Thus,
\begin{equation*}
tI_n-B=tI_n-C^{-1}AC=C^{-1}(tI_n-A)C.
\end{equation*}
Hence
\begin{equation*}
\det(tI_n-B)=\det\left(C^{-1}(tI_n-A)C\right)=\det(tI_n-A).
\end{equation*}
Thus the result is proved.
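The proposition can also be checked numerically; the sketch below (using NumPy, which is not part of the text) conjugates a matrix by an invertible matrix and compares characteristic polynomials and eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # an invertible change-of-basis matrix

B = np.linalg.inv(C) @ A @ C        # B is similar to A

# The two characteristic polynomials have the same coefficients ...
assert np.allclose(np.poly(A), np.poly(B))

# ... and therefore A and B have the same eigenvalues.
assert np.allclose(sorted(np.linalg.eigvals(A)),
                   sorted(np.linalg.eigvals(B)))
```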
Definition 7.2.17. (Minimal polynomial).
Let \(V\) be a finite-dimensional vector space over a field \(F\text{,}\) and let \(T\colon V\to V\) be an \(F\)-linear map. The minimal polynomial of \(T\) is the monic polynomial \(m_T(t)\in F[t]\) of least degree annihilating \(T\text{.}\) Let \(A\in M_n(F)\text{.}\) The minimal polynomial of \(A\) is the monic polynomial \(m_A(t)\in F[t]\) of least degree annihilating \(A\text{.}\)
Lemma 7.2.18.
If \(A,B\in M_n(F)\) are similar, then the minimal polynomial of \(A\) and the minimal polynomial of \(B\) are the same.

Proof.
This follows from the following observation: for any invertible matrix \(P\) and any polynomial \(f(t)\in F[t]\text{,}\)
\begin{equation*}
f(P^{-1}AP)=P^{-1}f(A)P.
\end{equation*}
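The observation used in the proof can be illustrated numerically; the sketch below (using NumPy, an assumption outside the text) checks \(f(P^{-1}AP)=P^{-1}f(A)P\) for the polynomial \(f(t)=t^2+2t+3\text{.}\)

```python
import numpy as np

def f(X):
    # f(t) = t^2 + 2t + 3, evaluated at a square matrix X.
    n = X.shape[0]
    return X @ X + 2 * X + 3 * np.eye(n)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # invertible: det(P) = 1
P_inv = np.linalg.inv(P)

# f(P^{-1} A P) = P^{-1} f(A) P
assert np.allclose(f(P_inv @ A @ P), P_inv @ f(A) @ P)
```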
Checkpoint 7.2.19.
Find the minimal polynomial of a linear map corresponding to the following matrix.
\begin{equation*}
A=\begin{pmatrix}1\amp 1\amp 1\\-1\amp -1\amp -1\\1\amp 1\amp 1\end{pmatrix}
\end{equation*}
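One way to approach the checkpoint is to look for a low-degree polynomial relation among the powers of \(A\text{.}\) The sketch below (using NumPy, which is not part of the text) carries out the relevant computation.

```python
import numpy as np

A = np.array([[ 1,  1,  1],
              [-1, -1, -1],
              [ 1,  1,  1]])

# Computing A^2 reveals a relation: A^2 = A, so t^2 - t annihilates A.
assert np.array_equal(A @ A, A)

# A is not a scalar multiple of the identity (e.g. its (1,2) entry is
# nonzero), so no monic degree-one polynomial t - c annihilates A;
# hence the minimal polynomial is t^2 - t.
assert A[0, 1] != 0
```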
Proposition 7.2.20.
Let \(p(t)\in F[t]\) be an annihilating polynomial of an \(F\)-linear map \(T\colon V\to V\text{.}\) Then the minimal polynomial \(m_T\) of \(T\) divides \(p(t)\text{.}\)

Proof.
By the division algorithm there are polynomials \(q(t),r(t)\in F[t]\) with \(r(t)=0\) or \(\deg r(t)\lt\deg m_T\) such that
\begin{equation*}
p(t)=m_T\cdot q(t)+r(t).
\end{equation*}
This gives the following equation.
\begin{equation*}
0=p(T)=m_T(T)\cdot q(T)+r(T)=r(T).
\end{equation*}
Since \(\deg r(t)\lt\deg m_T\) and \(m_T\) is a monic polynomial of least degree annihilating \(T\text{,}\) we must have \(r(t)=0\in F[t]\) (otherwise a scalar multiple of \(r(t)\) would be a monic annihilating polynomial of smaller degree). Thus \(m_T\) divides \(p(t)\text{.}\)
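The division algorithm underlying the proof can be carried out numerically; in the sketch below (using NumPy, an assumption outside the text), coefficient lists are ordered from the highest degree down.

```python
import numpy as np

# p(t) = t^3 - t and m(t) = t^2 - t, coefficients highest degree first.
p = [1, 0, -1, 0]
m = [1, -1, 0]

q, r = np.polydiv(p, m)           # p = m*q + r with deg r < deg m

# Reconstruct p from quotient and remainder to confirm the identity.
assert np.allclose(np.polyadd(np.polymul(m, q), r), p)

# Here the remainder vanishes, so m(t) divides p(t); indeed q(t) = t + 1.
assert np.allclose(r, 0)
```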
In fact, the roots of the characteristic polynomial and of the minimal polynomial are the same.
Theorem 7.2.21.
Let \(V\) be a finite-dimensional vector space over a field \(F\text{,}\) and let \(T\colon V\to V\) be an \(F\)-linear map. The characteristic polynomial of \(T\) and the minimal polynomial of \(T\) have the same roots.

Proof.
Let \(\lambda\in F\) be a root of \(\chi_T\text{,}\) i.e., \(\lambda\) is an eigenvalue of \(T\) (see Corollary 7.2.14). Let \(v\in V\) be an eigenvector corresponding to \(\lambda\text{.}\) Thus \(T^k(v)=\lambda^kv\) for any \(k\in\mathbb{N}\text{.}\) Suppose that \(m_T=a_0+a_1t+\cdots+a_{r-1}t^{r-1}+t^r\in F[t]\text{.}\) Since \(m_T\) is an annihilating polynomial we get the following.
\begin{equation*}
0=m_T(T)(v)=a_0v+a_1\lambda v+\cdots+a_{r-1}\lambda^{r-1}v+\lambda^rv=m_T(\lambda)v
\end{equation*}
As \(v\) is an eigenvector, it is nonzero. Hence we must have \(m_T(\lambda)=0\text{,}\) i.e., \(\lambda\) is a root of \(m_T\text{.}\)
Conversely, assume that \(\lambda\in F\) is a root of \(m_T\text{.}\) By the Cayley–Hamilton theorem \(\chi_T\) annihilates \(T\text{,}\) so by Proposition 7.2.20 there exists \(q(t)\in F[t]\) such that \(\chi_T=m_T\cdot q(t)\text{.}\) Hence \(\chi_T(\lambda)=m_T(\lambda)\cdot q(\lambda)=0\text{,}\) i.e., \(\lambda\) is a root of \(\chi_T\text{.}\)
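The theorem allows the characteristic and minimal polynomials to have different multiplicities at a common root; the sketch below (using NumPy, which is not part of the text) illustrates this with the scalar matrix \(2I_2\text{.}\)

```python
import numpy as np

# For A = 2*I the characteristic polynomial is (t - 2)^2 while the
# minimal polynomial is t - 2: different multiplicities, same roots.
A = 2.0 * np.eye(2)

# Roots of the characteristic polynomial det(t*I - A).
char_roots = set(np.roots(np.poly(A)).round(9))

# A - 2I = 0, so the monic degree-one polynomial t - 2 annihilates A;
# it is the minimal polynomial, and its only root is 2.
assert np.allclose(A - 2.0 * np.eye(2), 0)
min_roots = {2.0}

assert char_roots == min_roots
```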