Section 3.4 Invariance of dimension
Proof.
Since \(v_1,v_2,\ldots,v_n,v_{n+1}\) are linearly dependent, there exist scalars \(\alpha_1,\alpha_2,\ldots,\alpha_{n+1}\in F\text{,}\) not all zero, such that
\begin{equation*}
\alpha_1 v_1+\cdots+\alpha_n v_n+\alpha_{n+1}v_{n+1}=0.
\end{equation*}
If \(\alpha_{n+1}=0\) then the linear independence of \(v_1,v_2,\ldots,v_n\) implies that \(\alpha_1=\alpha_2=\cdots=\alpha_n=0\text{,}\) contradicting the fact that not all the \(\alpha_i\) are zero. Hence, \(\alpha_{n+1}\neq 0\text{.}\) Therefore,
\begin{equation*}
v_{n+1}=-\tfrac{\alpha_1}{\alpha_{n+1}}v_1-\cdots-\tfrac{\alpha_n}{\alpha_{n+1}}v_n.
\end{equation*}
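For instance, take \(V=\mathbb{R}^2\) over \(\mathbb{R}\) with the linearly independent vectors \(v_1=(1,0)\) and \(v_2=(0,1)\text{.}\) The vectors \(v_1,v_2,v_3\) with \(v_3=(2,3)\) are linearly dependent, since \(2v_1+3v_2+(-1)v_3=0\text{.}\) Here \(\alpha_3=-1\neq 0\text{,}\) and the formula above gives
\begin{equation*}
v_3=-\tfrac{2}{-1}v_1-\tfrac{3}{-1}v_2=2v_1+3v_2.
\end{equation*}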
Lemma 3.4.2.
Let \(v_1,v_2,\ldots,v_n\in V\) (\(n>1\)) and let \(v_i^\prime=v_i\) for \(i=1,2,\ldots,n-1\) and \(v_n^\prime=v_n+\alpha v_1\) for some \(\alpha\in F\text{.}\) Then, \(v_1,v_2,\ldots,v_n\) are linearly independent if and only if \(v_1^\prime,v_2^\prime,\ldots,v_n^\prime\) are linearly independent.
Proof.
Suppose that \(v_1,v_2,\ldots,v_n\) are linearly independent and let \(\gamma_1,\gamma_2,\ldots,\gamma_n\in F\) be such that
\begin{equation*}
\gamma_1 v_1^\prime+\cdots+\gamma_{n-1}v_{n-1}^\prime+\gamma_nv_n^\prime=0,
\end{equation*}
i.e.,
\begin{equation*}
\gamma_1 v_1+\cdots+\gamma_{n-1}v_{n-1}+\gamma_n (v_n+\alpha v_1)=0.
\end{equation*}
Collecting the coefficient of \(v_1\text{,}\) this reads \((\gamma_1+\gamma_n\alpha)v_1+\gamma_2v_2+\cdots+\gamma_nv_n=0\text{.}\) Because \(v_1,v_2,\ldots,v_n\) are linearly independent, we get that
\begin{equation*}
\gamma_1+\gamma_n\alpha=\gamma_2=\gamma_3=\cdots=\gamma_n=0.
\end{equation*}
Hence all \(\gamma_i=0\text{,}\) and \(v_1^\prime,v_2^\prime,\ldots,v_n^\prime\) are linearly independent. For the converse, note that \(v_n=v_n^\prime-\alpha v_1^\prime\text{,}\) so the same argument applies with the roles of the two sets interchanged.
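The lemma says that adding a scalar multiple of one vector to another does not affect linear independence. For instance, in \(\mathbb{R}^2\) the vectors \(v_1=(1,0)\) and \(v_2=(0,1)\) are linearly independent, and for \(\alpha=5\) the lemma produces \(v_1^\prime=(1,0)\) and \(v_2^\prime=v_2+5v_1=(5,1)\text{,}\) which are again linearly independent.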
We now prove the following important result; it shows that any two bases of a vector space have the same cardinality.
Theorem 3.4.3.
Let \(V\) be a finite-dimensional vector space over a field \(F\text{.}\) If a basis of \(V\) has \(n\) vectors then any \(n+1\) vectors in \(V\) are linearly dependent.
Proof.
We prove the theorem by induction on \(n\text{.}\) Suppose that \(\{v_1,v_2,\ldots,v_n\}\) is a basis of \(V\text{.}\) Let \(w_1,w_2,\ldots,w_n,w_{n+1}\) be vectors in \(V\text{.}\) If \(n=1\) then there are scalars \(\alpha,\beta\in F\) such that \(w_1=\alpha v_1\) and \(w_2=\beta v_1\text{.}\) In view of Remark 3.1.2 it is enough to consider the case when \(\alpha\neq 0\) and \(\beta\neq 0\text{.}\) Then \(w_2=\beta\alpha^{-1}w_1\text{,}\) so \(w_1\) and \(w_2\) are linearly dependent, and we are done when \(n=1\text{.}\)
We now assume that the result is true for vector spaces having a basis of \(n-1\) vectors. Suppose, for a contradiction, that \(w_1,\ldots,w_n,w_{n+1}\in V\) are linearly independent. As \(\{v_1,v_2,\ldots,v_n\}\) is a basis it spans \(V\text{;}\) in particular, for each \(i=1,2,\ldots,n+1\) there exist scalars \(\alpha_{ij}\in F\) such that
\begin{equation*}
w_i=\sum_{j=1}^{n}\alpha_{ij}v_j.
\end{equation*}
Since \(w_1,\ldots,w_{n+1}\) are linearly independent, \(w_1\neq 0\text{,}\) so one of the scalars \(\alpha_{11},\ldots,\alpha_{1n}\) is nonzero; after relabelling the basis vectors we may assume that \(\alpha_{1n}\neq 0\text{.}\) Set \(w_1^\prime=w_1\) and \(w_k^\prime=w_k-\alpha_{kn}\alpha_{1n}^{-1}w_1\) for \(k=2,3,\ldots,n+1\text{.}\) Note that in the expression of \(w_k^\prime\) for \(2\leq k\leq n+1\) the vector \(v_n\) does not occur, since its coefficient is \(\alpha_{kn}-\alpha_{kn}\alpha_{1n}^{-1}\alpha_{1n}=0\text{.}\) Repeated application of Lemma 3.4.2 implies that \(w_1^\prime,w_2^\prime,\ldots,w_{n+1}^\prime\) are linearly independent; in particular, \(w_2^\prime,\ldots,w_{n+1}^\prime\) are linearly independent. As seen above, each \(w_k^\prime\) (\(2\leq k\leq n+1\)) is a linear combination of \(v_1,\ldots,v_{n-1}\text{.}\) The induction hypothesis applied to the vector subspace \(\langle v_1,v_2,\ldots,v_{n-1}\rangle\text{,}\) which has the basis \(\{v_1,\ldots,v_{n-1}\}\text{,}\) implies that the \(n\) vectors \(w_2^\prime,\ldots,w_{n+1}^\prime\) are linearly dependent, a contradiction.
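For instance, \(\mathbb{R}^2\) has the basis \(\{(1,0),(0,1)\}\) of two vectors, so the theorem says that any three vectors in \(\mathbb{R}^2\) are linearly dependent; e.g., for \(w_1=(1,2)\text{,}\) \(w_2=(3,1)\) and \(w_3=(4,3)\) we have \(w_1+w_2-w_3=0\text{.}\)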
Corollary 3.4.4. (Invariance of dimension).
Any two bases of a finite-dimensional vector space have the same number of elements.
Corollary 3.4.5.
Let \(\dim_FV=n\) (refer to Definition 3.3.2). Then any subset of \(V\) with \(m>n\) elements is linearly dependent.
Remark 3.4.6. (Completing to a basis).
Let \(V\) be a finite-dimensional vector space over a field \(F\text{,}\) and let \(S\subseteq V\) be a nonempty subset. Consider linearly independent vectors \(v_1,v_2,\ldots,v_n\in S\text{.}\) Then either every subset \(\{v_1,\ldots,v_n,w\}\) with \(w\in S\) is linearly dependent or there exists \(v_{n+1}\in S\) such that \(\{v_1,\ldots,v_n,v_{n+1}\}\) is linearly independent.
Similarly, either every \(\{v_1,\ldots,v_n,v_{n+1},w\}\) with \(w\in S\) is linearly dependent or there exists \(v_{n+2}\in S\) such that \(\{v_1,\ldots,v_n,v_{n+1},v_{n+2}\}\) is linearly independent. Continuing in this way, the process must stop after a finite number of steps, since by Corollary 3.4.5 a linearly independent subset of \(V\) has at most \(\dim_FV\) elements. We thus obtain a linearly independent subset of \(S\) such that any bigger subset of \(S\) is linearly dependent.
Therefore, any linearly independent subset of \(S\) can be expanded to a maximal linearly independent subset of \(S\text{.}\)
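For instance, take \(V=\mathbb{R}^3\) and \(S=\{(1,0,0),(0,1,0),(1,1,0),(0,0,1)\}\text{.}\) Starting from the linearly independent vector \(v_1=(1,0,0)\in S\) we may choose \(v_2=(0,1,0)\) and then \(v_3=(0,0,1)\text{.}\) The set \(\{v_1,v_2,v_3\}\) is linearly independent, and adjoining the remaining element \((1,1,0)=v_1+v_2\) of \(S\) makes it linearly dependent, so it is a maximal linearly independent subset of \(S\text{.}\)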
In fact we have the following result.
Theorem 3.4.7.
Let \(V\) be a finite-dimensional vector space over \(F\text{,}\) and let \(\{e_1,e_2,\ldots,e_n\}\) be a basis of \(V\text{.}\) If \(\mathfrak{B}=\{f_1,f_2,\ldots,f_r\}\subseteq V\) is linearly independent then \(\mathfrak{B}\) can be extended to a basis of \(V\) by adding \(n-r\) vectors from \(\{e_1,e_2,\ldots,e_n\}\text{.}\)
Proof.
Consider the set \(\{f_1,f_2,\ldots,f_r,e_1,e_2,\ldots,e_n\}\text{.}\) By Remark 3.4.6, starting from the linearly independent vectors \(f_1,f_2,\ldots,f_r\) we can choose in this set a maximal linearly independent subset \(\{f_1,f_2,\ldots,f_r,e_{k_1},e_{k_2},\ldots,e_{k_\ell}\}\) that includes all the \(f_i\text{.}\) By maximality, adjoining any \(e_j\) to this subset makes it linearly dependent, so, as shown at the beginning of this section, every \(e_j\) lies in its span. Since \(e_1,e_2,\ldots,e_n\) span \(V\text{,}\) the chosen subset spans \(V\) and is therefore a basis of \(V\text{.}\) By Corollary 3.4.4 it has \(n\) elements, so \(\ell=n-r\) and \(\mathfrak{B}\) has been extended to a basis by adding \(n-r\) vectors from \(\{e_1,e_2,\ldots,e_n\}\text{.}\)
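For instance, in \(\mathbb{R}^3\) with the standard basis \(\{e_1,e_2,e_3\}\text{,}\) the linearly independent set \(\mathfrak{B}=\{(1,1,0)\}\) can be extended to the basis \(\{(1,1,0),e_2,e_3\}\) of \(\mathbb{R}^3\) by adding \(n-r=3-1=2\) of the standard basis vectors. The choice is not unique in general: \(\{(1,1,0),e_1,e_3\}\) is another such extension.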