
Section 3.6 Examples

In this section we give a few examples of bases of vector spaces. First, recall the definition of the Kronecker delta:
\begin{equation*} \delta_{ij}=\begin{cases}1\amp\text{if }i=j\\0\amp\text{if } i\neq j\end{cases} \end{equation*}
We begin with the following example.
Let \(F\) be a field and consider \(F^n\) as a vector space over \(F\) (refer to Example 2.2.2). For \(i\in\{1,2,\ldots,n\}\text{,}\) let \(e_i=(0,0,\ldots,0,1,0,\ldots,0)\text{,}\) where \(1\) is at the \(i\)-th place. In other words,
\begin{equation*} e_i=(\delta_{i1},\delta_{i2},\ldots,\delta_{in}). \end{equation*}
We claim that \(\mathfrak{B}=\{e_1,e_2,\ldots,e_n\}\) is a basis of \(F^n\text{.}\) We first show that \(\mathfrak{B}\) is linearly independent. Suppose \(\sum_i\alpha_ie_i=0\text{.}\) Comparing the \(\ell\)-th coordinates of both sides, for each \(\ell\in\{1,2,\ldots,n\}\) we get
\begin{equation*} \sum_i\alpha_i\delta_{i\ell}=0. \end{equation*}
Hence, \(\alpha_\ell=0\text{.}\)
Now we show that \(\mathfrak{B}\) spans \(F^n\text{.}\) Let \(x=(x_1,x_2,\ldots,x_n)\in F^n\text{.}\) We can write \(x\) as a linear combination of vectors in \(\mathfrak{B}\text{,}\) indeed,
\begin{equation*} x=\sum_ix_ie_i. \end{equation*}
This also shows that \(\dim_F(F^n)=n.\) We call \(\{e_1,\ldots,e_n\}\) the standard basis of \(F^n\text{.}\)
Similarly, \(\{e_i^t:1\leq i\leq n\}\) is a basis of \(M_{n\times 1}(F)\text{,}\) which we also call the standard basis.
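As a quick computational check (a minimal SymPy sketch, not part of the text; the value \(n=3\) and the vector chosen are arbitrary), one can verify both linear independence and spanning for \(F=\Q\text{:}\)
\begin{verbatim}
from sympy import Matrix

n = 3
# Columns of the identity matrix are the standard basis vectors e_1, ..., e_n.
basis = [Matrix.eye(n).col(i) for i in range(n)]

# Linear independence: the matrix with columns e_1, ..., e_n has rank n.
assert Matrix.hstack(*basis).rank() == n

# Spanning: any x = (x_1, ..., x_n)^t equals the sum of x_i * e_i.
x = Matrix([5, -2, 7])
assert sum((x[i] * basis[i] for i in range(n)), Matrix.zeros(n, 1)) == x
\end{verbatim}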
Consider the matrix over \(\Q\)
\begin{equation*} A=\begin{pmatrix}1\amp 1\amp 1\\1\amp 1\amp 1\\1\amp 1\amp 1\end{pmatrix}. \end{equation*}
Its row-reduced echelon form is
\begin{equation*} R=\begin{pmatrix}1\amp 1\amp 1\\0\amp 0\amp 0\\0\amp 0\amp 0\end{pmatrix}. \end{equation*}
We want to find a basis for the solution space of \(AX=0\text{.}\) In Chapter 1 we saw that the solution spaces of the systems \(AX=0\) and \(RX=0\) are the same. So, we work with the matrix \(R\text{.}\)
Only the first row of \(R\) is nonzero, and its leading entry occurs in the first column. Thus the system \(RX=0\) reduces to the single equation
\begin{equation*} x_1+x_2+x_3=0. \end{equation*}
Solutions of the system \(RX=0\) are obtained by assigning arbitrary values to \(x_2,x_3\) and then computing the corresponding value of \(x_1\text{.}\) Consider the vectors \(v=(-1,1,0)^t\) and \(w=(-1,0,1)^t\text{.}\) We claim that \(\{v,w\}\) forms a basis for the space of solutions of \(RX=0\text{.}\) If \(\alpha v+\beta w=0\) then, comparing the second and third coordinates, \(\alpha=\beta=0\text{;}\) hence \(\{v,w\}\) is linearly independent.
Now suppose that \((s_1,s_2,s_3)^t\) is a solution of \(RX=0\text{.}\) Then,
\begin{equation*} s_2\cdot v+s_3\cdot w=(-s_2-s_3,s_2,s_3)^t \end{equation*}
is also a solution of \(RX=0\text{.}\) Moreover, as \(s_1=-s_2-s_3\) we get that \((s_1,s_2,s_3)^t\) is a linear combination of \(v,w\text{.}\) Hence, \(\{v,w\}\) spans the space of solutions of \(RX=0\text{.}\)
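This computation can be reproduced with SymPy (a minimal sketch; rref and nullspace are standard SymPy matrix methods):
\begin{verbatim}
from sympy import Matrix

A = Matrix([[1, 1, 1],
            [1, 1, 1],
            [1, 1, 1]])

R, pivots = A.rref()   # row-reduced echelon form and pivot columns
# R == Matrix([[1, 1, 1], [0, 0, 0], [0, 0, 0]]), pivots == (0,)

# SymPy returns a basis of the solution space of AX = 0:
for u in A.nullspace():
    print(u.T)         # Matrix([[-1, 1, 0]]) and Matrix([[-1, 0, 1]])
\end{verbatim}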
Let \(A\in M_{m\times n}(F)\) be a matrix and \(R\) be its row-reduced echelon form. Let \(\ker A\) be the subspace of solutions of \(AX=0\text{.}\) Then, \(\ker A\) is also the subspace of solutions of \(RX=0\text{.}\) Suppose that the leading nonzero entries of the nonzero rows of \(R\) occur in columns \(k_1,k_2,\ldots,k_r\text{.}\) Let \(I=\{1,2,\ldots,n\}\setminus\{k_1,k_2,\ldots,k_r\}\text{.}\) Each pivot variable \(x_{k_\ell}\) is then determined by the free variables \(x_i\) (\(i\in I\)): if \((x_1,x_2,\ldots,x_n)^t\in\ker A\) then, for \(\ell=1,2,\ldots,r\text{,}\) there are scalars \(\alpha_{k_\ell}(i)\in F\text{,}\) depending only on \(R\text{,}\) such that
\begin{equation} x_{k_\ell}=\sum_{i\in I}\alpha_{k_\ell}(i)\,x_i\tag{3.6.1} \end{equation}
Solutions of \(RX=0\) can be obtained by assigning arbitrary values to \(x_i\) (\(i\in I\)) and then obtaining the corresponding values of \(x_{k_\ell}\) (\(1\leq\ell\leq r\)). For each \(i\in I\text{,}\) consider the vector \(v_i\in\ker A\) obtained by putting \(1\) in the \(i\)-th place and \(0\) in the \(j\)-th place for every \(j\in I\setminus\{i\}\text{.}\) The \(k_\ell\)-th place of \(v_i\) is then \(\alpha_{k_\ell}(i)\in F\) for \(1\leq\ell\leq r\text{.}\) To summarize, for \(p\in\{1,2,\ldots,n\}\text{,}\) the \(p\)-th place of the vector \(v_i\) (\(i\in I\)) is given by
\begin{equation*} v_{ip}=\begin{cases}\delta_{ip}\amp p\in I\\\alpha_{p}(i)\amp p\in\{k_1,k_2,\ldots,k_r\}.\end{cases} \end{equation*}
We claim that \(\{v_i\}_{i\in I}\) is linearly independent. Suppose that \(\sum_{i\in I}\beta_iv_i=0\text{.}\) Then \(\sum_{i\in I}\beta_iv_{ip}=0\) for each \(p\in\{1,2,\ldots,n\}\text{.}\) In particular, for \(p\in I\) we get \(\sum_{i\in I}\beta_i\delta_{ip}=0\) and thus \(\beta_p=0\text{.}\) Hence, the claim is proved.
We now show that \(\{v_i\}_{i\in I}\) spans \(\ker A\text{.}\) Let \(s=(s_1,s_2,\ldots,s_n)^t\in\ker A\text{.}\) In particular, by eq. (3.6.1), \(s_{k_\ell}=\sum_{i\in I}\alpha_{k_\ell}(i)s_i\text{.}\) Hence, comparing the \(p\)-th places for \(p\in I\) and for \(p=k_\ell\text{,}\) we have \(s=\sum_{i\in I} s_iv_i\text{.}\)
This also shows that \(\dim_F(\ker A)=n-r.\)
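The identity \(\dim_F(\ker A)=n-r\) can also be checked numerically; in the following sketch the \(3\times 5\) matrix is a hypothetical example whose third row is the sum of the first two, so that \(r=2\) and \(n-r=3\text{:}\)
\begin{verbatim}
from sympy import Matrix

# Hypothetical 3 x 5 example; the third row is the sum of the first two.
A = Matrix([[1, 2, 0, 1, 3],
            [0, 0, 1, 4, 1],
            [1, 2, 1, 5, 4]])

R, pivots = A.rref()
r = len(pivots)        # number of nonzero rows of R
n = A.cols

# One basis vector v_i per free column, so dim ker A = n - r.
assert len(A.nullspace()) == n - r == 3
\end{verbatim}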
Let \(F\) be a field and let \(\mathcal{P}_n(F)\) be the space of all polynomials in one variable over \(F\) of degree at most \(n\text{.}\) Consider \(\mathfrak{B}=\{1,x,x^2,\ldots,x^{n-1},x^n\}\text{.}\) Check that \(\mathfrak{B}\) is a basis of \(\mathcal{P}_n(F)\text{.}\)
In particular, since \(\mathfrak{B}\) has \(n+1\) elements, \(\dim_F\big(\mathcal{P}_n(F)\big)=n+1\text{.}\)
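To see the \(n+1\) coordinates concretely (a small sketch; the polynomial \(3-x+2x^3\) and the value \(n=4\) are arbitrary choices):
\begin{verbatim}
from sympy import symbols, Poly, Matrix

x = symbols('x')
n = 4

# Coordinates of a polynomial of degree <= n in the basis {1, x, ..., x^n}.
p = Poly(3 - x + 2*x**3, x)
coords = Matrix([p.coeff_monomial(x**k) for k in range(n + 1)])
print(coords.T)   # Matrix([[3, -1, 0, 2, 0]]): n + 1 coordinates
\end{verbatim}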
Let \(V\) and \(W\) be finite-dimensional vector spaces over a field \(F\text{.}\) Suppose that \(\mathfrak{B}_V=\{v_1,v_2,\ldots,v_n\}\) and \(\mathfrak{B}_W=\{w_1,w_2,\ldots,w_m\}\) are bases of \(V\) and \(W\text{,}\) respectively. Consider the vector space \(V\bigoplus W\) (refer to Example 2.2.6). We claim that \(\mathfrak{B}=\{(v_i,0),(0,w_j):v_i\in\mathfrak{B}_V\text{ and }w_j\in\mathfrak{B}_W\}\) is a basis of \(V\bigoplus W\text{.}\) Indeed, if
\begin{equation*} \sum_i\alpha_i(v_i,0)+\sum_j\beta_j(0,w_j)=0 \end{equation*}
then, \(\sum_i\alpha_iv_i=0\) and \(\sum_j\beta_jw_j=0\text{.}\) Since \(\mathfrak{B}_V,\mathfrak{B}_W\) are bases, all \(\alpha_i=0\) and \(\beta_j=0\text{.}\) Further, if \((v,w)\in V\bigoplus W\) then there are scalars \(\alpha_i,\beta_j\in F\) such that \(v=\sum_i\alpha_iv_i\quad\text{and}\quad w=\sum_j\beta_jw_j\text{.}\) Therefore,
\begin{equation*} (v,w)=\sum_i\alpha_i(v_i,0)+\sum_j\beta_j(0,w_j). \end{equation*}
In particular, \(\dim_F\big(V\bigoplus W\big)=\dim_FV+\dim_FW\text{.}\)
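The dimension count can likewise be illustrated computationally (the choices \(V=\Q^2\) and \(W=\Q^3\) with their standard bases are hypothetical):
\begin{verbatim}
from sympy import Matrix

# Hypothetical choices: V = Q^2 and W = Q^3 with their standard bases.
dimV, dimW = 2, 3
basisV = [Matrix.eye(dimV).col(i) for i in range(dimV)]
basisW = [Matrix.eye(dimW).col(j) for j in range(dimW)]

# The vectors (v_i, 0) and (0, w_j), written as columns of length dimV + dimW.
B = [Matrix.vstack(v, Matrix.zeros(dimW, 1)) for v in basisV] \
  + [Matrix.vstack(Matrix.zeros(dimV, 1), w) for w in basisW]

# Full rank confirms that B is a basis, so dim(V (+) W) = dimV + dimW.
assert Matrix.hstack(*B).rank() == dimV + dimW
\end{verbatim}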