Linear algebra: the theory of vector spaces and linear transformations

The definition of a subspace of a vector space \(V\) is very much in the same spirit as our definition of linear transformations. It is a subset of \(V\) that in some sense respects the vector space structure: in the language of Definition 3.3.1, it is a subset that is closed under addition and closed under scalar multiplication.

In fact the connection between linear transformations and subspaces goes deeper than this. As we will see in Definition 3.4.1, a linear transformation \(T\colon V\rightarrow W\) naturally gives rise to two important subspaces: the null space of \(T\) and the image of \(T\text{.}\)

Subsection 3.3.1 Definition of subspace

Definition 3.3.1. Subspace.

Let \(V\) be a vector space. A subset \(W\subseteq V\) is a subspace of \(V\) if the following conditions hold:

  1. \(W\) contains the zero vector.
We have \(\boldzero\in W\text{.}\)

  2. \(W\) is closed under addition.
For all \(\boldv_1,\boldv_2\in V\text{,}\) if \(\boldv_1,\boldv_2\in W\text{,}\) then \(\boldv_1+\boldv_2\in W\text{.}\) Using logical notation:

\begin{equation*} \boldv_1,\boldv_2\in W\implies \boldv_1+\boldv_2\in W\text{.} \end{equation*}

  3. \(W\) is closed under scalar multiplication.
For all \(c\in \R\) and \(\boldv\in V\text{,}\) if \(\boldv\in W\text{,}\) then \(c\boldv\in W\text{.}\) In logical notation:

\begin{equation*} \boldv\in W\Rightarrow c\boldv\in W\text{.} \end{equation*}

Example 3.3.2.

Let \(V=\R^2\) and let
\begin{equation*} W=\{(t,t)\in\R^2 \colon t\in\R\}\text{.} \end{equation*}
Prove that \(W\) is a subspace. We must show properties (i)-(iii) of Definition 3.3.1 hold for \(W\text{.}\)

The zero element of \(V\) is \(\boldzero=(0,0)\text{,}\) which is certainly of the form \((t,t)\text{.}\) Thus \(\boldzero\in W\text{.}\)

We must prove the implication \(\boldv_1, \boldv_2\in W\Rightarrow \boldv_1+\boldv_2\in W\text{.}\) We have

\begin{align*} \boldv_1,\boldv_2\in W \amp\Rightarrow \boldv_1=(t,t), \boldv_2=(s,s) \text{ for some \(t,s\in\R\)}\\ \amp\Rightarrow \boldv_1+\boldv_2=(t+s,t+s)\\ \amp\Rightarrow \boldv_1+\boldv_2\in W\text{.} \end{align*}

We must prove the implication \(\boldv\in W\Rightarrow c\boldv\in W\text{,}\) for any \(c\in \R\text{.}\) We have

\begin{align*} \boldv\in W \amp\Rightarrow \boldv=(t,t)\\ \amp\Rightarrow c\boldv=(ct,ct)\\ \amp\Rightarrow c\boldv\in W\text{.} \end{align*}
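The computations above can be mirrored numerically. The sketch below is an illustrative sanity check, not a proof: the helper `in_W` is a hypothetical membership test for \(W=\{(t,t)\colon t\in\R\}\text{,}\) and we sample only finitely many vectors to exercise conditions (i)-(iii).

```python
import random

# Hypothetical membership test for W = {(t, t) : t in R} from Example 3.3.2.
def in_W(v):
    x, y = v
    return abs(x - y) < 1e-9

assert in_W((0.0, 0.0))  # (i): the zero vector lies in W

for _ in range(1000):
    t = random.uniform(-10.0, 10.0)
    s = random.uniform(-10.0, 10.0)
    c = random.uniform(-10.0, 10.0)
    v1, v2 = (t, t), (s, s)
    # (ii): sums of members stay in W
    assert in_W((v1[0] + v2[0], v1[1] + v2[1]))
    # (iii): scalar multiples of members stay in W
    assert in_W((c * v1[0], c * v1[1]))
```

Such a check can never replace the algebraic argument, since it only inspects a finite sample, but a failed assertion would immediately produce a counterexample vector.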

Example 3.3.3.

Let \(V=\R^2\) and let
\begin{equation*} W=\{(x,y)\in \R^2\colon x, y\geq 0\}\text{.} \end{equation*}

Is \(W\) a subspace of \(V\)? Decide which of the properties (i)-(iii) in Definition 3.3.1 (if any) are satisfied by \(W\text{.}\)

Clearly \(\boldzero=(0,0)\in W\text{.}\)

Suppose \(\boldv_1=(x_1,y_1), \boldv_2=(x_2,y_2)\in W\text{.}\) Then \(x_1, x_2, y_1, y_2\geq 0\text{,}\) in which case \(x_1+x_2, y_1+y_2\geq 0\text{,}\) and hence \(\boldv_1+\boldv_2\in W\text{.}\) Thus \(W\) is closed under addition.

The set \(W\) is not closed under scalar multiplication. Indeed, let \(\boldv=(1,1)\in W\text{.}\) Then \((-2)\boldv=(-2,-2)\notin W\text{.}\)
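The counterexample can be reproduced concretely. In the sketch below, `in_quadrant` is a hypothetical membership test for the first quadrant \(W\) of Example 3.3.3; it confirms that addition closure holds for a sample while scalar closure fails at \(\boldv=(1,1)\text{.}\)

```python
# Hypothetical membership test for W = {(x, y) : x, y >= 0}, the first quadrant.
def in_quadrant(v):
    x, y = v
    return x >= 0 and y >= 0

v = (1.0, 1.0)
assert in_quadrant(v)                           # v = (1, 1) lies in W
assert not in_quadrant((-2 * v[0], -2 * v[1]))  # (-2)v = (-2, -2) escapes W

# Closure under addition does hold: nonnegative coordinates sum to nonnegative.
assert in_quadrant((1.0 + 2.5, 1.0 + 0.5))
```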

Procedure 3.3.4. Two-step proof for subspaces.

As with proofs regarding linearity of functions, we can merge conditions (ii)-(iii) of Definition 3.3.1 into a single statement about linear combinations, deriving the following two-step method for proving a set \(W\) is a subspace of a vector space \(V\text{.}\)

  1. Show \(\boldzero_V\in W\text{.}\)

  2. Show that for all \(c,d\in\R\text{,}\)
\begin{equation*} \boldv_1, \boldv_2\in W\implies c\boldv_1+d\boldv_2\in W\text{.} \end{equation*}
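The two-step method can be sketched as a generic randomized check. The helper `looks_like_subspace` below is hypothetical and samples only finitely many linear combinations, so passing it is evidence rather than a proof; a failure, however, yields a concrete counterexample vector.

```python
import random

def looks_like_subspace(in_W, sample_W, zero, trials=500):
    # Step 1: the zero vector must belong to W.
    if not in_W(zero):
        return False
    # Step 2: sampled linear combinations c*v1 + d*v2 of members must stay in W.
    for _ in range(trials):
        v1, v2 = sample_W(), sample_W()
        c, d = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
        combo = tuple(c * a + d * b for a, b in zip(v1, v2))
        if not in_W(combo):
            return False
    return True

def sample_diag():
    # Draw a random member (t, t) of the diagonal line from Example 3.3.2.
    t = random.uniform(-10.0, 10.0)
    return (t, t)

# The diagonal line passes the two-step check.
assert looks_like_subspace(lambda v: abs(v[0] - v[1]) < 1e-9, sample_diag, (0.0, 0.0))
```

Checking linear combinations \(c\boldv_1+d\boldv_2\) in one step covers both closure conditions at once, mirroring how the procedure merges (ii) and (iii).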

Video example: deciding if \(W\subseteq V\) is a subspace.

Figure 3.3.5. Video: deciding if \(W\subseteq V\) is a subspace

Figure 3.3.6. Video: deciding if \(W\subseteq V\) is a subspace

Remark 3.3.7. Subspaces are vector spaces.

If \(W\) is a subspace of a vector space \(V\text{,}\) then it inherits a vector space structure from \(V\) by simply restricting the vector operations defined on \(V\) to the subset \(W\text{.}\)

It is important to understand how conditions (ii)-(iii) of Definition 3.3.1 come into play here. Without them we would not be able to say that restricting the vector operations of \(V\) to elements of \(W\) actually gives rise to well-defined operations on \(W\text{.}\) To be well-defined the operations must output elements that lie not just in \(V\text{,}\) but in \(W\) itself. This is precisely what being closed under addition and scalar multiplication guarantees.

Once we know restriction gives rise to well-defined operations on \(W\text{,}\) verifying the axioms of Definition 3.1.1 mostly amounts to observing that if a condition is true for all \(\boldv\) in \(V\text{,}\) it is certainly true for all \(\boldv\) in the subset \(W\text{.}\)

The “existential axioms” (iii) and (iv) of Definition 3.1.1, however, require special consideration. By definition, a subspace \(W\) contains the zero vector of \(V\text{,}\) and clearly this still acts as the zero vector when we restrict the vector operations to \(W\text{.}\) What about vector inverses? We know that for any \(\boldv\in W\) there is a vector inverse \(-\boldv\) lying somewhere in \(V\text{.}\) We must show that in fact \(-\boldv\) lies in \(W\text{,}\) i.e. we need to show that the operation of taking the vector inverse is well-defined on \(W\text{.}\) We prove this as follows: since \(W\) is closed under scalar multiplication, we have \((-1)\boldv\in W\) for any \(\boldv\in W\text{,}\) and since \((-1)\boldv=-\boldv\text{,}\) we conclude that \(-\boldv\in W\text{.}\)

We now know how to determine whether a given subset of a vector space is in fact a subspace. We are also interested in means of constructing subspaces from some given ingredients. The result below tells us that taking the intersection of a given collection of subspaces results in a subspace. In Subsection 3.4.1 we see how a linear transformation automatically gives rise to two subspaces.

Theorem 3.3.8. Intersection of subspaces.

Let \(V\) be a vector space. Given a collection \(W_1, W_2,\dots, W_r\text{,}\) where each \(W_i\) is a subspace of \(V\text{,}\) the intersection