- Find the row rank of each of the following matrices:

(a) $\left(\begin{array}{cccc}2 & 4 & -3 & 0 \\ 1 & -4 & 3 & 0 \\ 3 & -5 & 2 & 1\end{array}\right);\quad$(b) $\left(\begin{array}{llll}1 & 2 & 3 & 0 \\ 2 & 3 & 4 & 1 \\ 3 & 4 & 5 & 2\end{array}\right);\quad$(c) $\left(\begin{array}{llll}1 & 2 & 3 & 0 \\ 2 & 3 & 4 & 2 \\ 3 & 4 & 5 & 2\end{array}\right)$

*Solution.*

(a) $\left(\begin{array}{cccc}2 & 4 & -3 & 0 \\ 1 & -4 & 3 & 0 \\ 3 & -5 & 2 & 1\end{array}\right)\rightarrow\left(\begin{array}{cccc}1&2&-\frac32&0\\0&-6&\frac92&0\\0&-11&\frac{13}2&1\end{array}\right)\rightarrow\left(\begin{array}{cccc}1&2&-\frac32&0\\0&1&-\frac34&0\\0&0&-\frac74&1\end{array}\right)$. rank=3.
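The arithmetic in these reductions is easy to slip on, so here is an independent check of the rank in plain Python with exact rational arithmetic. The `rank` helper is just standard Gaussian elimination, written out so the snippet is self-contained; it is an illustration, not part of the intended hand computation.

```python
from fractions import Fraction

def rank(rows):
    # Row rank via exact Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[2, 4, -3, 0],
     [1, -4, 3, 0],
     [3, -5, 2, 1]]
print(rank(A))  # 3, agreeing with the hand computation
```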

(b) $\left(\begin{array}{llll}1 & 2 & 3 & 0 \\ 2 & 3 & 4 & 1 \\ 3 & 4 & 5 & 2\end{array}\right)\rightarrow\left(\begin{array}{llll}1 & 2 & 3 & 0 \\ 0 & -1 & -2 & 1 \\ 0 & -2 &-4 & 2\end{array}\right)\rightarrow\left(\begin{array}{llll}1 & 2 & 3 & 0 \\ 0 & 1 &2 &-1 \\ 0 & 0 &0 &0\end{array}\right)$. rank=2.

(c) $\left(\begin{array}{llll}1 & 2 & 3 & 0 \\ 2 & 3 & 4 & 2 \\ 3 & 4 & 5 & 2\end{array}\right)\rightarrow\left(\begin{array}{llll}1 & 2 & 3 & 0 \\ 0 &1 &2 &-2 \\0&-2&-4& 2\end{array}\right)\rightarrow\left(\begin{array}{llll}1 & 2 & 3 & 0 \\ 0 &1 &2 &-2 \\0&0&0&-2\end{array}\right)$. rank=3.

- Let $V$ be a vector space and let $U,W$ be subspaces of $V$. Show that $U\setminus W$ is never a subspace. Find a necessary and sufficient condition for $U∪W$ to be a subspace.

*Solution.*

Since $0_V\in W$, we have $0_V\notin U\setminus W$, so $U\setminus W$ does not contain the zero vector and is therefore never a subspace.

Condition: One of $U$ and $W$ is contained in the other.

Sufficient: If $U\subset W$, then $U\cup W = W$ and $W$ is a subspace of $V$ by assumption. If $W\subset U$, then $U\cup W = U$ and $U$ is a subspace of $V$ by assumption.

Necessary: Suppose $U\cup W$ is a subspace of $V$. Assume that $U\nsubseteq W$ and $W\nsubseteq U$.

Then, there is an element $u\in U$ such that $u\notin W$ and there is an element $w\in W$ such that $w\notin U$.

Since $U\cup W$ is a subspace of $V$ and $u\in U\subset U\cup W$ and $w\in W\subset U\cup W$, $u+w\in U\cup W$.

So, $u+w\in U$ or $u+w\in W$. If $u+w\in U$, then $u+w=u'$ for some $u'\in U$. Since $U$ is a subspace of $V$, $w=u'-u\in U$. This is a contradiction.

If $u+w\in W$, then $u+w=w'$ for some $w'\in W$. Since $W$ is a subspace of $V$, $u=w'-w\in W$. This is a contradiction. So, $U\subset W$ or $W\subset U$.

- Let $V =\mathbb R^n$ where $n > 2$, and let $U$ and $W$ be subspaces of $V$ of dimension $n − 1$.

(a) Show that if $U\ne W$ then $\dim(U ∩ W) = n − 2$.

(b) Now suppose that $n>3$ and let $U_1,U_2, U_3$ be three distinct subspaces of dimension $n−1$. Must it be true that $\dim(U_1∩U_2∩U_3)=n−3$? Give a proof or find a counterexample.

(c) Show that if $\dim U\le n − 2$ then there are infinitely many different subspaces $X$ such that $U\le X\le V$.

*Solution.*

(a) Because $U\subset U+W\subset V$, we have $\dim(U+W)=n-1$ or $n$. If $\dim(U+W)=n-1$, then $U+W=U$ and $U+W=W$ (a subspace of $U+W$ of the same dimension containing it must equal it), contradicting $U\ne W$. So $\dim(U+W)=n$. By the dimension formula, $\dim(U+W)+\dim(U ∩ W) =\dim U+\dim W$, so $\dim(U ∩ W) = n − 2$.

(b) No. Counterexample: let $n=4$ and let $P=\{(x_1,x_2,0,0):x_1,x_2\in\mathbb R\}$, a plane through the origin in $\mathbb R^4$. Take $U_1,U_2,U_3$ to be three distinct hyperplanes containing $P$, say $U_1=\{x_3=0\}$, $U_2=\{x_4=0\}$, $U_3=\{x_3=x_4\}$. Then $U_1∩U_2∩U_3=P$, so $\dim(U_1∩U_2∩U_3)=2\ne 1=n-3$.
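As a concrete check, consider the hyperplanes $x_3=0$, $x_4=0$ and $x_3=x_4$ in $\mathbb R^4$, all of which contain the plane $x_3=x_4=0$. Each is the kernel of one linear constraint, so the dimension of their intersection is $4$ minus the rank of the stacked constraint rows. A plain-Python verification with exact arithmetic follows; the `rank` helper is standard Gaussian elimination, repeated here only to keep the snippet self-contained.

```python
from fractions import Fraction

def rank(rows):
    # Row rank via exact Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

constraints = [[0, 0, 1, 0],    # U1: x3 = 0
               [0, 0, 0, 1],    # U2: x4 = 0
               [0, 0, 1, -1]]   # U3: x3 - x4 = 0
dim_intersection = 4 - rank(constraints)
print(dim_intersection)  # 2, not n - 3 = 1
```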

(c) Let $Y$ be a complement to $U$ in $V$; then $U\cap Y=\{0\}$ and $\dim Y=n-\dim U\ge 2$. For any subspace $Z\le Y$, the subspace $X=U+Z$ satisfies $U\le X\le V$. For distinct $Z_1,Z_2\le Y$, we have $Y\cap(U+Z_1)=Z_1\ne Z_2=Y\cap(U+Z_2)$, so $U+Z_1\ne U+Z_2$; thus $Z\mapsto U+Z$ is injective. Since $\dim Y\ge 2$ and $\mathbb R$ is infinite, $Y$ contains infinitely many distinct one-dimensional subspaces $Z$ (for linearly independent $y_1,y_2\in Y$, the lines $\langle y_1+\lambda y_2\rangle$, $\lambda\in\mathbb R$, are pairwise distinct), so there are infinitely many different choices of $X$.

- Let $V =\mathbb R^4$, and let$$X=\left\{\left(x_{1}, x_{2}, x_{3}, x_{4}\right) \in V: x_{2}+x_{3}+x_{4}=0\right\}$$$$Y=\left\{\left(x_{1}, x_{2}, x_{3}, x_{4}\right) \in V: x_{1}+x_{2}=x_{3}-2 x_{4}=0\right\}$$Find bases for $X, Y , X ∩ Y$ and $X + Y$, and write down the dimensions of these subspaces.

*Solution.*

$\{(3,-3,2,1),(0,1,-1,0),(0,1,0,-1)\}$ is a basis for $X$, so $\dim X=3$.

$\{(3,-3,2,1),(1,-1,0,0)\}$ is a basis for $Y$, so $\dim Y=2$.
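A quick sanity check, in plain Python, that the claimed vectors do lie in $X$ and $Y$ and are linearly independent; the `rank` helper is a self-contained exact Gaussian elimination, included purely for illustration.

```python
from fractions import Fraction

def rank(rows):
    # Row rank via exact Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

X_basis = [(3, -3, 2, 1), (0, 1, -1, 0), (0, 1, 0, -1)]
Y_basis = [(3, -3, 2, 1), (1, -1, 0, 0)]

# Membership: each vector satisfies the defining equations.
assert all(x2 + x3 + x4 == 0 for (x1, x2, x3, x4) in X_basis)
assert all(x1 + x2 == 0 and x3 - 2 * x4 == 0 for (x1, x2, x3, x4) in Y_basis)

# Independence: the rank equals the number of vectors.
print(rank(X_basis), rank(Y_basis))  # 3 2
```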

$\{(3,-3,2,1)\}$ is a basis for $X ∩ Y$, so $\dim(X ∩ Y)=1$.

$\{(3,-3,2,1),(0,1,-1,0),(0,1,0,-1),(1,-1,0,0)\}$ is a basis for $X + Y$, so $\dim(X + Y)=4$.

- Let $V$ be an $n$-dimensional vector space.

(a) Prove that $V$ contains a subspace of dimension $r$ for each $r$ in the range $0\le r\le n$.

(b) Show that if $U_0 < U_1 <\cdots < U_k\le V$ (strict containments of subspaces of $V$) then $k\le n$. Show also that if $k=n$ then $\dim U_r=r$ for $0\le r\le k$.

(c) Let $U,W$ be subspaces of $V$ such that $U\le W$. Show that there is a subspace $X$ of $V$ such that $W∩X=U$ and $W+X=V$.

*Proof.*

(a) Let $\{v_1,v_2,\cdots,v_n\}$ be a basis for $V$; then $\operatorname{span}\{v_1,v_2,\cdots,v_r\}$ is a subspace of dimension $r$ (for $r=0$ this is the span of the empty set, namely $\{0_V\}$).

(b) $\dim U_0<\dim U_1<\cdots<\dim U_k$ are $k+1$ integers in $[0,n]$, so $k\le n$. Since there are $n+1$ integers in $[0,n]$, if $k=n$ then $\dim U_r=r$ for $0\le r\le k$.

Ingredient: if $U_i<U_{i+1}$, then $\dim U_i<\dim U_{i+1}$.

*Proof:* Suppose $\dim U_i=\dim U_{i+1}$. Take a basis $\{u_1,\dots,u_m\}$ of $U_i$ and a vector $v\in U_{i+1}\setminus U_i$; then $\{u_1,\dots,u_m,v\}$ is linearly independent in $U_{i+1}$, so $\dim U_i+1\le\dim U_{i+1}$, a contradiction.

(c) Let $\{v_1,v_2,\cdots,v_k\}$ be a basis for $U$, extend it to a basis $\{v_1,v_2,\cdots,v_m\}$ for $W$, and further extend to a basis $\{v_1,v_2,\cdots,v_n\}$ for $V$. Then $X=\operatorname{span}\{v_1,v_2,\cdots,v_k,v_{m+1},v_{m+2},\cdots,v_n\}$ satisfies $W∩X=U$ and $W+X=V$.

- (a) Show that $V = U ⊕ W$ if and only if every vector $v ∈ V$ can be expressed uniquely in the form $v = u + w$ with $u ∈ U$ and $w ∈ W$.

(b) Let $V =\mathbb R^3$ and $U = \{(x_1, x_2, x_3) ∈ V : x_1 + x_2 + x_3 = 0\}$. For each of the following subspaces $W$, either prove that $V = U ⊕ W$, or explain why this is not true:

(i) $W =\{(x, 0, −x) : x ∈\mathbb R\}$;

(ii) $W =\{(x, 0, x) : x ∈\mathbb R\}$;

(iii) $W=\{(x_1, x_2, x_3) ∈ V : x_1 = x_3\}$.

*Proof.*

(a) ⇐: Suppose every $v\in V$ has a unique expression $v=u+w$ with $u\in U$ and $w\in W$. Existence of such an expression for every $v$ gives $V=U+W$. For the intersection: suppose $0\ne v\in U\cap W$; then $0=0+0=v+(−v)$, where $v∈U$ and $−v∈W$, gives two different expressions for $0$, a contradiction. So $U\cap W=\{0\}$ and hence $V=U⊕W$.

⇒: Suppose $V=U⊕W$. Since $V=U+W$, every $v\in V$ has at least one expression $v=u+w$ with $u\in U$ and $w\in W$. For uniqueness, suppose $v=u_1+w_1=u_2+w_2$, where $u_1,u_2\in U$ and $w_1,w_2\in W$; then $u_1-u_2=w_2-w_1\in U\cap W=\{0\}$, so $u_1=u_2$ and $w_1=w_2$. So $v$ can be expressed uniquely in the form $v=u+w$.

(b)(i) $x+0-x=0$, so $(x, 0, −x)\in U$ for every $x$; hence $W\le U$, so $U+W=U<V$. Thus $V\ne U⊕W$.

(ii) For any $v=(x_1, x_2, x_3) ∈ V$, we have $v=u+w$, where $u=\left(\frac{x_1-x_2-x_3}2,x_2,\frac{-x_1-x_2+x_3}2\right)\in U$, $w=\left(\frac{x_1+x_2+x_3}2,0,\frac{x_1+x_2+x_3}2\right)\in W$.

$(x, 0, x)\in U$⇒$2x=0$⇒$x=0$, so $U\cap W=\{0\}$. So $V = U ⊕ W$.
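The decomposition formulas in (ii) can be checked mechanically. Below is a small plain-Python check on a sample vector; the function name `decompose` is just for this illustration.

```python
from fractions import Fraction as F

def decompose(v):
    # Split v in R^3 as u + w with u in U (coordinates summing to 0)
    # and w in W = {(x, 0, x)}, following the formulas in the solution:
    # the common entry of w is s = (x1 + x2 + x3) / 2.
    x1, x2, x3 = v
    s = F(x1 + x2 + x3, 2)
    u = (x1 - s, x2, x3 - s)
    w = (s, F(0), s)
    return u, w

v = (5, -2, 7)
u, w = decompose(v)
assert sum(u) == 0                        # u lies in U
assert w[1] == 0 and w[0] == w[2]         # w lies in W
assert tuple(a + b for a, b in zip(u, w)) == v   # u + w recovers v
```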

(iii) $\{(1,0,1),(0,1,0)\}$ is a basis for $W$, so $\dim W=2$. Since $\dim U=2$ as well, $\dim U+\dim W=4\ne 3=\dim V$, so $V$ cannot be the direct sum of $U$ and $W$.

**Starter**

**S1.** For each of the following sets $S$ in a vector space $V$, find a basis for $\langle S\rangle$.

(i) $S = \{(1, 0, 3),(−2, 5, 4)\} ⊆\mathbb R^3$

(ii) $S = \{(6, 2, 0, −1),(3, 5, 9, −2),(−1, 0, 7, 8),(5, 5, −1, 2)\} ⊆\mathbb R^4$

(iii) $S =\{(7, −3, 2, 11, 2),(0, 4, 9, −5, 16),(20, −13, 16, 24, 38),(1, 12, 8, −1, 0)\} ⊆\mathbb R^5$

One strategy is to create a matrix whose rows are the vectors of $S$. Then $\langle S\rangle$ is precisely the row space of this matrix, which we can find by applying EROs to reduce the matrix to echelon form. But by looking carefully at the situation, it might be possible to use another, quicker, strategy too.

(i) We have two vectors in $S$, and it is quick to see that they are linearly independent (neither is a scalar multiple of the other), so $S$ is itself a basis for $\langle S\rangle$.

(ii) It’s harder to see by eye whether these vectors are linearly independent this time, so let’s try creating a matrix whose rows are the vectors of $S$, and reduce the matrix to echelon form.

We start with the matrix$$\left(\begin{array}{cccc}6 & 2 & 0 & -1 \\ 3 & 5 & 9 & -2 \\ -1 & 0 & 7 & 8 \\ 5 & 5 & -1 & 2\end{array}\right)$$When I used EROs to reduce it to echelon form, I got$$\left(\begin{array}{cccc}1 & \frac{1}{3} & 0 & -\frac{1}{6} \\ 0 & 1 & \frac{9}{4} & -\frac{3}{8} \\ 0 & 0 & 1 & \frac{191}{150} \\ 0 & 0 & 0 & 1\end{array}\right)$$So a basis for $\langle S\rangle$ is $\left\{\left(1, \frac{1}{3}, 0,-\frac{1}{6}\right),\left(0,1, \frac{9}{4},-\frac{3}{8}\right),\left(0,0,1, \frac{191}{150}\right),(0,0,0,1)\right\}$. These span $\langle S\rangle$ because EROs don’t change the row space, and we can immediately see that they are linearly independent by looking at the leading entries.

*If we had continued with EROs to reduce the matrix to RRE form, then we would have found that the standard basis is a basis for $\langle S\rangle$ in this case.*
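To double-check that this matrix has full rank (so that its RRE form is the identity and the standard basis is indeed a basis for $\langle S\rangle$), here is a plain-Python verification with exact arithmetic; the `rank` helper is a self-contained Gaussian elimination, included for illustration.

```python
from fractions import Fraction

def rank(rows):
    # Row rank via exact Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

S = [[6, 2, 0, -1],
     [3, 5, 9, -2],
     [-1, 0, 7, 8],
     [5, 5, -1, 2]]
print(rank(S))  # 4: full rank, so the RRE form is the identity
```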

(iii) Let’s try the matrix-and-EROs strategy again. We start with the matrix$$\left(\begin{array}{ccccc}7 & -3 & 2 & 11 & 2 \\ 0 & 4 & 9 & -5 & 16 \\ 20 & -13 & 16 & 24 & 38 \\ 1 & 12 & 8 & -1 & 0\end{array}\right)$$When I used EROs to reduce it to echelon form, I got$$\left(\begin{array}{ccccc}1 & -\frac{3}{7} & \frac{2}{7} & \frac{11}{7} & \frac{2}{7} \\ 0 & 1 & \frac{9}{4} & -\frac{5}{4} & 4 \\ 0 & 0 & 1 & -\frac{121}{189} & \frac{200}{81} \\ 0 & 0 & 0 & 0 & 0\end{array}\right)$$so a basis for $\langle S\rangle$ is $\left\{\left(1,-\frac{3}{7}, \frac{2}{7}, \frac{11}{7}, \frac{2}{7}\right),\left(0,1, \frac{9}{4},-\frac{5}{4}, 4\right),\left(0,0,1,-\frac{121}{189}, \frac{200}{81}\right)\right\}$. We know that these span $\langle S\rangle$ because EROs don’t change the row space, and we can immediately see that they are linearly independent by looking at the leading entries.

We could, if we wanted to, make the basis look nicer (have all the entries integers) by multiplying through; this has no effect on whether the set is a basis. So we also see that $\{(7, −3, 2, 11, 2),(0, 4, 9, −5, 16),(0, 0, 567, −363, 1400)\}$ is a basis.

*[In fact, when I chose the vectors, I picked the third to be a linear combination of the others. The first, second and fourth are also linearly independent, so another answer is that $\{(7, −3, 2, 11, 2),(0, 4, 9, −5, 16),(1, 12, 8, −1, 0)\}$ is also a basis of $\langle S\rangle$.]*
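Both claims can be confirmed directly: the four given vectors span only a 3-dimensional space, and the third row is a linear combination of the others. The coefficients $3,2,-1$ below were found by solving a small linear system and are offered as an illustration to verify, not as part of the original solution; the `rank` helper is again a self-contained exact Gaussian elimination.

```python
from fractions import Fraction

def rank(rows):
    # Row rank via exact Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

r1 = [7, -3, 2, 11, 2]
r2 = [0, 4, 9, -5, 16]
r3 = [20, -13, 16, 24, 38]
r4 = [1, 12, 8, -1, 0]

assert rank([r1, r2, r3, r4]) == 3
# The third vector is 3*r1 + 2*r2 - r4:
assert r3 == [3 * a + 2 * b - c for a, b, c in zip(r1, r2, r4)]
```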

**S2.** Let $U, W$ be subspaces of a finite-dimensional vector space $V$. The following are equivalent:

(i) $V=U \oplus W$;

(ii) every $v ∈ V$ has a unique expression as $u + w$ where $u ∈ U$ and $w ∈ W$;

(iii) $\dim V =\dim U +\dim W$ and $V = U + W$;

(iv) $\dim V =\dim U +\dim W$ and $U ∩ W =\{0_V\}$;

(v) if $u_1,\cdots,u_m$ is a basis for $U$ and $w_1,\cdots,w_n$ is a basis for $W$, then $u_1,\cdots, u_m, w_1,\cdots, w_n$ is a basis for $V$.

*[Note that proving the equivalence of (i) and (ii) is Q6(a) on this sheet]*

Since proving (i) ⇔ (ii) is Q6(a) on this sheet, I’m not going to write out a solution here.

We could go about proving all possible other implications, but that is more work than is necessary: we can strategically just prove a few.

(i) ⇒ (iii): Assume that $V=U⊕W$. Then $U ∩ W = \{0\}$, so $\dim(U ∩ W) = 0$. Also, $V = U + W$.

Hence, by the dimension formula, $\dim V =\dim(U + W) =\dim U +\dim W −\dim(U ∩ W) =\dim U +\dim W$.

(iii) ⇒ (iv): Assume that $\dim V =\dim U +\dim W$ and $V = U + W$. Then $\dim(U + W) =\dim U + \dim W$, so by the dimension formula $\dim(U ∩ W) = 0$, so $U ∩ W =\{0\}$.

(iv) ⇒ (i): Assume that $\dim V =\dim U +\dim W$ and $U∩W =\{0\}$. Then, by the dimension formula, $\dim(U + W) =\dim U +\dim W −\dim(U ∩ W) =\dim U+\dim W = \dim V$. So $U+W$ is a subspace of $V$ with the same dimension as $V$, so $U + W = V$. But since by assumption we also have $U ∩ W = \{0\}$, this gives $V = U ⊕ W$.

*[So now we know that (i), (iii) and (iv) are all equivalent.]*

(i) ⇒ (v): Assume that $V = U ⊕ W$. Take $u_1,\cdots, u_m$ a basis for $U$ and $w_1,\cdots, w_n$ a basis for $W$. Let $S = \{u_1,\cdots, u_m, w_1,\cdots,w_n\}$.

$S$ *spans* $V$: Take $v ∈ V$. Then $v = u+w$ for some $u ∈ U$ and $w ∈ W$. Now $u = α_1u_1+\cdots+α_mu_m$ and $w = β_1w_1 +\cdots+ β_nw_n$ for some $α_i, β_j ∈\mathbb F$, so $v$ is a linear combination of elements of $S$.

$S$ *is linearly independent*: Take $α_i, β_j ∈\mathbb F$ with $α_1u_1 + \cdots + α_mu_m + β_1w_1 + \cdots+ β_nw_n = 0$. Then $α_1u_1 +\cdots + α_mu_m = −β_1w_1 − \cdots − β_nw_n ∈ U∩W=\{0\}$, so both sides are $0$. But $u_1,\cdots, u_m$ are linearly independent, so $α_1 =\cdots= α_m = 0$, and similarly $β_1=\cdots=β_n=0$.

So $S$ is a basis for $V$.

(v) ⇒ (iii): Assume that if $u_1,\cdots,u_m$ is a basis for $U$ and $w_1,\cdots,w_n$ is a basis for $W$, then $u_1,\cdots, u_m, w_1,\cdots, w_n$ is a basis for $V$.

Then we see immediately that $\dim V =\dim U +\dim W$, and that $V = U + W$.

**S3.**Find an example to show that it is not the case that if $V = U ⊕ W$ then every basis of $V$ is a union of a basis of $U$ and a basis of $W$.

Here is one example; of course there are many possible examples that you might choose.

Let $V =\mathbb R^2$, let $U =\{(x,0) : x ∈\mathbb R\}$, let $W = \{(0,y) : y ∈\mathbb R\}$. Then $V = U ⊕ W$.

Consider the basis $\{(1,−1),(2,3)\}$ of $V$ — this is indeed a basis, because it is a set of two linearly independent elements in a vector space of dimension 2. Then neither basis vector is in $U$, and neither is in $W$, so neither can be a basis vector for $U$ or for $W$. So this basis of $V$ is not a union of a basis of $U$ and a basis of $W$.
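A tiny plain-Python check of the facts used here: the determinant test shows $\{(1,−1),(2,3)\}$ is linearly independent (hence a basis of $\mathbb R^2$), and neither vector lies in $U$ or $W$.

```python
v1, v2 = (1, -1), (2, 3)

# Two vectors in R^2 form a basis iff the 2x2 determinant is nonzero.
det = v1[0] * v2[1] - v1[1] * v2[0]
assert det == 5 and det != 0

# U = {(x, 0)}: second coordinate 0.  W = {(0, y)}: first coordinate 0.
for v in (v1, v2):
    assert v[1] != 0  # not in U
    assert v[0] != 0  # not in W
```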

**Pudding**

**P1**. Find the column rank of each of the matrices in Q1 on this sheet. What do you notice?

(a) The column rank is the dimension of the subspace of $\mathbb R^3$ spanned by the four columns. By taking the transpose, it is enough to find the row rank of the transpose.

We can use our usual technique of applying EROs to put the matrix in echelon form, and we find that the matrix reduces to$$\left(\begin{array}{ccc}1 & \frac{1}{2} & \frac{3}{2} \\ 0 & 1 & \frac{11}{6} \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{array}\right)$$so the row rank is 3, and hence the column rank of the original matrix is 3.

(b) Similarly, we use EROs on the transpose. We can reduce the transpose to$$\left(\begin{array}{lll}1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0\end{array}\right)$$and we find that the column rank is 2.

*[We could also have seen directly that the second and third columns are both linear combinations of the first and fourth columns, and the first and fourth columns are linearly independent.]*

(c) In this case, we can reduce the transpose to $\left(\begin{array}{lll}1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{array}\right)$ so the column rank is 3. Comparing with the row ranks we found in Q1, we can see that in each of these three cases the row rank is the same as the column rank. Intriguing...

But note that in each case the row space and column space are different: the row space is a subspace of $\mathbb R^4$, whereas the column space is a subspace of $\mathbb R^3$.
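The observation that row rank equals column rank for all three matrices can be confirmed at once in plain Python, computing the column rank of $A$ as the row rank of $A^{\rm T}$; the `rank` helper is a self-contained exact Gaussian elimination.

```python
from fractions import Fraction

def rank(rows):
    # Row rank via exact Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def transpose(A):
    return [list(col) for col in zip(*A)]

matrices = [
    [[2, 4, -3, 0], [1, -4, 3, 0], [3, -5, 2, 1]],   # (a)
    [[1, 2, 3, 0], [2, 3, 4, 1], [3, 4, 5, 2]],      # (b)
    [[1, 2, 3, 0], [2, 3, 4, 2], [3, 4, 5, 2]],      # (c)
]
ranks = [(rank(A), rank(transpose(A))) for A in matrices]
print(ranks)  # [(3, 3), (2, 2), (3, 3)]: row rank = column rank each time
```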

*[In lectures, we’ll explore whether in general there is a link between row rank and column rank.]*

**P2**. Taking $A$ as each of the three matrices in Q1 on this sheet, find all the solutions to $A\mathbf x=\mathbf 0$ in each case. What do you notice?

(a) We can use EROs to reduce the matrix to RRE form. In fact we have already put this matrix into RRE form on Sheet 2.

*[We normally use an augmented matrix when solving a system of simultaneous linear equations. In this case, the additional column in the matrix would have all entries 0, and these would not change when applying EROs, so we do not record them.]*

$$\left(\begin{array}{cccc}2 & 4 & -3 & 0 \\ 1 & -4 & 3 & 0 \\ 3 & -5 & 2 & 1\end{array}\right) \rightarrow\left(\begin{array}{cccc}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -\frac{3}{7} \\ 0 & 0 & 1 & -\frac{4}{7}\end{array}\right)$$We read off that the set of solutions to $A\mathbf x =\mathbf 0$ is $\langle(0\;3\;4\;7)^{\rm T}\rangle$.

(b) We use the same strategy again:$$\left(\begin{array}{cccc}1 & 2 & 3 & 0 \\ 2 & 3 & 4 & 1 \\ 3 & 4 & 5 & 2\end{array}\right) \rightarrow\left(\begin{array}{cccc}1 & 0 & -1 & 2 \\ 0 & 1 & 2 & -1 \\ 0 & 0 & 0 & 0\end{array}\right)$$The set of solutions to $A\mathbf x=\mathbf 0$ is $\left\langle\left(\begin{array}{llll}1 & -2 & 1 & 0\end{array}\right)^{\rm T},\left(\begin{array}{llll}-2 & 1 & 0 & 1\end{array}\right)^{\rm T}\right\rangle$.

We know from Q1 that the matrices in (a) and (c) have row rank 3, and in these two cases the set of solutions to $A\mathbf x=\mathbf 0$ is a space with dimension 1. But the matrix in (b) has row rank 2, and it looks as though the set of solutions to $A\mathbf x=\mathbf 0$ is somehow compensating for the smaller row rank, by being a space with dimension 2.
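This "compensation" pattern, dimension of the solution space $=$ (number of columns) $-$ (row rank), can be confirmed for all three matrices, along with the particular solutions found above. Plain Python with exact arithmetic; the `rank` helper is a self-contained Gaussian elimination, included for illustration.

```python
from fractions import Fraction

def rank(rows):
    # Row rank via exact Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A_a = [[2, 4, -3, 0], [1, -4, 3, 0], [3, -5, 2, 1]]
A_b = [[1, 2, 3, 0], [2, 3, 4, 1], [3, 4, 5, 2]]
A_c = [[1, 2, 3, 0], [2, 3, 4, 2], [3, 4, 5, 2]]

# The claimed solutions really do satisfy Ax = 0.
assert matvec(A_a, [0, 3, 4, 7]) == [0, 0, 0]
assert matvec(A_b, [1, -2, 1, 0]) == [0, 0, 0]
assert matvec(A_b, [-2, 1, 0, 1]) == [0, 0, 0]

# Pattern: dim(solution space) = (number of columns) - (row rank).
nullities = [4 - rank(A) for A in (A_a, A_b, A_c)]
print(nullities)  # [1, 2, 1]
```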

*[This relates to important ideas that we’ll explore much more in lectures later this term!]*

**P3**. Let $V = V_1 ⊕ V_2$. Take a subspace $U\le V$. Must it be true that $U = U_1 ⊕ U_2$ where $U_1\le V_1$ and $U_2\le V_2$? (Find a proof or give a counterexample.)

This is not true. Here is a counterexample.

Let $V=\mathbb R^2$, and let $V_1 = \langle(1\;0)\rangle =\{(x\;0) : x∈\mathbb R\}$ and $V_2 =\langle(0\;1)\rangle=\{(0\;y) : y ∈\mathbb R\}$. Then $V = V_1 + V_2$ and $V_1 ∩ V_2 =\{(0\;0)\}$, so $V=V_1⊕V_2$.

Consider $U=\langle(1\;1)\rangle=\{(z\;z):z∈\mathbb R\}$. This is certainly a subspace of $V$ (it is the span of a set of vectors, which guarantees that it is a subspace).

But we cannot write $U = U_1 ⊕ U_2$ where $U_1\le V_1$ and $U_2\le V_2$. Indeed, since $V_1$ and $V_2$ are 1-dimensional, their only subspaces are $\{(0\;0)\}$ and the whole space. But $U∩V_1=\{(0\;0)\}$ and similarly for $V_2$, so we cannot write $U$ in the desired form.