2018 Paper I
- [Throughout this question you may assume that every matrix may be put into reduced row echelon form.]
- [8 marks] What does it mean to say that a square matrix $A$ is
(i) invertible?
(ii) elementary, so that pre-multiplication by the matrix has the effect of an elementary row operation?
Briefly explain why an elementary matrix is invertible and deduce that every invertible matrix can be written as a product of elementary matrices.
- [8 marks] Let $A$ be an $n × n$ matrix. Show that the following are equivalent.
(i) $A$ is invertible.
(ii) The only row vector $\mathbf{x} ∈\mathbb{R}^{n}$ which solves $\mathbf{x} A=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$.
(iii) The rows of $A$ are linearly independent.
- [4 marks] Let$$A=\left(\begin{array}{lll}a & a & b \\a & b & a \\b & a & a\end{array}\right),$$where $a$ and $b$ are distinct real numbers. Without making any reference to determinants, determine for what values of $a$ and $b$ the matrix $A$ is invertible.
Solution.
- (i) An $n × n$ matrix $A$ is invertible if there exists an $n × n$ matrix $B$ such that $A B=I_{n}=B A$.
(ii) An elementary matrix is one obtained by applying a single ERO to the identity matrix. EROs come in one of three forms:
A Swap two rows.
B Multiply a row by a non-zero scalar $\lambda$.
C Add a multiple of one row to another row.
Each elementary matrix is invertible as each ERO has an ERO as its inverse: (A) is self-inverse; (B) is inverted by multiplying the same row by $1 / \lambda$; (C) is inverted by subtracting the same multiple of the same row.
An invertible matrix $A$ reduces to an invertible matrix and the only invertible $n × n$ matrix in RRE form is $I_{n}$. So$$E_{k} ⋯ E_{1} A=I_{n}$$for elementary matrices $E_{1}, \ldots, E_{k}$, and hence $A=E_{1}^{-1} ⋯ E_{k}^{-1}$, a product of elementary matrices (each $E_{i}^{-1}$ being elementary), with $A^{-1}=E_{k} ⋯ E_{1}$.
- (i)⇒(ii) Say that $A$ is invertible with $\mathbf{x} A=\mathbf{0}$. Postmultiplying by $A^{-1}$ we find $\mathbf{x}=\mathbf{0}$.
(ii)⇒(iii) Say that $A$ has rows $\mathbf{r}_1, \ldots, \mathbf{r}_n$ and suppose that$$α_1\mathbf{r}_1+α_2\mathbf{r}_2+⋯+α_n\mathbf{r}_n=\mathbf0.$$This can be rewritten as$$\left(α_{1}, α_2, \ldots, α_n\right) A=\mathbf{0} .$$Assuming (ii) we find each $α_{i}=0$ and hence the rows are linearly independent.
(iii)⇒(i) Say that the rows of $A$ are linearly independent. As the presence of a zero row in the RRE form of a matrix means that some non-trivial linear combination of the rows equals $\mathbf{0}$, and so implies a linear dependency, the RRE form of $A$ is $I_{n}$. Hence $E_{k} ⋯ E_{1} A=I_{n}$ for elementary matrices $E_{1}, \ldots, E_{k}$ and $A^{-1}=E_{k} ⋯ E_{1}$.
- Row-reducing the matrix, and assuming for now that $a \neq 0$, we find$$\left(\begin{array}{lll}a & a & b \\a & b & a \\b & a & a\end{array}\right) \longrightarrow\left(\begin{array}{ccc}1 & 1 & b / a \\1 & b / a & 1 \\b / a & 1 & 1\end{array}\right) \longrightarrow\left(\begin{array}{ccc}1 & 1 & b / a \\0 & b / a-1 & 1-b / a \\0 & 1-b / a & 1-b^{2} / a^{2}\end{array}\right)$$The rows of the last matrix are independent unless the second and third rows are multiples of one another, which occurs when$$\left(\frac{b}{a}-1\right)\left(1-\frac{b^{2}}{a^{2}}\right)=\left(1-\frac{b}{a}\right)^{2}.$$Writing the left-hand side as $-(1-b/a)^{2}(1+b/a)$ and cancelling the non-zero factor $(1-b/a)^{2}$ (non-zero as $a \neq b$) leaves $-(1+b/a)=1$, i.e. $2a+b=0$; this is the condition for singularity. If $a=0$ then $b \neq 0$ (as $a \neq b$) and the rows $(0,0,b),(0,b,0),(b,0,0)$ are visibly independent, so the matrix is invertible; here too $2a+b=b \neq 0$. The required condition for invertibility is that $2 a+b \neq 0$.
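As an optional sanity check (a sketch assuming sympy is available, and stepping outside the question's no-determinants constraint purely for verification), factoring the determinant confirms the condition found above:
```python
from sympy import Matrix, symbols, factor

a, b = symbols('a b', real=True)
A = Matrix([[a, a, b], [a, b, a], [b, a, a]])

# A is singular exactly when det(A) = 0; factoring exposes the condition.
print(factor(A.det()))  # -(a - b)**2*(2*a + b), up to ordering of factors
```
Since $a \neq b$, the factor $(a-b)^{2}$ is non-zero, so $A$ is singular precisely when $2a+b=0$, in agreement with the row-reduction argument.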
- [6 marks] Let $T: V → V$ be a linear map of a real finite-dimensional vector space $V$.
(i) State the Rank-Nullity Theorem.
(ii) By considering the restriction of $T$ to the image of $T$, or otherwise, show that$$\operatorname{nullity}\left(T^{2}\right) ⩽ 2\operatorname{nullity}(T).$$
- [7 marks] Let$$A=\left(\begin{array}{lll}1 & 2 & 3 \\2 & 3 & 4 \\3 & 4 & 5\end{array}\right)$$
(i) Find the rank and nullity of $A$.
(ii) Show that $(x, y, z)^{T}$ is in the image of $A$ if and only if $x+z=2 y$.
(iii) Find a basis for the kernel of $A$.
- [7 marks] For each of the cases described below, either find a $3×3$ square matrix $M$ with the given properties or show that no such matrix exists. Briefly justify your answers.
(i) $\operatorname{rank}(M)=2, \operatorname{rank}\left(M^{2}\right)=1, \operatorname{rank}\left(M^{3}\right)=0$.
(ii) $\operatorname{rank}(M)=2, \operatorname{rank}\left(M^{2}\right)=0, \operatorname{rank}\left(M^{3}\right)=0$.
(iii) $\operatorname{rank}(M)=3, \operatorname{rank}\left(M^{2}\right)=3, \operatorname{rank}\left(M^{3}\right)=2$.
Solution.
- (i) $\operatorname{dim} V=\operatorname{dim} \operatorname{ker}(T)+\operatorname{dim} \operatorname{Im}(T)$.
(ii) By applying the Rank-Nullity Theorem to the restriction of $T$ to Im $T$ we find $\operatorname{dim}(\operatorname{Im} T)=\operatorname{dim} \operatorname{ker}\left(T|_{\operatorname{Im} T}\right)+\operatorname{dim} \operatorname{Im}\left(T|_{\operatorname{Im} T}\right)=\operatorname{dim}(\operatorname{ker} T \cap \operatorname{Im} T)+\operatorname{dim} \operatorname{Im}\left(T^{2}\right) .$
Hence$$(\operatorname{dim} V-\operatorname{nullity}(T))-\left(\operatorname{dim} V-\operatorname{nullity}\left(T^{2}\right)\right)=\operatorname{dim}(\operatorname{ker} T \cap \operatorname{Im} T) ⩽ \operatorname{nullity}(T),$$that is, $\operatorname{nullity}\left(T^{2}\right)-\operatorname{nullity}(T) ⩽ \operatorname{nullity}(T)$, and the result follows.
- (i) As the column space of $A$ is spanned by $(1,2,3)^{T}$ and $(1,1,1)^{T}$, the rank is $2$ and the nullity is $3-2=1$.
(ii) The system $(A \mid \mathbf{x})$ reduces as$$\left(\begin{array}{ccc|c}1 & 2 & 3 & x \\2 & 3 & 4 & y \\3 & 4 & 5 & z\end{array}\right) \longrightarrow\left(\begin{array}{ccc|c}1 & 2 & 3 & x \\0 & -1 & -2 & y-2 x \\0 & -2 & -4 & z-3 x\end{array}\right) \longrightarrow\left(\begin{array}{ccc|c}1 & 2 & 3 & x \\0 & 1 & 2 & 2 x-y \\0 & 0 & 0 & x-2 y+z\end{array}\right)$$and so the system is consistent if and only if $x+z=2 y$.
(iii) When $\mathbf{x}=\mathbf{0}$ we can read off the general solution as $y=-2z$ and $x=z$, so that $(1,-2,1)^T$ is a basis for the kernel.
- (i) Such an $M$ is$$M=\left(\begin{array}{lll}0 & 1 & 0 \\0 & 0 & 1 \\0 & 0 & 0\end{array}\right)$$as the image of $M$ is the $xy$-plane, the image of $M^{2}$ is the $x$-axis and $M^{3}=0$.
(ii) In this case we have $\operatorname{nullity}(M)=1$ and $\operatorname{nullity}(M^2)=3$ which contradicts (a)(ii).
(iii) In this case $M$ is invertible, having full rank, and hence so is $M^{3}$ and hence (iii) is impossible.
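The computations in parts (b) and (c)(i) can be cross-checked quickly (an optional sketch, assuming sympy):
```python
from sympy import Matrix

A = Matrix([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(A.rank())        # 2, so nullity = 3 - 2 = 1
print(A.nullspace())   # [Matrix([[1], [-2], [1]])], matching the basis above
print((A**2).rank())   # 2, so nullity(A^2) = 1 <= 2*nullity(A), as in (a)(ii)

# The shift matrix from (c)(i): its powers have ranks 2, 1, 0.
M = Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
print([(M**k).rank() for k in (1, 2, 3)])  # [2, 1, 0]
```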
- [7 marks] Let $V$ be a real vector space.
(i) What is meant by an inner product $⟨,⟩$ on $V$ ?
(ii) Let $U$ be a subspace of $V$. Show that$$U^{⊥}=\{v ∈V:⟨ v, u⟩=0 \quad \text { for all } u ∈U\}$$is a subspace of $V$.
- [7 marks] Let $Z$ be a real vector space and let $X, Y$ be subspaces of $Z$ such that $Z=X ⊕ Y$, so every $z ∈Z$ can be uniquely written as $z=x+y$ where $x ∈X$ and $y ∈Y$.
Let $⟨,⟩_{X}$ and $⟨,⟩_{Y}$ be inner products on $X$ and $Y$ respectively.
(i) Show that $⟨,⟩_{⊕}$, defined for $x_{1}, x_{2} ∈X, y_{1}, y_{2} ∈Y$ by$$\left< x_{1}+y_{1}, x_{2}+y_{2}\right>_{⊕}=\left< x_{1}, x_{2}\right>_{X}+\left< y_{1}, y_{2}\right>_{Y}$$is an inner product on $Z$.
(ii) Show that $X^{⊥}=Y$.
- [6 marks] Let $Z=C[-1,1]$ denote the vector space of continuous real-valued functions on $[-1,1]$ with inner product$$⟨ f, g⟩_{Z}=\int_{-1}^{1} f(x) g(x) \mathrm{d} x .$$[You are not required to show that $⟨,⟩_{Z}$ is an inner product.]
(i) Let$$X=\{f ∈Z: f(-x)=f(x) \text { for all } x\} ; \quad Y=\{f ∈Z: f(-x)=-f(x) \text { for all } x\}$$Show that $Z=X ⊕ Y$.
(ii) $⟨,⟩_{Z}$ restricts to inner products $⟨,⟩_{X}$ and $⟨,⟩_{Y}$ on $X$ and $Y$ respectively. With $⟨,⟩_⊕$ as defined in part (b)(i), show that$$⟨ f, g⟩_{⊕}=⟨ f, g⟩_{Z} \quad \text { for all } f, g ∈Z$$
Solution.
- (i) We say that $⟨,⟩: V × V → \mathbb{R}$ is an inner product if
- $\left<α_{1} v_{1}+α_{2} v_{2}, w\right>=α_{1}\left< v_{1}, w\right>+α_{2}\left< v_{2}, w\right>$ for all $α_{1}, α_{2} ∈\mathbb{R}, v_{1}, v_{2}, w ∈V$;
- $⟨ v, w⟩=⟨ w, v⟩$ for all $v, w ∈V$;
- $⟨ v, v⟩ ⩾ 0$ and $⟨ v, v⟩=0$ if and only if $v=0$.
(ii) Note that $0 ∈U^{⊥}$, and if $v_{1}, v_{2} ∈U^{⊥}$, $α_{1}, α_{2} ∈\mathbb{R}$ and $u ∈U$ then$$\left<α_{1} v_{1}+α_{2} v_{2}, u\right>=α_{1}\left< v_{1}, u\right>+α_{2}\left< v_{2}, u\right>=0 .$$Hence $U^{⊥}$ is closed under linear combinations and so is a subspace.
- (i) $⟨,⟩_{⊕}$ is linear in the first variable as $⟨,⟩_{X}$ and $⟨,⟩_{Y}$ are and as the decomposition $z=x+y$ depends linearly on $z$. Likewise $⟨,⟩_{⊕}$ is symmetric. Finally$$⟨ x+y, x+y⟩_{⊕}=⟨ x, x⟩_{X}+⟨ y, y⟩_{Y} ⩾ 0$$with equality if and only if $x=0=y$, that is, if and only if $x+y=0$.
(ii) Given $z_{0}=x_{0}+y_{0}$ then\begin{aligned}\left< z_{0}, x\right>_{⊕}=0 \quad \text { for all } x ∈X & ⇔\left< x_{0}, x\right>_{X}+\left< y_{0}, 0\right>_{Y}=0 \quad \text { for all } x ∈X \\& ⇔\left< x_{0}, x\right>_{X}=0 \quad \text { for all } x ∈X \\& ⇔ x_{0}=0 \\& ⇔ z_{0} ∈Y .\end{aligned}
- (i) For any $f ∈Z$ note that$$f(x)=\frac{f(x)+f(-x)}2+\frac{f(x)-f(-x)}2$$with the former summand being even and the latter odd. Hence $Z=X+Y$, and the sum is in fact direct as the only function which is both even and odd is the zero function.
(ii) If we write $f=f_{e}+f_{o}$ for the decomposition of $f$ associated with this direct sum then we have\begin{aligned}⟨ f, g⟩_{Z} &=\int_{-1}^{1}\left(f_{e}+f_{o}\right)\left(g_{e}+g_{o}\right) \mathrm{d} x \\&=\int_{-1}^{1}\left(f_{e} g_{e}+f_{o} g_{o}\right) \mathrm{d} x \quad \text { as } f_{e} g_{o} \text { and } f_{o} g_{e} \text { are odd } \\&=⟨ f, g⟩_{⊕}\end{aligned}
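For a concrete illustration of part (c)(ii) (an optional sympy sketch; the sample functions $f=e^{x}$ and $g=x+\sin x+1$ are arbitrary choices, not from the question):
```python
from sympy import symbols, integrate, exp, sin, simplify

x = symbols('x')

def even_odd(f):
    """Split f into its even and odd parts, as in part (c)(i)."""
    return (f + f.subs(x, -x)) / 2, (f - f.subs(x, -x)) / 2

f, g = exp(x), x + sin(x) + 1
fe, fo = even_odd(f)
ge, go = even_odd(g)

lhs = integrate(f * g, (x, -1, 1))              # <f, g>_Z
rhs = integrate(fe * ge + fo * go, (x, -1, 1))  # <f, g> under the direct sum
print(simplify(lhs - rhs))  # 0
```
The cross terms $f_{e} g_{o}$ and $f_{o} g_{e}$ are odd and so integrate to zero over $[-1,1]$, exactly as in the displayed calculation.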
- [8 marks] Let $T:V→V$ be a linear map of a real finite-dimensional vector space $V$. What does it mean to say that:
(i) $v ∈V$ is an eigenvector of $T$?
(ii) $T$ is diagonalizable?
Let $M$ be a matrix representing $T$ with respect to some basis for $V$. Show that $T$ is diagonalizable if and only if there exists an invertible matrix $P$ such that $P^{-1} M P$ is a diagonal matrix.
- [8 marks] Let$$A=\left(\begin{array}{lll}0 & 1 & 0 \\0 & 0 & 1 \\1 & 0 & 0\end{array}\right)$$
(i) Show that $A$ is not diagonalizable (over $\mathbb{R}$).
(ii) Show that the subspace$$X=\left\{(x, y, z)^T: x+y+z=0\right\}$$is $A$-invariant — that is, if $\mathbf{v} ∈X$ then $A \mathbf{v}∈X$.
(iii) Hence find an invertible real matrix $P$ such that$$
P^{-1} A P=\left(\begin{array}{lll}1 & 0 & 0 \\0 & a & b \\0 & c & d\end{array}\right)$$for some $a, b, c, d$. [You are not required to determine $a, b, c, d$ explicitly.]
- [4 marks] With $A$ as in part (b), let $Q$ be an invertible real matrix such that$$Q^{-1} A Q=\left(\begin{array}{ccc}1 & 0 & 0 \\0 & b_{11} & b_{12} \\0 & b_{21} & b_{22}\end{array}\right)$$for some $2 × 2$ matrix $B=\left(b_{i j}\right)$. Show that $B^2+B+I=0$. [Hint: note that $A^3=I$.]
Solution.
- (i) We say that $v \neq 0$ is an eigenvector of $T$ if $T v=\lambda v$ for some scalar $\lambda$.
(ii) $T$ is diagonalizable if $V$ has a basis of eigenvectors of $T$.
Let $M$ be a matrix representative of $T$. Say that $V$ has a basis $v_{1}, \ldots, v_{n}$ of eigenvectors of $T$ and make their co-ordinate vectors the columns of a matrix $P$. Then $P$ is invertible as its columns are independent. Further we have that$$M P=M\left(v_{1}|\ldots| v_{n}\right)=\left(\lambda_{1} v_{1}|\ldots| \lambda_{n} v_{n}\right)=PD$$where $D=\operatorname{diag}\left(\lambda_{1}, \ldots, \lambda_{n}\right)$, so that $P^{-1} M P=D$ is diagonal. Conversely, say that $P^{-1} M P$ is diagonal for some invertible matrix $P$. The columns of $P$ then form a basis and, by the same argument as above, are co-ordinate vectors of a basis of eigenvectors.
- (i) The characteristic polynomial of $A$ equals$$\left|\begin{array}{ccc}-x & 1 & 0 \\0 & -x & 1 \\1 & 0 & -x\end{array}\right|=1-x^{3}$$which has only one real root, $x=1$. A matrix diagonalizable over $\mathbb{R}$ has a characteristic polynomial that factorizes into real linear factors; as the other roots here are not real, $A$ is not diagonalizable over $\mathbb{R}$.
(ii) If $(x, y, z)^{T}$ satisfies $x+y+z=0$ then $A(x, y, z)^{T}=(y, z, x)^{T}$ satisfies $y+z+x=0$ as well. Hence $X$ is invariant.
(iii) For $P^{-1} A P$ to have the required form, the first column of $P$ must be a $1$-eigenvector, so we take $(1,1,1)^{T}$ as the first column. We must also have that the second and third columns of $P$ span a $2$-dimensional $A$-invariant subspace; by part (ii) we can take $(1,-1,0)^{T}$ and $(0,1,-1)^{T}$, a basis for $X$.
- As argued above, the first column of $Q$ must be a $1$-eigenvector, and the second and third columns must be a basis for a $2$-dimensional $A$-invariant subspace $W$.
As $A^{3}=I$ then $\left(Q^{-1} A Q\right)^3=I$ and so $B^3=I$.
So for any co-ordinate vector $\mathbf{v}$ (with respect to the chosen basis of $W$) we have $(B-I)\left(B^{2}+B+I\right) \mathbf{v}=\mathbf{0}$. But $B-I$ has trivial kernel: a non-zero vector in its kernel would give a $1$-eigenvector of $A$ lying in $W$, whereas the $1$-eigenspace of $A$ is one-dimensional, spanned by $(1,1,1)^{T}$, which is (up to scaling) the first column of $Q$ and so does not lie in $W$. Hence $\left(B^{2}+B+I\right) \mathbf{v}=\mathbf{0}$ for every $\mathbf{v}$, giving $B^{2}+B+I=0$.
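Both claims can be verified numerically with the $P$ chosen above (an optional sympy sketch):
```python
from sympy import Matrix, eye, zeros

A = Matrix([[0, 1, 0], [0, 0, 1], [1, 0, 0]])

# Columns: a 1-eigenvector, then a basis for X = {x + y + z = 0}.
P = Matrix([[1, 1, 0], [1, -1, 1], [1, 0, -1]])
C = P.inv() * A * P
print(C)  # first row and column are (1, 0, 0): the required block form

B = C[1:, 1:]  # the 2x2 block
print(B**2 + B + eye(2) == zeros(2, 2))  # True
```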
- Throughout this question, $G$ is a group of order 9.
(a) [5 marks] (i) State Lagrange's Theorem.
(ii) What are the possible orders of an element $g ∈G$? Justify your answer briefly.
(iii) If $G$ contains an element of order 9, show that $G \cong C_{9}$.
(b) [15 marks] Suppose for the rest of the question that $G$ contains no element of order 9.
(i) If $H$ is a subgroup of $G$ and $g ∈G$, define the left coset $g H$.
(ii) Let $H=⟨ h⟩$ be a subgroup of order 3 and let $g ∈G$ with $g \notin H$. Show that$$
G=H \cup g H \cup g^{2} H=\left\{g^ih^j: 0 ⩽ i ⩽ 2,0 ⩽ j ⩽ 2\right\} .
$$
(iii) Show that if $G$ is Abelian, then $G \cong C_3×C_3$.
(iv) Let $g, h$ be as in part (ii). By considering the possible values of $h g$, or otherwise, show that there are, up to isomorphism, exactly two groups of order 9 .
Solution.
- (i) Lagrange's Theorem: if $H \leqslant G$ then $|H|$ divides $|G|$.
(ii) If $g$ is an element of order $k$ then $\left\{e, g, \ldots, g^{k-1}\right\}$ is a subgroup so by Lagrange $k$ divides $|G|$. So here the possible orders are $1,3,9$.
(iii) Suppose $o(g)=9$. Then $\left\{g^{i}: 0 \leqslant i \leqslant 8\right\}$ are distinct and so comprise all nine elements of $G$, and mapping a generator of $C_{9}$ to $g$ gives an isomorphism.
- (i) $g H=\{g h: h \in H\}$.
(ii) Note first that $g$ has order 3, since $g \neq e$ and there is no element of order 9. If two cosets $g^{i} H$ and $g^{j} H$ with $0 \leqslant i, j \leqslant 2$ intersect, then $g^{i-j} \in H$, so $i=j$ (since $g \notin H$ and hence $g^{2}=g^{-1} \notin H$). So the three cosets are disjoint and each has three elements, so between them they cover all $9$ elements of $G$; listing their elements gives $G=\left\{g^{i} h^{j}: 0 ⩽ i ⩽ 2,0 ⩽ j ⩽ 2\right\}$.
(iii) Pick $H$ and $g$ as above (possible because every non-identity element has order 3, there being no element of order 9). Then $G=\left\{g^{i} h^{j}\right\}$ with all nine products distinct. Since $g$ and $h$ commute and have order 3, the map $(i, j) \mapsto g^{i} h^{j}$ gives an isomorphism $C_{3} \times C_{3} \cong G$.
(iv) As above, if $g$ and $h$ commute, the group is Abelian and hence $C_{3} \times C_{3}$. So assume $h g \neq g h$.
If $h g=g^{i}$ then $h=g^{i-1} \in\langle g\rangle$, so $H=\langle h\rangle=\langle g\rangle \ni g$ (both subgroups having order 3), a contradiction; similarly $h g=h^{i}$ would give $g \in H$. So $h g \notin\left\{e, g, g^{2}, h, h^{2}\right\}$.
This leaves the cases $h g \in\left\{g^{2} h, g h^{2}, g^{2} h^{2}\right\}$.
Suppose $h g=g^{2} h$. Then, for example, $(g h)^{2}=g(h g) h=g^{3} h^{2}=h^{2}$; as every element of $G$ cubes to $e$, raising both sides to the fourth power gives $g h=h$, so $g=e$, a contradiction. The case $h g=g h^{2}$ is analogous. And $h g=g^{2} h^{2}$ would imply $(h g)^{2}=g^{2} h^{2} h g=g^{2} h^{3} g=g^{3}=e$, so $h g=e$ (orders being 1 or 3) and $g=h^{-1} \in H$, a contradiction. So $g$ and $h$ commute after all, $G$ is Abelian, and by part (iii) $G \cong C_{3} \times C_{3}$. Hence there are exactly two groups of order 9 up to isomorphism, $C_{9}$ and $C_3 \times C_3$, which are not isomorphic as only $C_{9}$ has an element of order 9.
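The case analysis can also be cross-checked computationally. A sketch using sympy's finitely presented groups (purely illustrative, and assuming the `FpGroup` coset-enumeration machinery) shows that imposing, say, $hg=g^{2}h$ on two order-3 generators collapses the group, so no group of order 9 realizes that relation:
```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, g, h = free_group("g, h")

# <g, h | g^3 = h^3 = e, hg = g^2 h>: the relation forces a collapse.
G = FpGroup(F, [g**3, h**3, h*g*h**-1*g**-2])
print(G.order())  # 3, not 9

# The commuting presentation does give a group of order 9 (C3 x C3).
G2 = FpGroup(F, [g**3, h**3, h*g*h**-1*g**-1])
print(G2.order())  # 9
```
Indeed $hgh^{-1}=g^{2}$ together with $h^{3}=e$ forces $g=g^{8}=g^{2}$, so $g=e$ and the group is just $\langle h\rangle \cong C_{3}$, matching the contradiction derived above.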
- (a) [6 marks] Let $G$ be a group and let $Z$ be its centre, defined as the set of $z ∈G$ such that $g z=z g$ for all $g ∈G$. Show that $Z$ is a normal subgroup of $G$.
(b) [8 marks] Now let $G=D_{8}$ be the symmetry group of a square. Find the centre $Z$ in this case. By explicitly listing the cosets of $Z$, or otherwise, show that $D_{8} / Z$ is isomorphic to $C_{2} × C_2$.
(c) [6 marks] For $i=1,2$ let $ϕ_i:G_i→H_i$ be a homomorphism between finite groups. Suppose that $\operatorname{ker}\left(ϕ_{1}\right) \cong \operatorname{ker}\left(ϕ_{2}\right)$ and $\operatorname{im}\left(ϕ_{1}\right) \cong \operatorname{im}(ϕ_2)$. Is it necessarily the case that
(i) $\left|G_{1}\right|=\left|G_{2}\right|$ ?
(ii) $G_{1}$ and $G_{2}$ are isomorphic?
For each of (i) and (ii), either give a proof or give a counterexample.
Solution.
- To check that $Z$ is a subgroup, suppose $x, y \in Z$ and $g \in G$. Then $g x y=x g y=x y g$; since this holds for all $g$, we have $x y \in Z$. Also $e \in Z$, and if $x \in Z$ then pre- and post-multiplying $x g=g x$ by $x^{-1}$ gives $g x^{-1}=x^{-1} g$, so $x^{-1} \in Z$. Thus $Z$ is a subgroup.
$Z$ is normal since $g Z=Z g$ element-wise and thus as a set.
- Writing $D_{8}=\left\{e, r, r^2, r^3, s, r s, r^2s, r^3 s\right\}$ as usual, with $r$ rotation by $\pi / 2$ and $s$ reflection in an axis, we see that $r^{2}$ (rotation by $\pi$, which acts as multiplication by $-1$) commutes with everything. No reflection commutes with $r$ or $r^{3}$, which rules both the reflections and $r, r^{3}$ out of the centre, so $Z=\left\{e, r^2\right\}$.
A pedestrian way to find the quotient is to list the cosets $E=\left\{e, r^{2}\right\}$, $B=\left\{r, r^{3}\right\}$, $C=\left\{s, r^{2} s\right\}$, $D=\left\{r s, r^{3} s\right\}$. In the quotient $E$ is the identity and $B^{2}=C^{2}=D^{2}=E$ (one need only square the first element of each coset to see this). This is already enough: a group of order 4 in which every element squares to the identity is $C_{2} × C_{2}$. It is also quick to check that $B C=D$ etc.
- (i) Yes. By the first isomorphism theorem we know that $G_{i} / \operatorname{ker}\left(\phi_{i}\right) \cong \operatorname{im}\left(\phi_{i}\right)$ and in particular that $\operatorname{ker}\left(\phi_{i}\right)$ has $\left|\operatorname{im}\left(\phi_{i}\right)\right|$ cosets in $G_{i}$, so $\left|G_{i}\right|=\left|\operatorname{ker}\left(\phi_{i}\right)\right|\left|\operatorname{im}\left(\phi_{i}\right)\right|$; as the right-hand side is the same for $i=1,2$, we get $\left|G_{1}\right|=\left|G_{2}\right|$.
(ii) No. Consider $C_{2}^{3}$ and the natural projection to $C_{2}^{2}$. The kernel is isomorphic to $C_{2}$, as is the kernel of the quotient map $D_{8} \to D_{8} / Z$. The images are isomorphic by part (b), but $D_{8}$ and $C_{2}^{3}$ are not, as only one is Abelian.
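Both claims about $D_{8}$ in part (b) can be double-checked with sympy's permutation groups (an optional sketch; note sympy's `DihedralGroup(4)` is its name for the order-8 symmetry group of the square):
```python
from sympy.combinatorics.named_groups import DihedralGroup

G = DihedralGroup(4)  # |G| = 8
Z = G.center()
print(Z.order())  # 2, generated by rotation by pi

# Every element squares into Z, so the quotient has order 4 with every
# element of order at most 2 -- forcing C2 x C2, as argued above.
print(all(Z.contains(x**2) for x in G.elements))  # True
```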
- (a) [10 marks] State and prove the Orbit-Stabilizer Theorem.
(b) [10 marks] Let $G$ be a finite group and let $H$ and $K$ be two subgroups of $G$. Let$$H K=\{h k: h ∈H, k ∈K\} .$$(i) Is $H K$ necessarily a subgroup of $G$ ? Give a proof or a counterexample.
(ii) The group $H×K$ acts on $H K$ by $(h, k)·x=h x k^{-1}$. Prove that this is indeed a group action.
(iii) Show that the orbit of $e$ equals $H K$ and find the stabilizer of $e$.
(iv) Deduce that $|H K|=\frac{|H||K|}{|H \cap K|}$.
Solution.
- Orbit-Stabilizer Theorem: let a finite group $G$ act on a set $S$; then $|G|=\lvert\operatorname{Stab}(s)\rvert\lvert\operatorname{Orb}(s)\rvert$ for any $s \in S$.
We consider the map $\phi: G / \operatorname{Stab}(s) \rightarrow \operatorname{Orb}(s)$ defined by $\phi(g \operatorname{Stab}(s))=g \cdot s$. This map is well-defined and one-to-one since $g \operatorname{Stab}(s)=h \operatorname{Stab}(s)$ if and only if $g^{-1} h \in \operatorname{Stab}(s)$, which is equivalent to $g^{-1} h \cdot s=s$, that is, to $h \cdot s=g \cdot s$. It is also clearly onto. This proves that $|\operatorname{Orb}(s)|=|G / \operatorname{Stab}(s)|$, which equals $|G| /|\operatorname{Stab}(s)|$ by Lagrange's Theorem.
- (i) Not necessarily. Let $G=S_{3}$ and let $H$ and $K$ be two different order-2 subgroups. Then $|H K|=4$, which does not divide $\left|S_{3}\right|=6$, so by Lagrange's Theorem $HK$ cannot be a subgroup.
(ii) It is clear that $(e, e) \cdot x=e x e^{-1}=x$. It is also clear that $\left(h_{2}, k_{2}\right) \cdot\left(\left(h_{1}, k_{1}\right) \cdot x\right)=h_{2} h_{1} x k_{1}^{-1} k_{2}^{-1}=\left(h_{2} h_{1}\right) x\left(k_{2} k_{1}\right)^{-1}=\left(h_{2} h_{1}, k_{2} k_{1}\right) \cdot x=\left(\left(h_{2}, k_{2}\right)\left(h_{1}, k_{1}\right)\right) \cdot x .$
This proves that it is an action.
(iii) The orbit is the set of elements of the form $(h, k) \cdot e=h k^{-1}$ for any $h \in H$ and $k \in K$.
Since $K$ is a group, this is the same as $H K$.
The stabilizer is the set of all $(h, k)$ such that $(h, k) \cdot e=h k^{-1}=e$, which is equivalent to $h=k$. So the stabilizer consists of the elements $(g, g)$ where $g$ is any element of $H \cap K$.
(iv) Applying the Orbit-Stabilizer Theorem to $e$ we have $|H \times K|=\lvert\operatorname{Stab}(e)\rvert\lvert\operatorname{Orb}(e)\rvert$. Clearly $|H \times K|=|H||K|$, and by the previous part $|\operatorname{Orb}(e)|=|H K|$ and $|\operatorname{Stab}(e)|=|H \cap K|$. Hence $|H||K|=|H \cap K||H K|$, which rearranges to the required formula.
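Finally, a small sympy sketch (illustrative only; the choice of $H=\langle(0\ 1)\rangle$ and $K=\langle(0\ 2)\rangle$ inside $S_{3}$ is one instance of the counterexample in (b)(i)) checks both that counterexample and the formula in (b)(iv):
```python
from sympy.combinatorics import Permutation, PermutationGroup

# Two different order-2 subgroups of S3, acting on {0, 1, 2}.
H = PermutationGroup(Permutation(0, 1, size=3))
K = PermutationGroup(Permutation(0, 2, size=3))

HK = {h * k for h in H.elements for k in K.elements}
meet = {h for h in H.elements if K.contains(h)}  # H intersect K = {e}

print(len(HK))  # 4 -- does not divide 6, so HK is not a subgroup
print(len(HK) == H.order() * K.order() // len(meet))  # True: 4 = 2*2/1
```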