- The spectral theorem for real matrices says “Any real symmetric matrix is orthogonally diagonalizable”.
(i) Give an example of a matrix $A$ that is diagonalizable but not orthogonally diagonalizable (therefore it is not symmetric).
Solution.
Pick any diagonalizable matrix with simple eigenvalues whose corresponding eigenvectors are not orthogonal.
For example, $\begin{pmatrix}2&-1\\0&1\end{pmatrix}$ has $(1,1)$ and $(1,0)$ as eigenvectors for the eigenvalues $1$ and $2$.
(ii) For an orthogonal matrix, eigenvectors corresponding to distinct eigenvalues are orthogonal. Show that this does not hold for matrices in general.
Solution.
For example, $\begin{pmatrix}2&-1\\0&1\end{pmatrix}$ has $(1,1)$ and $(1,0)$ as eigenvectors for the eigenvalues $1$ and $2$, and $\langle(1,1),(1,0)\rangle=1\neq 0$.
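A quick numerical check of this example (a sketch; assumes NumPy is available):

```python
# Verify the claimed eigenpairs of the example matrix and that the
# eigenvectors are not orthogonal.
import numpy as np

A = np.array([[2.0, -1.0],
              [0.0,  1.0]])

v1 = np.array([1.0, 1.0])  # claimed eigenvector for eigenvalue 1
v2 = np.array([1.0, 0.0])  # claimed eigenvector for eigenvalue 2

print(A @ v1)          # [1. 1.]  -> A v1 = 1 * v1
print(A @ v2)          # [2. 0.]  -> A v2 = 2 * v2
print(np.dot(v1, v2))  # 1.0      -> the eigenvectors are not orthogonal
```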
- Commuting matrices do not necessarily share all eigenvectors, but they always share at least one common eigenvector.
Proof. (https://math.stackexchange.com/q ... e-same-eigenvectors)
Let $A,B\in\mathbb{C}^{n\times n}$ be such that $AB=BA$. There is always a nonzero subspace of $\mathbb{C}^n$ which is both $A$-invariant and $B$-invariant (namely $\mathbb{C}^n$ itself). Among all such subspaces, there hence exists one, $\mathcal{S}$, of minimal (nonzero) dimension.
We show that every nonzero vector of $\mathcal{S}$ is a common eigenvector of $A$ and $B$.
Assume, toward a contradiction, that there is a nonzero $y\in \mathcal{S}$ which is not an eigenvector of, say, $A$. Since $\mathcal{S}$ is $A$-invariant, it contains some eigenvector $x$ of $A$ (see the next item below); say, $Ax=\lambda x$ for some $\lambda\in\mathbb{C}$. Let $\mathcal{S}_{A,\lambda}:=\{z\in \mathcal{S}:Az=\lambda z\}$. Then $\mathcal{S}_{A,\lambda}$ is a nonzero subspace of $\mathcal{S}$ (it contains $x$) and a proper one (since $y\not\in\mathcal{S}_{A,\lambda}$).
We know that for any $z\in \mathcal{S}_{A,\lambda}$, $Bz\in \mathcal{S}$, since $\mathcal{S}_{A,\lambda}\subset\mathcal{S}$ and $\mathcal{S}$ is $B$-invariant. Moreover, $A$ and $B$ commute, so $$ABz=BAz=\lambda Bz \quad \Rightarrow\quad Bz\in \mathcal{S}_{A,\lambda}.$$This means that $\mathcal{S}_{A,\lambda}$ is $B$-invariant. Since $\mathcal{S}_{A,\lambda}$ is both $A$- and $B$-invariant and is a proper (nonzero) subspace of $\mathcal{S}$, this contradicts the minimality of $\mathcal{S}$. Hence every nonzero vector in $\mathcal{S}$ is an eigenvector of $A$, and, by the same argument, of $B$.
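A small numerical illustration of the statement (a sketch assuming NumPy; the identity and the flip matrix below are illustrative choices, not part of the proof):

```python
# I and the flip matrix commute; not every eigenvector of I is an
# eigenvector of B, but a common eigenvector exists, e.g. (1, 1).
import numpy as np

A = np.eye(2)                      # every nonzero vector is an eigenvector of A
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

assert np.allclose(A @ B, B @ A)   # A and B commute

v = np.array([1.0, 1.0])           # common eigenvector: A v = v and B v = v
print(A @ v, B @ v)                # [1. 1.] [1. 1.]

e = np.array([1.0, 0.0])           # an eigenvector of A only
print(B @ e)                       # [0. 1.] -- not a multiple of e
```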
- A nonzero $A$-invariant subspace $\mathcal{S}$ of $\mathbb{C}^n$ contains an eigenvector of $A$.
Proof.
Let $S=[s_1,\ldots,s_k]\in\mathbb{C}^{n\times k}$ be such that $s_1,\ldots,s_k$ form a basis of $\mathcal{S}$. Since $A\mathcal{S}\subset\mathcal{S}$, we have $AS=SG$ for some $G\in\mathbb{C}^{k\times k}$. Since $k\geq 1$, $G$ has at least one eigenpair $(\lambda,x)$. From $Gx=\lambda x$, we get $A(Sx)=SGx=\lambda(Sx)$, and $Sx\neq 0$ because $x\neq 0$ and $S$ has full column rank. The vector $Sx\in\mathcal{S}$ is therefore an eigenvector of $A$, so $\mathcal{S}$ contains at least one eigenvector of $A$.
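The construction in this proof can be traced numerically (a sketch assuming NumPy; the block triangular $A$ and the subspace $\operatorname{span}\{e_1,e_2\}$ are illustrative choices):

```python
# For an A-invariant subspace with basis matrix S, solve AS = SG,
# take an eigenpair (lam, x) of G, and check that Sx is an eigenvector of A.
import numpy as np

A = np.array([[2.0, 1.0, 5.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 1.0]])    # span{e1, e2} is A-invariant

S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])         # basis of the invariant subspace

G = np.linalg.lstsq(S, A @ S, rcond=None)[0]  # G with AS = SG
lam, X = np.linalg.eig(G)

v = S @ X[:, 0]                               # candidate eigenvector of A
print(np.allclose(A @ v, lam[0] * v))         # True
```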
- (i) Similar matrices have the same eigenvalues, up to order. Hence they have the same trace, determinant, and characteristic polynomial.
Proof.
Let $B=P^{-1}AP$. Then $\det(\lambda I-B)=\det(\lambda I-P^{-1}AP)=\det(P^{-1}(\lambda I)P-P^{-1}AP)=\det(P^{-1}(\lambda I-A)P)=\det(\lambda I-A)$, so $A$ and $B$ have the same characteristic polynomial and hence the same eigenvalues. Alternatively, if $v$ is an eigenvector of $B$ with $Bv=\lambda v$, then $A(Pv)=PBP^{-1}(Pv)=PBv=\lambda Pv$, so $Pv$ is an eigenvector of $A$ with the same eigenvalue; hence $A$ and $B$ have the same eigenvalues.
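A numerical check of (i), assuming NumPy (the particular $A$ and $P$ are arbitrary illustrative choices):

```python
# Similar matrices B = P^{-1} A P share eigenvalues, trace, and determinant.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # any invertible P

B = np.linalg.inv(P) @ A @ P

print(np.sort(np.linalg.eigvals(A)))       # same spectrum ...
print(np.sort(np.linalg.eigvals(B)))       # ... as A, up to rounding
print(np.trace(A), np.trace(B))            # equal traces
print(np.linalg.det(A), np.linalg.det(B))  # equal determinants (up to rounding)
```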
(ii) Sylvester's law of inertia: two congruent symmetric matrices with real entries have the same numbers of positive, negative, and zero eigenvalues.
Proof.
Sketch: let $B=S^{\mathrm T}AS$ with $S$ invertible. If the quadratic form of $A$ is positive definite on a subspace $V$, then the form of $B$ is positive definite on $S^{-1}V$, which has the same dimension; hence the maximal dimension of such a subspace (the number of positive eigenvalues) is the same for $A$ and $B$. The same argument handles the negative count, and the zero counts agree because $A$ and $B$ have equal rank.
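A numerical illustration of the law (a sketch assuming NumPy; the diagonal $A$ and the random congruence $S$ are illustrative):

```python
# Congruence S^T A S changes the eigenvalues but preserves their signs.
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([2.0, -1.0, 0.0, 3.0])   # inertia: 2 positive, 1 negative, 1 zero
S = rng.standard_normal((4, 4))      # generically invertible
B = S.T @ A @ S                      # congruent to A, and still symmetric

def inertia(M, tol=1e-9):
    w = np.linalg.eigvalsh(M)
    return (np.sum(w > tol), np.sum(w < -tol), np.sum(np.abs(w) <= tol))

print(inertia(A), inertia(B))        # both (2, 1, 1)
```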
- A complex square matrix $A$ is normal if it commutes with its conjugate transpose $A^*$. Show that a normal triangular matrix is diagonal.
Proof.
Let $A$ be a normal upper triangular matrix. Since
$(A^* A)_{ii} = (A A^*)_{ii},$
we can select the $i$th diagonal entry on each side with the $i$th standard unit vector $\mathbf e_i$:
$\mathbf e_i^{\mathrm T} \left(A^* A\right) \mathbf e_i=\mathbf e_i^{\mathrm T}\left(A A^*\right) \mathbf e_i.$
This is equivalent to
$\left( A \mathbf e_i\right)^* \left( A \mathbf e_i\right) = \left( A^* \mathbf e_i\right)^* \left( A^* \mathbf e_i\right),$
that is,
$\left \|A \mathbf e_i\right\|^2 =\left \|A^* \mathbf e_i\right \|^2,$
which shows that the $i$th row must have the same norm as the $i$th column.
Consider $i=1$. The first entry of row 1 equals the first entry of column 1, and the rest of column 1 is zero (by triangularity), so the norm equality forces entries 2 through $n$ of the first row to be zero. Continuing this argument for row–column pairs 2 through $n$ shows $A$ is diagonal.
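The row/column-norm argument can be seen numerically (a sketch assuming NumPy; the matrices are illustrative):

```python
# An upper triangular matrix with a nonzero off-diagonal entry violates
# ||A e_i|| = ||A* e_i|| and hence cannot be normal; a diagonal matrix is normal.
import numpy as np

T = np.array([[1.0, 5.0],
              [0.0, 2.0]])          # upper triangular, not diagonal

print(np.linalg.norm(T[:, 0]), np.linalg.norm(T[0, :]))  # 1.0 vs sqrt(26)
print(np.allclose(T.conj().T @ T, T @ T.conj().T))       # False: not normal

D = np.diag([1.0 + 2.0j, 3.0])      # diagonal, hence normal
print(np.allclose(D.conj().T @ D, D @ D.conj().T))       # True
```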
- (i) Find a matrix $A \in M_{2\times 2}(\mathbb Q)$ which is not diagonalizable over $\mathbb Q$ but is diagonalizable over $\mathbb R$.
(ii) Find a real matrix which is not diagonalizable over $\mathbb R$ but is diagonalizable over $\mathbb C$.
(iii) Find a non-diagonalizable matrix over $\mathbb C$.
Solution.
(i) $A=\begin{pmatrix}
1 & 1 \\
1 & 2 \end{pmatrix}$.
Its characteristic polynomial is $\det \begin{pmatrix}
1-x & 1 \\
1 & 2-x \end{pmatrix}=(1-x)(2-x)-1=x^2-3x+1$, which has irrational roots $\frac{3\pm\sqrt{5}}{2}$. Over $\mathbb R$,
$$A=UDU^{-1}, \text{ where }D= \frac{1}{2}\begin{pmatrix}3+ \sqrt{5}& 0 \\0 & 3- \sqrt{5}\end{pmatrix}, \; U= \frac{1}{2}\begin{pmatrix}-1+ \sqrt{5}& -1- \sqrt{5}\\2 & 2 \end{pmatrix}.$$
(ii) Consider the rotation matrix $B=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$. Its characteristic polynomial is $x^2+1$, so $B$ has no real eigenvalues and is not diagonalizable over $\mathbb R$. If we take $Q=\begin{pmatrix}1&i\\i&1\end{pmatrix}$, whose columns are eigenvectors for $i$ and $-i$, then $Q^{-1}BQ=\begin{pmatrix}i&0\\0&-i\end{pmatrix}$ is diagonal.
(iii) Consider the nilpotent matrix $\begin{bmatrix}0&1\\0&0\end{bmatrix}$: its only eigenvalue is $0$, but its eigenspace is one-dimensional, so it is not diagonalizable even over $\mathbb C$.
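A numerical companion to (ii) and (iii), assuming NumPy:

```python
# (ii) The rotation matrix has eigenvalues +-i and diagonalizes over C.
# (iii) The nilpotent matrix has only eigenvalue 0 with a 1-dimensional
#       eigenspace, so it cannot be diagonalized even over C.
import numpy as np

B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
lam, Q = np.linalg.eig(B)
print(lam)                                                  # [0.+1.j 0.-1.j]
print(np.allclose(np.linalg.inv(Q) @ B @ Q, np.diag(lam)))  # True

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
# dim ker(N - 0*I) = 2 - rank(N) = 1 < 2: too few independent eigenvectors
print(np.linalg.matrix_rank(N))                             # 1
```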
- (i) If a projection $P:W\to W$ is orthogonal, then it is self-adjoint.
Proof.
Let $U$ be the range of $P$, so $P$ projects onto $U$ along $U^{\perp}$ and $\mathbf x - P\mathbf x \in U^{\perp}$ for every $\mathbf x \in W$. Then for every $\mathbf x,\mathbf y\in W$, $\langle \mathbf {x} ,P\mathbf {y} \rangle = \langle P\mathbf x, P\mathbf y\rangle = \langle P\mathbf {x} ,\mathbf {y} \rangle =\langle \mathbf {x} ,P^{*}\mathbf {y} \rangle$, where the first two equalities hold because $\mathbf x - P\mathbf x$ and $\mathbf y - P\mathbf y$ are orthogonal to $U$. Since this holds for all $\mathbf x,\mathbf y$, we get $P=P^{*}$.
(ii) A projection is orthogonal if and only if it is self-adjoint.
Proof.
Let $P$ be self-adjoint and idempotent, and let $U=\operatorname{ran}P$ and $V=\ker P$. For any $\mathbf x$ and $\mathbf y$ in $W$ we have $P\mathbf{x} \in U$, $\mathbf{y}-P\mathbf{y} \in V$, and
$$\langle P \mathbf x, \mathbf y - P \mathbf y \rangle = \langle P^2 \mathbf x, \mathbf y - P \mathbf y \rangle = \langle P \mathbf x, P\left(I-P\right) \mathbf y \rangle = \langle P \mathbf x, \left(P-P^2\right) \mathbf y \rangle = 0,$$where $\langle\cdot,\cdot\rangle$ is the inner product on $W$. Hence the range of $P$ is orthogonal to its kernel, so $P$ (and likewise $I - P$) is an orthogonal projection. The other direction follows from (i).
- If $\mathbf {u} _{1},\ldots ,\mathbf {u} _{k}$ is a (not necessarily orthonormal) basis of $U$, and $A$ is the matrix with these vectors as columns, then the orthogonal projection onto $U$ is $P_{A}=A\left(A^{\mathrm {T} }A\right)^{-1}A^{\mathrm {T}}$.
Proof.
First, $A^{\mathrm T}A$ is invertible because $A$ has full column rank. For any $v\in U^{\perp}$, $A^{\mathrm T}v=0$, so $A\left(A^{\mathrm T}A\right)^{-1}A^{\mathrm T}v=0$. For each basis vector, $u_j=Ae_j$, so $A\left(A^{\mathrm T}A\right)^{-1}A^{\mathrm T}u_j=A\left(A^{\mathrm T}A\right)^{-1}\left(A^{\mathrm T}A\right)e_j=Ae_j=u_j$. Hence $P_{A}=A\left(A^{\mathrm T}A\right)^{-1}A^{\mathrm T}$ fixes $U$ and annihilates $U^{\perp}$, so it is the orthogonal projection onto $U$.
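A numerical check of the formula, assuming NumPy (the basis below is an arbitrary non-orthonormal choice):

```python
# P_A = A (A^T A)^{-1} A^T projects onto U = col(A) and kills U-perp.
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])         # columns: a (non-orthonormal) basis of U in R^3

P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P @ A, A))       # True: P fixes every basis vector of U
n = np.cross(A[:, 0], A[:, 1])     # normal vector, spanning U-perp
print(np.allclose(P @ n, 0))       # True: P annihilates U-perp
print(np.allclose(P, P.T), np.allclose(P @ P, P))  # True True
```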