
🔬 Tutorial problems epsilon \(\epsilon\)

Note

These problems are designed to help you practice the concepts covered in the lectures. Not all problems may be covered in the tutorial; those left out are for additional practice on your own.

\(\epsilon\).1

What is the rank of the \(N \times N\) identity matrix \(I\)?

What about an upper-triangular matrix whose diagonal elements are all equal to 1?

Check all relevant definitions

By definition, \(\mathrm{rank}(I)\) is equal to the dimension of the span of its columns. Its columns are the \(N\) canonical basis vectors in \(\mathbb{R}^N\), which we know span all of \(\mathbb{R}^N\). Hence

\[ \mathrm{rank}(I) = \dim(\mathbb{R}^N) = N \]

Draft of the proof for the second question: for the upper-triangular matrix, start by showing that the columns are linearly independent. Because there are \(N\) of them, they span the whole space \(\mathbb{R}^N\), so the expression above applies again and the rank is \(N\).
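
A quick numerical sanity check of both answers, as a sketch in NumPy (not part of the proof; the random strictly-upper-triangular filler is an illustrative assumption):

```python
import numpy as np

N = 5
I = np.eye(N)

# upper-triangular matrix with ones on the diagonal;
# the strictly upper part is arbitrary, so we fill it randomly
U = np.eye(N) + np.triu(np.random.rand(N, N), k=1)

print(np.linalg.matrix_rank(I))  # 5
print(np.linalg.matrix_rank(U))  # 5
```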

\(\epsilon\).2

Show that if \(T: \mathbb{R}^N \to \mathbb{R}^N\) is nonsingular, i.e. a linear bijection, then the inverse map \(T^{-1}\) is also linear.

Check all relevant definitions

Let \(T \colon \mathbb{R}^N \to \mathbb{R}^N\) be nonsingular and let \(T^{-1}\) be its inverse. To see that \(T^{-1}\) is linear we need to show that for any pair \(x, y\) in \(\mathbb{R}^N\) (which is the domain of \(T^{-1}\)) and any scalars \(\alpha\) and \(\beta\), the following equality holds:

\[ T^{-1}(\alpha x + \beta y) = \alpha T^{-1}x + \beta T^{-1} y. \]

In the proof we will exploit the fact that \(T\) is by assumption a linear bijection.

So pick any vectors \(x, y \in \mathbb{R}^N\) and any two scalars \(\alpha, \beta\). Since \(T\) is a bijection, we know that \(x\) and \(y\) have unique preimages under \(T\). In particular, there exist unique vectors \(u\) and \(v\) such that

\[ Tu = x \quad \text{and} \quad Tv = y \]

Using these definitions, linearity of \(T\) and the fact that \(T^{-1}\) is the inverse of \(T\), we have

\[\begin{split} T^{-1}(\alpha x + \beta y) = T^{-1}(\alpha Tu + \beta T v) \\ = T^{-1}(T(\alpha u + \beta v)) \\ = \alpha u + \beta v \\ = \alpha T^{-1} x + \beta T^{-1} y. \end{split}\]

This chain of equalities confirms

\[ T^{-1}(\alpha x + \beta y) = \alpha T^{-1}x + \beta T^{-1} y. \]
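
This identity is easy to probe numerically. A minimal sketch using NumPy, assuming a random \(4 \times 4\) matrix stands in for a nonsingular \(T\) (it is nonsingular with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4))       # nonsingular with probability 1
T_inv = np.linalg.inv(T)

x, y = rng.normal(size=4), rng.normal(size=4)
alpha, beta = 2.5, -1.0

lhs = T_inv @ (alpha * x + beta * y)
rhs = alpha * (T_inv @ x) + beta * (T_inv @ y)
print(np.allclose(lhs, rhs))      # True
```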

\(\epsilon\).3

Choose an orthonormal basis in \(\mathbb{R}^3\) which is not the canonical basis \(\{e_1,e_2,e_3\}\), and verify by direct computation that the transformation matrix is orthogonal.

To come up with an orthonormal basis in \(\mathbb{R}^3\), think first of three orthogonal lines (directions from the origin), then write them as vectors, and normalize each to length 1.

Denote by \((x,y,z)\) the three coordinates in \(\mathbb{R}^3\). Let's consider the following orthogonal lines:

  • the 45-degree line between the \(x\) and \(y\) axes

  • the \(z\) axis itself, since it is normal to the whole plane spanned by the \(x\) and \(y\) axes, and thus orthogonal to the first chosen line

  • lastly, the 45-degree line between the negative side of the \(x\) axis and the positive side of the \(y\) axis

In other words, choose the following vectors to form the basis:

\[ (1,1,0), \quad (0,0,1), \quad (-1,1,0) \]

It is clear that these three vectors are mutually orthogonal (check the pairwise dot products). Computing the norm of each of them and dividing by it, we get the following orthonormal basis:

\[ e'_1=(\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}},0), \quad e'_2=(0,0,1), \quad e'_3=(-\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}},0) \]

The transformation matrix is formed by placing the vectors of the new basis, written in the coordinates of the old basis, into its columns. The matrix is

\[\begin{split} P= \begin{pmatrix} \tfrac{1}{\sqrt{2}} & 0 & -\tfrac{1}{\sqrt{2}} \\ \tfrac{1}{\sqrt{2}} & 0 & \tfrac{1}{\sqrt{2}} \\ 0 & 1 & 0 \end{pmatrix} \end{split}\]

To show that the matrix is orthogonal, we need to show that \(P^T P = I\).

\[\begin{split} \begin{pmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1 \\ - \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & 0 \end{pmatrix} \begin{pmatrix} \tfrac{1}{\sqrt{2}} & 0 & -\tfrac{1}{\sqrt{2}} \\ \tfrac{1}{\sqrt{2}} & 0 & \tfrac{1}{\sqrt{2}} \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2}+\tfrac{1}{2} & 0 & -\tfrac{1}{2}+\tfrac{1}{2} \\ 0 & 1 & 0\\ -\tfrac{1}{2}+\tfrac{1}{2} & 0 & \tfrac{1}{2}+\tfrac{1}{2} \end{pmatrix} = I \end{split}\]

Alternatively, by performing Gauss-Jordan row operations, you could derive the inverse matrix \(P^{-1}\) and verify that it is equal to \(P^T\).
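
Both checks take a few lines in NumPy; this sketch verifies \(P^T P = I\) and \(P^{-1} = P^T\) for the matrix above:

```python
import numpy as np

s = 1 / np.sqrt(2)
P = np.array([[ s, 0, -s],
              [ s, 0,  s],
              [ 0, 1,  0]])

print(np.allclose(P.T @ P, np.eye(3)))      # True: P is orthogonal
print(np.allclose(np.linalg.inv(P), P.T))   # True: P^{-1} = P^T
```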

\(\epsilon\).4

For each of the linear maps defined by the following matrices

\[\begin{split} T_1 = \begin{pmatrix} 4/3 & -2/3 & 0 \\ -1/3 & 5/3 & 0 \\ 0 & 0 & -1 \end{pmatrix} \end{split}\]
\[\begin{split} T_2 = \begin{pmatrix} 4 & 0 & 1 \\ -2 & 1 & 0 \\ -2 & 0 & 1 \end{pmatrix} \end{split}\]
\[\begin{split} T_3 = \begin{pmatrix} 5 & 0 & 1 \\ 1 & 1 & 0 \\ -7 & 1 & 0 \end{pmatrix} \end{split}\]
\[\begin{split} T_4 = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & -1 \\ 0 & -1 & 3 \end{pmatrix} \end{split}\]

perform the following tasks:

  1. Find eigenvalues

  2. Find at least one eigenvector for each eigenvalue

  3. Form a new basis from the eigenvectors (normalized or not)

  4. Compute the transformation matrix to the new basis

  5. Find the matrix \(T\) in the new basis and verify that it is diagonal

See example in the lecture notes
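
Before working through the algebra, you can preview the answers to tasks 1 and 2 with NumPy (a checking aid only, not a substitute for the hand computation; NumPy returns normalized eigenvectors, which may differ from hand-picked ones by a scalar factor):

```python
import numpy as np

Ts = {
    "T1": np.array([[4/3, -2/3, 0], [-1/3, 5/3, 0], [0, 0, -1]]),
    "T2": np.array([[4, 0, 1], [-2, 1, 0], [-2, 0, 1]]),
    "T3": np.array([[5, 0, 1], [1, 1, 0], [-7, 1, 0]]),
    "T4": np.array([[2, 0, 0], [0, 3, -1], [0, -1, 3]]),
}
for name, T in Ts.items():
    # eigvals returns the roots of the characteristic polynomial;
    # for T3 the triple root 2 may show tiny numerical noise
    print(name, np.round(np.linalg.eigvals(T), 4))
```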

\(T_1\)

1.

To find the eigenvalues, solve

\[\begin{split} \det \begin{pmatrix} 4/3-\lambda & -2/3 & 0 \\ -1/3 & 5/3-\lambda & 0 \\ 0 & 0 & -1-\lambda \end{pmatrix} = -(1+\lambda) \det \begin{pmatrix} 4/3-\lambda & -2/3 \\ -1/3 & 5/3-\lambda \end{pmatrix} = \end{split}\]
\[ = -(1+\lambda) \left[\left(\frac{4}{3}-\lambda \right)\left(\frac{5}{3}-\lambda \right)-\frac{2}{3\cdot3}\right] = -(1+\lambda) (\lambda^2-3\lambda+\frac{4\cdot 5}{9}-\frac{2}{9}) = \]
\[ = -(1+\lambda)(\lambda^2-3\lambda+2) = -(1+\lambda)(\lambda-1)(\lambda-2) \]

Therefore the eigenvalues are \(\lambda_1=-1\), \(\lambda_2=1\) and \(\lambda_3=2\).

2.

To find the eigenvectors, plug the eigenvalues one by one into \(T_1-\lambda I\) and investigate the implications of the resulting system of equations. We should expect to find multiple eigenvectors for each eigenvalue, and are therefore looking for a formula rather than a unique answer.

\[\begin{split} (T_1 - \lambda_1 I) \begin{pmatrix} x \\ y \\ z \end{pmatrix} =0 \iff \begin{pmatrix} 7/3 & -2/3 & 0 \\ -1/3 & 8/3 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{split}\]

Thus, any value of \(z\) will do, for example \(z=1\). To find \(x\) and \(y\) we can use the first two equations (multiplied by 3 right away):

\[\begin{split} \left\{ \begin{array}{rcl} 7x - 2y &=& 0 \\ -x + 8y &=& 0 \end{array} \right. \implies \left\{ \begin{array}{l} x = 0 \\ y = 0 \end{array} \right. \end{split}\]

Therefore, the vector \(v_1 = (0,0,1)\) is an eigenvector for \(\lambda_1=-1\).

\[\begin{split} (T_1 - \lambda_2 I) \begin{pmatrix} x \\ y \\ z \end{pmatrix} =0 \iff \begin{pmatrix} 1/3 & -2/3 & 0 \\ -1/3 & 2/3 & 0 \\ 0 & 0 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \iff \end{split}\]
\[\begin{split} \iff \begin{pmatrix} 1 & -2 & 0 \\ 1 & -2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{split}\]

Obviously, all vectors of the form \((2a,a,0)\) are eigenvectors for \(\lambda_2=1\). In particular, \(v_2 = (2/\sqrt{5},1/\sqrt{5},0)\) has a norm of 1.

\[\begin{split} (T_1 - \lambda_3 I) \begin{pmatrix} x \\ y \\ z \end{pmatrix} =0 \iff \begin{pmatrix} -2/3 & -2/3 & 0 \\ -1/3 & -1/3 & 0 \\ 0 & 0 & -3 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \iff \end{split}\]
\[\begin{split} \iff \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{split}\]

Now, all vectors of the form \((a,-a,0)\) are eigenvectors for \(\lambda_3=2\). In particular, \(v_3 = (1/\sqrt{2},-1/\sqrt{2},0)\) has a norm of 1.

3.

We have chosen the eigenvectors in such a way that they are already normalized, i.e. each has length 1. To verify, observe

\[\begin{split} \begin{array}{l} \|v_1\| = \sqrt{0^2+0^2+1^2} = 1 \\ \|v_2\| = \sqrt{(2/\sqrt{5})^2+(1/\sqrt{5})^2+0^2} = 1 \\ \|v_3\| = \sqrt{(1/\sqrt{2})^2+(-1/\sqrt{2})^2+0^2} = 1 \end{array} \end{split}\]

It’s easy to verify that vectors \(v_1, v_2, v_3\) are linearly independent, and therefore the set

\[\begin{split} \{v_1, v_2, v_3\} = \left\{ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 2/\sqrt{5} \\ 1/\sqrt{5} \\ 0 \end{pmatrix}, \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \\ 0 \end{pmatrix} \right\} \end{split}\]

forms a normalized basis in \(\mathbb{R}^3\).

4.

The transformation matrix is a matrix whose columns are the vectors of the new basis expressed in the coordinates ("the language") of the old basis.

\[\begin{split} P = \begin{pmatrix} 0 & 2/\sqrt{5} & 1/\sqrt{2} \\ 0 & 1/\sqrt{5} & -1/\sqrt{2} \\ 1 & 0 & 0 \end{pmatrix} \end{split}\]

5.

The matrix \(T\) and the matrix \(T'\) in the new basis are related as

\[ T = P T' P^{-1} \quad \iff \quad T' = P^{-1} T P \]

In any case, we need \(P^{-1}\). In order to find the inverse of the \(P\) matrix, we make the following argument. By definition, \(PP^{-1} = I\), and therefore the columns of the unknown matrix \(P^{-1}\) (denoted below \(p'_i\), \(i=1,2,3\)) are solutions of the following three systems of equations:

\[\begin{split} P p'_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \quad P p'_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \quad P p'_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \end{split}\]

We can find the solutions of all three systems at once by Gaussian elimination, performing elementary row operations on an augmented matrix with three columns in place of the usual single right-hand side column.

\[\begin{split} \left(\begin{array}{ccc|ccc} 0 & 2/\sqrt{5} & 1/\sqrt{2} & 1 & 0 & 0 \\ 0 & 1/\sqrt{5} & -1/\sqrt{2} & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 \end{array}\right) \longrightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & -\sqrt{5}/\sqrt{2} & 0 & \sqrt{5} & 0 \\ 0 & 2/\sqrt{5} & 1/\sqrt{2} & 1 & 0 & 0 \end{array}\right) \longrightarrow \end{split}\]
where the new \((3,3)\) entry, after subtracting \(\tfrac{2}{\sqrt{5}}\) times the second row from the third row, is

\[ \frac{1}{\sqrt{2}} + \left(-\frac{\sqrt{5}}{\sqrt{2}}\right)\left(-\frac{2}{\sqrt{5}}\right) = \frac{1}{\sqrt{2}} + \frac{2}{\sqrt{2}} = \frac{3}{\sqrt{2}} \]
\[\begin{split} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & -\sqrt{5}/\sqrt{2} & 0 & \sqrt{5} & 0 \\ 0 & 0 & 3/\sqrt{2} & 1 & -2 & 0 \end{array}\right) \longrightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & -\sqrt{5}/\sqrt{2} & 0 & \sqrt{5} & 0 \\ 0 & 0 & 1 & \sqrt{2}/3 & -2\sqrt{2}/3 & 0 \end{array}\right) \longrightarrow \end{split}\]
where the new \((2,5)\) entry, after adding \(\tfrac{\sqrt{5}}{\sqrt{2}}\) times the third row to the second row, is

\[ \sqrt{5} + \frac{\sqrt{5}}{\sqrt{2}} \left(-\frac{2 \sqrt{2}}{3} \right) = \frac{\sqrt{5}}{3} \]
\[\begin{split} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & \sqrt{5}/3 & \sqrt{5}/3 & 0 \\ 0 & 0 & 1 & \sqrt{2}/3 & -2\sqrt{2}/3 & 0 \end{array}\right) \end{split}\]

Therefore, the inverse of the \(P\) matrix is given by

\[\begin{split} P^{-1} = \begin{pmatrix} 0 & 0 & 1 \\ \sqrt{5}/3 & \sqrt{5}/3 & 0 \\ \sqrt{2}/3 & -2\sqrt{2}/3 & 0 \end{pmatrix} \end{split}\]

Additional exercise: verify that \(PP^{-1} = I\).

Now we can compute \(P^{-1} T_1 P\):

\[\begin{split} \begin{pmatrix} 0 & 0 & 1 \\ \sqrt{5}/3 & \sqrt{5}/3 & 0 \\ \sqrt{2}/3 & -2\sqrt{2}/3 & 0 \end{pmatrix} \begin{pmatrix} 4/3 & -2/3 & 0 \\ -1/3 & 5/3 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} 0 & 2/\sqrt{5} & 1/\sqrt{2} \\ 0 & 1/\sqrt{5} & -1/\sqrt{2} \\ 1 & 0 & 0 \end{pmatrix} = \end{split}\]
\[\begin{split} \begin{pmatrix} 0 & 0 & -1 \\ \frac{4\sqrt{5}}{9}-\frac{\sqrt{5}}{9} & \frac{-2\sqrt{5}}{9}+\frac{5\sqrt{5}}{9} & 0 \\ \frac{4\sqrt{2}}{9}+\frac{2\sqrt{2}}{9} & \frac{-2\sqrt{2}}{9}-\frac{10\sqrt{2}}{9} & 0 \end{pmatrix} \begin{pmatrix} 0 & 2/\sqrt{5} & 1/\sqrt{2} \\ 0 & 1/\sqrt{5} & -1/\sqrt{2} \\ 1 & 0 & 0 \end{pmatrix} = \end{split}\]
\[\begin{split} \begin{pmatrix} 0 & 0 & -1 \\ \frac{\sqrt{5}}{3} & \frac{\sqrt{5}}{3} & 0 \\ \frac{2\sqrt{2}}{3} & \frac{-4\sqrt{2}}{3} & 0 \end{pmatrix} \begin{pmatrix} 0 & 2/\sqrt{5} & 1/\sqrt{2} \\ 0 & 1/\sqrt{5} & -1/\sqrt{2} \\ 1 & 0 & 0 \end{pmatrix} = \end{split}\]
\[\begin{split} \begin{pmatrix} -1 & 0 & 0 \\ 0 & \frac{2}{3} + \frac{1}{3} & \frac{\sqrt{5}}{3\sqrt{2}} - \frac{\sqrt{5}}{3\sqrt{2}} \\ 0 & \frac{4\sqrt{2}}{3\sqrt{5}} - \frac{4\sqrt{2}}{3\sqrt{5}} & \frac{2}{3} + \frac{4}{3} \end{pmatrix} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \end{split}\]

\(P^{-1} T_1 P\) is diagonal with eigenvalues on the main diagonal!
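
The whole chain of computations for \(T_1\) can be double-checked in a few lines (a sketch; `np.round` only tidies floating-point noise):

```python
import numpy as np

T1 = np.array([[ 4/3, -2/3,  0],
               [-1/3,  5/3,  0],
               [ 0,    0,   -1]])
P = np.array([[0, 2/np.sqrt(5),  1/np.sqrt(2)],
              [0, 1/np.sqrt(5), -1/np.sqrt(2)],
              [1, 0,             0]])

print(np.round(np.linalg.inv(P) @ T1 @ P, 10))  # diag(-1, 1, 2)
```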

\(T_2\)

1.

To find the eigenvalues, solve

\[\begin{split} \det \begin{pmatrix} 4-\lambda & 0 & 1 \\ -2 & 1-\lambda & 0 \\ -2 & 0 & 1-\lambda \end{pmatrix} = \end{split}\]

(expanding along the top row)

\[\begin{split} =(4-\lambda)(1-\lambda)^2 + 2(1-\lambda) = \\ (1-\lambda)\big((4-\lambda)(1-\lambda)+2\big) = (1-\lambda)(\lambda^2-5\lambda+6) = \\ (1-\lambda)(\lambda-2)(\lambda-3) \end{split}\]

Therefore the eigenvalues are \(\lambda_1=1\), \(\lambda_2=2\) and \(\lambda_3=3\).

2.

To find the eigenvectors, plug the eigenvalues one by one into \(T_2-\lambda I\) and investigate the implications of the resulting system of equations. We should expect to find multiple eigenvectors for each eigenvalue, and are therefore looking for a formula rather than a unique answer.

\[\begin{split} (T_2 - \lambda_1 I) \begin{pmatrix} x \\ y \\ z \end{pmatrix} =0 \iff \begin{pmatrix} 3 & 0 & 1 \\ -2 & 0 & 0 \\ -2 & 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{split}\]

Doing Gauss-Jordan elimination we have

\[\begin{split} \left(\begin{array}{ccc|c} 3 & 0 & 1 & 0 \\ -2 & 0 & 0 & 0 \\ -2 & 0 & 0 & 0 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 0 & \frac{1}{3} & 0 \\ 0 & 0 & \frac{2}{3} & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \rightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \iff \left\{ \begin{array}{l} x = 0 \\ y \text{ is free} \\ z = 0 \end{array} \right. \end{split}\]

In other words, the eigenvectors for \(\lambda_1\) are of the form \((0,p,0)\) where \(p\in\mathbb{R}\). Let us this time not impose normalization and take \(v_1 = (0,1,0)\) as the first eigenvector.

\[\begin{split} (T_2 - \lambda_2 I) \begin{pmatrix} x \\ y \\ z \end{pmatrix} =0 \iff \begin{pmatrix} 2 & 0 & 1 \\ -2 & -1 & 0 \\ -2 & 0 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|c} 2 & 0 & 1 & 0 \\ -2 & -1 & 0 & 0 \\ -2 & 0 & -1 & 0 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 0 & \frac{1}{2} & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \rightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|c} 1 & 0 & \frac{1}{2} & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \iff \left\{ \begin{array}{l} x = -\frac{1}{2} z \\ y =z \\ z \text{ is free} \end{array} \right. \end{split}\]

In other words, the eigenvectors for \(\lambda_2\) are of the form \((-\frac{1}{2}p,p,p)\) where \(p\in\mathbb{R}\). Again, without imposing normalization, take \(v_2 = (-1,2,2)\) (setting \(p=2\)) as the second eigenvector.

\[\begin{split} (T_2 - \lambda_3 I) \begin{pmatrix} x \\ y \\ z \end{pmatrix} =0 \iff \begin{pmatrix} 1 & 0 & 1 \\ -2 & -2 & 0 \\ -2 & 0 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ -2 & -2 & 0 & 0 \\ -2 & 0 & -2 & 0 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & -2 & 2 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \rightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \iff \left\{ \begin{array}{l} x = -z \\ y =z \\ z \text{ is free} \end{array} \right. \end{split}\]

In other words, the eigenvectors for \(\lambda_3\) are of the form \((-p,p,p)\) where \(p\in\mathbb{R}\). Let us take \(v_3 = (-1,1,1)\) as the third eigenvector.

3.

We have chosen eigenvectors that are not normalized; let's see whether this approach nevertheless results in a diagonal matrix \(T'\) in the new basis. The set

\[\begin{split} \{v_1, v_2, v_3\} = \left\{ \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -1 \\ 2 \\ 2 \end{pmatrix}, \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} \right\} \end{split}\]

forms a basis in \(\mathbb{R}^3\).

4.

The transformation matrix is a matrix whose columns are the vectors of the new basis expressed in the coordinates ("the language") of the old basis.

\[\begin{split} P = \begin{pmatrix} 0 & -1 & -1 \\ 1 & 2 & 1 \\ 0 & 2 & 1 \end{pmatrix} \end{split}\]

5.

Again, the matrix \(T\) and the matrix \(T'\) in the new basis are related as

\[ T = P T' P^{-1} \quad \iff \quad T' = P^{-1} T P \]

We find \(P^{-1}\) in the same way by performing Gauss-Jordan elimination:

\[\begin{split} \left(\begin{array}{ccc|ccc} 0 & -1 & -1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 & 1 & 0 \\ 0 & 2 & 1 & 0 & 0 & 1 \end{array}\right) \longrightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|ccc} 1 & 2 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & -1 & 0 & 0 \\ 0 & 0 & -1 & 2 & 0 & 1 \end{array}\right) \longrightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|ccc} 1 & 0 & -1 & 2 & 1 & 0 \\ 0 & 1 & 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -2 & 0 & -1 \end{array}\right) \longrightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 1 & -1 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & -2 & 0 & -1 \end{array}\right) \end{split}\]

Therefore, the inverse of the \(P\) matrix is given by

\[\begin{split} P^{-1} = \begin{pmatrix} 0 & 1 & -1 \\ 1 & 0 & 1 \\ -2 & 0 & -1 \end{pmatrix} \end{split}\]

Additional exercise: verify that \(PP^{-1} = I\).

Now we can compute \(P^{-1} T_2 P\):

\[\begin{split} \begin{pmatrix} 0 & 1 & -1 \\ 1 & 0 & 1 \\ -2 & 0 & -1 \end{pmatrix} \begin{pmatrix} 4 & 0 & 1 \\ -2 & 1 & 0 \\ -2 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & -1 & -1 \\ 1 & 2 & 1 \\ 0 & 2 & 1 \end{pmatrix} = \end{split}\]
\[\begin{split} \begin{pmatrix} 0 & 1 & -1 \\ 2 & 0 & 2 \\ -6 & 0 & -3 \end{pmatrix} \begin{pmatrix} 0 & -1 & -1 \\ 1 & 2 & 1 \\ 0 & 2 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} \end{split}\]

\(P^{-1} T_2 P\) is diagonal with eigenvalues on the main diagonal!
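
The same numerical check for \(T_2\), now with the non-normalized eigenvector basis (a sketch):

```python
import numpy as np

T2 = np.array([[ 4, 0, 1],
               [-2, 1, 0],
               [-2, 0, 1]])
P = np.array([[0, -1, -1],
              [1,  2,  1],
              [0,  2,  1]])

print(np.round(np.linalg.inv(P) @ T2 @ P, 10))  # diag(1, 2, 3)
```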

\(T_3\)

1.

To find the eigenvalues, solve

\[\begin{split} \det \begin{pmatrix} 5-\lambda & 0 & 1 \\ 1 & 1-\lambda & 0 \\ -7 & 1 & -\lambda \end{pmatrix} = (5-\lambda)(1-\lambda)(-\lambda)+1+7(1-\lambda) = \end{split}\]
\[\begin{split} =(1-\lambda)(\lambda^2-5\lambda+7) + 1 = \lambda^2-5\lambda+7 - \lambda^3+5\lambda^2-7\lambda +1 =\\ =-\lambda^3+6\lambda^2-12\lambda+8 = -(\lambda-2)^3 \end{split}\]

Therefore the only eigenvalue is \(\lambda=2\); this root is repeated three times.

2.

To find the eigenvectors, plug the eigenvalue \(\lambda=2\) into \(T_3-\lambda I\) and investigate the implications of the resulting system of equations.

Because the eigenvalue is repeated, we should expect difficulties finding enough eigenvectors to form a new basis: we need at least three linearly independent eigenvectors in \(\mathbb{R}^3\)!

\[\begin{split} (T_3 - \lambda I) \begin{pmatrix} x \\ y \\ z \end{pmatrix} =0 \iff \begin{pmatrix} 3 & 0 & 1 \\ 1 & -1 & 0 \\ -7 & 1 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{split}\]

Doing Gauss-Jordan elimination we have

\[\begin{split} \left(\begin{array}{ccc|c} 3 & 0 & 1 & 0 \\ 1 & -1 & 0 & 0 \\ -7 & 1 & -2 & 0 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 0 & \frac{1}{3} & 0 \\ 0 & -1 & -\frac{1}{3} & 0 \\ 0 & 1 & \frac{1}{3} & 0 \end{array}\right) \rightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|c} 1 & 0 & \frac{1}{3} & 0 \\ 0 & 1 & \frac{1}{3} & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \iff \left\{ \begin{array}{l} x = -\frac{1}{3}z \\ y = - \frac{1}{3}z \\ z \text{ is free} \end{array} \right. \end{split}\]

In other words, all eigenvectors have the form \((-\frac{1}{3}p,-\frac{1}{3}p,p)\) where \(p\in\mathbb{R}\).

In order to form a basis from eigenvectors, we need three linearly independent ones, which is impossible in this case because there is only one free parameter! In other words, all eigenvectors we can come up with lie on the same line (one degree of freedom), and thus we cannot even find two linearly independent eigenvectors, let alone three.

3. - 5.

\(T_3\) is not diagonalizable through eigendecomposition.
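
The failure can also be quantified numerically: the rank of \(T_3 - 2I\) reveals the dimension of the eigenspace (a sketch):

```python
import numpy as np

T3 = np.array([[ 5, 0, 1],
               [ 1, 1, 0],
               [-7, 1, 0]])

# rank 2 means the kernel (eigenspace for lambda = 2) is 3 - 2 = 1 dimensional,
# so only one linearly independent eigenvector exists
print(np.linalg.matrix_rank(T3 - 2 * np.eye(3)))  # 2
```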

\(T_4\)

1.

First of all, note that \(T_4\) is symmetric with real entries, therefore an eigenvalue decomposition will definitely be possible, unlike in the last case. To find the eigenvalues, solve

\[\begin{split} \det \begin{pmatrix} 2-\lambda & 0 & 0 \\ 0 & 3-\lambda & -1 \\ 0 & -1 & 3-\lambda \end{pmatrix} = (2-\lambda)\big((3-\lambda)^2-1\big) = \end{split}\]
\[\begin{split} =(2-\lambda)(9-6\lambda+\lambda^2-1) = (2-\lambda)(\lambda^2-6\lambda+8) = \\ =(2-\lambda)(\lambda -2)(\lambda-4) = -(\lambda-2)^2(\lambda-4) \end{split}\]

Therefore the eigenvalues are \(\lambda_1=4\) and the repeated eigenvalue \(\lambda_{2,3}=2\). Yet again, we should be able to diagonalize \(T_4\) because it is symmetric!

2.

To find the eigenvectors, plug the eigenvalues one by one into \(T_4-\lambda I\) and investigate the implications of the resulting system of equations.

Because one eigenvalue is repeated, we should expect to do more work than usual, but a basis of eigenvectors should still exist.

\[\begin{split} (T_4 - \lambda_1 I) \begin{pmatrix} x \\ y \\ z \end{pmatrix} =0 \iff \begin{pmatrix} -2 & 0 & 0 \\ 0 & -1 & -1 \\ 0 & -1 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|c} -2 & 0 & 0 & 0 \\ 0 & -1 & -1 & 0 \\ 0 & -1 & -1 & 0 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \iff \left\{ \begin{array}{l} x = 0 \\ y = -z \\ z \text{ is free} \end{array} \right. \end{split}\]

In other words, the eigenvectors corresponding to \(\lambda_1\) have the form \((0,-p,p)\) where \(p\in\mathbb{R}\).

\[\begin{split} (T_4 - \lambda_{2,3} I) \begin{pmatrix} x \\ y \\ z \end{pmatrix} =0 \iff \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & -1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{split}\]

It is immediately clear that the only restriction placed by this linear system of equations is \(y=z\). In other words, there are no restrictions on two of the three variables, and we can express the eigenvectors corresponding to \(\lambda_{2,3}\) as \((p,q,q)\) where \(p, q\in\mathbb{R}\) are free parameters.

3.

In order to form a basis from eigenvectors, we need three linearly independent ones. Fortunately, there are enough degrees of freedom in the parameters (one from the first eigenvalue and two from the second) to pick three linearly independent eigenvectors, for example \((0,-1,1)\), \((1,0,0)\) and \((0,1,1)\).

4.

The transformation matrix is a matrix whose columns are the vectors of the new basis expressed in the coordinates ("the language") of the old basis.

\[\begin{split} P = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 1 & 0 & 1 \end{pmatrix} \end{split}\]

5.

Again, the matrix \(T\) and the matrix \(T'\) in the new basis are related as

\[ T = P T' P^{-1} \quad \iff \quad T' = P^{-1} T P \]

We find \(P^{-1}\) in the same way by performing Gauss-Jordan elimination:

\[\begin{split} \left(\begin{array}{ccc|ccc} 0 & 1 & 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{array}\right) \longrightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|ccc} 1 & 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 & 1 & 1 \end{array}\right) \longrightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|ccc} 1 & 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & \frac{1}{2} & \frac{1}{2} \end{array}\right) \longrightarrow \end{split}\]
\[\begin{split} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & -\frac{1}{2} & \frac{1}{2} \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & \frac{1}{2} & \frac{1}{2} \end{array}\right) \end{split}\]
\[\begin{split} P^{-1} = \begin{pmatrix} 0 & -\frac{1}{2} & \frac{1}{2} \\ 1 & 0 & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix} \end{split}\]

Additional exercise: verify that \(PP^{-1} = I\).

Now we can compute \(P^{-1} T_4 P\):

\[\begin{split} \begin{pmatrix} 0 & -\frac{1}{2} & \frac{1}{2} \\ 1 & 0 & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix} \begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & -1 \\ 0 & -1 & 3 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 1 & 0 & 1 \end{pmatrix} = \end{split}\]
\[\begin{split} \begin{pmatrix} 0 & -\frac{1}{2} & \frac{1}{2} \\ 1 & 0 & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix} \begin{pmatrix} 0 & 2 & 0 \\ -4 & 0 & 2 \\ 4 & 0 & 2 \end{pmatrix} = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix} \end{split}\]

We see again that \(P^{-1} T_4 P\) is diagonal with eigenvalues on the main diagonal, even though one of the eigenvalues is repeated twice.
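
And the corresponding check for \(T_4\); since \(T_4\) is symmetric, `eigvalsh` applies as well (a sketch):

```python
import numpy as np

T4 = np.array([[2,  0,  0],
               [0,  3, -1],
               [0, -1,  3]])
P = np.array([[ 0, 1, 0],
              [-1, 0, 1],
              [ 1, 0, 1]])

print(np.round(np.linalg.inv(P) @ T4 @ P, 10))  # diag(4, 2, 2)
print(np.linalg.eigvalsh(T4))                   # [2. 2. 4.]
```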

\(\epsilon\).5

Compute the 10th power of the following matrix

\[\begin{split} A = \begin{pmatrix} 2 & 1 & 3\\ 1 & 2 & 3\\ 1 & 1 & 2 \end{pmatrix} \end{split}\]

There is no way you should compute the tenth power directly; consider diagonalizing \(A\) instead.

Let’s diagonalize the matrix \(A\) by finding a nonsingular matrix \(T\) such that \(A = TDT^{-1}\) where \(D\) is diagonal. Then, because \(T^{-1} T= I\), we have

\[ A^{10} = (TDT^{-1})^{10} = (TDT^{-1})(TDT^{-1}) \dots (TDT^{-1})(TDT^{-1}) = TD^{10}T^{-1} \]

and computing \(D^{10}\) is simply taking the 10th power of the diagonal elements.

Let’s find the eigenvalues of \(A\) and form a new basis from the eigenvectors to find \(T\).

\[\begin{split} \det \begin{pmatrix} 2-\lambda & 1 & 3\\ 1 & 2-\lambda & 3\\ 1 & 1 & 2-\lambda \end{pmatrix} = (2-\lambda)^3 -7(2-\lambda) +6 \end{split}\]

Set \(t=2-\lambda\), then we have to solve the equation

\[\begin{split} \begin{array}{rcl} t^3 - 7t + 6 &=& 0 \\ t^3 - 1 - 7t + 7 &=& 0 \\ (t-1)(t^2+t+1) - 7 (t-1) &=& 0 \\ (t-1)(t^2+t-6) &=& 0 \\ (t-1)(t-2)(t+3) &=& 0 \end{array} \end{split}\]

Therefore, the eigenvalues given by \(\lambda = 2-t\) are \(\lambda_1=1\), \(\lambda_2=0\), and \(\lambda_3=5\).

Now we need to find a basis formed of eigenvectors. Each time let’s solve the corresponding system \((A-\lambda_i I)x = {\bf 0}\) using Gaussian elimination. For \(\lambda_1=1\), we have

\[\begin{split} \left( \begin{array}{ccc|c} 1 & 1 & 3 & 0 \\ 1 & 1 & 3 & 0 \\ 1 & 1 & 1 & 0 \end{array} \right) \to \left( \begin{array}{ccc|c} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right) \end{split}\]

We conclude that the vectors of the form \((p,-p,0)\) where \(p\) is a free parameter are eigenvectors corresponding to \(\lambda_1=1\). Let’s choose \(p=1\) to get the eigenvector \((1,-1,0)\).

For \(\lambda_2=0\), we have

\[\begin{split} \left( \begin{array}{ccc|c} 2 & 1 & 3 & 0 \\ 1 & 2 & 3 & 0 \\ 1 & 1 & 2 & 0 \end{array} \right) \to \left( \begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right) \end{split}\]

In this case the vectors of the form \((-q,-q,q)\) where \(q\) is a free parameter are eigenvectors corresponding to \(\lambda_2=0\). Let’s choose \(q=-1\) to get the eigenvector \((1,1,-1)\).

For \(\lambda_3=5\), we have

\[\begin{split} \left( \begin{array}{ccc|c} -3 & 1 & 3 & 0 \\ 1 & -3 & 3 & 0 \\ 1 & 1 & -3 & 0 \end{array} \right) \to \left( \begin{array}{ccc|c} 1 & 0 & -\tfrac{3}{2} & 0 \\ 0 & 1 & -\tfrac{3}{2} & 0 \\ 0 & 0 & 0 & 0 \end{array} \right) \end{split}\]

Vectors of the form \((\tfrac{3}{2}s,\tfrac{3}{2}s,s)\) where \(s\) is a free parameter are eigenvectors corresponding to \(\lambda_3=5\). Let’s choose \(s=\tfrac{2}{3}\) to get the eigenvector \((1,1,\tfrac{2}{3})\).

The transformation matrix \(T\) is formed by placing the eigenvectors as its columns. Therefore, we have

\[\begin{split} T= \left( \begin{array}{rrr} 1 & 1 & 1 \\ -1 & 1 & 1 \\ 0 & -1 & \tfrac{2}{3} \end{array} \right) \end{split}\]

Performing Gaussian elimination to get the inverse matrix \(T^{-1}\), we have

\[\begin{split} \left( \begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & 0\\ -1 & 1 & 1 & 0 & 1 & 0\\ 0 & -1 & \tfrac{2}{3} & 0 & 0 & 1 \end{array} \right) \to \left( \begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & 0\\ 0 & 2 & 2 & 1 & 1 & 0\\ 0 & -1 & \tfrac{2}{3} & 0 & 0 & 1 \end{array} \right) \to \end{split}\]
\[\begin{split} \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{1}{2} & -\tfrac{1}{2} & 0\\ 0 & 1 & 1 & \tfrac{1}{2} & \tfrac{1}{2} & 0\\ 0 & 0 & \tfrac{5}{3} & \tfrac{1}{2} & \tfrac{1}{2} & 1 \end{array} \right) \to \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{1}{2} & -\tfrac{1}{2} & 0\\ 0 & 1 & 0 & \tfrac{1}{5} & \tfrac{1}{5} & -\tfrac{3}{5}\\ 0 & 0 & 1 & \tfrac{3}{10} & \tfrac{3}{10} & \tfrac{3}{5} \end{array} \right) \end{split}\]
\[\begin{split} T^{-1} = \left( \begin{array}{rrr} \tfrac{1}{2} & -\tfrac{1}{2} & 0\\ \tfrac{1}{5} & \tfrac{1}{5} & -\tfrac{3}{5}\\ \tfrac{3}{10} & \tfrac{3}{10} & \tfrac{3}{5} \end{array} \right) \end{split}\]

Now we have all the components to compute \(A^{10}\):

\[\begin{split} A^{10} = \left( \begin{array}{rrr} 1 & 1 & 1 \\ -1 & 1 & 1 \\ 0 & -1 & \tfrac{2}{3} \end{array} \right) \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 0\\ 0 & 0 & 5^{10} \end{array} \right) \left( \begin{array}{rrr} \tfrac{1}{2} & -\tfrac{1}{2} & 0\\ \tfrac{1}{5} & \tfrac{1}{5} & -\tfrac{3}{5}\\ \tfrac{3}{10} & \tfrac{3}{10} & \tfrac{3}{5} \end{array} \right) = \end{split}\]
\[\begin{split} = \left( \begin{array}{rrr} 1 & 0 & 5^{10} \\ -1 & 0 & 5^{10} \\ 0 & 0 & \tfrac{2}{3}5^{10} \end{array} \right) \left( \begin{array}{rrr} \tfrac{1}{2} & -\tfrac{1}{2} & 0\\ \tfrac{1}{5} & \tfrac{1}{5} & -\tfrac{3}{5}\\ \tfrac{3}{10} & \tfrac{3}{10} & \tfrac{3}{5} \end{array} \right) = \end{split}\]
\[\begin{split} = \left( \begin{array}{lll} \tfrac{3}{2} 5^{9} + \tfrac{1}{2} & \tfrac{3}{2} 5^{9} - \tfrac{1}{2} & 3 \cdot 5^{9} \\ \tfrac{3}{2} 5^{9} - \tfrac{1}{2} & \tfrac{3}{2} 5^{9} + \tfrac{1}{2} & 3 \cdot 5^{9} \\ 5^{9} & 5^{9} & 2 \cdot 5^{9} \end{array} \right), \end{split}\]

where \(5^9 = 1953125\).

The final answer is

\[\begin{split} A^{10} = \begin{pmatrix} 2929688 & 2929687 & 5859375\\ 2929687 & 2929688 & 5859375\\ 1953125 & 1953125 & 3906250 \end{pmatrix} \end{split}\]
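
The arithmetic is easy to confirm with NumPy (integer dtype keeps the result exact):

```python
import numpy as np

A = np.array([[2, 1, 3],
              [1, 2, 3],
              [1, 1, 2]])

print(np.linalg.matrix_power(A, 10))
# [[2929688 2929687 5859375]
#  [2929687 2929688 5859375]
#  [1953125 1953125 3906250]]
```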

\(\epsilon\).6

A stochastic matrix is a square matrix with nonnegative entries whose rows each sum up to 1.

Consider the following \(n \times n\) stochastic matrix:

\[\begin{split} A_n = \left( \begin{array}{cccccc} \alpha_1 & 0 & 0 & \dots & 0 & 1-\alpha_1 \\ 0 & \alpha_2 & 0 & \dots & 0 & 1-\alpha_2 \\ 0 & 0 & \alpha_3 & \dots & 0 & 1-\alpha_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & \alpha_{n-1} & 1-\alpha_{n-1} \\ 1-\alpha_n & 0 & 0 & \dots & 0 & \alpha_n \end{array} \right), \end{split}\]

where \(\alpha_i \in [0,1]\) for \(i=1,2,\dots,n\).

Show that the maximum eigenvalue of \(A_n\) is 1 for all \(n \in \mathbb{N}\).

Both a direct proof and a proof by mathematical induction will work. In both cases it is worth starting with the simple case of \(n=2\).

Start with \(n=2\), in which case the matrix takes the form

\[\begin{split} A_2 = \left( \begin{array}{cc} \alpha_1 & 1-\alpha_1 \\ 1-\alpha_2 & \alpha_2 \end{array} \right) \end{split}\]

The eigenvalues \(\{\lambda_j\}\) are solutions to the characteristic equation

\[\begin{split} \det \left( \begin{array}{cc} \alpha_1-\lambda & 1-\alpha_1 \\ 1-\alpha_2 & \alpha_2-\lambda \end{array} \right) = 0 \end{split}\]

We have

\[\begin{split} \det \left( \begin{array}{cc} \alpha_1-\lambda & 1-\alpha_1 \\ 1-\alpha_2 & \alpha_2-\lambda \end{array} \right) = \\ = (\alpha_1-\lambda)(\alpha_2-\lambda) - (1-\alpha_1)(1-\alpha_2) = \\ = \lambda^2 - (\alpha_1+\alpha_2)\lambda - 1 + \alpha_1 + \alpha_2 = \\ = (\lambda - 1)(\lambda - \alpha_1 - \alpha_2+1) \end{split}\]

The factorization in the last line can be obtained via the quadratic formula or via Vieta’s formulas for the sum and product of the roots of a quadratic equation:

\[\begin{split} (x-a)(x-b) = x^2-x(a+b)+ab \; \implies \; \begin{cases} x_1 + x_2 = a+b \\ x_1 x_2 = ab \end{cases} \end{split}\]

It is clear from the characteristic equation that the two eigenvalues are \(\lambda_1 = 1\) and \(\lambda_2 = \alpha_1 + \alpha_2 - 1 \leqslant 1\) because \(\alpha_i \in [0,1]\). Thus, the maximum eigenvalue of \(A_2\) is \(\lambda_1 = 1\).

Now consider the general case of \(n >2\). The characteristic polynomial is given by

\[\begin{split} \det \left( \begin{array}{cccccc} \alpha_1-\lambda & 0 & 0 & \dots & 0 & 1-\alpha_1 \\ 0 & \alpha_2-\lambda & 0 & \dots & 0 & 1-\alpha_2 \\ 0 & 0 & \alpha_3-\lambda & \dots & 0 & 1-\alpha_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & \alpha_{n-1}-\lambda & 1-\alpha_{n-1} \\ 1-\alpha_n & 0 & 0 & \dots & 0 & \alpha_n-\lambda \end{array} \right) = \end{split}\]

Expanding this determinant along the second column we get

\[\begin{split} = (\alpha_2-\lambda) \det \left( \begin{array}{cccccc} \alpha_1-\lambda & 0 & \dots & 0 & 1-\alpha_1 \\ 0 & \alpha_3-\lambda & \dots & 0 & 1-\alpha_3 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \dots & \alpha_{n-1}-\lambda & 1-\alpha_{n-1} \\ 1-\alpha_n & 0 & \dots & 0 & \alpha_n-\lambda \end{array} \right) = \end{split}\]

And again expanding this determinant along the second column we get

\[\begin{split} = (\alpha_2-\lambda)(\alpha_3-\lambda) \det \left( \begin{array}{cccccc} \alpha_1-\lambda & \dots & 0 & 1-\alpha_1 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \dots & \alpha_{n-1}-\lambda & 1-\alpha_{n-1} \\ 1-\alpha_n & \dots & 0 & \alpha_n-\lambda \end{array} \right) = \end{split}\]

Expanding repeatedly in the same way, after \(n-2\) steps we are left with a \(2 \times 2\) determinant:

\[\begin{split} = (\alpha_2-\lambda)(\alpha_3-\lambda) \dots (\alpha_{n-1}-\lambda) \det \left( \begin{array}{cccccc} \alpha_1-\lambda & 1-\alpha_1 \\ 1-\alpha_n & \alpha_n-\lambda \end{array} \right) = \end{split}\]
\[ = (\alpha_2-\lambda)(\alpha_3-\lambda) \dots (\alpha_{n-1}-\lambda) (\lambda -1)(\lambda - \alpha_1 - \alpha_n + 1) \]

In the end we are left with essentially the same determinant as in the case \(n=2\), with \(\alpha_2\) replaced by \(\alpha_n\).

The eigenvalues, which are the roots of the equation

\[ (\alpha_2-\lambda)(\alpha_3-\lambda) \dots (\alpha_{n-1}-\lambda) (\lambda -1)(\lambda - \alpha_1 - \alpha_n + 1) =0 \]

are

\[\begin{split} \begin{array}{l} \lambda_1 = 1 \\ \lambda_2 = \alpha_2 \\ \lambda_3 = \alpha_3 \\ \quad\vdots\\ \lambda_{n-1} = \alpha_{n-1} \\ \lambda_n = \alpha_1+\alpha_n-1 \\ \end{array} \end{split}\]

Given that \(\alpha_i \in [0,1]\) for \(i=1,2,\dots,n\), we have \(\lambda_i \leqslant 1\) for \(i \in \{2,3,\dots,n-1\}\) and \(\lambda_n = \alpha_1+\alpha_n-1 \leqslant 1\).

Therefore \(\lambda_1=1\) is the maximum eigenvalue.

\(\blacksquare\)
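
A numerical spot check of the claim for random \(\alpha_i\) (a sketch; the construction of \(A_n\) follows the pattern above):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 6
alpha = rng.uniform(size=n)

A = np.zeros((n, n))
A[np.arange(n), np.arange(n)] = alpha    # diagonal entries alpha_i
A[:-1, -1] = 1 - alpha[:-1]              # last column, rows 1..n-1
A[-1, 0] = 1 - alpha[-1]                 # bottom-left corner

print(A.sum(axis=1))                      # every row sums to 1
print(np.max(np.linalg.eigvals(A).real))  # 1.0
```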