# Tutorial problems *zeta*

## \(\zeta\).1

Consider the matrix \(A\) defined by

Do the columns of this matrix form a basis of \(\mathbb{R}^3\)? Why or why not?

Check all relevant definitions and facts.

No, these two vectors do not form a basis of \(\mathbb{R}^3\).

If they did then \(\mathbb{R}^3\) would be spanned by just two vectors. This is impossible.

If two vectors were enough to form a basis of \(\mathbb{R}^3\), then every basis would have to have two elements and the dimension of the space \(\mathbb{R}^3\) would have to be equal to 2. But we know that the set of \(N\) canonical basis vectors forms a basis of \(\mathbb{R}^N\), and thus the dimension of \(\mathbb{R}^3\) is equal to 3.
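The same conclusion can be checked numerically. The matrix below is a hypothetical stand-in for \(A\) (the original entries are not reproduced above); the point is that a \(3 \times 2\) matrix can have rank at most 2, so its columns can never span \(\mathbb{R}^3\):

```python
import numpy as np

# A hypothetical 3 x 2 matrix standing in for A: its two columns
# live in R^3 but cannot span a 3-dimensional space
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0]])

rank = np.linalg.matrix_rank(A)
print(rank)   # 2 -- strictly less than 3, so no basis of R^3
```

Whatever the entries of \(A\), the rank of a matrix is bounded by its number of columns, so this check fails for any \(3 \times 2\) matrix.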

## \(\zeta\).2

Is \(\mathbb{R}^2\) a linear subspace of \(\mathbb{R}^3\)? Why or why not?

This is a bit of a trick question, but to solve it you just need to look carefully at the definitions (as always).

A linear subspace of \(\mathbb{R}^3\) is a subset of \(\mathbb{R}^3\) with certain properties. \(\mathbb{R}^3\) is a collection of 3-tuples \((x_1, x_2, x_3)\) where each \(x_i\) is a real number. Elements of \(\mathbb{R}^2\) are 2-tuples (pairs), and hence not elements of \(\mathbb{R}^3\).

Therefore \(\mathbb{R}^2\) is not a subset of \(\mathbb{R}^3\), and in particular not a linear subspace of \(\mathbb{R}^3\).

## \(\zeta\).3

Show that if \(T \colon \mathbb{R}^K \to \mathbb{R}^N\) is a linear function then \({\bf 0} \in \mathrm{kernel}(T)\).

Let \(T\) be as in the question. We need to show that \(T {\bf 0} = {\bf 0}\). Here's one proof. We know from the definition of scalar multiplication that \(0 {\bf x} = {\bf 0}\) for any vector \({\bf x}\). Hence, letting \({\bf x}\) and \({\bf y}\) be any vectors in \(\mathbb{R}^K\) and applying the definition of linearity,

\[
T {\bf 0} = T(0 {\bf x} + 0 {\bf y}) = 0 \, T{\bf x} + 0 \, T{\bf y} = {\bf 0} + {\bf 0} = {\bf 0}
\]
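As a sanity check, any matrix \(B\) defines a linear map \(T({\bf x}) = B {\bf x}\), and applying it to the zero vector must return the zero vector. The matrix below is arbitrary:

```python
import numpy as np

# Any matrix B defines a linear map T(x) = B x from R^3 to R^2;
# the entries here are arbitrary
B = np.array([[1.0, -2.0, 0.5],
              [3.0,  0.0, 1.0]])

zero = np.zeros(3)
result = B @ zero   # T applied to the zero vector of R^3

print(result)       # [0. 0.] -- the zero vector of R^2
```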

## \(\zeta\).4

Let \(S\) be any nonempty subset of \(\mathbb{R}^N\) with the following two properties:

\({\bf x}, {\bf y} \in S \implies {\bf x} + {\bf y} \in S\)

\(c \in \mathbb{R}\) and \({\bf x} \in S \implies c{\bf x} \in S\)

Is \(S\) a linear subspace of \(\mathbb{R}^N\)?

Yes, \(S\) must be a linear subspace of \(\mathbb{R}^N\). To see this, pick any \({\bf x}\) and \({\bf y}\) in \(S\) and any scalars \(\alpha, \beta\). To establish our claim we need to show that \({\bf z} := \alpha {\bf x} + \beta {\bf y}\) is in \(S\). To see that this is so, observe that by the second property (closure under scalar multiplication) we have \({\bf u} := \alpha{\bf x} \in S\) and \({\bf v} := \beta{\bf y} \in S\). By the first property (closure under addition) we then have \({\bf u} + {\bf v} \in S\). In other words, \({\bf z} \in S\) as claimed.

## \(\zeta\).5

If \(S\) is a linear subspace of \(\mathbb{R}^N\) then any linear combination of \(K\) elements of \(S\) is also in \(S\). Show this for the case \(K = 3\).

Let \({\bf x}_i \in S\) and \(\alpha_i \in \mathbb{R}\) for \(i=1,2,3\). We claim that

\[
\alpha_1 {\bf x}_1 + \alpha_2 {\bf x}_2 + \alpha_3 {\bf x}_3 \in S
\]

To see this let \({\bf y} := \alpha_1 {\bf x}_1 + \alpha_2 {\bf x}_2\). By the definition of linear subspaces we know that \({\bf y} \in S\). Using the definition of linear subspaces again we have \({\bf y} + \alpha_3 {\bf x}_3 \in S\). Hence the expression above is confirmed.

## \(\zeta\).6

Let \(\{{\bf x}_1, {\bf x}_2\}\) be a linearly independent set in \(\mathbb{R}^2\) and let \(\gamma\) be a nonzero scalar. Is it true that \(\{\gamma {\bf x}_1, \gamma {\bf x}_2\}\) is also linearly independent?

The answer is yes. Here's one proof: Suppose to the contrary that \(\{\gamma {\bf x}_1, \gamma {\bf x}_2\}\) is linearly dependent. Then one element can be written as a linear combination of the others. In our setting with only two vectors, this translates to \(\gamma {\bf x}_1 = \alpha \gamma {\bf x}_2\) for some \(\alpha\). Since \(\gamma \ne 0\) we can multiply each side by \(1/\gamma\) to get \({\bf x}_1 = \alpha {\bf x}_2\). But now each \({\bf x}_i\) is a multiple of the other. This contradicts linear independence of \(\{{\bf x}_1, {\bf x}_2\}\).

Here's another proof: Take any \(\alpha_1, \alpha_2 \in \mathbb{R}\) with

\[
\alpha_1 \gamma {\bf x}_1 + \alpha_2 \gamma {\bf x}_2 = {\bf 0}
\]

We need to show that \(\alpha_1 = \alpha_2 = 0\). To see this, observe that

\[
\alpha_1 \gamma {\bf x}_1 + \alpha_2 \gamma {\bf x}_2 = \gamma (\alpha_1 {\bf x}_1 + \alpha_2 {\bf x}_2)
\]

Hence \(\gamma (\alpha_1 {\bf x}_1 + \alpha_2 {\bf x}_2) = {\bf 0}\). Since \(\gamma \ne 0\), the only way this could occur is that \(\alpha_1 {\bf x}_1 + \alpha_2 {\bf x}_2 = {\bf 0}\). But \(\{{\bf x}_1, {\bf x}_2\}\) is linearly independent, so this implies that \(\alpha_1 = \alpha_2 = 0\). The proof is done.
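The claim can also be illustrated numerically: scaling both vectors of an independent pair by a nonzero \(\gamma\) leaves the rank of the matrix they form unchanged. The pair below is a hypothetical example:

```python
import numpy as np

# A linearly independent pair in R^2 (hypothetical example)
x1 = np.array([1.0, 0.0])
x2 = np.array([1.0, 1.0])
gamma = -2.5   # any nonzero scalar

original = np.column_stack((x1, x2))
scaled   = np.column_stack((gamma * x1, gamma * x2))

print(np.linalg.matrix_rank(original))  # 2
print(np.linalg.matrix_rank(scaled))    # 2 -- still independent
```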

## \(\zeta\).7

Is the vector \({\bf z}\) in the span of \(X:=\{{\bf x}_1, {\bf x}_2, {\bf x}_3\}\), where \({\bf z}\) and the \({\bf x}_i\) are given vectors in \(\mathbb{R}^3\)?

The direct way to answer the question is to check whether \({\bf z}\) is a linear combination of the three vectors in \(X\). If this is the case, then by definition it is in the required span. To establish this, we have to solve a system of linear equations of the form

\[
\alpha_1 {\bf x}_1 + \alpha_2 {\bf x}_2 + \alpha_3 {\bf x}_3 = {\bf z}
\]

But there is an easier way to do this!

We know that any linearly independent set of 3 vectors in \(\mathbb{R}^3\) will span \(\mathbb{R}^3\). Since \({\bf z} \in \mathbb{R}^3\), this will include \({\bf z}\). So all we need to do is show that \(X\) is linearly independent. To this end, take any scalars \(\alpha_1, \alpha_2, \alpha_3\) with

\[
\alpha_1 {\bf x}_1 + \alpha_2 {\bf x}_2 + \alpha_3 {\bf x}_3 = {\bf 0}
\]

Write this as a linear system of 3 equations and show that the only solution is \(\alpha_1=\alpha_2=\alpha_3=0\). If this holds, the set is linearly independent.

Clearly, the second system is much easier to solve than the first.
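Both checks are easy to carry out numerically. The vectors below are hypothetical stand-ins for \({\bf x}_1, {\bf x}_2, {\bf x}_3\) and \({\bf z}\) (the originals are not reproduced above); the method is what matters:

```python
import numpy as np

# Hypothetical stand-ins for x1, x2, x3 and z
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([1.0, 1.0, 0.0])
x3 = np.array([1.0, 1.0, 1.0])
z  = np.array([2.0, -1.0, 3.0])

X = np.column_stack((x1, x2, x3))

# If the columns are linearly independent (full rank), they span R^3,
# so z is automatically in the span
if np.linalg.matrix_rank(X) == 3:
    alpha = np.linalg.solve(X, z)   # coefficients of z in terms of x1, x2, x3
    print(alpha)
```

Here `np.linalg.solve` recovers the coefficients \(\alpha_i\) explicitly, confirming membership in the span by construction.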

## \(\zeta\).8

What is the rank of the \(N \times N\) identity matrix \({\bf I}\)?

What about an upper-triangular matrix whose diagonal elements are all equal to 1?

By definition, \(\mathrm{rank}({\bf I})\) is equal to the dimension of the span of its columns. Its columns are the \(N\) canonical basis vectors in \(\mathbb{R}^N\), which we know span all of \(\mathbb{R}^N\). Hence

\[
\mathrm{rank}({\bf I}) = \mathrm{dim}(\mathbb{R}^N) = N
\]

*Draft of the proof for the second question:*
For the upper-triangular matrix, start by showing that the columns are linearly independent. Because there are \(N\) of them, they span the whole space \(\mathbb{R}^N\), so the same argument applies again and the rank is \(N\).
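Both ranks can be verified directly. Here the upper-triangular matrix is taken, for concreteness, to have every entry on and above the diagonal equal to 1 (any nonzero diagonal gives the same conclusion):

```python
import numpy as np

N = 5
I = np.eye(N)

# An upper-triangular matrix with ones on the diagonal
# (and, for concreteness, ones above it too)
U = np.triu(np.ones((N, N)))

print(np.linalg.matrix_rank(I))  # 5
print(np.linalg.matrix_rank(U))  # 5
```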

## \(\zeta\).9

Show that if \(T \colon \mathbb{R}^N \to \mathbb{R}^N\) is nonsingular, i.e., a linear bijection, then the inverse map \(T^{-1}\) is also linear.

Let \(T \colon \mathbb{R}^N \to \mathbb{R}^N\) be nonsingular and let \(T^{-1}\) be its inverse. To see that \(T^{-1}\) is linear we need to show that for any pair \({\bf x}, {\bf y}\) in \(\mathbb{R}^N\) (which is the domain of \(T^{-1}\)) and any scalars \(\alpha\) and \(\beta\), the following equality holds:

\[
T^{-1}(\alpha {\bf x} + \beta {\bf y}) = \alpha T^{-1} {\bf x} + \beta T^{-1} {\bf y}
\]

In the proof we will exploit the fact that \(T\) is by assumption a linear bijection.

So pick any vectors \({\bf x}, {\bf y} \in \mathbb{R}^N\) and any two scalars \(\alpha, \beta\). Since \(T\) is a bijection, we know that \({\bf x}\) and \({\bf y}\) have unique preimages under \(T\). In particular, there exist unique vectors \({\bf u}\) and \({\bf v}\) such that

\[
T {\bf u} = {\bf x}
\quad \text{and} \quad
T {\bf v} = {\bf y},
\]

that is, \({\bf u} = T^{-1}{\bf x}\) and \({\bf v} = T^{-1}{\bf y}\).

Using these definitions, linearity of \(T\) and the fact that \(T^{-1}\) is the inverse of \(T\), we have

\[
T^{-1}(\alpha {\bf x} + \beta {\bf y})
= T^{-1}(\alpha T{\bf u} + \beta T{\bf v})
= T^{-1}(T(\alpha {\bf u} + \beta {\bf v}))
= \alpha {\bf u} + \beta {\bf v}
= \alpha T^{-1}{\bf x} + \beta T^{-1}{\bf y}
\]

This chain of equalities confirms the required equality, and hence \(T^{-1}\) is linear.
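The result can be spot-checked numerically: a random square matrix is nonsingular with probability one, so it defines a linear bijection \(T({\bf x}) = A {\bf x}\) whose inverse map is \({\bf x} \mapsto A^{-1} {\bf x}\):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random square matrix is nonsingular with probability one,
# so T(x) = A x is a linear bijection
A = rng.standard_normal((4, 4))
A_inv = np.linalg.inv(A)

x, y = rng.standard_normal(4), rng.standard_normal(4)
alpha, beta = 2.0, -3.0

# Linearity of the inverse map: T^{-1}(a x + b y) = a T^{-1} x + b T^{-1} y
lhs = A_inv @ (alpha * x + beta * y)
rhs = alpha * (A_inv @ x) + beta * (A_inv @ y)

print(np.allclose(lhs, rhs))   # True
```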