🔬 Tutorial problems delta \(\delta\)#

Note

These problems are designed to help you practice the concepts covered in the lectures. Not all problems may be covered in the tutorial; those left out are for additional practice on your own.

\(\delta\).1#

This example appears to be an application of a linear version of the Keynesian cross model.

Solve the following system of linear equations:

\[\begin{split} \left\{\begin{array}{ccccc} 2 Y-5 C+0.8 T & = & 580 & \cdots & (\text { Equation } 1) \\ -Y+C+0.6 T+340 & = & 0 & \cdots & (\text { Equation } 2) \\ 0.4 Y-T & = & 100 & \cdots & (\text { Equation } 3) \end{array}\right\} \end{split}\]

[Bradley, 2013] Progress Exercises 9.3, Question 5.

It would be helpful to review the Gauss-Jordan elimination technique, for example here

Consider the following system of three linear equations in three unknown variables:

\[\begin{split} \left\{\begin{array}{ccccc} 2 Y-5 C+0.8 T & = & 580 & \cdots & (\text { Equation } 1) \\ -Y+C+0.6 T+340 & = & 0 & \cdots & (\text { Equation } 2) \\ 0.4 Y-T & = & 100 & \cdots & (\text { Equation } 3) \end{array}\right\} . \end{split}\]

This system of equations can be rewritten as

\[\begin{split} \left\{\begin{array}{ccccc} 2 Y-5 C+0.8 T & = & 580 & \cdots & (\text { Equation } 1) \\ -Y+C+0.6 T & = & -340 & \cdots & (\text { Equation } 2) \\ 0.4 Y-T & = & 100 & \cdots & (\text { Equation } 3) \end{array}\right\} . \end{split}\]

The augmented row matrix representation of this system of equations is

\[\begin{split} (A \mid b)=\left(\begin{array}{ccc|c} 2 & -5 & 0.8 & 580 \\ -1 & 1 & 0.6 & -340 \\ 0.4 & 0 & -1 & 100 \end{array}\right) \end{split}\]

Note that

\[\begin{split} \begin{array}{ll} & (A \mid b)=\left(\begin{array}{ccc|c} 2 & -5 & 0.8 & 580 \\ -1 & 1 & 0.6 & -340 \\ 0.4 & 0 & -1 & 100 \end{array}\right) \begin{array}{c} R 1 \rightarrow R 2 \\ R 2 \rightarrow R 1 \\ \\ \end{array} \\ & \longrightarrow\left(\begin{array}{ccc|c} -1 & 1 & 0.6 & -340 \\ 2 & -5 & 0.8 & 580 \\ 0.4 & 0 & -1 & 100 \end{array}\right) \begin{array}{c} R 1 \rightarrow(-1) R 1 \\ \\ \\ \end{array} \\ & \longrightarrow\left(\begin{array}{ccc|c} 1 & -1 & -0.6 & 340 \\ 2 & -5 & 0.8 & 580 \\ 0.4 & 0 & -1 & 100 \end{array}\right) \begin{array}{c} \\ R 2 \rightarrow R 2-2 R 1 \\ R 3 \rightarrow R 3-(0.4) R 1 \end{array} \\ & \longrightarrow\left(\begin{array}{ccc|c} 1 & -1 & -0.6 & 340 \\ 0 & -3 & 2 & -100 \\ 0 & 0.4 & \frac{-76}{100} & -36 \end{array}\right) \begin{array}{c} \\ R 2 \rightarrow\left(\frac{-1}{3}\right) R 2 \\ R 3 \rightarrow\left(\frac{10}{4}\right) R 3 \end{array} \\ & \longrightarrow\left(\begin{array}{ccc|c} 1 & -1 & -0.6 & 340 \\ 0 & 1 & \frac{-2}{3} & \frac{100}{3} \\ 0 & 1 & \frac{-19}{10} & -90 \end{array}\right) \begin{array}{c} \\ \\ R 3 \rightarrow R 3-R 2 \end{array} \\ & \longrightarrow\left(\begin{array}{ccc|c} 1 & -1 & -0.6 & 340 \\ 0 & 1 & \frac{-2}{3} & \frac{100}{3} \\ 0 & 0 & \frac{-37}{30} & \frac{-370}{3} \end{array}\right) \begin{array}{c} \\ \\ R 3 \rightarrow\left(\frac{-30}{37}\right) R 3 \end{array} \\ & \longrightarrow\left(\begin{array}{ccc|c} 1 & -1 & -0.6 & 340 \\ 0 & 1 & \frac{-2}{3} & \frac{100}{3} \\ 0 & 0 & 1 & 100 \end{array}\right) \begin{array}{l} R 1 \rightarrow R 1+0.6 R 3 \\ R 2 \rightarrow R 2+\left(\frac{2}{3}\right) R 3 \\ \\ \end{array} \\ & \longrightarrow\left(\begin{array}{ccc|c} 1 & -1 & 0 & 400 \\ 0 & 1 & 0 & 100 \\ 0 & 0 & 1 & 100 \end{array}\right) \begin{array}{l} \quad R 1 \rightarrow R 1+R 2 \\ \\ \\ \end{array} \\ & \longrightarrow\left(\begin{array}{lll|l} 1 & 0 & 0 & 500 \\ 0 & 1 & 0 & 100 \\ 0 & 0 & 1 & 100 \end{array}\right) \\ & \longrightarrow(I \mid c) . \end{array} \end{split}\]

Thus we can conclude that the unique solution to this system of equations is

\[\begin{split} \left\{\begin{array}{l} Y=500 \\ C=100 \\ T=100 \end{array}\right\} . \end{split}\]
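For a quick numerical cross-check (a minimal NumPy sketch, not part of the textbook solution), we can solve the same system directly:

```python
import numpy as np

# Coefficient matrix and right-hand side for the system in (Y, C, T),
# with Equation 2 in the rearranged form -Y + C + 0.6 T = -340
A = np.array([[ 2.0, -5.0,  0.8],
              [-1.0,  1.0,  0.6],
              [ 0.4,  0.0, -1.0]])
b = np.array([580.0, -340.0, 100.0])

solution = np.linalg.solve(A, b)
print(solution)  # expected: approximately [500. 100. 100.]
```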

\(\delta\).2#

Find the inverse matrix for the following matrix or show that it does not exist:

\[\begin{split} C=\left(\begin{array}{ccc} 1 & 1 & -3 \\ 2 & 1 & -3 \\ 2 & 2 & 1 \end{array}\right) \end{split}\]

[Sydsæter, Hammond, Strøm, and Carvajal, 2016] Section 16.6, Problem 2

It would be helpful to review the Gauss-Jordan elimination technique for computing inverse matrices, for example here

Note that

\[\begin{split} \begin{array}{ll} \operatorname{det}(C) &= c_{11} C_{11}+c_{12} C_{12}+c_{13} C_{13} \\ &= 1 C_{11}+1 C_{12}+(-3) C_{13} \\ &= 1(-1)^{1+1}\left|\begin{array}{cc} 1 & -3 \\ 2 & 1 \end{array}\right|+1(-1)^{1+2}\left|\begin{array}{cc} 2 & -3 \\ 2 & 1 \end{array}\right| +(-3)(-1)^{1+3}\left|\begin{array}{cc} 2 & 1 \\ 2 & 2 \end{array}\right| \\ & = 1(-1)^{2}(1+6)+1(-1)^{3}(2+6)+(-3)(-1)^{4}(4-2) \\ & = 1(1)(7)+1(-1)(8)-3(1)(2) \\ & = 7-8-6 \\ & = -7 \\ & \neq 0 . \end{array} \end{split}\]

Since the determinant of the matrix \(C\) is not zero, we know that \(C\) is a non-singular matrix. This means that it is invertible, so that \(C^{-1}\) does exist. We will apply Gauss-Jordan elimination to an appropriate augmented row matrix to find \(C^{-1}\).

The augmented row matrix with which we will begin is

\[\begin{split} \left(C \mid I_{3}\right)=\left(\begin{array}{ccc|ccc} 1 & 1 & -3 & 1 & 0 & 0 \\ 2 & 1 & -3 & 0 & 1 & 0 \\ 2 & 2 & 1 & 0 & 0 & 1 \end{array}\right) \end{split}\]

Note that

\[\begin{split} \begin{array}{ll} \left(C \mid I_{3}\right) &=\left(\begin{array}{ccc|ccc} 1 & 1 & -3 & 1 & 0 & 0 \\ 2 & 1 & -3 & 0 & 1 & 0 \\ 2 & 2 & 1 & 0 & 0 & 1 \end{array}\right) \begin{array}{l} \\ R 2 \rightarrow R 2-2 R 1 \\ R 3 \rightarrow R 3-2 R 1 \end{array} \\ & \longrightarrow\left(\begin{array}{ccc|ccc} 1 & 1 & -3 & 1 & 0 & 0 \\ 0 & -1 & 3 & -2 & 1 & 0 \\ 0 & 0 & 7 & -2 & 0 & 1 \end{array}\right) \begin{array}{c} \\ R 2 \rightarrow(-1) R 2 \\ R 3 \rightarrow\left(\frac{1}{7}\right) R 3 \end{array} \\ & \longrightarrow\left(\begin{array}{ccc|ccc} 1 & 1 & -3 & 1 & 0 & 0 \\ 0 & 1 & -3 & 2 & -1 & 0 \\ 0 & 0 & 1 & \frac{-2}{7} & 0 & \frac{1}{7} \end{array}\right) \begin{array}{l} R 1 \rightarrow R 1-R 2 \\ R 2 \rightarrow R 2+3 R 3 \\ \\ \end{array} \\ & \longrightarrow\left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -1 & 1 & 0 \\ 0 & 1 & 0 & \frac{8}{7} & -1 & \frac{3}{7} \\ 0 & 0 & 1 & \frac{-2}{7} & 0 & \frac{1}{7} \end{array}\right) \end{array} \end{split}\]

Thus we can conclude that the inverse matrix for \(C\) is given by

\[\begin{split} C^{-1}=\left(\begin{array}{ccc} -1 & 1 & 0 \\ \frac{8}{7} & -1 & \frac{3}{7} \\ \frac{-2}{7} & 0 & \frac{1}{7} \end{array}\right) \end{split}\]

As an exercise, you should check that \(C C^{-1}=C^{-1} C=I\).
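A quick numerical cross-check of both the determinant and the inverse (a minimal NumPy sketch, not part of the textbook solution):

```python
import numpy as np

C = np.array([[1.0, 1.0, -3.0],
              [2.0, 1.0, -3.0],
              [2.0, 2.0,  1.0]])

print(np.linalg.det(C))                    # approximately -7
C_inv = np.linalg.inv(C)
print(C_inv)                               # should match the matrix above
print(np.allclose(C @ C_inv, np.eye(3)))   # True
print(np.allclose(C_inv @ C, np.eye(3)))   # True
```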

\(\delta\).3#

Consider the matrix \(A\) defined by

\[\begin{split} A = \begin{pmatrix} 1 & 0 \\ 0.5 & -2 \\ 0 & 3 \end{pmatrix} \end{split}\]

Do the columns of this matrix form a basis of \(\mathbb{R}^3\)?

Why or why not?

Check all relevant definitions and facts, and apply them

No, these two vectors do not form a basis of \(\mathbb{R}^3\).

If they did then \(\mathbb{R}^3\) would be spanned by just two vectors. This is impossible.

If two vectors were enough to form a basis of \(\mathbb{R}^3\), then every basis would have to contain two elements, and the dimension of \(\mathbb{R}^3\) would have to equal 2. But we know that the set of \(N\) canonical basis vectors forms a basis of \(\mathbb{R}^N\), and thus the dimension of \(\mathbb{R}^3\) is equal to 3.
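The same conclusion can be checked numerically (a minimal NumPy sketch): two column vectors can span a subspace of dimension at most 2, and the rank of \(A\) confirms this.

```python
import numpy as np

A = np.array([[1.0,  0.0],
              [0.5, -2.0],
              [0.0,  3.0]])

# The columns span a subspace of dimension rank(A) <= 2 < 3,
# so they cannot form a basis of R^3.
print(np.linalg.matrix_rank(A))  # 2
```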

\(\delta\).4#

Is \(\mathbb{R}^2\) a linear subspace of \(\mathbb{R}^3\)?

Why or why not?

Check all relevant definitions and facts, and apply them

This is a bit of a trick question, but to solve it you just need to look carefully at the definitions (as always).

A linear subspace of \(\mathbb{R}^3\) is a subset of \(\mathbb{R}^3\) with certain properties. \(\mathbb{R}^3\) is a collection of 3-tuples \((x_1, x_2, x_3)\) where each \(x_i\) is a real number. Elements of \(\mathbb{R}^2\) are 2-tuples (pairs), and hence not elements of \(\mathbb{R}^3\).

Therefore \(\mathbb{R}^2\) is not a subset of \(\mathbb{R}^3\), and in particular not a linear subspace of \(\mathbb{R}^3\).

\(\delta\).5#

Show that if \(T \colon \mathbb{R}^K \to \mathbb{R}^N\) is a linear function then \(0 \in \mathrm{kernel}(T)\).

Check all relevant definitions and facts, and apply them

Let \(T\) be as in the question. We need to show that \(T 0 = 0\). Here’s one proof. We know from the definition of scalar multiplication that \(0 x = 0\) for any vector \(x\). Hence, letting \(x\) and \(y\) be any vectors in \(\mathbb{R}^K\) and applying the definition of linearity,

\[ T0 = T(0 x + 0 y) = 0 Tx + 0 T y = 0 + 0 = 0 \]
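As a concrete illustration (a minimal NumPy sketch, using the standard fact that every linear map from \(\mathbb{R}^K\) to \(\mathbb{R}^N\) can be represented by an \(N \times K\) matrix, with an arbitrarily chosen matrix here):

```python
import numpy as np

# Represent a linear map T: R^3 -> R^2 by an arbitrary 2 x 3 matrix A,
# so that T(x) = A @ x.  Then T(0) = A @ 0 = 0, i.e. 0 is in the kernel.
A = np.array([[ 1.0, 2.0, 0.0],
              [-1.0, 0.5, 3.0]])
print(A @ np.zeros(3))  # [0. 0.]
```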

\(\delta\).6#

Let \(S\) be any nonempty subset of \(\mathbb{R}^N\) with the following two properties:

  • \(x, y \in S \implies x + y \in S\)

  • \(c \in \mathbb{R}\) and \(x \in S \implies cx \in S\)

Is \(S\) a linear subspace of \(\mathbb{R}^N\)?

Check all relevant definitions and facts, and apply them

Yes, \(S\) must be a linear subspace of \(\mathbb{R}^N\). To see this, pick any \(x\) and \(y\) in \(S\) and any scalars \(\alpha, \beta\). To establish our claim we need to show that \(z := \alpha x + \beta y\) is in \(S\). To see that this is so, observe that by the second property we have \(u := \alpha x \in S\) and \(v := \beta y \in S\). By the first property we then have \(u + v \in S\). In other words, \(z \in S\) as claimed.

\(\delta\).7#

If \(S\) is a linear subspace of \(\mathbb{R}^N\) then any linear combination of \(K\) elements of \(S\) is also in \(S\). Show this for the case \(K = 3\).

Check all relevant definitions and facts, and apply them

Let \(x_i \in S\) and \(\alpha_i \in \mathbb{R}\) for \(i=1,2,3\). We claim that

\[ \alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 \in S \]

To see this let \(y := \alpha_1 x_1 + \alpha_2 x_2\). By the definition of linear subspaces we know that \(y \in S\). Using the definition of linear subspaces again we have \(y + \alpha_3 x_3 \in S\). Hence the expression above is confirmed.

\(\delta\).8#

Let \(\{x_1, x_2\}\) be a linearly independent set in \(\mathbb{R}^2\) and let \(\gamma\) be a nonzero scalar.

Is it true that \(\{\gamma x_1, \gamma x_2\}\) is also linearly independent?

Check all relevant definitions and facts, and apply them

The answer is yes. Here’s one proof: Suppose to the contrary that \(\{\gamma x_1, \gamma x_2\}\) is linearly dependent. Then one element can be written as a linear combination of the others. In our setting with only two vectors, this translates to \(\gamma x_1 = \alpha \gamma x_2\) for some \(\alpha\). Since \(\gamma \ne 0\) we can multiply each side by \(1/\gamma\) to get \(x_1 = \alpha x_2\). But then \(x_1\) is a scalar multiple of \(x_2\), which contradicts linear independence of \(\{x_1, x_2\}\).

Here’s another proof: Take any \(\alpha_1, \alpha_2 \in \mathbb{R}\) with

\[ \alpha_1 \gamma x_1 + \alpha_2 \gamma x_2 = 0 \]

We need to show that \(\alpha_1 = \alpha_2 = 0\). To see this, observe that

\[ \alpha_1 \gamma x_1 + \alpha_2 \gamma x_2 = \gamma (\alpha_1 x_1 + \alpha_2 x_2) \]

Hence \(\gamma (\alpha_1 x_1 + \alpha_2 x_2) = 0\). Since \(\gamma \ne 0\), the only way this could occur is that \(\alpha_1 x_1 + \alpha_2 x_2 = 0\). But \(\{x_1, x_2\}\) is linearly independent, so this implies that \(\alpha_1 = \alpha_2 = 0\). The proof is done.
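A numerical illustration of the result (a minimal NumPy sketch with arbitrarily chosen vectors and scalar, not part of the proof): for two vectors in \(\mathbb{R}^2\), linear independence is equivalent to a nonzero determinant of the matrix with those vectors as columns, and scaling both columns by \(\gamma \ne 0\) multiplies the determinant by \(\gamma^2\), leaving it nonzero.

```python
import numpy as np

x1 = np.array([1.0, 2.0])   # an arbitrary linearly independent pair
x2 = np.array([3.0, 1.0])
gamma = -2.5                # an arbitrary nonzero scalar

det_original = np.linalg.det(np.column_stack([x1, x2]))
det_scaled = np.linalg.det(np.column_stack([gamma * x1, gamma * x2]))

print(det_original)  # -5.0, nonzero
print(det_scaled)    # gamma**2 * (-5.0) = -31.25, still nonzero
```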

\(\delta\).9#

Is

\[\begin{split} z= \begin{pmatrix} -3.98 \\ 11.73 \\ -4.32 \end{pmatrix} \end{split}\]

in the span of \(X:=\{x_1, x_2, x_3\}\), where

\[\begin{split} x_1= \begin{pmatrix} -4 \\ 0 \\ 0 \end{pmatrix}, \;\; x_2= \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \;\; x_3= \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}? \end{split}\]

Check all relevant definitions and facts, and apply them

The direct way to answer the question is to check whether the given vector is a linear combination of the three vectors in \(X\). If this is the case, then by definition it is in the required span. To establish this, we have to solve a system of linear equations of the form

\[\begin{split} \alpha_1 \begin{pmatrix} -4 \\ 0 \\ 0 \end{pmatrix} + \alpha_2 \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + \alpha_3 \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} -3.98 \\ 11.73 \\ -4.32 \end{pmatrix} \end{split}\]

But there is an easier way to do this!

We know that any linearly independent set of 3 vectors in \(\mathbb{R}^3\) will span \(\mathbb{R}^3\). Since \(z \in \mathbb{R}^3\), this will include \(z\). So all we need to do is show that \(X\) is linearly independent. To this end, take any scalars \(\alpha_1, \alpha_2, \alpha_3\) with

\[\begin{split} \alpha_1 \begin{pmatrix} -4 \\ 0 \\ 0 \end{pmatrix} + \alpha_2 \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + \alpha_3 \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} = 0 := \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{split}\]

Writing this out as a system of three equations gives \(-4\alpha_1 + \alpha_2 = 0\), \(2\alpha_2 + \alpha_3 = 0\) and \(-\alpha_3 = 0\). Solving from the last equation upward yields \(\alpha_3 = 0\), then \(\alpha_2 = 0\), then \(\alpha_1 = 0\). Hence the only solution is \(\alpha_1 = \alpha_2 = \alpha_3 = 0\), and the set \(X\) is linearly independent.

Clearly, the second system is much easier to solve than the first.
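Both approaches can be verified numerically (a minimal NumPy sketch, not part of the textbook solution):

```python
import numpy as np

# Columns of X are x1, x2, x3
X = np.array([[-4.0, 1.0,  0.0],
              [ 0.0, 2.0,  1.0],
              [ 0.0, 0.0, -1.0]])
z = np.array([-3.98, 11.73, -4.32])

# Nonzero determinant => the columns are linearly independent and span R^3
print(np.linalg.det(X))        # 8.0

# The coefficients expressing z as a combination of x1, x2, x3
alphas = np.linalg.solve(X, z)
print(alphas)                  # approximately [1.92125, 3.705, 4.32]
```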