Tutorial problems delta \(\delta\)
Note
These problems are designed to help you practice the concepts covered in the lectures. Not all of them may be covered in the tutorial; those left out are for additional practice on your own.
\(\delta\).1
This example appears to be part of an application of a linear version of a Keynesian cross model.
Solve the following system of linear equations:
[Bradley, 2013] Progress Exercises 9.3, Question 5.
It would be helpful to review the Gauss-Jordan elimination technique, for example here
Consider the following system of three linear equations in three unknown variables:
This system of equations can be rewritten as
The augmented row matrix representation of this system of equations is
Note that
Thus we can conclude that the unique solution to this system of equations is
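The elimination above can be cross-checked numerically. Since the exercise's coefficient matrix is not reproduced here, the system below is a hypothetical placeholder; the sketch only illustrates the workflow.

```python
import numpy as np

# Hypothetical 3x3 system Ax = b standing in for the one in the exercise
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0, 2.0],
              [3.0, -1.0, 1.0]])
b = np.array([3.0, 13.0, 4.0])

# np.linalg.solve uses LU factorization, the numerical analogue of the
# elimination steps performed by hand above
x = np.linalg.solve(A, b)
print(x)

# Sanity check: the residual Ax - b should be (numerically) zero
assert np.allclose(A @ x, b)
```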
\(\delta\).2
Find the inverse matrix for the following matrix or show that it does not exist:
[Sydsæter, Hammond, Strøm, and Carvajal, 2016] Section 16.6, Problem 2
It would be helpful to review the Gauss-Jordan elimination technique for computing inverse matrices, for example here
Note that
Since the determinant of the matrix \(C\) is not zero, we know that \(C\) is a non-singular matrix. This means that it is invertible, so that \(C^{-1}\) does exist. We will apply Gauss-Jordan elimination to an appropriate augmented row matrix to find \(C^{-1}\).
The augmented row matrix with which we will begin is
Note that
Thus we can conclude that the inverse matrix for \(C\) is given by
As an exercise, you should check that \(C C^{-1}=C^{-1} C=I\).
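This check can also be carried out numerically. Because \(C\) itself is not reproduced here, the matrix below is a hypothetical stand-in; the sketch mirrors the determinant test and the verification \(C C^{-1}=C^{-1} C=I\).

```python
import numpy as np

# Hypothetical stand-in for the matrix C from the exercise
C = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [2.0, 0.0, 1.0]])

# C is invertible if and only if its determinant is nonzero
print(np.linalg.det(C))

C_inv = np.linalg.inv(C)

# Verify the defining property of the inverse in both orders
I = np.eye(3)
assert np.allclose(C @ C_inv, I) and np.allclose(C_inv @ C, I)
```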
\(\delta\).3
Consider the matrix \(A\) defined by
Do the columns of this matrix form a basis of \(\mathbb{R}^3\)?
Why or why not?
Check all relevant definitions and facts, and apply them
No, these two vectors do not form a basis of \(\mathbb{R}^3\).
If they did then \(\mathbb{R}^3\) would be spanned by just two vectors. This is impossible.
If two vectors were enough to form a basis of \(\mathbb{R}^3\), then every basis would have to have two elements, and the dimension of the space \(\mathbb{R}^3\) would have to equal 2. But we know that the set of \(N\) canonical basis vectors forms a basis of \(\mathbb{R}^N\), and thus the dimension of \(\mathbb{R}^3\) is equal to 3.
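The same point can be made numerically with a rank computation. The columns below are hypothetical, since \(A\) is not reproduced here; what matters is only that there are two of them.

```python
import numpy as np

# Hypothetical 3x2 matrix standing in for A: two column vectors in R^3
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# rank(A) <= min(3, 2) = 2 < 3 = dim R^3, so the two columns cannot
# span R^3 and therefore cannot form a basis of it
print(np.linalg.matrix_rank(A))  # 2
```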
\(\delta\).4
Is \(\mathbb{R}^2\) a linear subspace of \(\mathbb{R}^3\)?
Why or why not?
Check all relevant definitions and facts, and apply them
This is a bit of a trick question, but to solve it you just need to look carefully at the definitions (as always).
A linear subspace of \(\mathbb{R}^3\) is a subset of \(\mathbb{R}^3\) with certain properties. \(\mathbb{R}^3\) is a collection of 3-tuples \((x_1, x_2, x_3)\) where each \(x_i\) is a real number. Elements of \(\mathbb{R}^2\) are 2-tuples (pairs), and hence not elements of \(\mathbb{R}^3\).
Therefore \(\mathbb{R}^2\) is not a subset of \(\mathbb{R}^3\), and in particular not a linear subspace of \(\mathbb{R}^3\).
\(\delta\).5
Show that if \(T \colon \mathbb{R}^K \to \mathbb{R}^N\) is a linear function then \(0 \in \mathrm{kernel}(T)\).
Check all relevant definitions and facts, and apply them
Let \(T\) be as in the question. We need to show that \(T 0 = 0\). Here's one proof. We know from the definition of scalar multiplication that \(0 x = 0\) for any vector \(x\). Hence, letting \(x\) and \(y\) be any vectors in \(\mathbb{R}^K\) and applying the definition of linearity,
\[
T0 = T(0x + 0y) = 0 \, Tx + 0 \, Ty = 0
\]
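As a quick numerical illustration (the matrix below is an arbitrary example, not taken from the question): any \(N \times K\) matrix represents a linear map from \(\mathbb{R}^K\) to \(\mathbb{R}^N\), and it always maps the zero vector to the zero vector.

```python
import numpy as np

# An arbitrary 2x3 matrix representing a linear map T : R^3 -> R^2
T = np.array([[1.0, -2.0, 0.5],
              [3.0, 0.0, 4.0]])

# T applied to the zero vector of R^3 gives the zero vector of R^2,
# i.e. 0 is in the kernel of T
print(T @ np.zeros(3))  # [0. 0.]
```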
\(\delta\).6
Let \(S\) be any nonempty subset of \(\mathbb{R}^N\) with the following two properties:
(\(\star\)) \(x, y \in S \implies x + y \in S\)
(\(\star\star\)) \(c \in \mathbb{R}\) and \(x \in S \implies cx \in S\)
Is \(S\) a linear subspace of \(\mathbb{R}^N\)?
Check all relevant definitions and facts, and apply them
Yes, \(S\) must be a linear subspace of \(\mathbb{R}^N\). To see this, pick any \(x\) and \(y\) in \(S\) and any scalars \(\alpha, \beta\). To establish our claim we need to show that \(z := \alpha x + \beta y\) is in \(S\). To see that this is so, observe that by (\(\star\star\)) we have \(u := \alpha x \in S\) and \(v := \beta y \in S\). By (\(\star\)) we then have \(u + v \in S\). In other words, \(z \in S\) as claimed.
\(\delta\).7
If \(S\) is a linear subspace of \(\mathbb{R}^N\) then any linear combination of \(K\) elements of \(S\) is also in \(S\). Show this for the case \(K = 3\).
Check all relevant definitions and facts, and apply them
Let \(x_i \in S\) and \(\alpha_i \in \mathbb{R}\) for \(i=1,2,3\). We claim that
\[
\alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 \in S
\]
To see this let \(y := \alpha_1 x_1 + \alpha_2 x_2\). By the definition of linear subspaces we know that \(y \in S\). Using the definition of linear subspaces again we have \(y + \alpha_3 x_3 \in S\). Hence the claim above is confirmed.
\(\delta\).8
Let \(\{x_1, x_2\}\) be a linearly independent set in \(\mathbb{R}^2\) and let \(\gamma\) be a nonzero scalar.
Is it true that \(\{\gamma x_1, \gamma x_2\}\) is also linearly independent?
Check all relevant definitions and facts, and apply them
The answer is yes. Here's one proof: Suppose to the contrary that \(\{\gamma x_1, \gamma x_2\}\) is linearly dependent. Then one element can be written as a linear combination of the others. In our setting with only two vectors, this translates to \(\gamma x_1 = \alpha \gamma x_2\) for some \(\alpha\) (relabeling the vectors if necessary). Since \(\gamma \ne 0\) we can multiply each side by \(1/\gamma\) to get \(x_1 = \alpha x_2\). But then \(x_1\) is a scalar multiple of \(x_2\), which contradicts linear independence of \(\{x_1, x_2\}\).
Here's another proof: Take any \(\alpha_1, \alpha_2 \in \mathbb{R}\) with
\[
\alpha_1 (\gamma x_1) + \alpha_2 (\gamma x_2) = 0
\]
We need to show that \(\alpha_1 = \alpha_2 = 0\). To see this, observe that
\[
\alpha_1 (\gamma x_1) + \alpha_2 (\gamma x_2) = \gamma (\alpha_1 x_1 + \alpha_2 x_2)
\]
Hence \(\gamma (\alpha_1 x_1 + \alpha_2 x_2) = 0\). Since \(\gamma \ne 0\), the only way this could occur is that \(\alpha_1 x_1 + \alpha_2 x_2 = 0\). But \(\{x_1, x_2\}\) is linearly independent, so this implies that \(\alpha_1 = \alpha_2 = 0\). The proof is done.
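The second proof has a concrete numerical counterpart: two vectors in \(\mathbb{R}^2\) are linearly independent exactly when the determinant of the matrix with those columns is nonzero, and scaling both columns by \(\gamma\) multiplies that determinant by \(\gamma^2\). The pair and the scalar below are hypothetical examples.

```python
import numpy as np

# Hypothetical linearly independent pair in R^2 and a nonzero scalar
x1, x2 = np.array([1.0, 2.0]), np.array([3.0, 1.0])
gamma = -0.5

# Both determinants are nonzero: the scaled pair is still independent,
# since det([g*x1, g*x2]) = g**2 * det([x1, x2])
print(np.linalg.det(np.column_stack([x1, x2])))                  # -5.0
print(np.linalg.det(np.column_stack([gamma * x1, gamma * x2])))  # -1.25
```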
\(\delta\).9
Is
in the span of \(X:=\{x_1, x_2, x_3\}\), where
Check all relevant definitions and facts, and apply them
The direct way to answer the question is to check whether the given vector is a linear combination of the other three. If this is the case, then by definition it is in the required span. To establish this, we have to solve a system of linear equations of the form
But there is an easier way to do this!
We know that any linearly independent set of 3 vectors in \(\mathbb{R}^3\) will span \(\mathbb{R}^3\). Since \(z \in \mathbb{R}^3\), the span will then contain \(z\). So all we need to do is show that \(X\) is linearly independent. To this end, take any scalars \(\alpha_1, \alpha_2, \alpha_3\) with
\[
\alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 = 0
\]
Writing this as a linear system of 3 equations and showing that the only solution is \(\alpha_1=\alpha_2=\alpha_3=0\) establishes that the set is linearly independent.
Clearly, the second system is much easier to solve than the first.
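Both routes are easy to sketch in code. The vectors below are hypothetical placeholders, since \(z\) and the \(x_i\) are not reproduced here.

```python
import numpy as np

# Hypothetical stand-ins for x1, x2, x3 and z from the exercise
X = np.column_stack([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0],
                     [1.0, 1.0, 0.0]])
z = np.array([2.0, 3.0, 4.0])

# Easier route: the columns are linearly independent iff rank(X) = 3,
# in which case they span R^3 and z is automatically in the span
print(np.linalg.matrix_rank(X))  # 3

# Direct route: solve X @ alpha = z for the coefficients of z
alpha = np.linalg.solve(X, z)
print(alpha)
```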