πŸ”¬ Tutorial problems beta \(\beta\)#

Note

These problems are designed to help you practice the concepts covered in the lectures. Not all problems may be covered in the tutorial; those left out are for additional practice on your own.

\(\beta\).1#

Compute the following limits:

  1. \(\quad \lim_{n \to \infty} \frac{1}{n}\)

  2. \(\quad \lim_{n \to \infty} \frac{n+2}{2n+1}\)

  3. \(\quad \lim_{n \to \infty} \frac{2n^2(n-2)}{(1-3n)(2+n^2)}\)

  4. \(\quad \lim_{n \to \infty} \frac{(n+1)!}{n! - (n+1)!}\)

  5. \(\quad \lim_{n \to \infty} \sqrt{\frac{9+n^2}{4n^2}}\)

Fact

  1. \(x_n \to a\) in \(\mathbb{R}^N\) if and only if \(\|x_n - a\| \to 0\) in \(\mathbb{R}\)

  2. If \(x_n \to x\) and \(y_n \to y\) then \(x_n + y_n \to x + y\)

  3. If \(x_n \to x\) and \(\alpha \in \mathbb{R}\) then \(\alpha x_n \to \alpha x\)

  4. If \(x_n \to x\) and \(y_n \to y\) then \(x_n y_n \to xy\)

  5. If \(x_n \to x\) and \(y_n \to y\) then \(x_n / y_n \to x/y\), provided \(y_n \ne 0\), \(y \ne 0\)

  6. If \(x_n \to x\) then \(x_n^p \to x^p\)

Let’s prove that \(\lim_{n \to \infty} \frac{1}{n} = 0\) using the definition of a limit.

First pick an arbitrary \(\epsilon > 0\). Now we have to come up with an \(N\) such that

\[ n \geq N \implies |1/n - 0| < \epsilon \]

Let \(N\) be the first integer greater than \(1/\epsilon\). Then

\[ n \geq N \implies n > 1/\epsilon \implies 1/n < \epsilon \implies |1/n - 0| < \epsilon \]
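The \(\epsilon\)-\(N\) argument can be spot-checked numerically. The sketch below (the helper name `n_threshold` is mine, not part of the formal proof) computes the first integer above \(1/\epsilon\) and verifies the implication for a range of \(n\):

```python
import math

def n_threshold(eps):
    """First integer N with N > 1/eps, so n >= N implies |1/n - 0| < eps."""
    return math.floor(1 / eps) + 1

# Spot-check the implication  n >= N  =>  |1/n - 0| < eps
for eps in (0.5, 0.1, 0.01):
    N = n_threshold(eps)
    assert all(abs(1 / n - 0) < eps for n in range(N, N + 1000))
```

Of course, checking finitely many \(n\) is only a sanity check; the proof itself covers all \(n \geq N\).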

\[\begin{split} \begin{array}{l} \lim_{n \to \infty} \frac{n+2}{2n+1} = \lim_{n \to \infty} \frac{1+2/n}{2+1/n} = \\ = \frac{1+\lim_{n \to \infty}2/n}{2+\lim_{n \to \infty}1/n} = \frac{1+0}{2+0} = \frac{1}{2} \end{array} \end{split}\]
\[\begin{split} \begin{array}{l} \lim_{n \to \infty} \frac{2n^2(n-2)}{(1-3n)(2+n^2)} = \lim_{n \to \infty} \frac{2n^3-4n^2}{2+n^2-6n-3n^3} = \\ = \lim_{n \to \infty} \frac{2-4/n}{2/n^3+1/n-6/n^2-3} = \frac{2-\lim_{n \to \infty}4/n}{\lim_{n \to \infty}2/n^3+\lim_{n \to \infty}1/n-\lim_{n \to \infty}6/n^2-3} = \\ = \frac{2-0}{0+0-0-3} = -\frac{2}{3} \end{array} \end{split}\]

Note that the factorial operation is defined as \((n+1)! = (n+1) \cdot n \cdot (n-1) \cdot \ldots \cdot 1\).

\[\begin{split} \begin{array}{l} \lim_{n \to \infty} \frac{(n+1)!}{n! - (n+1)!} = \lim_{n \to \infty} \frac{(n+1)n!}{n!(1- (n+1))} = \\ = \lim_{n \to \infty} \frac{n+1}{1- n -1} = \lim_{n \to \infty} \frac{1+1/n}{(-1)} = \\ = \frac{1+\lim_{n \to \infty} 1/n}{(-1)} = -\frac{1+0}{1} = -1 \end{array} \end{split}\]
\[\begin{split} \begin{array}{l} \lim_{n \to \infty} \sqrt{\frac{9+n^2}{4n^2}} = \sqrt{\lim_{n \to \infty} \frac{9/n^2+1}{4}} = \\ = \sqrt{\frac{\lim_{n \to \infty}9/n^2+1}{4}} = \sqrt{\frac{0+1}{4}} = \frac{1}{2} \end{array} \end{split}\]
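The limits computed above can be sanity-checked numerically by evaluating each sequence at a large index (this is an illustration, not a proof; all names below are mine). Exact rational arithmetic is used for the factorial sequence to avoid overflow in floating point:

```python
import math
from fractions import Fraction

n = 10**6  # a large index as a stand-in for n -> infinity

seqs = [
    (lambda n: 1 / n, 0.0),
    (lambda n: (n + 2) / (2 * n + 1), 1 / 2),
    (lambda n: 2 * n**2 * (n - 2) / ((1 - 3 * n) * (2 + n**2)), -2 / 3),
    (lambda n: math.sqrt((9 + n**2) / (4 * n**2)), 1 / 2),
]
for x, limit in seqs:
    assert abs(x(n) - limit) < 1e-4

# The factorial sequence is evaluated exactly at a moderate index:
# (m+1)! / (m! - (m+1)!) simplifies to -(m+1)/m, close to -1
m = 200
x_m = Fraction(math.factorial(m + 1), math.factorial(m) - math.factorial(m + 1))
assert abs(float(x_m) + 1) < 0.01
```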

\(\beta\).2#

Show that the Cobb-Douglas production function \(f(k,l) = k^\alpha l^\beta\) from \([0,\infty) \times [0,\infty)\) to \(\mathbb{R}\) is continuous everywhere in its domain.

You can use the fact that, for any \(a \in \mathbb{R}\), the function \(g(x) = x^a\) is continuous at any \(x \in [0,\infty)\).

Also, remember that norm convergence implies element by element convergence.

Let \((k, \ell)\) be any point in the domain \(A = [0,\infty) \times [0,\infty)\), and let \(\{(k_n, \ell_n)\}\) be any sequence converging to \((k, \ell)\) in the sense of convergence in \(\mathbb{R}^2\). We wish to show that

\[ f(k_n, \ell_n) \to f(k, \ell) \]

Since \((k_n, \ell_n) \to (k, \ell)\) in \(\mathbb{R}^2\), we know from the facts on convergence in norm that the individual components converge in \(\mathbb{R}\). That is,

\[ k_n \to k \quad \text{and} \quad \ell_n \to \ell \]

We also know from the facts that, for any \(a\), the function \(g(x) = x^a\) is continuous at \(x\). It follows from the definition of continuity and the convergence in \(\mathbb{R}\) above that \(k_n^\alpha \to k^{\alpha}\) and \(\ell^{\beta}_n \to \ell^\beta\).

Moreover, we know that, for any sequences \(\{y_n\}\) and \(\{z_n\}\), if \(y_n \to y\) and \(z_n \to z\), then \(y_n z_n \to yz\). Hence

\[ k_n^\alpha \ell^{\beta}_n \to k^{\alpha}\ell^\beta \]

That is, \(f(k_n, \ell_n) \to f(k, \ell)\). Hence \(f\) satisfies the definition of continuity.
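A quick numerical illustration of this argument (not a proof): pick arbitrary parameter values for \(\alpha\) and \(\beta\) and a sample sequence \((k_n, \ell_n) \to (k, \ell)\), and watch \(f(k_n, \ell_n)\) approach \(f(k, \ell)\). All values below are illustrative choices of mine:

```python
alpha, beta = 0.3, 0.7   # illustrative Cobb-Douglas parameters

def f(k, l):
    return k**alpha * l**beta

# Sample sequence (k_n, l_n) = (k + 1/n, l - 1/n) converging to (k, l)
k, l = 2.0, 3.0
vals = [f(k + 1 / n, l - 1 / n) for n in (10, 100, 1000, 10000)]
errors = [abs(v - f(k, l)) for v in vals]

assert errors == sorted(errors, reverse=True)  # approximation errors shrink
assert errors[-1] < 1e-3
```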

\(\beta\).3#

Let \(A\) be the set of all consumption pairs \((c_1,c_2)\) such that \(c_1 \ge 0\), \(c_2 \ge 0\) and \(p_1 c_1 + p_2 c_2 \le M\). Here \(p_1\), \(p_2\) and \(M\) are positive constants. Show that \(A\) is a closed subset of \(\mathbb{R}^2\).

Weak inequalities are preserved under limits:

If \(x_n \leq y_n\) for all \(n\) then \(\lim_{n \to \infty} x_n \leq \lim_{n \to \infty} y_n\), including the case of constant sequence \(y_n=a\) for all \(n\).

To show that \(A\) is closed, we need to show that the limit of any sequence contained in \(A\) is also in \(A\). To this end, let \(\{{\bf x}_n\}\) be an arbitrary sequence in \(A\) converging to a point \({\bf x} \in \mathbb{R}^2\). Since \({\bf x}_n \in A\) for all \(n\), we have \({\bf x}_n \geq {\bf 0}\) in the sense of the component-wise vector inequality and \({\bf x}_n' {\bf p} \leq M\), where \({\bf p} = (p_1, p_2)\). We need to show that the same is true for \({\bf x}\).

Since \({\bf x}_n \to {\bf x}\), we have \({\bf x}_n' {\bf p} \to {\bf x}' {\bf p}\). Since limits preserve weak inequalities and \({\bf x}_n' {\bf p} \leq M\) for all \(n\), we have \({\bf x}' {\bf p} \leq M\). Hence it remains only to show that \({\bf x} \geq {\bf 0}\). Again using the fact that weak inequalities are preserved under limits, combined with \({\bf x}_n \geq {\bf 0}\) for all \(n\), gives \({\bf x} \geq {\bf 0}\) as required.
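A small numerical sketch of the argument (with arbitrary illustrative values for \(p_1\), \(p_2\), \(M\)): a sequence of points on the budget line stays in \(A\), and so does its limit, which lies on the boundary of \(A\):

```python
p1, p2, M = 2.0, 3.0, 12.0   # illustrative prices and income

def in_A(c1, c2, tol=1e-9):
    """Membership check for the budget set, with a small float tolerance."""
    return c1 >= -tol and c2 >= -tol and p1 * c1 + p2 * c2 <= M + tol

# x_n = (1/n, (M - p1/n)/p2) lies in A for every n and converges to (0, M/p2)
for n in (1, 10, 100, 1000):
    c1 = 1 / n
    c2 = (M - p1 * c1) / p2
    assert in_A(c1, c2)

assert in_A(0.0, M / p2)  # the limit point is also in A
```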

\(\beta\).4#

Let \(f \colon [-1, 1] \to \mathbb{R}\) be defined by \(f(x) = 1 - |x|\), where \(|x|\) is the absolute value of \(x\).

  • Is the point \(x = 0\) a maximizer of \(f\) on \([-1, 1]\)?

  • Is it a unique maximizer?

  • Is it an interior maximizer?

  • Is it stationary?

Draw a graph of the function \(f\).

The point \(x=0\) is indeed a maximizer, since \(f(x) = 1 -|x| \leq 1 = f(0)\) for any \(x \in [-1, 1]\) (\(|x|=0\) if and only if \(x=0\)).

It is also a unique maximizer, since no other point is a maximizer (because \(1 -|x| < 1\) for any other \(x\)).

It is an interior maximizer since \(0\) is not an end point of \([-1, 1]\).

It is not stationary because \(f\) is not differentiable at this point (sketch the graph if you like) and hence cannot satisfy \(f'(x)=0\).
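The claims above can be illustrated with a crude grid evaluation (an illustration only; the grid is my arbitrary choice):

```python
f = lambda x: 1 - abs(x)

# Evaluate f on a fine grid over [-1, 1]
grid = [k / 100 for k in range(-100, 101)]

assert max(grid, key=f) == 0.0                       # maximizer at x = 0
assert all(f(x) < f(0.0) for x in grid if x != 0.0)  # and it is unique
```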

\(\beta\).5#

Consider function \(f \colon X \to \mathbb{R}\) defined by \(f(x) = \frac{1}{x} e^x\).

Find the minimizer(s) and the maximizer(s) of this function on \(X = (0, 2]\).

Follow all the required steps and explain your reasoning.

Review the algorithm for univariate optimization in the lecture notes

Following the algorithm for univariate optimization from the lecture notes:

  1. Locate all stationary points

  • according to the definition the stationary points are those interior points where \(f'(x) = 0\)

\[ f'(x) = \frac{d}{dx} \left( \frac{e^x}{x} \right) = \frac{e^x}{x} -\frac{1}{x^2} e^x = \frac{e^x}{x^2} \left( x - 1 \right) \]
\[ f'(x) =0 \iff x=1 \]
  2. Evaluate the function at the stationary points and the boundaries, in our case only one boundary \(x=2\).

\[ f(1) = e \approx 2.718, \quad f(2) = \frac{e^2}{2} \approx 3.694 \]

Evaluating the function at \(x=0\) is not possible because the function is not defined there! We have to be careful and investigate the behavior of the function as \(x\) approaches \(0\).

Because \(\exp(x)\) is always positive, the function takes on large values for small \(x\): the smaller \(x\) is, the larger the function becomes. This means that the function is unbounded as \(x\) approaches \(0\) from the right. Therefore, we can move forward treating the boundary value as \(+\infty\).

  3. Compare the values of the function at the stationary points and the boundaries to pick out the solution.

The minimizer of the function on \((0,2]\) is \(x=1\) because \(\min\{e,e^2/2,\infty\} = e\).

The maximizer of the function would have to be at \(x=0\) because \(\max\{e,e^2/2,\infty\} = \infty\), but the function is not defined at \(x=0\). We showed that the function grows without bound as \(x\) approaches \(0\), so there is no point at which it attains a maximum value: it is always possible to take a step towards zero and increase the function a little more. The conclusion is that there is no maximizer on \((0,2]\).
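The steps of the algorithm can be checked numerically (an illustration of the worked solution above; the grid resolution is my arbitrary choice):

```python
import math

f = lambda x: math.exp(x) / x
fprime = lambda x: math.exp(x) * (x - 1) / x**2  # derivative from the solution

# Step 1: x = 1 is the unique stationary point
assert abs(fprime(1.0)) < 1e-12

# Step 2: values at the stationary point and the boundary x = 2
assert abs(f(1.0) - math.e) < 1e-12
assert abs(f(2.0) - math.e**2 / 2) < 1e-12

# f blows up near 0 from the right, so no maximizer exists on (0, 2]
assert f(1e-3) > f(2.0) and f(1e-6) > f(1e-3)

# Step 3: a crude grid search over (0, 2] confirms x = 1 as the minimizer
grid = [x / 1000 for x in range(1, 2001)]
assert min(grid, key=f) == 1.0
```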

\(\beta\).6#

Find an example of a nonlinear univariate function \(f \colon D \subset \mathbb{R} \to \mathbb{R}\) that:

  • (a) has exactly one maximizer and one minimizer

  • (b) has neither a maximizer nor a minimizer

  • (c) has an infinite number of maximizers and minimizers

  • (d) has exactly \(n\) maximizers and \(n\) minimizers, for a given finite \(n\)

Remember to define both the function \(f(x)\) and its domain \(D\) for each case.

First, review the relevant definitions. Then, try to draft some ideas on a piece of paper. Think of how they can be expressed in mathematical terms.

This is a creative problem which has many possible correct answers.

As always, start with the definitionsβ€”in this case definitions of maximizer and minimizer.

Here is one possible solution:

(a) Any linear (affine) function \(f(x)=ax+b\), \(a \ne 0\), on a closed interval \([A,B]\) has no stationary points, and therefore has exactly one maximizer and one minimizer, at the endpoints of the interval.

(b) Any linear (affine) function \(f(x)=ax+b\), \(a \ne 0\), on an open interval \((A,B)\) has no maximizer and no minimizer, similarly to the non-existence example in the lecture notes. A different idea is to rely on positive and negative infinity in the domain \(D\): for example, \(f(x) = \tan(\pi x)\) takes the value \(0\) at every integer point and has a vertical asymptote at every half-integer point.

(c) A constant function \(f(x) = C\) should immediately come to mind. Another possibility is a cyclic trigonometric function such as \(f(x) = \sin(x)\) or \(f(x) = \cos(x)\) on the entire real line: these only return values between \(-1\) and \(1\), and attain each of these two values infinitely many times.

(d) This is the trickiest case, but one solution is to adjust the domain of a trigonometric function such as \(f(x) = \cos(\pi x)\). This function attains \(1\) at \(x \in \{\dots,-2,0,2,4,\dots\}\) and attains \(-1\) at \(x \in \{\dots,-3,-1,1,3,5,\dots\}\). Therefore, it has exactly \(n\) maximizers and \(n\) minimizers if we define \(D = [0,2n-1]\).
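For case (d), a quick numerical count can confirm that \(f(x) = \cos(\pi x)\) on \(D = [0, 2n-1]\) has exactly \(n\) maximizers and \(n\) minimizers (shown here for the illustrative choice \(n = 3\)):

```python
import math

n = 3                       # illustrative choice of n
D_end = 2 * n - 1
f = lambda x: math.cos(math.pi * x)

# A half-integer grid over [0, 2n-1] hits every candidate extremum of cos(pi x)
grid = [k / 2 for k in range(0, 2 * D_end + 1)]

maximizers = [x for x in grid if abs(f(x) - 1) < 1e-9]   # f attains 1
minimizers = [x for x in grid if abs(f(x) + 1) < 1e-9]   # f attains -1

assert len(maximizers) == n and len(minimizers) == n
```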