Envelope theorem#
Value function and parameters of optimization problems#
Let's start by recalling the definition of a general optimization problem
Definition
The general form of the optimization problem is

\[
V(\theta) = \max_{x} f(x,\theta)
\;\text{ subject to }\;
g_i(x,\theta) = 0, \; i\in\{1,\dots,I\},
\quad
h_j(x,\theta) \le 0, \; j\in\{1,\dots,J\}
\]

where:
\(f(x,\theta) \colon \mathbb{R}^N \times \mathbb{R}^K \to \mathbb{R}\) is an objective function
\(x \in \mathbb{R}^N\) are decision/choice variables
\(\theta \in \mathbb{R}^K\) are parameters
\(g_i(x,\theta) = 0, \; i\in\{1,\dots,I\}\) where \(g_i \colon \mathbb{R}^N \times \mathbb{R}^K \to \mathbb{R}\), are equality constraints
\(h_j(x,\theta) \le 0, \; j\in\{1,\dots,J\}\) where \(h_j \colon \mathbb{R}^N \times \mathbb{R}^K \to \mathbb{R}\), are inequality constraints
\(V(\theta) \colon \mathbb{R}^K \to \mathbb{R}\) is a value function
This lecture focuses on the value function \(V(\theta)\) of the optimization problem, and how it depends on the parameters \(\theta\).

In economics we are interested in how the optimized behavior changes when the circumstances of the decision-making process change:
income/budget/wealth changes
intertemporal effects of changes in other time periods
We would like to establish the properties of the value function \(V(\theta)\):
continuity \(\rightarrow\) The maximum theorem (not covered here, see additional lecture notes)
changes/derivative (if differentiable) \(\rightarrow\) Envelope theorem
monotonicity \(\rightarrow\) Supermodularity and increasing differences (not covered here, see Sundaram ch.10)
Unconstrained optimization case#
Let's start with an unconstrained optimization problem

\[
V(\theta) = \max_{x} f(x,\theta)
\]

where:
\(f(x,\theta) \colon \mathbb{R}^N \times \mathbb{R}^K \to \mathbb{R}\) is an objective function
\(x \in \mathbb{R}^N\) are decision/choice variables
\(\theta \in \mathbb{R}^K\) are parameters
\(V(\theta) \colon \mathbb{R}^K \to \mathbb{R}\) is a value function
Envelope theorem for unconstrained problems
Let \(f(x,\theta) \colon \mathbb{R}^N \times \mathbb{R}^K \to \mathbb{R}\) be a differentiable function, and \(x^\star(\theta)\) be the maximizer of \(f(x,\theta)\) for every \(\theta\). Suppose that \(x^\star(\theta)\) is a differentiable function itself. Then the value function of the problem \(V(\theta) = f\big(x^\star(\theta),\theta\big)\) is differentiable w.r.t. \(\theta\) and

\[
\frac{\partial V}{\partial \theta_j}(\theta) = \frac{\partial f}{\partial \theta_j}\big(x^\star(\theta),\theta\big), \quad j = 1,\dots,K
\]
In other words, the marginal changes in the value function are given by the partial derivative of the objective function evaluated at the maximizer.
Proof
See Theorem 19.4 in Simon and Blume (1994), pp. 453-454
Note
When \(K=1\), so that \(\theta\) is a scalar, the envelope theorem can be written as

\[
\frac{d V}{d \theta}(\theta) = \frac{\partial f}{\partial \theta}\big(x^\star(\theta),\theta\big)
\]

so that the meaning is carried only by the change from the total derivative symbol on the left to the partial derivative symbol on the right.
Example
Consider \(f(x,a) = -x^2 +2ax +4a^2 \to \max_x\).
What is the (approximate) effect of a unit increase in \(a\) on the attained maximum?
FOC: \(-2x+2a=0\), giving \(x^\star(a) = a\).
So, \(V(a) = f(a,a) = 5a^2\), and \(V'(a)=10a\). The value increases at a rate of \(10a\) per unit increase in \(a\).
Using the envelope theorem, we could go directly to

\[
V'(a) = \frac{\partial f}{\partial a}\big(x^\star(a),a\big) = 2x^\star(a) + 8a = 2a + 8a = 10a
\]
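As a quick sanity check, here is a minimal sketch that reproduces both routes symbolically; the use of `sympy` is our own choice and not part of the lecture.

```python
import sympy as sp

x, a = sp.symbols('x a', real=True)
f = -x**2 + 2*a*x + 4*a**2

# maximizer from the FOC df/dx = 0 (f is concave in x, so this is the max)
x_star = sp.solve(sp.diff(f, x), x)[0]    # -> a

# direct route: build V(a) = f(x*(a), a), then differentiate
V = f.subs(x, x_star)                     # -> 5*a**2
dV = sp.diff(V, a)                        # -> 10*a

# envelope route: differentiate f w.r.t. a first, then evaluate at x*(a)
envelope = sp.diff(f, a).subs(x, x_star)  # -> 10*a

assert sp.simplify(dV - envelope) == 0
print(dV, envelope)                       # 10*a 10*a
```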
Constrained optimization case#
Envelope theorem for constrained problems
Consider an equality constrained optimization problem

\[
V(\theta) = \max_{x} f(x,\theta)
\;\text{ subject to }\;
g_i(x,\theta) = 0, \; i\in\{1,\dots,I\}
\]

where:
\(f(x,\theta) \colon \mathbb{R}^N \times \mathbb{R}^K \to \mathbb{R}\) is an objective function
\(x \in \mathbb{R}^N\) are decision/choice variables
\(\theta \in \mathbb{R}^K\) are parameters
\(g_i(x,\theta) = 0, \; i\in\{1,\dots,I\}\) where \(g_i \colon \mathbb{R}^N \times \mathbb{R}^K \to \mathbb{R}\), are equality constraints
\(V(\theta) \colon \mathbb{R}^K \to \mathbb{R}\) is a value function
Assume that the maximizer correspondence \(\mathcal{D}^\star(\theta) = \mathrm{arg}\max f(x,\theta)\) is single-valued and can be represented by the function \(x^\star(\theta) \colon \mathbb{R}^K \to \mathbb{R}^N\), with the corresponding Lagrange multipliers \(\lambda^\star(\theta) \colon \mathbb{R}^K \to \mathbb{R}^I\).
Assume that both \(x^\star(\theta)\) and \(\lambda^\star(\theta)\) are differentiable, and that the constraint qualification assumption holds. Then

\[
\frac{\partial V}{\partial \theta_j}(\theta) = \frac{\partial \mathcal{L}}{\partial \theta_j}\big(x^\star(\theta),\lambda^\star(\theta),\theta\big), \quad j = 1,\dots,K,
\]

where \(\mathcal{L}(x,\lambda,\theta)\) is the Lagrangian of the problem.
Proof
Here is a version of the proof. The Lagrangian of the problem is

\[
\mathcal{L}(x,\lambda,\theta) = f(x,\theta) + \sum_{i=1}^{I} \lambda_i g_i(x,\theta)
\]

Start by exploring its partial derivatives with respect to \(\theta_{j}\), \(j=1,\dots,K\), evaluated at the optimum:

\[
\frac{\partial \mathcal{L}}{\partial \theta_j}\big(x^\star(\theta),\lambda^\star(\theta),\theta\big) =
\frac{\partial f}{\partial \theta_j}\big(x^\star(\theta),\theta\big) +
\sum_{i=1}^{I} \lambda_i^\star(\theta)\,\frac{\partial g_i}{\partial \theta_j}\big(x^\star(\theta),\theta\big)
\]

We use the fact that the equality constraints \(g_{i}(x^\star(\theta), \theta)=0\) hold for all \(i\), so that at the optimum the Lagrangian coincides with the value function, \(\mathcal{L}\big(x^\star(\theta),\lambda^\star(\theta),\theta\big) = f\big(x^\star(\theta),\theta\big) = V(\theta)\).

Now we differentiate the value function w.r.t. \(\theta_{j}\) and show that it is equal to the same expression as above. By the chain rule,

\[
\frac{\partial V}{\partial \theta_j}(\theta) =
\frac{\partial f}{\partial \theta_j}\big(x^\star(\theta),\theta\big) +
\sum_{n=1}^{N} \frac{\partial f}{\partial x_n}\big(x^\star(\theta),\theta\big)\,\frac{\partial x_n^\star}{\partial \theta_j}(\theta)
\]

Here we use the fact that the first order conditions hold at \((x^\star,\lambda^\star)\), and we have

\[
\frac{\partial f}{\partial x_n}\big(x^\star(\theta),\theta\big) =
-\sum_{i=1}^{I} \lambda_i^\star(\theta)\,\frac{\partial g_i}{\partial x_n}\big(x^\star(\theta),\theta\big),
\quad n = 1,\dots,N
\]

Continuing the main line,

\[
\frac{\partial V}{\partial \theta_j}(\theta) =
\frac{\partial f}{\partial \theta_j}\big(x^\star(\theta),\theta\big) -
\sum_{i=1}^{I} \lambda_i^\star(\theta) \sum_{n=1}^{N} \frac{\partial g_i}{\partial x_n}\big(x^\star(\theta),\theta\big)\,\frac{\partial x_n^\star}{\partial \theta_j}(\theta)
\]

Here we use the fact that the constraints hold for all \(\theta\), and thus differentiating both sides of \(g_i(x^\star(\theta),\theta) = 0\) w.r.t. \(\theta_j\) we have

\[
\sum_{n=1}^{N} \frac{\partial g_i}{\partial x_n}\big(x^\star(\theta),\theta\big)\,\frac{\partial x_n^\star}{\partial \theta_j}(\theta) =
-\frac{\partial g_i}{\partial \theta_j}\big(x^\star(\theta),\theta\big)
\]

Continuing the main line,

\[
\frac{\partial V}{\partial \theta_j}(\theta) =
\frac{\partial f}{\partial \theta_j}\big(x^\star(\theta),\theta\big) +
\sum_{i=1}^{I} \lambda_i^\star(\theta)\,\frac{\partial g_i}{\partial \theta_j}\big(x^\star(\theta),\theta\big) =
\frac{\partial \mathcal{L}}{\partial \theta_j}\big(x^\star(\theta),\lambda^\star(\theta),\theta\big)
\]
\(\blacksquare\)
Note
What about the inequality constraints?
Well, if the solution is interior and none of the constraints are binding, the unconstrained version of the envelope theorem applies. If any of the constraints are binding, their combination can be represented as a set of equality constraints, and the constrained version of the envelope theorem applies. Care needs to be taken to avoid changes in the parameters that lead to a switch in the set of binding constraints: such points are most likely points of non-differentiability of the value function, where the envelope theorem does not apply at all, as the sketch below illustrates.
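To see this concretely, consider the toy problem \(V(\theta) = \max_{0 \le x \le 1} \theta x\) (a hypothetical example of our own, not from the lecture): for \(\theta > 0\) the constraint \(x \le 1\) binds, for \(\theta < 0\) the constraint \(x \ge 0\) binds, and the value function \(V(\theta) = \max(\theta, 0)\) is kinked at the switch point \(\theta = 0\).

```python
def V(theta):
    # x* = 1 when theta > 0 (x <= 1 binds), x* = 0 when theta < 0 (x >= 0 binds)
    return max(theta, 0.0)

eps = 1e-6
slope_left = (V(0.0) - V(-eps)) / eps    # -> 0.0
slope_right = (V(eps) - V(0.0)) / eps    # -> 1.0
print(slope_left, slope_right)           # one-sided slopes disagree at theta = 0
```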
Example
Back to the log utility case

\[
V(p_1,p_2,m) = \max_{x_1,x_2} \big( \log x_1 + \log x_2 \big)
\;\text{ subject to }\;
p_1 x_1 + p_2 x_2 = m
\]

The Lagrangian is

\[
\mathcal{L}(x_1,x_2,\lambda) = \log x_1 + \log x_2 + \lambda (m - p_1 x_1 - p_2 x_2)
\]

Solution is

\[
x_1^\star = \frac{m}{2 p_1}, \quad
x_2^\star = \frac{m}{2 p_2}, \quad
\lambda^\star = \frac{2}{m}
\]

Value function is

\[
V(p_1,p_2,m) = \log\left(\frac{m}{2 p_1}\right) + \log\left(\frac{m}{2 p_2}\right) = 2 \log m - \log 4 - \log p_1 - \log p_2
\]

We can verify the Envelope theorem by noting that direct differentiation gives

\[
\frac{\partial V}{\partial m} = \frac{2}{m}, \quad
\frac{\partial V}{\partial p_1} = -\frac{1}{p_1}, \quad
\frac{\partial V}{\partial p_2} = -\frac{1}{p_2}
\]

And applying the envelope theorem we have

\[
\frac{\partial \mathcal{L}}{\partial m}\Big|_{(x^\star,\lambda^\star)} = \lambda^\star = \frac{2}{m}, \quad
\frac{\partial \mathcal{L}}{\partial p_1}\Big|_{(x^\star,\lambda^\star)} = -\lambda^\star x_1^\star = -\frac{1}{p_1}, \quad
\frac{\partial \mathcal{L}}{\partial p_2}\Big|_{(x^\star,\lambda^\star)} = -\lambda^\star x_2^\star = -\frac{1}{p_2}
\]
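The same verification can be scripted; the sketch below assumes `sympy` and the problem exactly as stated above.

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', positive=True)
p1, p2, m = sp.symbols('p1 p2 m', positive=True)

# Lagrangian of the problem as stated above
L = sp.log(x1) + sp.log(x2) + lam*(m - p1*x1 - p2*x2)

# solve the first order conditions for (x1, x2, lambda)
foc = [sp.diff(L, v) for v in (x1, x2, lam)]
sol = sp.solve(foc, [x1, x2, lam], dict=True)[0]
# sol == {x1: m/(2*p1), x2: m/(2*p2), lambda: 2/m}

# value function: since g = 0 at the optimum, L evaluated there equals V
V = L.subs(sol)
dV_dm = sp.simplify(sp.diff(V, m))   # direct differentiation -> 2/m

# envelope theorem: dL/dm evaluated at the optimum is just lambda*
env = sp.diff(L, m).subs(sol)        # -> 2/m

assert sp.simplify(dV_dm - env) == 0
```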
Lagrange multipliers as shadow prices#
In the equality constrained optimization problem, the Lagrange multiplier \(\lambda_i\) can be interpreted as the shadow price of the constraint \(g_i(x,\theta) = a\), where \(a\) plays the role of one of the parameters in \(\theta\): it measures the change in the value function resulting from a marginal change in \(a\), in other words from relaxing the constraint \(g_i(x,\theta) = a\).
Exercise: prove this statement
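A small numerical illustration (not a proof) of the shadow price interpretation, using the value function from the log utility example with prices and income of our own choosing: relaxing the budget by a small \(\Delta\) raises the value by approximately \(\lambda^\star \Delta\).

```python
import math

p1, p2, m, delta = 1.0, 2.0, 10.0, 1e-3

def V(m):
    # closed-form value function from the log utility example above
    return math.log(m / (2 * p1)) + math.log(m / (2 * p2))

lam_star = 2 / m                              # multiplier at the optimum
finite_diff = (V(m + delta) - V(m)) / delta   # gain in V per unit of extra m
print(lam_star, finite_diff)                  # 0.2 vs approximately 0.2
```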
References and reading#
References
[Simon and Blume, 1994]: 19.1, 19.2
[Sundaram, 1996]: chapter 9, 5.2.3