1. Homogeneous of Degree \(k\)
Definition (Homogeneity of degree \(k\)). A utility function \(u:\mathbb{R}^n\rightarrow \mathbb{R}\) is homogeneous of degree \(k\) if and only if for all \(x \in \mathbb{R}^n\) and all \(\lambda>0\), \(u(\lambda x)=\lambda^k u(x)\).
$$f(\lambda x_1,…,\lambda x_n)=\lambda^kf(x_1,…,x_n)$$
Properties
- Constant Returns to Scale: a CRTS production function is homogeneous of degree 1. An IRTS production function is homogeneous of degree \(k>1\); a DRTS one, of degree \(k<1\).
- The Marshallian demand is homogeneous of degree zero: \(x(\lambda p,\lambda w)=x(p,w)\). (It solves: maximise \(u(x)\) s.t. \(px\leq w\); scaling \(p\) and \(w\) together leaves the budget set unchanged. "No Money Illusion".)
- Excess demand is also homogeneous of degree zero, which follows directly from the homogeneity of Marshallian demand.
$$CRTS:\quad F(aK,aL)=aF(K,L) \quad a>0$$
$$IRTS:\quad F(aK,aL)>aF(K,L) \quad a>1$$
$$DRTS:\quad F(aK,aL)<aF(K,L) \quad a>1$$
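To make the returns-to-scale cases concrete, here is a small numerical sketch using a Cobb-Douglas production function \(F(K,L)=K^\alpha L^\beta\), which is homogeneous of degree \(\alpha+\beta\). (Cobb-Douglas is my illustrative choice, not the only functional form with these properties.)

```python
# Returns to scale for Cobb-Douglas F(K, L) = K**alpha * L**beta,
# which is homogeneous of degree k = alpha + beta.

def cobb_douglas(K, L, alpha, beta):
    return K**alpha * L**beta

K, L, a = 3.0, 5.0, 2.0  # scale both inputs by a = 2

for alpha, beta, label in [(0.3, 0.7, "CRTS"), (0.6, 0.7, "IRTS"), (0.3, 0.4, "DRTS")]:
    scaled = cobb_douglas(a * K, a * L, alpha, beta)      # F(aK, aL)
    direct = a ** (alpha + beta) * cobb_douglas(K, L, alpha, beta)  # a**k * F(K, L)
    print(label, scaled, direct)  # the two columns agree: F(aK,aL) = a**k F(K,L)
```

With \(\alpha+\beta=1\) doubling inputs exactly doubles output; with \(\alpha+\beta>1\) it more than doubles; with \(\alpha+\beta<1\) it less than doubles.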
2. Euler’s Theorem
Theorem (Euler's Theorem). Let \(f(x_1,…,x_n)\) be a function that is homogeneous of degree \(k\). Then,
$$ x_1\frac{\partial f(x)}{\partial x_1}+…+ x_n\frac{\partial f(x)}{\partial x_n} =kf(x) $$
or, in gradient notation,
$$ x\cdot \nabla f(x)=kf(x) $$
Proof: Differentiate \(f(tx_1,…,tx_n)=t^k f(x_1,…,x_n)\) w.r.t. \(t\), which gives \(\sum_i x_i \frac{\partial f}{\partial x_i}(tx)=kt^{k-1}f(x)\), and then set \(t=1\).
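Euler's Theorem can be verified symbolically; here is a quick sketch with SymPy (my choice of tool) on a degree-1 Cobb-Douglas function:

```python
import sympy as sp

K, L = sp.symbols("K L", positive=True)
alpha = sp.Rational(1, 3)
f = K**alpha * L**(1 - alpha)  # homogeneous of degree k = 1

# Euler's Theorem: K * f_K + L * f_L should equal k * f with k = 1
euler_sum = K * sp.diff(f, K) + L * sp.diff(f, L)
print(sp.simplify(euler_sum - f))  # -> 0
```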
P.S. We use Euler's Theorem in the analysis of the Solow model: under CRTS, factor payments exactly exhaust output.
3. Envelope Theorem
Motivation:
Given \(y=ax^2+bx+c\) with \(a<0\) and \(b,c \in \mathbb{R}\) (so that \(y\) has a maximum), how does a change in the parameter \(a\) affect the maximum value of \(y\), denoted \(y^*\)?
We first define \(y^*=\max_{x} y= \max_{x}\, ax^2+bx+c \). The first-order condition gives \(x^*=-\frac{b}{2a}\); plugging it back into \(y\) yields \(y^*=\frac{4ac-b^2}{4a}\). Now, taking the derivative w.r.t. \(a\) gives \(\frac{\partial y^*}{\partial a}=\frac{b^2}{4a^2}\), which equals \((x^*)^2=\left.\frac{\partial y}{\partial a}\right|_{x=x^*}\). That is,
$$\frac{\partial y^*}{\partial a}= {\frac{\partial y}{\partial a}}|_{x=x^*} $$
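This motivating example can be checked symbolically; a sketch with SymPy (the quadratic is from the text, the tooling is my choice):

```python
import sympy as sp

a, b, c, x = sp.symbols("a b c x")
y = a * x**2 + b * x + c

x_star = sp.solve(sp.diff(y, x), x)[0]       # x* = -b/(2a)
y_star = sp.simplify(y.subs(x, x_star))      # y* = (4ac - b**2)/(4a)

lhs = sp.diff(y_star, a)                     # d y*/da = b**2/(4a**2)
rhs = sp.diff(y, a).subs(x, x_star)          # dy/da evaluated at x = x*, i.e. (x*)**2
print(sp.simplify(lhs - rhs))                # -> 0: the two derivatives agree
```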
A Simple Envelope Theorem
$$v(q)=\max_{x} f(x,q)$$
$$=f(x^*(q),q)$$
$$ \frac{d}{dq}v(q)=\underbrace{\frac{\partial}{\partial x}f(x^*(q),q)}_{=0\ \text{by f.o.c.}}\frac{\partial}{\partial q}x^*(q)+\frac{\partial}{\partial q} f(x^*(q),q) $$
$$ \frac{d}{dq}v(q) =\frac{\partial}{\partial q}f(x^*(q),q) $$
Think of the Envelope Theorem as an application of the chain rule followed by the first-order condition: the goal is to find how a parameter affects the already-maximised function \(v(q)=f(x^*(q),q)\).
A Formal Statement
Theorem (Envelope Theorem). Consider a constrained optimisation problem \(v(\theta)=\max_x f(x,\theta)\) subject to \(g_1(x,\theta)\geq0,…,g_K(x,\theta)\geq0\).
Comparative statics on the value function are given by: (\(v(\theta)=f(x,\theta)|_{x=x^*(\theta)}=f(x^*(\theta),\theta)\))
$$ \frac{\partial v}{\partial \theta_i}=\sum_{k=1}^{K}\lambda_k \frac{\partial g_k}{\partial \theta_i}|_{x^*}+{\frac{\partial f}{\partial \theta_i}}|_{x^*}=\frac{\partial \mathcal{L}}{\partial \theta_i}|_{x^*} $$
(for Lagrangian \(\mathcal{L}(x,\theta,\lambda)\equiv f(x,\theta)+\sum_{k}\lambda_k g_k(x,\theta)\)) for all \(\theta\) such that the set of binding constraints does not change in an open neighborhood.
Roughly, the derivative of the value function is the derivative of the Lagrangian w.r.t. the parameters \(\theta\), evaluated at the maximiser \(x=x^*\).
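A minimal sketch of the constrained version, using a toy problem of my own choosing: maximise \(f(x,\theta)=\ln x\) subject to \(g(x,\theta)=\theta-x\geq 0\). The constraint binds (since \(\ln\) is increasing), so \(x^*=\theta\), \(\lambda^*=1/\theta\), \(v(\theta)=\ln\theta\), and \(\partial v/\partial\theta\) equals \(\partial\mathcal{L}/\partial\theta\) at the optimum.

```python
import sympy as sp

x, theta, lam = sp.symbols("x theta lambda", positive=True)

# max f = log(x)  s.t.  g = theta - x >= 0  (binding, since log is increasing)
f = sp.log(x)
g = theta - x
Lag = f + lam * g  # Lagrangian

# Solve the first-order condition with the constraint binding
sol = sp.solve([sp.diff(Lag, x), g], [x, lam], dict=True)[0]  # {x: theta, lambda: 1/theta}

v = f.subs(x, sol[x])                    # value function v(theta) = log(theta)
direct = sp.diff(v, theta)               # direct derivative: 1/theta
envelope = sp.diff(Lag, theta).subs(sol)  # dL/dtheta at the optimum: lambda* = 1/theta
print(sp.simplify(direct - envelope))    # -> 0: the Envelope Theorem holds here
```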
4. Hicksian and Marshallian Demand + Shephard's Lemma
To be continued.
https://www.bilibili.com/video/BV1VJ411J7ZL?spm_id_from=333.999.0.0
5. KKT
6. Taylor Series
A Taylor series is a series expansion of a function about a point. The one-dimensional Taylor series of a real function \(f(x)\) about the point \(x=a\) is given by,
$$ f(x)=f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2\\+\frac{f^{(3)}(a)}{3!}(x-a)^3+…+\frac{f^{(n)}(a)}{n!}(x-a)^n+… $$
A Taylor expansion approximates a function around a chosen point using its derivatives at that point: we describe the function locally, near the point of expansion.
For example, we approximate \(f(x)=x^3\) around \(x=2\).
$$ f(x)\approx f(2)+\frac{f'(2)}{1!}(x-2)+\frac{f''(2)}{2!}(x-2)^2\\+\frac{f^{(3)}(2)}{3!}(x-2)^3+… $$
$$ f(x)\approx 8+\frac{12}{1}(x-2)+\frac{12}{2}(x-2)^2+\frac{6}{3\times2}(x-2)^3 $$
Simplifying, we find that \(f(x)\) around \(x=2\) is,
$$ f(x)=x^3 $$
It is no coincidence that the original function and the Taylor polynomial are exactly the same here: \(f(x)=x^3\) is itself a degree-3 polynomial, so its third-order Taylor expansion reproduces it exactly.
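This can be confirmed with a symbolic computation (SymPy is my choice of tool):

```python
import sympy as sp

x = sp.symbols("x")
f = x**3

# Third-order Taylor polynomial of x**3 about x = 2 (drop the O() remainder)
taylor = sp.series(f, x, 2, 4).removeO()
print(sp.expand(taylor))  # -> x**3: the cubic's Taylor polynomial is the cubic itself
```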
Another example: take the first-order Taylor approximation of \(f(k)=\ln(k)\) at \(k^*\). Since \(f'(k)=1/k\),
$$ \ln(k)\approx \ln(k^*)+\frac{1}{k^*}(k-k^*) $$
Thus, we know,
$$ \ln(k)- \ln(k^*)\approx\frac{k-k^*}{k^*} $$
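This log-linearisation is easy to check numerically (the sample values of \(k\) are my own choice):

```python
import math

def log_approx_error(k, k_star):
    """Error of the first-order approximation ln(k) - ln(k*) ≈ (k - k*)/k*."""
    return (math.log(k) - math.log(k_star)) - (k - k_star) / k_star

k_star = 10.0
for k in [10.1, 10.5, 12.0]:
    # The error shrinks rapidly as k approaches k*
    print(k, log_approx_error(k, k_star))
```

The approximation is very accurate for small percentage deviations, which is why \(\ln(k)-\ln(k^*)\) is routinely read as "the percentage deviation of \(k\) from \(k^*\)".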
The Idea Behind Taylor Expansion
A Taylor expansion uses a polynomial to approximate a given function.
For example, to describe the shape of the function \(\cos(x)\) at \(x=0\), we first construct a polynomial.
(P.S. We let \(c_0=1\), since we need the polynomial to equal 1 at \(x=0\), matching \(\cos(0)=1\).)
$$ P(x)=c_0+c_1x+c_2x^2, \quad \text{so that at } x=0,\ P(0)=c_0 $$
where the coefficients are free to change, and their magnitudes determine how the approximating curve looks.
To get a better approximation, we adjust these coefficients so that successive derivatives of the polynomial match those of the target function.
We need the first derivative \(\cos'(x)|_{x=0}=-\sin(x)|_{x=0}=0\), so we set the first-order derivative of our polynomial function equal to zero as well!
$$\frac{\partial P(x)}{\partial x}\Big|_{x=0}=(c_1+2c_2x)\Big|_{x=0}=c_1$$
Therefore, \(c_1\) must be zero.
Let's go one more step. Since the second derivative satisfies \(\cos''(x)|_{x=0}=-\cos(0)=-1\), we need the second derivative of our polynomial (which comes entirely from its second-order term) to be \(-1\) as well.
$$\frac{\partial^2 P(x)}{\partial x^2}|_{x=0}=2\times c_2$$
Setting this equal to negative one gives \(c_2=-\frac{1}{2}\).
Therefore, we get,
$$\cos(x)\approx P(x)=c_0+c_1 x +c_2 x^2 = 1-\frac{1}{2}x^2 \quad \text{near } x=0$$
Great! If we need a more accurate approximation, we keep going: compute higher derivatives and the coefficients of the higher-order terms. I will not do that here; instead I simply add a term \(O(x^3)\) to indicate the remaining terms of order \(x^3\) and higher. (A more precise treatment will appear in later posts.)
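How good is \(1-\frac{1}{2}x^2\) near zero? A small numerical check (the sample points are my own choice):

```python
import math

def p(x):
    """Second-order Taylor polynomial of cos about 0: 1 - x**2/2."""
    return 1 - 0.5 * x**2

for x in [0.0, 0.1, 0.5, 1.0]:
    # The absolute error is bounded by x**4 / 24 (the next nonzero Taylor term)
    print(x, math.cos(x), p(x), abs(math.cos(x) - p(x)))
```

The error grows like \(x^4/24\), so the approximation is excellent close to \(x=0\) and degrades as we move away, exactly as the local nature of the expansion suggests.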