## 4.8 Elasticities

SW 8.2

Economists are often interested in **elasticities**, that is, the percentage change in \(Y\) when \(X\) changes by 1%.

Recall that the definition of percentage change of moving from, say, \(x_{old}\) to \(x_{new}\) is given by

\[ \textrm{% change} = \frac{x_{new} - x_{old}}{x_{old}} \times 100 \]

Elasticities are closely connected to natural logarithms; following the most common notation in economics, we’ll refer to the natural logarithm using the notation: \(\log\). Further, recall that the derivative of the \(\log\) function is given by

\[ \frac{d \, \log(x)}{d \, x} = \frac{1}{x} \implies d\, \log(x) = \frac{d \, x}{x} \] which further implies that

\[ \Delta \log(x) := \log(x_{new}) - \log(x_{old}) \approx \frac{x_{new} - x_{old}}{x_{old}} \] and, thus, that

\[ 100 \cdot \Delta \log(x) \approx \textrm{% change} \] where the approximation is better when \(x_{new}\) and \(x_{old}\) are close to each other.
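A quick numerical check of this approximation in `R` (the values of \(x_{old}\) and \(x_{new}\) below are made up for illustration):

```r
# exact percentage change when moving from 50 to 51
x_old <- 50
x_new <- 51
(x_new - x_old) / x_old * 100    # exact: 2

# log-difference approximation
100 * (log(x_new) - log(x_old))  # approximately 1.98
```

The two numbers get closer as \(x_{new}\) gets closer to \(x_{old}\), and drift apart for large changes.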

Now, we’ll use these properties of logarithms to interpret several linear models.

For simplicity, I am not going to include an error term or extra covariates, but you should continue to interpret parameter estimates as “on average” and “holding other regressors constant” (if there are other regressors in the model).

**Log-Log Model**

\[ \log(Y) = \beta_0 + \beta_1 \log(X) \]

In this case,

\[ \begin{aligned} \beta_1 &= \frac{ d \, \log(Y) }{d \, \log(X)} \\ &= \frac{ d \, \log(Y) \cdot 100 }{d \, \log(X) \cdot 100} \\ &\approx \frac{ \% \Delta Y}{ \% \Delta X} \end{aligned} \]

All that to say, in a regression of the log of an outcome on the log of a regressor, you should interpret the corresponding coefficient as the average percentage change in the outcome when the regressor changes by 1%. The log-log model is sometimes called a **constant elasticity** model.

**Log-Level Model**

\[ \log(Y) = \beta_0 + \beta_1 X \]

In this case,

\[ \begin{aligned} \beta_1 &= \frac{ d \, \log(Y) }{d \, X} \\ \implies 100 \beta_1 &= \frac{ d \, \log(Y) \cdot 100 }{d \, X} \\ \implies 100 \beta_1 &\approx \frac{ \% \Delta Y}{ d \, X} \end{aligned} \]

Thus, in a regression of the log of an outcome on the *level* of a regressor, you should multiply the corresponding coefficient by 100 and interpret it as the average percentage change in the outcome when the regressor changes by 1 unit.

**Level-Log Model**

\[ Y = \beta_0 + \beta_1 \log(X) \]

In this case,

\[ \begin{aligned} \beta_1 &= \frac{d\, Y}{d \, \log(X)} \\ \implies \frac{\beta_1}{100} &= \frac{d \, Y}{d \, \log(X) \cdot 100} \\ \implies \frac{\beta_1}{100} &\approx \frac{d \, Y}{\% \Delta X} \end{aligned} \]

Thus, in a regression of the level of an outcome on the log of a regressor, you should divide the corresponding coefficient by 100 and interpret it as the average change in the outcome when the regressor changes by 1%.

**Example 4.5** Let’s continue the same example on intergenerational income mobility, where \(Y\) denotes child’s income, \(X_1\) denotes parents’ income, and \(X_2\) denotes mother’s education. We’ll consider how to interpret several different models.

First, consider \[ \log(Y) = 8.8 + 0.4 \log(X_1) + 0.008 X_2 + U \] In this model, we estimate that, on average, when parents’ income increases by 1%, child’s income increases by 0.4% holding mother’s education constant.

Next, consider \[ \log(Y) = 8.9 + 0.00004 X_1 + 0.007 X_2 + U \] In this model, we estimate that, on average, when parents’ income increases by $1, child’s income increases by 0.004% (alternatively, when parents’ income increases by $1,000, child’s income increases by 4%) holding mother’s education constant.

Finally, consider

\[ Y = -1,680,000 + 160,000 \log(X_1) + 900 X_2 + U \] In this case, we estimate that, on average, when parents’ income increases by 1%, child’s income increases by $1,600 holding mother’s education constant.
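The arithmetic behind these interpretations is just the three rules above applied to the estimated coefficients; a quick sketch in `R`:

```r
# log-log: the coefficient is already the elasticity
0.4            # 0.4% increase in child's income per 1% increase in parents' income

# log-level: multiply the coefficient by 100
100 * 0.00004  # 0.004% increase per additional dollar of parents' income

# level-log: divide the coefficient by 100
160000 / 100   # $1,600 increase per 1% increase in parents' income
```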

### 4.8.1 Computation

Estimating models that include logarithms in `R` is straightforward.

```
reg5 <- lm(log(mpg) ~ log(hp) + wt, data=mtcars)
reg6 <- lm(log(mpg) ~ hp + wt, data=mtcars)
reg7 <- lm(mpg ~ log(hp) + wt, data=mtcars)
```

Let’s show the results all at once using the `modelsummary` function from the `modelsummary` package.

```
library(modelsummary)
model_list <- list(reg5, reg6, reg7)
modelsummary(model_list)
```

|             | Model 1 | Model 2 | Model 3 |
|-------------|---------|---------|---------|
| (Intercept) | 4.832   | 3.829   | 59.571  |
|             | (0.222) | (0.069) | (4.977) |
| log(hp)     | −0.266  |         | −5.922  |
|             | (0.056) |         | (1.266) |
| wt          | −0.179  | −0.201  | −3.286  |
|             | (0.027) | (0.027) | (0.615) |
| hp          |         | −0.002  |         |
|             |         | (0.000) |         |
| Num.Obs.    | 32      | 32      | 32      |
| R2          | 0.885   | 0.869   | 0.859   |
| R2 Adj.     | 0.877   | 0.860   | 0.849   |
| AIC         | 140.3   | 144.5   | 150.0   |
| BIC         | 146.1   | 150.4   | 155.9   |
| Log.Lik.    | 28.501  | 26.395  | −71.017 |
| F           | 111.812 | 96.232  | 88.442  |
| RMSE        | 0.10    | 0.11    | 2.23    |

In the first model, we estimate that, on average, a 1% increase in horsepower decreases miles per gallon by 0.266% holding weight constant.

In the second model, we estimate that, on average, a 1 unit increase in horsepower decreases miles per gallon by 0.2% holding weight constant.

In the third model, we estimate that, on average, a 1% increase in horsepower decreases miles per gallon by 0.059 holding weight constant.
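These interpretations can be read straight off the fitted models. Here is a sketch that re-estimates the three regressions from above and rescales the horsepower coefficients accordingly:

```r
# re-estimate the three models from above using the built-in mtcars data
reg5 <- lm(log(mpg) ~ log(hp) + wt, data = mtcars)  # log-log
reg6 <- lm(log(mpg) ~ hp + wt, data = mtcars)       # log-level
reg7 <- lm(mpg ~ log(hp) + wt, data = mtcars)       # level-log

unname(coef(reg5)["log(hp)"])        # elasticity: % change in mpg per 1% change in hp
unname(100 * coef(reg6)["hp"])       # % change in mpg per one-unit change in hp
unname(coef(reg7)["log(hp)"] / 100)  # change in mpg (in levels) per 1% change in hp
```

These reproduce the numbers in the text: roughly −0.266, −0.2, and −0.059, respectively.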