## 3.10 Multiple Random Variables

SW 2.3

Most often in economics, we want to consider two (or more) random variables jointly rather than just a single random variable. For example, mean income is interesting, but mean income as a function of education is more interesting.

When there is more than one random variable, you can define **joint pmfs**, **joint pdfs**, and **joint cdfs**.

Let’s quickly go over these for the case where \(X\) and \(Y\) are two discrete random variables.

**Joint pmf:** \(f_{X,Y}(x,y) := \mathrm{P}(X=x, Y=y)\)

**Joint cdf:** \(F_{X,Y}(x,y) := \mathrm{P}(X \leq x, Y \leq y)\)

**Conditional pmf:** \(f_{Y|X}(y|x) := \mathrm{P}(Y=y | X=x)\)

**Properties**

We use the notation that \(\mathcal{X}\) denotes the support of \(X\) and \(\mathcal{Y}\) denotes the support of \(Y\).

\(0 \leq f_{X,Y}(x,y) \leq 1\) for all \(x,y\)

In words: the probability of \(X\) and \(Y\) taking any particular values can’t be less than 0 or greater than 1 (because these are probabilities)

\(\displaystyle \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} f_{X,Y}(x,y) = 1\)

In words: If you add up \(\mathrm{P}(X=x, Y=y)\) across all possible values of \(x\) and \(y\), they sum up to 1 (again, this is just a property of probabilities)

If you know the joint pmf, then you can recover the **marginal pmf**, that is,
\[ f_Y(y) = \sum_{x \in \mathcal{X}} f_{X,Y}(x,y) \]

This amounts to just adding up the joint pmf across all values of \(x\) while holding \(y\) fixed. A main takeaway from this property is the following: if you know the joint pmf of two random variables, then you can recover the pmf of each random variable individually. Thus, knowing the joint pmf gives you strictly more information than knowing only the marginal pmfs.

**Example 3.12 **Suppose that you roll a die, and based on this roll, you create the following random variables.

\[ X = \begin{cases} 0 \quad \textrm{if roll is 3 or lower} \\ 1 \quad \textrm{if roll is greater than 3} \end{cases} \qquad Y = \begin{cases} 0 \quad \textrm{if roll is odd} \\ 1 \quad \textrm{if roll is even} \end{cases} \]

Let’s consider what values \(X\) and \(Y\) take for different rolls:

| roll | X | Y |
|------|---|---|
| 1    | 0 | 0 |
| 2    | 0 | 1 |
| 3    | 0 | 0 |
| 4    | 1 | 1 |
| 5    | 1 | 0 |
| 6    | 1 | 1 |

Thus,

\[ \begin{aligned} f_{X,Y}(0, 0) = \frac{2}{6} \qquad \qquad f_{X,Y}(0,1) = \frac{1}{6} \\ f_{X,Y}(1, 0) = \frac{1}{6} \qquad \qquad f_{X,Y}(1,1) = \frac{2}{6} \end{aligned} \] and you can immediately see that the first two properties hold here. For the third property, suppose that we want to calculate \(f_Y(1)\) (i.e., the probability that we roll an even number). The property says that we can calculate it as \[ f_Y(1) = \sum_{x=0}^1 f_{X,Y}(x,1) = f_{X,Y}(0,1) + f_{X,Y}(1,1) = \frac{1}{6} + \frac{2}{6} = \frac{1}{2} \] which, as we know, is the right answer.
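The calculation in Example 3.12 can be verified directly by enumerating the six equally likely rolls. The sketch below (the variable names are my own, not from the text) builds the joint pmf of \(X\) and \(Y\), checks that it sums to 1, and recovers the marginal \(f_Y(1)\) by summing over \(x\):

```python
from fractions import Fraction
from itertools import product

# The six equally likely die rolls and the induced (X, Y) pairs:
# X = 1 if the roll is greater than 3, Y = 1 if the roll is even.
rolls = range(1, 7)
pairs = [(int(r > 3), int(r % 2 == 0)) for r in rolls]

# Joint pmf: f_{X,Y}(x, y) = P(X = x, Y = y)
joint = {(x, y): Fraction(sum(p == (x, y) for p in pairs), 6)
         for x, y in product((0, 1), repeat=2)}

# Second property: the joint pmf sums to 1 over the support.
assert sum(joint.values()) == 1

# Marginal pmf of Y: sum the joint pmf over x, holding y fixed.
f_Y = {y: joint[(0, y)] + joint[(1, y)] for y in (0, 1)}
print(f_Y[1])  # 1/2
```

Using exact fractions (rather than floats) keeps the arithmetic identical to the hand calculation above.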

\(X\) and \(Y\) are said to be **independent** if \(f_{Y|X}(y|x) = f_Y(y)\) for all \(x \in \mathcal{X}\) and \(y \in \mathcal{Y}\). In other words, knowing the value of \(X\) doesn't provide any information about the distribution of \(Y\).