# Thursday January 23rd

Let $\mch$ be a family of non-negative functions. Then $\mch$ is a $\lambda\dash$system iff

1. $1 \in \mch$,
2. $X_1, X_2 \in \mch$ and $c_1, c_2 \in \RR$ with $c_1 X_1 + c_2 X_2 \geq 0 \implies c_1 X_1 + c_2 X_2 \in \mch$,
3. If $X_n \in \mch$ and $X_n \nearrow X$, then $X \in \mch$.

If $\mch \supset$ all indicator functions of sets in $\mcd$, $\mcd$ is a $\pi\dash$class, and $\mch$ is a $\lambda\dash$system, then $\mch$ contains all non-negative $\sigma(\mcd)\dash$measurable functions.

Note that if $\mcg$ is a class of sets, $\sigma(\mcg)$ denotes the sigma-algebra generated by that class.

Definition: Let $X$ be a real measurable function on $(\Omega, \mca)$ and let $\mcb$ denote the Borel sets. Then
\begin{align*}
\sigma(X) &= \sigma(\theset{X\in B \suchthat B \in \mcb}) \\
&= \sigma(\theset{X \leq x \suchthat x \in \RR})
\end{align*}
is the sigma-algebra generated by the random variable $X$.

> Moral: if you know all of the outcomes of $X$, you know the sigma-algebra it generates.

We can then extend this to define $\sigma(X_1, X_2, \cdots)$.

Theorem:

1. Given measurable functions $X_1, \cdots, X_n$ and a Borel measurable function $f$ on $\RR^n$, the composition $f(X_1, \cdots)$ is $\sigma(X_1, \cdots)\dash$measurable.
2. If $X$ is $\sigma(X_1, \cdots)\dash$measurable, then there exists a Borel measurable $f$ on $\RR^n$ such that $X = f(X_1, \cdots)$.

Proof: Let

- $\mcg$ be the set of Borel functions on $\RR^n$,
- $\mch = \theset{f(X_1, \cdots) \geq 0 \suchthat f\in\mcg}$,
- $\mcd = \theset{ \theset{X_1 \leq x_1, \cdots, X_n \leq x_n} \suchthat x_i \in \RR}$.

Note that $\mcd$ is a $\pi\dash$class.

For all $D\in \mcd$, we have $\indic{D} \in \mch$, since
\begin{align*}
\indic{\theset{X_1 \leq x_1, X_2 \leq x_2, \cdots}}
= \indic{\theset{(-\infty, x_1] \cross (-\infty, x_2] \cross \cdots }}(X_1, \cdots)
= f(X_1, \cdots)
,\end{align*}
where $f = \indic{(-\infty, x_1] \cross (-\infty, x_2] \cross \cdots}$ is Borel; in one dimension this is the observation that $f(z) = \indic{(-\infty, a]}(z)$ is Borel and $\indic{\theset{X \leq a}} = f(X)$.

Since we can write any $X$ as $X^+ - X^-$, where both parts are non-negative, it suffices to show that $\mch$ is a $\lambda\dash$system: then, since $\sigma(\mcd) = \sigma(X_1, \cdots)$, the first result says that $\mch$ contains every non-negative $\sigma(X_1, \cdots)\dash$measurable function, which is exactly part 2.

We now check the three $\lambda\dash$system properties for $\mch$:

Property 1: $1 \in \mch$, since the constant function $f \equiv 1$ is Borel.

Property 2: Suppose $f_1(X_1, \cdots), f_2(X_1, \cdots) \in \mch$ and $c_1, c_2$ are real and finite with $c_1 f_1(X_1, \cdots) + c_2 f_2(X_1, \cdots) \geq 0$. Take $f(x_1, \cdots) = (c_1 f_1(x_1, \cdots) + c_2 f_2(x_1, \cdots) )\indic{\theset{c_1f_1 = -c_2 f_2 = \pm \infty}^c}$, which is Borel (the indicator removes the undefined $\infty - \infty$ case). Then $c_1 f_1(X_1, \cdots) + c_2 f_2(X_1, \cdots) = f(X_1, \cdots) \in \mch$.

Property 3: Suppose $f_k(X_1, \cdots) \in \mch$ and $f_k(X_1, \cdots) \nearrow$ some limit. The Borel functions $f_k$ themselves need not be monotone on $\RR^n$, so take $g_k = \max(f_1, \cdots, f_k)$; then $g_k \nearrow g$ for some $g \in \mcg$. Since $f_j(X_1, \cdots) \leq f_k(X_1, \cdots)$ for $j \leq k$, we have $0 \leq f_k(X_1, \cdots) = g_k(X_1, \cdots)$, and taking limits yields $\lim_k f_k(X_1, \cdots) = g(X_1, \cdots) \in \mch$. $\qed$

Definition: Take a probability space $(\Omega, \mca, P)$, where $\mca$ is a sigma-algebra; the sets $A\in \mca$ are referred to as events. Then $X$ is a random variable if $X$ is a measurable function and $P(\abs{X} = \infty) = 0$. Any event $A$ for which $P(A) = 0$ is a null event.

$P$ is a probability measure iff $P(\Omega) = 1$, $P(\emptyset) = 0$, and whenever $\theset{A_n}$ are disjoint events, i.e. $A_i A_j = \emptyset$ for $i \neq j$, we have $P(\disjoint_{n\in \NN} A_n) = \sum_{n\in \NN} P(A_n)$ (sigma-additivity).

Observation:
$$
P(\Omega) = P(\Omega \disjoint \emptyset) = P(\Omega) + P(\emptyset) \implies P(\emptyset) = 0
.$$

For $A^c \definedas \Omega - A$, we have $P(A^c) = 1 - P(A)$.
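For a quick sanity check of sigma-additivity and the complement rule (a standard fair-die example, not from the lecture): take $\Omega = \theset{1, \cdots, 6}$ with $P(\theset{\omega}) = \frac{1}{6}$ for each outcome. Then
$$
P(\theset{1,2}) = P(\theset{1} \disjoint \theset{2}) = \frac{1}{6} + \frac{1}{6} = \frac{1}{3},
\qquad
P(\theset{1,2}^c) = P(\theset{3,4,5,6}) = 1 - \frac{1}{3} = \frac{2}{3}
.$$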
Proposition: For any (not necessarily disjoint) events, we have $P(A \union B) = P(A) + P(B) - P(AB)$.

Proof: Write $A \union B = A \disjoint (B - AB)$, a disjoint union, so $P(A \union B) = P(A) + P(B - AB)$. Also $B = AB \disjoint (B-AB)$, so $P(B-AB) = P(B) - P(AB)$, and substitution into the first equation yields the identity. $\qed$

For any $A \subseteq B$, we have $P(A) \leq P(B)$ (monotonicity), since $B = A \disjoint (B-A)$ gives $P(B-A) = P(B) - P(A) \geq 0$.

Inductively (and passing to the limit), $P(\union A_n) \leq \sum P(A_n)$ for any sequence of events $A_n$ (subadditivity).

Continuity: $\lim P(A_n) = P(\lim A_n)$ if $\lim A_n$ exists, i.e. $\limsup A_n = \liminf A_n$.

> Notation: $\bar{\lim}$ denotes $\limsup$.
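These identities are easy to check numerically on a finite sample space, where $P$ reduces to a sum of point masses. Below is a minimal Python sketch (not from the lecture; the space `omega`, the weights `prob`, and the helper `P` are illustrative choices) that verifies inclusion-exclusion, monotonicity, subadditivity, and the complement rule for two fair coin flips:

```python
from itertools import product

# Toy probability space: two fair coin flips; each of the 4 outcomes has weight 1/4.
omega = set(product("HT", repeat=2))
prob = {w: 1 / len(omega) for w in omega}

def P(event):
    """Probability of an event (a subset of omega) as a sum of point masses."""
    return sum(prob[w] for w in event)

A = {w for w in omega if w[0] == "H"}   # first flip is heads
B = {w for w in omega if w[1] == "H"}   # second flip is heads

# Inclusion-exclusion: P(A ∪ B) = P(A) + P(B) - P(AB)
assert abs(P(A | B) - (P(A) + P(B) - P(A & B))) < 1e-12

# Monotonicity: AB ⊆ A implies P(AB) ≤ P(A)
assert P(A & B) <= P(A)

# Subadditivity: P(A ∪ B) ≤ P(A) + P(B)
assert P(A | B) <= P(A) + P(B)

# Complement rule: P(A^c) = 1 - P(A)
assert abs(P(omega - A) - (1 - P(A))) < 1e-12
```

On a general probability space these are statements about the measure $P$ itself rather than finite sums; the finite case is only a sanity check.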