Uniform Integrability
In the integration section, we saw the dominated convergence theorem: for every sequence \((X_n)\) such that \(X_n \to X\) in probability, if \(|X_n| \leq Y\) for some \(Y\) in \(L^1\), then \(X_n \to X\) in \(L^1\). The converse, however, is not true, in the sense that convergence in probability together with convergence in \(L^1\) does not imply uniform boundedness by an element of \(L^1\).
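The failure of the converse can be made concrete. The following Monte Carlo sketch (an added illustration with an assumed setup, not part of the text) uses disjoint intervals \(A_n\) of length \(1/n^2\) on \(([0,1],\mathcal{B},\mathrm{Leb})\) and \(X_n = n 1_{A_n}\): the sequence converges to \(0\) in \(L^1\), yet no integrable random variable dominates it.

```python
import numpy as np

# Hypothetical illustration (not from the text): on ([0,1], Borel, Leb), take
# disjoint intervals A_n of length 1/n^2 for n = 2, 3, ... (total length
# pi^2/6 - 1 < 1, so they fit) and set X_n = n * 1_{A_n}.
# Then E[|X_n|] = 1/n -> 0, so X_n -> 0 in L^1 (hence in probability), yet any
# Y with Y >= |X_n| for all n satisfies E[Y] >= sum_n n/n^2 = sum_n 1/n = inf.

N = 2000
ns = np.arange(2, N + 1)
lengths = 1.0 / ns**2
ends = np.cumsum(lengths)          # contiguous intervals starting at 0
starts = ends - lengths

rng = np.random.default_rng(0)
U = rng.uniform(size=1_000_000)

# Monte Carlo check of E[|X_n|] = 1/n for n = 100
n = 100
k = n - 2                          # index of A_n in the arrays
X_n = n * ((U >= starts[k]) & (U < ends[k]))
print(X_n.mean())                  # approximately 1/100

# The smallest dominating variable Y = sup_n X_n equals n on A_n, so
# E[Y] = sum_{n=2}^{N} 1/n, a harmonic sum that diverges as N grows.
idx = np.searchsorted(ends, U, side="right")
inside = idx < len(ns)
Y = np.zeros_like(U)
Y[inside] = ns[idx[inside]]
print(Y.mean())                    # grows like log N
```

Increasing `N` makes the Monte Carlo estimate of \(E[Y]\) grow without bound, while each \(E[|X_n|]\) stays small.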
This fact is related to a deeper issue with \(L^1\) that is not encountered for the other \(L^p\) spaces with \(1<p<\infty\). The following concept of uniform integrability is the correct way to describe those sets that are stable under \(L^1\) convergence.
Note first that if \(X\) is in \(L^1\), then it holds that \(E[|X|1_{\{|X|>n\}}] \to 0\) as \(n\) goes to \(\infty\). Uniform integrability extends this property uniformly to whole families.
Definition: Uniformly Integrable Families
A subset \(\mathcal{H}\) of \(L^1\) is called uniformly integrable if
\[ \lim_{n\to \infty} \sup_{X \in \mathcal{H}} E\left[ \left\vert X\right\vert 1_{\{\left\vert X\right\vert\geq n\}} \right] = 0. \]
We next state two additional equivalent ways to express that a family is uniformly integrable. The second one, boundedness and tightness, is sometimes referred to as the \(\varepsilon\)-\(\delta\) criterion. The third one is referred to as the de la Vallée-Poussin criterion.
Proposition
For a subset \(\mathcal{H}\) of \(L^1\), the following assertions are equivalent:
- Uniform Integrability: \(\mathcal{H}\) is uniformly integrable.
- Boundedness and Tightness:
    - \(\mathcal{H}\) is bounded in \(L^1\), that is,
      \[ \sup_{X \in \mathcal{H}}E[\left\vert X\right\vert]<\infty. \]
    - For every \(\varepsilon>0\), there exists \(\delta>0\) such that
      \[ E\left[ \left\vert X\right\vert1_A \right]\leq \varepsilon \]
      for all \(X\) in \(\mathcal{H}\) and every event \(A\) such that \(P[A]\leq \delta\).
- De la Vallée-Poussin: There exists a Borel measurable function \(\varphi\colon \mathbb{R}_+\to \mathbb{R}_+\) such that \(\varphi(x)/x\to \infty\) as \(x \to \infty\) for which
  \[ \sup_{X \in \mathcal{H}}E\left[ \varphi(\left\vert X\right\vert) \right]<\infty. \]
  This function \(\varphi\) can be chosen increasing and convex.
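As a concrete instance (an added example, not from the original text), any family \(\mathcal{H}\) bounded in \(L^2\) satisfies both criteria at once:

```latex
% phi(x) = x^2 is increasing, convex, and phi(x)/x = x -> infinity, with
\[
  \sup_{X \in \mathcal{H}} E\bigl[\varphi(\lvert X\rvert)\bigr]
  = \sup_{X \in \mathcal{H}} E\bigl[X^2\bigr] < \infty ,
\]
% while the epsilon-delta criterion follows directly from Cauchy-Schwarz:
\[
  E\bigl[\lvert X\rvert 1_A\bigr]
  \le E\bigl[X^2\bigr]^{1/2}\, P[A]^{1/2},
  \qquad\text{so that }
  \delta = \varepsilon^2 \Big/ \sup_{X \in \mathcal{H}} E\bigl[X^2\bigr]
  \text{ works.}
\]
```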
Proof
Step 1: Uniform integrability implies boundedness and tightness:
For sufficiently large \(n\), we have \(E[|X| 1_{\{|X|\geq n\}}]\leq 1\) for all \(X\) in \(\mathcal{H}\). Hence, \(E[|X|]\leq n+1\) for all \(X\) in \(\mathcal{H}\), showing that \(\mathcal{H}\) is bounded in \(L^1\). Let further \(\varepsilon>0\) and choose \(n\) large enough such that \(E[| X|1_{\{|X|\geq n\}}]\leq \varepsilon/2\) for every \(X\) in \(\mathcal{H}\). Setting \(\delta=\varepsilon/(2n)\), for every event \(A\) in \(\mathcal{F}\) such that \(P[A]\leq \delta\), it follows that
\[ E\left[\left\vert X\right\vert 1_A\right] = E\left[\left\vert X\right\vert 1_A 1_{\{\left\vert X\right\vert\geq n\}}\right] + E\left[\left\vert X\right\vert 1_A 1_{\{\left\vert X\right\vert< n\}}\right] \leq E\left[\left\vert X\right\vert 1_{\{\left\vert X\right\vert\geq n\}}\right] + n P[A] \leq \frac{\varepsilon}{2} + n\delta = \varepsilon, \]
showing that uniform integrability implies boundedness and tightness.
Step 2: Boundedness and tightness implies uniform integrability:
Denote by \(M=\sup_{X\in \mathcal{H}} E[|X|]<\infty\) and let \(\varepsilon >0\). There exists \(\delta>0\) such that \(E[|X|1_A]\leq \varepsilon\) for any event \(A\) in \(\mathcal{F}\) with \(P[A]\leq \delta\) and every \(X\) in \(\mathcal{H}\). Then for any \(n\) greater than \(M/\delta\) and any \(X\) in \(\mathcal{H}\), Markov's inequality yields
\[ P\left[ \left\vert X\right\vert\geq n \right] \leq \frac{E[\left\vert X\right\vert]}{n} \leq \frac{M}{n} \leq \delta. \]
Applying the tightness condition to the event \(A=\{\left\vert X\right\vert\geq n\}\), we get \(\sup_{X\in \mathcal{H}}E\left[ \left\vert X\right\vert1_{\{\left\vert X\right\vert\geq n\}} \right]\leq \varepsilon\), showing the uniform integrability of \(\mathcal{H}\).
Step 3: The de la Vallée-Poussin criterion implies uniform integrability:
Denote by \(M=\sup_{X \in \mathcal{H}}E[\varphi(|X|)]\). For \(\varepsilon>0\), there exists \(n_\varepsilon\) such that \(\varphi(x)\geq x M/\varepsilon\) for every \(x\geq n_\varepsilon\). Hence,
\[ E\left[\left\vert X\right\vert 1_{\{\left\vert X\right\vert\geq n\}}\right] \leq \frac{\varepsilon}{M} E\left[\varphi(\left\vert X\right\vert) 1_{\{\left\vert X\right\vert\geq n\}}\right] \leq \frac{\varepsilon}{M} E\left[\varphi(\left\vert X\right\vert)\right] \leq \varepsilon \]
for every \(n\geq n_\varepsilon\) and every \(X\) in \(\mathcal{H}\), showing that
\[ \sup_{X \in \mathcal{H}} E\left[\left\vert X\right\vert 1_{\{\left\vert X\right\vert\geq n\}}\right] \leq \varepsilon \quad \text{for every } n\geq n_\varepsilon, \]
and so the uniform integrability of \(\mathcal{H}\).
Step 4: Uniform integrability implies the de la Vallée-Poussin criterion: By uniform integrability, choose a sequence \((c_n)\), which can always be chosen increasing, such that
\[ \sup_{X \in \mathcal{H}} E\left[\left\vert X\right\vert 1_{\{\left\vert X\right\vert\geq c_n\}}\right] \leq \frac{2}{n^3} \]
for every \(n\). Define the function \(\varphi\colon\mathbb{R}_+\to\mathbb{R}_+\) as a piecewise linear function, equal to \(0\) on \([0,c_1]\) and with derivative equal to \(n\) on \([c_{n},c_{n+1}]\), which implies that \(\varphi(x)/x \to \infty\) as \(x\to \infty\). Note that this function is convex and increasing. It follows that
\[ \varphi(x) = \sum_{n} n \left( (x\wedge c_{n+1}) - c_n \right)^+. \]
However,
\[ n\left( (x\wedge c_{n+1}) - c_n \right)^+ \leq n\, x\, 1_{\{x\geq c_n\}}, \]
showing that \(\sup_{X \in \mathcal{H}}E[\varphi(|X|)]\leq \sum_n 2n/n^3<\infty\).
Theorem
Let \((X_n)\) be a sequence of integrable random variables such that \(X_n\to X\) in probability, that is, \(P[\left\vert X_n-X\right\vert\geq \varepsilon]\to 0\) for every \(\varepsilon>0\).
Then, the following assertions are equivalent:
- The sequence is uniformly integrable;
- \(X_n\to X\) in \(L^1\);
- \(\|X_n\|_1\to \|X\|_1\).
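As a sanity check of the theorem (an assumed numerical sketch, not from the text): take an integrable \(X\) and \(X_n = X 1_{\{U > 1/n\}}\) with \(U\) uniform and independent of \(X\). The sequence is dominated by \(\left\vert X\right\vert\), hence uniformly integrable, and all three assertions hold.

```python
import numpy as np

# Assumed example: X standard normal, U uniform independent of X, and
# X_n = X * 1_{U > 1/n}. Since |X_n| <= |X| with |X| integrable, the
# sequence is uniformly integrable and converges to X in probability,
# so the theorem predicts E[|X_n - X|] -> 0 and E[|X_n|] -> E[|X|].

rng = np.random.default_rng(1)
X = rng.standard_normal(1_000_000)
U = rng.uniform(size=X.size)

for n in [2, 10, 100, 1000]:
    X_n = X * (U > 1 / n)
    l1_dist = np.abs(X_n - X).mean()   # in theory E[|X|]/n, shrinking with n
    norm_gap = abs(np.abs(X_n).mean() - np.abs(X).mean())
    print(n, l1_dist, norm_gap)
```

Both printed columns shrink at rate \(1/n\), matching \(E[\left\vert X_n-X\right\vert] = E[\left\vert X\right\vert]/n\).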
Proof
Step 1: Uniform integrability implies \(L^1\) convergence: We know that we can find a subsequence \((Y_n)\) of \((X_n)\) that converges \(P\)-almost surely to \(X\). In particular, \((Y_n)\) is uniformly integrable. Using Fatou's lemma and the \(L^1\) boundedness of the family \((X_n)\), it follows that
\[ E\left[\left\vert X\right\vert\right] = E\left[\liminf \left\vert Y_n\right\vert\right] \leq \liminf E\left[\left\vert Y_n\right\vert\right] \leq \sup_n E\left[\left\vert X_n\right\vert\right] < \infty, \]
showing that \(X \in L^1\). It follows that the sequence \((X_n-X)\) is uniformly integrable, since \(\left\vert X_n - X\right\vert \leq \left\vert X_n\right\vert + \left\vert X\right\vert\). Therefore, without loss of generality, we can assume that \((X_n)\) is a uniformly integrable family converging in probability to \(0\). For \(\varepsilon>0\), it holds
\[ E\left[\left\vert X_n\right\vert\right] = E\left[\left\vert X_n\right\vert 1_{\{\left\vert X_n\right\vert< \varepsilon\}}\right] + E\left[\left\vert X_n\right\vert 1_{\{\left\vert X_n\right\vert\geq \varepsilon\}}\right] \leq \varepsilon + E\left[\left\vert X_n\right\vert 1_{\{\left\vert X_n\right\vert\geq \varepsilon\}}\right]. \]
By uniform integrability of the family \((X_n)\) and making use of the \(\varepsilon\)-\(\delta\) criterion, let \(\delta>0\) be such that
\[ E\left[\left\vert X_n\right\vert 1_A\right] \leq \varepsilon \]
for every \(A\in \mathcal{F}\) with \(P[A]\leq \delta\) and every \(n\). Further, by the convergence of \((X_n)\) in probability to \(0\), there exists \(n_0\) such that
\[ P\left[\left\vert X_n\right\vert \geq \varepsilon\right] \leq \delta \]
for every \(n\geq n_0\). Thus, applying the \(\varepsilon\)-\(\delta\) criterion to the event \(A = \{\left\vert X_n\right\vert\geq \varepsilon\}\), for every \(n\geq n_0\) it holds
\[ E\left[\left\vert X_n\right\vert\right] \leq \varepsilon + E\left[\left\vert X_n\right\vert 1_{\{\left\vert X_n\right\vert\geq \varepsilon\}}\right] \leq 2\varepsilon, \]
showing that \(X_n\) converges to \(0\) in \(L^1\).
Step 2: Convergence in \(L^1\) implies the convergence of the norms: This step follows directly from the reverse triangle inequality
\[ \left\vert \left\Vert X_n\right\Vert_1 - \left\Vert X\right\Vert_1 \right\vert \leq \left\Vert X_n - X\right\Vert_1 \to 0. \]
Step 3: Convergence of the norms implies uniform integrability: For \(M>0\), define \(\varphi_M\) as the identity on \([0,M-1]\), \(0\) on \([M,\infty)\), and linearly interpolated in between. Let \(\varepsilon>0\). Using the dominated convergence theorem, choose \(M\) such that
\[ E\left[\left\vert X\right\vert\right] - E\left[\varphi_M(\left\vert X\right\vert)\right] \leq \frac{\varepsilon}{2}, \]
since \(\varphi_M(\left\vert X\right\vert)\) converges to \(|X|\) as \(M\to\infty\) and is dominated by \(\left\vert X\right\vert \in L^1\). By continuity of \(\varphi_M\), it follows that
\[ \varphi_M(\left\vert X_n\right\vert)\to \varphi_M(\left\vert X\right\vert) \]
in probability. Since \(\varphi_M(\left\vert X_n\right\vert)\leq M\) for every \(n\), the dominated convergence theorem yields
\[ E\left[\varphi_M(\left\vert X_n\right\vert)\right] \to E\left[\varphi_M(\left\vert X\right\vert)\right]. \]
Hence, together with \(E[\left\vert X_n\right\vert]\to E[\left\vert X\right\vert]\), there exists some integer \(n_0\) such that
\[ E\left[\left\vert X_n\right\vert\right] - E\left[\varphi_M(\left\vert X_n\right\vert)\right] \leq \varepsilon \]
for every \(n\geq n_0\). Hence, since \(x 1_{\{x\geq M\}} \leq x - \varphi_M(x)\) for every \(x\geq 0\), it follows that
\[ E\left[\left\vert X_n\right\vert 1_{\{\left\vert X_n\right\vert\geq M\}}\right] \leq E\left[\left\vert X_n\right\vert\right] - E\left[\varphi_M(\left\vert X_n\right\vert)\right] \leq \varepsilon \]
for every \(n\geq n_0\). Increasing the value of \(M\) if necessary ensures this inequality also holds for the finitely many \(n< n_0\), since each \(X_n\) is integrable, concluding the uniform integrability of \((X_n)\).
Theorem
- Let \(X\) be an integrable random variable and \((\mathcal{F}_i)\) an arbitrary family of \(\sigma\)-algebras \(\mathcal{F}_i\subseteq \mathcal{F}\). Then, \((E[X|\mathcal{F}_i])\) is uniformly integrable.
- Let \((X_i)\) be a family of random variables bounded in \(L^p\) for \(1<p\leq \infty\). Then, \((X_i)\) is uniformly integrable.
Proof
Since \(X\) is integrable, it is in particular uniformly integrable. Therefore, there exists an increasing convex function \(\varphi\) with \(\varphi(x)/x\to \infty\) such that \(E[\varphi(|X|)]<\infty\). By the conditional version of Jensen's inequality and the tower property, it follows that
\[ E\left[\varphi\left(\left\vert E[X|\mathcal{F}_i]\right\vert\right)\right] \leq E\left[E\left[\varphi(\left\vert X\right\vert)\middle|\mathcal{F}_i\right]\right] = E\left[\varphi(\left\vert X\right\vert)\right] < \infty \quad \text{for every } i, \]
showing by de la Vallée Poussin's criterion that \((E[X|\mathcal{F}_i])\) is uniformly integrable.
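The contraction behind this argument can be sketched numerically (a hypothetical discrete example, not from the text): conditioning on a finite partition replaces each sample by its cell average, and Jensen's inequality forces \(E[\varphi(|E[X|\mathcal{F}]|)] \leq E[\varphi(|X|)]\) for convex increasing \(\varphi\).

```python
import numpy as np

# Hypothetical sketch: F is generated by a 10-cell partition, so E[X|F]
# replaces each sample by the average over its cell. For the convex,
# increasing phi(x) = x^2 of de la Vallee-Poussin type, Jensen gives
# E[phi(|E[X|F]|)] <= E[phi(|X|)], uniformly over all such partitions.

rng = np.random.default_rng(2)
X = rng.standard_normal(1_000_000)
cells = rng.integers(0, 10, size=X.size)   # partition labels generating F

cell_means = np.array([X[cells == k].mean() for k in range(10)])
cond_X = cell_means[cells]                 # empirical version of E[X|F]

phi = lambda x: x**2
print(phi(np.abs(cond_X)).mean(), phi(np.abs(X)).mean())   # first <= second
```

The inequality between the two printed values is exact for the empirical measure, not just up to sampling error, since `cond_X` is itself a conditional expectation under that measure.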
If \((X_i)\) is bounded in \(L^p\) for \(1<p<\infty\), then \(\sup_i E[|X_i|^p]<\infty\), which, for \(\varphi(x)=x^p\) satisfying \(\varphi(x)/x\to \infty\), satisfies de la Vallée Poussin's criterion. Hence, \((X_i)\) is uniformly integrable. For \(p=\infty\), the family is uniformly bounded by a constant, so the tail expectations vanish as soon as \(n\) exceeds that constant.
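Alternatively (an added computation, not from the original text), the second assertion for \(1<p<\infty\) can be checked directly against the definition, using the pointwise bound \(\left\vert x\right\vert \leq \left\vert x\right\vert^p / n^{p-1}\) on \(\{\left\vert x\right\vert \geq n\}\):

```latex
\[
  \sup_i E\bigl[\lvert X_i\rvert 1_{\{\lvert X_i\rvert \ge n\}}\bigr]
  \;\le\; \sup_i E\biggl[\frac{\lvert X_i\rvert^{p}}{n^{p-1}}\,
            1_{\{\lvert X_i\rvert \ge n\}}\biggr]
  \;\le\; \frac{1}{n^{p-1}} \sup_i E\bigl[\lvert X_i\rvert^{p}\bigr]
  \;\xrightarrow[n\to\infty]{}\; 0 .
\]
```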
We finish this section with an extension of Fatou's lemma for the conditional expectation. While in the classical case the sequence must be bounded from below by an integrable random variable, in the conditional case, the negative parts of the sequence must be uniformly integrable conditionally on the sub-\(\sigma\)-algebra.
Conditional Fatou's Lemma for Uniformly Integrable Lower Bound
Let \((X_n)\) be a sequence of random variables and \(\mathcal{G}\subseteq \mathcal{F}\) be a sub-\(\sigma\)-algebra. Suppose that \((X_n^-)\) is uniformly integrable conditionally with respect to \(\mathcal{G}\), in the sense that for every \(\varepsilon>0\), there exists \(M>0\) such that
\[ E\left[X_n^- 1_{\{X_n^- \geq M\}}\middle|\mathcal{G}\right] \leq \varepsilon \quad P\text{-almost surely, for every } n. \]
Then it holds
\[ E\left[\liminf X_n \middle|\mathcal{G}\right] \leq \liminf E\left[X_n\middle|\mathcal{G}\right]. \]
Remark
Note that this is an extension of Fatou's Lemma for a uniformly integrable negative part of the sequence by taking \(\mathcal{G}\) as the trivial \(\sigma\)-algebra.
Furthermore, note that the conditional expectation can be defined for every positive random variable by conditional monotone convergence. It can also be defined for any random variable bounded from below by some integrable random variable.
In this case, \((X_n^-)\) is in particular uniformly integrable. It follows that \(\liminf X_n^-\) is integrable so that the inequality is well defined.
Proof
Let \(X = \liminf X_n\) and \(\varepsilon >0\). By the conditional uniform integrability of \((X_n^-)\), let \(M>0\) be such that
\[ E\left[X_n^- 1_{\{X_n^- \geq M\}}\middle|\mathcal{G}\right] \leq \varepsilon \quad \text{for every } n. \]
Using Fatou's lemma for the conditional expectation of positive random variables, and the fact that \(X + M \leq \liminf (X_n+M)^+\), it follows that
\[ E\left[X\middle|\mathcal{G}\right] + M \leq E\left[\liminf (X_n+M)^+\middle|\mathcal{G}\right] \leq \liminf E\left[(X_n+M)^+\middle|\mathcal{G}\right]. \]
Since \((X_n+M)^+=(X_n+M)+(X_n+M)^-\leq X_n+M+X_n^-1_{\{X_n^->M\}}\), it follows that
\[ \liminf E\left[(X_n+M)^+\middle|\mathcal{G}\right] \leq \liminf E\left[X_n\middle|\mathcal{G}\right] + M + \varepsilon. \]
Combining both inequalities and subtracting \(M\) yields \(E[X|\mathcal{G}] \leq \liminf E[X_n|\mathcal{G}] + \varepsilon\). Since \(\varepsilon>0\) was arbitrary, this completes the proof.