Lebesgue-Stieltjes Integral
Deterministic Definition
Theorem: Lebesgue-Stieltjes measure
Let \(F:\mathbb{R}\to \mathbb{R}\) be a càd increasing function (in particular càdlàg). There exists a unique measure \(dF\) on the Borel \(\sigma\)-algebra of the real line such that
\[ dF\left( (a,b] \right) = F(b) - F(a) \]
for any real \(a\leq b\). This measure is called the Lebesgue-Stieltjes measure.
Proof
Let \(\Omega = \mathbb{R}\) and \(\mathcal{B}\) be the Borel \(\sigma\)-algebra, which is generated by the semi-ring \(\mathcal{R} = \{(a,b] \colon a \leq b\}\) with the convention that \((a,a] = \emptyset\). Define
\[ P\left[ (a,b] \right] = F(b) - F(a) \]
for any \((a, b]\) in \(\mathcal{R}\). Straightforward inspection shows that \(P\) is additive with \(P[\emptyset] = 0\), and that \(P\) is sub-additive. To show that \(P\) extends uniquely to a \(\sigma\)-finite measure on \(\mathcal{B}\), we just have to check that \(P\) is \(\sigma\)-subadditive on \(\mathcal{R}\). Let \(A = (a,b]\) be in \(\mathcal{R}\) and \((A_n) = ((a_n,b_n])\) a countable family in \(\mathcal{R}\) such that \(A \subseteq \cup A_n\). Given \(\varepsilon > 0\), by right-continuity of \(F\), choose some \(a^\varepsilon \in (a,b)\) such that \(F(a^\varepsilon) - F(a) < \varepsilon/2\). Also using the right-continuity of \(F\), choose \(b_n^\varepsilon > b_n\) for every \(n\) such that \(F(b_n^\varepsilon) - F(b_n) \leq \varepsilon 2^{-n-1}\). It follows that
\[ [a^\varepsilon, b] \subseteq (a,b] \subseteq \bigcup_n (a_n, b_n] \subseteq \bigcup_n (a_n, b_n^\varepsilon) \]
However, \([a^\varepsilon, b]\) is a compact set; therefore, the open covering \(\cup (a_n, b_n^\varepsilon)\) of \([a^\varepsilon, b]\) admits a finite subcover. Hence, there exists \(n_0\) such that
\[ [a^\varepsilon, b] \subseteq \bigcup_{n \leq n_0} (a_n, b_n^\varepsilon) \]
and therefore
\[ F(b) - F(a) \leq F(b) - F(a^\varepsilon) + \frac{\varepsilon}{2} \leq \sum_{n \leq n_0} \left( F(b_n^\varepsilon) - F(a_n) \right) + \frac{\varepsilon}{2} \leq \sum_{n} \left( F(b_n) - F(a_n) \right) + \varepsilon \]
Since \(\varepsilon\) is arbitrary, \(P\) is \(\sigma\)-subadditive, showing that \(P\) extends to a measure on the real line. This measure is also \(\sigma\)-finite in the sense that there exists an increasing sequence of sets \(((a_n, b_n])\) such that \(\mathbb{R} = \cup (a_n, b_n]\) and \(P[(a_n, b_n]] < \infty\) for every \(n\). Hence, this extension is unique and we denote it by \(dF\).
Example
- The classical Lebesgue measure \(dx\) on the real line is derived from the continuous increasing function \(F(x) = x\), for which it holds
\[ dx((a,b]) = b - a \]
- If we consider the increasing right-continuous function \(F(x) = 1_{[y,\infty)}(x)\) for a given real \(y\), we get the Dirac measure
\[ dF[A] = \delta_y[A] = \begin{cases} 1 & \text{if } y \in A \\ 0 & \text{otherwise} \end{cases} \]
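Integration against this measure reduces to evaluation at the jump point: for any bounded Borel function \(H\) it holds
\[ \int_{\mathbb{R}} H \, dF = \int_{\mathbb{R}} H \, d\delta_y = H(y) \]
More generally, a càd increasing step function \(F = \sum_k c_k 1_{[y_k,\infty)}\) with \(c_k \geq 0\) induces the purely atomic measure \(dF = \sum_k c_k \delta_{y_k}\).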
Clearly, given two càd increasing functions, their difference defines a signed measure. The set of such differences is better described as the set of functions of bounded variation.
Definition
We say that a function \(F:[0,\infty) \to \mathbb{R}\), \(t \mapsto F(t) := F_t\), is of bounded variation if
\[ S_t := \sup_{\Pi} S_t(\Pi) < \infty \]
for every \(t\), where for a partition \(\Pi = \{0=t_0<t_1<\ldots<t_n = t\}\) of \([0,t]\)
\[ S_t(\Pi) := \sum_{k=1}^{n} \left| F_{t_k} - F_{t_{k-1}} \right| \]
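For instance, every increasing function \(F\) is of bounded variation: the absolute values can be dropped and the sum telescopes, so that \(S_t = F_t - F_0\). On the other hand, the continuous function defined by \(F_t = t \sin(1/t)\) for \(t > 0\) and \(F_0 = 0\) is not of bounded variation: along the points \(t_k = 2/((2k+1)\pi)\), where \(|F_{t_k}| = t_k\) with alternating signs,
\[ S_1 \geq \sum_{k \geq 1} \left| F_{t_k} - F_{t_{k+1}} \right| = \sum_{k \geq 1} \left( t_k + t_{k+1} \right) = \infty \]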
Functions of bounded variation are precisely those that can be written as the difference of two increasing functions.
Proposition
A function is of bounded variation if and only if it can be written as the difference of two increasing functions.
Proof
Let \(F\) be a function of bounded variation and \(S\) its total variation defined above. Inspection shows that \(F^+ = (S + F)/2\) and \(F^- = (S - F)/2\) are two increasing functions whose difference is equal to \(F\). This decomposition is actually the minimal one, in the sense that if \(F = A - B\) for two increasing functions \(A\) and \(B\), then the increments of \(F^+\) and \(F^-\) are dominated by those of \(A\) and \(B\) respectively; in particular \(A - F^+\) and \(B - F^-\) are increasing. The converse is immediate, since for \(F = A - B\) with \(A\) and \(B\) increasing, \(S_t \leq (A_t - A_0) + (B_t - B_0) < \infty\).
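For a continuously differentiable function \(F\) with \(F_0 = 0\), one has \(S_t = \int_0^t |F^\prime_s| \, ds\) and the minimal decomposition reads
\[ F^+_t = \int_0^t (F^\prime_s)^+ \, ds, \qquad F^-_t = \int_0^t (F^\prime_s)^- \, ds \]
where \(x^+ = \max(x,0)\) and \(x^- = \max(-x,0)\).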
Given a right continuous function \(F\) of bounded variation, we can therefore define a so-called signed measure and the absolute value of this measure
\[ dF := dF^+ - dF^-, \qquad |dF| := dF^+ + dF^- = dS \]
If \(H:[0,\infty) \to \mathbb{R}\) is locally bounded, that is, bounded on any compact interval \([0,t]\), and \(\mathcal{B}([0,\infty))\)-measurable, we can define the integral
\[ \int_0^t H_s \, dF_s := \int_{(0,t]} H_s \, dF^+_s - \int_{(0,t]} H_s \, dF^-_s \]
which is called the Lebesgue-Stieltjes integral of \(H\) with respect to \(F\). The integral is understood over the interval \((0,t]\) so that \(\int_0^t dF_s = F_t - F_0\).
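Two special cases are instructive. If \(F\) is a càd step function with jump times \(0 < s_1 < s_2 < \ldots\), the integral reduces to a sum, and if \(F_t = F_0 + \int_0^t f_s \, ds\) for a locally integrable function \(f\), it reduces to a classical Lebesgue integral:
\[ \int_0^t H_s \, dF_s = \sum_{s_k \leq t} H_{s_k} \Delta F_{s_k}, \qquad \text{respectively} \qquad \int_0^t H_s \, dF_s = \int_0^t H_s f_s \, ds \]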
Proposition: Chain Rule or Integration by Parts
Let \(F\) and \(G\) be two right continuous functions of finite variation, then it holds
\[ \begin{align*} F_t G_t - F_0 G_0 & = \int_0^t F_{s-} \, dG_s + \int_0^t G_s \, dF_s \\ & = \int_0^t G_{s-} \, dF_s + \int_0^t F_s \, dG_s \\ & = \int_0^t F_{s-} \, dG_s + \int_0^t G_{s-} \, dF_s + \sum_{0 < s \leq t} \Delta F_s \Delta G_s \end{align*} \]
where \(F_{s-} = \lim_{u \nearrow s} F_u\) and \(\Delta F_s = F_s - F_{s-} = dF[\{s\}]\).
Remark
This proposition is often stated in differential form, that is
\[ d(FG) = F_{-} \, dG + G_{-} \, dF + \Delta F \, \Delta G \]
Note that if \(F\) and \(G\) are continuous, the more classical chain rule formula reads as follows
\[ d(FG) = F \, dG + G \, dF \]
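The jump term cannot be dropped in general. Taking \(F = G = 1_{[1,\infty)}\), both integrals \(\int_0^t F_{s-} \, dG_s\) and \(\int_0^t G_{s-} \, dF_s\) vanish, since the integrands are equal to zero at the only jump time \(s = 1\), whereas for \(t \geq 1\)
\[ F_t G_t - F_0 G_0 = 1 = \sum_{0 < s \leq t} \Delta F_s \Delta G_s \]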
Proof
Considering the product measure \(dF \otimes dG\) on \([0,\infty) \times [0,\infty)\), using the triangular equality
\[ 1_{(0,t] \times (0,t]}(u,s) = 1_{\{0 < u < s \leq t\}}(u,s) + 1_{\{0 < s \leq u \leq t\}}(u,s) \]
and using Fubini-Tonelli, we obtain
\[ \begin{align*} (F_t - F_0)(G_t - G_0) & = dF \otimes dG\left[ (0,t] \times (0,t] \right] \\ & = \int_{(0,t]} \left( \int_{(0,s)} dF_u \right) dG_s + \int_{(0,t]} \left( \int_{(0,u]} dG_s \right) dF_u \\ & = \int_0^t \left( F_{s-} - F_0 \right) dG_s + \int_0^t \left( G_u - G_0 \right) dF_u \end{align*} \]
showing the first equality after rearranging the terms. The second follows exactly the same argumentation by swapping \(F\) and \(G\). As for the last one, note that \(F = F_{-} + \Delta F\), and since \(F\) can only have countably many discontinuity points, \(\int_0^t \Delta F_s \, dG_s = \sum_{0 < s \leq t} \Delta F_s \Delta G_s\) and the last equality follows.
This basic form of the classical chain rule formula leads to the general one
Theorem: Chain Rule Formula
Let \(f:\mathbb{R} \to \mathbb{R}\) be a continuously differentiable function and \(F\colon [0, \infty)\to \mathbb{R}\) be a càd bounded variation function. It follows that
\[ f(F_t) - f(F_0) = \int_0^t f^\prime(F_{s-}) \, dF_s + \sum_{0 < s \leq t} \left( \Delta f(F_s) - f^\prime(F_{s-}) \, \Delta F_s \right) \]
where \(\Delta f(F_s) = f(F_s) - f(F_{s-})\). In particular, if \(F\) is continuous, it holds
\[ f(F_t) - f(F_0) = \int_0^t f^\prime(F_s) \, dF_s \]
Remark
The differential form of this chain rule formula reads as follows
\[ df(F) = f^\prime(F_{-}) \, dF + \Delta f(F) - f^\prime(F_{-}) \, \Delta F \]
and if \(F\) is continuous we obtain the classical chain rule formula
\[ df(F) = f^\prime(F) \, dF \]
which is the building block for differential equations.
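As a sanity check, take \(f(x) = x^2\) and \(F = 1_{[1,\infty)}\): the integral \(\int_0^t 2 F_{s-} \, dF_s\) vanishes since \(F_{1-} = 0\), while the jump correction at \(s = 1\) contributes \(f(F_1) - f(F_{1-}) - f^\prime(F_{1-}) \Delta F_1 = 1\), in agreement with \(f(F_t) - f(F_0) = 1\) for \(t \geq 1\). For a continuous \(F\) of bounded variation, the formula gives for instance
\[ F_t^2 - F_0^2 = 2 \int_0^t F_s \, dF_s \]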
Proof
We just sketch the idea of the proof here, as a more general one will be shown for the Itô formula later in the lecture.
- Step 1: Any continuously differentiable function \(f \colon \mathbb{R} \to \mathbb{R}\) can be approximated, together with its derivative, uniformly on any compact by polynomials \(p(x) = \sum_{k\leq n} \alpha_k x^k\). Hence, it is enough to show the formula for polynomials of arbitrary degree.
- Step 2: Since the integral is a linear operator, showing the formula for any polynomial is equivalent to showing it for any monomial \(x^n\). We show by induction that the chain rule formula holds for any monomial \(x^n\), that is
\[ \begin{equation*} d F^n = n F^{n-1}_{\cdot -} dF + \Delta F^n - n F^{n-1}_{\cdot -}\Delta F \end{equation*} \]
For \(n=1\), this is immediate. Suppose that it holds for any \(m \leq n-1\); we show it for \(n\geq 2\). Defining \(G = F^{n-1}\), from the product chain rule formula and the induction hypothesis for \(G\), it holds that
\[ \begin{align*} dF^n & = d (FG) \\ & = F_{\cdot -} dG + G_{\cdot -} dF + \Delta F \Delta G\\ & = F_{\cdot -}\left( (n-1) F^{n-2}_{\cdot -} dF + \Delta F^{n-1} - (n-1)F^{n-2}_{\cdot -}\Delta F \right) + F^{n-1}_{\cdot -}dF + \Delta F \Delta F^{n-1}\\ & = n F^{n-1}_{\cdot -}dF + F_{\cdot -}F^{n-1} - F_{\cdot - }^n -(n-1)F^{n-1}_{\cdot -}F + (n-1)F^n_{\cdot -}\\ & \quad \quad \quad \quad + F^n + F^n_{\cdot -} - F_{\cdot -}F^{n-1} - F F^{n-1}_{\cdot -}\\ & = n F^{n-1}_{\cdot -}dF + \Delta F^n - n F^{n-1}_{\cdot -} \Delta F \end{align*} \]
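For \(n = 2\), the formula takes a particularly simple form: since \(\Delta F^2 - 2 F_{\cdot -} \Delta F = (F + F_{\cdot -}) \Delta F - 2 F_{\cdot -} \Delta F = (\Delta F)^2\), it reads
\[ dF^2 = 2 F_{\cdot -} \, dF + (\Delta F)^2 \]
which is nothing but the integration by parts formula with \(G = F\).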
Remark
Note that according to the proof, we can show the chain rule formula for any monomial of the type \(x^n y^m\). Hence the chain rule formula extends to multivariate functions \(f\), which in the case of continuous functions \(F\) and \(G\) of bounded variation yields
\[ f(F_t, G_t) - f(F_0, G_0) = \int_0^t \partial_x f(F_s, G_s) \, dF_s + \int_0^t \partial_y f(F_s, G_s) \, dG_s \]
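Taking \(f(x,y) = xy\) recovers the integration by parts formula for continuous \(F\) and \(G\),
\[ F_t G_t - F_0 G_0 = \int_0^t G_s \, dF_s + \int_0^t F_s \, dG_s \]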
Stochastic Lebesgue-Stieltjes Integral
We can extend this integration procedure to \(\omega\)-dependent paths as follows.
Definition
A process \(A\) is called increasing if
- \(A\) is adapted with \(A_0 = 0\)
- \(A\) is càdlàg, and almost all sample paths are increasing
An increasing process \(A\) is called integrable if \(E[A_t] < \infty\) for every \(t\).
A process \(A\) is called of bounded variation if \(A\) is the difference of two increasing processes.
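A typical example of an increasing process is the counting process associated with a sequence of stopping times \(0 < T_1 \leq T_2 \leq \ldots\) increasing to infinity,
\[ A_t = \sum_{n \geq 1} 1_{\{T_n \leq t\}} \]
which starts at \(A_0 = 0\) since \(T_1 > 0\), is adapted since \(\{A_t \geq n\} = \{T_n \leq t\} \in \mathcal{F}_t\), and has càdlàg step sample paths since \(T_n \uparrow \infty\). The difference of two such counting processes is a simple example of a process of bounded variation.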
We denote by \(dA\) the \(\omega\)-wise \(\sigma\)-finite signed measure \(dA_t(\omega)\) induced by \(A\). For every measurable process(1) \(H\) which is locally bounded, that is, \((\omega,s) \mapsto H_s(\omega)\) is uniformly bounded for almost all \(\omega\) and all \(s \in [0,t]\) for every \(t\), we can define the \(\omega\)-wise Lebesgue-Stieltjes integral
\[ \int_0^t H_s(\omega) \, dA_s(\omega) \]
for any \(\omega\) and \(0\leq t<\infty\). We usually do not mention the time and \(\omega\) indices and simplify the notation to \(\int_{0}^{t} H \, dA\).
- Recall that a measurable process is a process such that \((\omega,t) \mapsto H_t(\omega)\) is \(\mathcal{F} \otimes \mathcal{B}([0,\infty))\)-measurable; in particular \(t \mapsto H_t(\omega)\) is \(\mathcal{B}([0,\infty))\)-measurable for every \(\omega\).
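For instance, if almost every sample path of \(A\) is a step function with jump times \(0 < T_1(\omega) < T_2(\omega) < \ldots\), the integral reduces \(\omega\)-wise to the sum
\[ \int_0^t H_s(\omega) \, dA_s(\omega) = \sum_{n \colon T_n(\omega) \leq t} H_{T_n(\omega)}(\omega) \, \Delta A_{T_n(\omega)}(\omega) \]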
Proposition
If \(H\) is a locally bounded measurable process and \(A\) is a process of bounded variation, then
\[ \left( \int_0^t H_s \, dA_s \right)_{t \geq 0} \]
defines a càdlàg measurable process. If furthermore \(H\) is progressive, then \(\int H \, dA\) is progressive, càdlàg, and of bounded variation.
Proof
The proof is quite easy, as one approximates \(H\) by sequences of simple step processes with the right measurability. Let us however check that if \(H\) is progressive, then \(\int H \, dA\) is of bounded variation. Since it is càdlàg, adapted, and starts at \(0\), we just have to check that almost every sample path has finite variation on \([0,t]\) for every \(t\). The process \(H\) being locally bounded, let \(K\) be such that \(|H_s(\omega)| \leq K\) for every \(0\leq s\leq t\). For any partition \(\Pi = \{0 = t_0 < t_1 < \ldots < t_n = t\}\) of \([0,t]\), it holds
\[ \sum_{k=1}^{n} \left| \int_{t_{k-1}}^{t_k} H_s \, dA_s \right| \leq K \sum_{k=1}^{n} |dA|\left( (t_{k-1}, t_k] \right) = K \, |dA|\left( (0,t] \right) < \infty \]
showing that \(\int H \, dA\) is of bounded variation.
Exercise
Using Radon-Nikodym, show that for any two increasing processes \(A\) and \(B\) such that \(A - B\) is still increasing, there exists an adapted measurable process \(H\) such that
\[ B_t = \int_0^t H_s \, dA_s \]
In particular, if \(A\) is of bounded variation, there exists \(H\) adapted and measurable such that
\[ A_t = \int_0^t H_s \, d|A|_s \]
where \(|A|\) denotes the total variation process of \(A\).