
Lebesgue-Stieltjes Integral

Deterministic Definition

Theorem: Lebesgue-Stieltjes measure

Let \(F:\mathbb{R}\to \mathbb{R}\) be a càd increasing function (in particular càdlàg). There exists a unique measure \(dF\) on the Borel \(\sigma\)-algebra of the real line such that

\[ dF((a,b]) = F(b) - F(a) \]

for any real \(a\leq b\). This measure is called the Lebesgue-Stieltjes measure.

Proof

Let \(\Omega = \mathbb{R}\) and \(\mathcal{B}\) be the Borel \(\sigma\)-algebra which is generated by the semi-ring \(\mathcal{R} = \{(a,b] \colon a \leq b\}\) with the convention that \((a,a] = \emptyset\). Define

\[ P((a,b]) = F(b) - F(a) \]

for any \((a, b]\) in \(\mathcal{R}\). Straightforward inspection shows that \(P\) is additive with \(P[\emptyset] = 0\), and sub-additive. To show that \(P\) extends uniquely to a \(\sigma\)-finite measure on \(\mathcal{B}\), we just have to check that \(P\) is \(\sigma\)-subadditive on \(\mathcal{R}\). Let \(A = (a,b]\) be in \(\mathcal{R}\) and \((A_n) = ((a_n,b_n])\) a countable family in \(\mathcal{R}\) such that \(A \subseteq \cup A_n\). Given \(\varepsilon > 0\), by right-continuity of \(F\) at \(a\), choose some \(a^\varepsilon \in (a,b)\) such that \(F(a^\varepsilon) - F(a) < \varepsilon/2\). Again by right-continuity of \(F\), choose \(b_n^\varepsilon > b_n\) for every \(n\) such that \(F(b_n^\varepsilon) - F(b_n) \leq \varepsilon 2^{-n-1}\). It follows that

\[ [a^\varepsilon, b] \subseteq (a,b] \subseteq \cup (a_n, b_n] \subseteq \cup (a_n, b_n^\varepsilon). \]

However, \([a^\varepsilon, b]\) is a compact set, so the open covering \(\cup (a_n, b_n^\varepsilon)\) of \([a^\varepsilon, b]\) admits a finite subcover. Hence, there exists \(n_0\) such that

\[ (a^\varepsilon, b] \subseteq [a^\varepsilon, b] \subseteq \cup_{k \leq n_0} (a_k, b_k^\varepsilon) \subseteq \cup_{k \leq n_0} (a_k, b_k^\varepsilon] \]

and therefore

\[ \begin{align*} P((a,b]) & = F(b) - F(a)\\ & \leq \varepsilon/2 + F(b) - F(a^\varepsilon)\\ & \leq \varepsilon/2 + \sum_{k=1}^{n_0} (F(b_k^\varepsilon) - F(a_k)) \\ & \leq \varepsilon + \sum (F(b_n) - F(a_n))\\ & = \varepsilon + \sum P((a_n, b_n]). \end{align*} \]

showing that \(P\) is \(\sigma\)-subadditive and therefore extends to a measure on the real line. This measure is \(\sigma\)-finite in the sense that there exists an increasing sequence of sets \(((a_n,b_n])\) such that \(\mathbb{R} = \cup (a_n, b_n]\) and \(P((a_n, b_n]) < \infty\) for every \(n\). Hence, this extension is unique and we denote it \(dF\).

Example

  1. The classical Lebesgue measure \(dx\) on the real line is derived from the continuous increasing function \(F(x) = x\), for which it holds

    \[ dx((a,b]) = b - a \]
  2. If we consider the increasing right continuous function \(F(x) = 1_{[y,\infty)}(x)\) for a given real \(y\) we get the Dirac measure

    \[ dF[A] = \delta_y[A] = \begin{cases} 1 & \text{if } y \in A \\ 0 & \text{otherwise} \end{cases} \]
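Both examples can be checked numerically. The sketch below just evaluates \(F(b) - F(a)\); the helper `stieltjes_measure` is a hypothetical name, not from the notes.

```python
def stieltjes_measure(F, a, b):
    """Lebesgue-Stieltjes measure dF((a, b]) of a half-open interval."""
    assert a <= b
    return F(b) - F(a)

# Example 1: F(x) = x yields the Lebesgue measure, dx((a, b]) = b - a.
lebesgue = stieltjes_measure(lambda x: x, 1.0, 4.0)       # 4 - 1 = 3

# Example 2: F(x) = 1_{[y, oo)}(x) yields the Dirac measure at y.
y = 2.0
F_dirac = lambda x: 1.0 if x >= y else 0.0
dirac_hit = stieltjes_measure(F_dirac, 1.0, 3.0)          # y in (1, 3]
dirac_miss = stieltjes_measure(F_dirac, 3.0, 5.0)         # y not in (3, 5]
```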

Clearly, given two càd increasing functions, their difference defines a signed measure. The set of such differences is better described as the set of functions of bounded variation.

Definition

We say that a function \(F:[0,\infty) \to \mathbb{R}\), \(t \mapsto F(t) := F_t\), is of bounded variation if

\[ S_t = \sup_{\Pi \text{ subdivision of } [0,t]} S_t^{\Pi} < \infty \]

for every \(t\), where for \(\Pi = \{0=t_0<t_1<\ldots<t_n = t\}\)

\[ S_t^{\Pi} = \sum_{1 \leq k \leq n} |F_{t_{k}} - F_{t_{k-1}}| \]
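On a fixed subdivision, \(S_t^{\Pi}\) is a finite sum that can be computed directly. The sketch below uses uniform subdivisions and the illustrative choice \(F = \cos\), whose total variation on \([0, 2\pi]\) is \(2 + 2 = 4\); any subdivision containing the turning point \(\pi\) attains it.

```python
import math

def variation(F, t, n):
    """S_t^Pi for the uniform subdivision Pi = {0, t/n, 2t/n, ..., t}."""
    grid = [t * k / n for k in range(n + 1)]
    return sum(abs(F(grid[k]) - F(grid[k - 1])) for k in range(1, n + 1))

# cos decreases on [0, pi] and increases on [pi, 2*pi]; with n even,
# pi is a grid point, so the sum telescopes to 2 + 2 = 4.
S = variation(math.cos, 2 * math.pi, 1000)
```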

Functions of bounded variation are exactly those that can be written as a difference of two increasing functions.

Proposition

A function is of bounded variation if and only if it can be written as the difference of two increasing functions.

Proof

Let \(F\) be a function of bounded variation. Inspection shows that \(F^+ = (S + F)/2\) and \(F^- = (S - F)/2\) are two increasing functions whose difference is equal to \(F\). This decomposition is actually minimal, in the sense that if \(F = A - B\) for two increasing functions \(A\) and \(B\), then, up to normalization at \(0\), \(F^+ \leq A\) and \(F^- \leq B\). The converse is straightforward.
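The decomposition can be checked numerically on a grid: with the running variation \(S\), the paths \((S+F)/2\) and \((S-F)/2\) are increasing and their difference recovers \(F\). A sketch, again with the illustrative choice \(F = \cos\):

```python
import math

def running_variation(F, grid):
    """Cumulative variation S along the grid points."""
    S, out = 0.0, [0.0]
    for u, v in zip(grid, grid[1:]):
        S += abs(F(v) - F(u))
        out.append(S)
    return out

F = math.cos
grid = [2 * math.pi * k / 1000 for k in range(1001)]
S = running_variation(F, grid)

# Jordan decomposition F = F^+ - F^- on the grid.
Fp = [(s + F(t)) / 2 for s, t in zip(S, grid)]
Fm = [(s - F(t)) / 2 for s, t in zip(S, grid)]

increasing = lambda xs: all(x <= y + 1e-12 for x, y in zip(xs, xs[1:]))
```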

Given a right continuous function \(F\) of bounded variation, we can therefore define the so-called signed measure and its total variation measure

\[ dF = dF^+ - dF^- \quad \text{and} \quad |dF| = dF^+ + dF^- \]

If \(H:[0,\infty) \to \mathbb{R}\) is locally bounded, that is, bounded on any compact interval \([0,t]\), and \(\mathcal{B}([0,\infty))\)-measurable, we can define the integral

\[ \int_{0}^{t} H_s dF_s = \int_{(0,t]} H_s dF_s := \int_{0}^{t} H_s dF^+_s - \int_{0}^{t} H_s dF^-_s \]

which is called the Lebesgue-Stieltjes integral of \(H\) with respect to \(F\). The integral is understood over the interval \((0,t]\) so that \(\int_0^t dF_s = F_t - F_0\).
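For a pure-jump \(F\), the integral over \((0,t]\) reduces to a finite sum over the jump times. A minimal sketch; the jump locations and the integrand are chosen for illustration:

```python
def ls_integral_pure_jump(H, jumps, t):
    """int_0^t H_s dF_s over (0, t] for F with jumps {time: size}."""
    return sum(H(s) * c for s, c in jumps.items() if 0 < s <= t)

# F = 1_{[1, oo)} + 2 * 1_{[2, oo)}: jumps of size 1 at t=1 and 2 at t=2.
jumps = {1.0: 1.0, 2.0: 2.0}

val = ls_integral_pure_jump(lambda s: s ** 2, jumps, 3.0)   # 1*1 + 4*2 = 9
# Consistency with int_0^t dF_s = F_t - F_0:
mass = ls_integral_pure_jump(lambda s: 1.0, jumps, 1.5)     # only the jump at 1
```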

Proposition: Chain Rule or Integration by Parts

Let \(F\) and \(G\) be two right continuous functions of bounded variation, then it holds

\[ \begin{align*} F_t G_t & = F_0 G_0 + \int_{0}^{t} F_s dG_s + \int_{0}^{t} G_{s-} dF_s\\ & = F_0 G_0 + \int_{0}^{t} F_{s-} dG_s + \int_{0}^{t} G_{s} dF_s\\ & = F_0 G_0 + \int_{0}^{t} F_{s-} dG_s + \int_{0}^{t} G_{s-} dF_s + \sum_{0<s \leq t} \Delta F_s \Delta G_s \end{align*} \]

where \(F_{s-} = \lim_{u \nearrow s} F_u\) and \(\Delta F_s = F_s - F_{s-} = dF[\{s\}]\).

Remark

This proposition is often stated in differential form, that is

\[\begin{equation*} d(FG) = F_{\cdot -}dG + G_{\cdot -}dF + \Delta F \Delta G \end{equation*}\]

Note that if \(F\) and \(G\) are continuous, the more classical chain rule formula reads as follows

\[\begin{equation*} d(FG) = FdG + G dF \end{equation*}\]

Proof

Consider the product measure \(dF \otimes dG\) on \([0,\infty) \times [0,\infty)\). Using the triangular equality

\[ 1_{(0,t]}(s_1) 1_{(0,t]}(s_2) = 1_{(0,t]}(s_1) 1_{(0,s_1]}(s_2) + 1_{(0,s_2)}(s_1) 1_{(0,t]}(s_2) \]

together with Fubini-Tonelli, we obtain

\[ \begin{align*} dF \otimes dG \left[ (0,t] \times (0,t] \right] & = (F_t - F_0)(G_t - G_0) \\ & = \int \int 1_{(0,t]}(s_1) 1_{(0,t]}(s_2) \, dF_{s_1} dG_{s_2} \\ & = \int_{(0,t]} \left( \int_{(0,s_1]} dG_{s_2} \right) dF_{s_1} + \int_{(0,t]} \left( \int_{(0,s_2)} dF_{s_1} \right) dG_{s_2} \\ & = \int_{(0,t]} \left(G_s - G_0\right) \, dF_s + \int_{(0,t]} \left(F_{s-} - F_0\right) \, dG_s \\ & = \int_{(0,t]} G_s \, dF_s + \int_{(0,t]} F_{s-} \, dG_s - G_0 (F_t - F_0) - F_0 (G_t - G_0) \end{align*} \]

showing the second equality. The first follows from the same argument with \(F\) and \(G\) swapped. As for the last one, note that \(F = F_{-} + \Delta F\), and since \(F\) can have only countably many discontinuity points, the last equality follows.
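The last form of the integration-by-parts formula can be sanity-checked on pure-jump paths, where every integral is a finite sum over the jump times. The paths below are chosen for illustration, and left limits are evaluated just before each jump:

```python
def F(t): return 1.0 if t >= 1 else 0.0             # one jump of size 1 at t=1
def G(t): return 2.0 * (t >= 1) + 1.0 * (t >= 2)    # jumps at t=1 and t=2

jump_times = [1.0, 2.0]
left = lambda f, s: f(s - 1e-9)                     # f(s-) for these step paths

t = 3.0
int_F_left_dG = sum(left(F, s) * (G(s) - left(G, s)) for s in jump_times if s <= t)
int_G_left_dF = sum(left(G, s) * (F(s) - left(F, s)) for s in jump_times if s <= t)
jump_term = sum((F(s) - left(F, s)) * (G(s) - left(G, s)) for s in jump_times if s <= t)

# F_t G_t = F_0 G_0 + int F_{s-} dG + int G_{s-} dF + sum dF dG
rhs = F(0) * G(0) + int_F_left_dG + int_G_left_dF + jump_term
```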

This basic form of the classical chain rule formula leads to the general one

Theorem: Chain Rule Formula

Let \(f:\mathbb{R} \to \mathbb{R}\) be a continuously differentiable function and \(F\colon [0, \infty)\to \mathbb{R}\) be a càd bounded variation function. It follows that

\[ f(F_t) = f(F_0) + \int_{0}^{t} f^\prime(F_{s-}) \, dF_s + \sum_{0 < s \leq t} \left( f(F_s) - f(F_{s-}) - f^\prime(F_{s-}) \Delta F_s \right) \]

In particular, if \(F\) is continuous, it holds

\[ f(F_t) = f(F_0) + \int_{0}^{t} f^\prime(F_s) \, dF_s \]

Remark

The differential form of this chain rule formula reads as follows

\[ \begin{equation*} df(F) = f^\prime(F_{\cdot -}) dF + \Delta f(F) - f^\prime(F_{\cdot -})\Delta F \end{equation*} \]

and if \(F\) is continuous we obtain the classical chain rule formula

\[ df(F) = f^\prime(F)dF \]

which is the building block for differential equations.
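For a pure-jump path the chain rule formula again reduces to finite sums, which allows a direct numerical check. The path, the function \(f\), and the jump times below are illustrative:

```python
f  = lambda x: x ** 3                               # a smooth test function
fp = lambda x: 3 * x ** 2                           # its derivative

def F(t): return 1.0 * (t >= 1) + 2.0 * (t >= 2)    # jumps of size 1 and 2
jump_times = [1.0, 2.0]
left = lambda s: F(s - 1e-9)                        # F(s-) for this step path

t = 3.0
# int_0^t f'(F_{s-}) dF_s reduces to a sum over jumps.
integral = sum(fp(left(s)) * (F(s) - left(s)) for s in jump_times if s <= t)
# Jump-correction term of the chain rule formula.
jump_corr = sum(f(F(s)) - f(left(s)) - fp(left(s)) * (F(s) - left(s))
                for s in jump_times if s <= t)

rhs = f(F(0)) + integral + jump_corr
```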

Proof

We here just sketch the idea of the proof, as a more general one will be shown for the Ito-Formula later in the lecture.

  • Step 1: Any continuously differentiable function \(f \colon \mathbb{R} \to \mathbb{R}\) can be approximated uniformly on any compact set, together with its derivative, by polynomials \(f(x) = \sum_{k\leq n} \alpha_k x^k\). Hence, it is enough to show the formula for polynomials of arbitrary degree.

  • Step 2: Since the integral is a linear operator, showing the formula for any polynomial is equivalent to showing it for any monomial \(x^n\). We show by induction that the chain rule formula holds for any monomial \(x^n\), that is

    \[ \begin{equation*} d F^n = n F^{n-1}_{\cdot -} dF + \Delta F^n - n F^{n-1}_{\cdot -}\Delta F \end{equation*} \]

    For \(n=1\), this is immediate. Suppose that it holds for any \(m \leq n-1\); we show it for \(n\geq 2\). Defining \(G = F^{n-1}\), from the product rule and the induction hypothesis for \(G\), it holds that

    \[ \begin{align*} dF^n & = d(FG) \\ & = F_{\cdot -} dG + G_{\cdot -} dF + \Delta F \Delta G\\ & = F_{\cdot -}\left( (n-1) F^{n-2}_{\cdot -} dF + \Delta F^{n-1} - (n-1)F^{n-2}_{\cdot -}\Delta F \right) + F^{n-1}_{\cdot -}dF + \Delta F \Delta F^{n-1}\\ & = n F^{n-1}_{\cdot -}dF + F_{\cdot -}F^{n-1} - F_{\cdot - }^n -(n-1)F^{n-1}_{\cdot -}F + (n-1)F^n_{\cdot -}\\ & \quad \quad \quad \quad + F^n + F^n_{\cdot -} - F_{\cdot -}F^{n-1} - F F^{n-1}_{\cdot -}\\ & = n F^{n-1}_{\cdot -}dF + \Delta F^n - n F^{n-1}_{\cdot -} \Delta F \end{align*} \]
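Across a single jump, the \(n=2\) case of this identity reduces to the binomial computation \(\Delta(F^2) = 2F_{\cdot-}\Delta F + (\Delta F)^2\), which can be checked directly (the jump values are illustrative):

```python
# F jumps from 1 to 3 at some time s; check Delta(F^2) = 2 F_{-} dF + (dF)^2.
F_left, F_right = 1.0, 3.0
dF = F_right - F_left                 # Delta F = 2

lhs = F_right ** 2 - F_left ** 2      # Delta(F^2) = 9 - 1
rhs = 2 * F_left * dF + dF ** 2       # 2*1*2 + 4
```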

Remark

Note that according to the proof, we can show the chain rule formula for any monomial of the type \(x^n y^m\). Hence the chain rule formula extends to multivariate functions \(f\), which in the case of continuous functions of bounded variation \(F\) and \(G\) yields

\[\begin{equation*} df(F, G) = \partial_x f(F, G)\,dF +\partial_y f(F, G)\,dG \end{equation*}\]

Stochastic Lebesgue-Stieltjes Integral

We can extend this integration procedure to \(\omega\)-dependent paths as follows.

Definition

A process \(A\) is called increasing if
- \(A\) is adapted with \(A_0 = 0\)
- \(A\) is càdlàg, and almost all sample paths are increasing

An increasing process \(A\) is called integrable if \(E[A_t] < \infty\) for every \(t\).

A process \(A\) is called of bounded variation if \(A\) is the difference of two increasing processes.

We denote by \(dA\) the \(\omega\)-wise \(\sigma\)-finite signed measure \(dA_t(\omega)\) induced by \(A\). For every measurable process(1) \(H\) which is locally bounded, that is, \((\omega,s) \mapsto H_s(\omega)\) is bounded for almost all \(\omega\) and all \(s \in [0,t]\), for every \(t\), we can define the \(\omega\)-wise Lebesgue-Stieltjes integral

  1. Recall that a measurable process is a process such that \((\omega,t) \mapsto H_t(\omega)\) is \(\mathcal{F} \otimes \mathcal{B}([0,\infty))\)-measurable; in particular \(t \mapsto H_t(\omega)\) is \(\mathcal{B}([0,\infty))\)-measurable for every \(\omega\).
\[ \int_0^t H_s(\omega) \, dA_s(\omega) \]

for any \(\omega\) and \(0\leq t<\infty\). We usually omit the time and \(\omega\) indices and simplify the notation to \(\int_{0}^{t} H \, dA\).

Proposition

If \(H\) is a locally bounded measurable process and \(A\) is a process of bounded variation, then

\[ \int H \, dA = \left( \int_{0}^{t} H \, dA \right)_{t \geq 0} \]

defines a càdlàg measurable process. If furthermore \(H\) is progressive, then \(\int H \, dA\) is progressive, càdlàg, and of bounded variation.

Proof

The proof is quite easy: one approximates \(H\) by sequences of simple step processes with the right measurability. Let us however check that if \(H\) is progressive, then \(\int H \, dA\) is of bounded variation. Since it is càdlàg and adapted with value \(0\) at \(0\), we just have to check the bounded variation property. The process \(H\) being locally bounded, let \(K\) be such that \(|H_s(\omega)| \leq K\) for every \(0\leq s\leq t\). For any partition \(\Pi\) of \([0,t]\), it holds

\[ \begin{align*} \sum_{\Pi} \left| \int_{t_{k-1}}^{t_k} H_s(\omega) \, dA_s(\omega) \right| & \leq \sum_{\Pi} \int_{t_{k-1}}^{t_k} |H_s(\omega)| \, d|A|_s(\omega)\\ & \leq K \sum_{\Pi} \int_{t_{k-1}}^{t_k} d|A|_s(\omega) \\ & = K \, |dA|\left((0,t]\right)(\omega) < \infty \end{align*} \]

showing that \(\int H \, dA\) is of bounded variation.

Exercise

Using Radon-Nikodym, show that for any two increasing processes \(A\) and \(B\) such that \(A - B\) is still increasing, there exists an adapted measurable process \(H\) such that

\[ B = \int H dA \]

In particular, if \(A\) is of bounded variation, there exists \(H\) adapted and measurable such that

\[ A = \int H \left| dA \right| \]
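In the pure-jump case the exercise is elementary: since \(A - B\) is increasing, every jump of \(B\) is dominated by the corresponding jump of \(A\), and on the jump times \(H\) is simply the ratio of the jumps. A discrete sketch with illustrative jump data:

```python
# Jumps of two increasing pure-jump processes with A - B increasing,
# so dB <= dA at every jump time and H = dB/dA is well defined there.
dA = {1.0: 2.0, 2.0: 3.0}            # jumps of A
dB = {1.0: 1.0, 2.0: 2.0}            # jumps of B

H = {s: dB[s] / dA[s] for s in dA}   # Radon-Nikodym density on the jump times

# Recover B_t = (int H dA)_t at t = 3.
B_t = sum(H[s] * dA[s] for s in dA if s <= 3.0)
```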