In the last set of notes we talked about how to *differentiate* \(k\)-forms using the exterior derivative \(d\). We’d also like some way to *integrate* forms. Actually, there’s surprisingly little to say about integration given the setup we already have. Suppose we want to compute the total area \(A_\Omega\) of a region \(\Omega\) in the plane:

If you remember back to calculus class, the basic idea was to break up the domain into a bunch of little pieces that are easy to measure (like squares) and add up their areas:

\[ A_\Omega \approx \sum_i A_i. \]

As these squares get smaller and smaller we get a better and better approximation, ultimately achieving the true area

\[ A_\Omega = \int_\Omega dA. \]

Alternatively, we could write the individual areas using differential forms — in particular, \(A_i = dx^1 \wedge dx^2(u,v)\), where \(u\) and \(v\) are orthogonal vectors spanning the \(i\)th little square. Therefore, the area element \(dA\) is really nothing more than the standard volume form \(dx^1 \wedge dx^2\) on \(\mathbb{R}^2\). (Not too surprising, since the whole point of \(k\)-forms was to measure volume!)
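This "sum of little squares" picture is easy to test numerically. The following sketch (a toy illustration, not part of the setup above) approximates the area of the unit disk by summing the contributions \(h \cdot h\) of grid squares whose centers lie inside the disk; as \(h\) shrinks, the sum approaches the true area \(\pi\):

```python
import math

def disk_area(h):
    """Approximate the area of the unit disk by summing the areas of
    small h-by-h squares whose centers lie inside the disk."""
    n = int(math.ceil(1.0 / h))
    total = 0.0
    for i in range(-n, n):
        for j in range(-n, n):
            # center of the (i,j)-th little square
            x, y = (i + 0.5) * h, (j + 0.5) * h
            if x * x + y * y <= 1.0:
                total += h * h  # each square contributes dx^1 ∧ dx^2 ≈ h·h
    return total

print(disk_area(0.1))   # coarse approximation of π
print(disk_area(0.01))  # finer grid — closer to π
```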

To make things more interesting, let’s say that the contribution of each little square is weighted by some scalar function \(\phi\). In this case we get the quantity

\[ \int_\Omega \phi\ dA = \int_\Omega \phi\ dx^1 \wedge dx^2. \]

Again the integrand \(\phi\ dx^1 \wedge dx^2\) can be thought of as a 2-form. In other words, you’ve been working with differential forms your whole life, even if you didn’t realize it! More generally, integrands on an \(n\)-dimensional space are always \(n\)-forms, since we need to “plug in” \(n\) orthogonal vectors representing the local volume. For now, however, looking at surfaces (i.e., 2-manifolds) will give us all the intuition we need.
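The weighted version works the same way: each little square's area gets multiplied by \(\phi\) at its center. Here is a minimal sketch, using the hypothetical choice \(\phi(x,y) = x^2 + y^2\) on the unit disk (chosen only because its exact integral, \(\pi/2\), is easy to check by hand in polar coordinates):

```python
import math

def integrate_phi(h):
    """Approximate ∫_Ω φ dA over the unit disk Ω, where (as an example)
    φ(x, y) = x² + y², by weighting each little square by φ at its center."""
    n = int(math.ceil(1.0 / h))
    total = 0.0
    for i in range(-n, n):
        for j in range(-n, n):
            x, y = (i + 0.5) * h, (j + 0.5) * h
            if x * x + y * y <= 1.0:
                total += (x * x + y * y) * h * h  # φ · dx¹ ∧ dx²
    return total

# exact value: ∫₀^{2π} ∫₀^1 r² · r dr dθ = π/2
print(integrate_phi(0.01))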

**Integration on Surfaces**

If you think back to our discussion of the Hodge star, you’ll remember the *volume form*

\[ \omega = \sqrt{\mathrm{det}(g)} dx^1 \wedge dx^2, \]

which measures the area of little parallelograms on our surface. The factor \(\sqrt{\mathrm{det}(g)}\) reminds us that we can’t simply measure the volume in the domain \(M\) — we also have to take into account any “stretching” induced by the map \(f: M \rightarrow \mathbb{R}^3\). Of course, when we integrate a function on a surface, we should also take this stretching into account. For instance, to integrate a function \(\phi: M \rightarrow \mathbb{R}\), we would write

\[ \int_\Omega \phi \omega = \int_\Omega \phi \sqrt{\mathrm{det}(g)}\ dx^1 \wedge dx^2. \]

In the case of a *conformal* parameterization things become even simpler — since \(\sqrt{\mathrm{det}(g)} = a\) we have just

\[ \int_\Omega \phi a\ dx^1 \wedge dx^2, \]

where \(a: M \rightarrow \mathbb{R}\) is the scaling factor. In other words, we scale the value of \(\phi\) up or down depending on the amount by which the surface locally “inflates” or “deflates.” In fact, this whole story gives a nice geometric interpretation to good old-fashioned integrals: you can imagine that \(\int_\Omega \phi\ dA\) represents the area of some suitably deformed version of the initially planar region \(\Omega\).
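To see the factor \(\sqrt{\mathrm{det}(g)}\) at work, here is a small numeric sketch (an illustration under assumed coordinates, not part of the text above) using the standard latitude–longitude parameterization of the unit sphere, where the induced metric gives \(\sqrt{\mathrm{det}(g)} = \sin\theta\). Integrating \(\phi \equiv 1\) should then recover the sphere’s area, \(4\pi\):

```python
import math

def sphere_integral(phi_fn, n=400):
    """Approximate ∫_M φ √det(g) dx¹ ∧ dx² for the unit sphere, using
    coordinates (θ, ϕ) ∈ [0, π] × [0, 2π) with √det(g) = sin θ."""
    dtheta = math.pi / n
    dphi = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta  # midpoint sample in θ
        for j in range(n):
            ph = (j + 0.5) * dphi   # midpoint sample in ϕ
            total += phi_fn(theta, ph) * math.sin(theta) * dtheta * dphi
    return total

# with φ ≡ 1 this is just the area of the unit sphere, 4π
print(sphere_integral(lambda theta, ph: 1.0))
```

Note that without the \(\sin\theta\) factor the sum would come out to \(2\pi^2\) — the area of the flat coordinate rectangle, not of the sphere.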

**Stokes’ Theorem**

The main reason for studying integration on manifolds is to take advantage of the world’s most powerful tool: *Stokes’ theorem*. Without further ado, Stokes’ theorem says that

\[ \int_\Omega d\alpha = \int_{\partial\Omega} \alpha, \]

where \(\alpha\) is any \((n-1)\)-form on an \(n\)-dimensional domain \(\Omega\). In other words, integrating a differential form over the boundary of a manifold is the same as integrating its derivative over the entire domain.

If this trick sounds familiar to you, it’s probably because you’ve seen it time and again in different contexts and under different names: the *divergence theorem*, *Green’s theorem*, *the fundamental theorem of calculus*, *Cauchy’s integral formula*, etc. Picking apart these special cases will really help us understand the more general meaning of Stokes’ theorem.

**Divergence Theorem**

Let’s start with the divergence theorem from vector calculus, which says that

\[ \int_\Omega \nabla \cdot X\ dA = \int_{\partial\Omega} N \cdot X\ d\ell, \]

where \(X\) is a vector field on \(\Omega\) and \(N\) represents the (outward-pointing) unit normals on the boundary of \(\Omega\). A better name for this theorem might have been the “what goes in must come out theorem”, because if you think about \(X\) as the flow of water throughout the domain \(\Omega\) then it’s clear that the amount of water being pumped into \(\Omega\) (via pipes in the ground) must be the same as the amount flowing out of its boundary at any moment in time:

Let’s try writing this theorem using exterior calculus. First, remember that we can write the divergence of \(X\) as \(\nabla \cdot X = \star d \star X^\flat\). It’s a bit harder to see how to write the right-hand side of the divergence theorem, but think about what integration does here: it takes tangents to the boundary and sticks them into a 1-form. For instance, \(\int_{\partial\Omega} X^\flat\) adds up the *tangential* components of \(X\). To get the *normal* component we could rotate \(X^\flat\) by a quarter turn, which conveniently enough is achieved by hitting it with the Hodge star. Overall we get

\[ \int_\Omega d \star X^\flat = \int_{\partial\Omega} \star X^\flat, \]

which, as promised, is just a special case of Stokes’ theorem. Alternatively, we can use Stokes’ theorem to provide a more geometric interpretation of the divergence operator itself: when integrated over *any* region \(\Omega\) — no matter how small — the divergence operator gives the total flux through the region boundary. In the discrete case we’ll see that this boundary flux interpretation is the *only* notion of divergence — in other words, there’s no concept of divergence at a single point.

By the way, why does \(d \star X^\flat\) appear on the left-hand side instead of \(\star d \star X^\flat\)? The reason is that \(\star d \star X^\flat\) is a 0-form, so we have to hit it with another Hodge star to turn it into an object that measures areas (i.e., a 2-form). Applying this transformation is no different from appending \(dA\) to \(\nabla \cdot X\) — we’re specifying how volume should be measured on our domain.
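The “what goes in must come out” picture is easy to verify numerically. The sketch below (an illustration, with an assumed vector field not taken from the text) checks the divergence theorem for \(X(x,y) = (x,y)\) on the unit disk: here \(\nabla \cdot X = 2\) everywhere, so both the interior integral and the boundary flux should come out to \(2\pi\):

```python
import math

# example vector field X(x, y) = (x, y), with constant divergence ∇·X = 2
def div_X(x, y):
    return 2.0

def interior_integral(h=0.01):
    """∫_Ω ∇·X dA over the unit disk, by summing over little squares."""
    n = int(math.ceil(1.0 / h))
    total = 0.0
    for i in range(-n, n):
        for j in range(-n, n):
            x, y = (i + 0.5) * h, (j + 0.5) * h
            if x * x + y * y <= 1.0:
                total += div_X(x, y) * h * h
    return total

def boundary_flux(n=1000):
    """∫_{∂Ω} N·X dℓ around the unit circle, parameterized by t ∈ [0, 2π)."""
    dl = 2.0 * math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dl
        x, y = math.cos(t), math.sin(t)    # point on the boundary
        nx, ny = math.cos(t), math.sin(t)  # outward unit normal (same, on the unit circle)
        total += (nx * x + ny * y) * dl    # N·X dℓ
    return total

# both should approximate 2π
print(interior_integral(), boundary_flux())
```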

**Fundamental Theorem of Calculus**

The fundamental theorem of calculus is in fact *so* fundamental that you may not even remember what it is. It basically says that for a real-valued function \(\phi: \mathbb{R} \rightarrow \mathbb{R}\) on the real line

\[ \int_a^b \frac{\partial \phi}{\partial x} dx = \phi(b) - \phi(a). \]

In other words, the total change over an interval \([a,b]\) is (as you might expect) how much you end up with minus how much you started with. But soft, behold! All we’ve done is written Stokes’ theorem once again:

\[ \int_{[a,b]} d\phi = \int_{\partial[a,b]} \phi, \]

since the boundary of the interval \([a,b]\) consists only of the two endpoints \(a\) and \(b\).
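As a final sanity check (again just an illustration, with an example function chosen here), we can confirm numerically that integrating a derivative over \([a,b]\) matches the boundary values \(\phi(b) - \phi(a)\) — say for \(\phi(x) = x^3\) on \([1,2]\), where both sides equal \(7\):

```python
def ftc_check(phi, dphi, a, b, n=10000):
    """Compare the midpoint-rule integral of dφ/dx over [a, b]
    against the boundary difference φ(b) − φ(a)."""
    h = (b - a) / n
    integral = sum(dphi(a + (k + 0.5) * h) for k in range(n)) * h
    return integral, phi(b) - phi(a)

# example: φ(x) = x³, dφ/dx = 3x² on [1, 2] — both sides give 7
print(ftc_check(lambda x: x**3, lambda x: 3 * x * x, 1.0, 2.0))
```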

Hopefully these two examples give you a good feel for what Stokes’ theorem says. In the end, it reads almost like a Zen kōan: what happens on the outside is purely a function of the change within. (Perhaps it is Stokes’ theorem that deserves the name, “fundamental theorem of calculus!”)