Many important concepts in differential geometry can be nicely expressed in the language of *exterior calculus*. Initially these concepts will look exactly like objects you know and love from *vector calculus*, and you may question the value of giving them funky new names. For instance, scalar fields are no longer called scalar fields, but are now called *0-forms!* In many ways vector and exterior calculus are indeed “*dual*” to each other, but it is precisely this duality that makes the language so expressive. In the long run we’ll see that exterior calculus also makes it easy to generalize certain ideas from vector calculus — the primary example being *Stokes’ theorem*. Actually, we already started using this language in our introduction to the geometry of surfaces, but here’s the full story.

Once upon a time there was a vector named \(v\):

What information does \(v\) encode? One way to inspect a vector is to determine its extent or *length* along a given direction. For instance, we can pick some arbitrary direction \(\alpha\) and record the length of the shadow cast by \(v\) along \(\alpha\):

The result is simply a number, which we can denote \(\alpha(v)\). The notation here is meant to emphasize the idea that \(\alpha\) is a function: in particular, it’s a *linear* function that eats a vector and produces a scalar. Any such function is called a *1-form* (a.k.a. a *covector* or *cotangent vector*).

Of course, it’s clear from the picture that the space of all 1-forms looks a lot like the space of all vectors: we just had to pick some direction to measure along. But often there is good reason to distinguish between vectors and 1-forms — the distinction is not unlike the one made between *row vectors* and *column vectors* in linear algebra. For instance, even though rows and columns both represent “vectors,” we only permit ourselves to multiply rows with columns:

\[ \left[ \begin{array}{ccc} \alpha_1 & \cdots & \alpha_n \end{array} \right] \left[ \begin{array}{c} v_1 \\ \vdots \\ v_n \end{array} \right]. \]

If we wanted to multiply, say, two columns, we would first have to take the *transpose* of one of them to convert it into a row:

\[ v^T v = \left[ \begin{array}{ccc} v_1 & \cdots & v_n \end{array} \right] \left[ \begin{array}{c} v_1 \\ \vdots \\ v_n \end{array} \right]. \]
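The row-times-column picture is easy to play with numerically. Here’s a small sketch (not from the text) using NumPy, with made-up coefficients: a “row” of 1-form coefficients applied to a “column” of vector components yields a single number, and \(v^T v\) pairs a vector with its own transpose.

```python
import numpy as np

# Hypothetical coefficients, purely for illustration.
alpha = np.array([1.0, 2.0, 3.0])   # 1-form coefficients (a "row")
v     = np.array([4.0, 5.0, 6.0])   # vector components (a "column")

# alpha(v): row times column gives a scalar.
print(alpha @ v)   # 1*4 + 2*5 + 3*6 = 32.0

# v^T v: transpose one copy of v to make the product legal.
print(v @ v)       # 4*4 + 5*5 + 6*6 = 77.0
```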

Same deal with vectors and 1-forms, except that now we have two different operations: *sharp* (\(\sharp\)), which converts a 1-form into a vector, and *flat* (\(\flat\)) which converts a vector into a 1-form. For instance, it’s perfectly valid to write \(v^\flat(v)\) or \(\alpha(\alpha^\sharp)\), since in either case we’re feeding a vector to a 1-form. The operations \(\sharp\) and \(\flat\) are called the *musical isomorphisms*.

All this fuss over 1-forms versus vectors (or even row versus column vectors) may seem like much ado about nothing. And indeed, in a *flat* space like the plane, the difference between the two is pretty superficial. In *curved* spaces, however, there’s an important distinction between vectors and 1-forms — in particular, we want to make sure that we’re taking “measurements” in the right space. For instance, suppose we want to measure the length of a vector \(v\) along the direction of another vector \(u\). It’s important to remember that tangent vectors get stretched out by the map \(f: \mathbb{R}^2 \supset M \rightarrow \mathbb{R}^3\) that takes us from the plane to some surface in \(\mathbb{R}^3\). Therefore, the operations \(\sharp\) and \(\flat\) should satisfy relationships like

\[ u^\flat(v) = g(u,v) \]

where \(g\) is the metric induced by \(f\). This way we’re really measuring how things behave in the “stretched out” space rather than the initial domain \(M\).
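To make the relationship \(u^\flat(v) = g(u,v)\) concrete, here’s a numerical sketch (not from the text) with a made-up diagonal metric standing in for the induced metric \(g\). In coordinates, lowering the index means \((u^\flat)_i = \sum_j g_{ij} u^j\), and applying the resulting 1-form to \(v\) reproduces \(g(u,v)\):

```python
import numpy as np

# Hypothetical induced metric: stretches the first axis by 2, the second by 3.
g = np.array([[2.0, 0.0],
              [0.0, 3.0]])
u = np.array([1.0, 1.0])
v = np.array([4.0, 5.0])

u_flat = g @ u        # lower the index: (u^flat)_i = g_ij u^j

# u^flat applied to v agrees with g(u, v):
print(u_flat @ v)     # 2*1*4 + 3*1*5 = 23.0
print(u @ g @ v)      # same number, computed as g(u, v) directly
```

In the “unstretched” Euclidean case \(g\) would be the identity and \(u^\flat(v)\) would collapse to the ordinary dot product.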

**Coordinates**

Until now we’ve intentionally avoided the use of *coordinates* — in other words, we’ve tried to express geometric relationships without reference to any particular *coordinate system* \(x_1, \ldots, x_n\). Why avoid coordinates? Several reasons are often cited (people will mumble something about “invariance”), but the real reason is quite simply that coordinate-free expressions tend to be shorter, sweeter, and easier to extract meaning from. This approach is also particularly valuable in geometry processing, because many coordinate-free expressions translate naturally to basic operations on meshes.

Yet coordinates are still quite valuable in a number of situations. Sometimes there’s a special coordinate basis that greatly simplifies analysis — recall our discussion of principal curvature directions, for instance. At other times there’s simply no obvious way to prove something *without* coordinates. For now we’re going to grind out a few basic facts about exterior calculus in coordinates; at the end of the day we’ll keep whatever nice coordinate-free expressions we find and politely forget that coordinates ever existed!

Let’s set up our coordinate system. For reasons that will become clear later, we’re going to use the symbols \(\frac{\partial}{\partial x^1}, \ldots, \frac{\partial}{\partial x^n}\) to represent an orthonormal basis for vectors in \(\mathbb{R}^n\), and use \(dx^1, \ldots, dx^n\) to denote the corresponding 1-form basis. In other words, any vector \(v\) can be written as a linear combination

\[ v = v^1 \frac{\partial}{\partial x^1} + \cdots + v^n \frac{\partial}{\partial x^n}, \]

and any 1-form can be written as a linear combination

\[ \alpha = \alpha_1 dx^1 + \cdots + \alpha_n dx^n. \]

To keep yourself sane at this point, you should *completely ignore the fact* that the symbols \(\frac{\partial}{\partial x^i}\) and \(dx^i\) look like derivatives — they’re simply collections of unit-length orthogonal bases, as depicted above. The two bases \(dx^i\) and \(\frac{\partial}{\partial x^i}\) are often referred to as *dual bases*, meaning they satisfy the relationship

\[ dx^i\left(\frac{\partial}{\partial x^j}\right) = \delta^i_j = \begin{cases} 1, & i = j \\ 0, & \mbox{otherwise.} \end{cases} \]

This relationship captures precisely the behavior we’re looking for: a vector \(\frac{\partial}{\partial x^i}\) “casts a shadow” on the 1-form \(dx^j\) only if the two bases point in the same direction. Using this relationship, we can work out that

\[ \alpha(v) = \sum_i \alpha_i dx^i\left( \sum_j v^j \frac{\partial}{\partial x^j} \right) = \sum_i \alpha_i v^i, \]

i.e., the *pairing* of a vector and a 1-form looks just like the standard Euclidean inner product.
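Both facts are easy to verify numerically. In this sketch (not from the text), the orthonormal bases \(dx^i\) and \(\frac{\partial}{\partial x^j}\) are represented by standard basis arrays, so the duality relation \(dx^i(\frac{\partial}{\partial x^j}) = \delta^i_j\) is just the identity matrix, and the pairing \(\alpha(v)\) reduces to the Euclidean dot product; the coefficients are made up:

```python
import numpy as np

n = 3
dx   = np.eye(n)    # rows: the 1-form basis dx^1, ..., dx^n
ddx  = np.eye(n)    # columns: the vector basis d/dx^1, ..., d/dx^n

# dx^i applied to d/dx^j gives the Kronecker delta:
print(dx @ ddx)     # the n x n identity matrix

# The pairing alpha(v) = sum_i alpha_i v^i is the usual dot product:
alpha = np.array([1.0, 0.0, 2.0])   # hypothetical 1-form coefficients
v     = np.array([3.0, 4.0, 5.0])   # hypothetical vector components
print(alpha @ v)                    # 1*3 + 0*4 + 2*5 = 13.0
```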

__Notation__

It’s worth saying a few words about notation. First, vectors and vector fields tend to be represented by letters from the end of the Roman alphabet (\(u\), \(v\), \(w\) or \(X\), \(Y\), \(Z\), respectively), whereas 1-forms are given lowercase letters from the beginning of the Greek alphabet (\(\alpha\), \(\beta\), \(\gamma\), etc.). Although one often makes a linguistic distinction between a “vector” (meaning a single arrow) and a “vector field” (meaning an arrow glued to every point of a space), there’s an unfortunate precedent to use the term “1-form” to refer to both ideas — sadly, nobody ever says “1-form field!” Scalar fields or *0-forms* are often given letters from the middle of the Roman alphabet (\(f\), \(g\), \(h\)) or maybe lowercase Greek letters from somewhere in the middle (\(\phi\), \(\psi\), etc.).

You may also notice that we’ve been very particular about the placement of indices: coefficients \(v^i\) of vectors have indices *up*, coefficients \(\alpha_i\) of 1-forms have indices *down*. Similarly, vector bases \(\frac{\partial}{\partial x^i}\) have indices down (they’re in the denominator), and 1-form bases \(dx^i\) have indices up. The reason for being so neurotic is to take advantage of *Einstein summation notation*: any time a pair of variables is indexed by the same letter \(i\) in both the “up” and “down” position, we interpret this as a sum over all possible values of \(i\):

\[ \alpha_i v^i = \sum_i \alpha_i v^i. \]
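Incidentally, this convention is exactly what `numpy.einsum` implements: a repeated index letter is summed over automatically. A small sketch (not from the text), with made-up values:

```python
import numpy as np

alpha = np.array([1.0, 2.0, 3.0])   # hypothetical 1-form coefficients
v     = np.array([4.0, 5.0, 6.0])   # hypothetical vector components

# 'i,i->' repeats the index i, so einsum sums over it: alpha_i v^i.
print(np.einsum('i,i->', alpha, v))   # 1*4 + 2*5 + 3*6 = 32.0
```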

The placement of indices also provides a cute mnemonic for the musical isomorphisms \(\sharp\) and \(\flat\). In musical notation \(\sharp\) indicates a half-step increase in pitch, corresponding to an upward movement on the staff. For instance, both notes below correspond to a “C” with the same pitch:

Therefore, to go from a 1-form to a vector we *raise* the indices. For instance, in a *flat* space we don’t have to worry about the metric and so a 1-form

\[ \alpha = \alpha_1 dx^1 + \cdots + \alpha_n dx^n \]

becomes a vector

\[ \alpha^\sharp = \alpha^1 \frac{\partial}{\partial x^1} + \cdots + \alpha^n \frac{\partial}{\partial x^n}. \]

Similarly, \(\flat\) indicates a decrease in pitch and a downward motion on the staff:

and so \(\flat\) *lowers* the indices of a vector to give us a 1-form — e.g.,

\[ v = v^1 \frac{\partial}{\partial x^1} + \cdots + v^n \frac{\partial}{\partial x^n} \]

becomes

\[ v^\flat = v_1 dx^1 + \cdots + v_n dx^n. \]
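In coordinates, \(\flat\) and \(\sharp\) amount to multiplying by the metric and its inverse, respectively. Here’s a numerical sketch (not from the text) for the flat case, where the metric is the identity and the two operations simply relabel components — so round-tripping through \(\flat\) and \(\sharp\) is the identity, and \(v^\flat(v) = g(v,v)\):

```python
import numpy as np

g     = np.eye(3)             # flat (Euclidean) metric
g_inv = np.linalg.inv(g)      # inverse metric, used to raise indices

def flat(v):
    """Lower the index: v_i = g_ij v^j."""
    return g @ v

def sharp(alpha):
    """Raise the index: alpha^i = g^ij alpha_j."""
    return g_inv @ alpha

v = np.array([1.0, 2.0, 3.0])  # hypothetical vector components
print(sharp(flat(v)))          # [1. 2. 3.] -- round trip is the identity
print(flat(v) @ v)             # v^flat(v) = g(v, v) = 1 + 4 + 9 = 14.0
```

With a non-identity metric (as on a curved surface) the components of \(v^\flat\) would genuinely differ from those of \(v\), which is the whole point of keeping the two spaces distinct.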