In our last set of notes we measured the length of a vector by projecting it onto different coordinate axes; this measurement process effectively defined what we call a *1-form*. But what happens if we have a collection of vectors? For instance, consider a pair of vectors \(u, v\) sitting in \(\mathbb{R}^3\):

We can think of these vectors as defining a *parallelogram*, and much like we did with a single vector we can measure this parallelogram by measuring the size of the “shadow” it casts on some plane:

For instance, suppose we represent this plane via a pair of unit orthogonal 1-forms \(\alpha\) and \(\beta\). Then the projected vectors have components

\[ \begin{array}{rcl}

u^\prime &=& (\alpha(u),\beta(u)), \\

v^\prime &=& (\alpha(v),\beta(v)),

\end{array} \]

hence the (signed) projected area is given by the cross product

\[ u^\prime \times v^\prime = \alpha(u)\beta(v) - \alpha(v)\beta(u). \]

Since we want to measure a lot of projected volumes in the future, we’ll give this operation the special name “\(\alpha \wedge \beta\)”:

\[ \alpha \wedge \beta(u,v) := \alpha(u)\beta(v) - \alpha(v)\beta(u). \]

As you may have already guessed, \(\alpha \wedge \beta\) is what we call a *2-form*. Ultimately we’ll interpret the symbol \(\wedge\) (pronounced “wedge”) as a binary operation on differential forms called the *wedge product*. Algebraic properties of the wedge product follow *directly* from the way signed volumes behave. For instance, notice that if we reverse the order of our axes \(\alpha, \beta\) the sign of the area changes. In other words, the wedge product is *antisymmetric*:

\[ \alpha \wedge \beta = -\beta \wedge \alpha. \]
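To make the definition concrete, here is a tiny numeric sketch in Python. Each 1-form is represented by its coefficient vector, so evaluating it on a vector is just a dot product; the particular forms and vectors below are illustrative choices, not anything from the notes.

```python
def apply(form, u):
    """Evaluate a 1-form (given by its coefficients) on a vector u."""
    return sum(c * x for c, x in zip(form, u))

def wedge(a, b, u, v):
    """(a ^ b)(u, v) = a(u) b(v) - a(v) b(u)."""
    return apply(a, u) * apply(b, v) - apply(a, v) * apply(b, u)

alpha = (1.0, 0.0, 0.0)  # dx^1
beta  = (0.0, 1.0, 0.0)  # dx^2
u, v = (2.0, 0.0, 0.0), (0.0, 3.0, 0.0)

print(wedge(alpha, beta, u, v))  # 6.0  -- signed area of the projected parallelogram
print(wedge(beta, alpha, u, v))  # -6.0 -- reversing the axes flips the sign
```

Note that `wedge(alpha, alpha, u, v)` evaluates to zero, matching the geometric picture of projecting onto a plane of zero area.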

An important consequence of antisymmetry is that the wedge of any 1-form with itself is zero:

\[ \begin{array}{c}

\alpha \wedge \alpha = -\alpha \wedge \alpha \\

\Rightarrow \alpha \wedge \alpha = 0.

\end{array} \]

But don’t let this statement become a purely algebraic fact! Geometrically, why should the wedge of two 1-forms be zero? Quite simply because it represents projection onto a plane of zero area! (I.e., the plane spanned by \(\alpha\) and \(\alpha\).)

Next, consider the projection onto two different planes spanned by \(\alpha, \beta\) and \(\alpha, \gamma\). The sum of the projected areas can be written as

\[

\begin{array}{rcl}

\alpha \wedge \beta(u,v) + \alpha \wedge \gamma(u,v)

&=& \alpha(u)\beta(v) - \alpha(v)\beta(u) + \alpha(u)\gamma(v) - \alpha(v)\gamma(u) \\

&=& \alpha(u)(\beta(v) + \gamma(v)) - \alpha(v)(\beta(u) + \gamma(u)) \\

&=:& (\alpha \wedge (\beta + \gamma))(u,v),

\end{array}

\]

or in other words \(\wedge\) distributes over \(+\):

\[ \alpha \wedge (\beta + \gamma) = \alpha \wedge \beta + \alpha \wedge \gamma. \]
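Distributivity is easy to check numerically. The sketch below again represents each 1-form by its coefficient vector (an illustrative encoding); the particular values are arbitrary.

```python
def apply(form, u):
    """Evaluate a 1-form (given by its coefficients) on a vector u."""
    return sum(c * x for c, x in zip(form, u))

def wedge(a, b, u, v):
    """(a ^ b)(u, v) = a(u) b(v) - a(v) b(u)."""
    return apply(a, u) * apply(b, v) - apply(a, v) * apply(b, u)

alpha, beta, gamma = (1.0, 2.0, 0.0), (0.0, 1.0, 3.0), (2.0, 0.0, 1.0)
u, v = (1.0, 4.0, 2.0), (3.0, 1.0, 5.0)

beta_plus_gamma = tuple(b + g for b, g in zip(beta, gamma))
lhs = wedge(alpha, beta_plus_gamma, u, v)            # alpha ^ (beta + gamma)
rhs = wedge(alpha, beta, u, v) + wedge(alpha, gamma, u, v)
print(lhs == rhs)  # True: wedge distributes over +
```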

Finally, consider three vectors \(u, v, w\) that span a volume in \(\mathbb{R}^3\):

We’d like to consider the projection of this volume onto the volume spanned by three 1-forms \(\alpha\), \(\beta\), and \(\gamma\), but the projection of one volume onto another is a bit difficult to visualize! For now you can just cheat and imagine that \(\alpha = dx^1\), \(\beta = dx^2\), and \(\gamma = dx^3\) so that the mental picture for the projected volume looks just like the volume depicted above. One way to write the projected volume is as the determinant of the projected vectors \(u^\prime\), \(v^\prime\), and \(w^\prime\):

\[ \alpha \wedge \beta \wedge \gamma( u, v, w ) := \mathrm{det}\left(\left[ \begin{array}{ccc} u^\prime & v^\prime & w^\prime \end{array} \right]\right) = \mathrm{det}\left( \left[ \begin{array}{ccc} \alpha(u) & \alpha(v) & \alpha(w) \\ \beta(u) & \beta(v) & \beta(w) \\ \gamma(u) & \gamma(v) & \gamma(w) \end{array} \right] \right). \]
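The determinant definition can be spelled out directly in code. A minimal sketch, using the cheat suggested above (\(\alpha = dx^1\), \(\beta = dx^2\), \(\gamma = dx^3\)) so the answer is just the ordinary volume of the box spanned by \(u, v, w\):

```python
def apply(form, u):
    """Evaluate a 1-form (given by its coefficients) on a vector u."""
    return sum(c * x for c, x in zip(form, u))

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def wedge3(a, b, c, u, v, w):
    """(a ^ b ^ c)(u, v, w) as the determinant of the projected vectors."""
    return det3([[apply(a, u), apply(a, v), apply(a, w)],
                 [apply(b, u), apply(b, v), apply(b, w)],
                 [apply(c, u), apply(c, v), apply(c, w)]])

dx1, dx2, dx3 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
u, v, w = (2.0, 0.0, 0.0), (0.0, 3.0, 0.0), (0.0, 0.0, 4.0)
print(wedge3(dx1, dx2, dx3, u, v, w))  # 24.0 -- volume of the 2x3x4 box
```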

(Did you notice that the determinant of the upper-left 2×2 submatrix also gives us the wedge product of two 1-forms?) Alternatively, we could express the volume as the area of one of the faces times the length of the remaining edge:

Thinking about things this way, we might come up with an alternative definition of the wedge product in terms of the *triple product*:

\[

\begin{array}{rcl}

\alpha \wedge \beta \wedge \gamma( u, v, w ) &=& (u^\prime \times v^\prime) \cdot w^\prime \\

&=& (v^\prime \times w^\prime) \cdot u^\prime \\

&=& (w^\prime \times u^\prime) \cdot v^\prime \\

\end{array}

\]

The important thing to notice here is that *order* is not important — we always get the same volume, regardless of which face we pick (though we still have to be a bit careful about *sign*). A more algebraic way of saying this is that the wedge product is *associative*:

\[ (\alpha \wedge \beta) \wedge \gamma = \alpha \wedge (\beta \wedge \gamma). \]
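The triple-product identities above can also be verified numerically: project three vectors through three 1-forms and compare the three cyclic triple products. The forms and vectors below are arbitrary illustrative choices.

```python
def apply(form, u):
    """Evaluate a 1-form (given by its coefficients) on a vector u."""
    return sum(c * x for c, x in zip(form, u))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project(u, forms):
    """u' = (alpha(u), beta(u), gamma(u))."""
    return tuple(apply(f, u) for f in forms)

forms = ((1.0, 2.0, 0.0), (0.0, 1.0, 3.0), (2.0, 0.0, 1.0))
u, v, w = (1.0, 4.0, 2.0), (3.0, 1.0, 5.0), (2.0, 2.0, 1.0)
up, vp, wp = (project(x, forms) for x in (u, v, w))

t1 = dot(cross(up, vp), wp)
t2 = dot(cross(vp, wp), up)
t3 = dot(cross(wp, up), vp)
print(t1 == t2 == t3)  # True: same volume no matter which face we pick
```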

In summary, the wedge product of \(k\) 1-forms gives us a \(k\)-form, which measures the projected volume of a collection of \(k\) vectors. As a result, the wedge product has the following properties for any \(k\)-form \(\alpha\), \(l\)-form \(\beta\), and \(m\)-form \(\gamma\):

**Antisymmetry**: \(\alpha \wedge \beta = (-1)^{kl}\beta \wedge \alpha\)

**Associativity**: \(\alpha \wedge (\beta \wedge \gamma) = (\alpha \wedge \beta) \wedge \gamma\)

**Distributivity**: \(\alpha \wedge (\beta + \gamma) = \alpha \wedge \beta + \alpha \wedge \gamma\)

A separate fact is that a \(k\)-form is *antisymmetric* in its arguments — in other words, swapping the relative order of two “input” vectors changes only the *sign* of the volume. For instance, if \(\alpha\) is a 2-form then \(\alpha(u,v) = -\alpha(v,u)\). In general, an *even* number of swaps will preserve the sign; an *odd* number of swaps will negate it. (One way to convince yourself is to consider what happens to the determinant of a matrix when you exchange two of its columns.) Finally, you’ll often hear people say that \(k\)-forms are “multilinear” — all this means is that if you keep all but one of the vectors fixed, then a \(k\)-form looks like a linear map. Geometrically this makes sense: \(k\)-forms are built up from \(k\) *linear* measurements of length (essentially just \(k\) different dot products).
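Both facts — antisymmetry in the arguments and multilinearity — lend themselves to a quick numeric sanity check. As before, the coefficient-vector encoding and the sample values are illustrative only.

```python
def apply(form, u):
    """Evaluate a 1-form (given by its coefficients) on a vector u."""
    return sum(c * x for c, x in zip(form, u))

def wedge(a, b, u, v):
    """(a ^ b)(u, v) = a(u) b(v) - a(v) b(u)."""
    return apply(a, u) * apply(b, v) - apply(a, v) * apply(b, u)

alpha, beta = (1.0, 2.0, 0.0), (0.0, 1.0, 3.0)
u, v = (1.0, 4.0, 2.0), (3.0, 1.0, 5.0)

# swapping the two "input" vectors changes only the sign
print(wedge(alpha, beta, u, v) == -wedge(alpha, beta, v, u))  # True

# multilinearity: with v fixed, u -> (alpha ^ beta)(u, v) is a linear map
s, u2 = 2.0, (0.5, 1.0, -1.0)
su_plus_u2 = tuple(s * a + b for a, b in zip(u, u2))
lhs = wedge(alpha, beta, su_plus_u2, v)
rhs = s * wedge(alpha, beta, u, v) + wedge(alpha, beta, u2, v)
print(lhs == rhs)  # True
```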

**Vector-Valued Forms**

Up to this point we’ve considered only *real-valued* \(k\)-forms — for instance, \(\alpha(u)\) represents the length of the vector \(u\) along the direction \(\alpha\), which can be expressed as a single real number. In general, however, a \(k\)-form can “spit out” all kinds of different values. For instance, we might want to deal with quantities that are described by complex numbers (\(\mathbb{C}\)) or vectors in some larger vector space (e.g., \(\mathbb{R}^n\)).

A good example of a vector-valued \(k\)-form is our map \(f: M \rightarrow \mathbb{R}^3\) which represents the geometry of a surface. In the language of exterior calculus, \(f\) is an *\(\mathbb{R}^3\)-valued 0-form*: at each point \(p\) of \(M\), it takes *zero* vectors as input and produces a point \(f(p)\) in \(\mathbb{R}^3\) as output. Similarly, the differential \(df\) is an \(\mathbb{R}^3\)-valued 1-form: it takes *one* vector (some direction \(u\) in the plane) and maps it to a value \(df(u)\) in \(\mathbb{R}^3\) (representing the “stretched out” version of \(u\)).

More generally, if \(E\) is a vector space then an *\(E\)-valued \(k\)-form* takes \(k\) vectors to a single value in \(E\). However, we have to be a bit careful here. For instance, think about our definition of a 2-form:

\[ \alpha \wedge \beta(u,v) := \alpha(u)\beta(v) - \alpha(v)\beta(u). \]

If \(\alpha\) and \(\beta\) are both \(E\)-valued 1-forms, then \(\alpha(u)\) and \(\beta(v)\) are both *vectors* in \(E\). But how do you multiply two vectors? In general there may be no good answer: not every vector space comes with a natural notion of multiplication.

However, there are plenty of spaces that *do* come with a well-defined product — for instance, the product of two complex numbers \(a + bi\) and \(c+di\) is given by \((ac-bd)+(ad+bc)i\), so we have no trouble explicitly evaluating the expression above. In other cases we simply have to say which product we want to use — in \(\mathbb{R}^3\) for instance we could use the cross product \(\times\), in which case an \(\mathbb{R}^3\)-valued 2-form looks like this:

\[ \alpha \wedge \beta(u,v) := \alpha(u) \times \beta(v) - \alpha(v) \times \beta(u). \]
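Here is a minimal sketch of this cross-product flavor of the wedge, with \(\alpha\) and \(\beta\) as two simple hypothetical \(\mathbb{R}^3\)-valued 1-forms (chosen for illustration, not taken from the notes):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# An R^3-valued 1-form maps a vector to a vector; these two are toy examples.
alpha = lambda u: (u[0], 0.0, 0.0)
beta  = lambda u: (0.0, u[1], 0.0)

def wedge_cross(a, b, u, v):
    """(a ^ b)(u, v), using the cross product of R^3 as the 'multiplication'."""
    x = cross(a(u), b(v))
    y = cross(a(v), b(u))
    return tuple(p - q for p, q in zip(x, y))

u, v = (2.0, 1.0, 0.0), (1.0, 3.0, 0.0)
print(wedge_cross(alpha, beta, u, v))  # (0.0, 0.0, 5.0)
```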

The wedge product looks a lot like a Lie bracket! Can you tell the difference (if any)?

Well, for starters the Lie bracket takes a pair of vectors to a vector, but the wedge product takes a \(k\)-vector and an \(l\)-vector to a \((k+l)\)-vector. Interesting exercise: can you use the wedge and the Hodge star to construct a Lie bracket? Think about the Lie algebra for the rotation group. What about other groups?

Also, in continuum mechanics there is the so-called deformation mapping \(F = \mathrm{Grad}(x)\), where \(\mathrm{Grad}\) is taken with respect to the reference configuration with points \(X\), and \(x\) denotes points of the deformed configuration. Hence

\[ dx = F\,dX. \]

Could you please explain it in terms of differential forms and vector (tensor) fields?

Yes, wonderful: the picture you just described is identical to our setup for surfaces. You can think of the immersion \(f\) as the configuration of a mechanical system and the differential \(df\) as the deformation mapping. Connecting the geometric and mechanical pictures can be very helpful, and can really clarify your thinking on both ends.
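The correspondence can be illustrated with a small finite-difference check: for a hypothetical immersion \(f\) (a cylinder patch, chosen purely for illustration), the Jacobian \(F = df\) pushes a material direction \(dX\) forward to \(dx = F\,dX\), and this agrees with the directional difference quotient of \(f\).

```python
import math

def f(X):
    """A hypothetical immersion of the plane into R^3 (a cylinder patch)."""
    s, t = X
    return (math.cos(s), math.sin(s), t)

def F(X):
    """Jacobian of f: the differential df, i.e., the deformation mapping."""
    s, t = X
    return ((-math.sin(s), 0.0),
            ( math.cos(s), 0.0),
            ( 0.0,         1.0))

def apply_F(J, dX):
    """Push a material direction dX forward: dx = F dX."""
    return tuple(row[0] * dX[0] + row[1] * dX[1] for row in J)

# finite-difference check: (f(X + h dX) - f(X)) / h  ~  F(X) dX
X, dX, h = (0.3, 0.5), (1.0, 2.0), 1e-6
Xh = (X[0] + h * dX[0], X[1] + h * dX[1])
fd = tuple((a - b) / h for a, b in zip(f(Xh), f(X)))
print(all(abs(a - b) < 1e-4 for a, b in zip(fd, apply_F(F(X), dX))))  # True
```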