In this homework we’ll take a closer look at *curvature flow*. We already saw one example of curvature flow (*mean* curvature flow) while studying the Poisson equation. The general idea behind curvature flow is that we have an *energy* \(E\) measuring the smoothness of our geometry, and can reduce this energy by traveling along the direction of steepest descent. Conceptually, you can imagine that \(E\) is some kind of potential — surfaces with many wrinkles have a lot of energy, and want to reduce this energy by “relaxing” into a smoother state. This description suggests a sort of “energy landscape,” where high peaks correspond to wrinkly surfaces and low valleys correspond to smooth ones. Here’s a two-dimensional cartoon of what this landscape might look like:

To smooth out the geometry we can “ski” downhill, discovering a sequence of smoother and smoother surfaces along the way. Visually the effect is akin to a pat of butter slowly melting away on a hot piece of toast, or a water droplet buckling into a perfectly round sphere.

To be more concrete, let \(f\) be an immersion of a manifold \(M\) (e.g., a curve or surface) into Euclidean space, and suppose that \(E\) is a real-valued function of \(f\). Then a curvature flow is the solution to the partial differential equation

\[ \dot{f} = -\nabla E(f) \]

starting with some initial immersion \(f_0\), where \(\dot{f}\) denotes the derivative in time. In words, this equation just says that at each instant the surface moves in the direction that reduces the energy most quickly.

For surfaces, two common energies are the *Dirichlet energy*

\[ E_D(f) = \frac{1}{4} \int_M |\nabla f|^2 dA \]

and the *Willmore energy*

\[ E_W(f) = \frac{1}{4} \int_M (\kappa_1 - \kappa_2)^2 dA, \]

where, as usual, \(\kappa_1\) and \(\kappa_2\) are the principal curvatures induced by \(f\). Both energies somehow measure the “wrinkliness” of a surface, but how exactly are they related?

**Exercise 4.1**

For a surface \(M\) without boundary show that, up to an additive constant, the Willmore energy can be expressed as

\[ E_W = \int_M H^2\ dA \]

and explain why this constant does not matter in the context of curvature flow. *Hint: Gauss-Bonnet.*

In other words, Dirichlet energy looks like the (squared) \(L^2\) norm of the gradient, whereas Willmore energy looks like the (squared) \(L^2\) norm of mean curvature. Superficially these quantities look quite different, but in fact they are quite similar!

**Exercise 4.2**

For a surface \(M\) without boundary show that (again up to constant factors)

\[ E_D = \langle \Delta f, f \rangle, \]

\[ E_W = \langle \Delta^2 f, f \rangle. \]

*Hint: Green’s first identity and the definition of the mean curvature normal.*

From these expressions, it may appear that Dirichlet and Willmore energy are nice, simple, quadratic functions of \(f\). Don’t be fooled! The Laplace-Beltrami operator \(\Delta\) depends on the immersion \(f\) itself, which means that the corresponding gradient flows are rather nasty and nonlinear. Later on we’ll look at a couple ways to deal with this nonlinearity.

**Gradient Descent**

Now that we have a couple of energies to work with, how do we derive the gradient flow? Previously we defined the gradient of a function \(\phi: \mathbb{R}^n \rightarrow \mathbb{R}\) as

\[ \nabla\phi = \left[ \begin{array}{c} \partial\phi/\partial x^1 \\ \vdots \\ \partial\phi/\partial x^n \end{array} \right], \]

i.e., as just a list of all the partial derivatives. This definition works pretty well when \(\phi\) is defined over a nice finite-dimensional vector space like \(\mathbb{R}^n\), but what about something more exotic like the Willmore energy, which operates on an infinite-dimensional vector space of functions? In general, the gradient of a function \(\phi\) at a point \(x\) can be defined as the unique vector \(\nabla \phi(x)\) satisfying

\[ \langle \nabla \phi(x), u \rangle = \lim_{h \rightarrow 0} \frac{\phi(x+hu)-\phi(x)}{h}, \]

for all vectors \(u\), where \(\langle \cdot, \cdot \rangle\) denotes the inner product on the vector space. In other words, taking the inner product with the gradient should yield the directional derivative in the specified direction. Notice that this definition actually serves as a *definition* of differentiability: a function is differentiable at \(x\) if and only if all directional derivatives can be characterized by a single vector \(\nabla\phi(x)\). Geometrically, then, differentiability means that if we “zoom in” far enough the function looks almost completely flat.

**Exercise 4.3**

Explain why the gradient is the direction of steepest ascent.

**Exercise 4.4**

Consider the function

\[ \phi: \mathbb{R}^2 \rightarrow \mathbb{R}; (x_1, x_2) \mapsto x_1^2 - x_2^2. \]

Confirm that the gradient found using the expression above agrees with the usual gradient found via partial derivatives.

**Exercise 4.5**

Let \(M\) be a surface without boundary. Assume that the Laplace-Beltrami operator \(\Delta\) is constant with respect to the immersion \(f\) and use our definition of \(\nabla\) above to show that

\[ \nabla E_D(f) \approx HN. \]

In other words, gradient flow on Dirichlet energy looks roughly like the mean curvature flow \(\dot{f} = -HN\) that we studied in the previous assignment.

The approximate gradient \(HN\) might be called a *linearization* of the true gradient — in general the idea is that we keep some quantity or some piece of an equation fixed so that the rest comes out to be a nice linear expression. This trick can be quite helpful in a practical setting, but it is typically worth understanding where and how the approximation affects the final result.

One final thing to mull over is the fact that the gradient depends on our particular choice of inner product \(\langle \cdot, \cdot \rangle\), which appears on the left-hand side in our definition. Why does the inner product matter? Intuitively, the gradient picks out the direction in which the energy increases fastest. But what does “fastest” mean? For instance, if we use a vector of real numbers \(\mathsf{x} \in \mathbb{R}^m\) to encode the vertices of a discrete curve, then what we really care about is the energy increase with respect to a change in the length of the *curve* — not the Euclidean length of the vector \(\mathsf{x}\) itself. In terms of our energy landscape we end up with a picture like the one below — you can imagine, for instance, that arrows on the left have unit norm with respect to the standard Euclidean inner product whereas arrows on the right have unit norm with respect to the \(L^2\) inner product on our discrete curve. As a result, gradient descent will proceed along two different trajectories:

**Exercise 4.6**

Consider an inner product \(\langle \cdot, \cdot \rangle\) on \(\mathbb{R}^n\) defined by a positive definite matrix \(\mathsf{B} \in \mathbb{R}^{n \times n}\), i.e., \(\langle u, v \rangle = \mathsf{u^T B v}\). Show that the gradient \(\nabla_B\) induced by this inner product is related to the standard gradient \(\nabla\) via

\[ \nabla_\mathsf{B} = \mathsf{B}^{-1} \nabla. \]

In the discrete setting, the matrix \(\mathsf{B}\) is sometimes referred to as the *mass matrix*, because it encodes the amount of “mass” each degree of freedom contributes to the total. When working with discrete differential forms, one possible choice of mass matrix is given by (an appropriate constant multiple of) the diagonal Hodge star. This choice corresponds to applying piecewise-constant interpolation and then taking the usual \(L^2\) inner product. For instance, here’s what piecewise constant interpolation looks like for a primal 1-form on a triangulated surface — the integrated value stored on a given edge gets “spread out” over the so-called *diamond region* associated with that edge:

In general, let \(\star_k\) be a real diagonal matrix with one entry for each \(k\)-dimensional simplex \(\sigma_i\). The nonzero entries are

\[ \left( \star_k \right)_{ii} = \frac{|\sigma_i^\star|}{|\sigma_i|}, \]

where \(\sigma_i^\star\) is the circumcentric dual of \(\sigma_i\), and \(|\cdot|\) denotes the (unsigned) volume. The corresponding mass matrix on primal discrete \(k\)-forms in \(n\) dimensions is then

\[ B_k = \left( \begin{array}{c} n \\ k \end{array} \right) \star_k, \]

i.e., a binomial coefficient times a (primal) diagonal Hodge star. The mass matrices for *dual* \(k\)-forms can likewise be expressed as constant multiples of the inverse:

\[ B^\star_k = \left( \begin{array}{c} n \\ k \end{array} \right) \star_{n-k}^{-1}. \]

These matrices will come in handy when deriving equations for discrete curvature flow.

**Flow on Curves**

For the remainder of this assignment, we’re going to make life simpler by working with planar curves instead of surfaces. As discussed earlier, we can describe the geometry of a curve via an immersion

\[ \gamma: I \rightarrow \mathbb{R}^2; s \mapsto \gamma(s)\]

of some interval \(I = [0,L] \subset \mathbb{R}\) into the Euclidean plane \(\mathbb{R}^2\). A common energy for curves is simply the integral of the curvature \(\kappa\), squared:

\[ E(\gamma) = \int_0^L \kappa^2\ ds. \]

Let’s first establish some facts about curvature in the smooth setting.

**Exercise 4.7**

The unit tangent field on a smooth curve \(\gamma\) can be expressed as \(T = (\cos\theta,\sin\theta)\) for some function \(\theta: I \rightarrow \mathbb{R}\). Show that the curvature can be expressed as

\[ \kappa = d\theta(X) \]

where \(X\) is a positively-oriented unit vector field. In other words, the scalar curvature is the change in the direction of the tangent.

**Exercise 4.8**

Explain in words why the total curvature of any closed immersed curve \(\gamma\), i.e., one satisfying \(\gamma(0) = \gamma(L)\) (whether discrete or not), is an integer multiple of \(2\pi\):

\[ \int_0^L \kappa ds = 2\pi k,\ k \in \mathbb{Z}. \]

(The number \(k\) is called the *turning number* of the curve.)

A stronger statement is the *Whitney-Graustein theorem*, which says that the turning number of a curve will be preserved by any *regular homotopy*, i.e., by any continuous motion that keeps the curve immersed. For instance, here’s an example of a motion that is *not* a regular homotopy — note that the curve gets “pinched” into a sharp cusp halfway through the motion, at which point the turning number goes from \(k=2\) to \(k=1\):

We’ll keep these ideas in mind as we develop algorithms for curvature flow.

**Discrete Curves**

In the discrete setting, \(\gamma\) is simply a collection of line segments connecting a sequence of vertices with coordinates \( \gamma_1, \ldots, \gamma_n \in \mathbb{R}^2 \):

Note that in the provided code framework, a curve is represented by a half edge mesh consisting of a single polygon. Therefore, to iterate over the curve you might write something like

```
FaceIter gamma = mesh.faces.begin();
HalfEdgeIter he = gamma->he;
do
{
   // do something interesting here!
   he = he->next;
}
while( he != gamma->he );
```

As with surfaces, we can consider both a primal and dual “mesh” associated with this curve — this time, each primal edge is associated with a dual vertex at its midpoint, and each primal vertex is associated with a dual edge connecting the two adjacent dual vertices:

In the language of discrete exterior calculus, then, \(\gamma \in (\mathbb{R}^2)^n\) is a \(\mathbb{R}^2\)-valued primal 0-form, i.e., a value associated with each vertex.

**Exercise 4.9**

Show that the nonzero entries of the diagonal Hodge star on primal 0-forms are given by

\[ (\star_0)_{ii} = L_i, \]

where

\[ L_i = \frac{1}{2}( |\gamma_{i+1}-\gamma_i| + |\gamma_i-\gamma_{i-1}| ). \]

**Coding 4.1**

Implement the methods `Edge::length()` and `Vertex::dualLength()`, which should return the primal edge length and the circumcentric dual edge length, respectively. (The latter should be a one-liner that just calls the former!) Finally, implement the method `IsometricWillmoreFlow1D::buildMassMatrix()`, which builds the diagonal Hodge star on primal 0-forms.

**Exercise 4.10**

Show that on a discrete curve the total curvature along a dual edge \(e^\star_{ij}\) is equal to the exterior angle \(\varphi_{ij} \in \mathbb{R}\) at the corresponding vertex, i.e., the difference in angle between the two consecutive tangents:

\[ \varphi_{ij} = \theta_j - \theta_i = \int_{e^\star_{ij}} \kappa\ ds. \]

*(Hint: Stokes’ theorem!)*

In other words, the exterior angle \(\varphi\) gives us the *integrated* curvature. Applying the discrete Hodge star yields *pointwise* curvatures \(\kappa\), which we will use as the degrees of freedom in our numerical curvature flow:

\[ \kappa = \star\varphi. \]

**Coding 4.2**

Implement the method `Vertex::curvature()`, which returns the pointwise curvature as defined above. *Hint: in the language of discrete exterior calculus, what kind of quantity is \(\varphi\)? And what kind of quantity is \(\kappa\)?*

The next exercise should solidify your understanding of where all these different quantities live, and which operators take you back and forth from one space to another!

**Exercise 4.11**

Show that for a discrete curve \(E(\gamma)\) can be written explicitly as

\[ E(\gamma) = \sum_i \varphi_i^2 / L_i, \]

assuming we use piecewise constant interpolation of curvature.

Now that we have a discrete curvature energy, let’s derive an expression for the gradient.

**Exercise 4.12**

Let \(\varphi\) be the angle made by two vectors \(u,v \in \mathbb{R}^2\). Show that the gradient of \(\varphi\) with respect to \(u\) can be expressed as

\[ \nabla_u \varphi = -\frac{v_{\perp u}}{2A} \]

where \(v_{\perp u}\) denotes the component of \(v\) orthogonal to \(u\) and \(A\) is the area of a triangle with sides \(u\) and \(v\).

**Exercise 4.13**

Let \(L\) be the length of a vector \(u = b-a\), where \(a\) and \(b\) are a pair of points in \(\mathbb{R}^2\). Show that

\[ \nabla_a L = -\hat{u} \]

and

\[ \nabla_b L = \hat{u}. \]

**Exercise 4.14**

Collecting the results of the past few exercises, show that the gradient of the \(i\)th term of our curvature energy

\[ E_i = \varphi_i^2 / L_i \]

with respect to vertex coordinates \(\gamma_{i-1}\), \(\gamma_i\), and \(\gamma_{i+1}\) is given explicitly by

\[
\begin{array}{rcl}
\nabla_{\gamma_{i-1}} E_i &=& \frac{\varphi_i}{L_i L_{i-1}} \left( \frac{v_{\perp u}}{A_i} + \frac{\varphi_i}{2L_i} \hat{u} \right) \\
\nabla_{\gamma_i} E_i &=& \frac{\varphi_i}{L_i^2} \left( \frac{u_{\perp v}-v_{\perp u}}{A_i} + \frac{\varphi_i}{2L_i} (\hat{v}-\hat{u}) \right) \\
\nabla_{\gamma_{i+1}} E_i &=& -\frac{\varphi_i}{L_i L_{i+1}} \left( \frac{u_{\perp v}}{A_i} + \frac{\varphi_i}{2L_i} \hat{v} \right)
\end{array}
\]

where \(\varphi_i\) is the exterior angle at vertex \(i\), \(L_i\) is the dual edge length, and \(A_i\) is the area of a triangle with edges \(u = \gamma_i-\gamma_{i-1}\) and \(v = \gamma_{i+1}-\gamma_i\). The gradient of \(E_i\) with respect to all other vertices \(\gamma_j\) is zero. Why? *(Hint: remember to take the gradient with respect to the right metric!)*

**Coding 4.3**

Implement the method `WillmoreFlow1D::computeGradient()` using the expressions above. The gradient of energy with respect to a given vertex should be stored in the member `Vertex::energyGradient`. Remember that the overall energy is a *sum* over the terms \(E_i\), which means you will need to add up the contributions to the gradient at each vertex.

**Coding 4.4**

Implement the method `WillmoreFlow1D::integrate()`, which should integrate the flow equation

\[ \dot{\gamma} = -\nabla E(\gamma) \]

using the forward Euler scheme. (See the end of the previous assignment for a brief discussion of time integration.) Run the code on the provided meshes, and report the maximum stable time step in each case, i.e., the largest time step for which the flow succeeds at smoothing out the curve. (The time step size can be adjusted using the keys ‘`-`’, ‘`=`’, ‘`_`’, and ‘`+`’.)

**Curvature Flow in Curvature Space**

If you feel exhausted at this point in the assignment, you’re not alone! Taking derivatives by hand can be a royal pain (and it gets even worse when you want second derivatives, which are required for more sophisticated algorithms like Newton descent). But it’s worth grinding out this kind of expression at least once in your life so that you really understand what’s involved. In practice there are a variety of alternatives, including numerical differentiation, automatic differentiation, and symbolic differentiation — these methods all have their place, and it’s well-worth understanding the tradeoffs they offer in terms of accuracy, efficiency, and code complexity.

But before getting mired in the bedraggled business of computer-based derivatives, it’s worth realizing that there is a tantalizing fourth alternative: come up with a simpler formulation of your problem! In particular, the mention of a *convex-quadratic energy* should make your mouth water and your heart beat faster, since these things make your life easier in a number of ways. “Convex-quadratic” means that your energy can be expressed as a real-valued homogeneous quadratic polynomial, i.e., as

\[ E(x) = \langle Ax, x \rangle \]

for some positive-semidefinite self-adjoint linear operator \(A\) that does not depend whatsoever on the argument \(x\). For instance, suppose that in the discrete setting the degrees of freedom \(x\) of our system are encoded by a vector \(\mathsf{x} \in \mathbb{R}^n\). Then a quadratic energy can always be represented as

\[ E(x) = \mathsf{x^T A x} \]

for some fixed symmetric positive-semidefinite matrix \(\mathsf{A} \in \mathbb{R}^{n \times n}\). Earlier we visualized definiteness in terms of the graph of the energy in two dimensions:

Independent of definiteness, the gradient of a quadratic energy has a simple *linear* expression

\[ \nabla E(x) = 2\mathsf{B^{-1}Ax}, \]

where the matrix \(\mathsf{B} \in \mathbb{R}^{n \times n}\) encodes the inner product. This setup not only simplifies the business of taking derivatives, but also makes things inexpensive to evaluate at the numerical level — for instance, we can apply the backward Euler method by just solving a linear system, instead of performing some kind of nasty nonlinear root finding. Moreover, since the matrix \(\mathsf{A}\) is constant, we can save a lot of computation by prefactoring it once and applying backsubstitution many times. This setup also has some nice analytical features. For one thing, any local minimum of a convex energy is guaranteed to be a global minimum, which means that gradient descent will ultimately lead to an optimal solution. For another, there is an extremely well-established theory of linear PDEs, which allows one to easily answer questions about things like numerical stability. In contrast, the general theory of nonlinear PDEs is kind of a zoo.

Ok, enough religion! Let’s see how a quadratic formulation can help us with the specific problem of curvature flow. Actually, we already have a quadratic energy — it’s

\[ E(\kappa) = \int_0^L \kappa^2 ds. \]

The only difference between this energy and the one we’ve been working with all along is that it’s a function of the curvature \(\kappa\) rather than the immersion \(f\) — as a result, we avoid all the nonlinearity associated with expressing \(\kappa\) in terms of \(f\). More concretely, at the discrete level we’re going to store and manipulate a single number \(\kappa_i\) at each vertex, rather than computing it indirectly from the vertex coordinates \(\gamma_i \in \mathbb{R}^2\).

**Coding 4.5**

Implement the method `IsometricWillmoreFlow1D::getCurvature()`, which simply evaluates the (pointwise) curvature at each vertex and stores it in the member `Vertex::kappa`. This method should call `Vertex::curvature()`.

One nice consequence of this setup is that the gradient becomes extremely simple! Taking the gradient with respect to the \(L^2\) inner product on 0-forms, we get

\[ \nabla E(\kappa) = 2\kappa. \]

Gradient flow then becomes a simple, linear equation involving no spatial derivatives:

\[ \dot{\kappa} = -2\kappa. \]

**Coding 4.6**

Implement the methods `IsometricWillmoreFlow1D::computeFlowDirection()` and `IsometricWillmoreFlow1D::integrate()`, which integrate the above flow equation using the forward Euler scheme. *Hint: this step should be very easy!*

If we want to actually draw the curve, we can integrate curvature to get tangents, then integrate tangents to get positions. In other words, we can recover the direction \(\theta\) of the tangent via

\[ \theta(s) = \theta_0 + \int_0^s d\theta = \theta_0 + \int_0^s \kappa(t)\ dt, \]

where \(\theta_0\) specifies the direction of the first tangent on our curve. The tangent vectors themselves are given by \(T(s) = (\cos\theta(s),\sin\theta(s))\) as before. Once we have the tangents, we can recover the immersion itself via

\[ f(s) = f_0 + \int_0^s T(t)\ dt, \]

where again \(f_0\) specifies a “starting point” for the curve. In the discrete setting, these two steps correspond to a very simple reconstruction procedure: start out at some initial vertex and join the edges end to end, rotating by the exterior angles \(\varphi_i\) at each step:

More explicitly, let

\[ \theta_i = \sum_{k=0}^i \varphi_k \]

and let

\[ T_i = L_i (\cos\theta_i,\sin\theta_i), \]

where \(L_i\) is the length of the \(i\)th primal edge. Then the vertex positions along the curve can be recovered via

\[ \gamma_i = \sum_{k=0}^i T_k, \]

mirroring the continuous formulae above. (Some questions to ponder: can you interpret these sums as piecewise integrals? What kind of quantity is \((\cos\varphi_i,\sin\varphi_i)\)? What kind of quantity is \(T_i\)? What’s the relationship between the two?)

**Coding 4.7**

Implement the methods `IsometricWillmoreFlow1D::recoverTangents()` and `IsometricWillmoreFlow1D::recoverPositions()`, which should compute the values \(T_i\) and \(\gamma_i\), respectively. If you use an \(O(n^2)\) algorithm to implement either of these methods you will get **zero** points! In other words, do not just evaluate the whole sum once for each vertex — there is obviously a better way to do it!

Note that the length of each edge is preserved *by construction* — after all, we build the curve out of segments that have the same length as in the previous curve! In other words, we get not only a curvature flow but an *isometric* curvature flow (in the smooth case, isometry is reflected in the fact that \(T(s) = (\cos\theta(s),\sin\theta(s))\) is always a unit vector).

Ok, sounds pretty good so far: we simply subtract some fraction of the curvature at each vertex, and compute a couple of cumulative sums. As an added bonus, we preserve length. Why haven’t people been doing this all along? The answer is: when something sounds too good to be true, it probably is!

In particular, let’s take a look at what happens when we work with a *closed curve*, i.e., a loop in the plane. If we make a completely arbitrary change to the curvature \(\kappa\), there’s no reason to expect that segments joined end-to-end will close back up. In other words, the final vertex may be somewhere different from where the initial vertex appeared:

A fancy way of describing this situation is to say that the tangents we recover from this procedure are not *integrable* — they do not “integrate up” to form a closed loop. Similarly, the curvature itself is not integrable: the cumulative angle function \(\theta\) does not describe the tangent direction of any closed loop. Why did this happen? Well, let’s go back and take a look at our condition on total curvature. We said that the curvature \(\kappa\) of any closed loop \(\gamma\) satisfies

\[ \int_{0}^L \kappa\ ds = 2\pi k \]

for some turning number \(k \in \mathbb{Z}\). Another way of saying the same thing is that the first and last tangents of our curve must match up: \(T(0) = T(L)\). But if we change \(\kappa\) arbitrarily, this condition will no longer hold.

**Exercise 4.15**

Suppose that at time zero, the curvature function \(\kappa\) on a smooth curve \(\gamma\) satisfies our condition on total curvature. Show that any change in curvature \(\dot{\kappa}\) orthogonal to the constant function \(1: [0,L] \rightarrow \mathbb{R}; s \mapsto 1\) with respect to the \(L^2\) inner product will preserve this condition.

We also need a condition that ensures the endpoints will meet up, i.e., \(\gamma(0) = \gamma(L)\). Although we will not derive it here, this condition again turns out to have a simple form:

\[ \int_0^L \dot{\kappa}\,\gamma\ ds = 0; \]

equivalently, \(\dot{\kappa}\) must be (\(L^2\)-)orthogonal to the \(x\)- and \(y\)-coordinate functions of the immersion. Overall, then, we’re saying that the change in curvature must avoid a three-dimensional linear subspace of directions:

\[ \langle \dot{\kappa}, 1 \rangle = \langle \dot{\kappa}, \gamma_x \rangle = \langle \dot{\kappa}, \gamma_y \rangle = 0. \]

Like convex-quadratic energies, linear constraints are particularly easy to work with — in the case of our flow, we can simply remove the component of \(\dot{\kappa}\) that sits in this “forbidden” space. More specifically, suppose that this space is spanned by an orthonormal basis \(\{\hat{c}_i\}\). Then we can simply travel in the projected direction

\[ \dot{\kappa}_c = \dot{\kappa} - \sum_{i=1}^3 \langle \dot{\kappa}, \hat{c}_i \rangle \hat{c}_i. \]

**Coding 4.8**

Implement the method `IsometricWillmoreFlow1D::buildConstraints()`, which constructs the three constraint directions \(1\), \(\gamma_x\), \(\gamma_y\) as dense column vectors.

**Coding 4.9**

Implement the method `IsometricWillmoreFlow1D::orthogonalizeConstraints()`, which builds an orthonormal basis \(\{\hat{c}_1,\hat{c}_2,\hat{c}_3\}\) spanning the same space as the three constraint directions. *Hint: use the Gram-Schmidt process — remember to use the correct inner product!*

**Coding 4.10**

Implement the method `IsometricWillmoreFlow1D::enforceConstraints()`, which removes the forbidden directions from the flow using the orthogonal basis and the procedure outlined above. Try running the isometric Willmore flow (you can switch to this flow in the GUI either by right-clicking to get the contextual menu, or by hitting the ‘`i`‘ key). Report the maximum stable time step that can be achieved for each of the provided meshes. Does the flow preserve the turning number of each of the input curves? (In other words, did we faithfully capture the Whitney-Graustein theorem in the discrete setting?) Try running the flow with and without constraints (by modifying `IsometricWillmoreFlow1D::enforceConstraints()`). What happens if you turn off all the constraints? Is there a strict subset of the constraints that is sufficient to keep the loop closed, or are they all needed?

One thing you might have noticed about this new flow is that, while it still smooths out the curve, it looks very different from the one you implemented in the `WillmoreFlow1D` class. Why is there a difference? In either case, aren’t we doing gradient descent on the same energy? Well, if you paid close attention, you might already know the answer: yes, the energy stays the same, but the *metric* we used to define the gradient is different! (And if you *really* paid close attention, you may even know how to modify the second flow to make it look like the first one — and how to implement it!) Beyond that, there are all sorts of nice ways to improve the algorithm that involve discrete Laplacians and Poisson equations and… you know what? You’ve worked hard enough already. Enjoy the break, and see you next year!

**Skeleton code:** ddg_hw4_skeleton.zip