Part of your course grade is determined by participation, which can include both in-class participation as well as discussion here on the course webpage. Therefore, your first assignment is to:
- create an account on the course webpage (you must use your Andrew email address, so we can give you participation credit!),
- sign up for Piazza and Discord,
- read carefully through the Course Description and Grading Policy, and
- leave a comment on this post containing your favorite mathematical formula (see below).
To make things interesting, your comment should include a description of your favorite mathematical formula typeset in $\LaTeX$. If you don’t know how to use $\LaTeX$, this is a great opportunity to learn — a very basic introduction can be found here. (And if you don’t have a favorite mathematical formula, this is a great time to pick one!)
(P.S. Anyone interested in hearing about some cool “favorite theorems” should check out this podcast.)
Hi, my name is Siddharth Paratkar, and personally my favorite equation is the same as Po-Shen Loh’s, the infamous $e^{i\pi} = -1$, or alternatively, $e^{i\pi}+1 = 0$.
Yes, a classic! To be compared only with this stunner. 😉
$$\lfloor \pi \rfloor - \lceil e \rceil = 0.$$
My favorite mathematical theorem is the Convolution Theorem, a central result in any introductory signal processing course. Let $x(t)$ and $y(t)$ be two complex-valued signals, indexed by a real number $t$. Moreover, let $*$ and $\mathcal{F}$ denote the convolution and Fourier transform operators, respectively. Then the Convolution Theorem states $$\mathcal{F} \{ x(t) * y(t) \} = \mathcal{F} \{ x(t)\} \times \mathcal{F} \{ y(t)\}.$$
In English, this means that the Fourier transform of the convolution of two signals is equivalent to the product of their respective Fourier transforms. This nice relationship between convolution, multiplication, and the Fourier transform enables us to manipulate signals in many useful ways and bypass evaluating unwieldy integrals/sums.
Yes indeed. And other amazing things. For instance, you can reduce the asymptotic cost of multiplying two polynomials via the Fourier transform.
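A minimal sketch of that trick, using NumPy: multiplying polynomials is a convolution of coefficient sequences, so by the convolution theorem it becomes a pointwise product after an FFT ($O(n \log n)$ instead of $O(n^2)$). The example polynomials are arbitrary.

```python
import numpy as np

# The convolution theorem in action: polynomial multiplication is a
# convolution of coefficient sequences, so it becomes a pointwise product
# after an FFT (O(n log n) instead of O(n^2)).
def poly_mult_fft(a, b):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    n = len(a) + len(b) - 1          # number of product coefficients
    fa = np.fft.rfft(a, n)           # zero-padded transforms...
    fb = np.fft.rfft(b, n)
    return np.fft.irfft(fa * fb, n)  # ...multiplied pointwise, then inverted

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
c = np.round(poly_mult_fft([1, 2], [3, 4])).astype(int)
print(c)  # [ 3 10  8]
```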
Later on in class we’ll see that the standard Fourier bases (sines and cosines) can be generalized to geometric domains using eigenfunctions of the so-called Laplace-Beltrami operator. Unfortunately, although there is a way to perform a kind of Fourier transform into this basis, there is (to date) no fast Fourier transform algorithm for curved geometry. So, people play other games with the Laplacian to achieve similar results.
a classic 🙂
$e^{i \pi} + 1 = 0$
…Classic!
Lagrange multipliers! $Df(x) = \lambda^T Dg(x)$
Yes, an extremely important one for applied geometry. And of course there’s a geometric picture: an extremum of a constrained optimization problem is a point where the normal of the constraint manifold is parallel to the gradient of the objective function—so that no further progress can be made without leaving the constraint manifold.
$\sum_{k=1}^n k^3 = \left( \sum_{k=1}^{n} k \right)^2$
Wow, I didn’t know this one. Is it really true?!
It is! I have no intuition for why, but check out Christopher Pellerito’s visual proof here (you may have to scroll down a bit): https://www.quora.com/What-is-a-simple-proof-for-the-fact-that-the-sum-of-the-cubes-of-the-first-n-integers-equals-the-square-of-the-sum-of-those-integers
I love this formula! Knuth’s Concrete Mathematics (especially Chapter 2) provides intuition and general techniques for finding closed forms for sums, products, and indexed operations in general. This result can be arrived at with many of the book’s methods; especially interesting is “finite calculus.”
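The identity is also easy to check numerically; a quick brute-force verification for small $n$:

```python
# A quick numerical check of the sum-of-cubes identity:
# sum_{k=1}^n k^3 = (sum_{k=1}^n k)^2 for the first few n.
for n in range(1, 50):
    lhs = sum(k ** 3 for k in range(1, n + 1))
    rhs = sum(range(1, n + 1)) ** 2
    assert lhs == rhs
print("verified for n = 1..49")
```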
My favorite mathematical formula is the incompressible Navier-Stokes equation:
$$\frac{\partial u}{\partial t} + (u \cdot \nabla) u - \frac{\mu}{\rho_0} \nabla^2 u = - \nabla \left(\frac{p}{\rho_0}\right) + g$$
When I first studied fluid dynamics, I remember the professor explaining how we could derive this equation very intuitively using a control volume. It was one of the only equations in Mechanical Engineering that just made sense to me the way he explained it, without my having to review it many times. While I haven’t used it much since then, I still remember that class pretty fondly.
Yes! Navier-Stokes is a source of endless intrigue (and perhaps a million dollars…).
For incompressible Euler equations (no viscosity), there’s a nice geometric picture that connects to, surprisingly enough, rigid body rotations, which becomes clear through the lens of geometric mechanics. This perspective leads in turn to some cool algorithms for simulating fluids via Laplacian eigenfunctions.
One of my favorite math formulae is the HJB (Hamilton-Jacobi-Bellman) equation: $0 = \min_\textbf{u} \left[\ell(\textbf{x},\textbf{u}) + \frac{\partial J^*(\textbf{x})}{\partial \textbf{x}}f(\textbf{x},\textbf{u}) \right]$. It describes the relationship between the optimal control policy and the optimal cost-to-go function. $\textbf{u}$, $\textbf{x}$, $f$, $\ell$, and $J^*$ are, respectively, the current optimal control command, the current system state, the system dynamics, the instantaneous cost for a state-control pair, and the optimal cost-to-go function.
Nice! The more basic Hamilton-Jacobi equation plays a role in geometry and geometric computation—for instance, in connection with geodesics and geodesic distance. You’ll have a chance to implement one of these equations later on in class.
Wow, that sounds great to me!
$\nabla_p \iint_{\Omega_a} D(-\boldsymbol{A}) = \oint_\Omega(\nabla_{p_\bot}\Omega)\lrcorner( D(-\boldsymbol{A}))$
Not sure what it is—could you give us a few words?
Yes! Sorry, I should’ve given more background; this formula is very specific to my ongoing research topic. It is an application of Leibniz’s rule to the study of robot locomotion. In a constrained variational optimization, the net displacement of a robot gait can be approximated by the area integral of the total Lie bracket $D(-\boldsymbol{A}) = -d\boldsymbol{A} + [\boldsymbol{A}_1,\boldsymbol{A}_2]$ (the sum of the exterior derivative and the local Lie bracket). To find the gradient of the gait displacement $\nabla_p g_\Omega$, this area integral is then transformed into the interior product of the boundary gradient and the integrand.
\[ \lim_{n \to \infty} \frac{a_{n-1} - a_{n-2}}{a_n - a_{n-1}} = 4.669\ldots \]
Who is $a$?
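A guess at the answer: the limit $4.669\ldots$ is the Feigenbaum constant $\delta$, so the $a_n$ are presumably successive period-doubling bifurcation parameters, e.g. of the logistic map $x \mapsto r x(1-x)$ (an assumption; the comment doesn’t say). A sketch using the standard published bifurcation values:

```python
# If the a_n are the period-doubling bifurcation parameters of the logistic
# map x -> r*x*(1 - x) (an assumption -- the original comment doesn't say),
# the ratio of successive gaps approaches the Feigenbaum constant 4.669...
# Standard published values of the first few bifurcation points r_n:
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]
for i in range(2, len(r) - 1):
    ratio = (r[i] - r[i - 1]) / (r[i + 1] - r[i])
    print(ratio)  # creeps toward 4.669...
```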
My favorite formula is the Euler-Lagrange equation $\frac{\partial L}{\partial q} - \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) = 0$.
Excellent. I feel like I could have bypassed so many hours of deciphering “force diagrams” in high school physics had I just written down the Lagrangian and taken a few derivatives…
The Lagrangian (and Hamiltonian) perspective on mechanics also turns out to be very useful in developing so-called “structure-preserving” numerical integrators, which preserve basic quantities like energy and momentum over very long integration times. (Think about simulating the Earth’s orbit so that it doesn’t fly off out of the solar system, or crash into the sun!). Ari Stern has some nice slides here.
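A tiny illustration of the structure-preserving idea, using a simple pendulum ($H = \frac{1}{2}p^2 - \cos q$) as an assumed example: symplectic (semi-implicit) Euler keeps the energy error bounded over long times, while ordinary explicit Euler lets it drift.

```python
import math

# Compare explicit Euler with symplectic (semi-implicit) Euler on a
# pendulum, H = p^2/2 - cos(q): the symplectic scheme's energy error
# stays bounded, the explicit scheme's grows without bound.
def energy(q, p):
    return 0.5 * p * p - math.cos(q)

def drift(symplectic, steps=50000, h=0.01):
    q, p = 1.0, 0.0
    for _ in range(steps):
        if symplectic:
            p -= h * math.sin(q)  # momentum first...
            q += h * p            # ...then position, with the NEW momentum
        else:
            q_new = q + h * p     # both updates use the OLD state
            p -= h * math.sin(q)
            q = q_new
    return abs(energy(q, p) - energy(1.0, 0.0))

print("explicit Euler energy drift:  ", drift(False))
print("symplectic Euler energy drift:", drift(True))
```

The only difference between the two schemes is which momentum the position update sees, yet that one change is what preserves the symplectic structure.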
My favorite formula is Stokes’ theorem
$$\int_{\Omega} d\alpha = \int_{\partial\Omega} \alpha.$$
The meaning of this theorem will become clear later on in the semester! 😉
Hard to pick an absolute favorite but I like $$\sum_{n=1}^\infty n=-\frac{1}{12}$$ because, without the right lens, it’s just wrong, and it can lead you to really interesting topics (foremost being the Riemann Hypothesis).
For anyone who’s perplexed by this statement (“How can the sum of positive values be negative?!”), check out this nice writeup of the Ramanujan summation, which is a terrific reminder to always be very careful when you encounter infinity!
My favorite formula is $E = mc^2$. (Strictly speaking it is not a mathematical formula)
It shows that mass and energy are interchangeable.
To quote Walter Matthau portraying Albert Einstein in the 1994 romantic comedy I.Q., “I hope so!”
Bézout’s theorem, $\operatorname{mult}(C \cap D) = \deg(C) \cdot \deg(D)$.
Great! We won’t really talk about algebraic geometry in this class… but we will effectively talk about geometric algebra, which is an entirely different thing!
My favorite formula is the following:
$$\sum_{d|n} \varphi(d) = n$$
where $\varphi$ is Euler’s totient function.
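A quick brute-force check of the identity for small $n$, using the definition of $\varphi$ directly:

```python
from math import gcd

# Check the divisor-sum identity: sum over d|n of phi(d) equals n.
def phi(n):
    """Euler's totient: count of 1 <= k <= n coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in range(1, 100):
    assert sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n
print("verified for n = 1..99")
```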
The triangle inequality: $\text{dist}(u,v) \leq \text{dist}(u,w) + \text{dist}(w,v)$
One of the few times I’m ok with inequality is when it involves triangles. Great!
The Hagen-Poiseuille equation for fluid flow in cylindrical tubes, which is derived from Navier-Stokes by making several simplifying assumptions.
$$ Q = \frac{\Delta p \pi R^4}{8 \mu L} $$
Tubular!
Suppose $V$ is a finite-dimensional vector space, $W$ is a vector space, and $T$ is a linear map from $V$ to $W$. The Fundamental Theorem of Linear Maps states $\dim V = \dim \text{ range } T + \dim \text{ null } T$.
Whenever “Fundamental Theorem” is attached to a result, you know it’s important (or the discoverer was incredibly pretentious). This result is the former because it leads to almost every major linear algebraic result in some way or another.
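A concrete sketch in the finite-dimensional case, with an arbitrary rank-3 matrix standing in for $T$:

```python
import numpy as np

# Rank-nullity check: build a rank-3 map T : R^8 -> R^5 and verify
# dim V = dim range T + dim null T. (The specific matrix is arbitrary.)
rng = np.random.default_rng(0)
T = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))  # rank 3
rank = np.linalg.matrix_rank(T)                                # dim range T
s = np.linalg.svd(T, compute_uv=False)
nullity = T.shape[1] - np.count_nonzero(s > 1e-10 * s.max())   # dim null T
print(rank, nullity)  # 3 and 5: they sum to dim V = 8
assert rank + nullity == T.shape[1]
```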
Indeed. We’ll encounter a bunch of pretentious fundamental theorems in this class, like the fundamental theorem of plane curves which says that a curve can be recovered (up to rigid motions) entirely from its curvature.
The vector space decomposition you give is also central to the idea of Helmholtz-Hodge decomposition, which we’ll use to analyze and manipulate tangent vector fields on surfaces.
My favorite formula is Maxwell’s equations:
\begin{align*}
\nabla \times \mathbf{H} &= \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \cdot \mathbf{D} &= \rho
\end{align*}
Maxwell’s equations present a beautiful duality between electricity and magnetism. They are physical rather than purely mathematical equations, but they are closely related to Stokes’ theorem and Gauss’s flux theorem.
Great! Maxwell’s equations can also be concisely written in the language of differential forms, which we’ll develop in the first part of this class. See in particular this paper, which uses discrete differential forms to develop numerical algorithms for solving Maxwell’s equations.
Not really a true “mathematical formula”, but I have always loved the Maxwell-Faraday Equation in electromagnetism:
\[ \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \]
Maxwell seems to be a popular guy in this class!
A nice piece of geometry (essentially) due to Maxwell, and related to electromagnetism, is the fact that the solid angle associated with a space curve defines a harmonic function over all of space. This fact can in turn be used to construct some beautiful surfaces, where the boundary is given by a prescribed knot.
The triangle inequality! Has a large effect on my path and motion planning
$ d(a,c)\leq d(a,b) + d(b,c)$
If you like the triangle inequality, you’re in luck: there are gonna be lots of triangles this semester. The triangle inequality in particular will be critical for characterizing discrete Riemannian metrics on an abstract triangulation, which we’ll look at in detail when we discuss intrinsic triangulations.
My personal favourite is the rendering equation 🙂
\[
L_o(x,\omega_o,\lambda,t) = L_e(x,\omega_o,\lambda,t) + \int_\Omega f_r(x,\omega_i,\omega_o,\lambda,t)L_i(x,\omega_i,\lambda,t)(\omega_i \cdot n) d\omega_i
\]
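A tiny Monte Carlo check of one piece of this integral: with constant $f_r$ and $L_i$, the reflected radiance reduces to $\int_\Omega (\omega_i \cdot n)\, d\omega_i = \pi$ over the hemisphere, which we can estimate by sampling (the sample count and seed are arbitrary).

```python
import math, random

# Monte Carlo estimate of the cosine integral over the unit hemisphere,
# which appears inside the rendering equation: its exact value is pi.
# Uniform hemisphere sampling has pdf 1/(2*pi), and by Archimedes'
# hat-box theorem cos(theta) is uniform on [0, 1] under that sampling.
random.seed(1)
N = 200000
total = 0.0
for _ in range(N):
    cos_theta = random.random()         # cos(theta) ~ U(0, 1)
    total += cos_theta * 2 * math.pi    # divide by the pdf 1/(2*pi)
estimate = total / N
print(estimate)  # close to pi
```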
Glad to hear it! 🙂
We won’t discuss light transport in this class, but this same kind of recursive integral equation can be used to solve partial differential equations (PDEs), which we will talk about a lot. There are also some interesting ideas about how to formulate light transport from a geometric point of view.
I’m always a big fan of Coulomb’s Law since electricity is magic, and it’s fun to watch particle simulations
$$F = \frac{q_1q_2}{4\pi\varepsilon_0r^2}$$
Shameless Sideways Self Promo
https://drive.google.com/file/d/1QMTdb5YBAr6U_9e1Qn4tTNS6VQ1_s8wN/view?usp=sharing
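For fun, the formula as a few lines of Python; the two 1 μC charges 10 cm apart are just example values.

```python
import math

# Coulomb's law in SI units: force magnitude between two point charges.
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def coulomb_force(q1, q2, r):
    """Magnitude of the force between point charges q1, q2 at distance r."""
    return q1 * q2 / (4 * math.pi * EPS0 * r ** 2)

F = coulomb_force(1e-6, 1e-6, 0.1)
print(F)  # roughly 0.9 N
```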
Hey, if you like watching repulsive particle systems, you’ll love watching repulsive curves and repulsive surfaces, which are also based on a Coulomb-like energy.
One of the earliest formulas one learns that becomes a famous “non-formula” (over the positive integers) when the exponent is changed to any $n > 2$
$$ a^2 + b^2 = c^2 $$
Next I suppose you’ll tell me you have a marvelous proof that the comment box is too small to contain…
My favorite equation has to be the heat equation expressed in this form:
$$\frac{\partial U}{\partial t} = \kappa \nabla^{2} U$$
Though I do not come from a physics background, I will never forget how blown away I was when I saw this equation used to solve for the temperature distribution in a bar. So much math and nature compressed into such a deceptively simple-looking formula.
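A minimal sketch of that bar problem, assuming an explicit finite-difference scheme (grid size, diffusivity, and initial spike are arbitrary choices, not tuned for accuracy):

```python
import numpy as np

# Explicit finite differences for the 1D heat equation u_t = kappa * u_xx
# on a bar whose ends are held at zero temperature.
n, kappa, dx = 100, 1.0, 0.01
dt = 0.4 * dx ** 2 / kappa   # respects the stability limit dt <= dx^2/(2 kappa)
u = np.zeros(n)
u[n // 2] = 1.0              # a spike of heat in the middle of the bar
for _ in range(500):
    u[1:-1] += dt * kappa * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
print(u.max())  # the spike spreads out and flattens
```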
Awesome. The heat equation also plays a big, big role in geometric algorithms. For instance, in one of the assignments you’ll use heat diffusion to compute geodesic distance. The temperature at a point (as a function of time) also provides a surprisingly informative description of the underlying geometry.
As some of my peers have done, my favorite equation is a physics formula (not exactly a pure math equation): the Tsiolkovsky rocket equation describes the relationship between the mass of a rocket ship’s payload ($m_f$), the total mass (payload + fuel: $m_0$), the effective exhaust velocity ($v_e$, effectively a measure of how efficient the rocket engine is), and the total velocity change that the rocket can give to its payload ($\Delta v$).
It is normally written
$$\Delta v = v_e \ln{\frac{m_0}{m_f}}$$
To understand the implications of it, it is more useful to solve for the total mass:
$$m_0 = m_f e^{\frac{\Delta v}{v_e}}$$
This is the form that describes roughly how much a given space mission will cost: a rocket like the Saturn V, weighing 3000 tons, is considerably more expensive than a Pegasus weighing 23 tons (\$1.2 billion per launch vs. \$40 million per launch, respectively). So this equation describes facts like “making your rocket carry twice as much cargo will cost twice as much,” but also “traveling somewhere that requires more velocity will exponentially increase cost,” and conversely “designing a more efficient engine or plotting a more efficient route will exponentially decrease cost.” As someone very interested in spaceflight, I find this equation both encouraging and disheartening, depending on which side of the exponential I’m looking at.
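To make the exponential concrete, a quick calculation; the numbers (10 t payload, 9.4 km/s to orbit, 4.4 km/s exhaust velocity) are illustrative assumptions, not mission data.

```python
import math

# Tsiolkovsky's equation solved for total mass: m0 = m_f * exp(dv / v_e).
def total_mass(m_f, delta_v, v_e):
    return m_f * math.exp(delta_v / v_e)

m0 = total_mass(10e3, 9400.0, 4400.0)  # kg, m/s, m/s
print(round(m0 / 1e3, 1), "tonnes at liftoff for a 10 tonne payload")
```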
Sounds like we should boldly go… but very slowly. Neat!
My favorite is Einstein’s equation for general relativity:
$$G_{\mu\nu} \equiv R_{\mu\nu} - \frac12 R g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}$$
I have to admit I don’t fully understand this equation yet, but I know it involves many concepts from differential geometry on Lorentzian manifolds. It’s amazing how Einstein’s notation can condense a really complicated concept into such an elegant equation. Science would not be possible without solid and clever mathematical definitions and frameworks!
Yes indeed—the histories of differential geometry and general relativity are intertwined not only in the smooth setting, but also in the discrete setting! In fact, some of the very first work on discrete differential geometry was an effort to solve Einstein’s equations: Tullio Regge’s classic paper General Relativity Without Coordinates. This work looks remarkably close to modern discrete differential geometry, especially the topic of intrinsic triangulations.
My favorite is the Fourier transform:
\[ \hat{f}(p) = \int_{-\infty}^{\infty} f(x) e^{-2 \pi i x p} dx \]
Endless cool applications in physics and engineering, and lots of cool theorems surrounding it.
Indeed! As noted above, the Fourier transform can be generalized to all sorts of signals on all sorts of geometry by way of the Laplace-Beltrami operator (and friends). We’ll spend a lot of time talking about the Laplacian this semester.
My favorite is the thin lens equation :
$\frac{1}{p} +\frac{1}{q} = \frac{1}{f} $
A simple equation that captures the delicate design of a lens.
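The formula solved for the image distance, as a couple of lines of Python (the numbers are arbitrary examples):

```python
# Thin lens equation 1/p + 1/q = 1/f, solved for the image distance q
# given object distance p and focal length f.
def image_distance(p, f):
    return 1.0 / (1.0 / f - 1.0 / p)

q = image_distance(p=30.0, f=10.0)
print(q)  # 15.0: the image forms 15 units behind the lens
```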
I love the Fourier transform and Euler’s formula. Since they have already been mentioned, I’d like to bring in another of my favorites, which, however, I do not thoroughly understand.
\[
i \hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \hat H \Psi(\mathbf{r},t)
\]
The $i$ in this differential equation indicates a kind of rotation of the wave function, making the Schrödinger equation even more attractive (and abstruse).
My favorite formula is probably $\frac{\mathrm d}{\mathrm d x} e^x = e^x$
Hi team! My favorite piece of mathematical intrigue is the connection between a Hilbert space’s inner product and dual space spelled out by the Riesz representation theorem:
For a Hilbert space $X$, a functional $x' : X \rightarrow \mathbb{R}$ belongs to the (continuous) dual space $X'$ of $X$ if and only if there exists a $y \in X$ such that for every $x \in X$, $x'(x) = \langle x, y \rangle$.
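In the finite-dimensional Hilbert space $\mathbb{R}^4$ the theorem is very concrete: every linear functional is a dot product with some fixed vector. A small sketch (the coefficients below are an arbitrary example):

```python
import numpy as np

# Riesz representation in R^4: the functional x' and its representing
# vector y give the same number on every input.
y = np.array([2.0, -1.0, 0.5, 3.0])   # the representing vector
x_prime = lambda x: float(x @ y)       # the functional it represents
x = np.ones(4)
print(x_prime(x), float(np.dot(x, y)))  # both 4.5
```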
Gotta go with Mandelbrot
$$Z_{n+1} = Z_{n}^{2} + C$$
I mean, look at what it produces!!!
https://i.stack.imgur.com/ER6mQ.jpg
PS: Whoops, forgot to hit post when I first wrote this 😛
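For anyone who wants to see it without leaving the terminal, a tiny ASCII rendering of the same iteration (resolution and iteration count are arbitrary):

```python
# ASCII Mandelbrot via z -> z^2 + c: points whose orbit stays bounded
# after 30 iterations are drawn as '#'.
rows = []
for im in range(-12, 13):
    row = ""
    for re in range(-40, 25):
        c = complex(re / 20.0, im / 10.0)
        z = 0j
        for _ in range(30):
            z = z * z + c
            if abs(z) > 2.0:
                row += " "
                break
        else:
            row += "#"   # never escaped: (approximately) in the set
    rows.append(row)
print("\n".join(rows))
```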
I’m late to the party but since I haven’t seen this one in the comments, I have to:
Cauchy integral formula: for any function $f$ holomorphic on a simply connected region $\Omega$ of the complex plane,
$$
f(z)= \frac{1}{2\pi i} \int_{\gamma}\frac{f(w)}{w-z}\,dw
$$
for any simple closed curve $\gamma$ in $\Omega$ winding once counterclockwise around $z$.
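A quick numerical check, assuming $f(z) = e^z$, $z = 0$, and the unit circle as $\gamma$; the trapezoidal rule is spectrally accurate on periodic integrands, so even a modest sample count recovers $f(0)$ almost exactly.

```python
import cmath

# Cauchy's integral formula checked numerically for f(z) = exp(z) at z = 0,
# integrating over the unit circle with the trapezoidal rule.
N = 1000
total = 0j
for k in range(N):
    theta = 2 * cmath.pi * k / N
    w = cmath.exp(1j * theta)          # point on the unit circle
    dw = 1j * w * (2 * cmath.pi / N)   # w'(theta) * dtheta
    total += cmath.exp(w) / (w - 0) * dw
approx = total / (2j * cmath.pi)
print(approx)  # close to f(0) = 1
```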