Part of your course grade is determined by participation, which can include both in-class participation as well as discussion here on the course webpage. Therefore, your first assignment is to:
- create an account, and
- leave a comment on this post containing your favorite mathematical formula (see below).
To make things interesting, your comment should include a description of your favorite mathematical formula typeset in $\LaTeX$. If you don’t know how to use $\LaTeX$, this is a great opportunity to learn — a very basic introduction can be found here. (And if you don’t have a favorite mathematical formula, this is a great time to pick one!)
(P.S. Anyone interested in hearing about some cool “favorite theorems” should check out this podcast.)
My favorite formula is the isoperimetric inequality. It states that if $\Omega\subseteq\mathbb{R}^{2}$ is a bounded, open, connected region with $C^{1}$ boundary $\partial\Omega$, then $L^{2}\geq4\pi A$, where $L$ is the length of $\partial\Omega$ and $A$ is the area of $\Omega$, with equality exactly when $\Omega$ is a ball.
The fact that the boundary is $C^{1}$ allows us to parameterize the boundary as the image of a $C^{1}$ function $s:S^{1}\rightarrow\mathbb{R}^{2}\cong\mathbb{C}$, where $S^{1}$ is the circle and $|s'(\theta)|=L$ for every $\theta\in S^{1}$. The proof relies on Fourier analysis, which makes it one of my favorite proofs of all time. In the equality case, we show that $s(\theta)=s(0)+s'(0)e^{2\pi i\theta}$ by Fourier magic, which immediately gives that $\Omega$ is a ball.
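A quick numerical sanity check of the inequality, sketched in Python (the shapes and helper name are my own, just for illustration):

```python
import math

def isoperimetric_ratio(L, A):
    # Returns L^2 / (4*pi*A); the inequality says this is always >= 1.
    return L**2 / (4 * math.pi * A)

# Circle of radius r: L = 2*pi*r, A = pi*r^2 -> ratio exactly 1 (equality case).
r = 3.0
circle = isoperimetric_ratio(2 * math.pi * r, math.pi * r**2)

# Unit square: L = 4, A = 1 -> ratio 4/pi > 1 (strict inequality).
square = isoperimetric_ratio(4.0, 1.0)
```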
The Pythagorean theorem in 3D, $ d = \sqrt{x^2 + y^2 + z^2} $, is used to calculate the distance between two points in three-dimensional space. It extends the classic 2D Pythagorean theorem by incorporating the z-axis, representing depth, in addition to the x and y axes.
Basel’s identity: $\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}$. A mysterious formula with an amazing proof by Euler.
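The convergence (slow, with error on the order of $1/N$) is easy to see numerically; here is a small Python sketch:

```python
import math

# Partial sums of 1/n^2 approach pi^2/6; the error after N terms is ~1/N.
N = 100_000
partial = sum(1 / n**2 for n in range(1, N + 1))
target = math.pi**2 / 6
```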
One of my favorite mathematical formulas is the J-rule, also known as path induction, from Homotopy Type Theory. For a given type A : Type and a dependent type P : (x : A) $\to$ (y : A) $\to$ (p : x = y) $\to$ Type, we have a map
((x : A) $\to$ P x x refl) $\to$ ((x : A) $\to$ (y : A) $\to$ (p : x = y) $\to$ P x y p)
This probably doesn’t look like mathematics to most people, but this principle is the driving force behind the “homotopy” part of Homotopy Type Theory!
Briefly, it says that, in order to prove that a property P holds for all elements x and y such that x = y, we only need to prove that P holds in the case where the two elements are both x and the proof of x = x is the reflexivity proof.
What’s interesting is that this actually isn’t trivial! The punchline is that in HoTT, there can be multiple ways in which two elements can be equal. The proof that x = x by reflexivity might only be one of them! So it’s surprising that in some situations, we still only need to consider the reflexivity case.
And furthermore, there’s an analogue of this fact in homotopy theory, and the connection between the two has helped fuel the development of the HoTT program. In the slim chance anyone is actually following along with this, I’d definitely recommend the blog post https://homotopytypetheory.org/2011/04/10/just-kidding-understanding-identity-elimination-in-homotopy-type-theory/ for explaining this rule in more depth, but that isn’t the most approachable resource for beginners necessarily (unless you also have a background in type theory).
I like the Simple Harmonic Motion (SHM) formula since I found it useful for representing cute motion animations of elastic balls.
\[y = A\sin(\omega t + \phi_0)\] where A is the amplitude, \( \omega \) is the angular velocity, t is time, and \( \phi_0 \) is the phase constant.
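A minimal Python sketch of the formula (the parameter defaults are my own choices, picked so the period is 1):

```python
import math

def shm_position(t, A=1.0, omega=2 * math.pi, phi0=0.0):
    # y(t) = A * sin(omega * t + phi0)
    return A * math.sin(omega * t + phi0)

# With omega = 2*pi the period is 1, so positions repeat every unit of time.
y0 = shm_position(0.25)   # quarter period: sin(pi/2) = 1, the peak amplitude
y1 = shm_position(1.25)   # one full period later: same position
```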
I love the normal distribution:
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
It combines useful constants like $\pi$ and $e$ in one clean equation. The normal distribution has many applications across domains; importantly, the Central Limit Theorem, which states that the distribution of sample means converges to this distribution, explains why so many random events in nature follow it.
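The Central Limit Theorem is easy to see empirically; here is a small Python demo (sample sizes and the uniform source distribution are my own choices):

```python
import random
import statistics

random.seed(0)

# Means of n uniform(0,1) samples; the CLT says these cluster around 0.5
# with standard deviation sqrt(1/12)/sqrt(n).
n, trials = 50, 2000
sample_means = [statistics.mean(random.random() for _ in range(n))
                for _ in range(trials)]

grand_mean = statistics.mean(sample_means)
spread = statistics.stdev(sample_means)
predicted_spread = (1 / 12) ** 0.5 / n ** 0.5  # roughly 0.041
```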
I like the residue theorem. Say you’re given a (simply connected) open subset $U$ of the complex plane and a function $f$ which is holomorphic (essentially, complex differentiable) over all of $U$ aside from a finite number of points $a_1, a_2, \dots, a_n$.
Then, it is possible to compute the loop integral over a simple closed curve $C$ via
\[\oint_C f(z)\, dz = 2\pi i\sum_k\text{Res}(f, a_k)\]
with the sum taken over the $a_k$s inside the loop. Residues are essentially calculated via power series in complex analysis which are far easier to compute than the integrals otherwise may be.
This isn’t actually the most general definition, but I like this statement of the formula because of its application to real integrals. The idea of computing certain real integrals by going into the complex plane and using this theorem is surprisingly common and presents an interesting connection.
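A quick numerical check of the theorem for the simplest pole, sketched in Python (the discretization is my own; $f(z) = 1/z$ has a single pole at $0$ with residue $1$):

```python
import cmath
import math

def contour_integral(f, radius=1.0, n=20000):
    # Numerically integrate f around the circle |z| = radius centered at 0,
    # using the parameterization z(t) = radius * e^{it}.
    total = 0j
    for k in range(n):
        t = 2 * math.pi * k / n
        z = radius * cmath.exp(1j * t)
        dz = 1j * z * (2 * math.pi / n)   # z'(t) dt
        total += f(z) * dz
    return total

# f(z) = 1/z: the residue theorem predicts the integral is 2*pi*i.
result = contour_integral(lambda z: 1 / z)
expected = 2j * math.pi
```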
My “favourite” theorem is Stokes’ theorem. The reason for that is that when I was an undergraduate, I took Multivariable Calculus and it was such a painful experience. So I was partially horrified when I realized that I would have to deal with it again in DDG.
Anyways, the theorem says:
Let \(S\) be a piecewise smooth oriented surface having a piecewise smooth boundary curve \(C\). Let \(\bf{F} = M\bf{i} + N\bf{j} + P\bf{k}\) be a vector field whose components have continuous first partial derivatives on an open region containing \(S\). Then the circulation of \(\bf{F}\) around \(C\) in the direction counterclockwise with respect to the surface’s unit normal vector \(\bf{n}\) equals the integral of the curl vector field \( \mbox{curl}\,\bf{F}\) over \(S\):
\begin{equation}
\oint_C{\bf F}\cdot{d\bf r}=\iint_S\mbox{curl}\,{\bf{F}}\cdot{\bf{n}\ } d \sigma
\end{equation}
I’m a fan of the classics, so Euler’s identity is a favorite of mine. $e^{i\pi} + 1 = 0$. Simple, elegant, and impactful!
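The identity can even be verified in floating point, up to rounding; a one-liner in Python:

```python
import cmath
import math

# e^{i*pi} + 1 should be 0, up to floating-point rounding (~1e-16).
value = cmath.exp(1j * math.pi) + 1
```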
My favorite mathematical formula is $ z_{n+1} = z_n^2 + c$ – the iteration that generates the Mandelbrot set. I like the nature that this equation graphs and how it can be related to a logistic map.
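A minimal Python sketch of the membership test behind those pictures (the iteration cap is an arbitrary choice; $|z| > 2$ guarantees escape):

```python
def in_mandelbrot(c, max_iter=100):
    # c is in the Mandelbrot set if z_{n+1} = z_n^2 + c, starting from z = 0,
    # stays bounded; once |z| > 2 the orbit is guaranteed to diverge.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

inside = in_mandelbrot(0j)        # the origin is in the set
outside = in_mandelbrot(1 + 0j)   # c = 1 diverges: 0, 1, 2, 5, 26, ...
```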
Equation for sphere at origin: $$x^2 + y^2 + z^2 = r^2$$ Clean equation, and spheres are a fundamental object in a number of different fields
My favorite is Markov’s inequality
$$
P(X \geq a) \leq \frac{E[X]}{a}
$$
It’s great because we don’t need to know the distribution of the random variable, we just need its expected value (and that the random variable is non-negative) to get some useful information.
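An empirical illustration in Python (the dice example is my own; $X$ is the sum of two dice, so $E[X] = 7$):

```python
import random

random.seed(1)

# Non-negative random variable: X = sum of two fair dice, E[X] = 7.
samples = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000)]
a = 10
empirical = sum(x >= a for x in samples) / len(samples)  # true P(X >= 10) = 1/6
markov_bound = 7 / a                                     # E[X]/a = 0.7
```

The bound is loose here (0.7 versus the true 1/6), which is typical: Markov trades tightness for needing almost no information about the distribution.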
I really like the length contraction formula, simple but so cool:
$$ L = L_0\sqrt{1- \frac{v^2}{c^2}}$$
L is the observed length based on the relative speed the observer is moving and the speed of light. It has a simple derivation but profound implications.
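A small Python sketch of the formula (the rod length and speed are made-up example values):

```python
import math

def contracted_length(L0, v, c=299_792_458.0):
    # L = L0 * sqrt(1 - v^2/c^2)
    return L0 * math.sqrt(1 - v**2 / c**2)

# At 60% of light speed, sqrt(1 - 0.36) = 0.8, so a 10 m rod
# appears 8 m long to the observer.
L = contracted_length(10.0, 0.6 * 299_792_458.0)
```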
My favorite formula is the uncertainty principle: $\Delta x \, \Delta p \geq \frac{h}{4\pi}$
This formula places fundamental limits on what we can know about the state of a particle and alters our understanding of the determinacy we find in classical mechanics.
Since we haven’t quite defined what a “formula” should be, I will just take it to be any first-order sentence in some theory in mathematics. Consider the following formula (it’s really a statement): $$Con(T)\rightarrow T\nvdash Con(T)$$
Yep, it’s Gödel’s second incompleteness theorem: a consistent theory $T$ cannot prove its own consistency.
My favorite formula in mathematics is the Gauss-Bonnet Theorem, which is a deep but fundamental result that connects differential geometry and topology. It relates the curvature of a surface to its topological characteristics, specifically its Euler characteristic. The formula is often expressed as:
\[
\int_{M} K \, dA + \int_{\partial M} k_g \, ds = 2\pi \chi(M)
\]
Here:
– \( \int_{M} K \, dA \) represents the integral of the Gaussian curvature \( K \) over the surface \( M \).
– \( \int_{\partial M} k_g \, ds \) is the integral of the geodesic curvature \( k_g \) along the boundary \( \partial M \) of \( M \).
– \( \chi(M) \) is the Euler characteristic of the surface \( M \).
This theorem elegantly bridges differential geometry and topology, showing how local geometric properties (curvature) can inform global topological properties (Euler characteristic). It has far-reaching implications in various fields of mathematics and theoretical physics.
(In fact, a simpler version of this equation appeared in the first lecture of this class.)
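There is also a discrete analogue (Descartes’ theorem on angle defects, which I’m bringing in here as an illustration, not part of the original statement): for a closed polyhedral surface, the total angle defect over all vertices equals $2\pi\chi$. A quick Python check for the cube:

```python
import math

# Discrete Gauss-Bonnet: the total angle defect (2*pi minus the sum of face
# angles meeting at each vertex) over a closed polyhedral surface is 2*pi*chi.
# On a cube, three right angles meet at each of the 8 vertices.
defect_per_vertex = 2 * math.pi - 3 * (math.pi / 2)
total_defect = 8 * defect_per_vertex

chi = 2  # Euler characteristic of the sphere: V - E + F = 8 - 12 + 6
expected = 2 * math.pi * chi
```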
One of my favorite mathematical formulas is the divergence theorem:
\[ \int_V(\nabla\cdot \mathbf{F})\,dV = \int_{\partial V} \mathbf{F}\cdot \,d\mathbf{S} \]
Which states that if we have a region $V$ in space with some boundary $\partial V$, then the volume integral of the divergence of $\mathbf{F}$ over $V$ is the surface integral of $\mathbf{F}$ over the boundary of $V$.
This also gives rise to Gauss’s Law if we apply it to the electric field.
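A numerical check in Python for a simple case of my own choosing: $\mathbf{F} = (x^2, y^2, z^2)$ on the unit cube, where $\nabla\cdot\mathbf{F} = 2x+2y+2z$ integrates to $3$, and the flux is $0$ on the three faces through the origin and $1$ on each opposite face, also totaling $3$:

```python
def divF(x, y, z):
    # F = (x^2, y^2, z^2)  ->  div F = 2x + 2y + 2z
    return 2 * x + 2 * y + 2 * z

# Midpoint-rule volume integral of div F over the unit cube [0,1]^3.
n = 20
h = 1.0 / n
vol = sum(divF((i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h) * h**3
          for i in range(n) for j in range(n) for k in range(n))

# Flux through the boundary: F.n is 0 on the faces at x=0, y=0, z=0 and
# equals 1 on each of the three opposite unit-area faces.
flux = 3.0
```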
One of my favorite formulas is the Singular Value Decomposition (SVD):
Any matrix $A_{m \times n}$ can be decomposed and written as:
\[ A_{m \times n} = U_{m \times m} \Sigma_{m \times n} V^T_{n \times n} \] where $U$ is an orthogonal matrix whose columns are the eigenvectors of $AA^T$, $V$ is an orthogonal matrix whose columns are the eigenvectors of $A^TA$, and $\Sigma$ is a diagonal matrix containing the “singular values”.
It is a very useful decomposition across various domains and applications, extending from robotics, machine learning, computer vision, astrodynamics, and signal processing to quantum information and beyond.
A very simple formula that I like is Euler’s polyhedron formula, which states that for any convex polyhedron,
\[ V - E + F = 2 \]
where $V$, $E$, and $F$ are respectively the numbers of vertices, edges, and faces in the polyhedron.
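A quick Python check across a few convex polyhedra (counts taken from the standard Platonic solids):

```python
# V - E + F should be 2 for every convex polyhedron.
polyhedra = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "icosahedron": (12, 30, 20),
}
characteristics = {name: V - E + F for name, (V, E, F) in polyhedra.items()}
```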
My favorite formula would be the Seifert-Van Kampen Theorem, used to calculate the fundamental group of topological spaces.
Suppose we have a topological space $X$ that can be expressed as the union of two open and path connected spaces $A$ and $B$. If $A \cap B$ is nonempty and path connected, then $\pi_1(X) \cong \pi_1(A) \ast_{\pi_1(A \cap B)} \pi_1(B)$. In words, the fundamental group of $X$ is isomorphic to the free product of $A$ and $B$’s fundamental groups, amalgamated over the fundamental group of $A \cap B$.
The Seifert-Van Kampen Theorem thus allows us to calculate the fundamental group of $X$ if we know the fundamental groups of $A$, $B$, and $A \cap B$. In many cases, such as with cell complexes, the fundamental groups of $A$, $B$, and $A \cap B$ are easy to calculate on their own. The theorem can also be extended to work with a union of more than 2 groups.
I really like the recursive rendering equation, which is used to compute the light emitted from a point in raytraced rendering techniques. While it looks very long and complex, it breaks down into several small pieces that each describe an intuitive physical phenomenon. It is as follows:
$$L_r(x, \omega_o) = L_e(x, \omega_o) +\int_{\Omega}L_r(x, \omega_i)f_r(x, \omega_o\rightarrow\omega_i) \cos\theta d\omega_i$$
Where:
– $\omega_o$, $\omega_i$ represent outgoing and incoming rays
– $L_e$ is the emitted light, which is relevant if the object is emissive like a lightbulb or LED
– $f_r$ is the BRDF, which encapsulates material properties like shininess and color
– $\int_{\Omega}$ is a hemispherical integral over all incoming ray directions
Evaluating this function leads to infinite recursion, where computing $L_r$ at one location requires knowledge of the incoming light from infinitely many incoming directions, invoking infinitely many recursive calls. As such, it’s usually implemented using Monte Carlo techniques.
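The Monte Carlo machinery can be illustrated on just the hemispherical-integral piece. As a simplified stand-in (my own toy setup, not a full path tracer): $\int_{\Omega}\cos\theta\, d\omega = \pi$, estimated by sampling directions uniformly over the hemisphere and dividing by the sampling pdf, exactly the estimator shape a renderer uses:

```python
import math
import random

random.seed(0)

# Monte Carlo estimate of the hemispherical integral of cos(theta),
# which is analytically pi. Sample a direction, divide by its pdf, average.
N = 200_000
total = 0.0
for _ in range(N):
    cos_theta = random.random()   # uniform over the hemisphere: cos(theta) ~ U[0,1]
    pdf = 1 / (2 * math.pi)       # uniform density over 2*pi steradians
    total += cos_theta / pdf
estimate = total / N              # should approach pi as N grows
```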
I think that the Euler-Lagrange equation is pretty cool. It tells you how to find stationary points of the integral of a function $f\left(t, y, \dot y:= \frac{dy}{dt}\right)$ in any system of generalized coordinates, and it forms the basis of the calculus of variations:
\[
\frac{d}{dt}\left(\frac{\partial f}{\partial \dot{y}}\right) - \frac{\partial f}{\partial y} = 0
\]
It’s hard to choose a favorite formula, but a neat one I recently learned about is the Lagrangian for (simple) graphs
$$L_G=\max_{\substack{x_1,\dots,x_n\geq0 \\ x_1+\dots+x_n=1}}\sum_{i\sim j}x_ix_j$$
There’s a neat way to prove Turán’s theorem using the Lagrangian. Each $x_i$ can be interpreted as the fraction of total vertices in the $i$th independent set of a complete multipartite graph, so the product $x_ix_j$ “measures” the number of edges between the $i$th and $j$th independent sets. Then, the Lagrangian (for $n$-vertex graphs that do not contain any $K_{r+1}$) obtains its maximum when the $x_i$’s reflect the Turán graph, $T(n,r)$. Generally, it is a useful formula when working with extremal graph problems.
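A small Python sketch for the triangle $K_3$ (the grid search is my own crude maximizer): the maximum of $x_1x_2 + x_1x_3 + x_2x_3$ on the simplex is $1/3$, attained at the balanced point $(1/3, 1/3, 1/3)$, mirroring the balanced parts of a Turán graph.

```python
import itertools

# Lagrangian of K3: maximize x1*x2 + x1*x3 + x2*x3 over the simplex
# x_i >= 0, x1 + x2 + x3 = 1, via a grid search (n divisible by 3 so the
# balanced point lands exactly on the grid).
n = 60
best = 0.0
for i, j in itertools.product(range(n + 1), repeat=2):
    if i + j > n:
        continue
    x1, x2 = i / n, j / n
    x3 = 1 - x1 - x2
    best = max(best, x1 * x2 + x1 * x3 + x2 * x3)
```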
I will choose the formula for transforming probability distributions. Denote a random variable $X$ with density $P_X(x)$ and an invertible function $f(\cdot)$ that transforms $X$ into $Y=f(X)$; then the density of $Y$ is given by $P_Y(f(x))\, |J_f| = P_X(x)$, where $|J_f|$ is the absolute value of the Jacobian determinant of $f$. This formula is intuitive in that transforming a random variable induces a “volume” change in its distribution. When $f$ is an instantaneous change such as $f(x,t)=x+g(x, t)\,dt$, we have $J_f=I+J_g\, dt$, which makes $\log|J_f|=\mathrm{tr}(J_g)\,dt$ to first order. This means that if we transform the random variable by infinitesimally small steps, we only need to track the trace of the transform: $\log P_Y(f(x)) + \int \mathrm{tr}(J_g)\, dt = \log P_X(x)$.
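A quick one-dimensional check of the change-of-variables formula in Python (the affine map is my own example: if $X \sim N(0,1)$ and $Y = 2X + 1$, then $Y \sim N(1, 2)$ and $|J_f| = 2$):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Density of N(mu, sigma^2).
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# X ~ N(0, 1), Y = f(X) = 2X + 1, so Y ~ N(1, 2) and |Jf| = 2.
# The formula P_Y(f(x)) * |Jf| = P_X(x) should hold at every x.
x = 0.7
lhs = normal_pdf(2 * x + 1, mu=1.0, sigma=2.0) * 2
rhs = normal_pdf(x)
```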
My favorite formula is from my freshman discrete math course: For any integers n and k, the number of non-negative integer solutions to the equation $x_{1} + x_{2} + x_{3} + \dots + x_{k} = n$ is $\binom{n + k - 1}{k - 1}$. I find the formula useful in some of my CS classes which involve combinatorics and analysis of algorithms. Plus, I really liked the way that the formula was explained in class using the dots and sticks example.
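The formula is easy to verify against brute-force enumeration; here is a Python sketch (the helper name and the values $n=7$, $k=3$ are my own choices):

```python
import itertools
import math

def count_solutions_bruteforce(n, k):
    # Count non-negative integer solutions to x1 + ... + xk = n directly.
    return sum(1 for xs in itertools.product(range(n + 1), repeat=k)
               if sum(xs) == n)

n, k = 7, 3
brute = count_solutions_bruteforce(n, k)
formula = math.comb(n + k - 1, k - 1)   # stars and bars: C(9, 2) = 36
```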
My favorite formula is the Catalan number: $$C_n = \frac{1}{n+1} \cdot {2n\choose n} = \prod_{k=2}^{n} \frac{n+k}{k}$$ for $n \ge 0$.
This formula is widely used in many combinatorial problems.
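Both expressions are easy to check against each other in Python (the function names are my own):

```python
import math

def catalan_binomial(n):
    # C_n = (1/(n+1)) * binom(2n, n); the division is always exact.
    return math.comb(2 * n, n) // (n + 1)

def catalan_product(n):
    # C_n = product over k = 2..n of (n + k)/k (empty product = 1 for n <= 1).
    result = 1.0
    for k in range(2, n + 1):
        result *= (n + k) / k
    return round(result)

values = [catalan_binomial(n) for n in range(8)]
```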
I like the Schrödinger equation $i\hbar \frac{\partial}{\partial t} \psi = \hat H \psi$ because it’s the foundation of quantum mechanics.