Walk on Spheres in One Weekend
Welcome to Walk on Spheres in One Weekend. Walk on spheres (WoS) is a grid-free method for solving fundamental partial differential equations (PDEs), like the Laplace or Poisson equation, using the Monte Carlo method. In this tutorial you will implement a walk on spheres (WoS) solver “from scratch,” starting with nothing more than basic math and vector operations. We won't use any external libraries or dependencies—so that you really understand 100% of what's going on, and how these algorithms work. Throughout the tutorial we'll provide little code snippets for you to fill in, as well as reference solutions you can check your answers against. By the end you should have a fully functional solver for some basic equations—and a clear sense of how to extend the tutorial code into something that can be used to solve real problems.
Contents

Note: throughout the text, the “two feet” icon indicates that a footnote is available. After all, we are exploring the walk on spheres algorithm! Place your cursor over the icon to read the footnote.

The equations in this tutorial may not display properly in Safari or other browsers. If you run into any issues, please try a different browser, like Chrome or Firefox.

Overview
What Is This All About?

The purpose of walk on spheres (WoS) algorithms is to compute the solution to basic PDEs encountered throughout physics, mechanics, geometry, etc. In particular, current walk on spheres solvers handle linear elliptic equations (Laplace equation, screened Poisson equation, biharmonic equation, etc.) and parabolic equations (like the heat equation), as well as equations that build on these basic equations (such as Navier-Stokes equations for fluid flow, or coupled diffusion and thermal radiation equations for heat transfer).

Traditionally, such PDEs are solved using finite element methods (FEM), but WoS has some unique features that give it a leg up on FEM solvers in many scenarios. In fact, algorithmically, WoS is much closer in spirit to Monte Carlo ray tracing algorithms used for photorealistic image synthesis, than it is to FEM. It therefore inherits a lot of the attractive properties of Monte Carlo rendering (and Monte Carlo algorithms in general). Quoting Sawhney & Crane (2020):

You can find an overview of walk on spheres, discussing its motivation and some of its unique features, at the beginning of this video.

Recent years have seen fairly rapid growth of the WoS method, as well as some excellent tutorials, nicely collected on the Resources on Monte Carlo Geometry Processing page. However, much of this literature focuses on the mathematical derivation of WoS methods, and numerical comparison to alternative methods. The purpose of this tutorial is to actually teach you how to code it up yourself! So, we'll focus less on justifying why algorithms work (which you can read about plenty elsewhere) and instead just talk through the nuts and bolts of a practical implementation.

In particular, for each section we'll sketch out a small piece of the algorithm, and then provide an interactive environment where you can try coding things up yourself. If you run into trouble, there will be a button that lets you “look in the back of the book” for the answer, i.e., which toggles between your code and the reference solution code. The coding exercises aren't too long, and you may be pleasantly surprised how quickly you reach a full, working solver for some basic PDEs (especially if you've ever tried coding up a mesh-based solver from scratch—including all the numerical linear algebra!). By the time you're done with the tutorial, you'll have a full, working PDE solver—all in one weekend. From there you should be able to extend the solver to richer problems (different boundary representations, boundary conditions, etc.), and we'll provide some pointers on how to get started.

Running Example

There are many situations where you might need a PDE solver, ranging from physical, chemical, mechanical, and biological problems, to problems in visualization or design. In this tutorial, just to have a concrete goal, we'll analyze a model of a cylinder head gasket. In an internal combustion engine (like the one you find in a gas-powered car), the head gasket basically keeps anything bad from leaking into the chambers where combustion occurs. However, if this gasket is subjected to too much thermal or mechanical stress (i.e., if it gets too hot, or bends too much), performance can degrade, and leaks can occur.

Since the head gasket is pretty thin relative to its width and height, it is often modeled as a 2D domain (rather than a full 3D model), making it a perfect test case for our simple 2D WoS solvers. This domain can get pretty complicated: it has not only a few large bore holes for the pistons to pass through, but also many small holes corresponding to coolant channels, oil channels, and oil return channels, as well as an outer boundary that has to conform to the geometry of the cylinder head and the cylinder block. In a finite element method, you would have to first generate a triangular or quadrilateral mesh of this domain, using separate meshing software, being careful to make elements sufficiently refined to resolve the solution, without leading to excessive computation. (There are also some tricky issues like re-entrant corners or other geometric singularities, which can confound even high-quality finite element meshes.) In contrast, our WoS solver will be able to work directly with the original boundary representation, without approximating curves or tessellating the domain.

On this domain, we'll consider two different PDEs, which help to analyze different phenomena:

In both of these problems, we can also use a gradient estimator to help search for these hot spots—without having to solve the PDE exhaustively at every point in the domain (as with finite element analysis).

More specifically, in both problems we will model the gasket as a domain \(\Omega \subset \mathbb{R}^2\) with boundary \(\partial\Omega\). For thermal conduction we will solve a Laplace equation with Dirichlet boundary conditions, namely, \[ \tag{Laplace equation} \begin{array}{rl} \Delta u = 0 & \text{on}\ \Omega \\ u = g & \text{on}\ \partial\Omega, \end{array} \] for the interior temperature \(u\), where the function \(g: \partial\Omega \to \mathbb{R}\) prescribes the temperature at the boundary. For elastic deformation we will solve the biharmonic equation \[ \tag{biharmonic equation} \begin{array}{rl} \Delta^2 u = 0 & \text{on}\ \Omega \\ u = g & \text{on}\ \partial\Omega \\ \tfrac{\partial^2 u}{\partial n^2} = 0 & \text{on}\ \partial\Omega, \end{array} \] where \(u\) describes the vertical displacement, \(g: \partial\Omega \to \mathbb{R}\) describes the vertical displacement at the boundary, and the second-order conditions specify that there is no bending/curvature along the boundary (here \(n\) is the outward unit boundary normal).

Beyond Our Running Example

Note by the way that if you're interested in playing with other domain geometries, from other areas of science and engineering, you can use our DXF to GLSL conversion tool to extract boundary geometry in a format compatible with our tutorial. Note, however, that the tutorial code works only with 2D domains bounded by line segments, circles, and circular arcs. If you want to work with other shapes, you'll have to implement them yourself (which is a great way to enrich your understanding!).

Code Framework

The code examples in this tutorial are written as pixel shaders in the OpenGL Shading Language (GLSL). For each example, we provide an interactive coding environment where you can edit the code and try it out in the browser itself.

This use of GLSL is a bit unorthodox, in the sense that a full-featured WoS solver (like Zombie) would more typically be written in a “real” language like C++. However, there are a couple good reasons for using GLSL to start learning about WoS, namely (i) it's easy to run the examples in-browser, without having to install anything, and (ii) GLSL can take advantage of the high-performance parallel nature of your GPU, which is needed to accumulate a large number of samples.

There are plenty of great tutorials on GLSL out there, but you won't need a super deep knowledge of the language for our WoS tutorial. Basically if you're familiar with any “C-like” programming language, you should be able to follow along. For the code we'll write, you may want to familiarize yourself with some basic GLSL types, like `float`, which represents numbers, and `vec2`, which represents 2D vectors—but honestly, a lot of this usage is pretty self-explanatory.

The most sophisticated data structure we'll use is a `struct`, which is just a basic collection of data. For instance, we might describe a circle via the struct

```
struct Circle {
   vec2 center;  // (x,y) coordinates of the circle center
   float radius; // number encoding the radius
};
```

Once this `Circle` type has been defined, we can declare it and access its data like this:

```
Circle C;
float r = C.radius;
float area = M_PI * r*r; // compute the area A = πr²
...
```

Note that GLSL doesn't define some basic numerical constants; we'll assume throughout that we've made the following definitions:

```
const float FLT_MAX = 3.402823466e+38;     // maximum single precision floating point value
const float M_PI = 3.14159265358979323846; // value of π in single precision
```

Computing Over Pixels

The more unique feature of GLSL pixel shaders is how they perform computation. Rather than explicitly loop over an array of grid cells, we just write a single function, `mainImage`, that gets executed independently for every pixel in an image, in parallel, by the graphics processing unit (GPU). This setup is well-suited to basic walk on spheres algorithms, which likewise are able to estimate the solution to a PDE at a collection of points (or even just a single point of interest) without needing to communicate information among grid cells or mesh elements.

Note that it also means we obtain one solution value per pixel in our grid—about half a million values for an \(800 \times 600\) image. This is way more solution values than are often computed using traditional methods like FEM—and we're doing it essentially in real time. So, that's a good thing to keep in mind when comparing the performance of WoS with that of traditional solvers.

On the flip side, there are some useful extensions to WoS, like boundary value caching or mean value caching, where it is helpful to share information between different evaluation points. This is a place where it starts to become useful to build a solver in a general-purpose programming language (like C++), rather than our simple tutorial-oriented GLSL setup.

Domain Coordinate Systems

Of course, in order for this parallel computation to be useful, it has to get applied to slightly different inputs for each pixel. In particular, the `mainImage` function has an input `vec2 fragCoord`, which gives the coordinates \(\vec{x}_{\text{pixel}} = (x_{\text{pixel}},y_{\text{pixel}})\) of the pixel being processed. These coordinates are in the range \([0,w]\) and \([0,h]\), respectively, where \(w\) and \(h\) are the (integer) width and height of the domain, in pixels.

Often, however, it's more convenient to express our calculations in terms of a coordinate system that does not directly depend on the window resolution. So, we will do a little calculation at the beginning of each program to compute the location \(\vec{x} \in \mathbb{R}^2\) of our pixel in a coordinate system where (i) the origin \((0,0)\) is at the center of the image, and (ii) the top and bottom of the image are at height \(y=\pm 1\). In particular, we let

\[ \vec{x} := 2\frac{(\vec{x}_{\text{pixel}} - (w/2,h/2))}{h} = \frac{2\vec{x}_{\text{pixel}} - (w,h)}{h}. \]

Teasing this expression apart, we're first expressing the pixel location \(\vec{x}_{\text{pixel}}\) relative to the center of the image, \((w/2,h/2)\). We're then dividing this new location by the image height, putting all \(y\)-coordinates in the range \([-1/2,1/2]\). Finally, we multiply by two so that all \(y\)-coordinates are in the range \([-1,1]\). The \(x\)-coordinates likewise range between \(-w/h\) and \(+w/h\). Note that we apply the same scaling to both \(x\)- and \(y\)-coordinates, so that each pixel represents a small square region in the domain (rather than a small rectangle).

From here we can adjust our domain size by applying other transformations to \(\vec{x}\). For instance, if we wanted to cover a vertical range of [-3,3] rather than [-1,1], we could just scale our coordinates up by a factor of three. The shader below provides a minimal example of how to set up such a coordinate system.

Here, and throughout all our shaders, the constants `iResolution.x` and `iResolution.y` provide the width \(w\) and height \(h\), respectively. The `mainImage` function also provides the pixel coordinates \(\vec{x}_{\text{pixel}}\), in the variable `fragCoord`. Here, to visualize the coordinate system, we use the fractional part of the \(x\) and \(y\) coordinates to define the red and green color channels, respectively. This way, each box represents a square of the integer grid, and we can see that there are three boxes above/below the origin, as intended.
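In case it's helpful to see everything in one place, here is a minimal sketch of such a shader, along the lines of the interactive example above (the exact coloring is just one illustrative choice):

```
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
   float w = iResolution.x; // image width, in pixels
   float h = iResolution.y; // image height, in pixels

   // express the pixel location in a coordinate system where the
   // origin is at the image center and y runs from -1 to +1
   vec2 x = (2.*fragCoord - vec2(w,h))/h;

   // optionally scale up, e.g., so that y runs from -3 to +3
   x *= 3.;

   // visualize the integer grid of this coordinate system, using the
   // fractional parts of x and y as the red and green channels
   fragColor = vec4( fract(x.x), fract(x.y), 0., 1. );
}
```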

Shader Environment

The shader environment above, which we use for examples throughout the tutorial, lets you edit a GLSL fragment shader on the left, and visualize the output on the right. Changes don't show up immediately—you must either press the `Compile` button, or use the keyboard shortcut `Ctrl+Enter` (see below for a full list of shortcuts). For some shaders there is a temporal component (e.g., moving geometry), which can be triggered using the `Play/Pause` button. Importantly, for all our Monte Carlo integration examples, there is an `Accumulate` button that averages computed estimates over time.

Any errors in your shader code will show up in red, and will give you the line number on which the error occurs. For more advanced editing, you can enable Vim keybindings.

Controls & Shortcuts
Compile (`Ctrl+Enter`): Recompiles the shader with current editor content.
Play/Pause (`Ctrl+P`): Starts/stops continuous rendering.
Reset (`Ctrl+R`): Clears accumulation and resets time.
Accumulate (`Ctrl+/`): Toggles frame accumulation mode.
Toggle Solution (`Ctrl+T`): Toggles between user code and reference solution (when available).
Copy (`Ctrl+K`): Copies shader source to clipboard.
Save (`Ctrl+S`): Saves shader source to file.
Vim Mode (`Ctrl+M`): Enables/disables Vim keybindings in the editor.
Mathematical Notation
Math vs. Code

Throughout we will describe algorithms both in terms of GLSL code and data structures, and in terms of standard mathematical symbols and notation. Especially when we work out tricky formulas (such as those for geometric queries), it can be a lot easier on our very limited brains to only have to juggle a letter or two for each point, vector, and angle—versus a long variable name. However, in translating between math and code it can be helpful to understand how these quantities correspond. In particular we have the following correspondence:

| object | mathematical notation | GLSL |
|---|---|---|
| real number | \(a \in \mathbb{R}\) | `float a` |
| two-dimensional vector | \(\vec{x} \in \mathbb{R}^2\) | `vec2 x` |
| three-dimensional vector | \(\vec{x} \in \mathbb{R}^3\) | `vec3 x` |
| vector component | \(\vec{x}_i\) | `x[i]` |
| dot product | \(\vec{u} \cdot \vec{v} \ \text{or}\ \langle \vec{u}, \vec{v} \rangle\) | `dot(u,v)` |
| cross product | \(\vec{u} \times \vec{v}\) | `cross(u,v)` |
| length | \(\vert\vec{v}\vert\) | `length(v)` |
| unit vector | \(\widehat{\vec{v}}\) | `normalize(v)` |

As motivated in our appendix on computing signed angles, we define the cross product of 2D vectors \(\vec{u},\vec{v} \in \mathbb{R}^2\) as \[ \vec{u} \times \vec{v} := \vec{u}_1 \vec{v}_2 - \vec{u}_2\vec{v}_1, \] which gives the nonzero component of the 3D cross product \((\vec{u}_1,\vec{u}_2,0) \times (\vec{v}_1,\vec{v}_2,0)\), i.e., the component “sticking out of the plane.”
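One caveat when translating this to code: GLSL's built-in `cross` is defined only for `vec3`, so for 2D vectors we'll need to define a little helper ourselves (the name `cross2` here is just our choice):

```
// 2D cross product u1*v2 - u2*v1
// (GLSL's built-in cross() works only on vec3)
float cross2( vec2 u, vec2 v )
{
   return u.x*v.y - u.y*v.x;
}
```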

Spheres vs. Balls

Given that we are implementing the walk on spheres algorithm, it should come as no surprise that we will be talking a lot about spheres—and balls. If you're confused about the difference, it's totally excusable: in natural language (well, at least in English!) the word “ball” is used interchangeably to mean something that might be either solid or hollow. For instance, a meatball is solid, whereas a soccer ball is hollow:

In mathematics, however, these two words have a distinct, standardized meaning:

More precisely, the unit \(n\)-dimensional ball is the set of all points in \(\mathbb{R}^n\) a distance no greater than 1 from the origin: \[ \{ \vec{x} \in \mathbb{R}^n : |\vec{x}| \leq 1 \}. \] The corresponding unit sphere is the set of all points a distance exactly equal to 1 from the origin: \[ \{ \vec{x} \in \mathbb{R}^n : |\vec{x}| = 1 \}. \] More generally, we can consider a ball centered at \(\vec{c}\) with a radius \(r\): \[ B(\vec{c},r) := \{ \vec{x} \in \mathbb{R}^n : |\vec{x}-\vec{c}| \leq r \}. \] Typically we'll drop the radius \(r\), since we often just want to talk about “some” ball around \(\vec{c}\). Finally, we express a sphere as the boundary of a ball: \[ \lowlight{\partial B(\vec{c},r)}{sphere with center c, radius r} \] Here \(\partial\) is not a derivative, but just means “take the boundary of this set.” Much as the sugar coating on an M&M is just the “shell” of its chocolate center, a sphere can be thought of as the shell, or boundary, of the solid ball.

Turn Up the Volume

In general, we'll use \(|X|\) to denote the volume of any set \(X\), or really the “interesting volume” (formally, the \(k\)-dimensional Hausdorff measure, where \(k\) is the intrinsic dimension of the set). For instance, in 3D a ball of positive radius \(r > 0\) has some substantial volume (since it is “full of chocolate”), and so \(|B|\) will be a nonzero value. But you might say that a sphere in 3D has zero volume, since it's “infinitely thin.” However, a sphere of positive radius in 3D still has a nonzero area—and it is of course this quantity we're interested in when we write \(|\partial B|\). So perhaps we should say that \(|X|\) gives the “generalized volume,” possibly referring to length, area, volume, hypervolume, etc., depending on the dimension. But after a while you get used to this idea, and simply call it “volume.”

Circles vs. Disks

Something math-types also often forget to say out loud is that, for \(n=2\), we for some reason have special words for “sphere” and “ball”: in 2D, a sphere is called a circle, and a ball is called a disk.

Since the walk on spheres algorithm is essentially the same in any dimension, we will often use the words “sphere” and “ball”, even though we draw pictures of circles and disks. In fact, that's exactly what we did in the picture above! Well, you have at least been warned…

Problem Description
Domain Geometry

A beautiful thing about solving PDEs using walk on spheres is that we don’t have to make a mesh or grid of the domain. Instead, we can use any mix of primitives (circles, rectangles, line segments, etc.) to describe our domain geometry, as long as they support a few basic geometric queries. For this tutorial, we’ll use circles, line segments, and circular arcs to define our domain geometry.

To fully specify a PDE, we must also somehow specify our boundary conditions. A simple way to define the boundary data might be to just store a single value per primitive. In general, however, the boundary data could be much more general—for instance, we could store an integer ID `boundaryID` that selects an arbitrary function \(g(\vec{x})\) to evaluate at the specific boundary point \(\vec{x}\) under consideration. Alternatively, we could define a single, global function \(g(\vec{x})\) that can be evaluated at any point \(\vec{x}\) in space, but only actually query this function at boundary points \(\vec{x} \in \partial\Omega\).

We'll come back to the question of how to model boundary conditions later on; for the moment, our scene data structures will just describe the geometry of each primitive. For instance, each circle stores the usual center and radius:

```{4}
struct Circle {
   vec2 center;
   float radius; // must be positive
   // ...plus some boundary data...
};
```

Likewise, each line segment stores its two endpoints, plus some boundary data:

```{4}
struct Segment {
   vec2 start;
   vec2 end;
   // ...plus some boundary data...
};
```

Finally, each circular arc is specified as a center, radius, and start/end angle:

```{6}
struct Arc {
   vec2 center;
   float radius;     // must be positive
   float startAngle; // in radians
   float endAngle;   // in radians; must be greater than startAngle
   // ...plus some boundary data...
};
```

Note that it's important that the start and end angles be ordered, since otherwise the definition of the arc is ambiguous. For instance, if we were just given two angles \(\pi/3\) and \(-\pi/3\) in no particular order, should we draw the smaller arc (passing through \((1,0)\)) or the larger one (passing through \((-1,0)\))? With our `Arc` data structure above, we know our arc always goes from the start angle to the end angle, in the counter-clockwise direction.

Given this setup, our problem domain is then described by just a list of circles, a list of segments, and a list of arcs:

```
// Initialize a list of circles
const int nCircles = 3;
Circle circles[nCircles] = Circle[](
   // center, radius
   Circle( vec2( 0.0,  0.0), 1.00 ),
   Circle( vec2( 1.0,  1.0), 0.50 ),
   Circle( vec2(-1.0, -1.0), 0.75 )
);

// Initialize a list of segments
const int nSegments = 2;
Segment segments[nSegments] = Segment[](
   // start, end
   Segment( vec2(0.0, 0.0), vec2( 0.0,  1.0) ),
   Segment( vec2(1.0, 1.0), vec2(-1.0, -1.0) )
);

// Initialize a list of arcs
const int nArcs = 2;
Arc arcs[nArcs] = Arc[](
   // center, radius, start, end
   Arc( vec2(0.1, 0.2), 0.5, 0.1, 0.4 ),
   Arc( vec2(0.3, 1.4), 0.6, 0.7, 1.9 )
);
```

In order to make use of this description, we'll need to develop some basic geometric queries.

Geometric Queries

In lieu of a global mesh or grid, the only thing a walk on spheres algorithm needs to be able to do is answer geometric queries about the boundary geometry. The most basic information we'll need for any algorithm is the distance between a query point \(x\) and the closest point \(\overline{x}\) on the domain boundary \(\partial\Omega\). In fact, to get a correct solution to our PDE, we don't even need the exact distance: it's fine to use any “conservative” estimate of the distance, i.e., any value less than or equal to the true distance. Later, more sophisticated algorithms may require additional geometric information, such as the actual location of the closest point, the point where a given ray intersects the boundary, or the closest silhouette point. But for now, the distance to the boundary will be enough.

Here we derive algorithms for computing the exact distance to our three basic shapes, `Circle`, `Segment`, and `Arc`; the distance to more complicated composite shapes, such as our cylinder head gasket, can then be computed by simply taking a minimum over the individual components. Note that similar formulas are readily available for the signed distance function (SDF) of nearly any type of basic shape. Inigo Quilez provides fairly comprehensive lists in both 2D and 3D, including some very nice tutorials, and GLSL code that can easily be integrated with the code from this tutorial.

Distance to Circles
The distance to a circle is the easiest one to start with. Here already we start to see the benefits of the walk on spheres approach: rather than tessellating circles into line segments (or in 3D: spheres into polygons), we can solve a problem on a domain defined by exact circular boundaries. Such boundaries come up surprisingly often in scientific and engineering applications, modeling for instance circular (or spherical) particles or pores. Remembering that in our domain description a circle is defined by a center \(\vec{c} \in \mathbb{R}^2\) and radius \(r > 0\), we know that the points in a circle are given by the set \[ C_{\vec{c},r} := \{ \vec{x} \in \mathbb{R}^2 : \|\vec{x}-\vec{c}\| = r \}, \] i.e., by all the points \(\vec{x}\) in the plane \(\mathbb{R}^2\) whose distance from the center (given by the norm of the difference between \(\vec{x}\) and the center \(\vec{c}\)) is equal to the radius. Any points where the quantity \(d := \|\vec{x}-\vec{c}\|\) is less than \(r\) are inside the circle; points where \(d\) is greater than \(r\) are outside the circle:
So, to get the distance to the circle (rather than its center), we can just measure how far this quantity \(d\) is from zero. In other words, we can just take the absolute value of \(d-r = \|\vec{x}-\vec{c}\| - r\). This idea translates very easily to code. Try filling in the method `distanceCircle` below, which should turn the screen from flat black into a nice visualization of equal-distance contours to a circle drawn in black. Hitting the `Play` button will move the circle around, so you can verify that your implementation works for more than just one circle. If you're struggling to get this working, or are not sure what the correct answer should look like, just hit the `Toggle Solution` button, which will show the reference solution and code.
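If you'd like to check your answer, one possible implementation (using the `Circle` struct defined earlier) is just a couple of lines:

```
// returns the distance from the query point x to the given circle
float distanceCircle( vec2 x, Circle circle )
{
   float d = length( x - circle.center ); // distance to the circle center
   return abs( d - circle.radius );       // distance to the circle itself
}
```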
Distance to Line Segments
Computing the distance to a line segment is a bit trickier, but again leads to a nice simple formula. The high-level strategy is that, given a query point, we first find the closest point on the infinite line through the segment's two endpoints, and then, if this point lies beyond either end of the segment, use the closer of the two endpoints instead. However, rather than computing three separate distances (line and two endpoints), we can roll everything into one simple calculation that avoids any conditional branching. The basic observation is that we can just “clamp” the closest point on the line to the closer of the two endpoints. This approach is based on a calculation by Inigo Quilez, who gives a more in-depth video tutorial on this formula, with some beautiful examples.

Looking again at our domain description, we see that a line segment is defined by a start point \(\vec{a} \in \mathbb{R}^2\) and an end point \(\vec{b} \in \mathbb{R}^2\). Given a query point \(\vec{x}\), we want to find the distance to the closest point \(\xbar\) on this segment:

First, let's find the signed distance \(h\) from \(\vec{a}\) to \(\xbar\), in the direction of \(\vec{b}\). Note that this distance can be negative, if \(\xbar\) is to the “left” of \(\vec{a}\) (relative to \(\vec{b}\)). We can compute \(h\) by projecting the vector \(\vec{x}-\vec{a}\) onto the unit vector in the direction \(\vec{b}-\vec{a}\), via a dot product: \[ h := \left\langle \vec{x}-\vec{a}, \frac{\vec{b}-\vec{a}}{|\vec{b}-\vec{a}|} \right\rangle. \] Dividing by the segment length \(|\vec{b}-\vec{a}|\) converts \(h\) into a value \(t\), such that \(t=0\) at \(\vec{a}\) and \(t=1\) at \(\vec{b}\): \[ t := h/|\vec{b}-\vec{a}| = \frac{\langle \vec{x}-\vec{a}, \vec{b}-\vec{a} \rangle}{|\vec{b}-\vec{a}|^2} = \frac{\langle \vec{x}-\vec{a}, \vec{b}-\vec{a} \rangle}{\langle\vec{b}-\vec{a}, \vec{b}-\vec{a}\rangle}. \] If \(t\) is less than zero, then the projection of \(\vec{x}\) onto the gray line is to the “left” of \(\vec{a}\), and hence we know the closest point in the red segment is the point \(\vec{a}\) itself. Likewise, if \(t\) is greater than one, the closest point is \(\vec{b}\). So, clamping \(t\) to the range \([0,1]\) tells us where the closest point is, in terms of \(\vec{a}\) and \(\vec{b}\): \[ \xbar = \vec{a} + \text{clamp}(t,0,1)(\vec{b}-\vec{a}). \] The final distance \(d\) we're looking for is then just the norm of the difference between our query point \(\vec{x}\) and this closest point \(\xbar\): \[ d = |\vec{x} - \xbar|. \] This formula leads to a concise and efficient routine for computing the distance to a segment:
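For instance, a direct translation of this formula into GLSL might look like the following (again matching our `Segment` struct):

```
// returns the distance from the query point x to the given segment
float distanceSegment( vec2 x, Segment segment )
{
   vec2 u = x - segment.start;           // vector from a to x
   vec2 v = segment.end - segment.start; // vector from a to b

   // parameter of the closest point, clamped to the segment
   float t = clamp( dot(u,v)/dot(v,v), 0., 1. );

   vec2 xbar = segment.start + t*v; // closest point on the segment
   return length( x - xbar );       // distance to the closest point
}
```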
Distance to Circular Arcs
Our approach to computing the distance to a circular arc combines the strategies we used for both a circle and a line segment: if the query point sits within the “wedge” spanned by the arc, the distance is just the distance to the full circle; otherwise, it's the distance to the closer of the two arc endpoints.
In more detail, suppose we have an arc with center \(\vec{c}\), radius \(r > 0\), start angle \(\theta_0\), and end angle \(\theta_1 > \theta_0\). The first thing we want to do is check if the given query point \(\vec{x}\) is contained in the “wedge” between the start and endpoint of the arc. To do this, we construct unit vectors \[ \begin{array}{rcl} \vec{u} &:=& (\cos\theta_0, \sin\theta_0), \\ \vec{v} &:=& (\cos\theta_1, \sin\theta_1), \\ \end{array} \] which point from the circle center toward the two endpoints \(\vec{a} = \vec{c} + r\vec{u}\) and \(\vec{b} = \vec{c} + r\vec{v}\), respectively. If \[ \vec{w} := \vec{x} - \vec{c} \] is likewise the vector from the circle center \(\vec{c}\) to the query point \(\vec{x}\), then we can compute the angle \(\phi\) from the first endpoint \(\vec{a}\) to the query point \(\vec{x}\) as \[ \phi := \text{atan2}( \vec{u} \times \vec{w}, \vec{u} \cdot \vec{w} ). \] (This formula is explained in the appendix on computing signed angles. Since \(\text{atan2}\) returns values in \((-\pi,\pi]\), for arcs spanning more than half the circle we should also map negative results \(\phi\) to \(\phi + 2\pi\).) The query point is inside the wedge as long as \(0 \leq \phi \leq \theta_1 - \theta_0\), in which case the distance is the same as the distance to a circle with center \(\vec{c}\) and radius \(r\). Otherwise, the distance is the distance to the closer endpoint \(\vec{a}\) or \(\vec{b}\). Putting it all together, we have \[ d = \begin{cases} ||\vec{x} - \vec{c}| - r|, & 0 \leq \phi \leq \theta_1 - \theta_0, \\ \min(|\vec{x}-\vec{a}|,|\vec{x}-\vec{b}|), & \text{otherwise}. \end{cases} \] Note that from here there are several ways we can accelerate our code. For instance, rather than storing `startAngle` and `endAngle` in the `Arc` data structure, we could store the corresponding unit vectors \(\vec{u}\) and \(\vec{v}\) directly—or perhaps those vectors times the circle radius, since \(\text{atan2}\) doesn't care about magnitude, and we need the scaled vectors to evaluate the points \(\vec{a} = \vec{c} + r\vec{u}\) and \(\vec{b} = \vec{c} + r\vec{v}\). It's also possible to develop formulas for arc distance that avoid branching, but to be useful these formulas need to be generalized to arbitrary start/end angles (at which point it's not clear whether they are any faster than our basic formula above). In general, premature optimization is the root of all evil, and for now we can continue happily with this “fast enough” routine.
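Putting the pieces above together, one possible `distanceArc` routine looks like this. Note that GLSL's two-argument `atan(y,x)` plays the role of \(\text{atan2}\), and that we map negative angles into \([0,2\pi)\) so that wide arcs are handled correctly:

```
// returns the distance from the query point x to the given arc
float distanceArc( vec2 x, Arc arc )
{
   // unit vector toward the arc start, and vector toward the query point
   vec2 u = vec2( cos(arc.startAngle), sin(arc.startAngle) );
   vec2 w = x - arc.center;

   // signed angle from u to w (GLSL's atan(y,x) is atan2), mapped to [0,2π)
   float phi = atan( u.x*w.y - u.y*w.x, dot(u,w) );
   if( phi < 0. ) phi += 2.*M_PI;

   if( phi <= arc.endAngle - arc.startAngle ) {
      // inside the wedge: same as the distance to the full circle
      return abs( length(w) - arc.radius );
   }

   // otherwise, take the distance to the closer endpoint
   vec2 a = arc.center + arc.radius*vec2( cos(arc.startAngle), sin(arc.startAngle) );
   vec2 b = arc.center + arc.radius*vec2( cos(arc.endAngle),   sin(arc.endAngle)   );
   return min( length(x-a), length(x-b) );
}
```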
Putting All the Shapes Together
Once we know how to compute the distance to each type of primitive, finding the distance to the domain boundary is easy—we just take the minimum of the distance over all primitives that make up the boundary:

```
float distanceBoundary( in vec2 x )
{
   float d = FLT_MAX; // start with the largest possible value

   // keep the minimum distance to any primitive in the domain boundary
   for( int i = 0; i < circles.length(); i++ ) {
      d = min( d, distanceCircle( x, circles[i] ));
   }
   for( int i = 0; i < segments.length(); i++ ) {
      d = min( d, distanceSegment( x, segments[i] ));
   }
   for( int i = 0; i < arcs.length(); i++ ) {
      d = min( d, distanceArc( x, arcs[i] ));
   }

   return d;
}
```

The example below plots the distance to a more complicated domain: the boundary of a Volkswagen cylinder head gasket. As noted above, you can try out other models using our DXF conversion tool.
Beyond the simple shapes we've considered here, there are a huge number of other shapes and surfaces that admit easy distance computation. Importantly, you don't even have to compute the distance exactly, as long as you have a lower bound on the distance. I.e., if you know that the boundary is no closer than a certain distance, then it's safe to take a random step of that size in any direction. This “under-stepping” can be helpful in applying your implementation to boundary representations where exact distances are not known (heck, even some fractal sets have nice simple distance under-estimates!), or simply reduce cost when distance evaluation is expensive. (Of course, this reduction in cost has to be kept in balance with the larger amount of work needed to take more, smaller steps.)

PDE Estimators
Laplace Estimator

Now that we know how to define and query our domain geometry, we can actually start building PDE solvers. Recall that the first equation we want to solve is a Laplace equation \[ \tag{Laplace equation} \begin{array}{rl} \Delta u = 0 & \text{on}\ \Omega \\ u = g & \text{on}\ \partial\Omega, \end{array} \] where \(u: \Omega \to \mathbb{R}\) is the unknown function on the domain \(\Omega\), and \(g: \partial\Omega \to \mathbb{R}\) gives known values along the domain boundary \(\partial\Omega\). For instance, you can imagine that we can observe the temperature of the gasket from the outside, and want to predict its temperature on the inside. The operator \(\Delta\) denotes the usual Laplace operator on \(\mathbb{R}^n\).

Mean Value Formula

Most often, the value of \(\Delta u\) at a point \(\vec{x} \in \mathbb{R}^n\) is expressed in coordinates \(x_1, \ldots, x_n\), as the sum of all second partial derivatives: \[ \Delta u(\vec{x}) = \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2}(\vec{x}) \]

However, there's another interpretation of the Laplace operator that is simultaneously more meaningful—and also leads us to the walk on spheres algorithm. Intuitively, for any function \(u: \Omega \to \mathbb{R}\), the value of \(\Delta u\) at any point \(\vec{x} \in \Omega\) gives the deviation of \(u\) from its average over a small sphere around \(\vec{x}\). More precisely, \[ \Delta u(\vec{x}) \propto \lim_{r \to 0} \frac{1}{r^2}\left(\text{mean}_{\partial B(\vec{x},r)}(u) - u(\vec{x})\right), \] where \(B(\vec{x},r)\) is a ball centered on \(\vec{x}\) of radius \(r\). So, suppose that for all points \(\vec{x}\) and all radii \(r\), the mean is equal to the value at the center. Then the function \(u\) is a solution to our Laplace equation, i.e., \(\Delta u(\vec{x}) = 0\). Likewise, it is not hard to show (though it does not quite follow immediately…) that every solution to our Laplace equation satisfies the mean value property \[ \boxed{ u(\vec{x}) = \text{mean}_{\partial B(\vec{x},r)}(u), } \] i.e., the solution to our Laplace PDE at any point is nothing more than the average value over any sphere \(\partial B\) around \(\vec{x}\). More explicitly, we can write the mean value of \(u\) over a sphere as its integral over the sphere, divided by the sphere area \(|\partial B|\): \[ u(\vec{x}) = \class{lowlight}{\cssId{sphere area}{\frac{1}{|\partial B(\vec{x})|}}} \class{lowlight}{\cssId{total over sphere}{\int_{\partial B(\vec{x})} u(\vec{y})\,dy}}, \] where \(B(\vec{x}) \subset \Omega\) is any \(n\)-dimensional ball centered around \(\vec{x}\) and contained within the domain \(\Omega\), \(\partial B(\vec{x})\) is the sphere bounding this ball (i.e., its “outer shell”), \(|\partial B(\vec{x})|\) is the (generalized) volume of this sphere (i.e., its length in 2D, or surface area in 3D), and the integral is taken over all points \(\vec{y}\) of the sphere.

Specialization to 2D

In our 2D problem, \(B(\vec{x})\) is a solid disk, and \(\partial B(\vec{x})\) is just a circle centered around \(\vec{x}\) of some radius \(r > 0\). Hence, \(|\partial B(\vec{x})|\) is the usual circumference of a circle, \(2\pi r\), and the mean value formula can be written more explicitly as \[ u(\vec{x}) = \frac{1}{2\pi r} \int_0^{2\pi} u(\vec{x} + r(\cos\theta,\sin\theta))\,r d\theta. \] Here we've just written the point \(\vec{y}\) as an offset from the ball center \(\vec{x}\) by its radius \(r\) in the unit direction corresponding to each angle \(\theta\).

Amazingly enough, when \(u\) is a harmonic function, the mean value formula holds for a ball of any radius—so long as it's still contained inside the problem domain. We can find such a radius by computing the distance from \(\vec{x}\) to the closest point on the domain boundary \(\partial\Omega\), and will assume \(r\) equals this largest radius unless otherwise noted. In fact, that's why we spent so much time working out our geometric queries. (For reasons that will become clear later, using the largest ball will also help our algorithms converge quickly.)

Recursive Estimation

Believe it or not, the mean value formula (just an integral over a circle) almost gives us an algorithm for solving our Laplace equation. It says: if we want to know the value of the solution \(u\) at a point \(\vec{x}\), we just need to integrate over its value at all points \(\vec{y}\) in a circle around \(\vec{x}\). The only problem is that, in general, we don't know the value of \(u\) at \(\vec{y}\) either. However, we do have a formula that tells us the value of \(u(\vec{y})\): the mean value formula, which describes \(u(\vec{y})\) in terms of yet other values of \(u\)! If we keep going down this route, it sounds like we might just forever keep invoking the mean value formula. But actually, there's a way out: we do know the solution values \(g\) along the domain boundary \(\partial\Omega\). So, if we can ultimately express the solution purely in terms of these boundary values (and we will!), then we're in good shape.

The other issue we have to contend with is the fact that, to evaluate our integral, we can't possibly evaluate \(u\) at all points of the circle. After all, there are infinitely many of them! But what we can do instead is approximate this integral using a finite number of samples. For instance, if we took \(k\) evenly-spaced angles \(\theta_1, \ldots, \theta_k\) on the circle, then we could use the approximation \[ \int_0^{2\pi} u(\vec{x} + r(\cos\theta,\sin\theta))\,r d\theta \approx \sum_{i=1}^k u(\vec{x} + r(\cos\theta_i,\sin\theta_i)) \frac{2\pi r}{k}, \] where the final factor \(2\pi r/k\) captures the idea that each sample is used to approximate the value on “one \(k\)th” of the circle. To get the average value of \(u\) over the circle (rather than the total value) we can divide by \(2\pi r\), leading to some fairly straightforward code:

```
float kSampleSolutionEstimate( vec2 x, // evaluation point
                               int k ) // number of samples around the circle
{
   // Get the radius r of the largest empty ball around x, by
   // computing the distance to the domain boundary
   float r = distanceBoundary( x );

   // Add up solution values around the circle
   float sum = 0.;
   for( int i = 0; i < k; i++ )
   {
      float theta_i = 2.*M_PI*float(i)/float(k); // evenly-spaced angles
      vec2 y_i = x + r*vec2( cos(theta_i), sin(theta_i) ); // sample point on circle
      sum += solutionEstimate( y_i ); // recursively estimate solution
   }

   return sum / float(k); // return average
}
```

But there's a big problem here: if we use \(N\) samples of the circle to approximate \(u(\vec{x})\), and then apply this formula again to approximate \(u(\vec{y})\) at each of these \(N\) samples, then we're already taking \(N^2\) samples. Much worse, as we keep going, recursively applying this formula, we'll need \(N^3\), \(N^4\), \(N^5\), \(\ldots\) samples: an exponential blowup, with no end in sight.

Monte Carlo to the Rescue

So, we're going to take a different approach that might sound a bit crazy at first, but which turns out to work extremely well. Rather than use \(N\) evenly-spaced samples of the circle to approximate each integral, we'll use just one random sample. In other words, we'll use a Monte Carlo estimator. More explicitly, we can approximate our integral as just \[ \int_0^{2\pi} u(\vec{x} + r(\cos\theta,\sin\theta))\,r\,d\theta \approx 2\pi r\, u(\vec{x} + r(\cos(2\pi\xi),\sin(2\pi\xi))), \quad \xi \sim \mathcal{U}([0,1]), \] where the notation \(\xi \sim \mathcal{U}([0,1])\) means that the value \(\xi\) is a random number in the interval \([0,1]\) (drawn uniformly at random). Notice that the factor of \(2\pi r\) in this approximation of our integral is going to get canceled out by the factor of \(1/(2\pi r)\) in the mean value formula. So, our overall estimate of the solution \(u\) at the point \(\vec{x}\) is simply \[ \widehat{u}(\vec{x}) := \widehat{u}(\vec{x} + r(\cos(2\pi\xi),\sin(2\pi\xi))), \quad \xi \sim \mathcal{U}([0,1]), \] where here and throughout we use a “hat” to indicate that a value is approximate.

In the next section, we'll think about how to turn this recursive expression into actual code.

Taking a Walk

If we look back at the expression \(\widehat{u}(\vec{x})\), or at the function `solutionEstimate`, they look a little strange: to approximate the solution at \(\vec{x}\), we just grab the value at some nearby random point \(\vec{y}\). To estimate the solution at \(\vec{y}\), we grab the value at some nearby random point, and so on. In fact, at this point it might be useful to enumerate the points used in our estimate as \(\vec{x}_0, \vec{x}_1, \vec{x}_2, \ldots\). If we plot the sequence of points generated by this procedure, as well as the spheres we draw samples from, we see that we are taking a literal walk on spheres—hence the name walk on spheres!

Let's write a little shader that visualizes such a walk. You can get a sneak preview of the thing we're trying to code by clicking the `⇄ (Toggle Solution)` button (but no peeking at the solution code!). Here we use a simpler domain that will make it easier to see long walks: just a circle inside a square. We've also set things up so that clicking and dragging the cursor changes the starting point for the walk, and hitting `Play` changes the random seed, so we can see different possible walks.

The main function you need to implement is `visualizeWalk`, which simulates and visualizes a walk starting from the given point `x0`. Before thinking about the visualization aspect, see if you can simply tease apart the recursive expression for our estimate \(\widehat{u}(\vec{x})\) into a non-recursive procedure that computes the points \(\vec{x}_0, \vec{x}_1, \vec{x}_2, \ldots\) along the walk, i.e., the arguments to the recursive invocations of the function \(\widehat{u}(\vec{x})\). Remember that in general we want to take the largest possible step at each iteration. [Hint: you probably want to use the `distanceBoundary` function we spent so much work developing in the previous section!]

Just for fun, and to reinforce your thinking about distance functions, you'll also need to use your existing distance functions for circles and lines to help you visualize the walk itself! Try drawing a circle for the sphere used to sample each step of the walk, and a line segment between each pair of consecutive points. To draw the points themselves, we've added a new function `distanceDisk`; take a look, and try using it to define a function that can draw a small dot at a given location.

Note by the way that random number generation is not built in to GLSL—but is part of pretty much any other standard programming language. Since it's a bit of a distraction (yet still needed to build a solver “from scratch!”), we defer implementation of the function `unitRand()` to the appendix on random number generation.

It's Good to Be Average

Ok, just to recap our story so far:

If this estimate sounds pretty bad, well... it is! At least if we only take one walk. However, rather than averaging over many points in each sphere, we can just average over many walks, each of which requires only a linear number of samples. In other words, what we really want to compute is not just \(\widehat{u}(\vec{x})\), but rather an average \[ \frac{1}{N} \sum_{i=1}^N \widehat{u}(\vec{x}), \] where \(N\) is the total number of walks, and each term in the sum is computed using different random numbers. The whole idea of Monte Carlo integration, then, is that as we take more and more walks, we will see less and less error in our estimate (which for us will show up as less and less noise in our visualization).

In a typical code environment, this average might be expressed with a simple outer loop:

```
float estimateLaplace( vec2 x, // evaluation point
                       int nWalks ) // number of samples used to compute average
{
   // Accumulate estimates from many walks
   float estimate = 0.;
   for( int i = 0; i < nWalks; i++ )
   {
      // Note that solutionEstimate(x) will give different
      // values each time we call it, since it uses different
      // random numbers each time
      estimate += solutionEstimate( x );
   }

   return estimate / float(nWalks); // return average
}
```

We'll see a bit later that this kind of averaging is a bit trickier in GLSL. But the code structure above is the “right” way to think about the averaging process in a typical implementation. (Or perhaps you might divvy up the \(N\) walks across multiple threads, and compute pieces of this average in parallel.)

Termination: Boundary Conditions and The Epsilon Shell

In case you hadn't noticed, we still have a glaring problem: as written, our walk never ends! We just keep chasing random neighboring points until the end of time (or really, until we get a stack overflow). As noted above, we want to somehow take advantage of the fact that we know the solution values \(g\) on the domain boundary \(\partial\Omega\) to terminate our random walk.

If we wait until we sample a point \(\vec{x}_k\) exactly on the domain boundary, we’re out of luck: since our spheres generally just touch the boundary tangentially, the probability that we ever sample this single tangential point is essentially zero. Instead, we'll make a slight approximation and say that if the most recent step in our walk \(\vec{x}_k\) is “close enough” to the boundary, then we can stop and just use the value \(g(\xbar_k)\) at the boundary point \(\xbar_k \in \partial\Omega\) closest to \(\vec{x}_k\). In particular, we can specify a stopping tolerance \(\varepsilon > 0\), and stop if the distance \(d(\vec{x}_k,\partial\Omega)\) between \(\vec{x}_k\) and the closest point on the domain boundary \(\partial\Omega\) ever goes below \(\varepsilon\). In this case, we often say that we are within the “\(\varepsilon\)-shell” \(\partial\Omega_\varepsilon\).

One way to express this stopping condition is to make our expression for the estimate \(\widehat{u}(\vec{x})\) a conditional expression, that either recursively evaluates another solution estimate (if we're outside the \(\varepsilon\)-shell), or directly evaluates the boundary conditions (if we're inside the \(\varepsilon\)-shell): \[ \highlight{\widehat{u}(\vec{x}_k)}{estimate at x} := \begin{cases} \highlight{\widehat{u}(\vec{x}_{k+1})}{recursive estimate}, & \highlight{d(\vec{x}_k,\partial\Omega) > \varepsilon}{if still inside}, \\ \lowlight{g(\xbar_k)}{boundary value}, & \lowlight{d(\vec{x}_k,\partial\Omega) \leq \varepsilon}{if near boundary}. \end{cases} \] Here, as usual, the next point \(\vec{x}_{k+1}\) is the one obtained by randomly sampling the largest sphere around \(\vec{x}_k\). The quantity \(d(\vec{x},\partial\Omega)\) is the distance between \(\vec{x}\) and the domain boundary, as we computed earlier in the section on Geometric Queries.

In code, this means we just add a small check to our estimation procedure before continuing with the next step of the walk. Let's see if you can implement a version of this procedure, starting with the skeleton below (which includes a working version of the methods from the previous examples). If so, you will have completed the first part of the tutorial! In other words, you will have built a complete, working solver for the Laplace equation “from scratch.” Well, almost: this code will estimate the solution with just a single walk per point, rather than taking the average over many walks, as discussed above. But see the comments immediately following the code for a discussion of how this averaging takes place in GLSL—or just hit `Play` to see it happen before your eyes!
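If you get stuck (or just want to sanity-check your structure), here's a sketch of what the core routine might look like. Since GLSL doesn't allow recursion, the recursive estimate is unrolled into a loop; the helpers `closestPointBoundary(x)` and `g(p)`, which return the boundary point nearest to `x` and the boundary value at a boundary point `p`, are assumptions of this sketch rather than functions we've defined in the text:

```
const float EPSILON = 0.01; // stopping tolerance defining the ε-shell
const int MAX_STEPS = 64;   // safety cap on the number of steps per walk

// estimates the solution to the Laplace equation at x0, using a single walk
float solutionEstimate( vec2 x0 )
{
   vec2 x = x0; // current point along the walk

   for( int k = 0; k < MAX_STEPS; k++ )
   {
      // compute the radius of the largest empty ball around x
      float r = distanceBoundary( x );

      // stop as soon as we enter the ε-shell
      if( r < EPSILON ) break;

      // otherwise, step to a uniformly random point on the sphere ∂B(x,r)
      float theta = 2.*M_PI*unitRand();
      x += r*vec2( cos(theta), sin(theta) );
   }

   // grab the boundary value at the boundary point closest to x
   return g( closestPointBoundary(x) );
}
```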

So, this is great… but where's the loop over walks, implementing our average \(\tfrac{1}{N}\sum_{i=1}^N \widehat{u}(\vec{x}_0^i)\)? Unlike the “outer loop” we discussed earlier, we've built the GLSL shader environment in a way that lets us progressively accumulate estimates over time, so that we can watch the solution emerge as we take more and more walks. Conceptually this happens by storing a 2D array of solution estimates, one for each pixel in the image, which we update and display each time we take a new walk. (If you're really interested in how this is implemented at a low level, feel free to take a look at the code in shader-env.js. However, this setup gets pretty gnarly, and is way outside the scope of this tutorial!) In a more conventional programming language, you would simply modify values in a 2D array. This kind of setup is quite common in Monte Carlo rendering, helping to provide immediate feedback about what the solution will look like, without having to spend a huge amount of time and computation resolving the solution down to its final accuracy. Likewise, in the context of engineering design or physical simulation, it can be quite helpful to get this kind of immediate “live” feedback—allowing someone to make small adjustments without running a long simulation.

What Are We Looking At?

After working through all these little details, and getting your code to match the reference solution, your head might be spinning a little bit. What are we actually looking at in this final image?

Remember that, from an application point of view, we set out to compute the temperature at each interior point of our gasket, given known temperatures on its boundary. To do this, we said we would use the Laplace equation \(\Delta u = 0\) as our mathematical model. We then observed that the solution to this equation at any point \(\vec{x}\) is just the average of the solution values at all points on any sphere around \(\vec{x}\). So, we recursively estimated this average, by taking a single sample of each sphere, which led to the walk on spheres (WoS) algorithm. We then averaged over many such walks, to get a final estimate of the solution at each point.

So, what are we looking at in the end? Exactly what we wanted to see: an estimate of the domain temperature, using a “pseudocolor” visualization where brighter pixels indicate higher temperatures, and darker pixels indicate lower temperatures. From this image—or even a rough estimate of this image—we can quickly identify “hot spots” where the gasket gets too hot (e.g., we could optionally plot in bright red points that go over a maximum temperature threshold). Since the simulation is so fast, we could easily move around components with our cursor (say) to explore different gasket designs. Or even differentiate the temperature with respect to the parameters of the boundary geometry (like circle centers and radii) to automatically optimize the design, and keep it cool.

But differentiability is a story for another day. For now, we're going to move on and talk about a couple more basic aspects of Monte Carlo PDE solvers, namely (i) generalizing our implementation slightly to handle Poisson equations, which will help us get a handle on the fundamental concept of importance sampling, and also (ii) incorporating Neumann boundary conditions, which are an essential component of more realistic physical models.

Poisson Estimator

A Poisson equation is only slightly more general than a Laplace equation: rather than assuming that the temperature on the interior depends only on the boundary temperature \(g\), we imagine there is also a “background” source of heat or cooling at each point in the domain. If this source is described by a function \(f: \Omega \to \mathbb{R}\), then the equation we want to solve is now \[ \tag{Poisson equation} \begin{array}{rl} \Delta u = f & \text{on}\ \Omega \\ u = g & \text{on}\ \partial\Omega. \end{array} \] For instance, in the context of our gasket we might use \(f\) to model heating due to the engine block or cylinder head. This data could come from a fixed function that models the ambient temperature distribution, or from a grid or table—or might even come from a secondary solver. The beauty of walk on spheres is that, unlike finite element or finite difference methods, you don't have to sample this function onto every node of a grid ahead of time. Instead, our only requirement is that we can evaluate it at a given point \(\vec{x}\) whenever the solver requests it. This makes evaluation of the source term “output-sensitive”: we do the minimum possible amount of work needed to estimate the solution.

Integral Equation for the Poisson Equation

For this more general equation, the mean value property no longer applies to \(u\): if it did, we'd just recover the solution for \(f=0\). Instead, we need to consider a more general boundary integral equation (BIE), which expresses the solution as \[ \lowlight{u(\vec{x})}{temperature} = \lowlight{\frac{1}{|\partial B(\vec{x})|} \int_{\partial B(\vec{x})} u(\vec{z})\,d\vec{z}}{boundary term} + \lowlight{\int_{B(\vec{x})} G(\vec{x},\vec{y}) f(\vec{y})\,d\vec{y}}{source term}. \]

As with the Laplace equation, we have a boundary term that expresses the temperature at \(\vec{x}\) in terms of the temperature on a sphere around \(\vec{x}\). But we also now have a source term, which incorporates a new function \(G: B \times B \to \mathbb{R}_{\geq 0}\) called the Green's function. For any two points \(\vec{x}, \vec{y}\) in the ball \(B\), the value \(G(\vec{x},\vec{y})\) describes how a “pin prick” of heat applied at \(\vec{y}\) impacts the temperature felt at the point \(\vec{x}\). (And conversely: it describes how heat applied at \(\vec{x}\) impacts the temperature felt at \(\vec{y}\); i.e., the Green's function \(G\) is a symmetric function.) Hence, by integrating the Green's function \(G\), times our source function \(f\), over all points \(\vec{y}\), we can determine how the source term over the whole ball impacts the temperature we're currently trying to estimate at \(\vec{x}\).

Harmonic Green's Function for a 2D Ball
In our setting, where we have a 2D domain and hence want to integrate over a two-dimensional ball (i.e., disk), the Green's function for a ball of radius \(R\) can be written explicitly as \[ \tag{2D Harmonic Green's Function (Ball)} G(\vec{x},\vec{y}) = \frac{1}{2\pi}\log\left(\frac{R}{|\vec{x}-\vec{y}|}\right). \] If we plot this function in terms of the distance \(r := |\vec{x}-\vec{y}|\) from the ball center (here viewed “from the side”), we notice a couple things: it blows up (logarithmically) as \(r \to 0\), and it falls off to zero at the ball boundary \(r = R\). This behavior will play an important role in the efficiency of our Monte Carlo estimator. In fact, we'll work out two different schemes: a naïve estimator that doesn't take the behavior of \(G\) into account, and a more sophisticated importance-sampled estimator that treats it more carefully—and yields better performance.

Beyond Poisson

As a side note, this idea of translating \[ \text{PDE}\ \to\ \text{boundary integral equation over a ball}\ \to\ \text{Monte Carlo estimator} \] is the basic strategy for coming up with many walk on spheres algorithms. The Laplace and Poisson equations are just the first two of many examples. The fact that we specifically consider an integral over a ball is the “secret sauce” that makes walk on spheres algorithms work out so nicely: instead of having to solve the problem all at once over a domain with complicated geometry, we reduce it to a sequence of sub-problems that we can more easily solve over simpler subdomains (like balls), where for instance we have closed-form expressions for the Green's function. We'll expand on this basic picture when we talk about Neumann boundary conditions, where our subdomains are no longer balls—but the basic principle of “decomposing into simpler domains” still applies.

Monte Carlo to the Rescue… Again

So, how do we turn the boundary integral equation for the Poisson equation into an actual algorithm? We already know what to do for the boundary term: recursively sample a point on the largest sphere around the current point \(\vec{x}_k\), just as we did for the Laplace estimator. But what about the source term?

Here, the answer is again of course to compute a Monte Carlo estimate. This time around, we'll need to draw samples uniformly from the ball (or disk) \(B(\vec{x})\) around \(\vec{x}\), rather than from the circle \(\partial B(\vec{x})\); the procedure for actually computing such a sample is discussed in the section below. We can then estimate the source term as

\[ \frac{|B(\vec{x}_k)|}{M} \sum_{i=1}^M G(\vec{x}_k,\vec{y}_i) f(\vec{y}_i), \] where \(\vec{y}_1, \ldots, \vec{y}_M\) are independent uniform samples of the disk \(B(\vec{x}_k)\), and the factor \(|B(\vec{x}_k)|\) accounts for the fact that each uniform sample is drawn with density \(1/|B(\vec{x}_k)|\). Unlike the boundary term, where our unknown quantity \(u\) depends on other unknown values of \(u\), everything in this integral is known: the source term \(f\) is given, and the Green's function \(G\) is known in closed form. So, we could in principle use as many samples as we want without getting an exponential blowup in cost. In practice, however, since we know we're going to take many walks, and since we need to compute a source integral at every step of our walk, we'll still just take a single sample. (Though as we talk through our importance sampling strategy, you might want to think about scenarios in which it's useful to take a few more.)
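As a concrete sketch, a single-sample (\(M=1\)) version of this estimate might look like the following in GLSL. Here `sampleDisk` is the uniform disk sampler we derive in the next section, `f` is the given source function (both names are illustrative), and this is the naïve, uniformly-sampled estimator mentioned above:

```
// harmonic Green's function for a 2D ball of radius R
float greensFunction( vec2 x, vec2 y, float R )
{
   return log( R/length(x-y) )/(2.*M_PI);
}

// single-sample estimate of the source term over the ball B(x,r)
float sourceTermEstimate( vec2 x, float r )
{
   vec2 y = sampleDisk( x, r ); // uniform sample of the disk B(x,r)
   float area = M_PI*r*r;       // |B(x,r)|, i.e., one over the sampling density
   return area*greensFunction( x, y, r )*f( y );
}
```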

Sampling a Disk

To actually evaluate our estimate of the source term, we have to somehow draw a sample \(\vec{y}\) uniformly from a ball, or in 2D, a disk \(B(\vec{x},r)\) of radius \(r\), centered at \(\vec{x}\). “Uniformly” means that it needs to be equally likely that we pick any point inside the disk. (Or, if you want to be a bit more pedantic: it means the probability that we pick a point from any region within the disk is proportional to the area of that region.) But the only source of randomness we have is numbers uniformly distributed on the interval \([0,1]\), provided to us in our GLSL code skeleton by the routine `unitRand()`. So, what can we do?

Actually, we already solved this kind of problem when sampling the circle \(\partial B(\vec{x},r)\) in our Laplace estimator: given a uniform sample \(\xi\) from \([0,1]\), we “warped” it into a sample of the circle with radius \(r\) and center \(\vec{x}\), by applying the transformation \[ \xi \mapsto \vec{x} + r\left( \cos(2\pi\xi), \sin(2\pi\xi) \right). \] In essence, we first “stretch” the unit interval \([0,1]\) into a larger interval \([0,2\pi]\), corresponding to the set of all possible angles \(\theta\) around the circle, plug this angle into the usual equation \((\cos(\theta),\sin(\theta))\) for a circle, scale this circle by its radius \(r\), and finally translate it by its center \(\vec{x}\).
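In code, this warp takes just a couple of lines. Here's a minimal GLSL sketch, assuming the skeleton's `unitRand()` (the name `sampleCircle` is ours; your Laplace solver likely already contains something equivalent):

```glsl
// Sample a point uniformly from the circle of radius r centered at x,
// by warping a uniform sample xi in [0,1] to an angle theta = 2*pi*xi.
vec2 sampleCircle(vec2 x, float r) {
    const float PI = 3.14159265358979;
    float theta = 2.0 * PI * unitRand();
    return x + r * vec2(cos(theta), sin(theta));
}
```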

How can we perform the same kind of “warping” of random samples for a disk? Suppose for now we just want to uniformly sample the unit disk \(B(\vec{0},1) \subset \mathbb{R}^2\). Here, instead of warping a 1D interval into a curve (i.e., a circle), we effectively need to warp the 2D square made by a pair of random samples \((\xi_1, \xi_2) \in [0,1]^2\) into a disk. In other words, we need to go from rectangular to polar coordinates. One idea that quickly comes to mind is to let \[ (\xi_1,\xi_2) \mapsto \xi_1\left( \cos(2\pi\xi_2), \sin(2\pi\xi_2) \right), \] i.e., apply the usual polar coordinate formula. However, this simple scheme doesn't quite work out: a circle of small radius (say, \(r=0.1\)) has much smaller circumference than a circle of large radius (say, \(r=0.9\)), and yet we distribute just as many samples near the small circle as the large one. Hence, we get too many samples near the center, and not enough near the boundary (as illustrated above).
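For concreteness, here's what this naïve scheme looks like in GLSL (the name is hypothetical, and remember: this version is biased):

```glsl
// Naive disk sampling via the polar coordinate formula.
// WARNING: this is NOT uniform; it over-samples the center of the disk.
vec2 sampleDiskNaive(vec2 x, float r) {
    const float PI = 3.14159265358979;
    float radius = r * unitRand();        // uniform in radius: too many samples at small radii
    float theta  = 2.0 * PI * unitRand();
    return x + radius * vec2(cos(theta), sin(theta));
}
```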

Instead, by performing a careful calculation (the link leads to a full derivation; the essential idea is that the determinant of the Jacobian tells us how much area changes under any “warping,” and writing out this formula tells us what the warp must look like to distribute area uniformly across the whole domain), one can show that a correct transformation is given by \[ (\xi_1,\xi_2) \mapsto \sqrt{\xi_1}\left( \cos(2\pi\xi_2), \sin(2\pi\xi_2) \right). \] If we sample a bunch of random points from \([0,1]^2\) and apply this transformation, we'll get nice uniform points in the disk, as in the rightmost (blue) picture below. If on the other hand we omit the square root, we get the uneven sampling seen in the leftmost (red) picture.
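In GLSL, the corrected transformation might look like the following sketch (again assuming `unitRand()`; the name `sampleDisk` is ours):

```glsl
// Sample a point uniformly from the disk of radius r centered at x.
// The square root on the radius is exactly what corrects the area
// distortion of the naive polar-coordinate warp.
vec2 sampleDisk(vec2 x, float r) {
    const float PI = 3.14159265358979;
    float radius = r * sqrt(unitRand());
    float theta  = 2.0 * PI * unitRand();
    return x + radius * vec2(cos(theta), sin(theta));
}
```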

Side quest: if you want to do a little experiment to show that our square root formula really is right (and the other one is wrong), a nice test you can do is to integrate (using Monte Carlo) the function \[ f(x_1,x_2) := \cos(\alpha_1 x_1) + \cos(\alpha_1 x_2) \] over the unit disk centered at each point of the plane, where \(\alpha_1 \approx 3.83171\) is the first zero of the Bessel function \(J_1(x)\). This function is carefully chosen so that the integral over any unit disk is exactly zero—and you can see why if you plot it: as we translate the disk around, any “positive stuff” we gain or lose is exactly balanced by the change in “negative stuff.” But if we omit the square root from our formula, our integrals will focus more on the values near the center of the disk—which are not constant over the domain. (Maybe you can cook up your own shader to test this out?)

Basic Poisson Estimator

Taking both boundary and source terms into account, and incorporating our “1-sample Monte Carlo” strategy for estimating both terms, our final recursive estimator for the Poisson equation is given by \[ \highlight{\widehat{u}(\vec{x}_k)}{estimate at x} := \begin{cases} \highlight{\widehat{u}(\vec{x}_{k+1})}{recursive estimate} + \highlight{|B(\vec{x}_k)| G(\vec{x}_k,\vec{y}_k) f(\vec{y}_k)}{source term}, & \highlight{d(\vec{x}_k, \partial\Omega) > \varepsilon}{if still inside the domain}, \\ \lowlight{g(\overline{\vec{x}}_k)}{boundary value at closest point}, & \lowlight{d(\vec{x}_k, \partial\Omega) \leq \varepsilon}{if close enough to the boundary to stop}. \end{cases} \] There are an awful lot of symbols here, but algorithmically the only thing we're doing differently from our Laplace estimator is to (i) sample a random point \(\vec{y}_k\) from the disk \(B(\vec{x}_k)\) at each step, and (ii) evaluate the contribution of the source term at this point. The area factor \(|B(\vec{x}_k)|\) is the reciprocal of the uniform sampling density \(1/|B(\vec{x}_k)|\); equivalently, we approximate the source term integral by pretending the integrand is constant over the disk, with value equal to our sampled value.

See if you can translate this estimator into code, starting with a working implementation of the Laplace estimator. See the comments above the function `float solvePoisson(in vec2 x0)` to get started. You should have all the tools you need, but might need to stop and think for a moment about exactly what to implement. To better see the effect of the source term, we set the boundary conditions to zero, and defined a simple source term \(f(\vec{x}) = 1\), via the function `sourceTerm(vec2 x)`, that is constant over the whole domain. As before, hitting `Play` will accumulate samples, and smooth out the noise in your Monte Carlo estimates.
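If you get stuck, here's one way the pieces might fit together. This is only a sketch, not the reference solution: it assumes the helper functions sketched earlier (`sampleCircle`, `sampleDisk`, `greensFunctionBall2D`), the skeleton's `sourceTerm()`, plus two hypothetical helpers, `distanceToBoundary()` and `boundaryValue()`, standing in for whatever distance query and boundary data your skeleton actually provides:

```glsl
// A sketch of the recursive Poisson estimator, written as a loop.
// Each step adds one single-sample estimate of the source term, then
// continues the walk from a uniform sample of the largest empty sphere.
// Note: distanceToBoundary() and boundaryValue() are stand-ins for your
// skeleton's actual helpers.
float solvePoisson(in vec2 x0) {
    const float PI = 3.14159265358979;
    const float eps = 0.001;  // stopping tolerance (epsilon shell)
    const int maxSteps = 64;  // safety cap on the walk length

    vec2 x = x0;
    float sum = 0.0;          // accumulated source contributions
    for (int k = 0; k < maxSteps; k++) {
        float R = distanceToBoundary(x);  // radius of largest empty ball B(x)
        if (R <= eps) break;              // close enough to the boundary: stop

        // Source term: one uniform sample y of B(x), weighted by the
        // area |B(x)| and the Green's function G(x,y).
        vec2 y = sampleDisk(x, R);
        sum += PI * R * R * greensFunctionBall2D(x, y, R) * sourceTerm(y);

        // Boundary term: recurse by jumping to a uniform sample of the sphere.
        x = sampleCircle(x, R);
    }

    // Terminal boundary value g, ideally evaluated at the closest point
    // on the boundary to x.
    return sum + boundaryValue(x);
}
```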

By the way, these simple boundary/source terms actually correspond to a different physical problem, of determining the deflection of a thin membrane with negligible bending stiffness (like a soap bubble, cell membrane, or rubber balloon) under uniform pressure or uniform load. E.g., if you have a really thin shape, and subject it to a uniform force like gravity—but fix the boundary in place—what will happen to the rest of the surface? Of course, you're free to play with whatever source term (and boundary conditions) you like. Have at thee, coward!

Computing Signed Angles

This appendix describes how to calculate angles between vectors in the plane. In general, sloppy handling of angles in geometric code can lead to unnecessarily complicated code—and long, frustrating debugging sessions. In contrast, adopting a principled approach to angle calculations can yield concise code that nicely handles the general case.

By signed angles, we mean angles that describe both the magnitude and the direction of the rotation from one vector to another. For instance, if I tell you only that the hour hand on a clock is \(90^\circ\) away from midnight, this unsigned angle doesn't specify whether it's 9:00pm or 3:00am. On the other hand, the phrase “three hours after midnight” indicates a signed angle of \(+90^\circ\), and “three hours before midnight” describes an angle of \(-90^\circ\). With vectors in the plane the situation is the same, except that by convention we take the counter-clockwise direction to be positive.

An important question, then, is: how do we compute the angle \(\theta\) from a vector \(\vec{u}\) to a vector \(\vec{v}\)? If you've taken a course in vector calculus (or 3D calculus), you may recall some relationships between vectors and angles like \[ \langle \vec{u}, \vec{v} \rangle = |\vec{u}||\vec{v}|\cos\theta. \] So, you might think that to get the angle we can just solve this equation for \(\theta\) to get \[ \theta = \arccos\left( \frac{\langle \vec{u}, \vec{v} \rangle}{|\vec{u}||\vec{v}|} \right) = \arccos(\langle \widehat{\vec{u}}, \widehat{\vec{v}} \rangle). \] The problem is that this expression always gives the smallest unsigned angle between \(\vec{u}\) and \(\vec{v}\), rather than the signed angle from \(\vec{u}\) to \(\vec{v}\) in the counter-clockwise direction. For instance, if \(\vec{u} = (1,0)\) (pointing at “3 o'clock”) and \(\vec{v} = (0,-1)\), the expression above will give \(90^\circ\) (\(\pi/2\) radians), making it impossible to know whether we should swing \(\vec{u}\) clockwise or counter-clockwise by a quarter turn to reach \(\vec{v}\).

The Two-Argument Arc Tangent. The basic problem is that a single value (such as the dot product) does not provide enough information to disambiguate between the different cases. Instead, we need to use the two-argument arc tangent function \[ \text{atan2}: \mathbb{R}^2 \to [-\pi,\pi];\ (y,x) \mapsto \theta, \] which gives the angle \(\theta\) of the vector \((x,y)\) relative to the horizontal axis \((1,0)\) (at angle \(0\)), properly taking the quadrant into account. Notice that the argument order really is \((y,x)\) rather than \((x,y)\), perhaps to evoke the idea that the ordinary (single-argument) arc tangent function takes \(y/x\) as input. For instance, “3 o'clock” gets mapped to \(0\), “12 o'clock” gets mapped to \(+\pi/2\), and “6 o'clock” gets mapped to \(-\pi/2\). (Note, then, that if we prefer our angles in the range \([0,2\pi)\), we have to add \(2\pi\) to any negative angle returned by \(\text{atan2}\).)
In GLSL, \(\text{atan2}\) can be called by simply invoking the arc tangent function with two arguments, i.e., `atan(y,x)`. As noted above, the argument order really should be `(y,x)` and not `(x,y)`, which is a common source of bugs. (To make matters more complicated, some packages, such as Wolfram Mathematica, Microsoft Excel, and Google Sheets, use the other argument order, `(x,y)`. So watch out!)

Signed Angle From One Vector to Another. Using the \(\text{atan2}\) function, we can now correctly compute the signed angle from a vector \(\vec{u}\) to another vector \(\vec{v}\). The key idea is that we now need to use the first vector to define our coordinate system, i.e., we need to think of \(\vec{u}\) as the “x-axis” or the “3 o'clock” direction, and measure the coordinates \(x,y\) of the second vector \(\vec{v}\) relative to \(\vec{u}\). One way to do this is to let \[ x := \langle \vec{u}, \vec{v} \rangle \qquad\text{and}\qquad y := \langle \mathcal{J}\vec{u}, \vec{v} \rangle, \] where the \(2 \times 2\) matrix \[ \mathcal{J} = \left[ \begin{array}{cr} 0 & -1 \\ 1 & 0 \end{array} \right] \] represents a quarter-turn in the counter-clockwise direction. In other words, we just measure the extent of \(\vec{v}\) along the two basis directions \(\vec{u}\) and \(\mathcal{J}\vec{u}\), and use the resulting coordinates \((x,y)\) as the arguments for our call to \(\text{atan2}\). Expanding the definition of the dot product, we have \[ \begin{array}{rcl} x &=& \vec{u}_1\vec{v}_1 + \vec{u}_2\vec{v}_2, \\ y &=& \underbrace{\vec{u}_1\vec{v}_2 - \vec{u}_2\vec{v}_1}_{=:\ \vec{u} \times \vec{v}}, \end{array} \] where the second formula defines a sort of 2D cross product. So, we can also write the signed angle \(\theta\) from \(\vec{u}\) to \(\vec{v}\) using a reasonably compact formula \[ \boxed{\theta = \text{atan2}( \vec{u} \times \vec{v}, \vec{u} \cdot \vec{v} ),} \] i.e., as just the two-argument arc tangent of the cross product and the dot product.
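The boxed formula translates directly into GLSL. Here's a minimal sketch (the helper names `cross2` and `signedAngle` are our own, not from the tutorial skeleton):

```glsl
// Scalar 2D cross product u1*v2 - u2*v1 (the y argument of atan2 above).
float cross2(vec2 u, vec2 v) {
    return u.x * v.y - u.y * v.x;
}

// Signed angle from u to v, measured counter-clockwise, in [-pi, pi].
float signedAngle(vec2 u, vec2 v) {
    return atan(cross2(u, v), dot(u, v)); // GLSL's two-argument atan is atan2
}
```

For instance, `signedAngle(vec2(1.0, 0.0), vec2(0.0, -1.0))` returns \(-\pi/2\), resolving exactly the ambiguity we ran into with the \(\arccos\) formula above.

Random Number Generation

Write me.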