Conformal geometric algebra

The story so far: in the vector model of \(\mathbb{R}^n\), rotations that fix the origin are elegantly described by rotors. Sadly, rotors mix poorly with translations; for example, rotations about an arbitrary axis are awkward to describe.

In the homogeneous model, rotations and translations can be easily combined into a single operation, at the cost of resorting to vector-algebra-style 4x4 matrices. While geometric algebra still has advantages over vector algebra in the homogeneous model, it is a pity we lose our beloved rotors.

Here, we’ll introduce the conformal model, which has the best of both worlds. Translations are viewed as a special case of rotations, so can also be described with a sort of rotor. A single versor can describe rotations, translations, and other conformal transformations.

That’s no moon

Consider a rotating cube revolving around a planet, which in turn revolves around a star.

Let \(f(X) = e^{-X/2}\), where the exponential is computed with the geometric product: \[ \newcommand{\vv}[1]{\mathbf{#1}} \newcommand{\x}{\vv{x}} \newcommand{\y}{\vv{y}} \newcommand{\z}{\vv{z}} \newcommand{\i}{\vv{i}} V = f(12 \z\infty) f(\x\y \gamma) f(8\x\infty) f(\y\z \beta) f(2\y\infty) f(-\x\z \alpha) \]

We shall see how the map \(p \mapsto V p V^{-1}\) as \(\alpha, \beta, \gamma\) range over all angles produces the following animation:

We have effortlessly mixed rotations and translation. Pure algebra. No matrices required.

Though nice, this demonstration is perhaps underwhelming because we could have employed 4x4 matrices instead. In fact, we lose operations by switching from matrices to conformal geometric algebra: we must do without shear transforms and non-uniform scaling. However, we have ample compensation:

  • As with the homogeneous model, we have beautiful succinct formulas for subspaces, for transforming them (outermorphisms), and for the intersection of subspaces.

  • Outermorphisms preserve the outer product; conformal transformations preserve the entire algebra.

  • Unlike the homogeneous model, spheres of all dimensions are subspaces, that is, we have succinct formulas for manipulating spheres of any dimension.

  • Reflection and inversion are easily expressed.

See also projective geometric algebra, which retains many useful features while being simpler and more efficient.

So that this article can stand on its own, we code from scratch, copying some definitions from before:

{-# LANGUAGE CPP #-}
{-# LANGUAGE OverloadedLists #-}
import Control.Monad
import Data.Bool
import Data.List
import Data.Map (Map)
import qualified Data.Map as M
import Data.Maybe
#ifdef __HASTE__
import Data.IORef
import Haste.DOM
import Haste.Events
import Haste.Graphics.Canvas hiding (translate, rotate)
import Haste.Graphics.AnimationFrame
#endif

type Multivector = Map String Double

m ! s = M.findWithDefault 0 s m

a ~== b = abs (a - b) < 10**(-6)

wedge :: Multivector -> Multivector -> Multivector
wedge m n = M.filter (not . (~== 0)) $
  M.fromListWith (+) $ catMaybes $ f <$> M.assocs m <*> M.assocs n where
  f a@(s, _) b@(t, _)
    | length u == length s + length t = Just c
    | otherwise                       = Nothing
    where c@(u, _) = basisMul a b

dot :: Multivector -> Multivector -> Multivector
dot m n = M.filter (not . (~== 0)) $
  M.fromListWith (+) $ catMaybes $ f <$> M.assocs m <*> M.assocs n where
  f a@(s, _) b@(t, _)
    | length u == length t - length s = Just c
    | otherwise                       = Nothing
    where c@(u, _) = basisMul a b

1D Complex Numbers

We’ve already seen complex numbers pop up in geometric algebra. Recall that for \(\mathbb{G}^n\) with \(n \ge 2\), if \(\i\) is a product of two orthonormal vectors, then \(\i^2 = -1\).

It turns out there’s another way to construct the complex numbers in geometric algebra, and we only need one dimension. We mentioned in passing that instead of the standard dot product, we could have used any quadratic form. If we take the one-dimensional vector space \(\mathbb{R}\) and define \(x \cdot x = -x^2\) for all \(x \in \mathbb{R}\) (the negation of the standard dot product), we wind up with a geometric algebra equivalent to the complex numbers.

For reasons that will become clear, we write \(\mathbb{G}^{0,1}\) for this geometric algebra.

2D Quaternions

We’ve already seen quaternions pop up in geometric algebra. If \(\vv{e}_1, \vv{e}_2, \vv{e}_3\) is an orthonormal basis for some 3-dimensional subspace of \(\mathbb{G}^n\) for \(n \ge 3\), then writing \(i = \vv{e}_2 \vv{e}_3, j = \vv{e}_3 \vv{e}_1, k = \vv{e}_1 \vv{e}_2\) yields the quaternions; the geometric algebra vector model description of 3D rotations is identical to the quaternion version.

It turns out there’s another way to construct the quaternions in geometric algebra, and we only need two dimensions. Let \(\vv{i}, \vv{j}\) be an orthonormal basis for the vector space \(\mathbb{R}^2\). Then define the geometric product using the inner product \((a\vv{i} + b\vv{j}) \cdot (c\vv{i} + d\vv{j}) = - ac - bd\) (the negation of the standard dot product). Lastly, define \(\vv{k} = \vv{i}\vv{j}\), and we have the quaternions!

For reasons that will become clear, we write \(\mathbb{G}^{0,2}\) for this geometric algebra.

What is a quadratic form anyway?

Let’s skip the details of quadratic forms, and fast forward to the results. It turns out nondegenerate geometric algebras are characterized by two non-negative integers \(p, q\), which we write \(\mathbb{G}^{p,q}\). We start with the vector space \(\mathbb{R}^{p+q}\), and instead of the standard dot product, we use the sum of the first \(p\) coordinates multiplied pairwise minus the sum of the other \(q\) coordinates multiplied pairwise:

qDot :: (Int, Int) -> [Double] -> [Double] -> Double
qDot (p, q) v w = sum a - sum b
  where (a, b) = splitAt p $ zipWith (*) v w
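
For instance, with signature \((1,1)\) the two coordinate directions square to \(+1\) and \(-1\), foreshadowing the \(e_+\) and \(e_-\) vectors below:

prop_qDotSignature =
  qDot (1, 1) [1, 0] [1, 0] == 1 &&
  qDot (1, 1) [0, 1] [0, 1] == -1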

Why would we want negative norms? At the heart of geometric algebra, there is a tension with coordinates. On the one hand, we ultimately need them, but on the other, we want to hide them.

We’ve been praising geometric algebra because it captures high-level geometric concepts with funny little symbols. When we described rotations centered at the origin, we were liberated from coordinates. We composed rotations by multiplying rotors.

Can we generalize to all Euclidean isometries? In this setting, an algebraic expression describing, say, rotating a triangle ABC in 3D space around the axis AB by 60 degrees followed by translating along the direction AC by 100 units should be the same whether the origin is close by or far away.

Consider the square of a vector: \(\vv{v}^2\). With the conventional inner product, this is the square of the distance from the origin, which is meaningless in Euclidean geometry, because the origin is an arbitrary reference point. If we see the square of a vector in an expression, it better be canceled out by other geometrically meaningless terms, otherwise the whole expression is suspect. Squaring a vector is one of the simplest things we can do with the geometric product, so it’s troubling that it’s meaningless.

With negative norms, we can construct a null vector, which is a vector \(\vv{v}\) satisfying \(\vv{v}^2 = 0\), and use these null vectors to represent points. The square of such a vector now has nothing to do with our choice of origin. We also arrange things so that \(-2 \vv{v} \cdot \vv{w}\) is the squared distance between the points represented by \(\vv{v}\) and \(\vv{w}\). We wind up with a geometric product that is meaningful for Euclidean geometry. In fact, it turns out to be useful for more than just Euclidean geometry.

Conformal Geometry

A conformal transformation is an angle-preserving transformation. This is more general than it might sound because we can measure angles between two curves by taking the angle between the tangents at the point of intersection. It turns out we can transform lines and circles into each other and still preserve angles.

Inversion is the quintessential conformal map. In 2D, we pick a circle centered at a point \(O\) with radius \(r\). Then we map the point \(P\) to \(P'\) such that the directed distances satisfy \(OP \times OP' = r^2\). Inversion generalizes to any dimension.
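
In coordinates, inversion is a one-liner; the helper below is only for illustration and is not used elsewhere:

-- Invert the point p in the sphere of radius r centered at c:
-- P maps to c + (r^2/|P-c|^2)(P - c), so that |cP| * |cP'| = r^2.
invert :: Double -> [Double] -> [Double] -> [Double]
invert r c p = zipWith (+) c $ map (* (r^2 / d2)) v
  where
    v  = zipWith (-) p c
    d2 = sum $ map (^2) v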

Under inversion, a circle avoiding \(O\) maps to a circle avoiding \(O\), while a circle through \(O\) maps to a line avoiding \(O\), and vice versa. Lastly, a line through \(O\) maps to itself.

Lines can be thought of as circles with an infinitely large radius, whose ends meet at the point at infinity (unlike projective geometry, this infinite point is the same for all directions); inversion swaps \(O\) and this point at infinity. With this mindset, we can simply say inversions map circles to circles.

Besides inversions, conformal maps include more familiar operations: rotations, reflections, uniform scalings which we call dilations, and translations. Notably absent but seldom missed are shearing and non-uniform scaling.

The Conformal Model

We introduced an extra dimension for the homogeneous model. For the conformal model, we introduce two.

We extend \(\mathbb{R}^n\) with unit vectors \(e_+\) and \(e_-\) and construct \(\mathbb{G}^{n+1,1}\) by defining \(e_+^2 = 1\) and \({e_{-}}^2 = -1\). (The other basis vectors contribute to the inner product in the standard way.)

Define the origin \(o = (e_- - e_+)/2\) and infinity \(\infty = e_- + e_+\). These are null vectors:

\[ o^2 = \infty^2 = 0 \]

and we have:

\[ o \cdot \infty = -1 \]

Define \(E = o \wedge \infty\), which is \(-e_+ e_-\). This is called the Minkowski plane.

Some authors prefer to define the above so that their dot product is 1, in which case we must flip signs in some of our later formulas, such as the coefficient of \(\infty\) in our point representation and the exponent of the translation map.

In practice, we think of our space as \(\mathbb{R}^n\) extended by two orthogonal null vectors whose dot product is \(-1\). We write a multivector as if \(o, \infty\) are basis vectors, but we must be mindful that their products are not part of the canonical basis; in fact, they’re not even bivectors: \(o\infty = -1 + E\) and \(\infty o = -1 - E\).

To keep our code simple we stick with \(e_-\) and \(e_+\) in our implementation. Recall we represent basis vectors with single characters; we pick '+' for \(e_+\) and '-' for \(e_-\). We handle the latter vector specially so its norm evaluates to -1 instead of 1, but otherwise our geometric product function remains unchanged.

mul :: Multivector -> Multivector -> Multivector
mul x y = M.filter (not . (~== 0)) $
  M.fromListWith (+) $ basisMul <$> M.assocs x <*> M.assocs y

-- Multiply two basis blades: concatenate their strings and gnome-sort the
-- characters, flipping the sign for each swap of distinct characters; a
-- repeated character is contracted away, with an extra sign flip when it is
-- '-' because e_- squares to -1.
basisMul (s, m) (t, n) = gnome [] (s ++ t) (m * n) where
  gnome pre (a:b:rest) c
    | a < b  = gnome (pre ++ [a]) (b:rest) c
    | a == b = back pre rest $ bool c (-c) $ a == '-'
    | a > b  = back pre (b:a:rest) (-c)
    where
      back []  rest c = gnome []         rest            c
      back pre rest c = gnome (init pre) (last pre:rest) c
  gnome pre rest c = (pre ++ rest, c)

inf :: Multivector
inf = [("+", 1), ("-", 1)]
o :: Multivector
o   = [("+", -0.5), ("-", 0.5)]

prop_conformalIdentities =
  mul inf inf == [] &&
  mul o   o   == [] &&
  mul o   inf == [("", -1), ("+-", -1)] &&
  mul inf o   == [("", -1), ("+-", 1)]
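
As a quick sanity check, the scalar and \(e_-\) components alone reproduce the 1D complex numbers \(\mathbb{G}^{0,1}\) from earlier; for example, \((1+2i)(3+4i) = -5+10i\):

prop_complexSubalgebra =
  mul [("", 1), ("-", 2)] [("", 3), ("-", 4)] == [("", -5), ("-", 10)]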

We represent a point \(\newcommand{\p}{\vv{p}} \newcommand{\q}{\vv{q}} \p\) by any nonzero scalar multiple of:

\[ o + \p + \frac{1}{2} \p^2 \infty \]

Hence we can recover \(\p\) from a representation \(p\) via:

\[ \p = (\frac{p}{-p\cdot\infty} \wedge E) E^{-1} \]

This is just saying we can recover the original point by:

  1. Normalizing by dividing by the coefficient of \(o\), which we extract by taking the dot product with \(-\infty\).

  2. Zeroing the coefficients of \(o\) and \(\infty\), that is, computing the rejection with respect to the bivector \(E\).

While this equation is mathematically elegant, in our fromCGA function, we manipulate the coefficients directly instead:

toCGA :: [Double] -> Multivector
toCGA xyz = M.fromList $ ("-", t + 0.5) : ("+", t - 0.5) : zip (pure <$> "xyz") xyz
  where t = 0.5 * sum (map (^2) xyz)

-- Assumes input represents a point.
fromCGA :: Multivector -> [Double]
fromCGA m = map (get . pure) "xyz" where
  d     = m!"-" - m!"+"
  get s = m!s / d
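
As a sanity check, here is the recovery formula above in code (an extra prop; \(E\) appears as minkowski, defined later):

prop_rejection = and $ zipWith (~==) [3, -1, 2] recovered
  where
    p         = ((-2) *) <$> toCGA [3, -1, 2]       -- an unnormalized representative
    normed    = (/ negate (dot p inf ! "")) <$> p   -- divide by the coefficient of o
    euclid    = wedge normed minkowski `mul` minkowski  -- reject from E; E^{-1} = E
    recovered = map ((euclid !) . pure) "xyz"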

A bit of algebra yields the identities:

  • \(p^2 = 0\) (so \(p\) is null)

  • \(p \cdot q = -\frac{1}{2}|\p - \q|^2\)

  • \((p - q)^2 = (\p - \q)^2\)

prop_pIsNull x y z = null $ join mul $ toCGA [x, y, z]

prop_pDotqIdentity px py pz qx qy qz = mul p q!"" == -d2 / 2
  where
    p = toCGA [px, py, pz]
    q = toCGA [qx, qy, qz]
    d2 = sum $ map (^2) $ zipWith (-) [px, py, pz] [qx, qy, qz]

prop_pMinusqSquared px py pz qx qy qz =
  join mul (M.unionWith (-) p q) == [("", d2)] where
    p = toCGA [px, py, pz]
    q = toCGA [qx, qy, qz]
    d2 = sum $ map (^2) $ zipWith (-) [px, py, pz] [qx, qy, qz]

As in the homogeneous model, \(p\) is normalized, and the representation is homogeneous, that is, \(\lambda p\) represents the same point \(\p\) for any nonzero \(\lambda\).

Lines, planes, circles, spheres, …

The above quickly leads to dual representations of certain geometric objects.

The dot product identity implies \(p - q\) is a dual representation of the midplane between \(\p\) and \(\q\), for then \(x \cdot p = x \cdot q\) implies \(\vv{x}\) is equidistant from \(\vv{p}\) and \(\vv{q}\).

The same identity implies a sphere with center \(\vv{c}\) and radius \(\rho\) is given by \(x\cdot c = - \frac{1}{2} \rho^2\), which is equivalent to \(x\cdot (c-\frac{1}{2}\rho^2\infty) = 0\). Thus a dual representation of a sphere is simply the vector

\[ \sigma = c - \frac{1}{2}\rho^2\infty \]

We find \(\rho^2 = \sigma^2\), and that for a point \(p\), we have \(2p\cdot \sigma = \rho^2 - (\p - \vv{c})^2\), hence the sign of \(p\cdot\sigma\) tells us whether \(\p\) is inside, on, or outside the sphere.
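
For example, a quick check (an extra prop, using the gAdd helper defined later) for the sphere of radius 5 centered at \((0, 0, 7)\), which we will meet again below:

prop_dualSphere = dot x sigma == [] && join mul sigma ! "" ~== 25
  where
    sigma = gAdd (toCGA [0, 0, 7]) ((* (-12.5)) <$> inf)  -- c - (rho^2/2) inf
    x     = toCGA [3, 4, 7]                               -- a point on the sphere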

An alternative dual representation is to use the center \(\vv{c}\) and some point \(\p\) on the sphere:

\[ \sigma = c - \frac{1}{2}\rho^2\infty = -(p\cdot\infty)c + (p\cdot c)\infty = p \cdot (c\wedge \infty) \]

A dual plane with normal vector \(\vv{n}\) and distance \(\delta\) from the origin is the vector \(\vv{n} + \delta \infty\). We can extract \(\delta\) from this vector by taking the dot product with \(-o\).

A point \(\p\) lies in this plane if \(p\cdot (\vv{n}+\delta\infty) = 0\), which we can rearrange to \(\delta = - p\cdot\vv{n} / (p\cdot\infty)\). Thus a dual plane with normal \(\vv{n}\) through \(\p\) is:

\[ \pi = -(p\cdot\infty)\vv{n} + (p\cdot\vv{n}) \infty = p\cdot(\vv{n}\wedge\infty) \]
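
A similar check for the plane \(z = 3\) through the point \((1, 2, 3)\) with normal \(\z\):

prop_dualPlane = toCGA [1, 2, 3] `dot` wedge [("z", 1)] inf
  == gAdd [("z", 1)] ((3 *) <$> inf)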

A circle is the intersection of a sphere and a plane, and since they span the whole space, we can compute their meet with:

\[ \sigma \cap \pi = (c - \frac{1}{2} \rho^2 \infty)\wedge(c\cdot(\vv{n}\wedge\infty)) \]

We find \((\sigma\cap\pi)^2 = -\rho^2\).

We’ll postpone proofs until the section on covariance below.

A dual line containing the point \(\p\) and orthogonal to the bivector \(\vv{B}\) is:

\[ \lambda = p \cdot (\vv{B} \infty) \]

As with the homogeneous model, direct representations of subspaces are incredibly simple. Unlike the homogeneous model, spheres of all dimensions can be considered subspaces in a manner of speaking that will be evident from the table below.

Subspace | Equation
flat point; \(\p\) and \(\infty\) | \(p \wedge \infty\)
line through \(\p, \q\) | \( p \wedge q \wedge \infty \)
plane through \(\p, \q, \newcommand{\r}{\vv{r}} \r\) | \( p \wedge q \wedge r \wedge \infty \)
1D circle; two points \(\p, \q\) | \(p \wedge q\)
circle through \(\p, \q, \r\) | \( p \wedge q \wedge r \)
sphere through \(\p, \q, \r, \vv{s}\) | \(p \wedge q \wedge r \wedge s\)

These extend naturally to higher dimensions.

In the conformal model, a line and a plane meet at two points, one finite and one infinite, that is, some point \(\p\) and \(\infty\). Thus \(\lambda^* \cdot \pi\) is a flat point \(p \wedge \infty\), which represents a pair consisting of some point \(\p\) and infinity. Since \(p \wedge \infty = E + \p \infty\), we can extract \(\p\) with:

\[ \p = (o (p \wedge \infty) \wedge E) E^{-1} \]

As above, in code it’s simpler to read the coefficients directly.

A line \(L\) satisfies \(L^2 = (\p - \q)^2\). It can also be defined by \(p \wedge \vv{a} \wedge \infty\) where \(\vv{a}\) is a vector parallel to the line.
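
For instance (an extra prop, using the lineCGA helper defined below), the line through \((0, 1, 0)\) and \((0, 1, 1)\) squares to 1, the squared distance between its two defining points:

prop_lineSquared = join mul (lineCGA [0, 1, 0] [0, 1, 1]) ! "" ~== 1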

A plane \(P\) satisfies \(A^2 = -(P/2)^2\) where \(A\) is the area of the triangle defined by \(\p, \q, \r\). It can also be defined by \(p \wedge \vv{a} \wedge \vv{b} \wedge \infty\) where \(\vv{a}\wedge\vv{b}\) is parallel to the plane.

A line or circle can intersect a sphere at two points. We call two points a 1D circle, and represent them with \(R = p \wedge q\). Its center is \(R\infty R\), and its radius \(\rho\) satisfies \(\rho^2 = R^2 / (R\wedge\infty)^2\). We can wedge a 1D circle with \(\infty\) to obtain Plücker coordinates, the first three of which represent a multiple of \(\q - \p\). Write \(\vv{v} = E \cdot (R \wedge \infty)\). Then the points of intersection are:

\[ R\infty R \pm \rho \vv{v} / |\vv{v}| \]

Fontijne and Dorst state that \(4 \rho^4 = R^2\), and recommend recovering \(p, q\) by computing:

\[ (R \pm \sqrt{R^2}) / (\infty \cdot R) \]

Their radius formula only works if the 1D circle is built from normalized points, but its sign appears to suffice for testing if the line intersects with a sphere. In contrast, their formula for recovering the points \(p, q\) works in general.

Since \(\infty \cdot R\) is a vector, its inverse is the same vector multiplied by a constant. As we’ll ultimately normalize to recover \(\p\) and \(\q\), we can ignore the constant.

The center of a circle \(C\) is \(C\infty C\), and its radius \(\rho\) satisfies \(\rho^2 = -C^2 / (C\wedge\infty)^2\).

The center of a sphere \(S\) is \(S\infty S\), and its radius \(\rho\) satisfies \(\rho^2 = S^2 / (S\wedge\infty)^2\).

For example, consider a sphere of radius 5 centered at \((0, 0, 7)\), and the line through \((0, 1, 0)\) and \((0, 1, 1)\). They intersect at \((0, 1, 7 \pm \sqrt{24})\), and our formulas should agree:

dual = (`mul` M.singleton "zyx-+" 1)

lineCGA p q = toCGA p `wedge` toCGA q `wedge` inf

meet a b = dual a `dot` b

minkowski :: Multivector
minkowski = M.singleton "+-" (-1)

gAdd :: Multivector -> Multivector -> Multivector
gAdd = M.unionWith (+)

prop_lineMeetSphere = fromCGA (r `mul` inf `mul` r) == [0, 1, 7]
  && and (zipWith (~==) p [0, 1, 7 + sqrt 24])
  && and (zipWith (~==) q [0, 1, 7 - sqrt 24])
  where
    s = foldl1' wedge $ toCGA <$>
      [[0, 0, 2], [0, 0, 12], [0, 5, 7], [5, 0, 7]]
    r = s `meet` lineCGA [0, 1, 0] [0, 1, 1]
    p = fromCGA $ mul (gAdd r (       sqrt <$> r `mul` r)) $ inf `dot` r
    q = fromCGA $ mul (gAdd r ((0 -) . sqrt <$> r `mul` r)) $ inf `dot` r
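
We can also check the center and radius formulas against the same sphere (another extra prop):

prop_sphereCenterRadius = and (zipWith (~==) [0, 0, 7] center) && radiusSq ~== 25
  where
    s        = foldl1' wedge $ toCGA <$>
      [[0, 0, 2], [0, 0, 12], [0, 5, 7], [5, 0, 7]]
    center   = fromCGA $ s `mul` inf `mul` s
    radiusSq = (join mul s ! "") / (join mul (wedge s inf) ! "")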

Conformal Operations

All conformal operations are easy to express:

Transformation | Equation
rotation in \(\vv{i}\) by \(\theta\) | \(e^{-\vv{i}\theta/2} p e^{\vv{i}\theta/2}\)
translation: \(\p \mapsto \p + \vv{t}\) | \(e^{-\vv{t}\infty/2} p e^{\vv{t}\infty/2}\)
dilation: \(\p \mapsto \alpha\p\) | \(e^{E \ln\alpha/2} p e^{-E \ln\alpha/2}\)
reflection: plane normal \(\vv{n}\) | \(-\vv{n} p \vv{n}^{-1}\)
inversion: \(\p \mapsto \p^{-1}\) | \(-(o - \frac{1}{2} \infty) p (o - \frac{1}{2} \infty)^{-1}\)

Rotation from the vector model works unchanged, that is, we derive the rotor from the 2-blade representing the plane of rotation and the angle of rotation around the origin.

Reflection is easily verified. I suppose the other operations can be verified algebraically, but the simple equations suggest there are shortcuts.

In practice, for translation we use \(e^{\infty X} = 1 + \infty X\) (valid since \((\infty X)^2 = 0\)), and for dilation we can use \(e^{EX} = \cosh X + E\sinh X\) so the coefficients of \(\infty\) and \(o\) are multiplied by \(\alpha\) and \(\alpha^{-1}\) respectively while the others are unchanged. Also, \(o - \frac{1}{2} \infty\) is simply \(-e_{+}\), which is self-inverse.
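
As a sanity check of the dilation versor with these conventions (an extra prop; \(E\) is minkowski in code), doubling the point \((1, 2, 3)\) should give \((2, 4, 6)\):

prop_dilation = and $ zipWith (~==) [2, 4, 6] $
  fromCGA $ d `mul` toCGA [1, 2, 3] `mul` dInv
  where
    h    = log 2 / 2                              -- ln(alpha)/2 with alpha = 2
    d    = gAdd [("", cosh h)] ((sinh h *) <$> minkowski)            -- e^{E ln 2 / 2}
    dInv = gAdd [("", cosh h)] ((negate (sinh h) *) <$> minkowski)   -- its inverse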

The minus signs for reflection and inversion mean these operations reverse orientations, though in practice we can ignore them due to normalization.

We only need rotation and translation in our cube animation. The following code takes three angles a, b, c and maps a cube to the corresponding location, using reasonable values for the orbital radii:

translate :: [Double] -> Multivector
translate xyz = M.insert ""  1 $ mul inf $
  M.fromList $ zip (pure <$> "xyz") $ map ((-0.5)*) xyz

rotate :: [Double] -> Double -> Multivector
rotate axis theta = [("", -c), ("yz", nx*s), ("zx", ny*s), ("xy", nz*s)]
  where
    c            = cos (theta / 2)
    s            = sin (theta / 2)
    [nx, ny, nz] = map (/ sqrt(sum $ map (^2) axis)) axis

moon :: Double -> Double -> Double -> [[Double]]
moon a b c = fromCGA . transform . toCGA <$> cubeCoords
  where
    transform p = foldl1 mul $ rs ++ [p] ++ (inv <$> reverse rs)
    rs =
      [ translate [0, 0, 12]
      , rotate [0, 0, 1] c
      , translate [8, 0, 0]
      , rotate [1, 0, 0] b
      , translate [0, 2, 0]
      , rotate [0, 1, 0] a
      ]
    cubeCoords = replicateM 3 [-1, 1]
    inv = M.mapWithKey f where
      f [] v = v
      f _  v = -v

We cobble together some code to animate these transformations on this webpage.

#ifdef __HASTE__
-- Cube faces and their colours.
cubeFC = zip
  [[0, 1, 3, 2], [4, 6, 7, 5],
   [0, 4, 5, 1], [2, 3, 7, 6],
   [1, 5, 7, 3], [0, 2, 6, 4]]
  -- Darken, so the white side is visible.
  $ toColor . map (\c -> c * 220 `div` 255) <$>
-- http://the-rubiks-cube.deviantart.com/journal/Using-Official-Rubik-s-Cube-Colors-268760351
  [[0, 0x51, 0xba], [0, 0x9e, 0x60],
   [0xff, 0xd5, 0], [0xff, 0xff, 0xff],
   [0xff, 0x58, 0], [0xc4, 0x1e, 0x3a]]

toColor :: [Int] -> Color
toColor [r, g, b] = RGB r g b

paint canvas tRaw = sequence_ $ renderOnTop canvas . drawFace <$> filter vis cubeFC
  where
  ms = round tRaw :: Int
  ang s = fromIntegral (mod ms (s*1000)) * 2 * pi / fromIntegral (s*1000)
  drawFace (face, rgb) = color rgb . fill . path $ project . (ws!!) <$> face
  ws = moon (ang 2) (ang 5) (ang 23)
  project :: [Double] -> (Double, Double)
  project [x, y, z] = (x/z * 160 + 160, y/z * 160 + 160)
  -- Cull faces pointing away from us.
  vis (face, _) = d!"" <= 0 where
    (u:v:_) = zipWith (zipWith (-)) t ts
    d = foldl1' mul [toGA u, toGA v, M.singleton "zyx" 1, toGA h]
    ts@(h:t) = (ws!!) <$> face
    toGA = M.fromList . zip (pure <$> "xyz")

main = withElems ["canvas"] $ \[cElem] -> do
  Just canvas <- fromElem cElem
  paused <- newIORef False
  let
    anim t = do
      render canvas $ pure ()
      paint canvas t
      b <- readIORef paused
      unless b $ void $ requestAnimationFrame anim
    pause = do
      b <- readIORef paused
      writeIORef paused $ not b
      when b $ void $ requestAnimationFrame anim
  _ <- cElem `onEvent` MouseDown $ const pause
  requestAnimationFrame anim
#endif

Covariance

We’ve taken pains to write our equations in the form \(\pm V p V^{-1}\) for some versor \(V\). For rotations, it implied the Euler rotation theorem. In the conformal model, it implies covariance: conformal operations preserve more than just angles. Up to orientation, they preserve the entire algebraic structure: grades, sums, and products of all kinds (scalar; geometric; inner; outer).

We say "outermorphisms" for linear transformations in the homogeneous model because they preserve the outer product, but there seems to be no equally catchy name ("panmorphism"?) for conformal transformations even though they preserve much more. [Kevin Latendresse suggests "conformism" because hardly anything changes.]

Covariance allows us to "prove by example". Let’s walk through the process a few times.

Any two points \(\p\) and \(\q\) can be mapped to O = (0,0,0) and X = (1,0,0) via conformal transformation. Let \(x = o + \x + \frac{1}{2}\infty\). If we show the line OX is represented by \(\lambda = o \wedge x \wedge \infty\), then by covariance, the line defined by \(\p, \q\) is directly represented by \(p \wedge q \wedge \infty\) as claimed.

We find \(\lambda = e_+ e_- \x\):

prop_unitSegment = o `wedge` (toCGA [1, 0, 0]) `wedge` inf == [("+-x", 1)]

and a point \(\vv{a}\) satisfies \(a \wedge \lambda = 0\) if and only if both its \(y\)- and \(z\)-coordinates are 0, thus \(\lambda\) represents the line OX.

We also find \(\lambda = o \wedge \x \wedge \infty\):

prop_rawUnitSegment = o `wedge` [("x", 1)] `wedge` inf == [("+-x", 1)]

thus again by covariance, the line through \(\p\) and parallel to \(\x\) is given by \(p \wedge \x \wedge \infty\). We’ve implicitly used the following fact: if we apply a translation versor \(1\pm \vv{a} \infty\) to a vector \(\p \in \mathbb{R}^n\), then the result is \(\p - \delta\infty\) for some \(\delta \in \mathbb{R}\) that gets annihilated by an outer product.

We have \(x \cdot o = -\frac{1}{2}\). Suppose we dilate by \(\alpha\), an operation we shall denote \(D_\alpha\). Then \(D_\alpha(x) \cdot D_\alpha(o) = -\frac{1}{2}\). Recall dilation multiplies the coefficient of \(o\) by \(\alpha^{-1}\), so normalizing yields: \(a \cdot o = -\frac{1}{2}\alpha^2 \) where \(a\) is the normalized representation of \(\vv{a} = \alpha \vv{x}\). Since translation and rotation have no effect on normalization, we derive \(p \cdot q = -\frac{1}{2} \alpha^2 = -\frac{1}{2} (\p - \q)^2\), an identity we encountered above.

Let \(R = x \wedge x'\), where \(x' = o - \x + \frac{1}{2}\infty\) represents the point \(-\x\); this is a 1D circle of radius 1 centered at the origin. We find \(R \cdot R = 4\). Dilation by \(\rho\) and normalizing shows any 1D circle \(R\) of radius \(\rho\) satisfies \(R \cdot R = 4 \rho^4\); again, translation and rotation preserve normalization.
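
In code (an extra prop), with the second factor representing \((-1, 0, 0)\):

prop_unitPointPair = join mul (toCGA [1, 0, 0] `wedge` toCGA [-1, 0, 0]) ! "" == 4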

We can prove our dual line equation above by showing it holds for \(p = o\) and \(\vv{B} = \x\y\).

We can prove the other formulas for direct representations. For example, we can transform a circle to a circle of radius 1 centered at the origin via a translation and dilation. Then a point \(\p\) on this circle is represented by \(e_- + \p\), hence for three distinct points we have \(C = p\wedge q \wedge r = e_- (\p\wedge\q + \q\wedge\r + \r\wedge\p) + \p\wedge\q\wedge\r\); the bivector sum is nonzero because distinct points on a circle are never collinear. The outer product of this and a vector \(a\) is zero if and only if the coefficient of \(e_+\) vanishes, which forces \(|\vv{a}| = 1\), and \(\vv{a}\) lies in the plane of the circle; thus in general \(p\wedge q\wedge r\) is indeed the direct representation of a circle.

On the unit circle, algebra shows \(C\infty C\) is a nonzero multiple of \(o\), which implies in general \(C \infty C\) is the center of the circle. (We have \((C \infty C) \cdot p = (C \infty C) \cdot q = (C \infty C) \cdot r\) for the unit circle, and these hold under any conformal transformation; normalization affects each expression equally.)

Consider two lines meeting at the origin: \(l_i = o \wedge p_i \wedge \infty\) for \(i = 1, 2\). Then we can show the cosine of the angle between them is given by:

\[ \frac{l_1 \cdot l_2}{|l_1| |l_2|} = \frac{\p_1 \cdot \p_2}{|\p_1| |\p_2|} \]

By covariance, the left-hand side formula works for all intersecting lines in general. In fact, since inversion is conformal, the formula remains valid if we replace one or both lines with the equation of a circle.
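
For instance (one more extra prop), the lines through the origin along \((1, 0, 0)\) and \((1, 1, 0)\) meet at 45 degrees:

prop_lineAngle = c ~== (1 / sqrt 2)
  where
    line p = o `wedge` toCGA p `wedge` inf
    l1     = line [1, 0, 0]
    l2     = line [1, 1, 0]
    norm l = sqrt $ abs $ dot l l ! ""
    c      = dot l1 l2 ! "" / (norm l1 * norm l2)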

Covariance implies reflecting a line \(l\) in a plane \(\vv{n}\) is:

\[ -\vv{n} l \vv{n}^{-1} \]


Ben Lynn blynn@cs.stanford.edu 💡