Every year that I teach the numerical analysis sequence, we end the year with a project on image compression. This year, the students used a technique called Principal Component Analysis (PCA) to sort through large datasets of images, looking for common structure in the image data. Once they discovered the structure, they could use it to compress the images by storing only some of the pixel data and using what they knew about images to reconstruct the rest. The technique doesn’t work as well on images which are already compressed with JPEG, since that method has already discarded some detail.
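The core of the trick can be sketched in a few lines of NumPy. (This is an illustrative sketch of PCA-style compression, not the students’ actual code, and the “image” here is synthetic.)

```python
import numpy as np

def pca_compress(image, k):
    """Keep only the top-k principal components of a grayscale image.

    A rough sketch of the PCA idea: center the columns, take an SVD,
    and reconstruct from the leading k singular vectors.
    """
    mean = image.mean(axis=0)
    centered = image - mean
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :] + mean

# Toy example: a smooth synthetic "image" compresses extremely well,
# since it has low rank.
x = np.linspace(0, 1, 64)
img = np.outer(np.sin(3 * x), np.cos(2 * x))
err = np.linalg.norm(img - pca_compress(img, 5)) / np.linalg.norm(img)
```

Real photographs aren’t exactly low-rank, of course, which is why the students measured quality with A/B comparisons rather than a single error number.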

But the results on uncompressed data from a RAW file? Pure genius. Kristen Bach of Treehouse and beautyeveryday and Karen Gerow of Double Helix STEAM Academy donated some of their very excellent photography for the students to try their work on.

After a semi-rigorous set of A/B comparisons between different compressions of various images, the class decided that the best results were due to Fred Hohman (in the 50% compression category, meaning that Fred used half of the image to predict the other half), Irma Stevens (in the 90% compression category, meaning that Irma used 10% of the image data to predict the rest) and Ke Ma (in the 99% compression category, meaning that Ke used only 1% of the image data to predict the rest).

Here are their results!

I just finished uploading a large collection of my tight knots and links to Thingiverse, where they can be downloaded for 3d printing or sent to Shapeways or another service. The ever-awesome Laura Taalman did a bunch of these over the summer, and I was so inspired that ever since I’ve wanted to put the whole collection online somewhere.

Hopefully, someone will actually print these! Here’s a sort of random selection of some of the models available for your printing pleasure…

After another couple of days messing about, I finally understand a few more things about LuxRender.

1.) You can’t make a LightSource of type “area”. Despite the fact that this is (ahem) nowhere in the documentation, you have to use AreaLightSource. The AreaLightSource seems to be a LOT less powerful than a “direct” light source, requiring changes in gain on the order of thousands to see them both in the same scene. This may be because the “direct” light is taken to be something like sunlight?

2.) LuxRender reliably crashes at the end of each and every render if you use hybrid (CPU/GPU) mode. I’m not sure whether this is because it’s insisting on picking up the Intel Iris Pro GPU on my MacBook as well as the NVIDIA card, but I can’t seem to find a way to lock out the Iris Pro.

3.) Despite the fact that “sphere” primitives are described as unsupported in the hybrid renderer, they actually seem to work just fine in 1.3.1 (although they do throw a warning, which may just mean that the CPU has to get involved?).

4.) Texture mapping meshes depends on you providing explicit uv coordinates for the vertices. In these coordinates, the dimensions of the texture image seem to be [0,1] x [0,1], regardless of the pixel dimensions of the image.

5.) Although per-vertex normals are described as “optional” in the mesh documentation, and a mesh I built by hand worked fine without them, a PLY mesh without vertex normals doesn’t seem to render at all.

After all this, I finally managed to make the example I was looking for, which will be part of the MATH 4510/6510 class next semester. It’s a collection of 750 spheres distributed normally in space, together with 70 glowing spheres distributed according to a scaled normal.
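A sketch of how a scene like this might be generated, with Python standing in for the eventual Mathematica code. The scene-file syntax follows the AreaLightSource/Shape conventions noted above, but the exact parameter names are from memory and may need adjusting against the LuxRender docs.

```python
import numpy as np

rng = np.random.default_rng(42)
centers = rng.normal(scale=1.0, size=(750, 3))  # the main cloud
glow = rng.normal(scale=2.0, size=(70, 3))      # scaled normal, glowing

def sphere_block(center, radius=0.05, glowing=False):
    """Emit one sphere in (roughly) LuxRender scene-file syntax."""
    lines = ["AttributeBegin",
             "  Translate %f %f %f" % tuple(center)]
    if glowing:
        # Glowing spheres are area lights (see note 1 above).
        lines.append('  AreaLightSource "area" "color L" [1.0 0.8 0.6]')
    lines.append('  Shape "sphere" "float radius" [%f]' % radius)
    lines.append("AttributeEnd")
    return "\n".join(lines)

scene = "\n".join([sphere_block(c) for c in centers] +
                  [sphere_block(c, glowing=True) for c in glow])
```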

The resulting image is kind of sleek and dramatic and wonderful, I think. I found that the noise in the image was pretty much gone at 500 S/p. (This is an 800×800 render, and I got to that point in about 10 minutes on the left.) The one on the right has a good deal more light, and required half an hour or so to get to that point. (The image shown is at something closer to 800 S/p, and required some manual balancing of the light sources to look right.) I was more than prepared to continue all night, but it doesn’t seem needed.

My book on Unity arrived today, so that seems like a natural time to switch gears and plunge into my next tech project: writing the 2.0 version of TaylorTurret!

So I’ve been working for a few days on understanding LuxRender, which if you’ve never seen it is a quite impressive global illumination renderer. I’ve been trying to piece together a Mathematica interface which will allow me to take quick snaps of tubes, knots, random polygons and the like, and have been completely befuddled.

Aside from the usual confusion about coordinate systems and the like, the real thing that threw me about physically realistic rendering was the idea of tone mapping. The problem is that images that you actually see in the real world have very high dynamic range: there are regions in the image where lots and lots of light is coming into your eye, and others where very little is. That causes a real problem for a renderer, which comes up with brightness values covering a far larger range than anything you can display in a reasonable file format (though EXR seems to be a graphics format designed for exactly this sort of thing). I’ve figured out the settings enough to produce the image below right from the data below left:
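For the curious, the classic global version of this idea (the Reinhard operator) is only a few lines. This is a generic sketch of the technique, not LuxRender’s actual tonemapper:

```python
import numpy as np

def reinhard_tonemap(radiance, key=0.18):
    """Map high-dynamic-range radiance values into [0, 1).

    A common global operator (after Reinhard et al.): scale the image
    by its log-average luminance, then compress with L / (1 + L).
    """
    eps = 1e-6  # avoid log(0) on black pixels
    log_avg = np.exp(np.mean(np.log(radiance + eps)))
    scaled = key * radiance / log_avg
    return scaled / (1.0 + scaled)

# Five orders of magnitude of brightness squeeze into displayable range:
hdr = np.array([0.01, 1.0, 100.0, 10000.0])
ldr = reinhard_tonemap(hdr)
```

The key point is that the mapping is monotone (brighter stays brighter) but strongly compressive at the top end, which is what lets a sunlit window and a shadowed corner coexist in one 8-bit image.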

which I call “pink popcorn explosion”. That seems like a pretty good day’s work to me! Of course, the image on the right suffers from the usual problems of bad photorealistic rendering: too few lights, shadows are too harsh, color choices are kind of random, and there’s no sense of scale anywhere in the image. Plus the camera angle or the data seems to have changed between renders. Still, not bad for a first attempt!

So I know I’m dating myself (badly) here, but a lot of my personal art code still relies on PovRay in order to make “nice” renders of things. The problem is that PovRay has been entirely dead for the past six or seven years, and now doesn’t even compile and install on my exciting new Mavericks laptop. Which meant it was time for a change.

The criteria were pretty simple: I had to be able to render from *Mathematica*, since most of my work these days uses it in some form or another. And I had to be able to use it as a scriptable command-line interface from Ridgerunner, in case I want to go back and make animations and so forth.

Some web research revealed a couple of candidates:

- The RenderMan clones Pixie (www.pixierender.org) and Aqsis (www.aqsis.org). After all, *Mathematica* outputs RIB files, so in theory I could just render the RIB file directly. Unfortunately, Pixie hasn’t been updated since 2009, and Aqsis chokes on *Mathematica*’s RIB output. No luck.
- The “academic” renderer PBRT. This even comes with a *Mathematica* interface, so I figured that I was in good shape! Unfortunately, the *Mathematica* interface is written for *Mathematica* 3.0 and uses various features that are by now not only deprecated, but actually not functional. I briefly considered fixing the interface, but then I discovered…
- LuxRender. LuxRender is a PBRT fork which includes a number of nice enhancements, and seems to have an active user community. Plus, the scene file format is well-documented, and includes direct support for the PLY mesh format. PLY isn’t something that I currently support in tube, but there’s a reasonable straight-C selection of source code available from the old days, and it wouldn’t be so hard to get PLY output from tube directly if I needed to add it in. Further, *Mathematica* at least claims to support Graphics3D output as PLY, so all I’d have to translate would be the camera stuff.

This means that today’s project is going to be to try to hack together some kind of *Mathematica* -> LuxRender interface by exporting the scene geometry as PLY and writing out the camera parameters and lighting (and so forth) as text directly into a LuxRender scene file. Hopefully, we’ll be able to run LuxRender directly from inside Mathematica so that we can debug the whole thing as needed.
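Since the PLY side of the pipeline is the simplest piece, here’s roughly what an ASCII PLY writer looks like, sketched in plain Python (this is illustrative, not part of tube or the eventual Mathematica interface). Note the per-vertex normals: as observed above, LuxRender seems to want them in the file.

```python
def write_ply(path, vertices, normals, faces):
    """Write an ASCII PLY mesh with per-vertex normals."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(vertices))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property float nx\nproperty float ny\nproperty float nz\n")
        f.write("element face %d\n" % len(faces))
        f.write("property list uchar int vertex_indices\n")
        f.write("end_header\n")
        for v, n in zip(vertices, normals):
            f.write("%f %f %f %f %f %f\n" % (tuple(v) + tuple(n)))
        for face in faces:
            f.write("%d %s\n" % (len(face), " ".join(map(str, face))))

# A single triangle in the xy-plane with upward-pointing normals:
write_ply("tri.ply",
          [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          [(0, 0, 1)] * 3,
          [(0, 1, 2)])
```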

I just finished making a small contribution to this image (hopefully to appear on the cover of the Proceedings of the National Academy of Sciences) built by Tammy Cantarella and Aaron Abrams.

The structure illustrates some of Aaron’s research on Dehn functions. Aaron’s work is way deeper than this example, but the example is still pretty neat. It shows a space where a curve can enclose *exponentially* much area compared to its length (as opposed to curves in the plane, which can only enclose area proportional to the square of their length). The rectangles on the complex above are each considered to have the same area. At each branch point, the lines split off along the three leaves of the tree in cyclically repeating order, so the number of lines on each leaf is one-third of the number on the “parent” leaf. This means that a curve which goes far out from the central spine can enclose a lot of rectangles near the spine with a very short length.

Tammy made this picture by actually building the tree from paper and carefully photographing it. Then she edited the image in Photoshop and added the horizontal and vertical lines. Aaron provided the math, and I provided a little design advice and mostly translated between the two of them.

Kudos to Tammy and Aaron– this looks awesome!

When ambient light is reflected and reradiated in an image, finding the final distribution of light in the image requires the solution of a very large linear algebra problem. The total light *R_i* radiating from a face *i* is given by R_i = E_i + Σ_j F_ij R_j, where *E_i* is the light emitted by face *i* and the *F_ij* are “view factors” describing the relative geometry of faces *i* and *j*. In the scene on the left, there are 64 polyhedra, each with 32 faces. The resulting system of 2048 linear equations in 2048 variables is solved in several tenths of a second using an iterative method implemented in Mathematica, but would take much longer with a standard solver. In a real application, like a scene from an animated movie, there would be a few million polygons in the scene and the resulting solution would require solving a system *Ax* = *b* where the matrix *A* was (say) 2,000,000 by 2,000,000. Even storing such a matrix (4 trillion entries) would take tens of terabytes of memory! Luckily, such matrices are very sparse, so they are (barely) tractable with good computing hardware.
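The iterative method is just a fixed-point iteration on the radiosity equation itself: start from the emitted light and repeatedly bounce it through the view factors. A minimal NumPy sketch (the view-factor matrix here is random stand-in data, not real scene geometry):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048  # 64 polyhedra x 32 faces, as in the scene

# Stand-in view-factor matrix: nonnegative, with row sums below 1
# (a face reradiates less than it receives), which makes the
# fixed-point iteration below a contraction, so it must converge.
F = rng.random((n, n))
F *= 0.9 / F.sum(axis=1, keepdims=True)
E = rng.random(n)  # emitted light E_i for each face

# Jacobi-style iteration: repeatedly apply R <- E + F R.
R = E.copy()
for _ in range(300):
    R = E + F @ R
```

Each step is just one matrix-vector multiply, which is why this scales to sparse million-by-million systems where forming and factoring the full matrix never could.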

The Flocktree is a sculptural installation of a flock of pigeons cast in an expanding urethane foam supported and grouped by a collection of nesting aluminum frames. The interplay between the rigorous order imposed by the frames and the fluid organic shape of the flock raises questions about the way the viewer makes sense of collections of objects. The groupings of birds created by the frames form an octree structure.

The Flocktree was made possible by the financial support of the ICE program at UGA and installed in the courtyard at the Floor Group at 159 Oneta Street in Athens, GA. The Flocktree was created in collaboration with my brother Luke Cantarella, a scenic designer and artist living in Brooklyn, NY.

The Flocktree structure divides a group of 17 bird sculptures into 6 groups by grouping them into front/back, left/right, and top/bottom groups, then enclosing each subgroup (such as the top/right/front group) in an aluminum frame. The entire flock is contained in a larger frame as shown below.

This process for dividing objects into groups is used in computational geometry and computer graphics to quickly process large collections of geometric objects.
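Here is the Flocktree idea in miniature: classify each point as front/back, left/right, and top/bottom of the flock’s center, giving up to eight subgroups, which is exactly the first level of an octree. (The coordinates below are made-up illustrations, not the actual bird positions.)

```python
def octant_groups(points):
    """Group 3D points into the octants around their centroid."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    groups = {}
    for p in points:
        # Each key is a (front?, right?, top?) triple of booleans.
        key = (p[0] >= cx, p[1] >= cy, p[2] >= cz)
        groups.setdefault(key, []).append(p)
    return groups

birds = [(0, 0, 0), (1, 1, 1), (1, 0, 1), (-1, -1, 0), (2, 2, 2)]
groups = octant_groups(birds)
```

An octree repeats this subdivision recursively inside each occupied octant, which is what lets graphics code answer “what is near this point?” without checking every object in the scene.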

Constructing the frames for the boxes poses an interesting challenge: how do you build the best box possible from angle stock without mitering and welding it? Each of the corners is constructed in a triple-overlapping pattern shown in the gallery below. This means that around a face of each box, the pieces overlap in a spiral pattern, as shown in the middle frame. These spirals can be right-handed or left-handed. Can we pick a direction for the spiral on each face so that they are all compatible?

As the pictures show, all of the spirals are compatible as long as each one points outward from the cube. In a frame built like this, all the corners are the same. In practice, this makes the cube design particularly strong and accurate, since each piece of angle stock is bent just a bit out of shape and the pieces all press together symmetrically at the corners.

This design is a simple example of a mathematical concept called orientability. A surface, like the cube, is said to be orientable if on each face you can choose an outward direction so that all of these directions are compatible. Any surface like this can be built with the spiraling frames above. On the other hand, a nonorientable surface, like a Möbius strip, couldn’t have been built this way!

In August of 2007 I was lucky enough to give a talk on mathematics and quilting to the Cotton Patch quilters here in Athens. The talk was a lot of fun, and taught me quite a bit about quilts! You can read the slides at the link above.

In 2006 I did a series of images of tight knots for the art magazine Cabinet. They appeared in issue 20, *Ruins*, accompanying an article by Kenneth Millett about knots. These were all rendered with the flaky but amazing Electric Image Animation System. I think they look pretty good: