Friday, November 27, 2015

Global Illumination over Clipmaps

Global illumination is one of my favorite problems. It is probably the simplest, most effective way to make a 3D scene come to life.

Traditional painters were taught to just "paint the light". This was centuries before 3D graphics were a thing. They understood how light bounced off surfaces, picking up color on its way. They would even account for changes in the light as it passed through air.

Going into realtime 3D graphics, we had to forget all of this. We could not just draw the light; it was computationally too expensive. We had to concentrate on rendering subjects made of surfaces and hack the illumination any way we could. Or we could bake the lighting, which looks pretty but leaves us with a static environment. A house entirely closed should be dark; opening a single window could make quite a difference, and what happens if you make a huge hole in the roof?

For sandbox games this is a problem. The game maker cannot know how deep someone will dig, or whether they will build a bonfire somewhere inside a building.

There are some good solutions out there for realtime global illumination, but I kept looking for something simpler that would still do the trick. In this post I will describe a method that I consider good enough. I am not sure if this has been done before; please leave me a link if you know of prior work.

This method was somewhat of an accident. While working on occlusion, I saw that determining what is visible from any point of view is a problem similar to finding out how light moves. I will try to explain it using the analogy of a circuit.

Imagine there is an invisible circuit that connects every point in space to its neighboring points. For each point we also need to know a few physical properties: how transparent it is, and how it changes the light's direction and color.

Why use something like that? In our case it was something we were getting almost for free from the voxel data. We saw we could not use every voxel, as that resulted in very large circuits, but the good news was that we could simplify the circuit much the same way you collapse nodes in an octree. In fact, the circuit is just a dual structure superimposed on the octree.
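
To make this concrete, here is a minimal sketch of what a circuit node could look like. All the names here are hypothetical, just for illustration; they are not the engine's actual data structures:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical light-circuit types; not the actual Voxel Farm API.
struct LightLink
{
    uint32_t neighbor; // index of the connected node
    float dir[3];      // unit direction from this node to the neighbor
};

struct LightNode
{
    float transparency;           // 0 = opaque, 1 = fully transparent
    float albedo[3];              // color this node tints bounced light with
    float radiance[3];            // light currently stored at this node
    std::vector<LightLink> links; // connections to neighboring nodes
};
```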

Consider the following scene:

The grey areas represent solid, white is air, and the black lines are an octree (a quadtree in this 2D illustration) that covers the scene at adaptive resolution.

The light circuit for this scene would be something like:

Red arrows mark connections between points where light can travel freely.

Once you have this, you can feed light into any set of points and run the node-to-node light transfer simulation. Each link conducts light based on its direction and the light's direction; each link also has the potential to change the light's properties. It could make the light bounce, change color or be completely absorbed.

It turns out that this converges after only a few iterations. Since the octree has to be updated only when the scene changes, you can run the simulation many times over the same octree, for instance when the sun moves or a dragon breathes fire.
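
Here is a minimal sketch of what the transfer loop could look like, using the hypothetical types from the earlier sketch. It omits the directional term between links for brevity and is not the engine's actual implementation:

```cpp
#include <array>
#include <vector>

// Run the node-to-node transfer for a fixed number of passes. In practice
// a handful of passes is enough for the solution to converge.
void propagate(std::vector<LightNode>& nodes, int passes)
{
    std::vector<std::array<float, 3>> next(nodes.size());
    for (int pass = 0; pass < passes; ++pass)
    {
        for (auto& r : next)
            r.fill(0.0f);
        for (const LightNode& node : nodes)
            for (const LightLink& link : node.links)
            {
                const LightNode& dst = nodes[link.neighbor];
                for (int c = 0; c < 3; ++c)
                {
                    // Transparent receivers pass the light on; opaque ones
                    // re-emit it tinted by their bounce color (albedo).
                    float passed  = node.radiance[c] * dst.transparency;
                    float bounced = node.radiance[c] * (1.0f - dst.transparency)
                                    * dst.albedo[c];
                    next[link.neighbor][c] += passed + bounced;
                }
            }
        for (size_t i = 0; i < nodes.size(); ++i)
            for (int c = 0; c < 3; ++c)
                nodes[i].radiance[c] = next[i][c];
    }
}
```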

To add sunlight we can seed the top nodes like this:
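
In code, the seeding could be as simple as writing the sun color into every node open to the sky before running the simulation (again using the hypothetical types from the sketches above):

```cpp
#include <cstdint>
#include <vector>

// Seed sunlight into the sky-facing nodes before running propagate().
// 'skyNodes' would come from the octree: the top-level nodes open to the sky.
void seedSunlight(std::vector<LightNode>& nodes,
                  const std::vector<uint32_t>& skyNodes,
                  const float sunColor[3])
{
    for (uint32_t i : skyNodes)
        for (int c = 0; c < 3; ++c)
            nodes[i].radiance[c] = sunColor[c];
}
```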

Here is how that looks after the simulation runs. This is a scene of a gorge in some sort of canyon. Sunlight has a narrow entrance:

The light nodes are rendered as two planes showing the light color and intensity.

Here are other examples of feeding just sunlight to a complex scene. Yellow shows the energy picked up from the sunlight.

Taking light bounces into account is then easy. Unlike the sunlight, the bounced light is not seeded from outside; it is produced by the simulation itself.

In the following image you can see the results of multiple light bounces. We made the sunlight pure yellow and all the surfaces bounce pure red:

You can see how the light probes (the boxes) are picking up red light from below. Here is the same setup in a different scene:

This is still a work in progress, but I like the fact that it takes a fraction of a second to compute a full lighting solution, regardless of how complex the scene is. Soon we will be testing this in a forest setting. I miss the green light coming through the canopies from those early radiosity days.

Tuesday, September 29, 2015

All your voxel are belong to you

The same network effect that made cities preferable to the countryside has multiplied as networks have gone electric and digital. But networks are only as important as the stuff they carry. For a digital network, that would be data.

A very interesting property of voxels to me is their simplicity. The simpler it is to produce, share and consume a data format, the better chance it has to multiply across a network.

This is not only because simplicity makes things practical and easy. I believe you can only own those things you understand. Without true ownership of your data, any hope for its democratic production or consumption is lost.

This is how we see people sharing voxel content:

The data format used here is quite simple and hopefully universal at describing 3D content. I'll get to it in a future post.

Friday, September 25, 2015

Another oldie

This one takes me back. I realize I never posted a video of this demo:

I had only posted screenshots here and there. This is Voxel Farm circa 2013. Two voxel years feel like an eternity.

I think it still holds up pretty nicely. I appreciate that the asteroid surfaces have craters and other interesting variations in them. It is not just mindless noise.

Tuesday, September 22, 2015

A bit of recap

I will be announcing some really cool new features soon, which are keeping us super busy. It has been all about giant turtles, photons... quite unreal if you ask me.

So while we wait, here are a couple of things you may have missed.

First, did you know we have made all engine documentation public? You can find it here:

The reference for the entire C++ library is covered in the programmer's section. The actual code is not there (sorry, a license is still required), but you can see all the headers, classes, etc. It may give you a good idea of how the whole thing works and is built.

Also here is an oldie we never got to publish. It shows how a single tool, the selection box, can be used to perform many different tasks.

My favorite bit is around the 5 minute mark, where Cut&Paste is used to reshape an existing arcade. That is a good example of emergent possibilities coming out of very simple mechanics.

We also got two team members to share their work sessions over Twitch. It is mostly Voxel Studio, but it also shows how the assets that go into a Voxel Farm project are created. They stream most days:

Feel free to take a look. Be nice, they are both very gentle creatures.

Thursday, August 27, 2015

Voxel Occlusion

Real time rendering systems (like the ones in game engines) have two big problems to solve: first, determining what is visible from the camera's point of view, and then rendering it.

While rendering is now mostly a solved problem, finding out what is potentially visible remains difficult. There is a long history of hackery on this topic: BSP trees, PVS, portals, etc. (The acronyms in this case make it sound simpler than it is.) These approaches perform well in some cases, then fail miserably in others. What works indoors breaks in large open spaces. To make it worse, these visibility structures take a long time to build. For an application where the content constantly changes, they are a very poor choice, or not practical at all.

Occlusion testing, on the other hand, is a dynamic approach to visibility. The idea is simple: using a simplified model of the scene, we can predict when some portions of the scene become eclipsed by other parts of it.

The challenge is how to do this very quickly. If the test is not fast enough, it could still be faster to render everything than to test and then render only the visible parts. The trick is to find simplified models of the scene geometry. Naturally, these simple, approximate models must cover as much of the original content as possible.

Voxels and clipmap scenes make it very easy to perform occlusion tests. I wrote about this before: Covering the Sun with a finger.

We just finished a new improved version of this system, and we were ecstatic to see how good the occluder coverage turned out to be. In this post I will show how it can be done.

Before anything else, here is a video of the new occluder system in action:

A Voxel Farm scene is broken down into chunks. For each chunk, the system computes several quads (four-vertex polygons) that are fully inscribed in the solid section of the chunk and are as large as possible. A very simple example is shown here, where a horizontal platform produces a series of horizontal quads:
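
For reference, an occluder like this boils down to four coplanar points. A hypothetical representation (not the engine's actual type) could be:

```cpp
// Hypothetical occluder representation; not the engine's actual type.
struct OccluderQuad
{
    float v[4][3]; // four coplanar corners, fully inside the chunk's solid part
};
```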

These quads do not need to be axis-aligned. As long as they remain inside the solid parts of the cell, they can go in any direction. The following image shows occluder quads going at different angles:

Here is how it works:

Each chunk is generated or loaded as a voxel buffer. You can imagine this as a 3D matrix, where each element is a voxel.

The voxel buffer is scanned along each main axis. The following images depict the process of scanning along one direction. Below is a representation of the 3D buffer as a slice. If this were a top-down view, you could imagine it as a vertical wall going at an angle:

For each direction, two 2D buffers are computed. One stores where each ray enters the solid, and the other where it exits.
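
A minimal sketch of that scan for one axis, assuming a fixed-size chunk of boolean voxels (names and sizes are made up for illustration):

```cpp
const int SIZE = 32;   // assumed chunk resolution
const int EMPTY = -1;  // marks rays that never hit solid

// Scan the voxel buffer along +Z. For every (x, y) ray, record the depth
// where it first enters the solid and the depth where it last exits it.
// The same scan is repeated along the other two main axes.
void scanAlongZ(const bool solid[SIZE][SIZE][SIZE],
                int entryDepth[SIZE][SIZE], int exitDepth[SIZE][SIZE])
{
    for (int y = 0; y < SIZE; ++y)
        for (int x = 0; x < SIZE; ++x)
        {
            entryDepth[y][x] = EMPTY;
            exitDepth[y][x] = EMPTY;
            for (int z = 0; z < SIZE; ++z)
                if (solid[x][y][z])
                {
                    if (entryDepth[y][x] == EMPTY)
                        entryDepth[y][x] = z; // first solid voxel hit
                    exitDepth[y][x] = z;      // keep the last solid voxel
                }
        }
}
```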

For each 2D buffer, the maximum solid rectangle is computed. A candidate rectangle can grow if the neighboring point in the buffer is also solid and its depth value does not differ by more than a given threshold.
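
Here is one way that growth rule could look in code, reusing SIZE and EMPTY from the previous sketch. This is a greedy simplification of the rule above, not the engine's exact search:

```cpp
#include <cstdlib>

// Greedily grow a rectangle in a 2D depth buffer starting at a solid seed
// cell (x0, y0). A cell can join if it is solid and its depth stays within
// 'threshold' of its neighbor inside the rectangle, keeping the quad flat.
void growRect(const int depth[SIZE][SIZE], int x0, int y0, int threshold,
              int& outWidth, int& outHeight)
{
    auto fits = [&](int x, int y, int neighborDepth)
    {
        return depth[y][x] != EMPTY &&
               std::abs(depth[y][x] - neighborDepth) <= threshold;
    };
    int w = 1, h = 1;
    bool grew = true;
    while (grew)
    {
        grew = false;
        if (x0 + w < SIZE) // try to add one column on the right
        {
            bool ok = true;
            for (int y = y0; y < y0 + h; ++y)
                ok = ok && fits(x0 + w, y, depth[y][x0 + w - 1]);
            if (ok) { ++w; grew = true; }
        }
        if (y0 + h < SIZE) // try to add one row at the bottom
        {
            bool ok = true;
            for (int x = x0; x < x0 + w; ++x)
                ok = ok && fits(x, y0 + h, depth[y0 + h - 1][x]);
            if (ok) { ++h; grew = true; }
        }
    }
    outWidth = w;
    outHeight = h;
}
```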

Each buffer can produce one quad, shown in blue and green in the following image:

Here is another example, where a jump in depth (5 to 9) makes the green occluder much smaller:

In fact, if we run the function that finds the maximum rectangle again on the second 2D buffer, it will return another quad, this time covering the missing piece:

Once we have the occluders for all chunks in a scene, we can test very quickly whether a given chunk is hidden behind other chunks. Our engine does this using a software rasterizer, which renders the occluder quads to a depth buffer. This buffer can then be used to test every chunk in the scene: if a chunk's area on screen is covered in the depth buffer by a closer depth, the chunk is not visible.

This depth buffer can be very low resolution. We currently use a 64x64 buffer to make sure the software rasterization is fast. Here you can see how the buffer looks:
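
The test against such a buffer can be as simple as the sketch below, assuming the chunk's screen-space bounding rectangle and nearest depth were computed elsewhere (all names are hypothetical):

```cpp
const int DEPTH_RES = 64; // resolution of the software depth buffer

// Returns true if any pixel covered by the chunk's screen-space bounds
// is NOT hidden behind a closer occluder depth.
bool chunkIsVisible(const float depthBuffer[DEPTH_RES][DEPTH_RES],
                    int minX, int minY, int maxX, int maxY, float chunkDepth)
{
    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
            if (depthBuffer[y][x] >= chunkDepth)
                return true; // this pixel is not covered by a closer occluder
    return false; // every pixel is behind an occluder: cull the chunk
}
```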

It is also possible to skip our rasterizer test entirely and feed the quads to a system like Umbra. What really matters is not how the test is performed, but how good and simple the occluder quads are.

While this can still be improved, I'm very happy with this system. It is probably the best optimization we have ever done.
