
Tuesday, April 26, 2016

Geometry is Destiny

In the previous post, I introduced our new land mass generation system. Let's take a look at how it works.

For something as large as a continent, I knew we would need some kind of global generation method. Global methods involve more than just the point of space you are generating: the properties of a given point are influenced by points potentially very far away. Global methods, like simulations, may require you to perform multiple iterations over the entire dataset. I favor global methods for anything larger than a coffee stain in your procedural tablecloth. The reason is they can produce new information, whereas local methods cannot: their output is limited to the seeds fed into the local functions.

The problem with a global simulation is speed. Picking the right evaluation structure is paramount. I wanted to produce maps of approximately 2000x2000 pixels, where each pixel would cover around 2 km, and I wanted this process to run in less than five seconds on a single CPU thread. Running the generation algorithm over individual pixels would not get me there.

The alternative to simulating on a discrete grid (pixels) is to use a graph of interconnected points. A good approach here is to scatter points over the map, compute the Voronoi cells for them, and use the cells and their dual triangulation as the scaffolding for the simulation.
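As a point of reference, the classic setup looks roughly like the sketch below. This uses scipy and is purely illustrative, not the actual Voxel Farm code; the point count and map extent are made up:

```python
# Sketch of the classic Voronoi scaffolding approach (illustrative only).
import numpy as np
from scipy.spatial import Delaunay, Voronoi

def build_voronoi_scaffold(num_points=4096, extent=2000.0, seed=42):
    rng = np.random.default_rng(seed)
    # Scatter random points over the map.
    points = rng.uniform(0.0, extent, size=(num_points, 2))
    cells = Voronoi(points)      # one Voronoi cell per point
    graph = Delaunay(points)     # dual triangulation: the simulation graph
    return points, cells, graph
```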


I had tried this in the past with fairly good results, but there was something about it that did not sit well with me. In order to get pleasing results, the Voronoi cells must be relaxed so they become similar in shape and the dual triangulation is made of fairly regular triangles.

If the goal was to produce a fairly uneven but still regular triangle mesh, why not just start there and avoid the expensive Voronoi generation phase? We would still have implicit Voronoi cells, because they are dual to the mesh.

We started from the most regular mesh possible, an evenly tessellated plane. While doing so we made sure the diagonal edges would not all run in the same direction by flipping their orientation randomly:



Getting the organic feel of the Voronoi-driven meshes from here was simple. Each triangle is assigned a weight, and vertices are pulled into or pushed away from their surrounding triangles depending on these weights. After repeating the process a few times you get something that looks like this:


This is already very close to what you would get from the relaxed Voronoi phase. The rest of the generation process operates over the vertices in this mesh and transfers information from one point to another using the edges connecting vertices.
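Here is a minimal sketch of that scaffolding construction. The function names, weights and iteration counts are illustrative and not the actual implementation, but the two steps are the same: build a regular grid with randomly flipped diagonals, then jitter the vertices using per-triangle weights:

```python
import numpy as np

def build_scaffold_mesh(n=64, seed=7, iterations=4, strength=0.35):
    """Sketch only: regular grid with randomly flipped diagonals, then an
    organic jitter driven by per-triangle weights. Parameter values are
    illustrative, not the actual Voxel Farm implementation."""
    rng = np.random.default_rng(seed)

    # Vertices of an evenly tessellated plane.
    xs, ys = np.meshgrid(np.arange(n + 1), np.arange(n + 1))
    verts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

    def vid(i, j):
        return j * (n + 1) + i

    # Split every cell into two triangles, flipping the diagonal randomly
    # so they do not all run in the same direction.
    tris = []
    for j in range(n):
        for i in range(n):
            a, b, c, d = vid(i, j), vid(i + 1, j), vid(i + 1, j + 1), vid(i, j + 1)
            tris += [(a, b, c), (a, c, d)] if rng.random() < 0.5 else [(a, b, d), (b, c, d)]
    tris = np.array(tris)

    # Each triangle gets a weight; its vertices are pulled toward (or pushed
    # away from) the triangle centroid by that weight. Repeat a few times.
    for _ in range(iterations):
        weights = rng.uniform(-1.0, 1.0, len(tris))
        centroids = verts[tris].mean(axis=1)
        offsets = np.zeros_like(verts)
        counts = np.zeros(len(verts))
        for t, (i0, i1, i2) in enumerate(tris):
            for v in (i0, i1, i2):
                offsets[v] += weights[t] * (centroids[t] - verts[v])
                counts[v] += 1
        verts += strength * offsets / np.maximum(counts, 1)[:, None]

    return verts, tris
```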

With the simulation scaffolding ready, the first actual step in creating the land mass is to define its boundaries. The system allows a user to input a shape, in case you were looking for that heart-shaped continent, but if no shape is provided a simple multiresolution fractal is used. This is a fairly simple stage, where vertices are classified as "in" or "out". The result is the continent shoreline:


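When no user shape is supplied, the classifier could be sketched along these lines. The trigonometric noise is a crude stand-in for a proper multiresolution fractal, and the radial bias that keeps the land away from the map edge is my own assumption:

```python
import numpy as np

def classify_land(verts, seed=3, octaves=5, sea_level=0.0):
    """Sketch only: mark each vertex as land (True) or ocean (False) using a
    simple multiresolution fractal."""
    rng = np.random.default_rng(seed)
    x, y = verts[:, 0], verts[:, 1]

    # Sum a few octaves of random-phase noise, halving the amplitude and
    # doubling the frequency each time.
    value = np.zeros(len(verts))
    amp, freq = 1.0, 1.0 / 64.0
    for _ in range(octaves):
        px, py = rng.uniform(0, 2 * np.pi, 2)
        value += amp * np.sin(x * freq + px) * np.cos(y * freq + py)
        amp *= 0.5
        freq *= 2.0

    # Bias toward land near the map center so the result reads as a continent.
    cx, cy = x.mean(), y.mean()
    radius = np.hypot(x - cx, y - cy)
    value -= 1.5 * radius / radius.max()

    return value > sea_level
```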
Once we have this, we can compute a very important bit of information that will be used over and over later during generation: the distance to the shoreline. This is fairly quick to compute thanks to the fact that we operate in mesh space. For triangle edges that cross the shoreline we set the distance to zero, for edges connected to these the distance is 1, and so on. It is trivial to produce a signed distance if you add for edges on the mainland and subtract for edges in the ocean.
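A vertex-based variant of that distance pass can be sketched as a breadth-first traversal over the mesh edges (again illustrative, not the production code):

```python
from collections import deque

def signed_shore_distance(tris, is_land):
    """Sketch only: breadth-first expansion over mesh edges. Vertices on an
    edge crossing the shoreline get distance 0, each hop inland adds 1 and
    each hop into the ocean subtracts 1."""
    # Build vertex adjacency from the triangle list.
    neighbors = {}
    for a, b, c in tris:
        for u, v in ((a, b), (b, c), (c, a)):
            neighbors.setdefault(u, set()).add(v)
            neighbors.setdefault(v, set()).add(u)

    dist, queue = {}, deque()
    for u, nbrs in neighbors.items():
        if any(is_land[u] != is_land[v] for v in nbrs):
            dist[u] = 0            # this vertex touches the shoreline
            queue.append(u)

    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            if v not in dist:
                d = abs(dist[u]) + 1
                dist[v] = d if is_land[v] else -d
                queue.append(v)
    return dist
```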

It is time to add some mountain ranges. A naive approach would be to use the distance to shore to raise the ground, but this would be very unrealistic. If you look at some of the most spectacular mountain ranges on Earth, they occur pretty close to coastlines. What is going on there?

It is the interaction of tectonic plates that has produced most of the mountain ranges that have captured our imagination. This process is called orogeny, and there are basically two flavors of it accounting for most mountains on Earth. The first is when two continental plates collide and raise the ground. This is what gave us the Himalayas. The second is when the oceanic crust (which is a thinner, New-York-pizza-style crust) sinks below the thicker continental crust. This raises the continental crust, producing mountains like the Rockies and the Andes. Both processes are necessary if you want a plausible distribution of mountains in your synthetic world.

Since we already have the shape of the continental land, it is safe to assume it is part of a plate that originated some time before. Moreover, we can assume we are looking at more than one continental plate. This is what you see when you look at northern India: even though it is all a single land mass, three plates meet in that region, the Arabian, Indian and Eurasian plates.

Picking points fairly inland, we can create fault lines going from these points to the map edge. Again this works in mesh space, so it is fairly quick and the results have the rugged nature we initially imprinted into the mesh:

Contrary to what you may think, this is not a pathfinding algorithm. This is your good old midpoint displacement in action. We start with a single segment spanning from the fault source to the edge of the map. This segment, and each subsequent segment, is refined by adding a point in the middle. This point is shifted along a vector perpendicular to the segment by a random amount. It is also fairly quick to find which triangles are crossed by the segments, so the fault can be incorporated into the simulation mesh.
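A sketch of the midpoint displacement step, with made-up depth and roughness parameters:

```python
import numpy as np

def fault_line(source, edge_point, depth=8, roughness=0.25, seed=11):
    """Sketch only: midpoint displacement from an inland source point to a
    point on the map edge. Each segment is split in the middle and the new
    point is shifted along the segment's perpendicular by a random amount
    proportional to the segment length."""
    rng = np.random.default_rng(seed)
    points = [np.asarray(source, float), np.asarray(edge_point, float)]
    for _ in range(depth):
        refined = [points[0]]
        for a, b in zip(points[:-1], points[1:]):
            d = b - a
            perp = np.array([-d[1], d[0]])                 # perpendicular
            mid = (a + b) / 2.0 + perp * rng.uniform(-roughness, roughness)
            refined += [mid, b]
        points = refined
    return np.array(points)
```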

In this particular case the operation has created three main plates, but we are still missing the oceanic plates. These occur somewhat randomly, as not every shoreline corresponds to an oceanic plate. We simulated their occurrence by doing vertex flood fills from selected corners of the map. Here you can see the final set of plates for the continent:
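The flood fill itself is straightforward once the mesh adjacency exists. A sketch, where "blocked" marks vertices the oceanic plate may not cross (mainland or fault vertices) and the names are illustrative:

```python
def flood_fill_plate(start_vertex, neighbors, blocked):
    """Sketch only: grow an oceanic plate from a corner vertex, stopping at
    vertices marked as blocked. 'neighbors' is the vertex adjacency of the
    simulation mesh."""
    plate, stack = set(), [start_vertex]
    while stack:
        v = stack.pop()
        if v in plate or blocked[v]:
            continue
        plate.add(v)
        stack.extend(neighbors[v])
    return plate
```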


The mere existence of plates is not enough to create mountain ranges. They have to move and collide. To each plate we assign a movement vector. This encodes not only the direction, but also the speed at which the plate is moving:


Based on these vectors we can compute the pressure on each vertex and decide how much it should be raised or lowered, resulting in the base elevation map for the continent:


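One plausible way to turn the movement vectors into elevation is to project the relative motion of two neighboring plates onto the boundary normal and attenuate it inland. This is only a sketch of the idea; the function names and the exponential falloff are assumptions, not the actual Voxel Farm code:

```python
import numpy as np

def boundary_pressure(motion_a, motion_b, boundary_normal):
    """Sketch only: pressure at a boundary vertex between plates A and B.
    The normal points from A toward B; converging plates give a positive
    value (uplift), diverging plates a negative one (the ground sinks)."""
    n = np.asarray(boundary_normal, float)
    n = n / np.linalg.norm(n)
    relative = np.asarray(motion_a, float) - np.asarray(motion_b, float)
    return float(np.dot(relative, n))

def base_elevation(pressure, fault_distance, falloff=40.0):
    """Spread the boundary pressure inland with an exponential falloff so
    mountains form as ranges rather than thin walls along the fault."""
    return pressure * np.exp(-abs(fault_distance) / falloff)
```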
All the mountains happened to occur on the south side of the continent. This was determined by the blue plate drifting away from the mainland; otherwise we would have had a very different outcome. It will be an interesting place anyway. While the grayscale image does not show it, the ground where the blue plate begins sinks considerably, creating a massive continent-wide ravine.

Getting the continent shape and mountain ranges is only half the story. Next comes how temperature, humidity and finally biomes are computed. Stay tuned for part two!


Tuesday, April 19, 2016

Geography is Destiny

Here are some images from the output of a new land mass generator we wrote for the Voxel Farm suite:




While the images are symbolic, they contain very detailed information about biome placement. Each pixel is approximately 2 km wide. You can imagine each biome type/pixel in this map replaced by a rich biome manifestation, which will provide elevation, erosion and other layers of detail. Rivers and lakes do not appear at this point because they require the final elevation. What you see here is more like a blueprint from which the next generation phase starts.

The challenge in this case was to make biomes appear in the right locations. The method behind the images uses tectonic plate simulation for mountain ranges and a pretty cool humidity transfer system. I believe there is no other way if you want plausible maps. In case you want to check whether the deserts and jungles make sense, note that the wind in the three maps above comes from the southwest corner.

I will be covering how this works in my next post.

Monday, March 21, 2016

Displacement and Beyond

In a previous post I introduced a couple of new systems we are working on. We call them Meta Meshes and Meta Materials. Here is another quick bit of info on how they work. This time we will look at the very basics.

We will start with a very low resolution sphere mesh like this one:
This new system allows me to think of this not just as a mesh, but as a "meta mesh". This is not just a sphere, but the basis of something else that kind of looks like a sphere: a planet, a pebble or maybe your second-grade teacher.

To a meta-object like this one we can apply a meta-material. For the sake of simplicity, in this post I will cover only two aspects of meta-materials. The first is that they can use a displacement map to add detail to the surface.

Once you tie a meta-material to a meta-object, you get something tangible, something you can generate and render. Here is how the meta-sphere looks when the displacement equals zero:
This is a reproduction of the original mesh. It still looks very rough and faceted, making you wonder why you would need to go meta in the first place. But here is what you get when you increase the displacement amplitude:

While the shape follows the supplied concept, the detail is procedural. The seed information is the very low resolution sphere and the displacement map, which is applied to the surface using something similar to triplanar mapping. Unlike shader-only techniques, this is real geometry. It will be seen by the collision system, the AI pathfinding and other systems you may lay on top. It is made of voxels, so this is detail you can carve and destroy.
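For illustration, a displacement sample along these lines could be computed per surface point. This is only a sketch of the general triplanar idea, not the engine's implementation; sample_height(u, v) stands for a hypothetical lookup into the displacement map:

```python
import numpy as np

def triplanar_displace(position, normal, sample_height, amplitude=1.0):
    """Sketch only: displace a surface point along its normal, blending
    three axis-aligned projections of the displacement map by the normal.
    sample_height(u, v) is a hypothetical lookup into that map."""
    p = np.asarray(position, float)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    w = np.abs(n) / np.abs(n).sum()        # per-axis blend weights
    x, y, z = p
    h = (w[0] * sample_height(y, z) +      # projection along X
         w[1] * sample_height(x, z) +      # projection along Y
         w[2] * sample_height(x, y))       # projection along Z
    return p + n * (amplitude * h)
```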

This displacement is applied along the normal of the original meta-object. This can produce beautiful features in vertical cliffs, overhangs and cave ceilings. But often displacement alone is not enough; you may want more complex volumetric protuberances. Luckily, meta-materials can also be extended with voxel instances:


This is a real-time generation system, so there are practical limits on what can be done. I would like to go full fractal on this one, with an infinitude of nested meta-materials. The current iteration of the system is not quite there yet, but at this stage it can already be used to produce very impressive content, in particular landscapes. I will be covering that in an upcoming post.

Wednesday, February 3, 2016

Improving building LOD

Most rendering engines need some form of level-of-detail management to deal with complexity. In traditional games a lot of hand work goes into creating multiple views of the same content. A view of a distant tower in a game is likely crafted as an individual asset; it is not a byproduct of the high-detail version used for closeups. There are some very good commercial solutions for producing multiple LODs for assets, but even then a lot of massaging is often required.

In our case we want full automation. Our creation tools should be usable by people with no understanding of what LOD means. There is no single silver bullet here; it is more like a cocktail of copper bullets. But here is one:

If you think about it, distant LODs are necessarily viewed from afar. This is something we can exploit. Most buildings have rather simple facades compared to their convoluted interiors. If we know the camera cannot be inside a given LOD version of the building, it is safe to remove whatever complexity is not contributing to the exterior. The result is that you can cut a lot of information right away without having to lower the fidelity of the facade at all.

You can see the method in this video:



The method cuts from 80% to 90% of the information. It uses raycasting to determine the likelihood of a portion being seen from outside. To make it quicker, it first discards portions that are trivially known to be exposed. For instance, all content that projects directly onto the bounding box of the object is known to be exposed, so there is no need to test it for occlusion. It then removes the portions we found are never exposed. This produces a rather noisy version of the interior, to which we apply some filters to clean it up and produce the lean shapes you see in the video.
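In rough form, the visibility pass could look like the sketch below. It operates on a boolean voxel grid, treats bounding-box voxels as trivially exposed, and uses a crude ray march as a stand-in for the real traversal; the ray counts and names are made up:

```python
import numpy as np

def cull_hidden_voxels(occupied, rays_per_voxel=32, seed=5):
    """Sketch only: keep voxels that are plausibly visible from outside.
    Voxels on the bounding box are trivially exposed; for the rest, cast a
    few random rays and keep the voxel if any ray escapes the grid without
    hitting another occupied voxel."""
    rng = np.random.default_rng(seed)
    nx, ny, nz = occupied.shape
    keep = np.zeros_like(occupied)
    for x, y, z in np.argwhere(occupied):
        if x in (0, nx - 1) or y in (0, ny - 1) or z in (0, nz - 1):
            keep[x, y, z] = True           # touches the bounding box
            continue
        for _ in range(rays_per_voxel):
            d = rng.normal(size=3)
            d /= np.linalg.norm(d)
            p = np.array([x, y, z], float) + 0.5
            blocked = False
            while True:
                p += d                      # unit-step ray march
                i, j, k = np.floor(p).astype(int)
                if not (0 <= i < nx and 0 <= j < ny and 0 <= k < nz):
                    break                   # ray left the grid: exposed
                if occupied[i, j, k]:
                    blocked = True
                    break
            if not blocked:
                keep[x, y, z] = True
                break
    return keep
```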

The buildings in the video are voxel creations made by Landmark players during EverQuest Next's workshop competitions. They are stunning, and at the same time they were created by people with no knowledge of Maya, 3ds Max or traditional 3D modeling. For this very same reason we do not want them to think or worry about how their content looks at different LODs. It just has to work.

Sunday, January 31, 2016

Another UE4 environment run video

Here is another video of Voxel Farm in UE4. This time we are lost somewhere in space:


This is using an Alien Biome we have available as an example. If you are curious, you can see how this terrain was built here. The biome and tutorial were created by one of our artists, Bohan Sun.

The biome is meant as an example. For this reason it is intentionally simple, but it holds up pretty well for exploration, as you can see in the video.

I took some time to create a blueprint for the little drone that lights the way for the character. I really enjoyed the experience. I wonder if we could write blueprints for real things, like asking your Roomba to fetch your slippers. There is some serious intelligence you could create with this system once you have the means to feed nav meshes and other actors into it, but I digress.

I hope you liked the video. More Unreal stuff coming later...