Procedural Planet Generation

By Chris Lokhorst

For my R&D project, I decided to generate procedural planets using 3D noise and isosurfaces, with an LOD system built on an octree. The planets should support caves, customizable colors, noise settings to control the surface, and deformation. I wanted the style of the planets to be roughly like those from the game Outer Wilds.



To generate the planet, I needed a way to generate a mesh out of a 3D grid of voxels. To fill this grid, I created a compute shader that simply takes the distance to the center of the grid as its iso value. When generating the mesh, this should produce a sphere from these points.

In a previous personal project of mine, I worked with marching cubes to generate endless terrain. This is a type of isosurface generation: isosurfaces are meshes created from a space with volume. A video from Sebastian Lague showcases the marching cubes algorithm very nicely, in which he generates an underwater-like environment (2019). The results look nice, but marching cubes doesn't create the most optimized meshes. In a lot of cases, vertices clump together, creating a lot of unneeded detail. I also wanted to learn a new way of generating isosurfaces, so I started looking for other ways to generate an isosurface. Eventually, I found two that interested me: dual contouring and surface nets. According to a post on 0fps by Mikola Lysenko, surface nets is a lot faster than the marching cubes algorithm (2012), which is the main reason I chose surface nets for my project.

Both dual contouring and surface nets are very similar. They place a vertex in every cube of the volume grid and create quads from each edge intersecting the surface. After reading an article about surface nets (Understanding Surface Nets – CERBION.NET, 2021), I started implementing naive surface nets using compute shaders. The reason I chose surface nets over dual contouring is that it's less complex. It's also possible to switch to dual contouring later, since the algorithms are very similar.

Above you can see an explanation of how surface nets work.

  1. A grid of points is generated, each with a value defining how deep the point is inside the surface. In this case, the surface is the orange circle.
  2. Then, the algorithm goes through all the edges of the grid and finds the intersection points.
  3. Now, for every square of the grid, the average position of all surrounding intersections is used to generate a vertex.
  4. After all this, the algorithm goes through the edges again and generates a quad for every intersecting edge, using the adjacent vertices.
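The four steps above translate almost directly into code. Below is a minimal 2D sketch in Python; the real implementation runs in a compute shader on a 3D grid, and the grid size and circle radius here are illustrative:

```python
import math

def surface_nets_2d(values, n):
    """Naive surface nets on an (n+1) x (n+1) grid of signed iso values:
    one vertex per sign-changing cell, placed at the average of the
    crossing points on that cell's four edges."""
    verts = {}
    for cx in range(n):
        for cy in range(n):
            crossings = []
            # the four edges of cell (cx, cy), as pairs of grid points
            edges = [((cx, cy), (cx + 1, cy)),
                     ((cx, cy), (cx, cy + 1)),
                     ((cx + 1, cy), (cx + 1, cy + 1)),
                     ((cx, cy + 1), (cx + 1, cy + 1))]
            for (ax, ay), (bx, by) in edges:
                va, vb = values[ax][ay], values[bx][by]
                if (va < 0) != (vb < 0):      # edge crosses the surface
                    t = va / (va - vb)        # interpolate the crossing point
                    crossings.append((ax + t * (bx - ax), ay + t * (by - ay)))
            if crossings:
                verts[(cx, cy)] = (sum(p[0] for p in crossings) / len(crossings),
                                   sum(p[1] for p in crossings) / len(crossings))
    return verts

# iso value = distance to the grid center minus the radius (a circle)
n = 16
values = [[math.hypot(x - 8, y - 8) - 6 for y in range(n + 1)] for x in range(n + 1)]
verts = surface_nets_2d(values, n)
```

Connecting the vertices of adjacent cells across each crossing edge (step 4) then yields the final surface.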

As seen above, the resulting mesh from surface nets looks very smooth. Currently, there are many duplicate vertices, since vertices are not reused between quads. I fixed this by simply keeping a dictionary of vertices while constructing the mesh on the CPU. The surface can also look much smoother if we calculate the normals from the iso values. After fixing these things, the resulting mesh can be seen below.
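The dictionary-based deduplication can be sketched like this in Python: positions map to indices, so quads that share an edge reuse the same vertices (the actual version runs over the GPU output, and the quad layout here is illustrative):

```python
def build_indexed_mesh(quads):
    """Deduplicate vertices: 'quads' is a list of 4-tuples of (x, y, z)
    positions; returns a shared vertex list plus an index buffer."""
    vertex_index = {}   # position -> index, so each position is stored once
    vertices, indices = [], []
    for quad in quads:
        for pos in quad:
            if pos not in vertex_index:
                vertex_index[pos] = len(vertices)
                vertices.append(pos)
            indices.append(vertex_index[pos])
    return vertices, indices

# two quads sharing an edge: 8 corners, but only 6 unique vertices
quads = [((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)),
         ((1, 0, 0), (2, 0, 0), (2, 1, 0), (1, 1, 0))]
vertices, indices = build_indexed_mesh(quads)
```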

With all this set up, we can start adding some noise to the sphere. I used a noise library that I found online to add some 3D noise to the voxel grid (GitHub - Ashima/Webgl-noise: Procedural Noise Shader Routines Compatible With WebGL, n.d.).

Level of Detail


Next, I wanted to tackle generating chunks at different levels of detail (LOD). Lower-detail chunks are generated by skipping voxel points when generating the mesh; the number of skipped voxels increases exponentially per LOD level.
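As a rough CPU-side sketch (Python), the stride doubling looks like this:

```python
def lod_samples(grid_size, lod):
    """Voxel corner indices used at a given LOD: each level doubles the
    stride, so exponentially more points are skipped."""
    stride = 2 ** lod
    return list(range(0, grid_size + 1, stride))

samples = lod_samples(16, 2)   # LOD 2: every 4th voxel point
```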

To test this, I quickly separated the mesh into chunks and lowered the LOD based on the X position in the grid. As you can see in the image below, the detail gets progressively lower.

A problem that arises when lowering the detail is that gaps appear between different LODs. There are several ways to fix this:

  1. Skirts - By placing a 2D surface around the edges of the chunk, you can cover up the gaps. This makes the terrain appear seamless, and chunks don't have to communicate with other chunks to generate their mesh. There are still cases where gaps might appear, though, and seams caused by normals are visible between chunks.
  2. Stitching - The edge of the chunk is a separate mesh and is regenerated when neighbouring chunks change in detail. By sharing border vertices, the meshes are literally stitched together. Chunks have to constantly communicate with each other, and this is a lot more complex than skirts.
  3. Overlap - Chunks simply generate an extra length of terrain to overlap with neighbouring chunks. This hides gaps, but seams are still very obvious due to z-fighting and normals.

Since I have implemented skirts before in another project, I decided to go with skirts first. From that project, I know the remaining seams aren't very obvious and should suffice for now. An issue post on GitHub about implementing skirts on surface nets helped me implement it myself (Multiresolution surface nets · Issue #26 · bonsairobo/building-blocks, 2021). Just like in the post, I made the edges between chunks align by generating a small extra border around each chunk.

Above, you can see how the skirts effectively cover up the gaps. There is still a small chance that gaps appear, but only in very extreme cases. Another downside is that seams can still be seen due to incorrect normals and texture stretching. For now, though, this is fine.


Currently, the planet is separated into evenly sized chunks. This means that even low-detail chunks are separated into unnecessarily small meshes. A better way of handling LOD is an octree. An octree partitions space into progressively smaller cubes: every cube is divided into 8 new cubes when more detail is needed.

Example Octree (Apple Developer Documentation, n.d.)

I already had an idea of how to implement the octree, but watched a video by World of Zero to check whether there was a better approach (2016). He ended up doing something very similar to what I had in mind. Every frame, it checks whether the player is close enough to an octree node to subdivide it into 8 new nodes. When the max depth is reached, or the node doesn't need to be subdivided, the node is set to active and displays its chunk.
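A minimal Python sketch of this subdivision logic, with an illustrative split distance of 1.5 × node size (the real implementation is Unity C# and also manages the chunk meshes):

```python
import math

class OctreeNode:
    def __init__(self, center, size, depth):
        self.center, self.size, self.depth = center, size, depth
        self.children = []

    def update(self, player_pos, max_depth, split_factor=1.5):
        """Subdivide into 8 children while the player is within
        split_factor * size of the node; otherwise this node is an
        active leaf that displays its chunk."""
        dist = math.dist(self.center, player_pos)
        if self.depth < max_depth and dist < split_factor * self.size:
            if not self.children:
                half, quarter = self.size / 2, self.size / 4
                self.children = [
                    OctreeNode((self.center[0] + dx * quarter,
                                self.center[1] + dy * quarter,
                                self.center[2] + dz * quarter),
                               half, self.depth + 1)
                    for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
            for child in self.children:
                child.update(player_pos, max_depth, split_factor)
        else:
            self.children = []   # collapse back to an active leaf chunk

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

root = OctreeNode((0.0, 0.0, 0.0), 16.0, 0)
root.update((0.0, 0.0, 0.0), max_depth=2)
chunks = root.leaves()   # the active chunks to display
```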

Mesh divided up into chunks using an octree

With the octree working, different levels of detail now take up an appropriate amount of space.



Next up, I wanted to be able to add different materials to the surface of the planet. For this, I would have to write a custom shader.

Since UV unwrapping isosurface meshes is basically impossible, I had to look into other ways of texturing a mesh. A little while ago, I played around with triplanar mapping in Shader Graph. Triplanar mapping uses the normals and positions of the mesh vertices to wrap a texture around the object: it projects the texture onto the mesh from 6 sides and blends between the overlapping projections.

Above you can see how I previously implemented it in Shader Graph. I based this implementation on the one from Catlike Coding (Flick, 2018). Since I'm not using URP and like writing my shaders myself, I rewrote the Shader Graph version in HLSL.

float4 triplanarMapping(sampler2D tex, float3 position, float3 normal, float scale, float blendSharpness)
{
    // Calculate the axis blending weights from the surface normal.
    float3 blendNormal = pow(abs(normal), blendSharpness);
    blendNormal /= blendNormal.x + blendNormal.y + blendNormal.z;

    // Sample the texture along each axis projection.
    float3 scaledPos = position * scale;
    float4 tyz = tex2D(tex, scaledPos.yz);
    float4 txz = tex2D(tex, scaledPos.xz);
    float4 txy = tex2D(tex, scaledPos.xy);

    // Blend the three projections by the normal weights.
    return tyz * blendNormal.x + txz * blendNormal.y + txy * blendNormal.z;
}

Currently, I can only display a single material on the planet. I want to support any number of materials on the surface, so I started researching how to do that. I eventually found a GitHub repository of an endless terrain generator (GitHub - Tuntenfisch/Voxels, n.d.). In the description, the creator explains how he implemented different materials in the terrain: every vertex has a material index that references a texture in a texture array. In the geometry shader, a blend factor is created that is interpolated between the vertices of the triangle. In the fragment shader, the 3 materials are blended using the interpolated blend factors. Using a page from Catlike Coding, I learned how to add a geometry pass to an unlit shader in Unity (Flick, 2017).

I also had to assign material indices to each vertex of the planet. To do this, alongside the voxel grid density computation, I compute a material index that is assigned to each voxel. During mesh generation, I then take the voxel point deepest inside the surface and write its material index into a UV channel of the vertex. In the unlit shader, I can then read the material index for every vertex.
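A small Python sketch of that material pick, assuming each cell hands over its corner voxels as (density, material) pairs; the layout is hypothetical, and the real version runs in the compute shader:

```python
def vertex_material(corners):
    """corners: (density, material_index) pairs for the 8 corners of a cell.
    The vertex takes the material of the voxel deepest inside the surface,
    i.e. the corner with the most negative density."""
    return min(corners, key=lambda c: c[0])[1]

# the corner at density -0.8 is deepest inside, so its material wins
material = vertex_material([(0.3, 0), (-0.1, 1), (-0.8, 2), (0.5, 0)])
```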

void geom(triangle v2g i[3], inout TriangleStream<g2f> outputStream)
{
    // Each row gives one vertex's material full weight.
    const half3x3 blendFactors = half3x3(
        1.0h, 0.0h, 0.0h,
        0.0h, 1.0h, 0.0h,
        0.0h, 0.0h, 1.0h);

    uint3 materials = uint3(i[0].materialData, i[1].materialData, i[2].materialData);

    for (uint index = 0; index < 3; index++)
    {
        v2g input = i[index];
        g2f output;
        output.materials = materials;

        // materialsBlend will be interpolated between the vertices.
        output.materialsBlend = blendFactors[index];
        output.parentData = input.parentData;
        output.vertex = input.vertex;
        output.normal = input.normal;
        output.position = input.position;
        output.tangent = input.tangent;
        output._ShadowCoord = input._ShadowCoord;

        outputStream.Append(output);
    }
}



Above you can see the code for the geometry shader. Each row of the blendFactors half3x3 holds how much each vertex's material contributes to a pixel.

for (int index = 0; index < 3; index++)
{
    albedoColor += triplanarMappingArray(_AlbedoTextures, sampler_AlbedoTextures,
        i.position, i.normal, .3, 5, i.materials[index]) * i.materialsBlend[index];

    normal += triplanarMappingNormalArray(_NormalTextures, sampler_AlbedoTextures,
        i.position, i.normal, .3, 5, i.materials[index]) * i.materialsBlend[index];
}

Above, you can see how the materials are blended in the fragment shader. Each of the three materials from the geometry shader is triplanar mapped and multiplied by its blend factor.

Material blending with basic colors
Blending with textures
Scriptable object handling the texture2D array

The resulting textures look very nice. You can still see a bit of a sharp transition between the materials, but for now I think it looks fine.


Next up, I wanted to implement proper lighting. The first step was sampling normal maps for each material, using the triplanar mapping function with a slight modification to the texture sampling: I now also call UnpackNormal, instead of just returning the tex2D result.

With the normals, I added simple diffuse lighting by taking the dot product between the surface normal and the light direction. I also wanted the shader to receive and cast shadows, which I added using Unity's documentation (Unity - Manual: Vertex and Fragment Shader Examples, n.d.). Finally, I added ambient lighting, since the dark side of the planets was pure black.

The lighting looks quite nice now, although it could be improved by adding specular lighting. Other things have higher priority, so in the future I might add support for specular maps as well.



The planet terrain is currently very boring, since I'm only adding a single layer of 3D Perlin noise. To improve this, I can add octaves to the noise: based on an amplitude and frequency, several noise values are layered over each other, creating more detailed noise. Below you can see how this is implemented.

float noise = 0;
float freq = frequency;
float amp = amplitude;
for (int i = 0; i < iterations; i++)
{
    // snoise returns a value from -1 to 1, so remap it to 0 to 1.
    float v = snoise((pos + offset) * scale * freq) * .5 + .5;
    noise += v * amp;
    freq *= 2;
    amp *= .5;
}

Resulting planet terrain

As you can see, the planet now has larger mountains while retaining detail in the lower regions. By sampling another noise value first and adding it to the offset of the 'snoise' call, we can distort the terrain, which generates a very interesting looking surface.

Distorted surface

Another type of noise I wanted to try was ridge noise. Ridge noise is basically just gradient noise, except the value is folded back down once it passes 0.5, creating sharp ridges. Below you can see a Shader Graph implementation of this noise; the eventual calculation is of course rewritten in the compute shader.
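One common formulation of this fold, as a Python sketch (the actual calculation lives in the compute shader):

```python
def ridge(n):
    """Fold gradient noise in [0, 1] back down past its midpoint,
    creating sharp ridges where the raw noise peaks."""
    return 1.0 - abs(2.0 * n - 1.0)
```

The value rises toward 1 as the raw noise approaches 0.5 and falls again beyond it, so the peaks of the underlying noise become sharp crests.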

Ridge noise
Planet with ridges

Seamless Transitions

Currently, the transitions between different levels of detail aren't seamless. Because skirts are used to hide the gaps, the chunks aren't stitched together, resulting in obvious differences in surface normals. To improve this, I decided to look into stitching the terrain together. After some research, I found an article that achieves seamless transitions between terrain chunks by morphing chunks into their higher-detail version (Voxels and Seamless LOD Transitions, 2016). I thought this looked extremely nice and wanted to implement it myself.

High LOD (blue)
Low LOD (red, thicker grid represents sampled grid points for lower LOD)

The transitions work by generating chunk vertices for both the required level of detail and the level of detail below it. Every vertex of the higher-detail chunk then gets a reference to the vertex in the lower-detail chunk that corresponds to the same voxel cube. So in the image above, every blue point gets a reference to the red point that lies in the same square of the thicker grid.

This does mean that every vertex stores its positions and normals twice, but the resulting transitions are worth the extra memory usage. To handle the transitions, every chunk has a blend factor that moves from 1 to 0 as the player gets closer (the factor reaches 0 right when the chunk subdivides, and newly spawned chunks start at 1). Based on this value, we simply lerp between the lower- and higher-detail vertex positions and normals.

Seamless transitions between neighbouring chunks can now easily be added as well. Vertices on the border of a chunk simply use a neighbouring chunk's blend factor if it's higher than their own. When a lower-detail chunk borders the chunk, a blend factor of 1 is used, while the bordering vertices of the lower-detail chunk use a factor of 0. This makes the terrain chunks fully seamless.
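The morph itself is a plain lerp. A Python sketch, where blend = 1 shows the stored lower-detail shape and blend = 0 the chunk's own full detail (in the shader this runs per vertex, for normals as well as positions):

```python
def morph_vertex(high_pos, low_pos, blend):
    """Lerp a vertex between its high-detail position and the stored
    lower-detail position; blend = 1 -> lower-detail shape,
    blend = 0 -> the chunk's own full detail."""
    return tuple(h + (l - h) * blend for h, l in zip(high_pos, low_pos))

midway = morph_vertex((0.0, 0.0, 0.0), (2.0, 2.0, 2.0), 0.5)
```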

Seamless LOD transitions

This does require communication between neighbouring octree chunks, which costs a bit of performance, but the resulting transitions are super smooth, so it's definitely worth the slight decrease.


Mesh Generation

In the first part of this article, I talked about how I generate the mesh data using compute shaders and then clean it up on the CPU by reusing vertices with a dictionary. Because I have to transfer the mesh data from the GPU to the CPU and process it all into a mesh within a single frame, mesh generation can cause big lag spikes. To improve this, I wanted to do the full mesh generation on the GPU and asynchronously receive the data on the CPU to form the mesh.

Using the same GitHub page I used to understand material blending, I checked the code and found out how they handle mesh generation (GitHub - Tuntenfisch/Voxels, n.d.). It turns out you can create a struct with custom vertex data and use VertexAttributeDescriptors to map that data to vertex attributes (like Position, Normal, TexCoord1, etc.) (Unity - Scripting API: VertexAttributeDescriptor, n.d.). This means I can directly get a native array of vertex and index data from the GPU and put it straight into a mesh, instead of separately setting the vertex, triangle, and normal arrays.

After rewriting the mesh generation code so everything is generated on the GPU, mesh generation no longer caused any lag spikes.

Voxel Data

When generating the voxel data for a planet, it needs to compute 1024 × 1024 × 1024 voxels. This takes a very long time, and even made Unity crash on my laptop, since the GPU took longer than 2 seconds to compute.

To fix this, I first made sure the voxel data isn't generated all at once, but in chunks. This made it possible to generate the voxel data on my laptop, but didn't make it any faster.

After looking at the marching cubes implementation from Sebastian Lague, I saw that he used a 3D render texture to store his voxel data (GitHub - SebLague/Terraforming, n.d.). I was using a compute buffer to store the data, so I switched to a render texture to see if it would improve performance. To my surprise, it sped up the computation a lot: it now only takes about 1 to 2 seconds to generate the voxel data. It turns out that the read/write speed of render textures on the GPU is a lot faster than that of a compute buffer.

Terrain Deformation

Throughout development of the project, I always kept in mind that the terrain has to be deformable. Thus, I made sure the chunks can easily be regenerated with reasonable performance.

First, I had to add colliders to the terrain, so I can raycast against it to edit it. Colliders are only generated for chunks at the highest level of detail, so only the necessary colliders are simulated. I'm using mesh colliders, which aren't optimal for performance, so I need to make sure colliders are only active when needed.

To deform the terrain, I simply need to add or remove values in the voxel data within a radius around a point. Since I don't want to check the distance to this point for every voxel in the render texture, I only dispatch the compute shader for the number of voxels that will be edited, offset towards the edit point. This way, editing doesn't need a dispatch over the entire render texture.
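A Python sketch of how the edited region can be bounded before dispatching; the clamping logic is illustrative, while the real version translates these bounds into thread group counts and an offset for the compute shader:

```python
def edit_region(edit_center, radius, grid_size):
    """Voxel-index bounds of the box around an edit sphere, clamped to
    the grid; only this box needs to be dispatched, not the whole
    render texture."""
    lo = tuple(max(0, int(c - radius)) for c in edit_center)
    hi = tuple(min(grid_size - 1, int(c + radius) + 1) for c in edit_center)
    return lo, hi

lo, hi = edit_region((10.0, 10.0, 10.0), 3.0, 64)
```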

After changing the voxel data, I simply need to go through the octree and check which nodes were in the range of the edit sphere. These nodes will then re-generate their mesh.

To my surprise, the first implementation immediately worked fine. Thanks to the earlier optimizations, the deformation caused no visible lag spikes.

Post Processing


A detail the planets are missing is water; they currently look quite boring with just the terrain. Thus, I started looking into adding water to the planet.

I didn't want to simply add a sphere mesh around the planet, put a shader on it, and call it a day. The main issue with that is that the terrain has an LOD system, while the water mesh would remain a single resolution. I also didn't want to implement a separate quadtree system for a cube sphere just for the water. Instead, I searched for other solutions that were possible in the remaining project time.

I first came across tessellation shaders. Using tessellation, you can progressively increase the detail of a mesh (Ned Makes Games, 2021). The downsides are that it can be performance intensive and doesn't work on every device. Implementing it also looked quite complex, so I kept looking.

Eventually I came across a planet generation video from Sebastian Lague, in which he implements water using a post processing effect (Sebastian Lague, 2020a). Using post processing, you don't need to handle mesh LODs. Another nice addition is that there's an effect when you go underwater, unlike with meshes where the water would just disappear.

The video shows a function that returns the distance to and the depth through a sphere, using a ray cast in a shader. I used this function to create my own water effect. I added depth-based color differences by sampling the depth texture and subtracting the sphere ray depth from it. I also added specular lighting by triplanar mapping 2 distortion maps onto the sphere and moving them in opposite directions, then taking the dot product between the camera direction and the reflected light direction to get the specular intensity.
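A Python sketch of such a ray-sphere function, modelled on the helper shown in the video (the real version runs in the post processing shader):

```python
import math

def ray_sphere(center, radius, ray_origin, ray_dir):
    """Distance along the ray to a sphere and the distance travelled
    through it (0 on a miss); ray_dir must be normalized."""
    offset = tuple(o - c for o, c in zip(ray_origin, center))
    b = 2.0 * sum(o * d for o, d in zip(offset, ray_dir))
    c = sum(o * o for o in offset) - radius * radius
    discriminant = b * b - 4.0 * c
    if discriminant > 0.0:
        s = math.sqrt(discriminant)
        near = max(0.0, (-b - s) / 2.0)   # clamp: camera may be inside
        far = (-b + s) / 2.0
        if far >= near:
            return near, far - near
    return float('inf'), 0.0

# looking straight at a sphere of radius 2 from 10 units away
hit = ray_sphere((0.0, 0.0, 0.0), 2.0, (0.0, 0.0, -10.0), (0.0, 0.0, 1.0))
```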

The resulting effect looks quite nice. I don't know how I would add waves to the water since it's not a mesh, but I think this suffices for now.


The final touch needed to finish the planets is an atmosphere. The planets still look a bit boring from space, so adding the shine of an atmosphere should add a lot more color to them.

Luckily, Sebastian Lague has already made a detailed video about implementing an atmosphere post processing effect (2020b). The effect is achieved by raymarching through a sphere in space and calculating the light scattering at every point along the ray. The scattering at each point is determined by calculating the density of the atmosphere at several points along a second ray towards the light source.

Visualization of raymarch through sphere (Chapter 16. Accurate Atmospheric Scattering, n.d.)

Using a different wavelength for each color channel, we can get the effect of the sun going down at the horizon: based on the scattering value, a certain amount of each color is removed according to its wavelength.
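A Python sketch of the wavelength idea: per-channel scattering strengths proportional to (1/wavelength)^4, so shorter (bluer) wavelengths scatter most. The 400 nm reference and the wavelengths below are illustrative; in practice they are tweakable shader parameters:

```python
def scatter_coefficients(wavelengths, strength=1.0):
    """Rayleigh-style scattering strength per color channel,
    proportional to (1 / wavelength)^4."""
    return tuple(strength * (400.0 / w) ** 4 for w in wavelengths)

# rough red / green / blue wavelengths in nanometers
red, green, blue = scatter_coefficients((700.0, 530.0, 440.0))
```

Because blue scatters the most, it is the first color removed along long horizon rays, which is what tints sunsets red.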

Screenshot from video by Sebastian Lague (2020b)
Atmosphere applied to my planet

The results looked great, but there was a major downside. Since we are doing a raymarch inside another raymarch (a nested for-loop), the performance is only reasonable when sampling a very limited number of points. Take into account that I will be rendering multiple planets and atmospheres, which only makes this worse. In his video, Sebastian Lague mentions that you can bake the nested raymarch into a texture. This works because the raymarch towards the light source always uses parallel rays (since it's a directional light), so the result is the same from any direction. In a compute shader, I store the density value based on the angle of the ray and the height above the planet surface, and write these values into a texture (angle of the ray on the x axis, height from the surface on the y axis).
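A Python sketch of the bake, with an assumed exponential density falloff and illustrative resolutions (the real version is a compute shader writing into a texture):

```python
import math

def bake_optical_depth(planet_radius, atmosphere_radius, size=64, steps=32):
    """Bake the inner raymarch into a 2D lookup: x = angle between the
    ray and straight up, y = height above the surface. The density
    model and resolutions here are assumptions."""
    texture = [[0.0] * size for _ in range(size)]
    thickness = atmosphere_radius - planet_radius
    for y in range(size):
        height = planet_radius + thickness * y / (size - 1)
        for x in range(size):
            angle = math.pi * x / (size - 1)
            dir_x, dir_y = math.sin(angle), math.cos(angle)
            px, py = 0.0, height               # start on the vertical axis
            step = thickness / steps
            total = 0.0
            for _ in range(steps):
                r = math.hypot(px, py)
                if r > atmosphere_radius or r < planet_radius:
                    break                      # left the atmosphere / hit ground
                h01 = (r - planet_radius) / thickness
                total += math.exp(-h01 * 4.0) * step   # density falls with height
                px += dir_x * step
                py += dir_y * step
            texture[y][x] = total
    return texture

lookup = bake_optical_depth(100.0, 130.0, size=16, steps=16)
```

At runtime the shader then samples this texture instead of running the inner raymarch, which is what removes the nested loop.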

Resulting atmospheres
Optical depth texture

Baking the raymarch into a texture boosted performance immensely. I was now able to render several atmospheres at once without impacting performance too much. I can also play around with the wavelengths of the red, green, and blue channels to change the color of the atmosphere; in the background, you can see how the sand planet has a brown atmosphere.


Final result

My goal was to generate planets using isosurfaces with textures, support for deformation, and a working LOD system. At the end of this research, I can say for sure that I reached my goal. Above you can see a video showcasing the planets I created.

Beyond the requirements I set for myself at the start of the research, I was also able to implement extra features, like oceans and atmospheres. I'm especially happy with how the LOD transitioning came out, since it was one of the main things I wanted to learn.

Although I'm happy with the current result, there are still a lot of ways the planets can be improved. The terrain shader doesn't support all light sources yet, the LOD transitioning performance can be improved, the ocean can be replaced with a mesh, the voxel generation time is still a bit slow, and the size of the planets is still limited.

In the future, I will probably return to this project to improve these points. I also want to implement more features such as clouds, moving oceans, full solar system with a sun, and switch to dual contouring for the mesh generation. For now though, I will spend some of my time flying around my little planets.


  1. Apple Developer Documentation. (n.d.).
  2. Chapter 16. Accurate Atmospheric Scattering. (n.d.). NVIDIA Developer. Retrieved 1 November 2022, from
  3. Flick, J. (2017, October 25). Flat and Wireframe Shading.
  4. Flick, J. (2018, April 29). Triplanar Mapping.
  5. GitHub - ashima/webgl-noise: Procedural Noise Shader Routines compatible with WebGL. (n.d.). GitHub. Retrieved 29 September 2022, from
  6. GitHub - SebLague/Terraforming. (n.d.). GitHub. Retrieved 15 October 2022, from
  7. GitHub - Tuntenfisch/Voxels: GPU-based implementation of Dual Contouring in Unity for destructible voxel terrain. (n.d.). GitHub. Retrieved 12 October 2022, from
  8. Lysenko, M. (2012, November 30). Smooth Voxel Terrain (Part 2). 0fps.
  9. Multiresolution surface nets · Issue #26 · bonsairobo/building-blocks. (2021, May 11). GitHub.
  10. Ned Makes Games. (2021, December 1). Mastering Tessellation Shaders in Unity! Easy LoD, Curved Triangles, Height Maps | Game Dev Tutorial [Video]. YouTube.
  11. Sebastian Lague. (2019, May 6). Coding Adventure: Marching Cubes [Video]. YouTube.
  12. Sebastian Lague. (2020a, July 11). Coding Adventure: Procedural Moons and Planets [Video]. YouTube.
  13. Sebastian Lague. (2020b, August 22). Coding Adventure: Atmosphere [Video]. YouTube.
  14. Understanding Surface Nets – CERBION.NET. (2021, April 10).
  15. Unity - Manual: Vertex and fragment shader examples. (n.d.). Retrieved 20 October 2022, from
  16. Unity - Scripting API: VertexAttributeDescriptor. (n.d.). Retrieved 20 October 2022, from
  17. Voxels and Seamless LOD Transitions. (2016, July 14).
  18. World of Zero. (2016, December 22). Lets Make an Octree in Unity [Video]. YouTube.
