Student name: Jorrit Goossens
Introduction
Illustration 1, (source: https://www.polygon.com/2020/7/1/21310066/avatar-the-last-airbender-live-action-movie-m-night-shyamalan-netflix).
Illustration 2, (source: https://avatar.fandom.com/wiki/Waterbending).
Almost every millennial or Gen Z’er knows the series Avatar: The Last Airbender and The Legend of Korra. When I started making shaders, I was really interested in reflection and refraction in combination with water. That is why I have chosen to combine the Avatar series with water shaders to create waterbending. The inspiration shown in illustration 2 is the basis for how the water orb should look, but the goal is to create a water orb similar to the one in illustration 1.
Table of Contents
1. Idea
2. Plan
3. Water orb behaviour
§ 3.1. Movement
§ 3.2. Scale
§ 3.3. Object pooling
4. Raymarch shader
§ 4.1. Drawing a circle
§ 4.2. Shading
§ 4.3. Smooth union
§ 4.4. Performance
5. Water shader
§ 5.1. Drawing a circle
§ 5.2. Shading
§ 5.3. Fresnel
§ 5.4. Noise
§ 5.5. Performance
6. Conclusion
7. What research to continue on?
8. Sources
§ 8.1. Introduction
§ 8.2. Idea
§ 8.3. Water orb behaviour
§ 8.4. Raymarch shader
§ 8.5. Water shader
1. Idea
The idea is to create waterbending with shaders, which required some inspiration for how it could be done. The main source of ideas was a YouTube video about waterbending in Unity, although that was done with VFX, animations and Shader Graph (How To Create Water Effects In Unity – Water Bending Tutorial, A Bit Of Game Dev, source: https://youtu.be/3CcWus6d_B8). The research into how waterbending can be created using shaders alone had to come from elsewhere. The goal of the research is to create a compute shader, so the lighting and refraction can look more impressive than in the video. Besides the shader, the orb also has to behave like water, so it should be cohesive with other orbs of water. That cohesive behaviour can be handled by the C# scripts themselves. The next step is to write a plan to prioritize which elements of the water orb have to be done first.
2. Plan
There are at least three epic user stories needed to create an orb that has a water shader and the cohesive properties and aesthetics of water. The research plan was as follows:
- Make raymarch shaders, so the orb aesthetically looks cohesive.
- Make water shaders, so the orb aesthetically looks like water.
- Make the orbs cohesive, so an orb behaves like a drop of water.
The raymarch shader will be the backbone of the project, which makes it the most important task to finish. When objects render with the raymarch smooth-union effect, you continue with the cohesive behaviour of water. Once you finish at least these two epic user stories, you have the basis of a spherical cohesive object. As a final touch, you make the spheres look like water orbs, which finalizes the aesthetics of the water orb.
3. Water orb behaviour
Illustration 3, (source: https://www.pinterest.com/pin/cool-water-cohesion-gif–233131718183223504/).
Water is one of the most cohesive liquids known on earth. The goal of the shader is to make the water orb seem realistic, which, regarding the behaviour of water, means cohesion. The idea for creating that behaviour is to make water orbs move towards each other when they are close to touching. When they do touch, one of the water orbs consumes the other, becoming one bigger water orb. What it looks like when water orbs consume one another is shown in illustration 3.
§ 3.1. Movement
First you have to know whether the sphere is colliding with other objects, using the OnTriggerEnter function from Unity. With a list of colliding objects, you can start calculating the direction between the colliding water orbs. You multiply that direction with the movement speed of the water orb, so the colliding water orbs slowly move towards each other. If multiple water orbs are colliding, the directions to the other water orbs are added together and normalized afterwards. For example, when a water orb collides with two other water orbs, you add both directions, creating a general direction between the two, and normalize it, so the speed of the water orbs stays consistent regardless of their direction.
Illustration 4.
In illustration 4 you can see that each direction between two water orbs is added up and applied to the movement of the water orb. The movement speed of the water orb depends on its volume, because heavier objects should move slower than lighter objects.
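As a minimal sketch, this movement could look something like the Unity script below. It is an illustration under assumptions, not the project’s actual code: the tag "WaterOrb", the attractionSpeed field and the inverse-volume speed scaling are my own choices, and the orbs are assumed to have trigger colliders and a Rigidbody so the trigger callbacks fire.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class WaterOrbMovement : MonoBehaviour
{
    [SerializeField] private float attractionSpeed = 1f; // assumed tuning value
    private readonly List<Transform> touchingOrbs = new List<Transform>();

    private void OnTriggerEnter(Collider other)
    {
        // Track every other water orb whose trigger we overlap.
        if (other.CompareTag("WaterOrb"))
            touchingOrbs.Add(other.transform);
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("WaterOrb"))
            touchingOrbs.Remove(other.transform);
    }

    private void Update()
    {
        if (touchingOrbs.Count == 0) return;

        // Add the directions to all touching orbs together, then normalize,
        // so the speed stays consistent no matter how many neighbours there are.
        Vector3 direction = Vector3.zero;
        foreach (Transform orb in touchingOrbs)
            direction += (orb.position - transform.position).normalized;
        direction.Normalize();

        // Heavier orbs should move slower; dividing by the volume is one
        // possible way to express that (the exact scaling is an assumption).
        float radius = transform.localScale.x * 0.5f;
        float volume = (4f / 3f) * Mathf.PI * radius * radius * radius;
        transform.position += direction * (attractionSpeed / volume) * Time.deltaTime;
    }
}
```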
In the next paragraph you will learn more about how the volume of each water orb is calculated.
§ 3.2. Scale
Because water orbs consume one another, the water orbs have to change in size as well. When water orbs touch each other, they combine into one bigger water orb. In this project the choice was made that the bigger water orb “consumes” the tinier water orb, based on each of their volumes. Therefore you need to calculate each water orb’s volume in order to add or subtract volume. By adding or subtracting volume, each water orb loses or gains volume steadily over time. When the tinier water orb gets below the minimum volume threshold, it transfers the last bit of volume it has and is set inactive.
Illustration 5.
Illustration 6.
Illustration 7.
Illustration 5 shows how the other object loses its volume while its lost volume is added to the water orb this script belongs to. If the other water orb has a volume below the threshold, the bit of code from illustration 6 ends the merge and deactivates the tiny water orb that lost all its volume. In the last illustration there is a check whether the radius has already been updated by the other water orb.
Illustration 8.
Illustration 9.
As you can see in illustration 8, the volume of a sphere is calculated using the radius taken from the x-axis scale of the water orb. The volume is used to measure exactly how much is added to or subtracted from a water orb. Increasing or decreasing the size of a spherical object through its scale directly would be an unrealistic way of changing the size of spherical, but also cubic, objects: the volume would grow or shrink cubically instead of linearly. After using the volume to increase or decrease the size, the radius has to be calculated back from the new volume. The calculation for the radius is seen in illustration 9.
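For reference, the two conversions from illustrations 8 and 9 could look like this in C# (the class and method names are my own, not the project’s):

```csharp
using UnityEngine;

public static class SphereMath
{
    // Illustration 8: volume of a sphere from its radius, V = 4/3 * pi * r^3.
    public static float Volume(float radius) =>
        (4f / 3f) * Mathf.PI * radius * radius * radius;

    // Illustration 9: radius back from a volume, r = cbrt(3V / (4 * pi)).
    public static float Radius(float volume) =>
        Mathf.Pow(3f * volume / (4f * Mathf.PI), 1f / 3f);
}
```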
Now there is a start for the cohesion behaviour of each water orb, which is shown in the video above, where water orbs move and grow or shrink based on collisions and their sizes. The end result of the behaviour was shared with a couple of students.
4. Raymarch shader
Illustration 10.
Using the raymarch shader enables the spheres to smoothly intersect with one another. This is something that was needed for the water shader, since we want water to look cohesive. The raymarch shader works as follows: from the camera position, it looks for the closest object you can raymarch with, shown as the white arrow in illustration 10. The distance the white arrow has traveled to hit an object becomes the distance the ray travels at first. The ray, the blue arrow in illustration 10, starts at the camera position, the camera being the red dot, and uses the camera direction. When the ray has traveled the distance of the white arrow, it repeats the process of finding the closest object and uses the distance from its new position to the closest object as the new distance the ray has to travel. When the ray itself collides with an object, the iteration stops and the ray returns the colour based on the object’s colour property.
Illustration 11.
Illustration 11 shows what raymarching looks like in a compute shader (Playlist: Raymarching Shader Tutorial, Peerplay, source: https://www.youtube.com/watch?v=oPnft4z9iJs&list=PL3POsQzaCw53iK_EhOYR39h1J9Lvg-m-g). The ray has an origin and a direction, exactly what you need to start the ray from the camera at the right pixel, with the camera forward direction as ray direction, as explained before. The traveled distance (the white arrow in illustration 10) is increased each iteration by the distance from each new origin to the closest object it finds.
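A minimal sphere-tracing loop, loosely following the Peerplay tutorial, could look like the HLSL below. This is a sketch, not the project’s exact code: DistanceToClosestObject stands for the scene distance function (see § 4.1), Shade is covered in § 4.2, and the constants besides the 64-iteration cap mentioned in § 4.4 are assumptions.

```hlsl
float4 Raymarch(float3 rayOrigin, float3 rayDirection)
{
    const int maxIterations = 64;      // iteration cap, as mentioned in § 4.4
    const float maxDistance = 200.0;   // assumed far limit
    const float surfaceEpsilon = 0.01; // "close enough" to count as a hit

    float traveled = 0.0; // total distance traveled along the ray
    for (int i = 0; i < maxIterations; i++)
    {
        if (traveled > maxDistance)
            break; // the ray escaped the scene

        float3 position = rayOrigin + rayDirection * traveled;
        float distance = DistanceToClosestObject(position);

        if (distance < surfaceEpsilon)
            return float4(Shade(position), 1.0); // hit: return the shaded colour

        traveled += distance; // safe step: nothing is closer than this distance
    }
    return float4(0, 0, 0, 0); // miss: leave the pixel to the background
}
```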
§ 4.1. Drawing a circle
Drawing a circle is the first step towards making spheres smoothly union with one another. Two calculations to draw circles have been researched. One calculation can be done using raytracing, one of the more efficient ways to calculate reflections, shades and shadows. The other calculation uses raymarching. Raymarching is one of the least efficient ways to calculate where to draw a circle, shade it and create shadows. However, in this case the choice is simple: because we require raymarching to make spheres union with each other, we cannot resort to raytracing to create the metaball effect.
Illustration 12.
The illustration above shows how the distance from the current ray position to a spherical object is calculated. You take the position of the traveled-distance origin and subtract the sphere position from it. The result of the subtraction is passed to the length function, and the radius is subtracted from that length.
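In HLSL that calculation is a one-liner; the sketch below assumes the float4 packing from § 4.4, where sphere.xyz is the centre and sphere.w the radius.

```hlsl
// The distance calculation from illustration 12: subtract the sphere position
// from the current ray position, take the length, and subtract the radius.
float SphereDistance(float3 position, float4 sphere)
{
    return length(position - sphere.xyz) - sphere.w;
}
```

The result is negative inside the sphere, zero on its surface and positive outside, which is exactly what the raymarch loop needs. Now that we have a circle, we want it to look like a sphere.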
§ 4.2. Shading
The compute shader that is used is an image shader writing to a 2D render texture, which means you cannot make an actual sphere at all. That is why you have to apply shading to the circle, which tricks people into thinking it is a sphere, while in reality it is a shaded circle. To shade the circle, you need the direction of the lighting in the scene. This is easily obtained from the standard scene, since the directional light component is already given in the hierarchy.
Illustration 13.
Illustration 14.
Illustration 13 shows that the shade is calculated with the dot product of the opposite of the light source’s direction vector and the normal. If you multiply it with the colour of your choice, the circle will show shading based on a light source in your scene. In illustration 14 the normal is calculated from the sphere, so the shade is an actual 3D effect.
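A sketch of what illustrations 13 and 14 describe is shown below. The names _LightDirection and _SphereColour are assumed parameters (the light direction coming from the scene’s directional light), and estimating the normal from the gradient of the distance field is the standard raymarching approach, not necessarily the exact code from the illustrations.

```hlsl
float3 GetNormal(float3 position)
{
    // Estimate the normal from the gradient of the distance field by
    // sampling the scene slightly offset along each axis.
    const float2 e = float2(0.01, 0.0);
    return normalize(float3(
        DistanceToClosestObject(position + e.xyy) - DistanceToClosestObject(position - e.xyy),
        DistanceToClosestObject(position + e.yxy) - DistanceToClosestObject(position - e.yxy),
        DistanceToClosestObject(position + e.yyx) - DistanceToClosestObject(position - e.yyx)));
}

float3 Shade(float3 position)
{
    float3 normal = GetNormal(position);
    // Dot product of the opposite light direction and the normal,
    // multiplied with the colour of choice (the clamp is my addition).
    float shade = saturate(dot(-_LightDirection, normal));
    return _SphereColour * shade;
}
```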
§ 4.3. Smooth union
At last, the union is formed between the water orbs, making the aesthetic of cohesion far more realistic than before.
Illustration 15.
The union formula in illustration 15 is from the raymarch tutorial by Peerplay, who took it from Inigo Quilez’s website (Distance functions: Smooth Union, Subtraction and Intersection, Inigo Quilez, source: iquilezles.org/articles/distfunctions/). In the same article you can find how to calculate different types of object shapes.
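For completeness, this is the polynomial smooth minimum from Quilez’s article, translated from GLSL to HLSL (mix becomes lerp); d1 and d2 are the distances to the two objects and k is the blend radius.

```hlsl
float SmoothUnion(float d1, float d2, float k)
{
    float h = clamp(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0);
    return lerp(d2, d1, h) - k * h * (1.0 - h);
}
```

Replacing a plain min(d1, d2) in the scene distance function with this blend is what produces the metaball-like merge between the orbs.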
In the video above you can see the result of the raymarch shader and the water behaviour combined.
§ 4.4. Performance
Illustration 16.
Illustration 17.
The process of raymarching objects is a hefty process, even for the best of the best graphics cards out there. As you can see in illustration 17, an AMD Radeon RX 6900 XT Merc 319 has to put in a lot of work to use the raymarch shader on ten moving spheres and a spherical floor plane of 100 units in diameter. The NVIDIA RTX 2060 laptop version, however (illustration 16), uses less power. Raymarching costs a lot of processing power from your graphics card, because each ray iterates a maximum of 64 times before it finally returns a colour for each pixel. Raytracing, for example, requires less processing power, because it does not iterate between sending the ray and receiving the ray colour for a certain position per pixel. An extra downside of raymarching is the number of iterations you have to choose: the more raymarch objects a ray passes, the more iterations you might need to draw farther raymarch objects. Bigger objects also require more raymarching iterations than smaller objects. This further increases the workload for the graphics card.
The performance, however, was far worse at first, when a ComputeBuffer was used to send the data of multiple spheres to the compute shader. Due to the irregular number of spheres that were sent, the ComputeBuffer had to be released each time before making a new one, which is very costly, especially when done every OnRenderImage cycle. A quick fix for that problem was to send a Vector4 array instead, which can be used as a float4[] in the compute shader. Each Vector4 that is sent contains the x, y and z position of a sphere, and the fourth number is the radius of the sphere. Converting the code to not use the ComputeBuffer made the NVIDIA RTX 2060 laptop version use only 37% of its power, which is a huge decrease from ~60%.
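A sketch of that ComputeBuffer-free approach is shown below; the field names spheres and computeShader, and the shader property name "_Spheres", are assumptions. Each Vector4 packs a sphere’s position in x, y, z and its radius in w, as described above.

```csharp
// Build the per-frame sphere data without allocating a ComputeBuffer.
Vector4[] sphereData = new Vector4[spheres.Length];
for (int i = 0; i < spheres.Length; i++)
{
    Transform sphere = spheres[i].transform;
    sphereData[i] = new Vector4(
        sphere.position.x, sphere.position.y, sphere.position.z,
        sphere.localScale.x * 0.5f); // radius taken from the x-axis scale
}
// SetVectorArray shows up as a float4[] on the compute shader side,
// so nothing has to be created or released between frames.
computeShader.SetVectorArray("_Spheres", sphereData);
```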
5. Water shader
Illustration 18, (source: https://developer.nvidia.com/discover/ray-tracing).
To create the water shader, we use a compute shader with raytracing. Raytracing is far more performant than raymarching. When creating the water shader, we can choose between raytracing and raymarching to draw the water orbs. Because it would be rather useless to use raymarching on water orbs that are not colliding with any other objects, we use raytracing instead, which increases performance when water orbs are not colliding.
§ 5.1. Drawing a circle
This time the circle is calculated using raytracing, because we want to make the shader more performant. Drawing a circle using raytracing requires different calculations than the raymarched circle.
Illustration 19.
The calculations from illustration 19, used for the raytrace shader, are from David Kuri, Three-Eyed Games (GPU Ray Tracing in Unity – Part 1, David Kuri, source: http://three-eyed-games.com/2018/05/03/gpu-ray-tracing-in-unity-part-1/). This time, the sphere is drawn immediately, without iterating every couple of units. The RayHit bestHit is a struct consisting of a distance, a position and a normal. The position is where the ray has hit an object, the distance is measured from the camera pixel origin to the hit object, and the normal is added to have the ability to change the surface of the sphere. The normal is easier to calculate this time, which was more of a hassle in the raymarch shader. The Shade function checks if the ray hit any object; if it did, it returns the colour white, otherwise it returns the source texture (the camera view that is sent as a texture).
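The ray–sphere intersection from Kuri’s tutorial looks roughly like this, with the Ray and RayHit structs as described above and sphere.xyz/sphere.w again being centre and radius; bestHit.distance is assumed to start at infinity for a fresh ray.

```hlsl
struct Ray    { float3 origin; float3 direction; };
struct RayHit { float3 position; float distance; float3 normal; };

void IntersectSphere(Ray ray, inout RayHit bestHit, float4 sphere)
{
    // Solve the quadratic equation for the distance along the ray.
    float3 d = ray.origin - sphere.xyz;
    float p1 = -dot(ray.direction, d);
    float p2sqr = p1 * p1 - dot(d, d) + sphere.w * sphere.w;
    if (p2sqr < 0) return; // the ray misses the sphere entirely

    float p2 = sqrt(p2sqr);
    float t = p1 - p2 > 0 ? p1 - p2 : p1 + p2; // nearest hit in front of the ray
    if (t > 0 && t < bestHit.distance)
    {
        bestHit.distance = t;
        bestHit.position = ray.origin + t * ray.direction;
        // The normal of a sphere simply points from the centre to the hit point.
        bestHit.normal = normalize(bestHit.position - sphere.xyz);
    }
}
```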
§ 5.2. Shading
Shading in the raytrace shader is done a bit differently than how it was done in the raymarch shader.
Illustration 20.
In illustration 20 you see the use of the saturate function, which creates the shade of the sphere. The direction of the scene lighting is once again used to determine where the shade goes from light to dark, and it is put into a dot product with the normal so the normals are shaded too. The light intensity determines the shade intensity of the sphere and increases or decreases the darkness of the shade. The shading is multiplied with the colour, and the colour is returned for the given pixel.
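As a sketch under assumptions, the shading from illustration 20 could look like this; _LightDirection, _LightIntensity and _MainColour are assumed parameter names, and the no-hit branch returning the source texture is the behaviour described in § 5.1.

```hlsl
float3 Shade(RayHit hit, float3 sourceColour)
{
    if (hit.distance < 1.#INF) // did the ray hit anything?
    {
        // saturate clamps the dot product to [0, 1], creating the shade.
        float shade = saturate(dot(hit.normal, -_LightDirection)) * _LightIntensity;
        return _MainColour.rgb * shade;
    }
    return sourceColour; // no hit: show the camera view behind the orb
}
```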
§ 5.3. Fresnel
Illustration 21 & 22, (source: http://kylehalladay.com/blog/tutorial/2014/02/18/Fresnel-Shaders-From-The-Ground-Up.html).
Adding the Fresnel effect gives the water a better aesthetic look. It gives the water orb a transparent look at the edges, as it should have, and slowly shows a bit more colour the more water you have to look through. Illustration 21 shows the mathematical Fresnel formula, while illustration 22 shows the programmable formula (Fresnel Shaders From The Ground Up, K. Halladay, source: http://kylehalladay.com/blog/tutorial/2014/02/18/Fresnel-Shaders-From-The-Ground-Up.html).
Illustration 23.
Illustration 24.
Illustration 23 shows how the Fresnel formula is used in combination with the shading. The _Bias parameter of the Fresnel shader determines how much of the _MainColour is applied to the sphere. The _Scale determines the radius of the _MainColour within the sphere, and the _Power determines the influence of the saturation variable.
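The empirical approximation from Halladay’s article, with those three parameters, can be written as a small helper; incident is the normalized ray direction and normal the surface normal, and the saturate clamp corresponds to the max/min in the article.

```hlsl
float Fresnel(float3 incident, float3 normal, float bias, float scale, float power)
{
    return saturate(bias + scale * pow(1.0 + dot(incident, normal), power));
}
```

The result can then be used to blend between the shaded _MainColour and the source texture behind the orb. Illustration 24 shows what the sphere looks like with Fresnel applied.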
§ 5.4. Noise
Water in the form of a sphere is not perfectly round; you want it to look alive. For that we can use the normals: to each hit that we get from the circle calculation, you can add noise to its normal.
At first we used a 2D texture to obtain the noise value added to the normal. Making the noise texture was done similarly to the tutorial Brackeys made about 2D noise textures (PERLIN NOISE in Unity – Procedural Generation Tutorial, Brackeys, source: https://youtu.be/bG0uEXV6aHQ). This was not a successful way of adding noise to the sphere, because the 2D texture was not wrapped around the sphere and its values could not be varied by time and position, since sending two textures to the GPU every OnRenderImage cycle is too heavy to process. However, at the very end I found a method that does work.
Illustration 25.
Illustration 25 shows how the “snoise” function (noise3D.glsl, S. Gustavson (stegu), source: https://github.com/ashima/webgl-noise/blob/master/src/noise3D.glsl) is used in the compute shader. The noise is now a function in a cginc file, which can be included in the compute shader file I use. This noise function takes a 3-dimensional position and returns a float value. The float value is applied to the normal at the hit position. The _Frequency parameter increases or decreases the frequency of relief in the noise, while the _Amplitude parameter controls the severity of the amplitude used for the noise.
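A minimal sketch of that usage is shown below. The include file name, the _Time offset for animation and the exact way the noise perturbs the normal are my assumptions; only snoise, _Frequency and _Amplitude come from the text above.

```hlsl
#include "noise3D.cginc" // the snoise(float3) port used in illustration 25

float3 PerturbNormal(float3 hitPosition, float3 normal)
{
    // Sample the noise at the hit position; higher _Frequency means finer
    // relief, higher _Amplitude means stronger displacement.
    float noise = snoise(hitPosition * _Frequency + _Time);
    // Offset the normal by the scalar noise and renormalize: a crude
    // perturbation, but enough to break up the perfectly round silhouette.
    return normalize(normal + noise * _Amplitude);
}
```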
In the video above you see the result of adding the 3D noise to the shaded Fresnel water sphere. It should make the orb look alive and more like water than a simple sphere.
§ 5.5. Performance
Illustration 26.
Illustration 27.
Compared to raymarching, raytracing is about 10% faster on the NVIDIA RTX 2060 laptop version, as shown in illustration 26. However, it did not seem to be faster on the AMD Radeon RX 6900 XT Merc 319 Black graphics card. The 10% gain on the NVIDIA graphics card is logical, because each ray that spends its 64 iterations locating objects along the ray is rather expensive. The difference in raytracing between NVIDIA and AMD is well known among gamers and developers. However, the UserBenchmark score still shows that the AMD card should perform better (NVIDIA RTX 2060 (Mobile) vs AMD RX 6900XT, UserBenchmark, source: https://gpu.userbenchmark.com/Compare/Nvidia-RTX-2060-Mobile-vs-AMD-RX-6900-XT/m701609vs4091). This might mean that this performance test is flawed compared to the UserBenchmark scores, which are gathered from thousands of people.
6. Conclusion
Shaders are great for visual programming, especially with lighting, reflection and refraction. However, making a “realistic waterbendable” water orb is too hefty a job for shaders. Realistic here means using a water shader that closely matches the aesthetic and behavioural characteristics of water in real life: reflection, refraction and cohesion.
The aesthetic of cohesion is the bottleneck in the shader. It requires raymarching to smooth the intersections between objects and make them look somewhat realistic. The raymarching shader used is simply too inefficient to make waterbending viable for games. It seems more efficient to use VFX, animation and water shaders applied to animated objects, like the source of inspiration I found on YouTube from ‘A Bit Of Game Dev’. Using VFX and animations could also make water orbs change form, which is more attractive as an aesthetic than a rather plain water orb. So, all in all, for now raymarching does not seem to be the best way to make waterbending look more realistic.
The search for making the raymarching more efficient came to a halt due to issues that would arise with the method used. There was an idea to only raymarch the water orbs with collisions. The other water orbs and the source texture, which is the camera’s output texture, would be drawn by raytracing. Keeping the other objects from being drawn by raymarching could decrease the amount of work the graphics card puts into the shader. To no avail, however, because it meant that the raymarch shader would visually snap from an intersecting form to a smooth-union form. It snaps because the spheres have to intersect with each other, but the smooth union normally takes effect before they intersect. One possible solution for that problem is increasing only the collider radius, so the sphere is drawn using raymarching before the visual spheres collide. This is the point where the research has halted for now.
A second, but flawed, idea was to make two separate compute shaders and render the raymarch texture on top of the raytrace texture as an overlay. But, as just said, it will overlap the raytrace texture, which means that objects closer to the camera than the raymarch objects would be covered by the raymarch shader. So this idea was easily binned.
7. What research to continue on?
To finalize the ideas that were put in place, the raymarch shader should get normals that look like water waves added to its water orbs. Based on the normals of the water orbs, refraction should be added within a Fresnel effect, which unfortunately did not work well before. Finally, it would be interesting to research how reflections could be added to the water orb using raymarching. If all these things are added, the water orb should look good aesthetically. Some of my fellow students and I noticed jitter in the water behaviour, because the movement is not smoothed and the sudden changes in direction look odd. Personally, I would like to do a partial overhaul to find a possibly cleaner option for the water behaviour, and it would be interesting to see whether there are scientific solutions to make the water orbs cohesive.
To actually compare the shaders used here with the VFX, animations and raytracing shaders, a scene should be made to test the performance of the VFX, animations and raytracing combined. Besides testing performance, the looks should be rated by some random testers. After the testers have rated the raymarch shader method and the VFX, animations and raytracing method, you can assess the preferred style and its performance and decide which option is preferable. Besides that, you could assess whether the style is worth the workload the developer has to put into it, because one of the two methods will be somewhat faster to build.
8. Sources
§ 8.1. Introduction
- Han, K. (2020, July 1). Is the live-action Avatar: The Last Airbender as much of a mess as we remember? Polygon. https://www.polygon.com/2020/7/1/21310066/avatar-the-last-airbender-live-action-movie-m-night-shyamalan-netflix
- Waterbending. (n.d.). Avatar Wiki. Retrieved 6 November 2022, from https://avatar.fandom.com/wiki/Waterbending
§ 8.2. Idea
- A Bit Of Game Dev. (2021, October 8). How To Create Water Effects in Unity – Water Bending Tutorial. YouTube. https://www.youtube.com/watch?v=3CcWus6d_B8&feature=youtu.be
§ 8.3. Water orb behaviour
- Bluefoot, I. (2013, April 27). INK book. Pinterest. https://www.pinterest.com/pin/cool-water-cohesion-gif–233131718183223504/
§ 8.4. Raymarch shader
- Peer Play. (2018, November 23). Raymarching Shader – Unity CG/C# Tutorial _Chapter[1] = ‘Shader Theory’; //PeerPlay. YouTube. https://www.youtube.com/watch?v=oPnft4z9iJs
- Quilez, I. (n.d.). Inigo Quilez. Retrieved 21 October 2022, from https://iquilezles.org/articles/distfunctions/
§ 8.5. Water shader
- Ray Tracing. (2019, April 19). NVIDIA Developer. https://developer.nvidia.com/discover/ray-tracing
- Kuri, D. (2018, May 3). GPU Ray Tracing in Unity – Part 1 – Three Eyed Games. http://three-eyed-games.com/2018/05/03/gpu-ray-tracing-in-unity-part-1/
- Halladay, K. (2014, February 18). Kyle Halladay – The Basics of Fresnel Shading. http://kylehalladay.com/blog/tutorial/2014/02/18/Fresnel-Shaders-From-The-Ground-Up.html
- Brackeys. (2017, May 17). PERLIN NOISE in Unity – Procedural Generation Tutorial. YouTube. https://www.youtube.com/watch?v=bG0uEXV6aHQ&feature=youtu.be
- Gustavson, S. (stegu). (n.d.). webgl-noise/noise3D.glsl at master · ashima/webgl-noise. GitHub. Retrieved 7 November 2022, from https://github.com/ashima/webgl-noise/blob/master/src/noise3D.glsl
- NVIDIA RTX 2060 (Mobile) vs AMD RX 6900XT. (n.d.). UserBenchmark. https://gpu.userbenchmark.com/Compare/Nvidia-RTX-2060-Mobile-vs-AMD-RX-6900-XT/m701609vs4091