Optimization on Ray Marching

the first optimization attempt is to make the render target as small as possible. so i changed the render target color format from RGBA32F to RG32F. the depth goes into the red channel, which is 32 bit, and the color part into the green channel, which is also 32 bit.

for the depth it is pretty straightforward: just store it right away and done. for the color part, the color is a vec4 value. i use RGBA-to-float encoding, so i get a single float value that stores that vec4 color in one 32 bit channel.
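the linked article has the exact encoder; as a rough sketch (the `pack_rgba`/`unpack_rgba` helpers are my assumption, not the author's shader code), the usual trick quantizes each channel to 8 bits and stacks the bytes at successive powers of 1/256 inside one float:

```python
def pack_rgba(rgba):
    """quantize each channel of a [0,1] vec4 to 8 bits and stack the
    bytes at successive powers of 1/256, yielding one float in [0, 1)."""
    packed = 0.0
    for i, c in enumerate(rgba):
        byte = min(max(int(c * 255.0 + 0.5), 0), 255)
        packed += byte / 256.0 ** (i + 1)
    return packed

def unpack_rgba(packed):
    """peel one byte per channel back off the packed float."""
    out = []
    for _ in range(4):
        packed *= 256.0
        byte = int(packed)
        packed -= byte
        out.append(byte / 255.0)
    return tuple(out)
```

note that a real 32 bit shader float only has 24 mantissa bits, so on the GPU the lowest channel loses precision; python doubles hide that here.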

if you are interested in the encoding part you can read it here.

for rendering i need to disable any texture filtering, because it might break the encoding and turn the color into noise, as shown in the image below. disabling texture filtering exposed the primitive i’m using in the ray marching. yes, i’m using cubes for estimating object volume. need a better approach.


here is the rendering window split. i think i can do simple multisampling on the color map and normal map to remove the voxelized look.



[edit] i thought too much detail in the depth voxelized my scene; it turns out heightmap texture filtering was the problem here. losing bilinear filtering on the height texture created the voxelized effect. re-enabling it fixed the problem. i think that voxel look could be a useful effect someday.
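for reference, this is roughly what bilinear filtering does per sample: it blends the four texels around the sample point, while nearest-neighbour lookup snaps to one texel, which is exactly the stair-stepping that voxelized the heightfield. a small sketch in python (not the author's shader; the GPU does this in hardware):

```python
def sample_bilinear(tex, u, v):
    """sample a 2D grid at fractional texel coordinates (u, v) by
    blending the four surrounding texels."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(tex[0]) - 1)
    y1 = min(y0 + 1, len(tex) - 1)
    fx, fy = u - x0, v - y0
    # horizontal blends on the two rows, then a vertical blend
    top = tex[y0][x0] * (1.0 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1.0 - fx) + tex[y1][x1] * fx
    return top * (1.0 - fy) + bot * fy
```

with nearest filtering every sample inside a texel returns the same height, so the marched surface becomes flat-topped steps; bilinear gives a continuous ramp between texels.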

Ray Marching for Geometry



i was wondering if it is possible to beat polygon counts with ray marching. instead of just wandering around with the idea, i started coding. for a start, every object is like a terrain object: it only uses a single height map and a color texture. usually people store height in the alpha channel, but i think the alpha channel is needed to make a non-“cube” object, for example a coin. same technique, same primitive, plus an alpha channel. check out the famous lion and wall texture being rendered.
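a minimal CPU sketch of the idea, assuming a fixed-step march against a height map with an alpha mask for the non-cube silhouettes (the function name, grid layout, and step sizes are mine, not from the original shader):

```python
def march_heightfield(height, alpha, origin, direction,
                      step=0.05, max_steps=256):
    """step along the ray; report a hit when the ray drops below the
    height field on a texel whose alpha mask marks it as solid."""
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(max_steps):
        x += dx * step
        y += dy * step
        z += dz * step
        ix, iy = int(x), int(y)
        if not (0 <= ix < len(height) and 0 <= iy < len(height[0])):
            continue  # outside the texture footprint
        if alpha[ix][iy] > 0.5 and z <= height[ix][iy]:
            return (x, y, z)  # hit point on the surface
    return None  # ray missed the object
```

the alpha test is what lets the same cube primitive render a coin: texels with alpha below the threshold are simply never hit, cutting the silhouette out of the volume.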


from the distance function i can get a nice depth map of the objects. each object means doing a render target volley so i can compare the depths. here is a quick example of 2 objects rendered; black means far from the camera. notice the nice intersection between the objects.
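the per-pixel depth compare between two object passes can be sketched like this (a hypothetical `composite` helper; on the GPU this happens in the depth test, not in user code):

```python
def composite(depth_a, color_a, depth_b, color_b):
    """per pixel, keep the fragment whose depth is nearer to the
    camera (smaller value), mimicking the depth compare between
    two render-target passes."""
    depth, color = [], []
    for da, ca, db, cb in zip(depth_a, color_a, depth_b, color_b):
        if da <= db:
            depth.append(da)
            color.append(ca)
        else:
            depth.append(db)
            color.append(cb)
    return depth, color
```

because the compare runs per pixel, two marched objects interpenetrate correctly, which is the intersection visible in the screenshot.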


next i created a simple deferred renderer by generating a normal map. the normal map is generated from the depth field. deferred rendering also requires positions, which are likewise generated from the depth map. as you can see below, of the 3 components of deferred rendering, the position is not being rendered. the lower right square is the scene lit by a simple lighting calculation.
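generating normals from a depth field usually means taking central differences between neighbouring depth samples; a rough screen-space sketch (my own helper, under the assumption that depth varies smoothly per texel):

```python
import math

def normals_from_depth(depth):
    """estimate a unit normal per texel from a 2D depth field using
    central differences; border texels keep a default normal."""
    h, w = len(depth), len(depth[0])
    normals = [[(0.0, 0.0, 1.0)] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # slope of the depth field in x and y
            dzdx = (depth[y][x + 1] - depth[y][x - 1]) * 0.5
            dzdy = (depth[y + 1][x] - depth[y - 1][x]) * 0.5
            nx, ny, nz = -dzdx, -dzdy, 1.0
            inv = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
            normals[y][x] = (nx * inv, ny * inv, nz * inv)
    return normals
```

positions fall out the same way: with the depth and the pixel's ray direction you can reconstruct the view-space point, so the G-buffer never needs a separate position pass.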


the final image looks like this. well, i dropped the ambient lighting for more geometry exposure. as you can see, artifacts appear in some places: on the bottom middle part of the image some rippling shows up. it is easily fixed by increasing the color depth of the render texture, but increasing the render target format from a8r8g8b8 to a32r32g32b32 sadly kills the performance, probably due to render target switching (volley).
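the rippling reads like quantization banding: an 8 bit channel snaps every value to 1/255 steps, so a smooth depth or color gradient turns into visible bands, while a 32 bit float channel stays smooth. a tiny illustrative sketch (my assumption about the cause, consistent with the fix above):

```python
def quantize(value, bits):
    """snap a [0,1] value to the nearest level of an unsigned
    integer channel with the given bit depth."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels
```

on a slow gradient the 8 bit error (up to about 1/510) is what shows up as ripples; at 16 or 32 bits the error drops below anything visible.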



i borrowed some textures from these sites

steep parallax mapping

cool indian temple texture


mecha+ rigging