Blades of Triangulum Progress 2

Some fixes on the rendering. Here are screenshots of the G-buffers; some are still not right. They are: color buffer, position, material attributes, normals, depth, fake reflection, shadow, and emissivity-based bloom.

With all these new improvements the rendering looks a lot different, especially the glowing parts. The lightsabers now look much better, although the stages still look empty.

So, how much has the game changed over these three months? Well, a lot has happened. Below is one of the very early playable states. The lighting is very different; behind that red armor the character is still pretty much human, and the swords are still authentic medieval swords.


Below is the very early character design. The first idea for the game was a duel between medieval nobles in shining ceremonial armor, with the vision of showing super complex decorated armor with varying materials, visible carvings, etc. During development that proved not enough to show a complex character, due to the uniform metallic material and the small carvings not being visible from far distance, LOL.


Below is another example of what I've been iterating: a knight in chainmail and a pot helmet. Pretty cool, eh?


Below is another iteration, a robotic character. This one ended up as two of the characters in the final game roster.


Below is the final character design, fully sculpted in Blender. The end result differs from the sculpted model, because in the middle of programming the character I decided it would look way better as an action-figure-style character, with visible joints and such. Those focal carvings are going to pop real good in volume rendering.


Next post I will show you some soundtracks, and maybe a gameplay video. Thank you for reading.

Blades of Triangulum

To keep my spirits up on the volumetric rendering experiments, why not make a game out of it? The current state is still experimental; things still have glitches here and there, but I think they look rather nice. So here is my new game, called Blades of Triangulum.

This game is about greatsword duels between aliens, taking place on an unknown planet in the Triangulum galaxy. The plan is to adopt Babylonian architecture for the buildings, humanoid reptilians with scaly exoskeletons as the inhabitants, underwater-like plants, and biology-based technology. For the game mechanics, medieval sword fighting became the main influence for the animation.

Here is what the title screen looks like: a volcanic plain. By the way, the logo is a bit awkward and about to be replaced.


Versus mode. As seen below, the characters are rendered using view-aligned slices. Not that good, I guess, but rather unique. Nice bumps everywhere thanks to the volumetric rendering; even though the slices are visible, the character definition is as sharp as ever.


With a brutal light count, color, and placement, the extreme geometry is well exposed, and thanks to the deferred rendering, a lot of light probes can be used.


An in-game screenshot: two volumetric characters about to cut each other.


one giant leap towards victory


Below, footwork in action. Where the attack is going to land, how to avoid being hit, how to parry: it is all based on the footwork mechanic.


Below, the computer kills the player.


Below, the player kills the computer player.


Below, RPG-like six-slot customization. The equipment changes the character's appearance and the underlying stats that make characters differ from one another. Besides stats, equipment also alters the character's move list; for example, a "trickster blade" can pull more advanced tricks but no power- or speed-based techniques, etc.


The character below is considered a freedom-fighter character: minimum armor with an advanced sword.


The character below is an example of a noble alien, with imperial armor and a basic orthodox sword.


Thanks for reading. Leave a comment if you are interested in this game.


Skinned Volumetric Character

My first attempt to render a skinned volumetric character. Low FPS because of the high density of slices. A quick look gives a lifelike figure impression. I haven't implemented any transfer function; I think a low density could give a good result, it's just that editing the shader is not my top priority right now.


Another shot, a closer look. Something is wrong with the neck, fingers, etc. To be honest, it took me a while to make a custom exporter from Blender to produce this skinned mesh, with a custom file format. The gaping neck is actually caused by the algorithm failing to sort 8 vertices into a cuboid shape; because the model is a Blender mesh, I haven't been able to get the vertices in the correct order.


Anyway, animated muscles and veins are awesome to watch.

Some 3D texture test

Before I start, I'd like to clarify what I mean by "3D texture". In my rendering code there is no actual 3D texture; what I call a 3D texture here is one large ordinary texture. So if I write 32x32x32, the actual texture is 32×1024. In my opinion it is still 3D, just represented in an uncommon way, perhaps. I have 2D slices of the object, from the bottom all the way up to the tip, each slice 32×32 pixels, and I arrange those slices in one big texture. Let's begin.
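As a minimal sketch of the layout described above (the names are mine, not from the engine code), mapping a texel in the conceptual volume to a pixel in the stacked 2D atlas is just an offset by the slice index:

```cpp
// Map a (x, y, z) texel of a size^3 volume to a pixel in the
// size x (size * size) atlas, where slice z starts at row z * size.
struct Texel2D { int u, v; };

Texel2D volumeToAtlas(int x, int y, int z, int size) {
    return { x, z * size + y };
}
```

So for a 32x32x32 volume, slice 3 occupies rows 96 to 127 of the 32×1024 texture.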

The picture below shows the 3D texture rendering, LOL. I'm really sorry the result is not what you might have expected; I still think it's rather interesting, though. My laptop simply could not handle editing and rendering a high-polygon Blender scene. Blender rendering becomes really slow because I have to clip the high-poly 3D model using a boolean operation animated as the camera moves from the bottom of the object all the way up to the tip. The editing part especially is really slow: moving the clipping cube, checking the clipping result, and going back to adjust its position again. So, to speed things up, I had to lower the resolution of the render target image down to 32 pixels; this only speeds up the rendering, editing is still slow. After the rendering finished I had a 32x32x32 texture that I then fed to my volume renderer in Irrlicht.


Below is another example of a baked 3D texture being rendered, at the same 32x32x32 resolution. All the detail I was expecting is gone because of the insufficient resolution. Some other time, when I have decent hardware, I will probably bake higher-resolution textures.


Mapping 2D textures inside a cube

As I mentioned in an earlier post, I will try to explain the sort of things I call primitives. Firstly, of course, the 3D primitives being used are vertices, triangles, etc. After the 3D volume slices are created, which form a cube, I do some manipulation of the UV coordinates to render a totally different object. These objects are what I call primitives. Here they are.

Basic Cube

Below is a basic cube with 3D texture coordinates, our starting point. From this cube I can create four types of object: a basic volume using a 3D texture, a single-sided cube, a double-sided cube, and a double-sided cube with front and back.


Basic Volume

This uses standard 3D texture sampling to determine each slice's values. There are plenty of references online for doing this, so I'm not going to make an example here. Well... let's make it next post; baking a Blender model into a 3D texture is so time consuming.

Single-Sided Cube

Below is the single-sided cube. This means a 2D texture where the heightmap pixel value determines the displacement along one axis; in practice, any value greater than the heightmap pixel is transparent.
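The transparency rule above can be sketched as a CPU-side stand-in for the shader logic (a simplification under my own naming, not the actual shader code):

```cpp
// Alpha test for the single-sided cube: a fragment at height h (0..1)
// along the displacement axis is opaque only while h is at or below
// the heightmap sample at that (u, v); everything above is discarded.
float singleSidedAlpha(float h, float heightmapValue) {
    return (h <= heightmapValue) ? 1.0f : 0.0f;
}
```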


Double Sided Cube

The double-sided cube is basically the same as the single-sided one; the difference is that the displacement is reflected from the center point.


Double Sided Cube Front And Back

This type extends the double-sided cube. Instead of just reflecting the heightmap values, it uses different textures for the front and back sides via the texture coordinates: half front, half back.
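A rough sketch of both ideas, reflection from the center and the split texture coordinate, could look like this (the half-front/half-back layout and the names are my assumption, not taken from the engine):

```cpp
#include <cmath>

// Double-sided: displacement is measured from the center of the cube,
// so both halves mirror the same heightmap (result in 0..1).
float reflectedHeight(float h) {
    return std::fabs(h - 0.5f) * 2.0f;
}

// Front-and-back variant: squeeze u into half the texture width and
// pick the front half (u in [0, 0.5)) or the back half (u in [0.5, 1])
// depending on which side of the center the fragment is on.
float frontBackU(float u, float h) {
    float halfU = u * 0.5f;
    return (h >= 0.5f) ? halfU : halfU + 0.5f;
}
```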


Basic Cylinder 

Using a simple calculation I can get a cylinder out of the basic cube, and onto this cylinder I can map a 2D cylindrically projected texture the same way.
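The "simple calculation" is presumably a conversion to cylindrical coordinates; a sketch under that assumption (my own names):

```cpp
#include <cmath>

// Cylindrical mapping for a point in the cube, with the cylinder axis
// along y and x/z centered at the origin: the angle becomes u, the
// height becomes v, and the radius is what gets compared against the
// cylindrically projected heightmap for the transparency test.
struct CylUV { float u, v, radius; };

CylUV cylinderMap(float x, float y, float z) {
    CylUV r;
    r.u = std::atan2(z, x) / (2.0f * 3.14159265f) + 0.5f; // angle -> 0..1
    r.v = y;                                              // height along the axis
    r.radius = std::sqrt(x * x + z * z);                  // distance from the axis
    return r;
}
```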


The image below shows what a cylinder might look like after the shader kicks in.


OK, next post I will pay my debt and show you the spherical object type, and a 3D volumetric texture in action.

Below is a sneak peek of what this method can do: a full volumetric scene, an interpretation of a .tmx file. All the geometry fits in only 16 objects. Actually it fits in one, but I need some frustum culling. I have a plan to mix 2D and 3D textures all in one shader, and stuff all of that into one geometry, one draw call.


Some Screenshots

One of the advantages of my new ProxyNode (my custom Irrlicht scene node) is the fact that it is an Irrlicht scene node. That means regular skeletal animation works with it; the picture below shows how bone parenting works with the proxy node. The first noticeable problem is the generated screen-space normal map: it gives odd outlines on objects. Not a big problem visually, it still looks nice. The other problem is the density of the slices. The number of slices required for a nice-looking surface is pretty high: I need at least 15 slices per Irrlicht unit, and 50 slices gives the nicest-looking surface, which is pretty high and drops the FPS a lot. But it's still around 30 FPS, which is a good sign, since I haven't done any geometry generation in the shader yet. I'd say it is a fair trade-off for unlimited possible contours; well, texture size is the limit here.

When the slice density is too low, the screen-space normals become totally broken: because each normal always faces the camera, normals facing other directions are only visible at the edges of the slices. I'm working on some kind of contour-interpolating shader to fix this (WIP). Once it is done, I can use a much lower density for rendering and still get a good result.


Below is an example of using a low-resolution texture for the head. Without soft brushes when painting it in GIMP, I get a nice voxelated surface: pixel art in, pixelated results out. The heightmap is represented nicely by the slices. For the more complex surfaces, like the axe and armor, all the geometry sticks out nicely. It is a rather ugly-looking model, since I didn't bake any 3D model from Blender like I usually do, just quickly edited some images from Google.


Below, the same character from a different angle. Lighting is pretty much working, with just some minor glitches from the lack of slice count.


Next post, I will try my best to describe the primitives involved in this rendering, besides the proxy geometry of course, since that is already described in online articles.

CPU assisting GPU

Sounds strange, right? In reality, there is not enough headroom for the workload required by my ray marching engine. When I use a high-end GPU, performance goes through the roof and rendering finishes fast. But on the average available GPU, this rendering engine has no chance. I did some inspection: while the CPU literally sits and waits for the GPU to complete its task, the GPU's resources are fully occupied. I checked the CPU load using Windows Task Manager and used GPU-Z to check the GPU load; I got below 1% on the CPU and more than 90% on the GPU. That fact suggests this method needs to wait a decade before we can use it. I was too naive.

So what happened to the volume rendering? As the title says, the CPU assists the GPU: I finally accepted the algorithm covered in GPU Gems. By using proxy geometry, the CPU actually assists the GPU. Texture sampling (texture2D) went down almost 64 times. This is because I was using a maximum of 64 steps for the ray marching to reach the object surface, and sometimes it hit 128 steps for precision. When ray marching, each step requires reading a texel. While that is required for the distance function to reach the object surface, it also kills performance, not to mention the other expensive mathematical operations done each step. I must admit that GPUs are really awesome; when you imagine that the fragment program runs for every pixel on the screen, it becomes clear that the workload is just too big for most GPUs to handle right now.
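To make the cost concrete, here is a CPU-side stand-in for the per-fragment march (a sketch with my own names and a hypothetical sphere distance function in place of the texture read; the real shader samples a texel each step):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical signed-distance sample: a unit sphere at the origin.
// In the shader this would be one texture read per step.
float sampleDistance(const Vec3& p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// March along the ray up to maxSteps times, stepping by the sampled
// distance; return the step count on a hit, or -1 on a miss. With 64
// to 128 of these per pixel, the texture reads dominate the frame time.
int rayMarch(Vec3 origin, Vec3 dir, int maxSteps = 64) {
    Vec3 p = origin;
    for (int i = 0; i < maxSteps; ++i) {
        float d = sampleDistance(p);
        if (d < 0.001f) return i; // close enough: surface reached
        p.x += dir.x * d; p.y += dir.y * d; p.z += dir.z * d;
    }
    return -1; // ran out of steps
}
```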

Back to proxy geometry. The idea of proxy geometry is to simulate the ray traveling from the camera to the end of the volume data. This is done by slicing the volume data with view-aligned planes, from the nearest to the farthest position relative to the camera. By doing so, the tracing operation is gone: it is replaced by an alpha blending operation when rendering each slice.
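The slicing step can be sketched like this (a simplification under my own names; the real code also intersects each plane with the volume's bounding box to build the slice polygons):

```cpp
#include <vector>

// Place sliceCount planes perpendicular to the view direction, evenly
// spaced between the nearest and farthest extent of the volume along
// that direction. Rendering them in order with alpha blending plays
// the role of the ray marching loop.
std::vector<float> sliceDistances(float nearDist, float farDist, int sliceCount) {
    std::vector<float> out;
    out.reserve(sliceCount);
    float step = (farDist - nearDist) / (sliceCount - 1);
    for (int i = 0; i < sliceCount; ++i)
        out.push_back(nearDist + step * i);
    return out;
}
```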

According to the article, there are some major disadvantages to this method. First, generating the proxy geometry takes time, though it can be sped up by doing it in the geometry shader. Second, 3D volume data is a huge data set; when rendering, it takes time to upload into GPU memory, not to mention that generating the volume data is also not an easy task. In general, while far fewer operations are required, the amount of memory required increases. Third, each slice contains a lot of blank area which wastes computational resources; of course that is easily fixed by doing segmentation on the volume data so blank areas are skipped.

From the last experiment, I've learned that volume data can be stored inside 2D images, in at least three types: rectangular (imagine a tessellating plane), cylindrical, and spherical. Some people will say, just do tessellation, and that might be true; that is what GPU manufacturers and rendering APIs have been improving this whole time. Still, my curiosity got me: I think there are a lot of geometric shapes that are impossible to achieve "economically" with tessellation. So I just went into coding mode.

The first step was creating the proxy geometry and putting some debugging data on screen. As seen below, the texture coordinates are correct: x red, z green, and y blue. I also drew the outlines of the slices.


The view-aligned slices look pretty, don't they? Below is the rendering without the slice outlines.


After all the required elements were set up, I did a test render using my favorite Indian temple texture. Here is what I got.


Again, tessellation might be much faster here, but volume rendering is where we can get round surfaces to be perfectly round and focal points to look as sharp as possible. As you can see above, even without lighting (yet), we can really feel the bumpiness of the surface. The tests above are running on a laptop with an Intel i5-4200U and its integrated Intel graphics chipset. Don't worry, it reaches 200+ FPS when I'm using my higher-performance NVIDIA GT 720M.

Okay, next post? Deferred rendering :).

Progress On Ray Marching

Still no meaningful progress, just a couple of cylindrical object tests. Below is a screenshot showing the scene rendered on a pretty low-end GPU: at a resolution of 1024×512 I get about 12 FPS, which is bad. Tested on an NVIDIA GTX 980, it does 200+ FPS; pretty satisfying for next-gen hardware. I should add that this rendering system literally takes less than 1% CPU, which is a big plus: if I build a threading system around this, I can have a pretty decent game engine. 20161212

See below: a cylinder can look really nice. I baked a high-polygon model in Blender, chopped it into several pieces, baked the displacement maps onto textures using cylindrical projection, then put them together in C++. You can still notice the low FPS due to the same low-performance integrated graphics card I'm using. It feels like tessellation without a geometry shader. If you look closely you will notice the shadows are still wrong. 20161212b

Below is the Indian temple texture I used previously. This scene is made out of cubes and one sphere. The stats in the upper left corner say 2 draw calls and 11 objects; basically, I implemented a composite object type, where objects are grouped together to minimize draw calls and decrease render target switching. 20161212c
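The grouping idea can be sketched as appending meshes into one shared buffer, offsetting indices as you go (a simplification with my own names; the real scene node also handles materials and transforms):

```cpp
#include <vector>

// Composite buffer: several objects share one vertex / index buffer,
// so the whole group costs a single draw call. Indices of each
// appended object are offset by the vertices already in the buffer.
struct CompositeBuffer {
    std::vector<float>    vertices;  // packed positions, 3 floats each
    std::vector<unsigned> indices;
    int objectCount = 0;

    void append(const std::vector<float>& v, const std::vector<unsigned>& idx) {
        unsigned base = static_cast<unsigned>(vertices.size() / 3);
        vertices.insert(vertices.end(), v.begin(), v.end());
        for (unsigned i : idx) indices.push_back(base + i);
        ++objectCount;  // e.g. 11 objects can still be 1 draw call
    }
};
```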

Full-size images: image1 image2 image3