Indirect Illumination

Posted: October 31, 2011 in RayTracing

I added indirect illumination by sampling the hemisphere around each intersection. Unfortunately this produces very noisy images, especially in a busy scene containing very different frequencies of light. Sampling the hemisphere with a cosine-weighted probability density helps reduce the noise a little.
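The cosine-weighted sampling boils down to Malley's method: pick a uniform point on the unit disc and project it up onto the hemisphere. A minimal sketch (the Vec3 type and function name are just illustrative, not my actual tracer code):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Cosine-weighted sample on the unit hemisphere about +Z (Malley's method).
// The resulting pdf is cos(theta)/pi, which cancels the cosine term in the
// rendering equation and concentrates rays where they contribute most,
// which is why it reduces noise compared to uniform hemisphere sampling.
Vec3 cosineSampleHemisphere(float u1, float u2)
{
    const float r   = std::sqrt(u1);               // radius on the unit disc
    const float phi = 2.0f * 3.14159265f * u2;     // angle on the unit disc
    const float x = r * std::cos(phi);
    const float y = r * std::sin(phi);
    const float z = std::sqrt(std::max(0.0f, 1.0f - u1)); // lift onto hemisphere
    return { x, y, z };
}
```

Feed it two uniform random numbers in [0,1) per bounce; the returned direction is in the local frame of the surface normal.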





…the beginnings

Posted: October 31, 2011 in RayTracing

Below are some images created with the ray tracer I'm currently working on. These images were created using Whitted-style ray tracing.


Posted: October 29, 2011 in Real-Time Rendering

Terrain rendering through multiple draw calls by dynamically stitching together patches. The pictures shown use a 4096×4096 heightmap, giving around 6 square miles of exploration at 1 metre per game unit. The shader accepts up to 16 layers, allowing different textures to be painted on using blending. The patches are processed in the vertex shader (displacement, normals, tangents), which makes the terrain completely dynamic; it could even be made destructible by using a dynamic heightmap. The terrain is quite cheap in terms of memory, as only the different LOD patches (2^4, 2^5, 2^6, 2^7, 2^8) are stored and reused. The real cost comes from storing the heightmap, which can be fairly large:

4096 * 4096 ~ 16MB (6 Square miles)
8192 * 8192 ~ 67MB (25 Square miles)
16384 * 16384 ~ 268MB (100 Square miles)

However, these need not be loaded all at once; they can be streamed in when required, releasing out-of-view ones.
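The figures above work out at one byte per texel, i.e. bytes = side × side. A quick sanity check (assuming that one-byte-per-texel format; the function name is my own):

```cpp
#include <cstdint>

// Heightmap storage cost at one byte per texel: bytes = side * side.
// 64-bit arithmetic so the 16384 case doesn't overflow a 32-bit int.
std::uint64_t heightmapBytes(std::uint64_t side)
{
    return side * side;
}
```

heightmapBytes(4096) gives 16,777,216 (~16MB), 8192 gives ~67MB, and 16384 gives ~268MB, matching the sizes listed.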

I picked up the second edition of PBR today, as I like to keep my book collection up to date. I'm not the biggest fan of the book, as I find it difficult to read, and I don't like the approach they took of explaining how to implement such a tool; I'm more a fan of having something simple that works at multiple stages of the process and building it up, as is done in “Ray Tracing from the Ground Up”, which is a much easier read. The new edition brings in new sections on Subsurface Scattering, Metropolis Light Transport, Precomputed Light Transport and some other stuff. They also made two quite big changes. One was to do away with the DLL module approach, which I agree with; it was needless complexity. But they've also introduced threading, which I think was a bad move to be honest, as again it adds a lot of complexity. I think these kinds of books should only focus on the problem at hand, which is producing physically accurate images, not producing physically accurate images …FAST. I have threading in my ray tracer, and whilst it wasn't that difficult to implement, a lot of the code has to be changed/organised differently to accommodate it, which just makes the task all the more difficult when you're first starting out. Unfortunately it's not something they could have just chucked in the appendix, as threading needs to be thought about from the very beginning, but I think it should have been left out.

I received my copy of Practical Rendering and Computation with Direct3D 11 today and am slowly working my way through it. First thoughts are that it's detailed, very detailed. The chapter on the rendering pipeline itself is probably worth the money. Before this I didn't have much material on the DirectX API; my only resource was the SDK docs. Frank Luna's books are also good, but lack detail, which is fair enough as they're introductory books, whereas PR&C is more of a reference manual …with some examples.

I think it’s going to end up being one of those books you’ll always have on your desk.

Particle Editor

Posted: August 29, 2011 in General, Real-Time Rendering

This is my Particle Editor, which was inspired by Unreal's Cascade; I call it ‘AssCade’. It's got a lot of the same functionality, including being able to graph all sorts of effects. As you can see from the images, each particle system is made from a series of emitters, which are in turn affected by modifiers. All the computation is done on the CPU, as that made it easier to do effects on sibling emitters. Once all the emitters have been processed, they are sent to the GPU to render, where each particle consists of a single vertex that is expanded into a quad in the geometry shader. Rather than have each emitter manage a vertex buffer, I chose the more optimised route of having each particle system manage the vertex buffer used by its child emitters, requiring only a single lock per particle system, as opposed to one for each emitter, which can really add up with effects usually having at least 3–4 emitters.
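The single-lock scheme amounts to copying every emitter's particles into one contiguous, pre-locked buffer. A sketch of the idea (in D3D terms `dst` would be the pointer returned by the one Map/Lock on the system's dynamic vertex buffer; all names here are illustrative, not the editor's actual code):

```cpp
#include <cstring>
#include <vector>

// One vertex per particle; the geometry shader later expands it to a quad.
struct ParticleVertex { float pos[3]; float size; float colour[4]; };

struct Emitter { std::vector<ParticleVertex> particles; };

// Fill the particle system's shared buffer from every child emitter.
// Because the buffer is locked once for the whole system, the per-frame
// lock count stays at one regardless of how many emitters the effect uses.
std::size_t fillSystemBuffer(const std::vector<Emitter>& emitters,
                             ParticleVertex* dst, std::size_t capacity)
{
    std::size_t written = 0;
    for (const Emitter& e : emitters) {
        std::size_t n = e.particles.size();
        if (written + n > capacity) n = capacity - written;  // clamp, never overrun
        std::memcpy(dst + written, e.particles.data(), n * sizeof(ParticleVertex));
        written += n;
    }
    return written;  // total vertex count: one draw call per system
}
```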

I am toying around with the idea of having all the computation performed on the GPU to increase performance, by compiling/creating an HLSL effect file from the particle system nodes, similar to what Epic does with their material editor, but I haven't got around to that yet.

Shader Editor

Posted: July 29, 2011 in General, Real-Time Rendering

This is a shader editor application I wrote that allows you to write and then test your shaders in the same editor. You write the shader in HLSL on the right and see it rendered to a mesh on the left. The editor has a bunch of default meshes such as cubes, spheres and planes, but also allows you to import your own using my 3ds Max exporter.

I've written my Effect library to compile shaders for multiple vertex formats, so that if you have a shader you want to use on different formats, you can use preprocessor techniques to break the HLSL code into a series of #if/#else defines for each format your shader supports. The shader is then compiled for each format. When a mesh is loaded in and an effect is to be applied, its vertex format is compared against the compiled versions of the effect, and if a match is found (i.e. if the shader was compiled for that particular vertex format), then that specialised shader is applied.

The editor is pretty handy, as it also reports back any compilation errors from fxc.exe in a text window at the bottom of the editor, and includes an assembler view, which allows you to see the code in assembly. Textures are supported by simply specifying the location of the file within the texture definition using annotations, and this is made easier by using the snippet tool for common code.

Once the effect is compiled and produces the correct results, the editor allows you to make a material from that effect. A material within my engine defines an effect to be applied and the effect parameters (such as textures, samplers and material values), which are either created in code or loaded/saved as an XML file. The editor creates a material file by parsing the effect file for values that the user can set, these being textures/samplers and any data within “cbMaterial”, which is the material constant buffer. Again, using annotations within the shader, the author can specify how the user sees this data within the editor. So a “float4 DiffuseColour” can be given the annotation “GUI_COLOUR”, which will present the user with a colour selection tool, and a “float SpecularPower” can be given the annotation “GUI_SLIDER MIN[0] MAX[100]”, which will present a slider for setting floating-point values. This list of values is then presented to the user as GUI widgets, which they can use to set the values in a convenient way. As settings are changed, the results are again seen in the view window. Once the user is happy, they can export the material to an XML file. The effect is exported separately and is only referenced by the material.
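The annotation-to-widget step can be sketched as a small string parse, mapping each annotation to a widget description. This is only a hypothetical reconstruction of the idea, not the editor's code; the enum and function names are mine:

```cpp
#include <string>

// Hypothetical widget kinds the editor might present for a material value.
enum class Widget { ColourPicker, Slider, TextBox };

struct WidgetDesc { Widget type; float min = 0.0f; float max = 1.0f; };

// Parse annotations like "GUI_COLOUR" or "GUI_SLIDER MIN[0] MAX[100]".
WidgetDesc widgetFromAnnotation(const std::string& a)
{
    if (a.rfind("GUI_COLOUR", 0) == 0) return { Widget::ColourPicker };
    if (a.rfind("GUI_SLIDER", 0) == 0) {
        WidgetDesc d{ Widget::Slider };
        // Pull a number out of "KEY[value]"; 0 if the key is missing.
        auto num = [&](const char* key) {
            auto p = a.find(key);
            if (p == std::string::npos) return 0.0f;
            p = a.find('[', p) + 1;
            return std::stof(a.substr(p, a.find(']', p) - p));
        };
        d.min = num("MIN");
        d.max = num("MAX");
        return d;
    }
    return { Widget::TextBox };  // fallback for unannotated values
}
```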

Ocean Shader

Posted: December 1, 2010 in Real-Time Rendering

This is a real-time ocean shader I've been working on. The bumpiness is achieved through both vertex displacement and normal mapping, where the vertices are displaced vertically on a grid that is projected in view-frustum space. This way, no buffers need to be updated; the vertices are simply projected in the vertex shader, so all of them end up on screen and none are wasted.
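The core of a projected grid is intersecting a per-corner view ray with the ocean plane (y = 0), so grid vertices only ever land where the camera is looking. A minimal sketch of that step (types and names illustrative):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Intersect a camera ray with the ocean plane y = 0.
// Each grid corner becomes a ray through the view frustum; the hit point
// is where that grid vertex sits in world space before displacement.
bool projectToOceanPlane(Vec3 origin, Vec3 dir, Vec3& hit)
{
    if (std::fabs(dir.y) < 1e-6f) return false;  // ray parallel to the plane
    float t = -origin.y / dir.y;                 // solve origin.y + t*dir.y = 0
    if (t < 0.0f) return false;                  // plane is behind the camera
    hit = { origin.x + t * dir.x, 0.0f, origin.z + t * dir.z };
    return true;
}
```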


Posted: October 29, 2010 in General

Since using high-level shader languages like DirectX HLSL, I've become quite the fan of vector swizzling, and wanted to use it with my maths library. My solution was to write a quick console app to write the code for me, and then just include those files in my vector classes. Probably not the most ingenious solution there is, but it worked well for me. This technique does produce a fair number of new member functions though:

Vec2 = 2^2 + 2^3 + 2^4 = 28
Vec3 = 3^2 + 3^3 + 3^4 = 117
Vec4 = 4^2 + 4^3 + 4^4 = 336

But now I can write code like :

fsVec2 a = fsVec2( 1.0f, 2.0f );
fsVec4 b = a.xxxx();
fsVec3 c = a.xyx() + b.www();

I chose not to add macros to enable the “.xxxx” syntax, as opposed to “.xxxx()”, as I don't think the fewer keystrokes justify polluting the global namespace, especially as I like to use names like “xy” and “xx” for local variables.
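The generator itself boils down to enumerating every component combination of length 2, 3 and 4. A minimal sketch of such a generator (the fsVecN names come from the snippet above; everything else is illustrative):

```cpp
#include <string>

// Emit every swizzle accessor for an n-component vector ("xy", "xxx", ...).
// For Vec2 that is 2^2 + 2^3 + 2^4 = 28 members, matching the counts above.
int emitSwizzles(int n, std::string& out)
{
    static const char comp[4] = { 'x', 'y', 'z', 'w' };
    int count = 0;
    for (int len = 2; len <= 4; ++len) {
        int combos = 1;
        for (int i = 0; i < len; ++i) combos *= n;   // n^len combinations
        for (int c = 0; c < combos; ++c, ++count) {
            std::string name, args;
            int v = c;
            for (int i = 0; i < len; ++i) {          // decode c in base n
                name += comp[v % n];
                args += (i ? ", " : "") + std::string(1, comp[v % n]);
                v /= n;
            }
            out += "fsVec" + std::to_string(len) + " " + name +
                   "() const { return fsVec" + std::to_string(len) +
                   "( " + args + " ); }\n";
        }
    }
    return count;  // number of member functions generated
}
```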

Below is the source for doing this.

3ds Max Exporter

Posted: October 29, 2010 in General

So I finally got around to writing my 3ds Max Exporter. Currently it exports :

  • Static Meshes
  • Skinned Meshes
  • Animation Clips
  • Skeletons
  • Collision Meshes

Floating Point Format

Posted: October 29, 2010 in General

One of the chapters in my ASM book I found really interesting was the one on how floating-point numbers are stored and how they're processed in calculations. Here are some C# apps to create and decompose a 32-bit floating-point number.
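The decomposition itself is just pulling the IEEE 754 fields out of the bit pattern: 1 sign bit, 8 exponent bits (bias 127), 23 fraction bits. A minimal sketch in C++ (names illustrative):

```cpp
#include <cstdint>
#include <cstring>

// The three fields of an IEEE 754 single-precision number.
struct FloatParts { std::uint32_t sign, exponent, mantissa; };

// Split a float into sign / biased exponent / fraction.
// memcpy avoids the undefined behaviour of pointer type-punning.
FloatParts decompose(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    return { bits >> 31,             // 1 sign bit
             (bits >> 23) & 0xFFu,   // 8 exponent bits, bias 127
             bits & 0x7FFFFFu };     // 23 fraction bits (implicit leading 1)
}
```

For example, 1.0f is stored as 0x3F800000: sign 0, exponent 127 (which is 0 after removing the bias), mantissa 0.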

Model Viewer

Posted: October 29, 2010 in General, Real-Time Rendering

This is a model Viewer I made to visualize Meshes and their materials that are exported using my 3dsMax plugin.


Posted: October 29, 2010 in Real-Time Rendering

This is an approximate radiosity effect I was trying out; approximate in that it renders from each vertex's point of view as opposed to each patch's. It also doesn't render the half-sized bits, which seems to cause artifacts in the corners, but it's not a bad result. The app runs in real time (125fps in debug mode), and works by first rendering the view from each vertex to a render target, then using that target in a second stage to light the vertex itself, and this process continues. The viewport rendered from each vertex's point of view is actually very small (only 32×32 pixels in the images below, which you'll need to zoom in quite far to see), and I use a 1024×1024 render target to store them all in a grid. I also included some debug functionality that allows you to view the scene from each vertex (shown in the last few images), which was handy. The scene used here is very low poly; you can see the wireframe in the last few images.
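Packing the per-vertex views into the big target is simple grid arithmetic: a 1024×1024 target holds (1024/32)² = 1024 views of 32×32 pixels. A sketch of the cell lookup (names are mine, not the demo's code):

```cpp
// Pixel offset of one vertex's 32x32 view inside the shared 1024x1024 atlas.
struct AtlasCell { int x, y; };

AtlasCell vertexViewCell(int vertexIndex, int atlasSize = 1024, int viewSize = 32)
{
    int perRow = atlasSize / viewSize;              // 32 views per row
    return { (vertexIndex % perRow) * viewSize,     // column -> x offset
             (vertexIndex / perRow) * viewSize };   // row    -> y offset
}
```

The lighting pass then samples each vertex's cell, averages it, and feeds the result back in for the next iteration.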

This first image shows the end effect.

This second image shows the views from the vertices from one iteration.

These final images show the view from several vertices, which made debugging easier.

Game Engine Gems

Posted: October 29, 2010 in News

So apparently there's yet another Gems series being published, this time on game engine design: Game Engine Gems. The first edition is being authored by Eric Lengyel, who wrote the very popular book Mathematics for 3D Game Programming and Computer Graphics. I assume it will follow the structure of all the other Gems series, that being lots of tips and tricks chucked in by knowledgeable people, generally from the games industry. I'm not sure how this series will differ from Game Programming Gems, as that can be considered “game engine” related too. But I'm all for new books, so I'm not going to complain. 🙂

“Game Engine Gems brings together in a single volume dozens of new articles from leading professionals in the game development industry. Each “gem” presents a previously unpublished technique related to game engines and real-time virtual simulations. Specific topics include rendering techniques, shaders, scene organization, visibility determination, collision detection, audio, user interface, input devices, memory management, artificial intelligence, resource organization, and cross-platform considerations. A CD-ROM containing all the source codes and demos accompanies the book.”

Game Programming Gems 8

Posted: October 29, 2010 in News

The new edition of Game Programming Gems has been released!

Atmospheric scattering technique shown by Sean O'Neil in his GPU Gems 2 article.

FPS Images

Posted: August 29, 2010 in Real-Time Rendering

These are some pictures of an old FPS game I was working on. Unfortunately I lost all the code 😦

Rendering Techniques

Posted: July 29, 2010 in Real-Time Rendering

This demo shows three common rendering techniques and how they vary in performance: forward multi-light, forward multi-pass and deferred rendering. The scene comprises a collection of spheres and 16 point lights.

Multi-Pass Single Light = ~64FPS
Multi-Light Single Pass = ~200FPS
Deferred = ~300FPS

Multi-pass rendering is hugely inefficient, as you have to redraw everything N times, where N is the number of lights. Multi-light rendering is more efficient, as you only draw things once, but you're lighting the geometry as it's being drawn to the back buffer. This isn't great, as you're probably lighting a lot of geometry that's going to end up behind other opaque geometry, wasting processing time. This could be solved by first rendering a Z-pass and then lighting only the geometry that's in front, but then you're back to drawing things more than once. Deferred rendering is probably the most efficient in terms of processing, as you perform the lighting as a second pass over only the affected area, and the geometry is only drawn once, but it can be both memory- and bandwidth-intensive when you consider the “baggage” that needs to be stored and accessed.
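The draw-call bookkeeping behind those FPS numbers can be made explicit. A rough model for M meshes and L lights (illustrative arithmetic only, not a renderer; deferred here counts one geometry draw per mesh plus one light-volume draw per light):

```cpp
// Approximate per-frame draw-call counts for each technique.
struct Cost { int drawCalls; int lightingShaderRuns; };

Cost multiPass (int meshes, int lights) { return { meshes * lights, lights }; }
Cost multiLight(int meshes, int lights) { return { meshes, 1 }; }
Cost deferred  (int meshes, int lights) { return { meshes + lights, lights }; }
```

For 100 meshes and the 16 lights used in the demo, multi-pass issues 1600 draws, multi-light 100, and deferred 116, which lines up with the ordering of the frame rates above.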

Using Gildor's UE Viewer, I was able to extract Unreal meshes and their animations, and then export them from 3ds Max using my exporter plugin.

Skeletal animation exported using my 3ds Max plugin.