extremeistan

thoughts about computer graphics, game engine programming

Category: rendering

Physically Based Camera Rendering

The need

With all the talk about Physically Based Rendering nowadays, it's worth taking a look at the heavy-handed post-processing effects games use today to either enhance the image or mask artifacts and lack of detail. I think it is important to have a Physically Based Camera Rendering (PBCR) system in games, because to get an image that looks as if it was taken from a movie, you have to look at the scene through some lenses.

The ingredients

Most (if not all) of today's AAA games (and even some indie ones) feature high-quality post effects, such as Depth of Field, Motion Blur, Film Grain, (pseudo) Lens Flares, Bloom, Glare, Vignetting and Chromatic Aberration. Usually you can also adjust the Field of View of the camera you're observing the world through, and sometimes this is even a gameplay feature (think of sniper rifles).

Of course, all of this requires a gamma-correct, High Dynamic Range rendering system to work properly. Pair it with Physically Based Rendering, and you should get really realistic results.

Depth of Field

Depth of Field (or DOF) is an effect that simulates the out-of-focus (blurry) parts of a photograph. http://en.wikipedia.org/wiki/Depth_of_field

Depth of Field

Bokeh Depth of Field refers to the various shapes that are formed when highlights are out-of-focus. This is in my opinion a pretty nice effect, but there are game styles where this is actually counter-productive (like FPSs).

[UPDATE] The bokeh shape is directly affected by the aperture, as you can see here. On real-world cameras you can even create custom shapes (so sprite-based bokeh is not all bs), and with some clever blending between the shapes you can create dazzling eye-candy effects (who told you this can't be a gameplay feature?).

On real world cameras you can control this effect by changing the point where you focus.

In addition, you can change your f-stop number (http://en.wikipedia.org/wiki/F-number), which controls the diameter of your aperture. The bigger the f-stop number is, the smaller the aperture diameter becomes, so less light comes through and the image's exposure should decrease. A high f-stop number also means that your camera will increasingly start to behave like a pinhole camera (http://en.wikipedia.org/wiki/Pinhole_camera), and therefore more and more of the image should be in focus.
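
To make the relationship concrete, here is a rough GLSL sketch (assuming the usual thin-lens idealization; the function names are mine, not from any engine):

// the f-number is defined as N = focalLength / apertureDiameter, so:
float apertureDiameter(float focalLength, float fNumber) // focal length and result in mm
{
    return focalLength / fNumber;
}

// the gathered light is proportional to the aperture area, so exposure scales with 1/N^2;
// e.g. going from f/2.8 to f/4.0 roughly halves the light reaching the sensor
float relativeExposure(float fNumber)
{
    return 1.0 / (fNumber * fNumber);
}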

The f-stop number also depends on the focal length (http://en.wikipedia.org/wiki/Focal_length), and the focal length directly affects your field of view (FOV): large focal lengths mean small FOV values. The focal length is normally a fixed parameter of your lens, which is why zoom lenses were invented. By changing the focal length you essentially zoom in on the image, your field of view becomes smaller, and your depth of field becomes shallower (see, everything correlates).
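
As a sanity check for the focal length / FOV relationship, here is a minimal sketch (assuming a simple pinhole model; sensorHeight is an assumed parameter, e.g. 24 mm for the vertical FOV of a full-frame sensor):

// vertical field of view in radians, from sensor height and focal length (both in mm)
float fieldOfView(float sensorHeight, float focalLength)
{
    return 2.0 * atan(sensorHeight / (2.0 * focalLength));
}

// e.g. fieldOfView(24.0, 50.0) is about 0.47 rad (~27 degrees),
// while a 200 mm lens gives only about 6.9 degrees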

There are numerous papers around the internet that discuss how to implement this technique efficiently:
http://dice.se/wp-content/uploads/BF3_NFS_WhiteBarreBrisebois_Siggraph2011.pdf

http://www.crytek.com/download/Sousa_Graphics_Gems_CryENGINE3.pdf

http://bartwronski.com/2014/04/07/bokeh-depth-of-field-going-insane-part-1/

Motion Blur

Motion Blur is an effect that simulates the streaking (or blurring) of objects moving relative to the camera either because of their speed, or the camera’s long exposure. http://en.wikipedia.org/wiki/Motion_blur

Motion Blur

This effect occurs because it takes the camera a finite amount of time to record a frame; during that time objects move, so they appear across more than one pixel of the image. Longer exposure times mean it takes even longer for the camera to record a frame, so objects are more prone to appear blurry.
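
Here is a minimal post-process sketch of the idea (assuming you already render a per-pixel screen-space velocity buffer; sceneTex, velocityTex and shutterScale are assumed names, and shutterScale would grow with the exposure time):

uniform sampler2D sceneTex;
uniform sampler2D velocityTex;
uniform float shutterScale;

vec3 motionBlur(vec2 uv)
{
    const int SAMPLES = 8;
    vec2 velocity = texture(velocityTex, uv).xy * shutterScale;
    vec3 color = vec3(0.0);
    for (int i = 0; i < SAMPLES; ++i)
    {
        // gather samples along the velocity vector, centered on the current pixel
        vec2 offset = velocity * (float(i) / float(SAMPLES - 1) - 0.5);
        color += texture(sceneTex, uv + offset).rgb;
    }
    return color / float(SAMPLES);
}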

There are also numerous sources to learn from:
http://john-chapman-graphics.blogspot.hu/2013/01/what-is-motion-blur-motion-pictures-are.html

http://mynameismjp.wordpress.com/the-museum/samples-tutorials-tools/motion-blur-sample/

http://www.gamedev.net/page/community/iotd/index.html/_/hybrid-motion-blur-r321

Film Grain (or noise)

So, suppose you have a perfect image rendered using PBR, but it still doesn't look right. You may be missing some noise. This dreaded artifact of path tracing systems is actually quite common in real-world scenarios (although it is way more subtle).

Film Grain

Real photographs may look noisy because the surface that captures the light (film or sensor) has limited sensitivity. The sensitivity is determined by the ISO speed setting on your camera. High ISO settings mean that the surface is very sensitive to light, so you can take a picture even at night or in a dark room. This comes at a cost though: the image gets noisy.
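
A minimal grain sketch could look like this (grainAmount is an assumed parameter that you would scale up with the simulated ISO setting; the hash-based rand is a common shader trick, not a proper noise function):

uniform sampler2D sceneTex;
uniform float grainAmount; // increase with the simulated ISO
uniform float time;        // varies per frame so the grain pattern animates

float rand(vec2 co)
{
    // cheap hash-style pseudo-random number
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

vec3 filmGrain(vec2 uv)
{
    vec3 color = texture(sceneTex, uv).rgb;
    float noise = rand(uv + time) - 0.5; // zero-centered noise
    return color + noise * grainAmount;
}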

Here’s a link on how to implement this:

http://devlog-martinsh.blogspot.hu/2013/05/image-imperfections-and-film-grain-post.html

Lens flares

Lens flares are again a nice artifact of real world camera systems. They are formed by light reflecting and scattering around in the lens system of a camera. Sci-fi games usually like to abuse this by putting it every-f-in-where, even if it totally shouldn’t be there. Even JJ Abrams would be jealous of some games. http://en.wikipedia.org/wiki/Lens_flare

Lens Flares

A nice, natural-looking lens flare effect, though, may add a lot to the realism of the scene.

Here are some links on how to implement it:

http://john-chapman-graphics.blogspot.hu/2013/02/pseudo-lens-flare.html

http://resources.mpi-inf.mpg.de/lensflareRendering/

Bloom

Bloom is an effect that produces fringes of light extending from the borders of bright areas in an image. The light kind of bleeds into the dark parts of the image. http://en.wikipedia.org/wiki/Bloom_(shader_effect)
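
The usual approach is to threshold the HDR image, blur the result, and add it back on top; a rough sketch (sceneTex, threshold and bloomIntensity are assumed names):

uniform sampler2D sceneTex; // HDR scene
uniform float threshold;

vec3 brightPass(vec2 uv)
{
    vec3 color = texture(sceneTex, uv).rgb;
    float luminance = dot(color, vec3(0.2126, 0.7152, 0.0722));
    return luminance > threshold ? color : vec3(0.0);
}

// the bright-pass output is then downsampled and Gaussian blurred, and added back:
// finalColor = sceneColor + bloomIntensity * blurredBrightPass;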

Bloom

Here are some links on how to implement it:

https://de45xmedrsdbp.cloudfront.net/Resources/files/The_Technology_Behind_the_Elemental_Demo_16x9-1248544805.pdf

https://software.intel.com/en-us/articles/compute-shader-hdr-and-bloom

Glare

While bloom is an artifact of the camera lens system, glare is actually an artifact of your eyes' lens system. While your eye can adapt to sudden light intensity changes pretty quickly, it's still not perfect, and therefore you may experience difficulty seeing. http://en.wikipedia.org/wiki/Glare_(vision)

Glare

You can simulate this by implementing eye adaptation, which changes the exposure of the image over time based on the average luminance.
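
A minimal sketch of such an adaptation step (run once per frame; tau, deltaTime and keyValue are assumed parameters, roughly in the spirit of the paper linked below):

// the adapted luminance chases the current average luminance exponentially over time
float adaptLuminance(float adaptedLum, float avgLum, float deltaTime, float tau)
{
    return adaptedLum + (avgLum - adaptedLum) * (1.0 - exp(-deltaTime / tau));
}

// exposure is then derived from the adapted luminance, for example:
// float exposure = keyValue / max(adaptedLum, 0.0001);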

Here’s a paper on this:

http://resources.mpi-inf.mpg.de/hdr/peffects/krawczyk05sccg.pdf

Vignetting

Vignetting in real-world camera systems is produced by elements of the lens system blocking light from one another. This also means that the amount of vignetting depends on the f-stop number (aperture). http://en.wikipedia.org/wiki/Vignetting

The wider the aperture (i.e. the smaller the f-stop number), the more vignetting you will typically see; stopping down reduces it.
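
A minimal vignette sketch (darkening towards the corners; vignetteStrength is an assumed parameter that you could drive from the simulated aperture):

uniform sampler2D sceneTex;
uniform float vignetteStrength; // 0 = no vignetting

vec3 vignette(vec2 uv)
{
    vec3 color = texture(sceneTex, uv).rgb;
    float dist = distance(uv, vec2(0.5));       // 0 at the center, ~0.707 in the corners
    float falloff = smoothstep(0.3, 0.8, dist); // start darkening away from the center
    return color * (1.0 - falloff * vignetteStrength);
}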

Vignetting

Implementation:

http://devlog-martinsh.blogspot.hu/2011/12/glsl-depth-of-field-with-bokeh-v24.html

http://qctwit.blogspot.hu/2012/01/glsl-vignette-and-alpha.html

Chromatic aberration

Chromatic aberration is an effect in which the lens system fails to focus all colors to the same point. This results in colored fringes where there are large intensity differences in the image. http://en.wikipedia.org/wiki/Chromatic_aberration

When you implement it, you should probably experiment with luminance-based edge detection filters, so that only the high-contrast areas are affected.
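
A rough sketch of the common screen-space approximation (sampling the red and blue channels with offsets that grow towards the image edges; aberrationStrength is an assumed parameter, and you could additionally modulate it with an edge detection mask as suggested above):

uniform sampler2D sceneTex;
uniform float aberrationStrength; // e.g. somewhere around 0.002..0.01 in UV units

vec3 chromaticAberration(vec2 uv)
{
    vec2 dir = uv - vec2(0.5); // the offset grows towards the edges of the image
    float r = texture(sceneTex, uv + dir * aberrationStrength).r;
    float g = texture(sceneTex, uv).g;
    float b = texture(sceneTex, uv - dir * aberrationStrength).b;
    return vec3(r, g, b);
}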

Chromatic Aberration

Implementation:

http://gamedev.stackexchange.com/questions/58408/how-would-you-implement-chromatic-aberration

Putting it all together

So after you’ve implemented all of these effects, it’s time to link the parameters together, so that all of them work in a system simulating an actual camera.

Here’s an impressive demo that showcases this:

http://dl8.me/camera/client/

Remember that sometimes less is more: most of these effects are normally really subtle, so make sure you don't go overboard.

[UPDATE] Just a small addition to the implementation part: the rendering order matters. So, suppose you have your fully lit scene, along with subsurface scattering and reflections blended on top. Then you do these effects in the following order (a short outline of the whole chain follows the list):

1) Depth of Field

2) Motion blur (note that you can do these independently and mix them later as shown here, but quality suffers a bit)

3) Bloom rendering (input: motion blur output)

The usual bright pass, etc., plus eye adaptation!
4) Lens flare rendering (input: bright pass output)

5)  Combine the output of the motion blur pass, lens flares, bloom. Do the vignetting here, on the combined color. Do tone mapping and gamma correction here on the vignetted color.

6) Render chromatic aberration and film grain here (so after tonemapping, in gamma space)

7) Post-processing AA comes here (e.g. SMAA)
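
To make the data flow explicit, here is the same chain as a comment-only outline (every name is just a placeholder for the corresponding pass above, not an actual API):

// hdr      = lit scene (+ SSS, reflections)
// dof      = depthOfField(hdr)
// mb       = motionBlur(dof)
// bright   = brightPass(mb)              // also drives eye adaptation / exposure
// bloomTex = blur(bright)
// flareTex = lensFlare(bright)
// combined = vignette(mb + bloomTex + flareTex)
// ldr      = gammaCorrect(tonemap(combined, exposure))
// ldr      = filmGrain(chromaticAberration(ldr))
// final    = smaa(ldr)                   // post-process AA last, on the LDR image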

If you still don't quite understand all the theory behind this (especially the real-world photography part), then I recommend reading through this website:

http://www.cambridgeincolour.com/learn-photography-concepts.htm

OpenGL browser GUI lib

While I was working on other things, I decided to create a UI for it.

I also wanted it to be quick and easy, so I took my already existing berkelium wrapper implementation and created this simple lib. With it you can easily add a UI to your OpenGL application. You can design the UI in a webpage designer application and hook up the JS-C++ communication later. For more complex projects you can also write a dummy JS implementation of the C++ side, so that you can quickly prototype/develop/debug your webpage right in your browser!

So to sum it up:

  • fast development
  • easy to use
  • develop the UI in your browser, hook up the cpp-js code later
  • design the UI in a webpage designer application
  • youtube support
  • webgl support
  • cross platform

Check it out here:

https://github.com/Yours3lf/gl_browser_gui

Realtime Global Illumination techniques collection

What does this list contain?

So I decided to do a little research on realtime (mostly) global illumination techniques, and compile a little list here.

If you have any suggestions, or any techniques that I may have missed please let me know.

Note that I deliberately excluded path tracing, ray tracing or photon mapping based techniques.

Also, I'll try to focus on techniques that allow a dynamic scene AND dynamic lighting in realtime.

Realtime should mean at least 30 fps on the latest hardware in 1080p.

Voxel Cone Tracing Global Illumination

Reflective Shadow Maps

Original paper: http://www.vis.uni-stuttgart.de/~dachsbcn/download/rsm.pdf

Splatting version: http://www.vis.uni-stuttgart.de/~dachsbcn/download/sii.pdf

Multiresolution splatting version (from GPU pro): http://homepage.cs.uiowa.edu/~cwyman/publications/files/multiResSplat4Indirect/multiResolutionSplatting.pdf

Paper on optimization: http://cgg.mff.cuni.cz/~jaroslav/papers/mlcourse2012/mlcourse2012%20-%2005%20-%20dachsbacher_notes.pdf

Clustering: http://cg.ivd.kit.edu/publications/2012/RSMC/RSMC.pdf

http://cg.ivd.kit.edu/downloads/i3d2011ssbc.pdf

Clustered visibility: http://www.mpi-inf.mpg.de/~ritschel/Papers/ClusteredVisibility.pdf

A demo: http://www.nichego-novogo.net/temp/RSM1.rar

Several techniques (rsm, lpv, voxel gi): http://www.tobias-franke.eu/?dev

Voxel Cone Tracing

Original papers:

[FIXED] https://research.nvidia.com/publication/interactive-indirect-illumination-using-voxel-cone-tracing

[FIXED] http://www.icare3d.org/research-cat/publications/gigavoxels-a-voxel-based-rendering-pipeline-for-efficient-exploration-of-large-and-detailed-scenes.html

Additional papers:

http://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-SparseVoxelization.pdf

http://on-demand.gputechconf.com/gtc/2012/presentations/S0610-GTC2012-Octree-Voxelization-Global.pdf

http://on-demand.gputechconf.com/gtc/2014/presentations/S4552-rt-voxel-based-global-illumination-gpus.pdf

[UPDATED] http://fumufumu.q-games.com/archives/Cascaded_Voxel_Cone_Tracing_final_speaker_notes.pdf

Demo with source: https://github.com/domme/VoxelConeTracing

http://www.geeks3d.com/20121214/voxel-cone-tracing-global-illumination-in-opengl-4-3/

Video: http://vimeo.com/75698788

http://www.youtube.com/watch?v=SUiUlRojBpM

Blog post(s) about it: http://realtimevoxels.blogspot.hu/

http://www.altdevblogaday.com/2013/01/31/implementing-voxel-cone-tracing/

Layered Reflective Shadow Maps

[UPDATE] Original paper and video: https://software.intel.com/en-us/articles/layered-reflective-shadow-maps-for-voxel-based-indirect-illumination

Note: this is a similar technique to voxel cone tracing; the quality is similar, the performance seems to be worse overall, but I think there's room for more tricks… Oh, and surprisingly it runs on an Intel Iris Pro (at 200 ms 🙂 )

Light propagation volumes

Original papers:

http://www.crytek.com/download/Light_Propagation_Volumes.pdf

Cascaded:

http://www.crytek.com/cryengine/cryengine3/presentations/cascaded-light-propagation-volumes-for-real-time-indirect-illumination

http://www.vis.uni-stuttgart.de/~dachsbcn/download/lpv.pdf

http://www.crytek.com/download/2010-I3D_CLPV.ppt

VPL based: http://web.cs.wpi.edu/~emmanuel/courses/cs563/S12/slides/cs563_xin_wang_lpv_wk7_p2.pdf

Octree: http://www.jiddo.net/index.php/projects/other-projects/80-thesis-octree-lpv

Demos:

http://www.rafwrobel.com/lpv/lpv.zip

http://3d.benjamin-thaut.de/Files/file.php?id=8

http://blog.blackhc.net/wp-content/uploads/2010/07/LPVPrototype.zip

Plus there's a sample in the NVIDIA DX11 SDK.

Videos:

http://www.youtube.com/watch?v=vPQ3BbuYVh8

In Fable Legends: http://www.lionhead.com/blog/2014/april/17/dynamic-global-illumination-in-fable-legends/

SSDO

Original paper: http://www.mpi-inf.mpg.de/~ritschel/Papers/SSDO.pdf

Deferred: http://kayru.org/articles/dssdo/

Note that this is not really a full GI solution but rather a local one: it works well on small details, but it fails to capture the whole picture…

Deferred radiance transfer volumes

Original paper: http://www.gdcvault.com/play/1015326/Deferred-Radiance-Transfer-Volumes-Global

Demo at: http://codeflow.org/entries/2012/aug/25/webgl-deferred-irradiance-volumes/

Radiance hints method

Original paper: http://graphics.cs.aueb.gr/graphics/docs/papers/RadianceHintsPreprint.pdf

[FIXED] playable at: http://tesseract.gg/

Implementing 2.5D culling for tiled deferred rendering (in OpenCL)

Introduction

In this post I’ll cover how to add 2.5D light culling to an existing tile based deferred shading solution in OpenCL.

An old screenshot from Linux Game Engine showcasing Tiled Deferred Shading

An old screenshot from Linux Game Engine showcasing Tiled Deferred Shading

What?

2.5D light culling is a method for improving tiled deferred shading performance (when parts of the scene overlap on screen at very different depths) by dividing up each tile's depth interval into several parts.

It was invented by Takahiro Harada [1] [3], and it appeared in the 4th GPU Pro book [2].

There are of course several other methods to choose from when one decides to improve tiled deferred performance, for example clustered deferred shading [5].

Emil Persson also covered the topic [4].

Why?

The reason I chose 2.5D culling over the other methods is that it is really simple, and it provides most of the performance improvement at very little cost.

How?

In order to subdivide the depth interval, one has to add a bit mask to each tile. Each bit of this mask identifies one "slot" along the Z axis. When a pixel in the tile falls into a slot, you mark that slot as used.

Then, when you're culling lights for each tile, you compute a bit mask for each light as well; &-ing the two bit masks determines whether the light intersects any slot that is occupied.
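
The intersection test itself is then a single bitwise AND (using the depth_mask and light_bitmask values whose computation is shown below):

// the light only has to be added to the tile's light list if it overlaps an occupied slot
if( (depth_mask & light_bitmask) != 0 )
{
    //add the light to the tile's visible light list
}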

Calculating the per-tile bit masks

Assuming you already have the min/max depth for each tile computed (vs_min_depth/vs_max_depth), all you have to do is divide this depth range into 32 slots (since a uint can store 32 bits). The width of a single slot is:

float range = abs( vs_max_depth - vs_min_depth + 0.00001f ) / 32.0f;

so the nth slot covers the interval:

[vs_min_depth + (n-1) * range ; vs_min_depth + n * range]

The tiny correction is there to make sure that the pixel at vs_min_depth goes into the first slot, and the pixel at vs_max_depth goes into the last (32nd) slot instead of falling just outside of it.

Then all you need to do is make sure that each pixel occupies the right slot. To do this you need to make sure that the first slot starts at 0, so the represented depth range becomes [0 … vs_max_depth - vs_min_depth].

You can do this by adjusting the per-pixel depth value:

vs_depth -= vs_min_depth;

Then you only have to calculate the depth slot and mark the corresponding slot as used.

float depth_slot = floor(vs_depth / range);

depth_mask = depth_mask | (1 << (uint)depth_slot); //note: the shift amount must be an integer type

Calculating the per-light bit masks

For the sake of simplicity, we assume that each light is represented as a sphere (spot lights too, to make it easy; that is wasteful though).

Then you need to determine where the light starts and ends on the Z axis:
float light_z_min = -(light_pos.z + att_end); //att_end is the light's attenuation end distance (its radius)
float light_z_max = -(light_pos.z - att_end);

Adjust the light position to make sure that it goes into the right slot:

light_z_min -= vs_min_depth;

light_z_max -= vs_min_depth;

Calculate the min/max depth slots:

float depth_slot_min = floor(light_z_min / range);
float depth_slot_max = floor(light_z_max / range);

Then you just need to fill out the light’s bit mask:

depth_slot_min = max( depth_slot_min, -1.0f ); //clamp so that we don't iterate from negative infinity to infinity...
depth_slot_min = min( depth_slot_min, 32.0f );

depth_slot_max = max( depth_slot_max, -1.0f );
depth_slot_max = min( depth_slot_max, 32.0f );

for( int c = (int)depth_slot_min; c <= (int)depth_slot_max; ++c ) //naive implementation
{
  if( c >= 0 && c < 32 ) //only set bits that actually exist in the 32 bit mask
  {
    light_bitmask = light_bitmask | (1 << c);
  }
}

source code here: http://pastebin.com/h7yiUYTD

Also, I'll hopefully post every Friday. Any suggestions or constructive criticism are welcome in the comments section 🙂

References:

[1] https://sites.google.com/site/takahiroharada/

[2] http://bit.ly/Qvgcsb

[3] http://dl.acm.org/citation.cfm?id=2407764

[4] http://www.humus.name/Articles/PracticalClusteredShading.pdf

[5] http://www.cse.chalmers.se/~uffe/clustered_shading_preprint.pdf