Xilef
Wed Apr 30, 2014 7:06 pm
It only took me 3 years, but I finally decided to get shadow mapping done.

I remember a graphics programmer once saying, 'If you CAN add shadows, do it, it will instantly make everything cooler.' Usually we cheat with shadows by putting blobs under the characters or by pre-baking something into the environment, but one way to add real-time shadows to a scene is to implement shadow mapping, a technique that requires a good knowledge of your 3D API.

So this is the scene I started with:
Image

Because I want to show off that these are real-time shadows, this is actually an animated scene: the cube rotates on all three axes, and the wall on the right rises from below the floor to just above it and loops that animation.

The light source is on the right-hand side, just off the edge of the screen. This is what the light is looking at:
Image
WOAH THAT LOOKS FUNKY

The light I'm simulating here is light from the sun. To emulate this I had to remove as much perspective as possible, because the sun is really big and really far away; if I were to do this at even 1/10000000000th of the distance and depth it should be, the sun's view would be this:
Image
Not very useful. It's easier to remove the need to deal with depth by using an orthographic camera matrix (which removes perspective entirely); essentially it takes distance out of the equation and looks at the scene as if the field of view were nearly zero degrees.

Handy!
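For reference, a minimal sketch of what building that orthographic matrix can look like in C (column-major, the way OpenGL expects it); the function and parameter names are placeholders, and you just pick a box that comfortably encloses the scene:

Code:
/* Build a column-major orthographic projection matrix (same maths as the
 * old glOrtho). All names here are placeholders. */
void orthoMatrix(float out[16],
                 float left, float right, float bottom, float top,
                 float zNear, float zFar)
{
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  =  2.0f / (right - left);
    out[5]  =  2.0f / (top - bottom);
    out[10] = -2.0f / (zFar - zNear);
    out[12] = -(right + left) / (right - left);
    out[13] = -(top + bottom) / (top - bottom);
    out[14] = -(zFar + zNear) / (zFar - zNear);
    out[15] =  1.0f;
}

/* e.g. a 20x20 unit box reaching 50 units deep:            */
/* orthoMatrix(lightProjection, -10, 10, -10, 10, 1, 50);   */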

Now to get into the meat of things. Shadows in real life appear because light is blocked by an object. If we were to break this down, we could say "Object A's surface is closer to the light source than Object B's at this point, so Object B has a shadow on it".
If we were to do super accurate, realistic shadows, we would simulate the rays/waves/particles (depending on your discipline!) of light from the light source and illuminate wherever they hit, but that is incredibly expensive to do on a computer in real time. It is faster to presume everything is hit by light and then paint on some pretty shadows where things are blocking other things.

I'm pretty sure everyone has seen something like this in their physics classes at school:
Image

But what we want, in order to emulate this, is something like this:
Image

Something that the GPU has built in that handles depth and object occlusion...THE DEPTH BUFFER!
Sweet, so we can use the depth buffer to find out whether an object is being occluded from the light or not! All we need to do is draw a depth buffer from the light's POV in the direction it is looking, and then we have the information on which surfaces are CLOSEST to the light!
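Drawing a depth buffer from the light's POV means rendering the scene into a depth-only framebuffer. Roughly, the setup looks like this in OpenGL 3.3 (variable names and the 1024x1024 size are placeholders, not necessarily what was used here):

Code:
GLuint shadowFBO = 0, shadowDepthTex = 0;
const GLsizei shadowSize = 1024;  /* placeholder resolution */

/* Depth texture that the light will render into. */
glGenTextures(1, &shadowDepthTex);
glBindTexture(GL_TEXTURE_2D, shadowDepthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, shadowSize, shadowSize,
             0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

/* Framebuffer with only a depth attachment; no colour output needed. */
glGenFramebuffers(1, &shadowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D,
                       shadowDepthTex, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);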

This is the depth buffer of the funky orthographic view that the light source has on the scene (compare it with the one above):
Image
OpenGL (the API I am using here) encodes the depth buffer with 0.0 being closest to the camera's near plane and 1.0 being furthest, against the camera's far plane; we can represent that in colour with 0.0 being black and 1.0 being white.
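(If you want to draw a depth buffer like this for debugging, a tiny GLSL 3.3 fragment shader does it; names here are placeholders:)

Code:
#version 330 core
// Draw a depth texture as greyscale: 0.0 = black (near plane),
// 1.0 = white (far plane). 'depthMap' and 'uv' are placeholder names.
uniform sampler2D depthMap;
in vec2 uv;
out vec4 fragColour;

void main()
{
    float d = texture(depthMap, uv).r;  // depth lives in the red channel
    fragColour = vec4(vec3(d), 1.0);
}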

This is the depth buffer of our main camera looking at the scene.
Image

Okay, so we have these two depth buffers: one is generated by the light looking at the scene with an orthographic matrix, and the other is our camera looking at the scene with its perspective matrix.

During the stage where we render the scene from the camera's point of view (and generate that depth buffer), we can read from the light source's depth buffer. Using the light source's orthographic view matrix, we transform the object we're rendering into the light's perspective a second time, look it up in the light source's depth buffer, and judge whether it is behind the depth buffer's closest value.

That sounds like a lot to take in (and it is when you have no idea what you're doing), so let me simplify it as much as I can.

AFTER drawing that orthographic depth buffer of what the light sees when looking at the scene, we move to the perspective of our main camera and start drawing the world from that perspective, keeping the light source's point of view and depth buffer on hand as a reference.
When we draw an object, we briefly move back to the perspective of the light, check how deep the object is from the light, and then compare that with the light's depth buffer we made earlier. (Because we are reusing the light's matrix, the two line up at the same XY pixel coordinate; well, not exactly, but it's easy to translate between them.)

Here's all that in GLSL 3.3 shader code:
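(The original code box isn't reproduced here; a sketch of the kind of vertex shader this step needs, with placeholder names. Each vertex gets transformed twice, once with the camera's matrices and once with the light's.)

Code:
#version 330 core
// Transform the vertex for the camera as usual, and also into the light's
// clip space so the fragment shader can look it up in the light's depth
// buffer. All names here are placeholders, not necessarily the original's.
layout(location = 0) in vec3 position;

uniform mat4 model;
uniform mat4 viewProjection;       // camera view * perspective projection
uniform mat4 lightViewProjection;  // light view * orthographic projection

out vec4 lightSpacePos;

void main()
{
    vec4 worldPos = model * vec4(position, 1.0);
    lightSpacePos = lightViewProjection * worldPos;
    gl_Position   = viewProjection * worldPos;
}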


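(Likewise, a sketch of the matching fragment shader: compare this fragment's depth from the light against the closest depth the light recorded at the same spot. This is the naive version; as becomes clear below, it still needs the bias treatment before it behaves.)

Code:
#version 330 core
// Placeholder names throughout. 'shadowMap' is the light's depth buffer.
uniform sampler2D shadowMap;

in vec4 lightSpacePos;
out vec4 fragColour;

void main()
{
    // Perspective divide (a harmless no-op for an orthographic light).
    vec3 proj = lightSpacePos.xyz / lightSpacePos.w;

    float closestDepth = texture(shadowMap, proj.xy).r;  // nearest surface the light sees here
    float currentDepth = proj.z;                         // this fragment's depth from the light

    float lit = (currentDepth > closestDepth) ? 0.5 : 1.0;  // something is closer -> shadow
    fragColour = vec4(vec3(lit), 1.0);
}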


NICE! Now that we know we can do this, let's plug it all in and try it out!
Image
Ah shit, it didn't work...

As 3D rasterisation is pretty much a massive hack (I mean, the depth buffer exists, proof enough?), we need to do a bit of tweaking to this perfect idea to make it actually work. First of all, that orthographic matrix used for the light? Let's give it a bias: scale and offset everything by 0.5, so the light's -1.0 to 1.0 coordinates land in the 0.0 to 1.0 range we need for looking things up in its depth buffer.

This is how I did it with my C code:
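(The original code box isn't shown; the usual way to build it in C looks roughly like this, with mat4_multiply standing in for whatever matrix-multiply routine you already have, and lightProjection/lightView being the light's matrices from earlier.)

Code:
/* A fixed matrix that scales and shifts by 0.5 (column-major), applied on
 * top of the light's orthographic projection * view matrix.
 * mat4_multiply() is a placeholder for your own matrix routine. */
static const float biasMatrix[16] = {
    0.5f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.5f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.5f, 0.0f,
    0.5f, 0.5f, 0.5f, 1.0f
};

float lightProjView[16];
float biasShadowProjectionMatrix[16];

mat4_multiply(lightProjView, lightProjection, lightView);
mat4_multiply(biasShadowProjectionMatrix, biasMatrix, lightProjView);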

So this biasShadowProjectionMatrix is our new orthographic light point of view matrix thingy, let's upload that to the GPU for our camera's POV rendering.

Image
HURRAH! WE HAVE SHADOWS FINALLY JEEZ >:C

But...It's so ugly...

What's happening here is that the shadow itself is Z-fighting with the object that the shadow should be landing on, we need more of a bias to correct this! Doh! Back to the hackery...

Actually, this one has a simple fix that uses a bit of physics knowledge.

What if I told you that the shadows cast by this object:
Image
Are the same as the shadows cast by this object:
Image

You can test this yourself: get a bowl, shine a light towards the bottom of the bowl, and look at the size of the shadow; then flip the bowl around so the inside is facing the light source and check the size of the shadow again. It's the same. You can basically invert anything in the world and it will cast the same shadow.

OpenGL by default culls the back faces of objects (because, to be fair, you never see the back of things). To add a bias that is still physically correct, we can invert the objects for the light source's POV. This adds more depth to the objects (because the back is further from the light than the front, which is physically true for anything!), so when we render them the "correct" way, the depth of the surface we're drawing will be closer than what the light source believes the depth is, putting it fully into light.

In OpenGL we can do this simply with:
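(A sketch of what that code box would contain: cull front faces while rendering the light's depth buffer, then switch back for the camera pass.)

Code:
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);   /* shadow (light POV) pass: draw the backs of objects */

/* ... render the scene into the light's depth framebuffer ... */

glCullFace(GL_BACK);    /* back to normal culling for the camera pass */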


Image
AWESOME. BUT WHAT THE HELL MAN YOU DIDN'T FIX THE THING ON THE RIGHT???

So we've inverted the shapes, but now we're having the same z-fighting problem with the self-shadowing. Simply said, because we flipped the shapes around, the depth of a surface that faces away from the light source is now exactly the same as the depth the light source recorded for it.

There are two ways to fix this: you can either disable self-shadowing on the same surface and use another method for plane-based shadows (such as the method that's been around forever, vertex/fragment lighting with the dot product...), or you can stick with shadow mapping and do some hackery.

The hackery method involves punching physics in the face and slapping on a bias of 0.0005.

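(Again the original box isn't shown; a sketch with the same placeholder names as before. The exact sign and placement of the 0.0005 may differ from the original.)

Code:
// Same comparison as before, but give the stored depth a 0.0005 head start
// so a surface no longer z-fights with its own shadow.
float closestDepth = texture(shadowMap, proj.xy).r + 0.0005;
float currentDepth = proj.z;

float lit = (currentDepth > closestDepth) ? 0.5 : 1.0;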


Image
Okay now it's working. Except for one problem.

Image
What the hell is happening here!?!?

Well, simply said, the shadow map's resolution is not high enough. Because the depth buffer is a block of computer memory that records fragment depth (it's pretty much a fixed-size image), we lose all sorts of information about the depth between pixels.

Look at this image.
Image
Those yellow beams of light are hitting the inverse of the objects, so they hit the back of the cube; what the light source sees is coloured red, what the camera sees is in green. Now, those beams of light? Imagine they are not beams of light but pixels on the light source's depth buffer; imagine each beam in that image is one pixel of a 512x512 image. Okay, but how do we know whether the areas between the "beams" of light are in shadow? We don't. This code presumes everything is within light (remember when I said that at the beginning? Yes I did), so when it can't check between the fragments (pixels) on the depth buffer, it defaults to presuming the area is in light.

This is the point where shadow mapping becomes insanely complex. What we have so far is a relatively small amount of code, but to fix that final problem you can end up with one MASSIVE block of fragment shader: checking the neighbouring fragments on the light source's depth buffer, finding the average, judging whether an AREA is within shadow (no longer whether a fragment of an object is), interpolating between spaces, etc., etc. It just becomes crazy.
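(For what it's worth, that "check the neighbouring fragments and average" idea is usually called percentage-closer filtering, PCF. A bare-bones sketch, reusing the placeholder names from the fragment shader above:)

Code:
// Sample a 3x3 neighbourhood of the light's depth buffer and average the
// shadow results; this softens the blocky edges at the cost of nine lookups.
float shadow = 0.0;
vec2 texelSize = 1.0 / vec2(textureSize(shadowMap, 0));

for (int x = -1; x <= 1; ++x)
{
    for (int y = -1; y <= 1; ++y)
    {
        float closest = texture(shadowMap, proj.xy + vec2(x, y) * texelSize).r;
        shadow += (proj.z > closest + 0.0005) ? 1.0 : 0.0;
    }
}
shadow /= 9.0;                    // 0.0 = fully lit, 1.0 = fully in shadow
float lit = 1.0 - 0.5 * shadow;   // darken by up to half, as before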

The solution that I prefer? Turn off self-shadowing and do it a different way. Remember that += 0.0005 in the fragment shader above? Change it to -= 0.0005 and the self-shadows are gone.
Image
It looks pretty dumb now, but what we can do is implement a different algorithm for self-shadows (the dot product between the light direction and the object normal!) and let the large distances between objects handle the real-time shadow mapping.
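(That "dot product" lighting is the classic per-fragment diffuse term; a tiny sketch with placeholder names:)

Code:
#version 330 core
// The more a surface faces the light, the brighter it is.
in vec3 normal;           // interpolated surface normal
uniform vec3 lightDir;    // direction from the surface towards the light
uniform vec4 baseColour;
out vec4 fragColour;

void main()
{
    float diffuse = max(dot(normalize(normal), normalize(lightDir)), 0.0);
    fragColour = vec4(baseColour.rgb * diffuse, baseColour.a);
}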

But that's for everyone else to deal with.

Here's a video of everything in action:


Xilef
Wed Apr 30, 2014 7:32 pm
As a bonus, here are some shots of the cube's shadow as you change the resolution of the light source's depth buffer. The post above used a resolution of 1024x1024.

Image
Image
Image
Image

By increasing the resolution, you increase the accuracy of the shadow.



Toams
Sun May 11, 2014 8:28 pm
nicenicenice, it's indeed looking good with the shadows compared to without.
