Any OGs still around?

I've heard conversation coming out of animal pens that is more intelligent than what is going on in here.
rgaede
Posts: 766
Joined: Fri Mar 07, 2008 3:30 am
Team: Privateer
Location: Albuquerque, NM
Contact:

Re: Any OGs still around?

Post by rgaede »

I enjoy this thread. Since working from home I don't have much urge to do anything on the computer, but I still anonymously browse from time to time.
Rgaede Past numbers #333 #19 Now #373

DILLIGAF
baker
Posts: 283
Joined: Sat Jul 30, 2011 1:43 pm

Re: Any OGs still around?

Post by baker »

jlv wrote: Tue Oct 05, 2021 1:36 am
TeamHavocRacing wrote: Mon Oct 04, 2021 3:34 pm
ddmx wrote: Mon Oct 04, 2021 2:29 pm

?
ftfy and no, I have NOT hacked MXS. I wouldn't know where to begin.
I just figured you meant it doesn't depend on 150 different DLLs like it's popular to do now.

Thanks for the compliment btw! John Carmack is definitely one of my heroes. All of his 3D code is super precise. One funny story I heard about him developing Quake - apparently someone at id noticed a single pixel error on one of their computers. Instead of just ignoring it like almost anyone else would, Carmack went on a mission to find the error and eventually found he had a mislabeled Pentium. It was a 75 MHz part clocked at 90 or something like that, and the overclocking caused just enough errors to get an occasional pixel wrong. Hard to imagine anyone else noticing that!

I think that strict perfectionism hurt him as graphics programming turned into finding ways to fake things in a way that doesn't look too bad. He was doing super precise stencil shadows in Doom 3 while others were doing screen space effects that, while trashy and imprecise, looked pretty good most of the time.
mxrewind665 wrote: Tue Oct 05, 2021 12:05 am
Wahlamt wrote: Thu Sep 30, 2021 8:30 pm
Email/PM JLV, you're a familiar name around here. Sure you can get it sorted.
I'll do that and see what comes of it. I seriously miss this community. Glad to still be remembered after years of silence :lol:
I've been much more strict about this recently since people have tried to steal keys this way. If you know your UID I might be able to match server log IP addresses against your forum post IP addresses. It's much simpler if the original email still works.
Carmack is an absolute legend. The inverse square root still gets me every time I see it 😅. I read something about its origin not too far back.
MOTO NATIOM112
Posts: 348
Joined: Sun Sep 11, 2011 9:16 pm
Team: PoGo

Re: Any OGs still around?

Post by MOTO NATIOM112 »

I pop back in here now and then to see how things are doing. Haven't touched sim since 2018-2019 (I made the switch to MXB). It's refreshing to see a lot of familiar names in this thread; my UID is 3984 and it's crazy to see how high the UIDs are now!
motokid499 wrote: Sat Oct 02, 2021 9:16 pm I genuinely miss the amount of time and effort that went into the OG national and GP tracks. Giopanda, Haggvist, checkerz, rafagas: their tracks were absolutely top notch when it came to realism and attention to detail. Everything about those tracks was perfect. It seems each year the tracks being released are regressing in quality.

I think the problem is if you tell an average player to download a 2012 national track they'd literally uninstall because it's too realistic. My biggest complaint about the current community is their obsession with turning MXS into an arcade game. Every year we move further and further from simulation as we compete on wide open highway "fun tracks"

Current tracks are fun as fuck don't get me wrong, but to tell me I have to compete on them in a REPLICA series on a SIMULATOR?

Pls.

#1 thing I miss about OG days were the quality of the tracks hands down. I still play 2012-2013 tracks all the time.
Redbud 2012 was by far the gnarliest outdoor track ever made, and "So you think you know SX" was the hardest SX track.
Shadow
Posts: 2772
Joined: Sun Dec 02, 2007 5:10 pm
Team: FSK
Location: Finland

Re: Any OGs still around?

Post by Shadow »

I'm just happy someone can mention any of my tracks and not have a ton of swearing after it in the same sentence. :lol:
Been away for a long time, but randomly got an itch to check out the forums again.
Those who possess strength have also known adversity.
jlv
Site Admin
Posts: 14913
Joined: Fri Nov 02, 2007 5:39 am
Team: No Frills Racing
Contact:

Re: Any OGs still around?

Post by jlv »

baker wrote: Mon Oct 18, 2021 3:16 am Carmack is an absolute legend. The inverse square root still gets me every time I see it 😅. I read something about its origin not too far back.
The inverse square root trick is not his invention. I wrote one for Mesa 20 years ago and I based that one off of old newsgroup posts by Vesa Karvonen and James Van Buskirk.
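For the curious, the bit-level trick can be sketched in a few lines. This is a generic illustration (not Mesa's actual code), shown in Python using the widely documented 0x5f3759df constant plus one Newton-Raphson refinement step:

```python
import struct

def fast_inv_sqrt(x):
    # Reinterpret the float's bits as a 32-bit unsigned integer (type pun).
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # Magic constant: shifting halves the exponent, approximating 1/sqrt(x).
    i = 0x5f3759df - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One Newton-Raphson iteration refines the estimate.
    y = y * (1.5 - 0.5 * x * y * y)
    return y

print(fast_inv_sqrt(4.0))  # close to 0.5
```

After the single refinement step the result is within roughly 0.2% of the true value, which was plenty for normalizing lighting vectors.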
Josh Vanderhoof
Sole Proprietor
jlv@mxsimulator.com
If you email, put "MX Simulator" in the subject to make sure it gets through my spam filter.
baker
Posts: 283
Joined: Sat Jul 30, 2011 1:43 pm

Re: Any OGs still around?

Post by baker »

jlv wrote: Tue Oct 19, 2021 12:49 am
baker wrote: Mon Oct 18, 2021 3:16 am Carmack is an absolute legend. The inverse square root still gets me every time I see it 😅. I read something about its origin not too far back.
The inverse square root trick is not his invention. I wrote one for Mesa 20 years ago and I based that one off of old newsgroup posts by Vesa Karvonen and James Van Buskirk.
Ah, oops. Well both points still stand. Carmack=legend and inverse square root trick=super clever 😅
jlv
Site Admin
Posts: 14913
Joined: Fri Nov 02, 2007 5:39 am
Team: No Frills Racing
Contact:

Re: Any OGs still around?

Post by jlv »

baker wrote: Tue Oct 19, 2021 1:14 pm Ah, oops. Well both points still stand. Carmack=legend and inverse square root trick=super clever 😅
His number one talent is timing. Throughout the 90's his games were in perfect sync with what mainstream hardware could do. That is way harder to do than it sounds.

IMO his most clever trick is Carmack's reverse. I do not give Creative Labs credit for inventing that since they didn't even realize why it was better than conventional stencil shadows. Here's his email to Mark Kilgard on it:

Code: Select all

John Carmack on shadow volumes...

I received this in email from John on May 23rd, 2000.

- Mark Kilgard


I solved this in a way that is so elegant you just won't believe it.  Here
is a description that I posted to a private mailing list:

----------------------------------------------------------

I first implemented stencil shadow volumes over two years ago in the
post-Q2 research period.  They looked great until you flew the viewpoint
into one of the volumes, and depending on the exact test you used, either
most of the screen went into negative shadow, or most of the shadows
disappeared.

The classic shadow volume works that stencil shadows are derived from
usually suggest "inverting the test when the view is inside a shadow
volume".  That is not a robust solution, because a non-zero near clip plane
will give situations where the plane is not cleanly on one side or the
other of the view point.  It is also non-trivial to make the "inside a
shadow volume" determination, especially after silhouette optimizations.

The conventional wisdom has been that you will need to clip the shadow
volumes to the view plane and cap with triangles, treating the shadow
volumes as if they were polyhedrons.

I implemented the easy cases of this, choosing to project the silhouette
points to either the far plane of the light's effect or the view plane.
For the clear-cut cases, this worked fine, allowing you to walk in front of
a shadowed object, or look directly at it with the light behind it.
Intermediate cases, where some of the vertexes should project onto the
light plane and some should project onto the view plane could also be
handled, but the cost of all the testing was starting to pile up.

Unfortunately, there are cases when an occluding triangle projects a shadow
volume that will clip to something other than a triangular prism.  There
are cases where real, honest volume clipping must take place.

Anything that requires finding convex hulls in realtime is starting to
sound like a Bad Idea.

I sweated over this for a while, with the code getting grosser and grosser,
but then I had an idea for a different direction.

It should be possible to let the shadow volumes get clipped off at the view
plane like they always do, then find the clipped off areas in image space
and correct them.

The way to find if a volume has been clipped off is to render the shadow
volume with depth testing disabled, incrementing for the front faces and
decrementing for the back faces.  If the stencil buffer ends up with the
original value, the shadow volume is well formed in front of the view volume.

My first attempt to utilize this involved a whole bunch of passes to
determine if it was well formed and combine it with the standard volume
stencil operations.  It was an interesting experiment with masking and
anding in the stencil buffer to perform two operations, but it turned out
that, while it worked for simple shapes, complex shapes needed more
information from the volume clipping than just "well formed" or not.

The next iteration involved attempting to "preload" the standard stencil
shadow algorithm by the number of clipped away planes.  I first drew the
shadow volumes with depth test disabled, incrementing for back sides and
decrementing for front sides.  This finishes with a positive value in the
stencil buffer for each plane that is clipped away at the view plane.  The
normal depth tested shadow volume is drawn next, with the change polarity
reversed, decrementing for back sides and incrementing for front sides.
The areas not equal to the initial clear value are in shadow.

That works all the time.

Later, I realized something else.  The algorithm was now basically:

Draw back sides, incrementing both with depth pass and depth fail.
Draw front sides, decrementing both with depth pass and depth fail.
Draw back sides, decrementing with depth pass and doing nothing with depth
fail.
Draw front sides, incrementing with depth pass and doing nothing with depth
fail.

Rearrange the passes and you get:
Draw back sides, incrementing both with depth pass and depth fail.
Draw back sides, decrementing with depth pass and doing nothing with depth
fail.
Draw front sides, decrementing both with depth pass and depth fail.
Draw front sides, incrementing with depth pass and doing nothing with depth
fail.

It is then obvious that they partially cancel each other out and can be
combined into:

Draw back sides, doing nothing with depth pass and incrementing with depth
fail.
Draw front sides, doing nothing with depth pass and decrementing with depth
fail.

I was shocked.  I went from feeling pretty clever with my unbalanced
preloading algorithm (which I would only apply on surfaces that were likely
to intersect the view plane) to just feeling dumb that I had never seen the
trivial solution before.  Thinking about operating on depth test fails is a
bit non-intuitive, but if you work it through a couple times, what is going
on makes pretty good sense.

Shadows done this way have none of the "fragile" feel that geometric
algorithms tend to give.  You can use them for major occluders in the world
and noclip fly right through them without any problems at all.

Stencil shadows still aren't cheap by any means.  It can cost 3x the
triangle count of the source model (although <2x with some optimizations is
reasonable) per shadowing light, and it can have pathological fill rate
utilization in some cases, like a light shining out horizontally through a
jail cell door.  Still, they are quick operations even if there are a lot
of them.  The vertexes are just bare xyz points without texcoords or color,
and the fill rate is only to the depth/stencil buffer.

There are lots of subtleties to actually using this, like making sure your
shadow volumes are capped on both ends if they need to be (you can often
optimize away the caps based on culling information), making sure that none
of the shadow volumes get clipped off by your far clipping plane (which
would unbalance the count), and all the normal picky silhouette
optimization issues.

Depth buffer based shadows still sound like they have a lot of advantages:

Not much in the way of coding subtleties required.

The performance is more level (fixed fill rate overhead) and theoretically
somewhat faster (only one extra drawing of the surface into the shadow
buffer) in most cases.

They avoid the silhouette finding work that still needs to be done with the
shadow volumes (a per-face dot product and some copying), and don't require
any connectivity information.

Unfortunately, the quality just isn't good enough unless you use extremely
high resolution shadow maps (or possibly many offset passes with a lower
resolution map, although the bias issues become complex), and you need to
tweak the biases and ranges in many scenes.   For comparison, Pixar will
commonly use 2k or 4k shadow maps, focused in on a very narrow field of
view (they assume projections outside the map are NOT in shadow, which
works for movie sets but not for architectural walkthroughs), along with 16
jittered samples of the shadow map for each pixel and occasional hand
tweaking of the bias.

I still want to research the options for cropping and skewing shadow depth
buffer projection planes, but I am now positive that the stencil shadow
architecture works out.


John Carmack
Josh Vanderhoof
Sole Proprietor
jlv@mxsimulator.com
If you email, put "MX Simulator" in the subject to make sure it gets through my spam filter.
TeamHavocRacing
Posts: 8361
Joined: Thu Nov 19, 2009 5:52 am
Team: Havoc Racing
Contact:

Re: Any OGs still around?

Post by TeamHavocRacing »

I'll have to have Google lady say that to me tomorrow lol. It hurt just trying to stumble through that dry shit. :shock:
jlv wrote:If it weren't for Havoc I'd have been arguing with the 12 year olds by myself.
jlv
Site Admin
Posts: 14913
Joined: Fri Nov 02, 2007 5:39 am
Team: No Frills Racing
Contact:

Re: Any OGs still around?

Post by jlv »

TeamHavocRacing wrote: Wed Oct 20, 2021 4:14 am I'll have to have Google lady say that to me tomorrow lol. It hurt just trying to stumble through that dry shit. :shock:
I'll try to translate...
I first implemented stencil shadow volumes over two years ago in the post-Q2 research period. They looked great until you flew the viewpoint into one of the volumes, and depending on the exact test you used, either most of the screen went into negative shadow, or most of the shadows disappeared.
If you don't know, a shadow volume is the volume behind an object where the light is blocked. You've probably seen "god rays" coming through a window. A shadow volume is the shadowy equivalent of that.

Stencil buffer shadows work by counting how many times a ray cast from your eye to the surface you're looking at enters and exits shadow volumes. If it enters more than it exits, the surface is shadowed.

The stencil buffer works a little differently from the color buffer. It contains values that you can add 1 to or subtract 1 from wherever a pixel would have been drawn in the color buffer.

So for shadows you do this:
1. draw the scene
2. draw the surface of the shadow volume that faces you and is in front of the scene, adding 1 in the stencil buffer
3. draw the surface of the shadow volume that faces away from you and is in front of the scene, subtracting 1 in the stencil buffer

Now the stencil buffer will be 0 where it's lit, and >0 where it's in shadow.
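The counting logic above can be sketched with a toy 1D model, where each shadow volume is just an interval of depths along one view ray (a hypothetical simplification of what the stencil hardware does per pixel):

```python
def stencil_shadowed(surface_depth, volumes):
    # Z-pass counting: +1 for each volume front face in front of the
    # surface (depth test passes), -1 for each back face in front of it.
    stencil = 0
    for near_face, far_face in volumes:
        if near_face < surface_depth:  # entered a shadow volume
            stencil += 1
        if far_face < surface_depth:   # exited a shadow volume
            stencil -= 1
    return stencil > 0  # more entries than exits means shadowed

print(stencil_shadowed(2.0, [(1.0, 3.0)]))  # True: surface inside the volume
print(stencil_shadowed(5.0, [(1.0, 3.0)]))  # False: ray exits before the surface
```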

This would be great but the problem is the shadow volumes can hit the front clipping plane and then it all goes to hell.
The classic shadow volume works that stencil shadows are derived from usually suggest "inverting the test when the view is inside a shadow volume". That is not a robust solution, because a non-zero near clip plane will give situations where the plane is not cleanly on one side or the other of the view point. It is also non-trivial to make the "inside a shadow volume" determination, especially after silhouette optimizations.

The conventional wisdom has been that you will need to clip the shadow volumes to the view plane and cap with triangles, treating the shadow volumes as if they were polyhedrons.

I implemented the easy cases of this, choosing to project the silhouette points to either the far plane of the light's effect or the view plane. For the clear-cut cases, this worked fine, allowing you to walk in front of a shadowed object, or look directly at it with the light behind it. Intermediate cases, where some of the vertexes should project onto the light plane and some should project onto the view plane could also be handled, but the cost of all the testing was starting to pile up.

Unfortunately, there are cases when an occluding triangle projects a shadow volume that will clip to something other than a triangular prism. There are cases where real, honest volume clipping must take place.

Anything that requires finding convex hulls in realtime is starting to sound like a Bad Idea.
Here he's talking about trying to solve the clipping problem by capping the shadow volumes in 3d space so they never touch the front clip plane. This would work but it'd be complicated.
I sweated over this for a while, with the code getting grosser and grosser, but then I had an idea for a different direction.

It should be possible to let the shadow volumes get clipped off at the view plane like they always do, then find the clipped off areas in image space and correct them.

The way to find if a volume has been clipped off is to render the shadow volume with depth testing disabled, incrementing for the front faces and decrementing for the back faces. If the stencil buffer ends up with the original value, the shadow volume is well formed in front of the view volume.

My first attempt to utilize this involved a whole bunch of passes to determine if it was well formed and combine it with the standard volume stencil operations. It was an interesting experiment with masking and anding in the stencil buffer to perform two operations, but it turned out that, while it worked for simple shapes, complex shapes needed more information from the volume clipping than just "well formed" or not.

The next iteration involved attempting to "preload" the standard stencil shadow algorithm by the number of clipped away planes. I first drew the shadow volumes with depth test disabled, incrementing for back sides and decrementing for front sides. This finishes with a positive value in the stencil buffer for each plane that is clipped away at the view plane. The normal depth tested shadow volume is drawn next, with the change polarity reversed, decrementing for back sides and incrementing for front sides. The areas not equal to the initial clear value are in shadow.

That works all the time.
Here he's talking about solving the clipping problem by detecting it in screen space. Instead of drawing the scene first and only changing the stencil buffer when the shadow volume is in front of the scene, he only draws the shadow volume. Now, since there is no scene to be shadowed, the stencil buffer should be 0 by definition, except where the volume hit the front clipping plane. Using that information he can fix the clipped volumes in screen space and get perfect shadows even when clipped.
Later, I realized something else. The algorithm was now basically:

Draw back sides, incrementing both with depth pass and depth fail.
Draw front sides, decrementing both with depth pass and depth fail.
Draw back sides, decrementing with depth pass and doing nothing with depth fail.
Draw front sides, incrementing with depth pass and doing nothing with depth fail.

Rearrange the passes and you get:
Draw back sides, incrementing both with depth pass and depth fail.
Draw back sides, decrementing with depth pass and doing nothing with depth fail.
Draw front sides, decrementing both with depth pass and depth fail.
Draw front sides, incrementing with depth pass and doing nothing with depth fail.

It is then obvious that they partially cancel each other out and can be combined into:

Draw back sides, doing nothing with depth pass and incrementing with depth fail.
Draw front sides, doing nothing with depth pass and decrementing with depth fail.

I was shocked. I went from feeling pretty clever with my unbalanced preloading algorithm (which I would only apply on surfaces that were likely to intersect the view plane) to just feeling dumb that I had never seen the trivial solution before. Thinking about operating on depth test fails is a bit non-intuitive, but if you work it through a couple times, what is going on makes pretty good sense.
Here he does some algebraic simplifications and realizes you can do this all at once by tracking the shadow volumes *behind* the surfaces in the scene instead of in front. This was the really clever part.
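In the same toy 1D model, the depth-fail version ("Carmack's reverse") counts the volume faces behind the surface instead (again a hypothetical sketch; a real renderer configures this with stencil operations that fire on depth-test failure):

```python
def zfail_shadowed(surface_depth, volumes):
    # Z-fail counting: +1 for each volume back face behind the surface
    # (depth test fails), -1 for each front face behind it.
    stencil = 0
    for near_face, far_face in volumes:
        if far_face >= surface_depth:   # back face fails the depth test
            stencil += 1
        if near_face >= surface_depth:  # front face fails the depth test
            stencil -= 1
    return stencil > 0

print(zfail_shadowed(2.0, [(1.0, 3.0)]))  # True: surface inside the volume
print(zfail_shadowed(5.0, [(1.0, 3.0)]))  # False: volume entirely in front
```

Because only geometry behind visible surfaces is counted, a volume clipped off at the near plane no longer unbalances the count; as the email notes, the caps (and the far clipping plane) become the thing you have to be careful about instead.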
Shadows done this way have none of the "fragile" feel that geometric algorithms tend to give. You can use them for major occluders in the world and noclip fly right through them without any problems at all.

Stencil shadows still aren't cheap by any means. It can cost 3x the triangle count of the source model (although <2x with some optimizations is reasonable) per shadowing light, and it can have pathological fill rate utilization in some cases, like a light shining out horizontally through a jail cell door. Still, they are quick operations even if there are a lot of them. The vertexes are just bare xyz points without texcoords or color, and the fill rate is only to the depth/stencil buffer.

There are lots of subtleties to actually using this, like making sure your shadow volumes are capped on both ends if they need to be (you can often optimize away the caps based on culling information), making sure that none of the shadow volumes get clipped off by your far clipping plane (which would unbalance the count), and all the normal picky silhouette optimization issues.

Depth buffer based shadows still sound like they have a lot of advantages:

Not much in the way of coding subtleties required.

The performance is more level (fixed fill rate overhead) and theoretically somewhat faster (only one extra drawing of the surface into the shadow buffer) in most cases.

They avoid the silhouette finding work that still needs to be done with the shadow volumes (a per-face dot product and some copying), and don't require any connectivity information.

Unfortunately, the quality just isn't good enough unless you use extremely high resolution shadow maps (or possibly many offset passes with a lower resolution map, although the bias issues become complex), and you need to tweak the biases and ranges in many scenes. For comparison, Pixar will commonly use 2k or 4k shadow maps, focused in on a very narrow field of view (they assume projections outside the map are NOT in shadow, which works for movie sets but not for architectural walkthroughs), along with 16 jittered samples of the shadow map for each pixel and occasional hand tweaking of the bias.

I still want to research the options for cropping and skewing shadow depth buffer projection planes, but I am now positive that the stencil shadow architecture works out.
Josh Vanderhoof
Sole Proprietor
jlv@mxsimulator.com
If you email, put "MX Simulator" in the subject to make sure it gets through my spam filter.
ddmx
Posts: 5373
Joined: Sun Apr 20, 2008 3:36 pm
Location: Midland MI

Re: Any OGs still around?

Post by ddmx »

Very interesting stuff. Read this and then went down a Wikipedia hole. As a mechanical engineer with a software bias all of the optimization and such is immensely entertaining. Video game programmers are a different breed though!!
baker
Posts: 283
Joined: Sat Jul 30, 2011 1:43 pm

Re: Any OGs still around?

Post by baker »

jlv wrote: Thu Oct 21, 2021 1:42 am
TeamHavocRacing wrote: Wed Oct 20, 2021 4:14 am I'll have to have Google lady say that to me tomorrow lol. It hurt just trying to stumble through that dry shit. :shock:
I'll try to translate...
I first implemented stencil shadow volumes over two years ago in the post-Q2 research period. They looked great until you flew the viewpoint into one of the volumes, and depending on the exact test you used, either most of the screen went into negative shadow, or most of the shadows disappeared.
If you don't know, a shadow volume is the volume behind an object where the light is blocked. You've probably seen "god rays" coming through a window. A shadow volume is the shadowy equivalent of that.

Stencil buffer shadows work by counting how many times a ray cast from your eye to the surface you're looking at enters and exits shadow volumes. If it enters more than it exits, the surface is shadowed.

The stencil buffer works a little differently from the color buffer. It contains values that you can add or subtract 1 to everywhere where a pixel would have been drawn in the color buffer.

So for shadows you do this:
1. draw the scene
2. draw the surface of the shadow volume that faces you and is in front of the scene, adding 1 in the stencil buffer
3. draw the surface of the shadow volume that faces away from you and is in front of the scene, subtracting 1 in the stencil buffer

Now the stencil buffer will be 0 where it's lit, and >0 where it's in shadow.

This would be great but the problem is the shadow volumes can hit the front clipping plane and then it all goes to hell.
The classic shadow volume works that stencil shadows are derived from usually suggest "inverting the test when the view is inside a shadow volume". That is not a robust solution, because a non-zero near clip plane will give situations where the plane is not cleanly on one side or the other of the view point. It is also non-trivial to make the "inside a shadow volume" determination, especially after silhouette optimizations.

The conventional wisdom has been that you will need to clip the shadow volumes to the view plane and cap with triangles, treating the shadow volumes as if they were polyhedrons.

I implemented the easy cases of this, choosing to project the silhouette points to either the far plane of the light's effect or the view plane. For the clear-cut cases, this worked fine, allowing you to walk in front of a shadowed object, or look directly at it with the light behind it. Intermediate cases, where some of the vertexes should project onto the light plane and some should project onto the view plane could also be handled, but the cost of all the testing was starting to pile up.

Unfortunately, there are cases when an occluding triangle projects a shadow volume that will clip to something other than a triangular prism. There are cases where real, honest volume clipping must take place.

Anything that requires finding convex hulls in realtime is starting to sound like a Bad Idea.
Here he's talking about trying to solve the clipping problem by capping the shadow volumes in 3d space so they never touch the front clip plane. This would work but it'd be complicated.
I sweated over this for a while, with the code getting grosser and grosser, but then I had an idea for a different direction.

It should be possible to let the shadow volumes get clipped off at the view plane like they always do, then find the clipped off areas in image space and correct them.

The way to find if a volume has been clipped off is to render the shadow volume with depth testing disabled, incrementing for the front faces and decrementing for the back faces. If the stencil buffer ends up with the original value, the shadow volume is well formed in front of the view volume.

My first attempt to utilize this involved a whole bunch of passes to determine if it was well formed and combine it with the standard volume stencil operations. It was an interesting experiment with masking and anding in the stencil buffer to perform two operations, but it turned out that, while it worked for simple shapes, complex shapes needed more information from the volume clipping than just "well formed" or not.

The next iteration involved attempting to "preload" the standard stencil shadow algorithm by the number of clipped away planes. I first drew the shadow volumes with depth test disabled, incrementing for back sides and decrementing for front sides. This finishes with a positive value in the stencil buffer for each plane that is clipped away at the view plane. The normal depth tested shadow volume is drawn next, with the change polarity reversed, decrementing for back sides and incrementing for front sides. The areas not equal to the initial clear value are in shadow.

That works all the time.
Here he's talking about solving the clipping problem by detecting it in screen space. Instead of drawing the scene first and only changing the stencil buffer when the shadow volume is in front of the scene, he only draws the shadow volume. Now, since there is no scene to be shadowed, the stencil buffer should be 0 by definition, except where the volume hit the front clipping plane. Using that information he can fix the clipped volumes in screen space and get perfect shadows even when clipped.
Later, I realized something else. The algorithm was now basically:

Draw back sides, incrementing both with depth pass and depth fail.
Draw front sides, decrementing both with depth pass and depth fail.
Draw back sides, decrementing with depth pass and doing nothing with depth fail.
Draw front sides, incrementing both with depth and doing nothing with depth fail.

Rearrange the passes and you get:
Draw back sides, incrementing both with depth pass and depth fail.
Draw back sides, decrementing with depth pass and doing nothing with depth fail.
Draw front sides, decrementing both with depth pass and depth fail.
Draw front sides, incrementing both with depth and doing nothing with depth fail.

It is then obvious that they partially cancel each out and can be combined into:

Draw back sides, doing nothing with depth pass and incrementing with depth fail.
Draw front sides, doing nothing with depth pass and decrementing with depth fail.

I was shocked. I went from feeling pretty clever with my unbalanced preloading algorithm (which I would only apply on surfaces that were likely to intersect the view plane) to just feeling dumb that I had never seen the trivial solution before. Thinking about operating on depth test fails is a bit non-intuitive, but if you work it through a couple times, what is going on makes pretty good sense.
Here he does some algebraic simplifications and realizes you can do this all at once by tracking the shadow volumes *behind* the surfaces in the scene instead of in front. This was the really clever part.
Shadows done this way have none of the "fragile" feel that geometric algorithms tend to give. You can use them for major occluders in the world and noclip fly right through them without any problems at all.

Stencil shadows still aren't cheap by any means. It can cost 3x the triangle count of the source model (although <2x with some optimizations is reasonable) per shadowing light, and it can have pathological fill rate utilization in some cases, like a light shining out horizontally through a jail cell door. Still, they are quick operations even if there are a lot of them. The vertexes are just bare xyz points without texcoords or color, and the fill rate is only to the depth/stencil buffer.

There are lots of subtleties to actually using this, like making sure your shadow volumes are capped on both ends if they need to be (you can often optimize away the caps based on culling information), making sure that none of the shadow volumes get clipped off by your far clipping plane (which would unbalance the count), and all the normal picky silhouette optimization issues.

Depth buffer based shadows still sound like they have a lot of advantages:

Not much in the way of coding subtleties required.

The performance is more level (fixed fill rate overhead) and theoretically somewhat faster (only one extra drawing of the surface into the shadow buffer) in most cases.

They avoid the silhouette finding work that still needs to be done with the shadow volumes (a per-face dot product and some copying), and don't require any connectivity information.

Unfortunately, the quality just isn't good enough unless you use extremely high resolution shadow maps (or possibly many offset passes with a lower resolution map, although the bias issues become complex), and you need to tweak the biases and ranges in many scenes. For comparison, Pixar will commonly use 2k or 4k shadow maps, focused in on a very narrow field of view (they assume projections outside the map are NOT in shadow, which works for movie sets but not for architectural walkthroughs), along with 16 jittered samples of the shadow map for each pixel and occasional hand tweaking of the bias.

I still want to research the options for cropping and skewing shadow depth buffer projection planes, but I am now positive that the stencil shadow architecture works out.
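To make the counting above concrete, here is a toy Python simulation of the depth-fail passes along a single view ray. The function name and the face depths are invented for illustration; a real engine does this in the stencil buffer with two `glStencilOp` passes rather than in a loop like this.

```python
def zfail_stencil(surface_depth, volume_faces):
    """Depth-fail ("Carmack's reverse") counting along one view ray.

    volume_faces: list of (depth, is_front) tuples for the shadow-volume
    faces the ray crosses.  A face "fails the depth test" when it lies at
    or beyond the visible surface (standard less-than depth testing).
    """
    stencil = 0
    for depth, is_front in volume_faces:
        if depth >= surface_depth:            # depth test fails
            stencil += -1 if is_front else 1  # front: decrement, back: increment
    return stencil != 0                       # non-zero count -> in shadow

# Surface at depth 10 inside a volume spanning depths 5..20: the front
# face (5) passes the depth test, the back face (20) fails and
# increments, leaving a count of 1 -> shadowed.
print(zfail_stencil(10.0, [(5.0, True), (20.0, False)]))   # True

# Surface at depth 3, entirely in front of the same volume: both faces
# fail, the increment and decrement cancel -> not shadowed.
print(zfail_stencil(3.0, [(5.0, True), (20.0, False)]))    # False
```

Note that nothing here depends on where the eye is relative to the volume, which is exactly why the depth-fail ordering doesn't break when you noclip through an occluder.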
That’s really brilliant. I had to read that a few times lol. It’s amazing how many earlier game developers came up with clever solutions to work around the limitations of their day. My favorite to read about has always been how Naughty Dog created Crash Bandicoot. I am a little biased as it’s one of my favorite games 😁
ddmx
Posts: 5373
Joined: Sun Apr 20, 2008 3:36 pm
Location: Midland MI

Re: Any OGs still around?

Post by ddmx »

If you haven't seen it, great video on Crash:

https://www.youtube.com/watch?v=izxXGuV ... rsTechnica
baker
Posts: 283
Joined: Sat Jul 30, 2011 1:43 pm

Re: Any OGs still around?

Post by baker »

ddmx wrote: Tue Oct 26, 2021 1:54 pm If you haven't seen it, great video on Crash:

https://www.youtube.com/watch?v=izxXGuV ... rsTechnica
Nice, a video! I’ll watch it.
https://all-things-andy-gavin.com/vide ... ing-crash/ I’ve read this series several times; I still think it’s nuts that a major game was implemented in Lisp, much less its own flavor of it (GOAL).
jlv
Site Admin
Posts: 14913
Joined: Fri Nov 02, 2007 5:39 am
Team: No Frills Racing
Contact:

Re: Any OGs still around?

Post by jlv »

baker wrote: Tue Oct 26, 2021 3:08 pm Nice, a video! I’ll watch it.
https://all-things-andy-gavin.com/vide ... ing-crash/ I’ve read this series several times; I still think it’s nuts that a major game was implemented in Lisp, much less its own flavor of it (GOAL).
Nothing wrong with Lisp. It can be fast with a good compiler. There was a Scheme (a Lisp dialect) compiler that produced code that was as fast as C. It worked by analyzing the entire program so it could determine type information based on all possible calling sites for a function. Unfortunately it hasn't been maintained for over a decade now. It was called Stalin.

The other advantage of Lisp is it's super easy to write an interpreter for it.
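It really is only a few dozen lines. As a sketch (not GOAL or any real dialect), here's a minimal Scheme-ish interpreter in Python supporting just numbers, a few primitives, `if`, and `lambda`:

```python
import operator as op

def tokenize(src):
    """Split source into tokens, treating parens as their own tokens."""
    return src.replace('(', ' ( ').replace(')', ' ) ').split()

def parse(tokens):
    """Read one expression from the token list into nested Python lists."""
    tok = tokens.pop(0)
    if tok == '(':
        lst = []
        while tokens[0] != ')':
            lst.append(parse(tokens))
        tokens.pop(0)  # discard ')'
        return lst
    try:
        return int(tok)   # number literal
    except ValueError:
        return tok        # symbol

GLOBAL_ENV = {'+': op.add, '-': op.sub, '*': op.mul, '<': op.lt}

def evaluate(x, env=GLOBAL_ENV):
    if isinstance(x, str):        # symbol: look it up
        return env[x]
    if not isinstance(x, list):   # number: self-evaluating
        return x
    head = x[0]
    if head == 'if':              # (if test then else)
        _, test, then, alt = x
        return evaluate(then if evaluate(test, env) else alt, env)
    if head == 'lambda':          # (lambda (params) body) -> closure
        _, params, body = x
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    f = evaluate(head, env)       # application: eval operator and operands
    return f(*[evaluate(arg, env) for arg in x[1:]])

def run(src):
    return evaluate(parse(tokenize(src)))

print(run('((lambda (x) (* x x)) 7)'))  # 49
print(run('(if (< 1 2) 10 20)'))        # 10
```

The parse step is nearly free because s-expressions *are* the syntax tree, which is the whole point: the evaluator is just a dispatch on the head of each list.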
Josh Vanderhoof
Sole Proprietor
jlv@mxsimulator.com
If you email, put "MX Simulator" in the subject to make sure it gets through my spam filter.
baker
Posts: 283
Joined: Sat Jul 30, 2011 1:43 pm

Re: Any OGs still around?

Post by baker »

jlv wrote: Wed Oct 27, 2021 1:01 am
baker wrote: Tue Oct 26, 2021 3:08 pm Nice, a video! I’ll watch it.
https://all-things-andy-gavin.com/vide ... ing-crash/ I’ve read this series several times; I still think it’s nuts that a major game was implemented in Lisp, much less its own flavor of it (GOAL).
Nothing wrong with Lisp. It can be fast with a good compiler. There was a Scheme (a Lisp dialect) compiler that produced code that was as fast as C. It worked by analyzing the entire program so it could determine type information based on all possible calling sites for a function. Unfortunately it hasn't been maintained for over a decade now. It was called Stalin.

The other advantage of Lisp is it's super easy to write an interpreter for it.
Sounds like a good optimization. I guess they are Stalin for a new release eh? I’ve done some Clojure but not many other lisp dialects. It’s always been a favorite of mine though.

Lisp interpreters are probably the easiest to implement and among the best for learning general language parsing. It's basically lambda calculus with some higher-order functions and a little syntax. I’ve played with implementing one, but all I’ve written to completion is a brainfuck interpreter 😅
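A brainfuck interpreter is about as small as a complete one gets. As a sketch of how little it takes, here's a minimal Python version; the only non-obvious part is pre-matching the brackets so loop jumps are O(1):

```python
def brainfuck(program, stdin=''):
    """Tiny brainfuck interpreter: 8 commands over a wrapping byte tape."""
    tape = [0] * 30000
    out, ptr, pc, inp = [], 0, 0, 0
    # Pre-match [ and ] pairs so jumps don't need to rescan the program.
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == ',' and inp < len(stdin):
            tape[ptr] = ord(stdin[inp])
            inp += 1
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]   # skip loop body when cell is zero
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]   # jump back while cell is non-zero
        pc += 1
    return ''.join(out)

# 8*8 increments via a loop, plus one more = 65 = 'A'
print(brainfuck('++++++++[>++++++++<-]>+.'))  # A
```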
Post Reply