Oblivion Graphics Extender

Post » Thu May 26, 2011 10:15 pm

I like what I see too, but I would also like a decent DoF effect, and I think wrinklyninja's example was quite interesting. If I could get an effect that doesn't blur the nearest objects (or at least only very, very slightly) but blurs the far-away objects a bit, I would surely use it. I actually think wrinklyninja's first example was quite good.
User avatar
Mandi Norton
 
Posts: 3451
Joined: Tue Jan 30, 2007 2:43 pm

Post » Fri May 27, 2011 12:54 am

I'm really impressed with the progress; everything I've seen here is amazing. The only thing I really think would be an awesome addition is shadows.
Of course, though, everything has its time, so I'll try to hold my patience as I follow the progress, and I wish the best for the project.
User avatar
Yvonne Gruening
 
Posts: 3503
Joined: Mon Apr 23, 2007 7:31 pm

Post » Fri May 27, 2011 8:14 am

I can try and muck with a water shader, but I'd need a few things. Scanti, what do we know about the view matrix? Ideally what I'd like to do is reconstruct world-space position using the inverse view matrix and some http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/ (again, thank GDNet for that, those folks rock *hardcore*) and then have a little two-pass fun with things :)

First off would be to pull a Crytek and actually darken all underwater geometry using a kind of z-feathering against the water plane height; pretty basic, and it noticeably increases realism, as detailed in the extremely wonderful http://www.crytek.com/fileadmin/user_upload/inside/presentations/gdc2008/GDC08_SousaT_CrysisEffects.ppt they put out a while back. I can likely merge caustics into this by reverse-tracing a ray up from the point we're shading until it hits the water surface, then refracting it and dotting that with the sun direction. Pow, fundamentals of physically plausible global illumination, son!

I can probably also handle water/distance fog here as well -- by default Oblivion just uses a flat plane that does a lookup into some sort of heightmap(!!!) to determine water surface opacity. You can break this real nasty by looking into the water from about a 45-degree angle into deep regions. I've already determined how to remove water totally (thanks to the Oldblivion folks for the original idea, i.e. use the basic texture and give it 0 alpha), and until we can figure out a way of getting reflection to work (... erm, hi Scanti) I think it's as good a place to start as any. Point is, it's doable, but it's likely going to be a really, really math-heavy shader.
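To make the first two steps concrete, here is a minimal HLSL sketch of the depth-reconstruction plus z-feathered underwater darkening idea. The matrix, sampler and constant names are assumptions for illustration, not OBGE's actual bindings, and it assumes a row-vector mul convention and Oblivion's Z-up world axis.

```hlsl
// Hypothetical constants/samplers -- whatever OBGE ends up exposing would replace these.
float4x4  InvViewProj;   // inverse view-projection matrix (assumed available)
float     WaterHeight;   // world-space height of the water plane (assumed available)
sampler2D DepthBuffer;   // post-projection depth (assumed layout)
sampler2D SceneBuffer;   // the rendered frame

float3 ReconstructWorldPos(float2 uv, float depth)
{
    // UV (0..1, top-left origin in DirectX) -> clip space (-1..1), then un-project.
    float4 clip  = float4(uv.x * 2.0 - 1.0, 1.0 - uv.y * 2.0, depth, 1.0);
    float4 world = mul(clip, InvViewProj);  // row-vector convention assumed
    return world.xyz / world.w;             // perspective divide
}

float4 DarkenUnderwater(float2 uv : TEXCOORD0) : COLOR0
{
    float  depth = tex2D(DepthBuffer, uv).r;
    float3 wpos  = ReconstructWorldPos(uv, depth);

    // 'z-feathering': fade the darkening in over ~64 units below the water plane
    // instead of a hard cut. Oblivion's world is Z-up, so compare against wpos.z.
    float below = saturate((WaterHeight - wpos.z) / 64.0);

    float3 scene = tex2D(SceneBuffer, uv).rgb;
    return float4(scene * lerp(1.0, 0.35, below), 1.0);
}
```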

But still-- :DDDDDD

EDIT: Wrinklyninja, if you'd like some tutes, I'd be more than happy to explain my thought processes throughout this. Just ask.
User avatar
Zach Hunter
 
Posts: 3444
Joined: Wed Aug 08, 2007 3:26 pm

Post » Thu May 26, 2011 11:03 pm

Sure, give me all you got! :)

As well as that, I'd like to know how you'd go about sampling the pixels around one pixel. Say I had a pixel and I wanted to blend its colour with the pixel next to it. I know that's what the blur effect I've already got does, but I'd like an explanation of how to just read the info of surrounding pixels. Could you indulge me? :D
User avatar
Gavin Roberts
 
Posts: 3335
Joined: Fri Jun 08, 2007 8:14 pm

Post » Fri May 27, 2011 11:14 am

We have a decent Depth of Field effect.
http://www.tesnexus.com/downloads/file.php?id=8771
User avatar
Nauty
 
Posts: 3410
Joined: Wed Jan 24, 2007 6:58 pm

Post » Fri May 27, 2011 9:52 am

No, we don't. That's not really true depth of field; it just blurs everything past a certain distance from the player. A depth-buffer DoF is the real thing: it gives blur on nearer objects, variable blur strength, blur eye-adaption time and so on.

Speaking of DoF, I read that StarCraft pdf that tombofsoldier linked to, very interesting. I'm going to re-write my shader to take advantage of the techniques mentioned in the pdf, which are really self-evident but will require me to learn new things, so that's what I'll do next. The DoF in StarCraft looks really good, hopefully I can manage the same effect. :)
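For reference, the core of the depth-buffer approach is deriving a per-pixel blur amount from the depth sample relative to a focal plane. A minimal sketch (with placeholder focal constants and sampler name, not the StarCraft paper's exact maths) might look like this:

```hlsl
// Per-pixel circle-of-confusion estimate from the depth buffer. Objects nearer or
// farther than FocalDepth get progressively more blur, which a later pass would use
// to scale its blur radius. Sampler name and constants are placeholders.
sampler2D DepthBuffer;

static const float FocalDepth = 0.30;   // assumed focal distance, normalized depth
static const float FocalRange = 0.15;   // assumed depth range that stays mostly sharp

float BlurAmount(float2 uv)
{
    float depth = tex2D(DepthBuffer, uv).r;
    // 0 = in focus, 1 = fully blurred; symmetric in front of and behind the focal plane.
    return saturate(abs(depth - FocalDepth) / FocalRange);
}
```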
User avatar
ruCkii
 
Posts: 3360
Joined: Mon Mar 26, 2007 9:08 pm

Post » Fri May 27, 2011 1:04 pm

What happened to the whole beautiful godray thing? :cold:
User avatar
Nathan Hunter
 
Posts: 3464
Joined: Sun Apr 29, 2007 9:58 am

Post » Thu May 26, 2011 10:46 pm

No, we don't. That's not really true depth of field; it just blurs everything past a certain distance from the player. A depth-buffer DoF is the real thing: it gives blur on nearer objects, variable blur strength, blur eye-adaption time and so on.

Speaking of DoF, I read that StarCraft pdf that tombofsoldier linked to, very interesting. I'm going to re-write my shader to take advantage of the techniques mentioned in the pdf, which are really self-evident but will require me to learn new things, so that's what I'll do next. The DoF in StarCraft looks really good, hopefully I can manage the same effect. :)

Oh okay.
Is this gonna be like the ENB series but better? (not just the DoF, but godrays and stuff)
User avatar
Robert Devlin
 
Posts: 3521
Joined: Mon Jul 23, 2007 2:19 pm

Post » Fri May 27, 2011 6:30 am

Is this the same as the pdf you are reading, wrinklyninja:

- http://www.scribd.com/doc/4898192/Graphics-TechSpec-from-StarCraft-2
User avatar
Kay O'Hara
 
Posts: 3366
Joined: Sun Jan 14, 2007 8:04 pm

Post » Fri May 27, 2011 7:44 am

@ strupekutter: Yes, that's the one.

@ Jjiinx: I believe that is being done by vtastek.

@ con-tur-eh: I certainly hope so. :)
User avatar
Hella Beast
 
Posts: 3434
Joined: Mon Jul 16, 2007 2:50 am

Post » Fri May 27, 2011 8:46 am

Sure, give me all you got! :)

As well as that, I'd like to know how you'd go about sampling the pixels around one pixel. Say I had a pixel and I wanted to blend its colour with the pixel next to it. I know that's what the blur effect I've already got does, but I'd like an explanation of how to just read the info of surrounding pixels. Could you indulge me? :D

WARNING: LOTS OF BACKGROUND DATA FOLLOWS. THE PROCESS IS SIMPLE, BUT THE WALL OF TEXT MAY BE INTIMIDATING.
Well, whenever you see the tex1D/tex2D/tex3D stuff going on, you've likely also noticed the two arguments. Under the hood, these correspond to two things -- number 1 is which texture to read from (referred to as a sampler, as it automatically applies a number of signal-processing effects to the raw data stored inside the texture, such as filtering/interpolation), and number 2 is where/how to read from said texture. Traditionally you'll usually see the UV coordinates thrown in there from the vertex shader, though you can do a number of interesting things to them.

These UV coordinates are literally offsets from an arbitrary position on said texture; in DirectX I *believe* it's from the top-left corner. They're also affine, meaning they have no real concept of units or scale: a value of 1.0 (in this context) means you'll *always* be sampling from the opposite edge of the reference point I talked about earlier. If you're familiar with vector math, the idea's real similar to normalization, though there are some slight semantic differences.

Anyway, sampling from areas that are 'physically' near to others works pretty much identically to how you might envision it -- just add or subtract small amounts to the value and read. The interesting part crops up when you're trying to be specific about it. As mentioned, texture coordinates are affine, and you have no real built-in way of saying 'shift left one pixel' and similar. That's where the whole rcpres[] thing pops up -- Timeslip added a small bit of extra data that specifies how 'big' a pixel is in terms of the texture size. It's just 1/width or 1/height -- very intuitive. Later versions of DirectX (starting with 11, I think) actually make this available out of the box, so to speak. On the positive side, this affine coordinate system also makes extending these same ideas to 1D or 3D textures very straightforward -- you're just adding (or subtracting) one dimension.
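In other words, 'read my neighbour' is just the current UV plus rcpres-sized offsets. A minimal sketch (the frame sampler name here is assumed; rcpres is the OBGE constant described above):

```hlsl
float2    rcpres;      // (1/width, 1/height) of the texture, provided by OBGE
sampler2D thisframe;   // assumed name for the current-frame sampler

// Average the current pixel with the pixels one texel to its left and right.
float4 NeighbourBlend(float2 uv : TEXCOORD0) : COLOR0
{
    float4 centre = tex2D(thisframe, uv);
    float4 left   = tex2D(thisframe, uv - float2(rcpres.x, 0.0));  // one pixel left
    float4 right  = tex2D(thisframe, uv + float2(rcpres.x, 0.0));  // one pixel right
    return (left + centre + right) / 3.0;
}
```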

You can also goof with the w component for some more nifty stuff, and this is where MIP mapping comes into play. There's one real rule in modern shader development, and it's that texture access is sloooowwwwww. It's just how the hardware is/was built, and the problem is very difficult to solve in a cheap/effective manner. To combat this, graphics cards now have something referred to as a 'texture cache' that gives the GPU a small amount of very fast-access storage for working with reads. Writes are handled separately, but that's a post unto itself and involves something referred to as 'fillrate'. I can explain that more later on, if you're interested, but I digress.

Anyway, as stuff gets farther and farther away, each read usually tends to jump farther and farther across the texture; this is related to the idea of derivatives and some elements of sampling theory. So that the shader/texture unit can avoid constantly moving junk in and out of this cache (referred to as 'thrashing'), Lance Williams had the bright idea to average the surrounding texture info in such a way that the original 'feel' could be retained, but you could theoretically get more coverage out of the same amount of cache memory at the cost of some overall storage space. It also has the very nice property of reducing texture aliasing (remember kids, point sampling makes for jaggy edges, whether it be on triangle edges or dependent texture reads) at the cost of making stuff go kinda yucky at oblique angles.

If you'll recall, the fourth, w component of the 'where' vector is usually reserved for screwing around with which MIP level you want to use. It can mean a number of different things depending on what you call it with: for example, the tex2Dbias intrinsic adds the w value to whatever MIP level the hardware determines to be optimal and samples accordingly, whereas the tex2Dlod function uses it to explicitly pick which MIP level you want. You can do all sorts of interesting antialiasing stuff if you mess with it, but that may be a bit advanced when talking about stuff like postprocessing, etc. You can, however, exploit it to do a fast box blur as described by IW in that DoF article I linked to earlier, among other things. As with much of computer science, the end results of doing something are highly dependent on the context in which it's being performed.
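A quick sketch of the two intrinsics mentioned above (sampler name assumed; note that tex2Dlod needs a ps_3_0 compile target, and the texture has to actually have MIP levels for either call to do anything interesting):

```hlsl
sampler2D thisframe;   // assumed name for the scene/frame sampler

float4 MipSamplingDemo(float2 uv : TEXCOORD0) : COLOR0
{
    // Explicit: force MIP level 2 -- each texel there already averages a 4x4 block
    // of the base level, which is the 'cheap box blur' trick mentioned above.
    float4 forced = tex2Dlod(thisframe, float4(uv, 0.0, 2.0));

    // Relative: nudge whatever level the hardware would have chosen up by 1.5.
    float4 biased = tex2Dbias(thisframe, float4(uv, 0.0, 1.5));

    return lerp(forced, biased, 0.5);
}
```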

EDIT: actually just going to pony up and write a bigarse thread about graphics programming in the Oblivion section. Stay tuned.

What happened to the whole beautiful godray thing? :cold:

Waiting on screen-space sun position. At that point it'd likely take less than five minutes to get working; ask scanti about it.
User avatar
Alex Blacke
 
Posts: 3460
Joined: Sun Feb 18, 2007 10:46 pm

Post » Fri May 27, 2011 9:30 am

Cool, thanks for that wall of text, very informative. The trouble with me is that while I'm competent enough with maths that I can handle pretty much anything that shader programming would throw at me (vectors, matrices, calculus), I don't know how to get numbers and set them. Your explanation has helped with that. :)
User avatar
Carys
 
Posts: 3369
Joined: Wed Aug 23, 2006 11:15 pm

Post » Thu May 26, 2011 11:07 pm

Cool, thanks for that wall of text, very informative. The trouble with me is that while I'm competent enough with maths that I can handle pretty much anything that shader programming would throw at me (vectors, matrices, calculus), I don't know how to get numbers and set them. Your explanation has helped with that. :)

It's a very complicated subject, and I generally try to stay the hell away from sampling theory for the most part lest my head explode. You're excused.
EDIT: Aforementioned thread http://www.gamesas.com/bgsforums/index.php?showtopic=1048729 for all interested parties.
User avatar
CORY
 
Posts: 3335
Joined: Sat Oct 13, 2007 9:54 pm

Post » Fri May 27, 2011 11:41 am

One more question: Is there any way I can store the output of a shader function to a new sampler? For instance, say I turn all the pixels pink on the sampler that I get the current frame from. I then don't want to send that as output to the screen, as it's just a small step before the next one, which is to turn them blue. But for the blue step I'd like to work from a whole image like I did for pink. Is there any way to do this?

Another analogy: I touch up a pic and want to send it to the reprographics department for scanning before hanging it up on my wall. Is there a way of sending it to reprographics as the whole picture, without cutting it up into very small bits?

The reason I ask is that it seems I need to get a few different samplers showing various stages of blur, and at different sizes.


EDIT:

And another question: I've swapped out whatever the previous blur was for a Gaussian blur. At the moment the blur is using values I found on the internet, as I'm not sure how you derive them. I also tried getting my own values by plugging the existing x,y positions into the formula and setting the new coordinates to the results, but when I do that the game is totally weirded out. I don't get why there is a set of values that is multiplied by a blur constant, added onto the original coordinates, and then repeated for the next set of values. Can someone explain to me why this is done?

Sorry for all the confusing questions. :)
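For reference, the pattern being asked about is the standard separable-blur tap loop: each value is a tap offset in pixels, scaled by rcpres so it becomes a UV offset, scaled again by a blur-radius constant, and paired with a Gaussian weight. A minimal sketch with placeholder weights (not taken from any particular shader) looks like this:

```hlsl
float2    rcpres;      // (1/width, 1/height), provided by OBGE
sampler2D thisframe;   // assumed name for the current-frame sampler

static const float BlurRadius = 1.0;   // the 'blur constant' scaling the offsets
static const float offsets[5] = { -2.0, -1.0, 0.0, 1.0, 2.0 };     // taps, in pixels
static const float weights[5] = { 0.06, 0.24, 0.40, 0.24, 0.06 };  // sum to 1.0

// Horizontal pass only; a second pass would use rcpres.y for the vertical direction.
float4 GaussianBlurH(float2 uv : TEXCOORD0) : COLOR0
{
    float4 colour = 0;
    for (int i = 0; i < 5; i++)
    {
        float2 tap = uv + float2(offsets[i] * rcpres.x * BlurRadius, 0.0);
        colour += tex2D(thisframe, tap) * weights[i];
    }
    return colour;
}
```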
User avatar
Cat Haines
 
Posts: 3385
Joined: Fri Oct 27, 2006 9:27 am

Post » Fri May 27, 2011 6:15 am

One more question: Is there any way I can store the output of a shader function to a new sampler? For instance, say I turn all the pixels pink on the sampler that I get the current frame from. I then don't want to send that as output to the screen, as it's just a small step before the next one, which is to turn them blue. But for the blue step I'd like to work from a whole image like I did for pink. Is there any way to do this?

Another analogy: I touch up a pic and want to send it to the reprographics department for scanning before hanging it up on my wall. Is there a way of sending it to reprographics as the whole picture, without cutting it up into very small bits?

The reason I ask is that it seems I need to get a few different samplers showing various stages of blur, and at different sizes.

It's tricky with OBGE, if only because it wasn't really designed to do what we're doing with it right now. I did implement a high-quality motion blur using a similar technique (referred to as 'ping-ponging' as a technical term, no joke) for OVEP that uses the lastpass sampler, though I'm unsure if that technique is what you're after. Can I ask for a technical context?

EDIT: Scratch that, I'm retarded. -_- You can probably get one pass, but TBH I think you're going to have to do without the 'multiple stages' part. I assume this is related to the IW blur? If so, you might be able to get by just blurring some of the pre-blurred regions, especially considering how they suggest you only use small sizes anyway.
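A minimal sketch of the 'pink then blue' example from the question, written as two passes in one technique and using the lastpass sampler mentioned above to read what the previous pass wrote (sampler names other than lastpass are assumptions):

```hlsl
sampler2D thisframe;   // the current frame, read by the first pass (assumed name)
sampler2D lastpass;    // whatever the previous pass wrote, as discussed above

float4 PinkPass(float2 uv : TEXCOORD0) : COLOR0
{
    float4 c = tex2D(thisframe, uv);
    return lerp(c, float4(1.0, 0.4, 0.7, 1.0), 0.5);   // push the whole frame towards pink
}

float4 BluePass(float2 uv : TEXCOORD0) : COLOR0
{
    float4 c = tex2D(lastpass, uv);                    // work from the complete pink image
    return lerp(c, float4(0.2, 0.3, 1.0, 1.0), 0.5);   // then push it towards blue
}

technique TwoStage
{
    pass p0 { PixelShader = compile ps_2_0 PinkPass(); }
    pass p1 { PixelShader = compile ps_2_0 BluePass(); }
}
```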
User avatar
leni
 
Posts: 3461
Joined: Tue Jul 17, 2007 3:58 pm

Post » Fri May 27, 2011 10:58 am

Um, I'll have to think of one. What about my second (late edit) question?
User avatar
CORY
 
Posts: 3335
Joined: Sat Oct 13, 2007 9:54 pm

Post » Fri May 27, 2011 12:32 pm

@ UK47Howard: I don't know. I've never managed to get it working with this card (it's an ATI Radeon HD 4xxx). Anyone have any ideas on how to get it to work?

You have to force AA through the driver settings.
User avatar
Jhenna lee Lizama
 
Posts: 3344
Joined: Wed Jun 06, 2007 5:39 am

Post » Fri May 27, 2011 1:55 pm

@ UK47Howard: I figured out it was actually either my shaders or OBGE screwing up the antialiasing - the shaders I've been using have just been setting the alpha channel to 1, so that might be it, but it could also be OBGE itself. No matter.
User avatar
Markie Mark
 
Posts: 3420
Joined: Tue Dec 04, 2007 7:24 am

Post » Fri May 27, 2011 8:08 am

This is the new thread for the godrays shader, which is currently in development. Please read the first thread, found at http://www.gamesas.com/bgsforums/index.php?showtopic=1045352, for more information.

Please limit this thread to talk only about godrays or stuff related to its development.

Thank You



it would be a good idea to edit your topic

and add some download links ... so you wouldn't have to go through all of the pages to find something .. these kinds of topics can get very long :)

just a thought
User avatar
Sabrina Schwarz
 
Posts: 3538
Joined: Fri Jul 14, 2006 10:02 am

Post » Fri May 27, 2011 7:51 am

it would be a good idea to edit your topic

and add some download links ... so you wouldn't have to go through all of the pages to find something .. these kinds of topics can get very long :)

just a thought

There's nothing to download yet... and it's just picking up where I left off?

Once there is stuff to add I'll make a RELz thread... :rolleyes:

And this is an Open Community Thread. I'm not going to take the time to add every piece of work in progress to the main topic until it's finished. The point of this thread is general mod discussion, not to show off what we have or what we're releasing. That will come in the future.
User avatar
Vicki Gunn
 
Posts: 3397
Joined: Thu Nov 23, 2006 9:59 am

Post » Fri May 27, 2011 11:52 am

There's nothing to download yet... and it's just picking up where I left off?

Once there is stuff to add I'll make a RELz thread... :rolleyes:

And this is an Open Community Thread. I'm not going to take the time to add every piece of work in progress to the main topic until it's finished. The point of this thread is general mod discussion, not to show off what we have or what we're releasing. That will come in the future.



oki
User avatar
Jade MacSpade
 
Posts: 3432
Joined: Thu Jul 20, 2006 9:53 pm

Post » Thu May 26, 2011 10:17 pm

Actually I leave the WIPz and RELz threads to the people making the shaders. You guys are all BA :mohawk:
User avatar
Peetay
 
Posts: 3303
Joined: Sun Jul 22, 2007 10:33 am

Post » Fri May 27, 2011 8:41 am

I can try and muck with a water shader, but I'd need a few things. Scanti, what do we know about the view matrix? Ideally what I'd like to do is reconstruct world-space position using the inverse view matrix and some http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/ (again, thank GDNet for that, those folks rock *hardcore*) and then have a little two-pass fun with things :)

First off would be to pull a Crytek and actually darken all underwater geometry using a kind of z-feathering against the water plane height; pretty basic, and it noticeably increases realism, as detailed in the extremely wonderful http://www.crytek.com/fileadmin/user_upload/inside/presentations/gdc2008/GDC08_SousaT_CrysisEffects.ppt they put out a while back. I can likely merge caustics into this by reverse-tracing a ray up from the point we're shading until it hits the water surface, then refracting it and dotting that with the sun direction. Pow, fundamentals of physically plausible global illumination, son!

I can probably also handle water/distance fog here as well -- by default Oblivion just uses a flat plane that does a lookup into some sort of heightmap(!!!) to determine water surface opacity. You can break this real nasty by looking into the water from about a 45-degree angle into deep regions. I've already determined how to remove water totally (thanks to the Oldblivion folks for the original idea, i.e. use the basic texture and give it 0 alpha), and until we can figure out a way of getting reflection to work (... erm, hi Scanti) I think it's as good a place to start as any. Point is, it's doable, but it's likely going to be a really, really math-heavy shader.

But still-- :DDDDDD

EDIT: Wrinklyninja, if you'd like some tutes, I'd be more than happy to explain my thought processes throughout this. Just ask.


Working out all the game's constants is at the top of my to-do list at the moment. The shaders will be pretty limited without them. I'm trying to trace how the game's engine gets the data instead of just ripping it out of a memory address.

One more question: Is there any way I can store the output of a shader function to a new sampler? For instance, say I turn all the pixels pink on the sampler that I get the current frame from. I then don't want to send that as output to the screen, as it's just a small step before the next one, which is to turn them blue. But for the blue step I'd like to work from a whole image like I did for pink. Is there any way to do this?

Another analogy: I touch up a pic and want to send it to the reprographics department for scanning before hanging it up on my wall. Is there a way of sending it to reprographics as the whole picture, without cutting it up into very small bits?

The reason I ask is that it seems I need to get a few different samplers showing various stages of blur, and at different sizes.


At the moment OBGE only remembers what was written in the last pass. I could fairly easily make it remember a set number of passes instead, i.e. the last 3 passes. Of course, doing so will eat into your card's VRAM.

@ UK47Howard: I figured out it was actually either my shaders or OBGE screwing up the antialiasing - the shaders I've been using have just been setting the alpha channel to 1, so that might be it, but it could also be OBGE itself. No matter.


According to the documentation for ATI graphics cards, enabling a readable depth buffer will disable any antialiasing. It's a card limitation. I believe you can read an antialiased depth buffer in DX10, but you have to resolve it first, which converts it into a readable format.

it would be a good idea to edit your topic

and add some download links ... so you wouldn't have to go through all of the pages to find something .. these kinds of topics can get very long :)

just a thought


This thread is just to throw around some ideas and hopefully shape the new version of OBGE into something that everybody will find useful. I'll probably throw out a few test plug-ins so the tech heads that read the thread can try some new stuff out.
User avatar
Bones47
 
Posts: 3399
Joined: Fri Nov 09, 2007 11:15 pm

Post » Fri May 27, 2011 1:09 pm

This thread is just to throw around some ideas and hopefully shape the new version of OBGE into something that everybody will find useful. I'll probably throw out a few test plug-ins so the tech heads that read the thread can try some new stuff out.



I for one would be most appreciative if you throw us salivating tech heads a bone.... :D
User avatar
meg knight
 
Posts: 3463
Joined: Wed Nov 29, 2006 4:20 am

Post » Fri May 27, 2011 3:52 am

I for one would be most appreciative if you throw us salivating tech heads a bone.... :D

I concur, I would like to see how much I can ruin my game :P (wannabe tech-head may be an appropriate name for me)
User avatar
naana
 
Posts: 3362
Joined: Fri Dec 08, 2006 2:00 pm
