Sure, give me all you got!

As well as that, I'd like to know how you'd go about sampling the pixels around one pixel. Say I had a pixel and wanted to blend its colour with the pixel next to it. I know that's what the blur effect I've already got does, but I'd like an explanation of how to just read the info of surrounding pixels. Could you indulge me?

WARNING: LOTS OF BACKGROUND DATA FOLLOWS. THE PROCESS IS SIMPLE, BUT THE WALL OF TEXT MAY BE INTIMIDATING.
Well, whenever you see the tex1D/tex2D/tex3D stuff going on, you've likely also noticed the two arguments. Under the hood, these correspond to two things: the first is which texture to read from (referred to as a sampler, since it automatically applies a number of signal-processing effects, such as filtering/interpolation, to the raw data stored inside the texture), and the second is where/how to read from said texture. Traditionally you'll see the UV coordinates from the vertex shader thrown in there, though you can do a number of interesting things to them. These UV coordinates are offsets from a reference position on said texture; in DirectX I *believe* it's the top-left corner. They're also normalized, meaning they have no real concept of units or scale: a value of 1.0 (in this context) means you'll *always* be sampling from the opposite edge of the texture, regardless of its size in pixels. If you're familiar with vector math, the idea's real similar to normalization, though there are some slight semantic differences.

Anyway, sampling from areas that are 'physically' near each other works pretty much how you'd envision it: just add or subtract small amounts from the coordinates and read. The interesting part crops up when you're trying to be specific about it. As mentioned, texture coordinates are normalized, so there's no built-in way of saying 'shift left one pixel' and similar. That's where the whole rcpres[] thing pops up: Timeslip added a small bit of extra data that specifies how 'big' a pixel is in terms of the texture size. It's just 1/width and 1/height, so adding that to a coordinate moves you exactly one texel over. Very intuitive. Later versions of DirectX (starting with 11, I think) actually make this available to you out of the box, so to speak. On the plus side, this normalized coordinate system also makes extending these same ideas to 1D or 3D textures very similar: you're just adding (or subtracting) one dimension.
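Putting the above together, here's a minimal sketch of blending one pixel with its right-hand neighbour. It assumes the MGE-style conventions described above (a scene sampler and a host-supplied rcpres holding the reciprocal resolution); the names here are illustrative, not the exact ones your shader framework uses.

```hlsl
// Assumed conventions: s0 is the scene texture, and
// rcpres = float2(1.0/width, 1.0/height), set by the host program.
sampler s0 : register(s0);
float2 rcpres;

float4 BlendRight(float2 uv : TEXCOORD0) : COLOR0
{
    float4 here  = tex2D(s0, uv);
    // Shift exactly one texel to the right: add "one pixel's width"
    // in normalized UV space.
    float4 right = tex2D(s0, uv + float2(rcpres.x, 0.0));
    return (here + right) * 0.5;  // simple 50/50 blend
}
```

Negate the offset to sample the left neighbour, or use rcpres.y in the second component to move up/down instead.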
You can also goof with the w component for some more nifty stuff, and this is where MIP mapping comes into play. There's one real rule in modern shader development: texture access is sloooowwwwww. It's just how the hardware is/was built, and the problem is very difficult to solve in a cheap/effective manner. To combat this, graphics cards now have something referred to as a 'texture cache' that gives the GPU a small amount of very fast-access storage for working with reads. Writes are handled separately, but that's a post unto itself and involves something referred to as 'fillrate.' I can explain that more later on, if you're interested, but I digress.

Anyway, as stuff gets farther and farther away, consecutive reads tend to jump farther and farther apart across the texture; this is related to the idea of derivatives and some elements of sampling theory. To keep the shader/texture unit from constantly moving junk in and out of this cache (referred to as 'thrashing'), Lance Williams had the bright idea of pre-averaging the texture into progressively smaller versions, in such a way that the original 'feel' is retained but you can theoretically get more coverage out of the same amount of cache memory, at the cost of some overall storage space. It also has the very nice property of reducing texture aliasing (remember kids, point sampling makes for jaggy edges, whether on triangle edges or dependent texture reads) at the cost of making stuff go kinda yucky at oblique angles.

If you'll recall, the fourth, w component of the 'where' vector is usually reserved for screwing around with which MIP level you want to use. It can mean a number of different things depending on what you call with it: for example, the tex2Dbias intrinsic adds the w value to whatever MIP level the hardware determines to be optimal and samples accordingly, whereas tex2Dlod uses it to explicitly select which MIP level you want.
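The two flavours of w-component MIP control mentioned above look like this in practice. A minimal sketch, assuming an s0 sampler on a texture with a full MIP chain; the bias/level values are arbitrary examples.

```hlsl
sampler s0 : register(s0);

float4 MipTricks(float2 uv : TEXCOORD0) : COLOR0
{
    // tex2Dbias: w is ADDED to the MIP level the hardware picked.
    // w = +1.0 nudges the sample one level blurrier than normal.
    float4 biased = tex2Dbias(s0, float4(uv, 0.0, 1.0));

    // tex2Dlod: w explicitly SELECTS the MIP level (0 = full res).
    // The hardware's automatic level selection is bypassed entirely.
    float4 forced = tex2Dlod(s0, float4(uv, 0.0, 2.0));

    return (biased + forced) * 0.5;
}
```

Note tex2Dlod requires shader model 3.0, and both intrinsics take a float4 even though only xy (UV) and w (MIP control) are used here.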
You can do all sorts of interesting antialiasing stuff if you mess with it, but that may be a bit advanced when talking about things like postprocessing. You can, however, exploit it to do a fast box blur as described by IW in that DoF article I linked to earlier, among other things. As with much of computer science, the end result depends heavily on the context in which it's performed.
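For a taste of the "MIP level as a cheap blur" idea: since MIP level N is roughly a pre-averaged 2^N x 2^N block of texels, forcing a coarser level gets you a box-blur-ish result for the price of a single read. A sketch, not the exact technique from the IW article, and assuming the source texture actually has MIP levels generated:

```hlsl
sampler s0 : register(s0);

float4 CheapBlur(float2 uv : TEXCOORD0) : COLOR0
{
    // MIP level 3 averages roughly an 8x8 neighbourhood of the
    // full-resolution texture in one tap; tune the level to taste.
    return tex2Dlod(s0, float4(uv, 0.0, 3.0));
}
```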
EDIT: actually just going to pony up and write a bigarse thread about graphics programming in the Oblivion section. Stay tuned.
What happened to the whole beautiful godray thing? :cold:
Waiting on screen-space sun position. At that point it'd likely take less than five minutes to get working; ask scanti about it.