I don't get it. I can understand it being a ridiculous performance killer, but why is it practically impossible, to the point where you'd basically need to write the engine yourself? Could someone explain it? I'm seeing it from a purely mathematical (well, not really; pure maths sucks, imho) POV, so I'm missing the programming side of things, but this is how I see it:
You have a texture showing what is rendered on screen. You have a matrix that turns each point of that 2D texture into a 3D point in the game world (my reasoning is that there's already a matrix for the 3D point -> 2D point direction, so it's just a matter of inverting it). So:

1. Use that inverse matrix to get the 3D position of the point in the world from the camera's perspective.
2. You have the position of the sun in the world from the camera's point of view. You then (somehow, I don't know if this is possible) have a matrix that turns coordinates in the camera's coordinate system into coordinates in the sun's coordinate system (the same point from the sun's perspective).
3. Do the whole 3D -> 2D switch in the sun's perspective to get a texture of what you'd see if you were where the sun is.
4. Compare the depth of the same (2D) point with the depth of your point in the 3D world; if they're not the same, something must be in front of it, and so a shadow should be cast at that point.
5. Switch back to the original (camera-perspective) 2D point on the texture and make it a bit darker. Rinse and repeat.
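For what it's worth, the compare-depths part of that idea can be sketched in plain Python. This is just the logic, not how a real shader would do it; the names are hypothetical, and plain lists stand in for the GPU's matrices and the sun's depth texture:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def in_shadow(world_pos, sun_view_proj, shadow_map, bias=1e-3):
    """True if world_pos is occluded from the sun.

    sun_view_proj: 4x4 matrix taking world space -> sun clip space
    shadow_map:    callable (u, v) -> nearest depth seen from the sun
    """
    x, y, z, w = mat_vec(sun_view_proj, [*world_pos, 1.0])
    # Perspective divide: clip space -> normalized coords in the sun's view
    ndc = [x / w, y / w, z / w]
    # Map x,y from [-1, 1] to [0, 1] to look up the sun's depth texture
    u, v = (ndc[0] + 1) / 2, (ndc[1] + 1) / 2
    # If the sun recorded something nearer at this texel, our point is
    # behind it -> shadowed (the bias avoids self-shadowing artifacts)
    return ndc[2] > shadow_map(u, v) + bias
```

That's the whole per-pixel test: one matrix multiply, one divide, one texture lookup, one compare -- which is why it's "only" expensive rather than impossible in principle.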
Now, it's fairly obvious that's a lengthy process, but how is it practically impossible? (I'm not trying to argue a point or anything; I just want an explanation of where my thinking fails.)
EDIT: Yep, rereading, it definitely would be a perf. killer. But that's not the point.
@ Jjiinx: When we get the ability to pass variables to the shaders from Oblivion scripting, that (and more) shall certainly happen.

@ Snow_EP: You're definitely not intruding. And thanks for the fix. :thumbsup: