» Thu May 17, 2012 7:49 pm
Most graphics API documentation and performance tuning guides recommend allocating the resources you need most as soon as possible, so they end up in the fastest storage available on the graphics card.
A 4096x4096 shadow map uses four bytes per pixel, so each one requires 64 megabytes of memory, and the game most likely needs several of them for the distance cascading trick. Other shadow-casting light sources probably need a few maps too. Let's assume there's about three 4096x4096 maps' worth of shadowmap buffers there (no, they're not all 4096x4096, but the total sum of pixels might be close). That's 192 megabytes of memory on shadow maps alone.
Then there's the framebuffer -- triple buffering and 4xAA @ 1920x1200, 8 bits per component RGBA (or ABGR), so again 4 bytes per pixel: 1920 * 1200 * 4 * 4 * 3 = 105 megabytes for just that. These don't absolutely need to be multisampled, but if you force AA through the drivers it's probably going to happen.
The internal buffers the game uses for its HDR probably have 16-bit color components instead of the framebuffer's eight, so that's _8_ bytes per pixel. They need to be multisampled too -- no point in AA otherwise. There's at least one of them, maybe two: 141 megabytes in the latter case.
Oh, and a depth buffer. Got to have at least one of those. 4 bytes per depth sample, and we're still multisampling. 1920 * 1200 * 4 * 4 = 35 megabytes.
192+105+141+35 = 473 megabytes of video memory.
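The whole estimate above can be sketched in a few lines of Python. Everything here is an assumption carried over from the post (resolutions, sample counts, buffer counts), not a measurement from the actual game:

```python
# Back-of-envelope VRAM estimate for the buffers discussed above.
# All resolutions, formats, and counts are assumptions from the post.
MB = 1024 * 1024

def buffer_bytes(width, height, bytes_per_pixel, samples=1, count=1):
    """Total size of `count` buffers at the given resolution,
    bytes per pixel, and multisample count."""
    return width * height * bytes_per_pixel * samples * count

shadow  = buffer_bytes(4096, 4096, 4, count=3)            # three shadow-map-sized depth targets
backbuf = buffer_bytes(1920, 1200, 4, samples=4, count=3) # triple-buffered 4xAA framebuffer
hdr     = buffer_bytes(1920, 1200, 8, samples=4, count=2) # two 16-bit-per-channel HDR targets
depth   = buffer_bytes(1920, 1200, 4, samples=4)          # one multisampled depth buffer

total = shadow + backbuf + hdr + depth
print(f"shadow maps: {shadow / MB:.0f} MB")   # 192 MB
print(f"framebuffer: {backbuf / MB:.0f} MB")  # 105 MB
print(f"HDR buffers: {hdr / MB:.0f} MB")      # 141 MB
print(f"depth:       {depth / MB:.0f} MB")    # 35 MB
print(f"total:       {total / MB:.0f} MB")    # 473 MB
```

Real drivers pad and align allocations, so the true footprint would only be higher than this idealized sum.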
Yes, there's plenty of reason for this game to allocate lots of memory on startup.
Edit: brainfart corrected.