Will the PC version take advantage of 4 or 6 core processors?

Post » Wed Mar 30, 2011 1:34 am

Parallelism is hard, so I wouldn't put much hope in complete usage of every core on a hex-core system. But then, there's also simply not enough work to go around to do that. There's still a fair amount of stuff that can be run in parallel, and Bethesda would have to be idiots not to take advantage of that!

Even so, there comes a point where you're processing everything you need to process, and modern processors are really, really fast. There simply isn't six cores' worth of work, even if the load could be evenly distributed.

There would be work for more cores IF IT WERE ADDED. We could get MUCH better AI, for example. But like I said, the programming is hard... and we have to wait until more people have more advanced hardware. Hopefully after the next console generation the AI in games will be improved, and not just the graphics.
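The diminishing-returns point above is usually formalised as Amdahl's law: if only a fraction p of the work can run in parallel, n cores give a speedup of 1 / ((1 - p) + p / n). A minimal Python sketch (the 80% parallel fraction is purely illustrative, not a measurement of any real game):

```python
# Amdahl's law: if only a fraction p of the frame's work can run in
# parallel, n cores give a speedup of 1 / ((1 - p) + p / n).
# The 0.8 parallel fraction below is purely illustrative.

def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    for cores in (1, 2, 4, 6):
        print(f"{cores} cores: {amdahl_speedup(0.8, cores):.2f}x")
```

Even at 80% parallel, six cores only triple the frame rate, and adding cores beyond that barely moves the number, which is the "not six cores' worth of work" argument in miniature.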
User avatar
Lindsay Dunn
 
Posts: 3247
Joined: Sun Sep 10, 2006 9:34 am

Post » Tue Mar 29, 2011 10:37 pm

Well, part of the engine upgrade may have been to better handle the three cores in the Xbox 360 and the many cores in the PS3, and that would carry over to the PC version. Things like AI and physics likely run on another core from much of the other work.
User avatar
Emily Shackleton
 
Posts: 3535
Joined: Sun Feb 11, 2007 12:36 am

Post » Wed Mar 30, 2011 7:04 am

There would be work for more cores IF IT WERE ADDED. We could get MUCH better AI, for example. But like I said, the programming is hard... and we have to wait until more people have more advanced hardware. Hopefully after the next console generation the AI in games will be improved, and not just the graphics.

Nah, that'd be nice, but it's not going to happen: good AI is also very hard, and a lot of effort to go to for a subset of a subset of your market. Doing that for graphics? Fine, you can swap your renderer out and it won't change anything. But AI? Half the game hinges on that. Even then, it's very far from an easy task to thread your AI across an arbitrary number of cores, and with greater complexity come both more, and more esoteric, bugs.

@Wintermane; While the 360 is a three-core machine, the PS3 is different enough that it has no easy parallel among desktop processors.
User avatar
Wanda Maximoff
 
Posts: 3493
Joined: Mon Jun 12, 2006 7:05 am

Post » Wed Mar 30, 2011 1:26 am

I think this is where the DX11 support will be most utilized.

If you're referring to multithreaded rendering, I don't believe either AMD or Nvidia has written support for it into their drivers... or if they have, it's not very good support. In my testing, performance goes down (or doesn't change at all) when multithreaded rendering is enabled, and from my research nobody is really making use of it yet because of this lack of driver support. However, I do believe preliminary support is making its way into drivers now, since actual shipped games are pushing for it; Civ 5, for example, has expanded its use of DX11 with each successive patch.

Also, since DX11 is mostly just a middleman between the rest of the executable and the pixels drawn on the screen, multithreaded rendering honestly has nothing to do with general multithreading performance. That is all handled in the game engine itself, not the graphics code. And just as with DX11, there are libraries to facilitate multithreading and multitasking, such as OpenMP, Intel's TBB, and so on.
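For what it's worth, the "task library" idea behind OpenMP and TBB (submit work, let the library manage the threads) exists in most languages; Python's stdlib analogue is `concurrent.futures`. A minimal sketch, where `simulate_entity` is a made-up stand-in for any independent per-task work:

```python
# Task-library sketch in the OpenMP/TBB spirit: callers submit work,
# the pool owns and schedules the threads. simulate_entity is a
# hypothetical stand-in for any independent per-item task.
from concurrent.futures import ThreadPoolExecutor

def simulate_entity(entity_id: int) -> int:
    # Placeholder per-entity work; anything independent fits here.
    return entity_id * entity_id

def update_all(entities: list[int]) -> list[int]:
    # pool.map hands tasks to worker threads; results come back
    # in input order, so callers never see the threading.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(simulate_entity, entities))

if __name__ == "__main__":
    print(update_all([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

(Caveat: because of CPython's GIL, threads only overlap I/O-bound work; `ProcessPoolExecutor` is the drop-in equivalent for CPU-bound tasks.)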

From what I understand, DX11 inherently supports however many non-dedicated threads are running on your OS. Devs don't have to set a number for the engine to use, because just supporting DX11 means DX can handle it for them. At least that's what I've heard, but take it with a grain of salt, as very few technical specs for DX11 can be confirmed as official.

Nothing is inherent to DX11, per se. You have to write the code yourself; it's just an API. It's like being given a box of K'nex (or Legos) and told to build something functional with it. DirectX does not control multithreaded rendering itself... you have to tell it to use multiple CPU cores for certain tasks (which still only relate to rendering, not game logic, physics, or the like). The same goes for the "free" performance gains people talk about when they mention DX11. There is no such thing; frankly, it's just hype and word of mouth. If you convert something from DX9 to DX11, it will perform basically the same until you start refactoring the engine to make use of the new pipeline, the new shaders, and the new shader features. There may be minimal differences I'm not aware of between two otherwise identical pieces of DX9/DX11 code that give measurable performance boosts, but by the same token the entire pipeline has been rearranged. There are more shader stages, more bells and whistles, and thus more abstraction, so there are very likely areas that perform worse as well, and I'd imagine it about evens out.

Todd is even guilty of talking about DX11 in this manner, which is odd since he has also stated that Skyrim's shaders are at a DX9 level. That would seem to imply they're not even using compute shaders or Shader Model 5, and who knows whether they're actually utilizing anything at this point. I'd like to see where these magical "free" performance gains are, because like I said, nothing is "inherent" to DX11, especially not performance gains. On the contrary, most games perform much worse under DX11, because DX11 can actually do more and DX11 games usually use all those extra features. And don't let my skepticism/pessimism fool you: I'm a staunch proponent of DirectX 11, but not because I think it's a magical performance band-aid like most other people... I like it because the pipeline just makes more sense, the additional shader stages are great to have, and it has enabled techniques that previously weren't feasible in real-time graphics.
User avatar
Killer McCracken
 
Posts: 3456
Joined: Wed Feb 14, 2007 9:57 pm

Post » Tue Mar 29, 2011 9:51 pm

If you're referring to multithreaded rendering, I don't believe either AMD or Nvidia has written support for it into their drivers... or if they have, it's not very good support. In my testing, performance goes down (or doesn't change at all) when multithreaded rendering is enabled, and from my research nobody is really making use of it yet because of this lack of driver support. However, I do believe preliminary support is making its way into drivers now, since actual shipped games are pushing for it; Civ 5, for example, has expanded its use of DX11 with each successive patch.

Also, since DX11 is mostly just a middleman between the rest of the executable and the pixels drawn on the screen, multithreaded rendering honestly has nothing to do with general multithreading performance. That is all handled in the game engine itself, not the graphics code. And just as with DX11, there are libraries to facilitate multithreading and multitasking, such as OpenMP, Intel's TBB, and so on.


Nothing is inherent to DX11, per se. You have to write the code yourself; it's just an API. It's like being given a box of K'nex (or Legos) and told to build something functional with it. DirectX does not control multithreaded rendering itself... you have to tell it to use multiple CPU cores for certain tasks (which still only relate to rendering, not game logic, physics, or the like). The same goes for the "free" performance gains people talk about when they mention DX11. There is no such thing; frankly, it's just hype and word of mouth. If you convert something from DX9 to DX11, it will perform basically the same until you start refactoring the engine to make use of the new pipeline, the new shaders, and the new shader features. There may be minimal differences I'm not aware of between two otherwise identical pieces of DX9/DX11 code that give measurable performance boosts, but by the same token the entire pipeline has been rearranged. There are more shader stages, more bells and whistles, and thus more abstraction, so there are very likely areas that perform worse as well, and I'd imagine it about evens out.

Todd is even guilty of talking about DX11 in this manner, which is odd since he has also stated that Skyrim's shaders are at a DX9 level. That would seem to imply they're not even using compute shaders or Shader Model 5, and who knows whether they're actually utilizing anything at this point. I'd like to see where these magical "free" performance gains are, because like I said, nothing is "inherent" to DX11, especially not performance gains. On the contrary, most games perform much worse under DX11, because DX11 can actually do more and DX11 games usually use all those extra features. And don't let my skepticism/pessimism fool you: I'm a staunch proponent of DirectX 11, but not because I think it's a magical performance band-aid like most other people... I like it because the pipeline just makes more sense, the additional shader stages are great to have, and it has enabled techniques that previously weren't feasible in real-time graphics.

So I guess what it's going to boil down to is the skill of BGS's programmers in using the best possible pipeline for the game engine's features. Once again, thanks jwd for your insights.
User avatar
Laura Simmonds
 
Posts: 3435
Joined: Wed Aug 16, 2006 10:27 pm

Post » Wed Mar 30, 2011 4:08 am

Has anyone heard anything about how many cores the PC version will take advantage of? As I understand it, programs must be designed to take advantage of multi-core PCs, or you're stuck with just one core per program. Does anyone have the technical know-how to speculate on this, or has anyone heard anything specific? I just fired up my new system today and would love to hear that the PC version will utilize my killer processor!

I've had some insight into this at university. Basically, all modern middleware (which Beth IS using for Skyrim) is designed to automatically take advantage of as many of the processor's cores as appropriate, without the devs needing to lift a finger. The one confirmed piece of middleware that supports this is Havok Behavior, which Beth uses to make their animations fluid, so at the very least that much will fill up your cores. I assume they would take the same approach in the core program as well, for the sake of direct control over load balancing. :spotted owl:
User avatar
Steve Fallon
 
Posts: 3503
Joined: Thu Aug 23, 2007 12:29 am

Post » Wed Mar 30, 2011 3:06 am

I would be quite disappointed if the game weren't able to do so.
User avatar
Sista Sila
 
Posts: 3381
Joined: Fri Mar 30, 2007 12:25 pm

Post » Wed Mar 30, 2011 2:32 am

I do intend to video encode (render? eh?), of course. In any case, AMD has already stated that Bulldozer will be affordable as well (unlike those $900 Intel i7s), so it's a win either way.

That's assuming the benchmarks are favorable to them, of course. We'll see...


You can get a solid i7 for around $300.
User avatar
Avril Churchill
 
Posts: 3455
Joined: Wed Aug 09, 2006 10:00 am

Post » Wed Mar 30, 2011 1:57 am

Software multi-threading doesn't mean "run process on multiple cores".
User avatar
Epul Kedah
 
Posts: 3545
Joined: Tue Oct 09, 2007 3:35 am

Post » Wed Mar 30, 2011 9:08 am

I'm not super-technical, so I'm coming from a place of logic, not knowledge. If Skyrim is played on a quad-core computer, is it likely that the game is programmed/scripted to take advantage of all the cores *if* they are there? If the game is not programmed that way, does the quad-core computer use all the cores anyway? I have a quad, and that's why I'm wondering. Bonus question: does this mean more cores will play it better, or does it not matter? Thanks!
User avatar
Lynne Hinton
 
Posts: 3388
Joined: Wed Nov 15, 2006 4:24 am

Post » Wed Mar 30, 2011 6:56 am

I suspect it won't really take advantage of more than 2 cores fully. But that doesn't mean a quad/hexa core won't help: Windows will manage some background processes on the other cores. I do a lot of video encoding on my machine, and while it's very fast, I plan on upgrading to a new Bulldozer-class chip and 8 or 16 GB of RAM when they come out.
User avatar
Big mike
 
Posts: 3423
Joined: Fri Sep 21, 2007 6:38 pm

Post » Wed Mar 30, 2011 3:07 am

I'm not super-technical, so I'm coming from a place of logic, not knowledge. If Skyrim is played on a quad-core computer, is it likely that the game is programmed/scripted to take advantage of all the cores *if* they are there? If the game is not programmed that way, does the quad-core computer use all the cores anyway? I have a quad, and that's why I'm wondering. Bonus question: does this mean more cores will play it better, or does it not matter? Thanks!

Multi-core support is two-level. The OS needs to support multiple processors (cores); XP (with SP2), Vista, and Win7 all do. The program also needs to be written to support multiple cores, and that takes a bit of effort. One good thing: even if the program is written for only one core, the other cores can be used for everything else, all the processes that run in the background. Since the main target for Skyrim is the Xbox 360 and it has 3 cores, I hope the PC version of the game will support at least dual-core processors.
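The two levels described above can be poked at from a script: the OS exposes the cores, but a program only benefits if it actually creates more threads or processes. A minimal Python sketch (`report` is just an illustrative name):

```python
# Two-level multi-core support: the OS exposes the cores, but a
# program only benefits if it actually creates threads/processes.
import os
import threading

def report() -> tuple[int, int]:
    logical_cores = os.cpu_count() or 1      # what the OS offers
    live_threads = threading.active_count()  # what this program uses
    return logical_cores, live_threads

if __name__ == "__main__":
    cores, threads = report()
    print(f"OS reports {cores} logical core(s); "
          f"this script is using {threads} thread(s)")
```

A plain single-threaded script will report many cores but only one live thread, which is exactly the "written for only one core" situation: the remaining cores are free for background processes.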


JimC
User avatar
Ria dell
 
Posts: 3430
Joined: Sun Jun 25, 2006 4:03 pm

Post » Tue Mar 29, 2011 9:11 pm

Considering the Xbox 360 uses a triple-core processor, I'd imagine Skyrim is going to be optimized to run on two cores but take advantage of four on the PC, since a triple-core processor is an anomaly in the PC market.

Honestly, to think that it's not going to at least use, and be min-spec'd for, two cores is probably stupid.
User avatar
Thema
 
Posts: 3461
Joined: Thu Sep 21, 2006 2:36 am

Post » Wed Mar 30, 2011 1:38 pm

It won't. Not every PC has a 4 or 6 core processor, but feel free to modify it with the Creation Kit once you get the hang of how it works. I still haven't figured out the Morrowind one; I tried to make a Kothringi race, but to no avail.
User avatar
Chris Johnston
 
Posts: 3392
Joined: Fri Jul 07, 2006 12:40 pm

Post » Wed Mar 30, 2011 1:54 am

I'm not super-technical, so I'm coming from a place of logic, not knowledge. If Skyrim is played on a quad-core computer, is it likely that the game is programmed/scripted to take advantage of all the cores *if* they are there? If the game is not programmed that way, does the quad-core computer use all the cores anyway? I have a quad, and that's why I'm wondering. Bonus question: does this mean more cores will play it better, or does it not matter? Thanks!


The way it works is that one process can have multiple threads. If your game has 4 threads and you have 4 cores, the CPU scheduler in your OS will assign each thread its own core, and everything will be dandy. If you have two cores, each core will be assigned two threads. If you have 8 cores, 4 cores will each be assigned a thread and 4 will sit idle.

Most things that can stretch across many cores do so with a sort of "worker pool", which creates as many threads as there are cores and then assigns tasks to those threads, so generally they can scale very well. Because of the overhead of managing the threads, you'll never get CPU speed * number of cores, but it can get close.

The problem is that two threads executing at the same time really do execute at the same time. In a game engine, both of those threads are working off, and interacting with, the same state. This creates a huge problem: every frame you have many tasks executing simultaneously that rely on each other's results before they can do their job. Multithreaded logic can get very, very complex, very, very quickly, and sometimes you simply cannot parallelise something. Why am I telling you this? Because the answer to your question is "it completely and utterly depends on what you're doing".

Anyway, the upshot of having a generalist CPU scheduler is that the actual management of threads is unimportant to the game, and one can freely support multithreading without significantly affecting single-core users.
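The shared-state hazard described above is why engines guard mutable data that several threads touch. A minimal Python sketch (`GameState` is a hypothetical stand-in, not anything from a real engine): without the lock, concurrent read-modify-write cycles could lose updates.

```python
# Shared-state hazard: several threads mutating one value must
# coordinate, or updates get lost. GameState is a hypothetical
# stand-in for engine state, not code from any real engine.
import threading

class GameState:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self.score = 0

    def add_score(self, points: int) -> None:
        # The lock makes the read-modify-write of score atomic.
        with self._lock:
            self.score += points

def run_workers(state: GameState, workers: int = 4, hits: int = 1000) -> None:
    def hammer() -> None:
        for _ in range(hits):
            state.add_score(1)
    threads = [threading.Thread(target=hammer) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    state = GameState()
    run_workers(state)
    print(state.score)  # 4 workers * 1000 hits = 4000, no updates lost
```

This is also why "just thread it" is not free: every piece of shared state needs this kind of coordination, and the coordination itself costs time and invites the esoteric bugs mentioned earlier in the thread.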
User avatar
Big mike
 
Posts: 3423
Joined: Fri Sep 21, 2007 6:38 pm

Post » Wed Mar 30, 2011 3:10 am

I'm willing to bet that 4-core processors get skipped and are never seen in the current 2-core slot. By the time those 50% are ready to upgrade, they will be moving to 6 or 8 cores, and 4-core will be passed over.

In five years it will look something like this.

2-4 cores: only 8%
6-8 cores: 52%
10-12 cores: 37%


Maybe, but I doubt it. The casual computer user is going to go for the cheapest upgrade possible, which will mean a quad core.
User avatar
!beef
 
Posts: 3497
Joined: Wed Aug 16, 2006 4:41 pm

Post » Wed Mar 30, 2011 12:14 am

Basically, all modern middleware (which Beth IS using for Skyrim) is designed to automatically take advantage of as many of the processor's cores as appropriate, without the devs needing to lift a finger.


Well, that's certainly not the case. Oblivion never used any more than 2 cores, and even then did so poorly. And many Fallout players with quads had to add a line to their .ini files to shut off any cores beyond 2, because the extras were causing crashes and freezes; it was a very common bug with the game. So quite obviously the devs need to direct the game to make use of the various cores, rather than letting the software do its own thing. While what you're suggesting may be true in theory, it doesn't hold up in practice. Multi-core support has to be specifically written into the software for the various cores to be utilized properly. Your university professors obviously didn't know what they were talking about.
User avatar
D IV
 
Posts: 3406
Joined: Fri Nov 24, 2006 1:32 am

Post » Wed Mar 30, 2011 12:55 am

Thank you for answering my question, everyone. I had to look up the definition of a thread, but I could mostly understand what you all explained. Do we know how the Bethesda programming teams usually handle this regarding cores? Have their recent releases all done it a certain way? That might help with the answer. Thank you again... always trying to learn. :)

Here's the link to the thread definition I found, in case anyone else reading would like to see it.
http://en.wikipedia.org/wiki/Thread_(computer_science)

:tes:
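The linked definition boils down to: one process, several threads, all sharing the process's memory. A tiny Python illustration of that (`run_demo` is just an illustrative name; the shared `results` list is the shared memory):

```python
# One process, two threads, one shared list: threads share the
# process's memory, which is what distinguishes them from
# separate processes. run_demo is just an illustrative name.
import threading

def run_demo() -> list[str]:
    results: list[str] = []

    def worker(name: str) -> None:
        results.append(f"hello from {name}")  # writes to shared memory

    threads = [threading.Thread(target=worker, args=(n,))
               for n in ("thread-1", "thread-2")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)

if __name__ == "__main__":
    print(run_demo())
```

Both threads append to the same list with no copying, which is exactly what makes threads cheap to communicate between, and also what makes the locking issues discussed earlier necessary.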
User avatar
Emma Louise Adams
 
Posts: 3527
Joined: Wed Jun 28, 2006 4:15 pm

Post » Wed Mar 30, 2011 4:42 am

From what I know, most games don't take full advantage of more than two cores, so I wouldn't expect it from Skyrim.
Bonus if they do, though.
User avatar
Solina971
 
Posts: 3421
Joined: Thu Mar 29, 2007 6:40 am

Post » Wed Mar 30, 2011 2:43 am

what is the fastest processor a pc can have, the i9?


Lol the i9 doesn't exist :D

The fastest gaming processor currently is the Intel i7-990X; there are some server processors that are quicker, but they're unnecessarily fast for gaming.
User avatar
rebecca moody
 
Posts: 3430
Joined: Mon Mar 05, 2007 3:01 pm

Post » Wed Mar 30, 2011 11:06 am

Quad, sure; hex, unlikely.
User avatar
!beef
 
Posts: 3497
Joined: Wed Aug 16, 2006 4:41 pm

Post » Wed Mar 30, 2011 4:22 am

If the game doesn't use all of your cores, you can always dedicate the neglected ones to other tasks. At least that might help the game.
User avatar
Laura Wilson
 
Posts: 3445
Joined: Thu Oct 05, 2006 3:57 pm

Post » Tue Mar 29, 2011 10:38 pm

I do intend to video encode (render? eh?), of course. In any case, AMD has already stated that Bulldozer will be affordable as well (unlike those $900 Intel i7s), so it's a win either way.

That's assuming the benchmarks are favorable to them, of course. We'll see...



The i7-2600k is almost as fast as the i7-980x, but much cheaper ($314)

http://www.newegg.com/Product/Product.aspx?Item=N82E16819115070

The i7-960 is significantly slower than the 2600k, but not bad, and a little bit cheaper ($289)

http://www.newegg.com/Product/Product.aspx?Item=N82E16819115224&cm_re=i7-_-19-115-224-_-Product

I don't know what apps you use for video transcoding, but if you will use any CS5 apps, they run much faster with Intel CPUs and NVIDIA GPUs.

Benchmarks:
http://ppbm5.com/Benchmark5.html

http://ppbm5.com/Interpreting.html

AMD processors like the Phenom II X6, and even dual hexa-core setups like the Opteron 2431, are way outclassed by Intel i7 processors because they lack the SSE4.1 extensions that are extensively used in CPU-intensive tasks. AMD CPUs are not advised for video editing.

Dual-processor setups (such as Xeons) are regularly believed to be the best, especially with the X56xx series, but this test shows that in practice, while they are quite good at their task, they are not better than the more affordable i7-980X.

The amount of memory has a major impact on performance now that CS5 is 64-bit.

Disk setup is regularly underestimated as a factor in video editing performance, with the argument that a single disk is fast enough for almost any video stream, but in practice this is not true.

The more cores, the more cache, and the higher the clock speed, the better. Intel processors are preferred over AMD ones, which lack SSE 4.1+ support, heavily used during CPU-intensive (read: AVCHD, MPEG) tasks. The basic entry point for new systems appears to be the i7-930/950 for economical systems and the i7-970/980X for more high-end systems. i7 systems from any series below the 9xx series are not advised, let alone the i3 or i5 series; they suffer too much from the memory controller, the chipset on the motherboard, or the lack of Hyper-Threading.

The more capability to adjust clock speed and memory speed, the better. Overclocking can lead to substantial gains, and HP, Dell, and the like do not allow it. Be warned: the major brands suffer from a lack of overclocking ability, fixed (low) memory speeds, and configurations unsuitable for editing. Better to build it yourself or turn to a reputable custom builder with demonstrated expertise in video editing. X58 motherboards are currently the best choice.

Definitely use a CUDA/MPE-capable video card. It can reduce rendering time by a factor of 10 and assists with scaling on export while improving export quality. SLI is not a consideration, since it is not supported. For the time being, ATI is out of the game, and only nVidia cards with 1 GB+ of video memory are worth considering.

Specifically for MPEG encoding, the amount of memory is critical: the more, the better. 24 GB is far better than 12 GB. The faster the memory, the better: first the rating (1600 or 1866), then CAS latency. Use at least 12 GB, but preferably more. To use the faster memory, BIOS adjustments are required.

The faster the disk(s), the better. RAIDs do improve performance; notice that all Top 20 performers use RAID configurations, sometimes even multiple RAIDs. Even SSDs, though widely touted for their speed, benefit significantly from RAID configurations.

User avatar
Emma-Jane Merrin
 
Posts: 3477
Joined: Fri Aug 08, 2008 1:52 am
