That's a very very small scale thing and does not represent at ALL a game world. Can't even compare the two scenarios.
It's just not possible to make a living breathing game world with this yet. No computer could handle it.
But I don't want to have to buy a PC more expensive than my house just to run it
Actually, given the explanation of how the system works, it would need very few resources. Just about any dirt-cheap modern computer could run it: the amount of data pushed through the system at any given moment would be roughly the equivalent of a photograph at your monitor's resolution. The system seems very efficient on that front.
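To put a rough number on that "one photograph per frame" claim, here's a quick back-of-envelope calculation. The 4 bytes per pixel and 60 fps figures are my own assumptions, not anything from the video:

```python
# Back-of-envelope only: assumes one colored point fetched per screen pixel.
width, height = 1920, 1080      # a typical monitor resolution
bytes_per_pixel = 4             # assumed RGBA color per on-screen point
fps = 60                        # assumed frame rate

per_frame = width * height * bytes_per_pixel    # one "photograph" worth of data
per_second = per_frame * fps

print(f"per frame:  {per_frame / 2**20:.1f} MiB")     # ~7.9 MiB
print(f"per second: {per_second / 2**20:.0f} MiB/s")  # ~475 MiB/s
```

That's well within what an ordinary CPU and memory bus can move around, which is the point the post above is making.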
We should also remind ourselves that nothing visible in the demo video amounted to more than moving a camera around a map. Just because this technology can render lots of NURBS and organic shapes doesn't mean much... it could be an absolute nightmare for devs to work with, for all we know at this point.
I agree with the post in the link... BULLL SHIIITTTTT.
Maybe it's because I'm stubborn, but if you have a model of a tree, and it's made entirely of points, and you want to mod that tree... wtf do you do? Open up 3ds Max, grab a point plugin, and place points one by one to build a tree?
Sure it sounds good on paper, but creating these models has to be the biggest pain in the ass any game maker could ever hope for.
Actually, in later videos they address how artists will eventually be able to take a polygon-modeled object and convert it into point data, even though the software for that isn't in place yet. The idea is that artists model objects at the maximum polygon count, then port them into their software package. You might ask yourself, "If they can render the objects with more polygons in the first place, why not just do that and skip this system?" The answer is that when you are processing an entire game cell full of objects, all modeled that finely, you would need a very, very powerful computer to handle all that data. This system doesn't necessarily improve the computer's ability to render graphics, because people would still author the models as polygons first. What it does is let the computer run higher-detail graphics in real time while using fewer system resources.
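Their polygon-to-point converter isn't shown anywhere, so this is just a generic sketch of the standard way you'd sample a triangle mesh down into a point cloud (the function names and the fixed samples-per-triangle count are mine, purely for illustration):

```python
import random

def sample_triangle(a, b, c):
    """Pick a uniformly random point on triangle (a, b, c) using barycentric coords."""
    r1, r2 = random.random(), random.random()
    if r1 + r2 > 1.0:                      # reflect back into the triangle
        r1, r2 = 1.0 - r1, 1.0 - r2
    return tuple(a[i] + r1 * (b[i] - a[i]) + r2 * (c[i] - a[i]) for i in range(3))

def mesh_to_points(triangles, points_per_triangle=100):
    """Turn a polygon mesh (a list of triangles) into a plain point cloud."""
    cloud = []
    for a, b, c in triangles:
        for _ in range(points_per_triangle):
            cloud.append(sample_triangle(a, b, c))
    return cloud

# a single triangle standing in for "a model exported at max polygon count"
tri = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
print(len(mesh_to_points(tri)), "points generated")
```

So the artist workflow stays polygon-based; the points are just an export format.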
A good analogy might be to imagine you're having a dinner party and want to invite an uncle over; for the sake of argument you have two uncles and can only invite one. Your first uncle is hilarious, but he eats EVERYTHING. Your second uncle is also hilarious, but he will not eat ALL the food, so your other guests can eat too. The first uncle represents polygon graphics, the second represents this system of rendering; how funny they are represents how good the graphics look, and how much they eat represents system resources. Both uncles are just as funny, but you wouldn't have to pay for a catering service if you go with the second, less resource-hungry uncle.
Just to add another of my two cents, the renders they showed in that vid look like crap. Everything looked fake, as if they were trying too hard to add too much detail.
Also, what they are doing is similar to what is already done: models not in your view are not rendered. They are merely taking it a step further and ONLY computing the pixels that actually reach your screen. Wouldn't this put a LOT more strain on your CPU rather than your GPU? It's more calculation than rendering, yeah?
Yes and no. More data would be pushed through your processor, but not an amount even comparable to what your GPU has to process in current graphics engines. As for being similar to what is already done, it really isn't. Your computer currently does show you the same end number of pixels, true, but that's the end product of the process it runs. With this system, your computer searches only for the information relevant to each pixel on screen. Normally the computer runs through a process that retrieves information (relevant and not), reads it, edits it, and readies it to be placed on screen. Here you cut out the editing step by folding it into retrieval; you don't need to chop the data into pixels for output, because you only asked for the data relevant to those pixels in the first place; and you throw out the need to retrieve unnecessary data by first determining what data is actually needed.
In simple terms, current graphics are like taking an analog picture and cutting it down into pixels: there is no true number of pixels in the picture, since it's analog. So you have to process far more data than you need, because you process the ENTIRE picture and create the pixel information from scratch.
With this system it's the opposite. Your computer knows precisely how many pixels it needs in the proverbial digital copy, so rather than processing the entire picture, it only processes the information relevant to the pixels it is requesting.
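Nobody outside the company has published their actual search algorithm, so treat this purely as an illustration of the shape of the idea: do a fixed amount of work per screen pixel (one lookup each) instead of per object in the scene. The brute-force closest_point_along_ray and the crude pinhole camera below are my stand-ins, not their method:

```python
import math

def closest_point_along_ray(points, origin, direction, max_angle=0.01):
    """Brute-force stand-in for whatever spatial index the real system uses:
    return the nearest point lying (roughly) along the given view ray."""
    best, best_dist = None, float("inf")
    for p in points:
        v = tuple(p["pos"][i] - origin[i] for i in range(3))
        dist = math.sqrt(sum(c * c for c in v))
        if dist == 0:
            continue
        cos_a = sum(v[i] * direction[i] for i in range(3)) / dist
        if cos_a > math.cos(max_angle) and dist < best_dist:
            best, best_dist = p, dist
    return best

def render_frame(points, width, height, fov=1.0):
    """One search per screen pixel: cost scales with pixel count, not scene detail."""
    frame = [[(0, 0, 0)] * width for _ in range(height)]
    origin = (0.0, 0.0, 0.0)
    for y in range(height):
        for x in range(width):
            # build a view ray through this pixel (very crude pinhole camera)
            dx = (x / (width - 1) - 0.5) * fov
            dy = (y / (height - 1) - 0.5) * fov
            norm = math.sqrt(dx * dx + dy * dy + 1.0)
            ray = (dx / norm, dy / norm, 1.0 / norm)
            hit = closest_point_along_ray(points, origin, ray)
            if hit is not None:
                frame[y][x] = hit["color"]   # each point carries its own color
    return frame

# tiny "scene": one red point straight ahead of the camera
scene = [{"pos": (0.0, 0.0, 5.0), "color": (255, 0, 0)}]
frame = render_frame(scene, width=9, height=9)
print(frame[4][4])   # the center pixel hits the red point
```

The real trick would be making that per-pixel lookup fast (an octree or similar), but the resource argument above only depends on the work being per-pixel rather than per-polygon.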
Also, the demo only sucked because, like the narrator explained, it was programmer artwork, not work by a professional artist. As a programmer I can tell you most programmers are not artistically skilled. Quite frankly I was surprised it looked as good as it did.
I still don't think this point system is the next big thing. For one, it would make every 3D program out there incompatible with this model format... And what's this about "their algorithm"? That means it's something the company owns, so if any other company wanted to use this, they would have to go through this one company for their one algorithm. Sounds like a monopoly, sounds expensive. Sounds like Nvidia's PhysX thing: not everyone is going to jump on board, and it's generally just an advertisement/whatever to push people to buy Nvidia cards.
What kind of confuses me, though: what do four points make up? A plane. When you lay a texture over an object, whether it's made of points or shapes, there will always be some empty space between two points somewhere, no matter how small. Also, wouldn't it be a waste of time and performance to render an infinite amount of points on the side of a wall when you could just have a few points making up that plane?
All software consists of algorithms, games being no exception. An algorithm is simply a step-by-step approach to solving a task, so all they mean by "their algorithm" is their software, in this case Unlimited Detail.
Secondly, texturing in this system would work quite differently than in polygon-based graphics. Instead of flat surfaces with textures laid over them, individual points are each assigned a color; viewed from a distance, they form a texture and/or image. It's similar to pointillism. Plus there would be "no empty space" unless the artist intended it. Put simply, each point maps to an individual pixel on your screen, and while there is distance between the pixels on your screen, you can't detect it because the pixels are so small and glow with such intensity. As you stare at the screen right now, surely you don't notice the gaps between your monitor's pixels. Even if you do, that problem isn't solved by polygon rendering either; it's a hardware problem rather than a software problem.
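The video doesn't spell out how texturing actually works in their tool chain, but the pointillism idea described above amounts to baking colors into the points once, up front. A minimal sketch with invented names (Point, bake_texture_into_points), just to make the idea concrete:

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float
    color: tuple            # (r, g, b) baked into the point itself

def bake_texture_into_points(points_with_uvs, texture):
    """Give every point its own color up front by sampling the texture once,
    instead of wrapping the texture over a surface at render time.
    `texture` is assumed to be a 2D list of (r, g, b) tuples, UVs in [0, 1]."""
    h, w = len(texture), len(texture[0])
    baked = []
    for (x, y, z), (u, v) in points_with_uvs:
        r, g, b = texture[int(v * (h - 1))][int(u * (w - 1))]
        baked.append(Point(x, y, z, (r, g, b)))
    return baked

# toy example: a 2x2 checker texture and two points with UV coordinates
checker = [[(255, 255, 255), (0, 0, 0)],
           [(0, 0, 0), (255, 255, 255)]]
pts = [((0.0, 0.0, 0.0), (0.0, 0.0)), ((1.0, 0.0, 0.0), (1.0, 1.0))]
print(bake_texture_into_points(pts, checker))
```

Once the colors live on the points, there's no separate texture-mapping step at render time; whatever point lands on a pixel just brings its color with it.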