My thoughts about media technology, games & interfaces.

The evolution of games: are computer graphics ever-improving?

Posted by Thomas van Vessem

A combination of boredom and the need to spoil myself a little led me to purchase a new game for my Xbox. After a bit of searching it appeared that Crysis 2 came highly recommended, not only because of the gameplay, but especially because of its awesome computer graphics. Since I'm a bit of a graphics junkie myself, I decided to buy it. I wasn't really expecting much of the graphics, since I had already seen a couple of videos of it and I'm quite accustomed to playing really nice-looking games like Modern Warfare 2 and Bad Company 2.

But how surprised I was to see the incredible graphics appear on screen: razor-sharp textures, unbelievably realistic lighting and blur effects, just like in real life. Well, not really like real life, but you get my point. I honestly didn't think my already six-year-old Xbox 360 could produce such graphical power. You could tell the Xbox was struggling with it: whenever large areas were rendered, the framerate dropped a little below the minimum of 25 frames per second needed for smooth gameplay. This worried me a bit and led me to believe that, although the graphics look great, gameplay should never be sacrificed to graphics; the two should always be in balance, and if I had to choose between them I would pick gameplay. Too bad that gameplay tends to stay at roughly the same level as games develop over the years while graphics (at least most of the time) keep improving, causing many games to focus too much on graphics alone and too little on gameplay.
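To put that framerate dip in perspective, here is a minimal sketch (in Python, my own illustration, not anything from the game or the console) of how a target framerate translates into a per-frame time budget. The 25 fps smoothness threshold is simply the figure I assumed above, not an official standard.

# Rough sketch: a target framerate gives the engine a fixed time
# budget per frame, so a small fps dip means every frame lingers longer.

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render one frame at the given framerate."""
    return 1000.0 / fps

for fps in (60, 30, 25, 20):
    print(f"{fps:>2} fps -> {frame_budget_ms(fps):5.1f} ms per frame")

# 60 fps -> 16.7 ms, 30 fps -> 33.3 ms, 25 fps -> 40.0 ms, 20 fps -> 50.0 ms:
# dropping from 25 to 20 fps stretches every frame by an extra 10 ms,
# which is exactly the stutter you notice when large areas are rendered.

So even a drop of a few frames per second below 25 adds a noticeable number of milliseconds to every single frame, which is why those dips stand out so clearly.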

The evolution of graphics has always eluded us, because we don't know where it will end. Of course, game developers will always push their system requirements just above the average system specs to force gamers to buy new computer systems, like the PC game Crysis, which was released a couple of years ago and caused many die-hard gamers to upgrade their systems, myself not included. But how often are graphics at such a level that gamers feel bound to buy a new system, not because their hardware can't cope with the new requirements, but because they think the graphics no longer look new enough? I think the difference between current-gen and next-gen games will become smaller, especially in the (far) future, but will never disappear. This has to do with several problems I think will emerge in the future, and one of them is money.

When graphics become more beautiful, the graphics engine needs to be tuned in more detail, and that costs more time and thus more money. If, in the far future, every little detail of the real world has to be copied into the virtual world, so much money will be spent on doing so that creating an entire full-length game becomes an almost impossible task. This gap is partially filled by automated technology that does the job for you, like scanning objects in great detail and capturing motion. That last technique in particular is well known, not only for capturing general body movements for games featuring martial arts, but also for copying facial expressions. The game Heavy Rain is a very good example and proves how realistic digital faces can become. Even so, processing all those captured images afterwards remains a big task.

So as I mentioned, money and time are two very important causes of the stagnation of computer graphics, and I think some serious automation technology has to come into play to partially create the virtual environment for you. But there is also a limit, not only to the software (the programming of computer graphics) but also to the hardware. In recent years, computer chips have been baked at ever smaller scales, from 90 nm (nm = nanometre; 1 nanometre = 1/1,000,000 millimetre = 1/1,000,000,000 metre!) down to 45 nm. Sooner or later this scale will approach the atomic level, and physics will prohibit squeezing even more transistors onto the same surface. This is a very big problem, and I think a fundamentally different kind of computer chip will have to be developed to gain more power once that stage is reached. In the far future, I think computer chips will be produced not only from inorganic materials but will also contain organic materials, such as a neural network formed by real (human) biological neurons. Time will tell, but I sure would like to know whether we will reach that phase as an everyday technology during my lifetime.
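To make that scaling intuition concrete, here is a back-of-the-envelope sketch (again Python, my own illustration): halving the feature size roughly quadruples how many transistors fit on the same chip area, and since an atom is only on the order of 0.2 nm across, there is a hard floor waiting at the bottom.

# Idealised geometry: transistor density scales with the inverse
# square of the feature size (ignoring real-world layout overhead).

def density_gain(old_nm: float, new_nm: float) -> float:
    """Rough factor by which transistor density grows after a shrink."""
    return (old_nm / new_nm) ** 2

print(density_gain(90, 45))   # 4.0: the 90 nm -> 45 nm shrink packs roughly
                              # four times as many transistors onto the
                              # same surface

print(density_gain(45, 0.2))  # ~50625: even the wildly optimistic limit of
                              # atom-sized (~0.2 nm) features is finite, and
                              # physics stops the shrinking well before that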

In case the problems I mentioned above are all solved, moral aspects come into play once computer graphics reach the point of absolute realism. If someone cannot tell whether a game is being played or not, how can they know that the life they live is their real life, especially when feedback from the computer is fed directly into the human brain and other feedback devices no longer exist? Stealing information or changing a human being's perception would create a whole new dimension of cybercrime, just like in the movie Inception. (If you haven't seen this movie, shame on you!)

OK. That's all about computer graphics, and although I really planned on posting just a small blog this time, it turns out to be one of my biggest blog posts yet! Oh well, you probably know the feeling of wanting to write down everything you think of once you get started.

Stay tuned for upcoming blogs!
