Thursday 20 December 2007

What's up with Intel and graphics anyway?

Over the last year I've read quite a lot about Intel getting into the graphics business. Well, maybe that's not 100% correct: of course, I'm talking about the discrete graphics business. Intel is already quite a big player in the integrated graphics market and, as far as I know, still the #1 player there, although NVidia and ATI (now AMD) are catching up fast.
To get back to the topic at hand: all the writing about Intel getting into the discrete business never produced any concrete info on what exactly they intend to do. It seems to me Intel really managed to keep their employees hushed up this time, or all the rumors were simply untrue.
In the same time frame there was also a lot of news about Intel demonstrating their newest processors by showing off how fast they can render ray-traced 3D scenes. We've seen four- and eight-core systems that could render ray-traced images pretty much in real time. And I'm not talking about 640x480 resolutions here. See where I'm going with this?
I believe Intel finally set out to do something ATI, NVidia, or any other GPU maker should have done quite some time ago: they went for the holy grail of graphics - real-time ray-tracing.
Of course, this may prove a bit more difficult than it seems. One may muster enough muscle to render ray-traced images in real time at decent resolutions, but where does accomplishing such a feat take us? Currently there are only a few software packages that perform ray-tracing, and they would most certainly appreciate an accelerator that lets them do their work faster. But would that make the entire enterprise profitable? Selling a few accelerators to studios and geeks that need / play with such programs? I think not. If you go for such a goal, you go all the way, counting on the millions of gamers out there who would appreciate the realism ray-tracing delivers.
But what about backwards compatibility? And all the new games coming out - will the new accelerator support them well? This is where the problems kick in. 2D is not a problem: slap in an extra 100K transistors or so and you have a decent enough 2D engine, and I think Intel has that territory covered already. What isn't so easy is 3D games. Looked at from a ray-tracing perspective, the look of current 3D games is all based on deception. A game can look really good and realistic, with shadows that are nothing short of amazing, but it's all built on special algorithms that require data prepared in a very specific way in order to work. And then there's the problem of how much data the various algorithms need: current algorithms require no information about the material an object is made of, whereas ray-tracing does. Or did you think the super-duper transparency and reflections that ray-tracing software always boasts come out of thin air?
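Just to illustrate what I mean, here's the sort of per-surface data a ray-tracer needs and today's game content simply doesn't carry. This is only a sketch of mine - the field names are invented for illustration, not taken from any real engine:

    // Illustrative only: extra per-surface data a ray-tracer needs
    // beyond the triangles and textures current games ship with.
    // Field names are invented for this sketch.
    struct Material {
        float diffuse[3];       // plain surface color - games already have this
        float reflectivity;     // how strongly the scene mirrors in the surface
        float transparency;     // fraction of light that passes through
        float refractiveIndex;  // how rays bend inside it (glass is ~1.5)
    };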
While both methods use polygons to describe objects and textures to describe the colors of surfaces, that is just about all they have in common. The main difference between the "new" and the "old" way is in how a particular pixel gets drawn on the screen. Current GPUs draw triangles and in the end just make sure that the pixel from the "topmost" triangle - the one closest to the viewer - is displayed. Ray-tracing, on the other hand, starts from the pixel and determines all the triangles that affect its final color. This is also why ray-tracing sidesteps the currently-so-famous anti-aliasing hardware: edges get smoothed by shooting extra rays per pixel rather than by multisampling triangles. Anisotropic filtering is still needed, but its meaning is completely different from the "old" method. Pixel shaders: well, let's not even go there. To cut this a bit shorter: taking any current 3D game and trying to play it on a ray-tracing card would result in pretty miserable image quality, since the driver would have to guess the missing parameters as well as guess which code / objects are unnecessary (shadows, for example). Not to mention that shadows in particular require different source objects for ray-tracing than for the "old" method.
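To make the difference concrete, here's a toy sketch of the per-pixel approach I put together for illustration - nothing Intel would ship, just the idea: every pixel fires one ray into a scene of spheres and keeps the nearest hit, which is exactly the job the z-buffer does per triangle in the "old" method.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    struct Sphere { Vec3 center; double radius; char shade; };

    // Distance along a normalized ray to the sphere, or -1 on a miss.
    static double hit(Vec3 orig, Vec3 dir, const Sphere& s) {
        Vec3 oc = sub(orig, s.center);
        double b = 2.0 * dot(oc, dir), c = dot(oc, oc) - s.radius * s.radius;
        double disc = b*b - 4.0*c;
        if (disc < 0) return -1.0;
        double t = (-b - std::sqrt(disc)) / 2.0;
        return t > 0 ? t : -1.0;
    }

    int main() {
        Sphere scene[] = { {{0, 0, 5}, 1.0, '#'}, {{1.2, 0, 3}, 0.5, '+'} };
        for (int y = 0; y < 20; ++y) {
            for (int x = 0; x < 40; ++x) {
                // One primary ray per "pixel" of a crude ASCII screen.
                Vec3 d = { (x - 20) / 20.0, (10 - y) / 10.0, 1.0 };
                double len = std::sqrt(dot(d, d));
                d = { d.x / len, d.y / len, d.z / len };
                // Keep the *nearest* hit: the ray-tracing counterpart
                // of the rasterizer's per-triangle z-buffer test.
                double best = 1e30; char pix = '.';
                for (const Sphere& s : scene) {
                    double t = hit({0, 0, 0}, d, s);
                    if (t > 0 && t < best) { best = t; pix = s.shade; }
                }
                std::putchar(pix);
            }
            std::putchar('\n');
        }
        return 0;
    }

From each hit point you would then fire further rays toward the lights (shadows) or along the reflection direction - exactly the things the "old" method has to fake with prepared data.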

So, what really happened with discrete graphics at Intel?
If my reasoning is correct and they really went for ray-tracing, I'm guessing they're currently struggling to make their product backwards compatible. Even if the cards were already available, it would take some time before developers started designing their software for ray-tracing instead of the current methods, so compatibility is an important job for such a product. Making it show off some benefits of the new algorithms at the same time must be a really tough job, and I really hope the team can pull it off. I'm looking forward to a third strong player, even if it means this player will be Intel :)
I can't wait to play my favorites (Deus Ex, Morrowind) on such a card, and I'm looking forward to a new generation of games even more.