Has the graphics world come full circle now that we have seen Intel’s first tech presentations of Larrabee? Will we see a resurgence of people writing custom software rasterizers? Is the heyday of the GPU truly coming to an end? Are APIs like OpenGL and Direct3D going to become redundant? I have seen these and a lot of similar questions being asked over the past couple of days, with some people going as far as saying that technologies like Larrabee could be used to write custom graphics APIs. This has been partly due to the huge emotional response to the OpenGL debacle a couple of days back, and partly due to the fact that Intel recently unveiled portions of its (up until now mysterious) Larrabee technology. Some people seem to have drawn the conclusion that we may soon not need the currently used graphics APIs at all. Larrabee does promise freedom from the traditional hardware-based approach: rendering APIs today are closely tied to the underlying hardware, and the graphics programmer using them is therefore limited to what the hardware offers.
Technologies like Larrabee do offer immense flexibility and power, and there is no doubt in my mind that, if needed, one could create a custom graphics API on top of them. Unfortunately, writing a custom API may not be the answer, and there are good reasons not to do it. The first reason, and probably the one people see as less important, is that APIs like OpenGL and Direct3D are standards, so it is not advisable to dismiss them outright. What if your code needs to be ported to platforms where Larrabee is not available? How do you scale a custom API to that hardware? One could argue that you could get more performance by cutting out any layer that sits in between and accessing the Larrabee hardware directly. Call me a skeptic, but I see issues here as well. It may be very easy to hack up a simple rasterizer (the sketch below gives a rough idea of how little code that takes), but it is a completely different thing to produce a vector-optimized one, even on a technology like Larrabee. That is not a trivial task even with the best vectorizing compilers from Intel, and I would lay my bets on the star team working at Intel to produce a better rasterizer than I probably can. I am also pretty sure this rasterizer will be exposed via Direct3D and/or OpenGL interfaces. Yes, you could probably make certain specific portions of your engine highly optimal using the generic Larrabee architecture, but a custom rendering API may not necessarily be the best option.
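To make that concrete, here is a minimal, scalar edge-function rasterizer in C++. It is only a sketch under my own assumptions (hypothetical Vec2 and framebuffer types, a single winding order, no sub-pixel precision or fill rules), and it is emphatically not how Larrabee, or anyone else, would implement a production rasterizer; the point is simply that the naive version is short, while the vector-optimized version is where the real work lies.

```cpp
// A minimal sketch of a scalar edge-function rasterizer, just to show how
// little code a naive one takes. Types and names here are my own invention.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Signed area of the parallelogram spanned by (b - a) and (p - a);
// non-negative for all three edges when p is inside a triangle with the
// assumed winding order.
static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Fills one triangle into a width*height framebuffer of packed 32-bit pixels.
void rasterizeTriangle(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                       uint32_t color, int width, int height,
                       std::vector<uint32_t>& framebuffer) {
    // Clip the triangle's bounding box against the screen.
    int minX = std::max(0,          (int)std::min({v0.x, v1.x, v2.x}));
    int maxX = std::min(width  - 1, (int)std::max({v0.x, v1.x, v2.x}));
    int minY = std::max(0,          (int)std::min({v0.y, v1.y, v2.y}));
    int maxY = std::min(height - 1, (int)std::max({v0.y, v1.y, v2.y}));

    // The "easy" part: test every pixel in the box against the three edges.
    // The hard part, which this sketch ignores entirely, is doing this many
    // pixels at a time with vector instructions, binning triangles into
    // tiles, handling fill rules, and interpolating attributes.
    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p{ x + 0.5f, y + 0.5f };
            if (edge(v0, v1, p) >= 0 && edge(v1, v2, p) >= 0 && edge(v2, v0, p) >= 0)
                framebuffer[y * width + x] = color;
        }
    }
}
```

Even here, half the subtlety (the fill rule at shared edges, the half-pixel sample offset, precision) is glossed over; multiply that by vector widths, tiling and attribute interpolation, and you can see why I would rather let Intel's team do it.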
As a piece of technology, Larrabee is very interesting, especially for real-time graphics. For the first time you will have the capacity to be truly (well, almost completely) free from the shackles of hardware, and there are so many more things you could accomplish with it. Larrabee could also be put to work outside of rendering, for instance for general parallel processing or for running intensive, highly vectorized computations very efficiently.
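For a feel of what that last point means, here is a deliberately trivial C++ sketch of the kind of data-parallel kernel a many-core, wide-vector chip is built for: the same arithmetic applied independently to every element, so it splits across cores and vector lanes with essentially no coordination. The plain-loop form and names are my own illustration, not anything Larrabee-specific.

```cpp
// The classic SAXPY kernel: y[i] = a * x[i] + y[i] for every element.
// Each iteration is independent, which is exactly what makes a computation
// "highly vectorizable" and easy to spread across many cores.
#include <cstddef>
#include <vector>

void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < y.size() && i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
}
```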