Larrabee isn’t necessarily a means to a custom graphics API.

Has the graphics world come full circle now that we have seen Intel’s first tech presentations of Larrabee? Will we see a resurgence of people writing custom software rasterizers? Is the heyday of the GPU truly coming to an end? Are APIs like OpenGL and Direct3D going to become redundant? I have seen these and a lot of similar questions being asked over the past couple of days. Some people have even gone as far as saying that technologies like Larrabee could be used to write custom graphics APIs. This is partly due to the huge emotional response to the OpenGL debacle a couple of days back, and partly due to the fact that Intel recently unveiled portions of its (up until now mysterious) Larrabee technology. Some people seem to have concluded that we may soon not need the current graphics APIs at all. Larrabee does promise freedom from the traditional hardware-based approach: rendering APIs today are closely tied to the underlying hardware, and the graphics programmer using them is thus limited to what the hardware offers.

Technologies like Larrabee do offer immense flexibility and power. There is no doubt in my mind that, if needed, one could create a custom graphics API with them. Unfortunately, writing custom APIs may not be the answer, and there are good reasons not to do it. The first, and probably what people see as the less important reason, is the fact that APIs like OpenGL and Direct3D are standards, and it is not advisable to dismiss them outright. What if your code needs to be ported to platforms where Larrabee is not available? How do you then scale a custom API to that hardware? One could argue that you would get more performance by cutting out any layer that sits in between and accessing the Larrabee hardware directly. Call me a skeptic, but I see issues here as well. It may be very easy to hack up a simple rasterizer, but it is a completely different thing to produce a vector-optimized one, even on a technology like Larrabee. That is not a trivial task even with the best vectorizing compilers from Intel. I would lay my bets on the star team at Intel producing a better rasterizer than I probably can, and I am pretty sure that rasterizer will be exposed via Direct3D and/or OpenGL interfaces. Yes, you could probably make certain specific portions of your engine highly optimal using the generic Larrabee architecture, but a custom rendering API may not necessarily be the best option.
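To make that point concrete, here is the kind of rasterizer that is "easy to hack up": a brute-force half-space (edge function) triangle fill, sketched in plain Python. Everything here (the function names, the counter-clockwise winding assumption, the pixel-coverage convention) is my own illustration, not Larrabee or Direct3D code. Notice that it is completely scalar, which is exactly the gap I am talking about: turning loops like these into well-scheduled vector code is where the real work lies.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area test: positive when point p lies to the left of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixels covered by triangle v0-v1-v2.

    Assumes counter-clockwise winding; edges count as covered.
    """
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    # Clip the triangle's bounding box against the framebuffer.
    x_min, x_max = max(int(min(xs)), 0), min(int(max(xs)), width - 1)
    y_min, y_max = max(int(min(ys)), 0), min(int(max(ys)), height - 1)
    covered = set()
    for y in range(y_min, y_max + 1):
        for x in range(x_min, x_max + 1):
            # Inside test: the point must be on the inner side of all three edges.
            w0 = edge(v1[0], v1[1], v2[0], v2[1], x, y)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], x, y)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], x, y)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.add((x, y))
    return covered
```

A production rasterizer would evaluate those three edge functions for a whole tile of pixels at once with vector instructions, which is precisely the part that is hard to get right by hand.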

As a piece of technology, Larrabee is very interesting, especially for real-time graphics. For the first time you will have the capacity to be truly (well, almost completely) free from the shackles of hardware, and there is so much more you could accomplish with it. Larrabee could also be used beyond graphics, for instance for parallel processing, or for running highly vectorized, compute-intensive workloads very efficiently.

DirectX 9 to DirectX 11, where did 10 go?

This week there was a lot of buzz about DirectX 11. Yes, the newest version of the graphics API was unveiled by Microsoft at the XNA Game Fest, and it has an interesting feature set that, I think, was long overdue. Most of DirectX 11 doesn’t diverge from version 10 (and the almost uneventful version 10.1), but I think DirectX 11 should see renewed interest from game developers, since it provides features that were desperately needed in light of recent hardware developments. Version 11 (which of course includes the features of 10 and 10.1) now looks like a more complete API, one that addresses the issues of game and graphics development and offers a better solution for the future.

What is really interesting is the emergence of what Microsoft terms the “Compute Shader”, no doubt marketing speak for GPGPU, which they claim will allow the GPU, with its awesome power, to be used for “more than just graphics”; that smells like CUDA (Compute Unified Device Architecture) to me. I wouldn’t be surprised if the two turned out to be very similar (remember Cg/HLSL). In any case, what matters is that such technology will be available to game developers under version 11. Technologies like CUDA (GPGPU) are the need of the hour, and this alone could be the reason version 11 sees a lot more interest than the earlier (10.x) versions.
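For those who haven’t looked at CUDA, the GPGPU model boils down to dispatching one small kernel function over thousands of data elements. Here is a purely illustrative sketch that mimics that model serially in Python; the names `dispatch` and `saxpy_kernel` are mine, not any real CUDA or Direct3D 11 API, and on real hardware every kernel invocation would run in parallel across the GPU’s cores.

```python
def dispatch(kernel, num_threads, *buffers):
    # Toy stand-in for a GPGPU dispatch: run the kernel once per thread id.
    # A real GPU would execute these invocations in parallel, not in a loop.
    for thread_id in range(num_threads):
        kernel(thread_id, *buffers)

def saxpy_kernel(tid, a, x, y, out):
    # The classic GPGPU "hello world": out[i] = a * x[i] + y[i],
    # with each (virtual) thread handling exactly one element.
    out[tid] = a * x[tid] + y[tid]

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
dispatch(lambda tid, *bufs: saxpy_kernel(tid, 2.0, *bufs), 4, x, y, out)
# out is now [12.0, 24.0, 36.0, 48.0]
```

The point is the programming model: you write the per-element function, and the runtime decides how to spread it across the hardware.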

There is a lot of talk about hardware-based tessellation, but frankly I haven’t seen many details on it; at least not enough to make a detailed comment. From what little is being said, DirectX 11 hardware-based tessellation could be used to make models appear “more smooth”. How this ultimately translates into an actual implementation will become clear when more details come out. I am hazarding a guess here, but it could be something along the lines of calculating sub-surf (subdivision-surface) LODs in real time, and/or doing displacement/bump/normal mapping on the fly, or perhaps a combination of those techniques. Whatever it is, it should mean really good-looking games in the future.
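As a rough illustration of what “making models appear more smooth” could mean, here is one classic subdivision scheme, Chaikin corner cutting, applied to a 2D polyline. This is only my guess at the flavor of the technique, not anything confirmed about DirectX 11’s tessellator; surface schemes like Catmull-Clark do the same kind of refinement on 3D meshes.

```python
def chaikin(points, iterations=1):
    """Chaikin corner cutting: each pass replaces every segment (p, q) with
    two points, 1/4 and 3/4 of the way along it, so the polyline converges
    toward a smooth (quadratic B-spline) curve as iterations increase."""
    for _ in range(iterations):
        refined = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points
```

The appeal of doing this in hardware is obvious: the game ships a coarse control mesh, and the tessellator generates the dense, smooth geometry on the fly at whatever LOD the scene needs.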

Issues like multi-threaded rendering and resource handling are things that were a long time coming, and yes, it’s a good thing we will finally see them in the newer version; it makes my job as a game developer a whole lot easier. Most details on Shader Model 5.0 are pretty sketchy, so I won’t go into things like shader length and function recursion. However, I hope such issues are addressed satisfactorily in the new shader model.
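To show why multi-threaded rendering support matters, here is a toy sketch of the deferred command-list pattern: each worker thread records draw commands into its own list (so no locking is needed), and the main thread replays the lists in a defined order. The class and function names are hypothetical; this illustrates the pattern, not any actual DirectX 11 interface.

```python
import threading

class CommandList:
    """Records rendering commands on a worker thread for later playback."""
    def __init__(self):
        self.commands = []

    def draw(self, mesh):
        self.commands.append(("draw", mesh))

def record_scene_chunk(cmd_list, meshes):
    # Each worker records into its *own* command list, so threads never
    # contend on shared state while building the frame.
    for mesh in meshes:
        cmd_list.draw(mesh)

chunks = [["rock", "tree"], ["house"], ["hero", "sword"]]
lists = [CommandList() for _ in chunks]
threads = [threading.Thread(target=record_scene_chunk, args=(cl, ch))
           for cl, ch in zip(lists, chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The main thread replays the lists in a fixed order, so the final frame
# is deterministic even though recording happened concurrently.
frame = [cmd for cl in lists for cmd in cl.commands]
```

The win is that the expensive part (walking the scene and building commands) scales across cores, while actual submission to the device stays on one thread.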

So will DirectX 11 succeed where DirectX 10 failed? Will it see mass adoption like DirectX 9? Difficult to say. While most cutting-edge games have adopted DirectX 10, its usage remains low because of several factors. For one, many people still use XP, which (for whatever reason) doesn’t support version 10 or greater of the API, which means most developers have to target the lowest common denominator of the alternatives available, and that generally is DirectX 9.0. Many people also still don’t have DirectX 10 class hardware, which is another reason not to go for 10.x. The situation with DirectX 10.1 is a total mess: interestingly, there is even talk that NVIDIA might skip 10.1 entirely and aim directly for version 11 class hardware. There is logic to that decision, given that most games (except the really high-end ones) don’t even bother to use DirectX 10, let alone 10.1. All this makes adoption of 10.x an unattractive proposition for game developers.

Version 11 does bring some really good features to gaming in general, but that is not necessarily enough for the API to succeed. As a game developer, I think 11 holds some serious promise and could be a success if Microsoft plays its cards right. However, some of the issues mentioned above still bother me. Microsoft is still fixated on releasing version 11 only for Vista, so don’t expect your XP machine to ever run DirectX 11, even if you buy brand-new hardware. That said, like most previous versions, DirectX 11 is backward compatible with versions 10 and 10.1, and even 9.0. It would be impossible for Microsoft to ignore the thousands of games that already use DirectX 9, so it’s all but certain that newer versions of the API will remain backward compatible unless and until a sizable number of games move to newer versions; and that could be a long way off, since many games even today are still being built on version 9.0.

Are integrated graphics chips the new battlezone?

In what could be a “one up” and almost a sucker punch to Intel, AMD announced an amazing new chipset, the 780G, which is sure to create a flutter in the industry. The 780G puts a full-fledged GPU onto the mainboard, and reading the specs, it does seem substantially better than any of the other on-board or, to use the correct terminology, integrated graphics chips out there. While Intel claims to have “more than half” of the graphics market, the graphics (or should I say “craphics”) cards supplied by Intel (and, to some extent, AMD earlier) are nothing more than a big joke. The only reason they hold such a huge portion of the market is that the average Joe/Jane is stuck with one because it came pre-installed. I was recently quizzed by an acquaintance as to why his system could not run Bioshock, and the only answer I could really give him was, “Well, your system isn’t designed for that sort of gaming”. To that his reply was, “Oh, I just got a brand new system. How is it that it can’t run the latest games?”

It’s really disturbing for people who buy a brand new PC only to see it fail, utterly miserably I might add, to push even a last-generation game at a shallow 24 FPS. Most are clueless: their PCs may be brand new, with a “fast” multi-core processor and gazillions of RAM at their disposal, yet all they can really run are office applications. Yes, those run faster and better! No such luck with games though. People have to realize that having a faster CPU, or for that matter more cores, doesn’t really help that much with games. It does to some extent, but as things stand right now, I would rather have a top-of-the-line graphics card like the 8800 GTX than a quad-core CPU. It’s a very deceptive situation, I know, but that’s how it is.

Anyone who has worked on graphics knows how utterly lousy and how much of a pathetic abomination integrated graphics chips can be. I have battled all sorts of problems with them, everything from broken drivers to faulty implementations to near-absent feature support, and I hope things are finally changing for the better. The question is, where does that leave Intel? Intel has been desperately trying to get a better graphics solution onto its boards without much luck, and the chipset AMD has thrown up beats anything Intel can conjure, hands down! At least in the near future, that is. While Intel may add more cores, those aren’t going to be much use to people who want to run the latest games. With the quality of integrated graphics Intel has on offer, users will have to install, at the very least, a low-end graphics card. Sorry Intel, that’s how bad things are!

So what does the Green Brigade (NVIDIA) have to say to all this? AMD’s acquisition of ATI is finally showing its advantages. While AMD’s graphics chips may not be the fastest out there, they are very attractive at their price points. Chipzilla and Graphzilla had better get their acts together, because if 2007 was the year both ruled their respective departments, there is a new kid in town. He’s got better and faster guns, and he’s looking more attractive than any of the old boyz!