The HD 4850 and the story with AMD/ATI.

First the HD 4850. I was testing the game on the new HD 4850 (Palit 512MB) today and observed some interesting things about the card. For one, it gives serious bang for the buck. Doofus 3D clocked about 140 FPS at a resolution of 1024×768, AF 16x, with graphics quality set to high. Even with AA 2x, Doofus 3D clocks more than 120 FPS, and I have a strong suspicion the game was going CPU bound at those frame rates, since the machine had a 3-year-old CPU. I can tell you for a fact the card is a serious performance monster, but then again Doofus 3D ain’t a top-line game. Still, this is the first time I have seen Doofus 3D running at a playable FPS under 4x AA and 16x AF, since up until now I have only had GeForce 6200, 6600 (and to some extent 8600) cards. There is no denying that the HD 4850 is more than worth its price for someone who is looking for a budget card and expects to run most of today’s top-line games. The card runs a little hot, but that’s to be expected given the number of triangles it can push and the effects it can deliver. Hats off to AMD/ATI in that regard. If you are looking for a mid-range card right now, the HD 4850 is excellent value for money.

That was the overview from a non-programming point of view. Now the programmer in me has something to say. The card may be excellent, but things aren’t all that cozy with ATI’s drivers. The OpenGL drivers are a mess: the bundled driver doesn’t even support extensions like EXT_stencil_two_side. Even basic functionality (glDrawRangeElements(), for example) seems broken at times, even producing messed-up graphics when using Vertex Arrays on older cards. The exact same functionality works fine under DirectX. Let’s just say it’s safe to assume the GL drivers haven’t been updated in a while and/or AMD/ATI just isn’t interested. The only issues reported in this round of testing were on ATI cards, so I had to literally debug the application on ATI hardware to ascertain that these were indeed driver problems. Some of the issues I have mentioned occur on, guess what, the HD 4850 too. The only workaround seems to be vendor-specific hacks! That doesn’t make me a happy programmer at all.
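
Since the renderer can’t rely on the driver to behave, here is a minimal sketch of the kind of vendor-specific hack I mean. The structure and the exact fallbacks are hypothetical, not the actual engine code; the point is that the renderer sniffs the driver strings once at start-up and routes around known trouble spots.

```cpp
#include <GL/gl.h>
#include <cstring>

// Hypothetical quirk table, filled in at renderer start-up.
struct DriverQuirks {
    bool avoidDrawRangeElements;  // fall back to plain glDrawElements
    bool hasTwoSidedStencil;      // EXT_stencil_two_side available
};

DriverQuirks DetectQuirks() {
    DriverQuirks q = { false, true };
    const char* vendor = (const char*)glGetString(GL_VENDOR);
    const char* exts   = (const char*)glGetString(GL_EXTENSIONS);

    // Observed behaviour on some ATI drivers, not a general rule:
    // ranged draws misbehave, so use the unranged entry point instead.
    if (vendor && std::strstr(vendor, "ATI"))
        q.avoidDrawRangeElements = true;

    // If the extension string doesn't advertise two-sided stencil,
    // the stencil work has to be done in two passes instead.
    if (!exts || !std::strstr(exts, "GL_EXT_stencil_two_side"))
        q.hasTwoSidedStencil = false;

    return q;
}
```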

The story with Direct3D is a lot better: no issues were observed under the DirectX renderer of the game. That just tells you something, doesn’t it?

Tweaking the game to run on a wide range of hardware.

For the last week I have been involved in a rather uninteresting activity: I have been literally throwing the game at every possible hardware config, hoping it will run. All of this (yes, again) to find out how the game fares when exposed to different hardware configurations. It may seem like this activity is rather mundane, and let me assure you, it is. Well, not entirely 😀 . It takes some effort to get a game to scale seamlessly to all kinds of hardware, and currently I am enduring all the pain of crappy drivers and broken functionality, which, should I say, underscores some of the major headaches in real-time graphics development. It’s not like you can throw the game at a crappy Intel on-board graphics card with its peak settings on and expect it to run. Such a thing will just end in disaster. The game must scale to different kinds of hardware, especially in our case, and it must do so seamlessly and effectively.

Doofus 3D is uniquely placed. It doesn’t aim to be a top-line, hardware-intensive, hard-core-gamers-only, triple-A (AAA) title. Neither is it a 2D game capable of running flawlessly under software-rasterized graphics on your grandma’s old-school PC. It is geared more towards intermediate-level hardware, the kind most people have on their work laptops and home desktops. This effectively means an extremely wide range of hardware to cater to, and that in turn means scaling the game’s software paths (internally) based on a *lot* of underlying factors. Assuming a player has a specific piece of functionality available on his hardware setup can be catastrophic. Such an assumption could mean total failure of the game on a given machine, and ultimately the loss of a buyer.
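
To make that concrete, here is a minimal sketch of what I mean by scaling software paths internally. The capability names and thresholds are illustrative, not the actual O2 Engine code; the idea is simply that the engine queries what the hardware can do and picks a render path, never assuming a feature exists.

```cpp
// Illustrative capability flags, filled in by querying the API/driver
// (extension strings, shader model, texture unit counts, etc.).
struct Caps {
    bool hasShaders;      // e.g. GL_ARB_shader_objects / SM 2.0
    bool hasFBO;          // e.g. GL_EXT_framebuffer_object
    int  maxTextureUnits;
};

enum RenderPath { PATH_FIXED_FUNCTION, PATH_SHADER_LOW, PATH_SHADER_HIGH };

RenderPath ChoosePath(const Caps& caps) {
    if (!caps.hasShaders)
        return PATH_FIXED_FUNCTION;  // oldest on-board chips
    if (!caps.hasFBO || caps.maxTextureUnits < 4)
        return PATH_SHADER_LOW;      // shaders, but no off-screen effects
    return PATH_SHADER_HIGH;         // full effect pipeline
}
```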

While drawing up the specs of Doofus 3D we were especially careful not to go overboard with graphics galore. Even with careful planning there was significant feature creep, and with each new feature that was added, new countermeasures had to be put in place so that the game would scale to lower-end hardware. Not everything was straightforward, but we still managed to push it through. If you have been following my blog for some time now, you will know this is not the first time I have done this. I (personally) run such tests after each beta (feature addition/feature freeze) of the game. That is probably why we haven’t faced too many problems this time around.

For Doofus 3D we followed a process that is a bit different from traditional software development. Every beta under this game project was actually a feature-complete, runnable version of the game. Every release before or between betas was an internal alpha version. A beta meant, “A set of features is complete enough to be tested”. After each beta, each feature was tested on various hardware setups. Something like an iterative method of software development, but not quite; I would call it a process tailored specifically for our project, and more specifically for our situation given our limitations.

Doofus 3D runs on most middle-rung hardware without too many problems. It will run on on-board graphics cards too, but I find Intel on-board graphics to be an abomination. Hopeless hardware support for 3D graphics and equally crappy driver support! Enough reason for the engine to scale the game down to a low setting when it detects an Intel graphics card. The situation with NVIDIA and ATI cards is a lot better, with ATI’s low-end cards (at comparable price points) consistently outperforming NVIDIA’s. That said, NVIDIA has the most stable hardware and drivers, and most settings work uniformly across cards and driver setups, though there can be problems there as well. ATI’s drivers can be buggy at times and, in the case of OpenGL, totally broken. Fortunately the O2 Engine and the Doofus game can use either Direct3D or OpenGL as rendering APIs. For any high-end, or for that matter even most mid-range graphics cards, Doofus 3D is not a problem at all.

Selecting a scripting engine – Part 4.

This is the fourth installment in the series (here are 1, 2, and 3) and this time it was the turn of Javascript. Yes, I know a lot of buzz has been going around about the language’s sudden jump in speed, thanks to the introduction of the blazingly fast V8 and an even faster SquirrelFish Extreme (SFX) engine (oh, and not to forget Mozilla’s TraceMonkey). Javascript has been at the back of my mind and I have been fiddling around with it for some time now. I had actually tried out Javascript as a scripting language earlier, for a completely different project altogether, but that prototype was left on the back-burner since there were more pressing issues to solve. Then, of course, I left the project and the experiment never got beyond the prototyping phase. Too bad, really. That was the time when Javascript was pretty slow, or should I say slower than it is today. However, the story with the new generation of Javascript engines is very different from the old ones. Most engines compile Javascript code to “fast native code” on the platform they run on, and that’s the reason for the sudden spike in speed. According to the SFX blog, SFX goes a step further and does something they call a “polymorphic inline cache” (explained at the link provided above), which they claim is an exciting new optimization. I haven’t taken SFX for a ride yet, but I have had a peek at V8 and I must say I am more than impressed by Google’s implementation.

If you haven’t already guessed by now, my main interest in any scripting language is obviously its integration with our game engine (code-named O2 Engine). From my previous posts it’s pretty clear: I am on a quest for an elusive perfect solution, a please-all perfect scripting language (which I know I will not find). Unfortunately, ideal solutions are impossible, and most of the time it’s a compromise between what is available and/or what you can afford with the resources you have. With Javascript things are no different. All kinds of remarks have been made about the language, everything from “Javascript is the worst language ever!” to “Javascript: The world’s most misunderstood language.” Google around and you will see what I mean. But there are two sides to every story. While most programmers who work with stricter languages frown upon Javascript, the fact remains that Javascript is easier to understand than most other languages. It is used by programmers and non-programmers alike, and by countless websites to deliver Internet content.

Javascript shares its syntax to some extent with C and Java, but the similarities end there. It is not a very “strict” language, and this is in fact by design. Less strictness means non-programmers can adapt to the language more easily. Javascript is probably the least strict language among those I have used. This can be good and bad, but when considering a scripting language I can probably live with it, given the rapid prototyping capabilities it offers. If you look at the language itself, Javascript shares most of the common properties found in other scripting languages (Lua, Python and others). The interesting thing about Javascript is the function-object duality the language proposes. This can be confusing for a programmer coming from a language that uses an imperative approach (something like C++ or Java), maybe less so if you started out with Javascript or already know functional programming. What is really interesting is that functions are first-class citizens in Javascript. The concept is not unique to Javascript; Lua has a similar concept, and both languages can be used for functional-style programming. Inheritance under Javascript is prototype-based, which again is similar to how Lua implements inheritance but different from the classical inheritance that defines object-oriented languages like C++ and Java. I don’t see any particular problem with that, given that there is a way to export a class hierarchy.

Integrating or embedding Javascript into an application is no different than it is for most other scripting languages. I found Google’s V8 code the easiest and cleanest to understand and integrate among all the languages/embedding libraries I have encountered. However, the entire process can be extremely repetitive and time consuming if a large set of libraries/classes/functions has to be exported. Again, this is no different for most other scripting languages (except maybe Python, where you have an automatic process). I haven’t had time to look at SFX and TraceMonkey yet, but I am sure the process should be similar.
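
For a feel of how little it takes to get started, here is a minimal V8 embedding sketch along the lines of the classic embedder’s guide. Note that V8’s API has changed a lot between versions, so treat the exact calls as illustrative of the early interface rather than definitive.

```cpp
#include <v8.h>
#include <cstdio>

int main() {
    // All local handles live inside a handle scope.
    v8::HandleScope handle_scope;

    // A context is an isolated execution environment for scripts.
    v8::Persistent<v8::Context> context = v8::Context::New();
    v8::Context::Scope context_scope(context);

    // Compile and run a script; this is where V8 generates native code.
    v8::Handle<v8::String> source = v8::String::New("var x = 20; x + 22;");
    v8::Handle<v8::Script> script = v8::Script::Compile(source);
    v8::Handle<v8::Value> result = script->Run();

    // Convert the result back to something C++ can print.
    v8::String::AsciiValue ascii(result);
    std::printf("%s\n", *ascii);  // prints 42

    context.Dispose();
    return 0;
}
```

Exporting an engine’s classes and functions means writing wrappers around each one in a similar fashion, which is the repetitive part I mentioned above.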

So what is special about Javascript? If you consider the language and the embedding platform/technique, nothing actually. Head-to-head it’s probably equivalent to Lua or any other embeddable scripting language out there. What sets Javascript apart from the rest of the pack today is actually mentioned in the first paragraph of this article: the new generation of scripting engines from Mozilla, Google and the WebKit team. They give Javascript a huge advantage in terms of speed and performance. JIT compilation to native code, and the optimizations that come with it, result in a significant increase in the speed of the script code being executed.

I know most game developers (including myself) are performance junkies, but hold on before you jump in and start integrating Javascript into your favourite game engine. There are some points worth noting. For native code compilation to work, the Javascript compiler needs an assembler to emit native code instructions. So on platforms for which the Javascript compiler has no assembler, you are basically out of luck. V8 has assemblers for Intel and ARM as of now; for any other architecture the V8 engine will be of no help, unless, that is, you implement a version of the built-in assembler for that platform yourself. I am not too sure how this will scale to platforms like the PS3 and Xbox, and it could very well be a sticking point. The other thing that caught my attention is that JIT compilation to native code means Javascript code has to be distributed as plain-text files. Such code could be subject to manipulation and a potential cause for concern if your code has to adhere to security specifications.

That said, the fact that so much R&D is going into Javascript makes the language a top contender for the post of scripting engine.

In-game advertising.

I was recently asked what I thought about in-game advertisements. Given that I am a game developer, the whole concept of in-game advertisements does seem like an intriguing subject. There are two ways to look at the whole idea. The first and most obvious one is as a potential money-spinning tool for developers/publishers, and that is quite an interesting idea to explore. But there is also a view that such ads inside games will end up polluting and degrading the overall game experience, could draw the ire of players or the community as a whole, and in the worst case have an adverse effect on game sales. The last thing you want is a commercial break or a cutscene marketing something inside a fast-paced game. Such a thing would be a disaster, and I am sure players will frown on games that take this road.

My feeling is, game designs don’t need to be this drastic to have the whole in-game advertising thing work for them. It would be a horrendous mistake to have advertisements inside the game that degrade gameplay. However, having interactive ads that are properly integrated into the game may not necessarily be a bad prospect. That is exactly why ads in games need to be more than just inert props and simple banners. Such things make little impact on a player and might or might not get noticed. The impact of inert 2D items will be limited, especially if the game features a 3D world (2D games could still make use of banners more effectively than 3D ones, but even they could do better if the ads were interactive). Games differ from other traditional forms of digital entertainment in that they are an interactive medium, and therefore, like every other gameplay element, ads within games will be most effective when the player interacts with them.

Gamers (especially hard-core gamers) might frown on the idea of having in-game ads, but it may not be all bad. Not all games are ideal for in-game ads; I will touch on that point a little later. I personally haven’t seen ads in most of the games I have played, but I have seen them in some, and the example I can cite is Second Life. Second Life is a great example of how ads can make it into games without being immediately frowned upon. The player is given a choice of whether he/she wants to interact with and view the content (of the ad), instead of having it forced on him/her. A player will appreciate this, and the ad campaign will thus be successful. In the brief time I played Second Life (here), I visited a couple of interesting places where the in-game advertisements looked really great. Music seemed to be the best-appreciated category of ads there, followed by fashion, but I am sure people must be advertising all sorts of stuff. As I said, my stint with Second Life was pretty brief.

As a game developer I look at the whole scenario of in-game advertisements positively. I am sure that with proper game design, advertisements can be made sufficiently interactive to “fit into” a game without actually nagging the player. I would go further and say that, used effectively, a game could actually be enhanced (case in point: Second Life), so that marketing inside games becomes something the player can look forward to, with a genuine interest in an online try-before-you-buy opportunity. Can all games be effective as in-game advertising mediums? No; some might do a better job at it, while others might not be as effective. MMOGs like Second Life will probably be far better at it than, say, an FPS set on an alien planet. Some games, like shoot-em-up tournaments, might not be effective at all. Having said that, we can never be too sure about marketing ideas and how “genius minds” work. So someone might just find some way to insert a “Matrimony Online” banner inside the Strogg Nexus on Stroggos; you never can tell.

Larrabee isn’t necessarily a means to a custom graphics API.

Has the graphics world come a full circle now that we see Intel’s first tech presentations of Larrabee? Will we see a resurgence of people writing custom software rasterizers? Is the heyday of the GPU truly coming to an end? Are APIs like OpenGL and Direct3D going to become redundant? I have seen these and a lot of similar questions being asked over the past couple of days, with people even going as far as saying that technologies like Larrabee could be used to write custom graphics APIs. This has been partly due to the huge emotional response to the OpenGL debacle a couple of days back, and partly due to the fact that Intel recently unveiled portions of its (up until now mysterious) Larrabee technology. Some people seem to have drawn the conclusion that we may soon not require the currently used graphics APIs anymore. Larrabee does promise freedom from the traditional hardware-based approach; rendering APIs today are closely tied to the underlying hardware, and the graphics programmer using them is thus limited to what the hardware offers him/her.

Technologies like Larrabee do offer immense flexibility and power. There is no doubt in my mind that, if needed, one could create a custom graphics API using them. Unfortunately, writing custom APIs might not be the answer or even an option, and there are good reasons not to do it. The first, and probably what people see as a less important reason, is the fact that APIs like OpenGL and Direct3D are standards, and it is not advisable to dismiss them outright. What if code needs to be ported across platforms where Larrabee might not be available? How do you then scale a custom API to that hardware? One could argue that you could get more performance by cutting out any layer that sits in between and accessing the Larrabee hardware directly. Call me a skeptic, but I see issues here as well. It may be very easy to hack up a simple rasterizer, but it’s a completely different thing to produce a vector-optimized one, even for a technology like Larrabee. It’s not a trivial task even with the best vector-optimizing compilers from Intel. I would lay my bets on the star team working at Intel to produce a better rasterizer than I probably can, and I am pretty sure this rasterizer will be exposed via Direct3D and/or OpenGL interfaces. Yes, you could probably make certain specific portions of your engine highly optimal using the generic Larrabee architecture, but a custom rendering API may not necessarily be the best option.
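
To illustrate the gap: a naive triangle fill really is only a screenful of code, which is exactly why it is tempting to underestimate the problem. A minimal sketch (plain scalar C++, no SIMD) follows; a production rasterizer would walk pixels in blocks, vectorize the edge tests, handle sub-pixel precision and fill rules, and interpolate attributes, and that is where the real engineering effort goes.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Twice the signed area of triangle (a, b, c); positive when c lies
// to the left of the directed edge a -> b.
static float Edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Naive one-pixel-at-a-time fill of a counter-clockwise triangle.
void FillTriangle(std::vector<uint32_t>& fb, int width, int height,
                  Vec2 v0, Vec2 v1, Vec2 v2, uint32_t color) {
    int minX = std::max(0, (int)std::min(v0.x, std::min(v1.x, v2.x)));
    int maxX = std::min(width - 1, (int)std::max(v0.x, std::max(v1.x, v2.x)));
    int minY = std::max(0, (int)std::min(v0.y, std::min(v1.y, v2.y)));
    int maxY = std::min(height - 1, (int)std::max(v0.y, std::max(v1.y, v2.y)));

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p = { x + 0.5f, y + 0.5f };
            // Inside if the sample lies to the left of all three edges.
            if (Edge(v0, v1, p) >= 0.0f &&
                Edge(v1, v2, p) >= 0.0f &&
                Edge(v2, v0, p) >= 0.0f)
                fb[y * width + x] = color;
        }
    }
}
```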

As a piece of technology, Larrabee is very interesting, especially for real-time graphics. For the first time you will have the capacity to be truly and completely (well, maybe not completely) free from the shackles of hardware. There is so much more you could accomplish with it; you could also use Larrabee for things like parallel processing and/or for doing intensive, highly vectorized computations very efficiently.

OpenGL 3.0 is finally released, and it disappoints.

The ARB has released the much anticipated OpenGL 3.0 spec, and if you have been following the development of OpenGL for some time, you will know that hopes were riding high that OpenGL 3.0 would be a revolutionary redesign of an ailing and rather old API. Apparently it’s none of that, and even worse, it’s actually nothing at all. OpenGL has been dragging along for the past 15 years, adding layer upon layer of messy extensions, to the point that many had expected the ARB to really go ahead and make radical changes in the 3.0 specification. None of that has happened. Most of the radical changes promised have not been delivered. All that seems to have happened is the standardization of already existing extensions by making them part of the standard. Sad, really sad.

As a game developer, and more as someone who has been using OpenGL for the past 8 years, I am pretty disappointed. I was hoping to see a refreshing change to OpenGL. I am at a loss for words here; no, really, I am. There is really nothing more to say. The changes have been so shallow that I wonder why it called for a major version number change in the first place. 2.1 to 3.0, phooey; it should have been 2.1.1 instead. Let me put it another way: my current OpenGL renderer, which is based on OpenGL 2.x, could be promoted to 3.0 with probably 4 or 5 minuscule changes, or maybe none at all! Where is the Direct3D 10+ level functionality that was hyped? Where is the “radically forward looking” API?
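
To underline how little changes for existing code: on Windows the only genuinely new step is asking the driver for a 3.0 context through the WGL_ARB_create_context extension; everything in a 2.x renderer keeps working. A minimal sketch, with error handling elided:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Tokens from the WGL_ARB_create_context extension spec.
#define WGL_CONTEXT_MAJOR_VERSION_ARB 0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB 0x2092

typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(
    HDC hdc, HGLRC shareContext, const int* attribList);

// Call with a legacy 2.x context already current, since we need a
// live context just to query the new entry point.
HGLRC CreateGL3Context(HDC hdc) {
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return 0;  // no 3.0 support in the driver; stay on 2.x

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        0  // terminator
    };
    return wglCreateContextAttribsARB(hdc, 0, attribs);
}
```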

What does this say for the future of OpenGL? Sadly, not very much, at least in the gaming arena. It was already losing ground, and there was a lot of anticipation that the ARB would deliver a newer OpenGL to “take on” Direct3D. I must say that a powerful Direct3D (thanks to DirectX 11) looks all set to become the unequivocal champion when it comes to gaming graphics. OpenGL will clearly take a back seat to DirectX here. While some may argue that OpenGL will continue to flourish in the CAD arena, I am not so sure Direct3D won’t find favor over there as well. OpenGL drivers from most vendors already fall short of their Direct3D counterparts. That’s to be expected; it’s not their fault either. What else can they do when you have a 15-year-old API to support whose legacy functionality is out of touch with modern-day reality?

EDIT: The major thing missing from OpenGL 3.0 was a clean API rewrite. When you compare OpenGL 3.0 with Direct3D 11, it’s how things look from here on forward that bothers me. Direct3D is more streamlined to address developments in hardware, and while vendors could expose similar functionality via OpenGL using vendor-specific extensions, the whole situation doesn’t look too good. Making a driver that is fully OpenGL compatible will cost more in terms of manpower, because the specification is so large. Yes, there is an opportunity to deprecate things, but I am not too sure how that will pan out either. Supporting older features on newer hardware means compromises and sacrifices in quality and performance. Driver writers cannot optimize for everything, and that is why, in the end, performance suffers; or in the worst case, the driver ships broken.

Avoid online tutorials as a learning resource.

If you are starting with programming (of any kind, but especially game programming), avoid online tutorials as a source of reference as far as possible. I am not talking about online course-ware offered by institutes; I am referring to code snippets and short tutorials that show small but very attractive demos, which could easily be mistaken by a newbie as his launchpad to the next Crysis. I too was guilty of these very things in the past, and it’s only after having been down that road that you realize some of the things taught were not the correct way to learn them. The problem with online tutorials is that most authors who write them have little clue how to tutor and/or present learning material. While their intentions are noble and the authors themselves do have a grasp of the topic (at least some do), that doesn’t necessarily translate into a great learning experience for a beginner. There may be exceptions; I am not saying all tutorials are bad. But such tutorials are far from productive for a beginner. In fact, I would say they are actually counterproductive. As I have often found, the main focus in such tutorials is mostly on what the author himself knows, and in the worst case those issues could be totally irrelevant or unimportant from a beginner’s point of view.

Let’s make a distinction here. It’s not that tutorials are bad; it’s just that they are not meant for a total beginner trying to get his/her “feet wet” with the subject. They are often excellent resources for putting ideas across, or for demonstrating advanced topics to an audience that already has a fair amount of experience with the subject. The best way to begin learning anything is to go to your nearest book-store or Amazon.com, find the best book on the relevant topic and invest some money in buying it. Those books are rated as the best in their class for good reason: people have previously used that material and have actually gained knowledge from reading it. A lot of painstaking effort goes into the creation of a good book, and a lot of experts review it before it hits the shelves, at least in the case of most good ones out there. Start with chapter 1 and read through the book step by step, even if the examples and material look downright mundane. By the time you’ve finished, you will have gained more all-round knowledge of the subject than if you had referred to some online tutorial.

It’s true, trueSpace is indeed free.

Update: Microsoft has taken down the Caligari website and terminated trueSpace. Don’t bother looking for it, trueSpace is dead. If you are looking for a free powerful 3D modeling package, try Blender 3D.

I couldn’t believe it at first, but after the acquisition of Caligari, Microsoft has released the fully featured 3D authoring package trueSpace for free. Simply put, trueSpace is a 3D modeler, and it seems a pretty good one looking at the features it supports. It may not dethrone Maya or Max anytime soon, but for nada it packs a lot of punch, especially if you are an indie game studio or a budding 3D artist and don’t have the finances to invest in something along the lines of the top modelers mentioned above. I am not saying trueSpace is the best; quite frankly I haven’t even given the package a complete look-through as yet. It takes a considerable amount of time and a sizable investment of effort to fully grasp any 3D authoring package, and probably a lot more before you can become truly productive with it. trueSpace is no different. I haven’t personally modeled anything with it as yet, nor do I currently have the time to invest in such an endeavor (maybe after the game ships).

However, from the looks of it, a free trueSpace is something that can’t be ignored. The next thing I wanted to check was whether the modeler could be integrated into a dev cycle for a game. That would require the package to have some sort of scripting system and/or an SDK with which custom export scripts and engine functionality could be integrated into the authoring system. I was browsing the website and, from the looks of it, C++/C SDKs and Python scripting are in fact offered by trueSpace. Again, I haven’t had a good look at it, but the fact that it’s there should be reason enough to take a look if you are interested. The most important factor in selecting any authoring package is the availability of tutorials, and that’s another reason trueSpace stands out: the documentation and video tutorials are made available along with the package. Yes, I know, it seems too good to be true. Video tutorials are invaluable when learning any 3D modeling package. I remember, years ago, it was Blender video tutorials that really got me going with Blender. While my 3D skills leave a lot to be desired, most of the current game wouldn’t have been possible without those tutorials.

All of the above points make trueSpace a serious option to consider if you are a beginner or an indie game developer. While not the best, trueSpace is very attractive given the feature set and the price (which is 0). To be fair, I have only given the package a fleeting glimpse and that’s not how I would like to evaluate trueSpace, or for that matter any 3D package. So make your own assessments about the strengths and weaknesses of trueSpace by using the package yourselves. I would recommend having a go at the videos and tutorials first.

Doofus gets a dose of Optimizations.

Ah! It’s the optimization phase of the project and I am knee deep in both CodeAnalyst and NVIDIA PerfHUD. As far as memory-leak testing goes, most, no, all of it is done by my own custom memory manager built directly into the engine core, so no third-party leak detectors are needed by the game. AMD’s CodeAnalyst is an invaluable utility when it comes to profiling applications for CPU usage, and the fact that it’s free makes it even better. NVIDIA PerfHUD is probably the champion among graphics performance utilities, and, I think, vital when it comes to bulletproofing any graphics application for GPU performance. Too bad it doesn’t support OpenGL yet, but the O2 Engine’s renderers mirror each other almost to the point where a performance enhancement under the Direct3D renderer is experienced almost identically under the OpenGL renderer. I would have really liked PerfHUD to support OpenGL though. There are some issues under GL; for instance, FBOs under OpenGL perform a tad slower than render targets under Direct3D (on the same hardware), which I must admit has left me a little dumbfounded. Maybe it is just my GPU (yeah, my GPUs are a bit old, I must say) or maybe the drivers are at fault, but I have noticed a performance variance between the two even after considerable experimentation and optimization. It would have been good to have a utility like PerfHUD to probe directly at the draw calls and/or FBO switches. I am trying my luck with GLExpert, but I am not there yet. I must however say that GLExpert is nothing compared to PerfHUD.
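
For the curious, here is a stripped-down sketch of the idea behind the engine’s tracking allocator. The real thing in the engine core does a lot more (pools, tags, per-system statistics), and these names are made up for illustration: every allocation is recorded with its file and line, and anything still live at shutdown gets reported as a leak.

```cpp
#include <cstdio>
#include <cstdlib>
#include <map>

// What we remember about each live allocation.
struct AllocInfo { size_t size; const char* file; int line; };

class MemTracker {
public:
    static MemTracker& Get() { static MemTracker t; return t; }

    void* Alloc(size_t size, const char* file, int line) {
        void* p = std::malloc(size);
        if (p) {
            AllocInfo info = { size, file, line };
            allocs_[p] = info;
        }
        return p;
    }

    void Free(void* p) {
        allocs_.erase(p);
        std::free(p);
    }

    // Call once at engine shutdown: anything still tracked leaked.
    void ReportLeaks() const {
        for (std::map<void*, AllocInfo>::const_iterator it = allocs_.begin();
             it != allocs_.end(); ++it)
            std::printf("LEAK: %u bytes at %p (%s:%d)\n",
                        (unsigned)it->second.size, it->first,
                        it->second.file, it->second.line);
    }

private:
    std::map<void*, AllocInfo> allocs_;
};

// Engine code allocates through these macros to capture the call site.
#define ENGINE_ALLOC(size) MemTracker::Get().Alloc((size), __FILE__, __LINE__)
#define ENGINE_FREE(ptr)   MemTracker::Get().Free(ptr)
```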

[Screenshot: AMD CodeAnalyst]

[Screenshot: Doofus running under NVIDIA PerfHUD]

DirectX 9 to DirectX 11, where did 10 go?

This week there was a lot of buzz about DirectX 11. Yes, the newest version of the graphics API was unveiled by Microsoft at the XNA Game Fest, and it has an interesting feature set that, I think, was long overdue. Most of DirectX 11 doesn’t diverge from version 10 (and the almost uneventful version 10.1), but I think DirectX 11 should see renewed interest from game developers, since it provides features that were desperately needed in light of recent hardware developments. Version 11 (of course, with the features of 10 and 10.1) now seems a more complete API that addresses issues related to game and graphics development, and a better solution for the future.

What is really interesting is the emergence of what Microsoft terms the “Compute Shader”, no doubt marketing-speak for GPGPU, which they claim will allow the GPU, with its awesome power, to be used for “more than just graphics”; that smells like CUDA (Compute Unified Device Architecture) to me. I wouldn’t be surprised if the two turned out to be very similar (remember Cg/HLSL). In any case, what is important is that such technology will be available to game developers under version 11. Technologies like CUDA (GPGPU) are the requirement of the hour, and this could be why 11 sees a lot more interest than the earlier (10.x) versions.
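
Details on the interface are still thin, but for a rough idea of the shape such an API takes, here is a hypothetical sketch of kicking off a GPGPU job from the C++ side. The compute shader itself and the ‘cs’ and ‘uav’ objects (and their creation) are all assumed:

```cpp
#include <d3d11.h>

// Hypothetical compute dispatch: bind a compute shader and an output
// buffer view, then launch a grid of thread groups on the GPU.
void RunComputeJob(ID3D11DeviceContext* ctx,
                   ID3D11ComputeShader* cs,
                   ID3D11UnorderedAccessView* outputUAV,
                   UINT numGroups) {
    ctx->CSSetShader(cs, NULL, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &outputUAV, NULL);
    ctx->Dispatch(numGroups, 1, 1);  // numGroups x 1 x 1 thread groups

    // Unbind the view so the buffer can be consumed elsewhere.
    ID3D11UnorderedAccessView* nullView = NULL;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullView, NULL);
}
```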

There is a lot of talk about hardware-based tessellation, but frankly I haven’t seen many details on it; at least not enough to comment on in depth. From what little is being said, DirectX 11 hardware-based tessellation could be used to make models appear “more smooth”. How this translates into an actual implementation will become clear when more details come out. I am hazarding a guess here, but it should be something along the lines of a technology that allows sub-surf LODs to be calculated in real time and/or displacement/bump/normal mapping to be done on the fly. I am not too sure as yet, but it could be something along those lines, or something in between, or a combination of those techniques. Whatever it is, it should mean really good-looking games in the future.

Issues like multi-threaded rendering/resource handling are things that were a long time coming, and yes, it’s a good thing we will finally see them in the newer version. It just makes my job as a game developer a whole lot easier. Most details on Shader Model 5.0 are pretty sketchy, so I won’t go into things like shader length and function recursion. However, I hope such issues are addressed satisfactorily in the newer shader model.

So will DirectX 11 succeed where DirectX 10 failed? Will it get mass adoption like DirectX 9? Difficult to say. While most cutting-edge games have adopted DirectX 10, its usage remains low because of several factors. For one, many people still use XP, which doesn’t support version 10 (or greater) of the API (for whatever reason), which means most developers have to adopt the lowest common denominator of the alternatives available, and that generally is DirectX 9.0. Also, many people still don’t have DirectX 10-class hardware, which is another reason not to go for 10.x. The situation with DirectX 10.1 is a total mess. It’s interesting, but there is even talk that NVIDIA might skip 10.1 entirely and aim directly for version 11-class hardware. There is logic to that decision, given that most games (except the really high-end ones) don’t even bother to use DirectX 10, let alone 10.1. All this makes adoption of 10.x unattractive for game developers.

Version 11 does bring some really good features to gaming in general, but that is not necessarily the reason the API will succeed. As a game developer I think 11 holds some serious promise, and it could be a success if Microsoft plays its cards right. However, some of the issues mentioned above still bother me. Microsoft is still fixated on releasing version 11 only for Vista, so don’t expect your XP machine to ever run DirectX 11, even if you buy brand-new hardware. That said, like most previous versions, DirectX 11 is backward compatible with versions 10 and 10.1, and even 9.0. It would be impossible for Microsoft to ignore the thousands of games that already use DirectX 9, so it’s almost a written fact that newer versions of the API will continue to be backward compatible until and unless we see a sizable number of games diverge to newer versions, and that could be a long way away, since many games even today are still being produced on version 9.0.
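
In Direct3D 11 this backward compatibility surfaces at device creation through feature levels: you ask for an 11 device and let it fall back through 10.1/10/9-class capabilities. A minimal sketch using the D3D11 headers, with error handling elided:

```cpp
#include <d3d11.h>

// Create the best device the installed hardware supports, falling
// back through older feature levels instead of failing outright.
HRESULT CreateBestDevice(ID3D11Device** device,
                         ID3D11DeviceContext** context) {
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0,   // full DirectX 11 hardware
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,    // DirectX 9-class hardware
    };
    D3D_FEATURE_LEVEL obtained;
    return D3D11CreateDevice(
        NULL,                        // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        NULL, 0,                     // no software module, no flags
        levels, ARRAYSIZE(levels),
        D3D11_SDK_VERSION,
        device, &obtained, context);
}
```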