Larrabee isn’t necessarily a means to a custom graphics API.

Has the graphics world come full circle now that we see Intel's first tech presentations of Larrabee? Will we see a resurgence of people writing custom software rasterizers? Is the heyday of the GPU truly coming to an end? Are APIs like OpenGL and Direct3D going to become redundant? I have seen these and a lot of similar questions being asked over the past couple of days, with people even going as far as saying that technologies like Larrabee could be used to write custom graphics APIs. This has been partly due to the huge emotional response to the OpenGL debacle a couple of days back, and partly due to the fact that Intel recently unveiled portions of its (up until now mysterious) Larrabee technology. Some people seem to have drawn the conclusion that soon we may not require the currently used graphics APIs at all. Larrabee does promise freedom from the traditional hardware-based approach: rendering APIs today are closely tied to the underlying hardware, and the graphics programmer using them is thus limited to what the hardware offers him/her.

Technologies like Larrabee do offer immense flexibility and power. There is no doubt in my mind that, if needed, one could create a custom graphics API using them. Unfortunately, writing custom APIs might not be the answer, and there are good reasons not to do it. The first, and probably what people see as the less important reason, is the fact that APIs like OpenGL and Direct3D are standards, and it is therefore not advisable to dismiss them outright. What if code needs to be ported across platforms where Larrabee might not be available? How do you scale a custom API to that hardware? One could argue that you could probably get more performance by cutting out any layer that sits in between and accessing the Larrabee hardware directly. Call me a skeptic, but I see issues here as well. It may be very easy to hack up a simple rasterizer, but it's a completely different thing to produce a vector-optimized one, even for a technology like Larrabee. It's not a trivial task even with the best vector-optimizing compilers from Intel. I would lay my bets on the star team working at Intel to produce a better rasterizer than I probably can, and I am pretty sure this rasterizer will be exposed via Direct3D and/or OpenGL interfaces. Yes, you could probably make certain specific portions of your engine highly optimal using the generic Larrabee architecture, but a custom rendering API may not necessarily be the best option.
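To make that concrete, here is a minimal, purely illustrative sketch (nothing Intel has shown) of the kind of scalar half-space rasterizer that is "easy to hack up". Turning this inner loop into efficient 16-wide SIMD, with proper fill rules and hierarchical rejection, is where the real work begins:

```cpp
// Hypothetical sketch: the "easy to hack up" scalar half-space rasterizer.
// Assumes counter-clockwise winding; a real Larrabee-class rasterizer would
// process pixel blocks with 16-wide SIMD, hierarchical rejection and proper
// fill rules.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Twice the signed area of triangle (a, b, c); positive when c lies to the
// left of the directed edge a -> b.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

void rasterize(const Vec2& v0, const Vec2& v1, const Vec2& v2,
               std::vector<uint32_t>& fb, int width, int height,
               uint32_t color) {
    // Clip the triangle's bounding box to the framebuffer.
    int minX = std::max(0, (int)std::min({v0.x, v1.x, v2.x}));
    int maxX = std::min(width - 1, (int)std::max({v0.x, v1.x, v2.x}));
    int minY = std::max(0, (int)std::min({v0.y, v1.y, v2.y}));
    int maxY = std::min(height - 1, (int)std::max({v0.y, v1.y, v2.y}));

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p = { x + 0.5f, y + 0.5f };   // sample at the pixel center
            // Inside if the pixel center is left of (or on) all three edges.
            if (edge(v0, v1, p) >= 0.0f &&
                edge(v1, v2, p) >= 0.0f &&
                edge(v2, v0, p) >= 0.0f)
                fb[y * width + x] = color;
        }
    }
}
```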

As a piece of technology Larrabee is very interesting, especially for real-time graphics. For the first time you will have the capacity to be truly (well, almost completely) free from the shackles of hardware, and there are so many more things you could accomplish with it. Larrabee could also be used for other things, like parallel processing or running intensive, highly vectorized computations very efficiently.

Couldn’t have put it better myself….


OpenGL 3.0 is finally released, and it disappoints.

The ARB has released the much anticipated OpenGL 3.0 spec, and if you have been following OpenGL's development for some time, you would know that hopes were riding high on OpenGL 3.0 being a revolutionary redesign of an ailing and rather old API. Apparently it's none of that, and even worse, it's actually nothing at all. OpenGL has been dragging along for the past 15 years, adding layer upon layer of mucky extensions, to the point that many had expected the ARB to really go ahead and make radical changes in the 3.0 specification. None of that has happened. Most of the radical changes promised have not been delivered. All that seems to have happened is the standardization of already existing extensions by making them a part of the standard. Sad, really sad.

As a game developer, and more so as someone who has been using OpenGL for the past 8 years, I am pretty disappointed. I was hoping to see a refreshing change to OpenGL. I am at a loss for words here; no really, I am. There is really nothing more to say. The changes are so shallow that I wonder why they called for a major version number change in the first place. 2.1 to 3.0, phooey; it should have been 2.1.1 instead. Let me put it another way: my current OpenGL renderer, which is based on OpenGL 2.x, could be promoted to 3.0 with probably 4 or 5 minuscule changes, or maybe none at all! Where is the Direct3D 10+ level functionality that was hyped about? Where is the "radically forward looking" API?
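For the curious, here is roughly what that "promotion" amounts to: a sketch using the new WGL_ARB_create_context entry point on Windows (tokens taken from the extension spec; error handling omitted). Everything else in a 2.x code path can stay exactly as it is:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Tokens from the WGL_ARB_create_context extension spec.
#define WGL_CONTEXT_MAJOR_VERSION_ARB 0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB 0x2092

typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*);

// Assumes a current (dummy) 2.x context already exists, as the spec requires
// before wglGetProcAddress can hand out the new entry point.
HGLRC createGL30Context(HDC dc) {
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return wglCreateContext(dc);          // no 3.0 driver: fall back to 2.x
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        0                                     // list terminator
    };
    return wglCreateContextAttribsARB(dc, 0 /* no shared context */, attribs);
}
```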

What does this say for the future of OpenGL? Sadly not very much, at least in the gaming arena. It was already losing ground, and there was a lot of anticipation that the ARB would deliver a newer OpenGL to "take on" Direct3D. I must say that a powerful Direct3D (thanks to DirectX 11) looks all set to become the unequivocal champion when it comes to gaming graphics. OpenGL will clearly take a back seat to DirectX here. While some may argue that OpenGL will continue to flourish in the CAD arena, I am not so sure that Direct3D won't find favor over there as well. OpenGL drivers from most vendors already fall short of their Direct3D counterparts. That's to be expected, and it's not their fault either. What else can they do when you have a 15-year-old API to support whose legacy functionality is out of touch with modern-day reality?

EDIT: The major thing missing from OpenGL 3.0 was a clean API rewrite. When you compare OpenGL 3.0 with Direct3D 11, it's how things look from here on forward that bothers me. Direct3D is more streamlined to address developments in hardware, and while vendors could also expose similar functionality via OpenGL using vendor-specific extensions, the whole situation doesn't look too good. Making a driver that is fully OpenGL compatible will cost more in terms of manpower, because the specification is so large. Yes, there is an opportunity to deprecate things, but I am not too sure how things will pan out there either. Supporting older features on newer hardware means compromises and sacrifices in quality and performance. Driver writers cannot optimize for everything, and that is why, in the end, performance suffers; or in the worst case, drivers ship broken.

Avoid online tutorials as a learning resource.

If you are starting with programming (of any kind, but especially game programming), avoid online tutorials as a source of reference as far as possible. I am not talking about online course-ware offered by institutes; I am referring to code snippets and short tutorials that show small but very attractive demos, which could easily be mistaken by a newbie for a launchpad to the next Crysis. I too was guilty of these very things in the past, and it's only after having been down that road that you realize some of the things taught were not the correct way to learn them. The problem with online tutorials is that most authors who write them have little clue how to tutor and/or present learning material. While their intentions are noble, and the authors themselves do have a grasp of the topic (at least some do), it doesn't necessarily translate into a great learning experience for a beginner. There may be exceptions; I am not saying all tutorials are bad. However, such tutorials are far from being productive for a beginner. In fact, I would say they are actually counterproductive. As I have often found, the main focus in such tutorials is mostly on what the author himself knows, and in the worst case those things can be totally irrelevant, or not as important, from a beginner's point of view.

Let's make a distinction here. It's not that tutorials are bad; it's just that they are not meant for a total beginner trying to get his/her "feet wet" with the subject. They are often excellent resources for putting ideas across, or for demonstrating advanced topics to an audience that already has a fair amount of experience with the subject. The best way to begin learning anything is to go to your nearest book-store or Amazon.com, find the best book on the relevant topic, and invest some money in buying it. Those books are rated as the best in their class for good reason: people have previously used that material and have actually gained knowledge after reading through it. A lot of painstaking effort goes into the creation of a good book, and a lot of experts review it before it hits the shelves, at least in the case of most good ones out there. Start with chapter 1 and read through the book step by step, even if the examples and material look downright mundane. By the time you've finished, you will have gained more all-round knowledge of the subject you were trying to learn than if you had referred to some online tutorial.

It’s true, trueSpace is indeed free.

Update: Microsoft has taken down the Caligari website and terminated trueSpace. Don't bother looking for it; trueSpace is dead. If you are looking for a free, powerful 3D modeling package, try Blender 3D.

I couldn't believe it at first, but after the acquisition of Caligari, Microsoft has released the fully-featured 3D authoring package trueSpace for free. Simply put, trueSpace is a 3D modeler, and it seems a pretty good one looking at the features it supports. It may not dethrone Maya or Max anytime soon, but for nada it packs a lot of punch, especially if you are an indie game studio or a budding 3D artist and don't have the finances to invest in something along the lines of the top modelers mentioned above. I am not saying trueSpace is the best; quite frankly, I haven't even given the package a complete look-through as yet. It takes a considerable amount of time and a sizable investment of effort to fully grasp any 3D authoring package, and probably a lot more before you can become truly productive with it. trueSpace is no different. I haven't personally modeled anything with it as yet, nor do I currently have the time to invest in such an endeavor (maybe after the game ships).

However, from the looks of it, a free trueSpace seems to be something that can't be ignored. The next thing I wanted to look for was whether the modeler could be integrated into a dev cycle for a game. That would require the package to have some sort of scripting system and/or offer an SDK, using which custom export scripts and engine functionality can be integrated with the authoring system. I was browsing the website, and from the looks of it, C++/C SDKs and Python scripting are in fact offered by trueSpace. Again, I haven't had a good look at it, but the fact that it's there should be a good enough reason to have a look if you are interested. The most important factor in selecting any authoring package is the availability of tutorials, and that's another reason trueSpace stands out: the documentation and video tutorials are made available along with the package. Yes, I know, it seems too good to be true. Video tutorials are invaluable when learning any 3D modeling package. I remember, years ago, it was video tutorials that really got me going with Blender. While my 3D skills leave a lot to be desired, most of the current game wouldn't have been possible without those tutorials.

All of the above points make trueSpace a serious option to consider if you are a beginner or an indie game developer. While not the best, trueSpace is very attractive given the feature set and the price (which is zero). To be fair, I have only given the package a fleeting glimpse, and that's not how I would like to evaluate trueSpace, or for that matter any 3D package. So make your own assessment of the strengths and weaknesses of trueSpace by using the package yourself. I would recommend having a go at the videos and tutorials first.

Doofus gets a dose of Optimizations.

Ah! It's the optimization phase of the project, and I am knee deep in both CodeAnalyst and NVIDIA PerfHUD. As far as memory-leak testing goes, most, no, all of the memory-leak testing is done by my own custom memory manager built directly into the engine core, so no third-party leak detectors are needed by the game. AMD's CodeAnalyst is a utility that is invaluable when it comes to profiling applications for CPU usage, and the fact that it's free makes it even better. NVIDIA PerfHUD is probably the champion among graphics performance utilities and, I think, is vital when it comes to bullet-proofing any graphics application for GPU performance. Too bad it doesn't support OpenGL yet, but the O2 Engine's renderers mirror each other almost to the point where a performance enhancement under the Direct3D renderer is experienced almost identically under the OpenGL renderer. I would have really liked PerfHUD to support OpenGL though. There are some issues under GL; for instance, FBOs under OpenGL perform a tad bit slower than render targets under Direct3D (on the same hardware), which I must admit has left me a little dumbfounded. Maybe it's just my GPUs (yeah, my GPUs are a bit old, I must say), or maybe the drivers are at fault, but I have noticed a performance variance between the two even after considerable experimentation and optimization. It would have been good to have a utility like PerfHUD to probe directly at the draw calls and/or FBO switches. I am trying my luck with GLExpert, but I am not there yet. I must say, however, that GLExpert is nothing compared to PerfHUD.

[Screenshot: AMD CodeAnalyst]

[Screenshot: Doofus running under NVIDIA PerfHUD]
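For those curious about the in-engine leak testing mentioned above: this is not the O2 Engine's actual manager, just a hedged sketch of the usual technique it alludes to. Route allocations through a registry that records the call site of every live block, then dump whatever is still registered at shutdown:

```cpp
// Not the O2 Engine's actual manager -- a sketch of the common technique:
// record the call site of every live allocation in a registry, then dump
// whatever is still registered at shutdown. Single-threaded for brevity;
// a real engine version would lock the registry.
#include <cstdio>
#include <cstdlib>
#include <map>

struct AllocInfo { size_t size; const char* file; int line; };
static std::map<void*, AllocInfo> g_allocs;   // all live allocations

void* engineAlloc(size_t size, const char* file, int line) {
    void* p = std::malloc(size);
    if (p) {
        AllocInfo info = { size, file, line };
        g_allocs[p] = info;
    }
    return p;
}

void engineFree(void* p) {
    g_allocs.erase(p);    // anything that never comes through here leaks
    std::free(p);
}

// Call once at engine shutdown: anything still registered never got freed.
void reportLeaks() {
    for (std::map<void*, AllocInfo>::const_iterator it = g_allocs.begin();
         it != g_allocs.end(); ++it)
        std::printf("LEAK: %u bytes at %p (%s:%d)\n",
                    (unsigned)it->second.size, it->first,
                    it->second.file, it->second.line);
}

// The macros stamp the call site into every allocation.
#define ENGINE_ALLOC(sz) engineAlloc((sz), __FILE__, __LINE__)
#define ENGINE_FREE(p)   engineFree(p)
```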

DirectX 9 to DirectX 11, where did 10 go?

This week there was a lot of buzz about DirectX 11. Yes, the newest version of the graphics API was unveiled by Microsoft at the XNA Game Fest, and it has an interesting feature set that, I think, was long overdue. Most of DirectX 11 doesn't diverge from version 10 (and the almost uneventful version 10.1), but I think DirectX 11 should see renewed interest from game developers, since it provides features that were desperately needed in light of recent hardware developments. Version 11 (with the features of 10 and 10.1, of course) now seems a more complete API for addressing issues related to game and graphics development, and a better solution for the future.

What is really interesting is the emergence of what Microsoft terms the "Compute Shader", no doubt marketing speak for GPGPU, which they claim will allow the GPU, with its awesome power, to be used for "more than just graphics"; which smells like CUDA (Compute Unified Device Architecture) to me. I wouldn't be surprised if the two turned out to be very similar (remember Cg/HLSL). In any case, what is important is the fact that such technology will be available to game developers under version 11. Technologies like CUDA (GPGPU) are the need of the hour, and this could be the reason 11 sees a lot more interest than the earlier (10.x) versions.
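Based on the D3D11 interfaces Microsoft has described, a GPGPU pass through the Compute Shader path would look something like this sketch (assuming an already created device context, a compiled compute shader, and views wrapping whatever buffers the job works on; the parameter names here are mine, not Microsoft's):

```cpp
// Sketch of a GPGPU pass through the D3D11 Compute Shader path: bind inputs,
// bind a read-write (unordered access) output, and dispatch a grid of thread
// groups -- no vertices, no rasterizer, just compute.
#include <d3d11.h>

void runComputePass(ID3D11DeviceContext* context,
                    ID3D11ComputeShader* cs,
                    ID3D11ShaderResourceView* input,    // read-only data
                    ID3D11UnorderedAccessView* output,  // read-write output
                    UINT elementCount)
{
    context->CSSetShader(cs, NULL, 0);
    context->CSSetShaderResources(0, 1, &input);
    context->CSSetUnorderedAccessViews(0, 1, &output, NULL);

    // Launch one thread group per 64 elements (the group size itself is
    // declared in the shader source).
    context->Dispatch((elementCount + 63) / 64, 1, 1);
}
```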

There is a lot of talk about hardware-based tessellation, but frankly I haven't seen too many details on that; at least not enough to make a detailed comment. From what little is being said, DirectX 11 hardware-based tessellation could be used to make models appear "more smooth". How this ultimately translates to an actual implementation will become clear when more details come out. I am hazarding a guess here, but it should be something along the lines of technology that allows sub-surf LODs to be calculated in real time and/or displacement/bump/normal mapping to be done on the fly. I am not too sure as yet, but it could be something along those lines, or maybe something in between, or a combination of those techniques. Whatever it is, it should mean really good-looking games in the future.

Things like multi-threaded rendering and resource handling were a long time coming, and yes, it's a good thing we will finally see them in the newer version. It just makes my job as a game developer a whole lot easier. Most details on Shader Model 5.0 are pretty sketchy, so I won't go into things like shader length and function recursion. However, I hope such issues are addressed satisfactorily in the newer shader model.
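Multi-threaded rendering, as described so far, comes via "deferred contexts": worker threads record command lists, and the main thread plays them back on the immediate context. A rough sketch of how that is supposed to look (assuming a created device; error handling omitted):

```cpp
// Sketch of the deferred-context model for multi-threaded rendering.
#include <d3d11.h>

// Runs on a worker thread.
ID3D11CommandList* recordWork(ID3D11Device* device) {
    ID3D11DeviceContext* deferred = NULL;
    device->CreateDeferredContext(0, &deferred);

    // ... issue state/draw calls on 'deferred' exactly as you would on the
    // immediate context; nothing reaches the GPU yet ...

    ID3D11CommandList* commands = NULL;
    deferred->FinishCommandList(FALSE, &commands);  // FALSE: reset the
    deferred->Release();                            // deferred state after
    return commands;
}

// Runs on the main (render) thread.
void submit(ID3D11DeviceContext* immediate, ID3D11CommandList* commands) {
    immediate->ExecuteCommandList(commands, TRUE);  // TRUE: preserve the
    commands->Release();                            // immediate state
}
```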

So will DirectX 11 succeed where DirectX 10 failed? Will it get mass adoption like DirectX 9? Difficult to say. While most cutting-edge games have adopted DirectX 10, its usage remains low because of several factors. For one, many people still use XP, which doesn't support version 10 (or greater) of the API (for whatever reason), which means most developers have to adopt the lowest common denominator of the alternatives available, and that generally is DirectX 9.0. Also, many people still don't have DirectX 10 class hardware, and that is another reason not to go for 10.x. The situation with DirectX 10.1 is a total mess. It's interesting, but there is even talk that NVIDIA might skip over 10.1 entirely and aim directly for version 11 class hardware. There is logic to that decision, given that most games (except the really high-end ones) don't even bother to use DirectX 10, let alone 10.1. All this makes adoption of 10.x a non-lucrative proposition for game developers.

Version 11 does bring some really good features to gaming in general, but that is not necessarily the reason the API will succeed. As a game developer, I think 11 holds some serious promise and could be a success if Microsoft plays its cards right. However, there are some issues (mentioned above) that still bother me. Microsoft is still fixated on releasing version 11 only for Vista, so don't expect your XP machines to ever run DirectX 11 even if you buy brand new hardware. That said, like most previous versions, DirectX 11 is backward compatible with versions 10 and 10.1, and even 9.0. It would be impossible for Microsoft to ignore the thousands of games that already use DirectX 9, so it's almost a given that newer versions of the API will continue to be backward compatible until and unless we see a complete divergence of a sizable number of games to newer versions, and that could be a long way away, since many games even today are still being produced on the 9.0 version.

What’s so special about the ribbon control?

Having seen AutoCAD 2009, I was exploring the usage of tab controls for a modeling application, and that is when I read about the ribbon control being patented. I was kinda shocked to see that, though not surprised. I don't get it. What's so special about the ribbon control that it warrants a patent? If you look at it objectively, it's nothing more than a tab control with fancy buttons and controls and a carefully managed layout system. The layout could probably be done with the existing layout systems present in Qt and wxWidgets. OK, so toolkits like MFC and the newer .NET ones don't have fancy layouts, but nothing here has been innovated, and certainly not to the extent of being ground-breaking as claimed. Please! Many (many) applications already have tabbed tool-bars. Having controls, pretty pictures, an outlandish theme and a cool name doesn't automatically qualify something as an innovation. OK, I would agree that those tabbed menus do offer productivity, or maybe not (it depends on the user's taste), but the fact remains that tab controls have been in GUI systems ever since GUIs themselves became mainstream.
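To illustrate just how thin the "innovation" is, here is a rough Qt sketch of a "ribbon-like" tabbed tool-bar: a plain QTabWidget whose pages are rows of tool buttons, laid out by the stock layout system:

```cpp
// Rough sketch: a "ribbon" is little more than a QTabWidget whose pages
// hold rows of tool buttons, arranged by the stock layout system.
#include <QApplication>
#include <QHBoxLayout>
#include <QTabWidget>
#include <QToolButton>

int main(int argc, char** argv) {
    QApplication app(argc, argv);

    QTabWidget ribbon;
    QWidget* homeTab = new QWidget;
    QHBoxLayout* row = new QHBoxLayout(homeTab);
    const char* names[] = { "Paste", "Cut", "Copy", "Bold" };
    for (int i = 0; i < 4; ++i) {
        QToolButton* button = new QToolButton;
        button->setText(names[i]);
        row->addWidget(button);
    }
    row->addStretch();                 // pack the buttons to the left
    ribbon.addTab(homeTab, "Home");
    ribbon.addTab(new QWidget, "Insert");

    ribbon.resize(500, 120);
    ribbon.show();
    return app.exec();
}
```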

Unfortunately I have never used Office 2007; all of my documents are done using OpenOffice, so I really don't know what "special" innovation Microsoft has done with the ribbon. I have, however, used the tab-notebook far too often in GUI designs. I know the patent hasn't been granted yet, but to turn around, replace menus with tabs, and claim that somehow this is an innovation, is outrageous. Having a patent just complicates matters. As stated on the Wikipedia page, the ribbon is licensed to third-party developers royalty free, but the control has to conform to Microsoft's guidelines and users have to sign an NDA. So let me get this straight: if I were to use a tab-notebook control in my application and somehow "infringe" on the patent, would that mean I am in for a lawsuit from MS? Oh, OK then, whenever I use the tab-notebook I have to let Microsoft know about it. Wow!

Beyond C++

It's a fact that C++ has been the most successful, and probably the most widely used, programming language to date in the gaming industry. Even if you were to look at the software industry as a whole, countless software projects owe their existence to the language, and probably many more will eventually be made using C++. Universities around the world still churn out thousands of CS grads with C++ knowledge, and there will be no shortage of programmers who know C++, at least for some time to come. So why is C++ so popular? The reasons may be many (I am sure countless other blogs touch on this), but at the end of the day it boils down to the simple fact that it gets the job done! While it made good business sense to employ C++ some years ago, it doesn't make all that much sense when we consider scalability looking into the future. C++ in its current form is, well, simply inadequate. Most people will agree with me that C++ has probably outlived its time, and it is time the programming community as a whole moved away from the language. Easier said than done though, and the question to ask is: what real options do we have that provide radical changes to the way C++ operates? Very few, I would say. Before you raise your hand and give out the name of <insert your favourite language here>, let's look at some of the challenges facing future game development, or rather future software development as a whole.

Let's first look at the C++ language itself. It's well known that C++ has its faults. It's not an easy language to learn, and an even more difficult language to master compared to other languages. It takes a substantial amount of time and experience to understand the intricacies of the language, and even more time and effort to fully grasp the quirks and subtleties involved in software creation using C++. Typically it takes quite a lot of time before an average programmer becomes truly productive with C++. The learning curve is high and not for the faint-hearted. Besides this, the language has increasingly come under flak for allowing seemingly undefined behavior without complaining too much. The language does very little to deter flawed assumptions about some (very) basic constructs that contradict how things actually work. Even with proper planning, strict development cycles and stringent coding practices, software development using C++ is difficult. Memory management is a bane and can cause unexpected and unwarranted catastrophes, which are known all too well in the industry.
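Two tiny examples (illustrative only, not from any real codebase) of the kind of thing C++ happily accepts without a diagnostic:

```cpp
// Both of these compile cleanly, and both are broken.
#include <vector>
#include <climits>

void invalidated() {
    std::vector<int> v(3, 1);
    int& first = v[0];   // fine so far...
    v.push_back(4);      // ...but this may reallocate the buffer
    first = 9;           // write through a dangling reference: undefined behavior
}

int next(int x) {
    return x + 1;        // undefined when x == INT_MAX; the optimizer may
}                        // assume that simply never happens
```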

There are a growing number of languages that address the shortcomings of C++. Be it Java, C#, Python or most new(er) languages, all try to fill the gaps left by C++, and as a matter of fact most do a good job of it. However, with all its faults, C++ still stands out as a viable game development choice, for two primary reasons: a) it has vastly more libraries, code dumps (I am talking about game development only), engines, examples, everything really, so that it simply wins the argument right there. True, many libraries have bindings to other languages, but most of them seem rather inadequate or poorly maintained. Besides, there are more examples on cutting-edge technologies (especially graphics) written in C++ than in all other languages put together. And b) it's easier to get programmers for C++ than for any other programming language, Java being the exception. Things are changing though, and there are some concerted efforts being made to promote other languages and platforms (XNA, PyGame) as viable game development alternatives. However, all of those remain some distance away from challenging C++ for the number one position.

The above-mentioned points in support of C++ are non-trivial. They go a long way toward offsetting the demerits of building a game using the language. So the question really is: do we have any viable options beyond C++? The answer is somewhere between a complete YES and a total NO. As we stand today, the best scenario is probably building the core engine using C++ and then having a scripting system on top of it. Be it Lua, Squirrel, Python, or whatever; that way you can find a middle ground between reusing existing code and at the same time allowing rapid development and prototyping. Many engines/games take this route, and there is little doubt that such a process proves advantageous in the game building process. There are already a lot of games out there that use a scripting language for rapid prototyping and, in some cases, for building large sections of the game. Having a scripting language on top of the engine core is clearly a step in the right direction.

Scripting languages solve only some of the problems. They do a part of the job, and they do it pretty well. However, there are issues related to game development that require newer and more radical approaches. The challenge facing game development in the future is building an engine that can effectively and efficiently use parallel programming/computing techniques (Invasion of the multi-core machines). Current-generation programming techniques fall short of addressing this issue effectively. More importantly, most newer approaches to the multi-core problem are just way too complicated to be implemented effectively in C++. Efforts are on to find radical new solutions (C++ STM), but thus far they look good only on paper and still seem too cryptic to be put into production use. The issue of effectively using multiple CPU cores will probably be the biggest challenge for the next-generation engine designer. The more natural choice for addressing the multi-core and parallel programming issue is the use of functional programming languages. Since functional programming approaches are typically free of side effects, parallelizing functional code is easier than parallelizing imperative code. However, mixing functional and imperative styles can be an equally daunting task, and, as my argument in the above paragraphs suggests, there will still be a lot of code in C++ that will need some way of interacting with any functional language that may be used.

It's debatable whether going "strictly functional" will solve all the challenges the future will throw at us. A more plausible scenario would be to have specific portions of the engine/game parallelized, either by using a functional language or by having subsystem-level parallelism. This would allow existing (C/C++) code to be reused, though there are still challenges to overcome even with such approaches. Subsystem parallelism means having each subsystem (physics, renderer, logic, AI, sound, network…) run in its own thread or threads. This, however, is a very optimistic approach, since subsystems tend to overlap and in some cases critically depend on each other. Such a system could be achieved with existing code as well, but I remain very skeptical about whether it will actually work on the ground. Another approach is job-based parallelism: divide your most CPU-intensive tasks into jobs and then have a kernel system marshal them based on priority. This is similar to what an OS does and seems the best way to shoot for parallelism with existing mechanisms (see the sketch below). It does, however, require you to split your design into jobs, and that can prove challenging. Having a functional language as a scripting system (on top of a C/C++ based engine) is another idea to think about. I am not really sure how helpful it would be and can't really comment on it (since I myself haven't tried such a radical approach; maybe I ought to give it a shot). But it seems very much possible to have a functional language as a scripting system, and it could in theory be used to parallelize sections of the game/engine.
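A minimal sketch of that job-kernel idea, written with C++11 threading primitives for brevity (an engine of this era would sit on raw OS threads and a far smarter scheduler):

```cpp
// Minimal priority-based job kernel: workers pull the highest-priority job
// off a shared queue and run it, roughly what "a kernel system to marshal
// jobs based on priority" boils down to.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Job {
    int priority;                        // higher runs first
    std::function<void()> run;
    bool operator<(const Job& other) const { return priority < other.priority; }
};

class JobKernel {
public:
    explicit JobKernel(unsigned workerCount) : done_(false) {
        for (unsigned i = 0; i < workerCount; ++i)
            workers_.emplace_back([this] { workerLoop(); });
    }
    ~JobKernel() {
        { std::lock_guard<std::mutex> lock(mutex_); done_ = true; }
        wake_.notify_all();
        for (auto& t : workers_) t.join();
    }
    void submit(int priority, std::function<void()> fn) {
        { std::lock_guard<std::mutex> lock(mutex_);
          jobs_.push(Job{priority, std::move(fn)}); }
        wake_.notify_one();
    }
private:
    void workerLoop() {
        for (;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            wake_.wait(lock, [this] { return done_ || !jobs_.empty(); });
            if (done_ && jobs_.empty()) return;   // queue drained: shut down
            Job job = jobs_.top();                // highest priority wins
            jobs_.pop();
            lock.unlock();
            job.run();                            // run outside the lock
        }
    }
    std::priority_queue<Job> jobs_;
    std::mutex mutex_;
    std::condition_variable wake_;
    bool done_;
    std::vector<std::thread> workers_;
};

// Usage: JobKernel kernel(4);
//        kernel.submit(10, [] { /* physics step */ });
//        kernel.submit(1,  [] { /* asset streaming */ });
```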

So it would seem C++ might just survive in the game engine of tomorrow, although in a less prominent form than it holds today. It may happen that over the years C++ eventually fades out, but its part can't be totally ruled out. The transition from C++ to any other language will be slow, and may be tumultuous, with teams opting for a hybrid approach rather than rebuilding existing functionality from scratch. As new technologies progress and CPUs with hundreds of cores become commonplace, we should see the popularity of C++ waning, replaced by some other (maybe functional) language. As time progresses C++ might well become increasingly irrelevant, as more and more libraries get ported to newer, more modern languages, or newer, more efficient ones take their place.

New screens of the Doofus 3D Game.

Update: Doofus Longears – Get ’em Gems has been released and can be found on www.doofuslongears.com

Whew, finally found some time to update the blog. I have been frantically working on putting the final polish on the game: business-related activities, tweaking graphics, ironing out small glitches in gameplay, play-testing levels, and the list goes on!

My major headache was the background. A lot of people had complained about the background not being up to the mark, so I decided to paint a brand new one from scratch. It was a hell of a lot of work though. Doofus 3D being a cartoon game, I wanted a flamboyant background (rich and colorful, with a distinct cartoon touch). However, it's not quite that simple; it's not as easy as firing up good ol' GIMP and just having a go at painting any ordinary scene. Since you are painting for a sky-box, you have to be a lot more careful and a lot more sensitive about how to handle depth in your scene, plus you have to paint for a full panoramic view. A lot of experimentation went into this one, believe me! Lots of hits and misses later, and after studying some other skyboxes, this is what I ended up with.

As you can see, the background is a whole lot better than the muddy, dingy background from the screenshots of the previous beta. Plus there is something more: yes, the first pictures of new characters. More later 😉.