Cg 2.0 Beta.

NVIDIA released the Cg 2.0 beta a few days back. I downloaded the new version, including the docs, and was browsing through them. What particularly interested me was the sentence “New OpenGL profiles (gp4vp, gp4gp, and gp4fp) for GeForce 8 extensions” in the features list. It’s a little confusing though, because the docs don’t go into how exactly these profiles are used, or whether they simply map to the new extensions. Another thing that seems amiss is how exactly to access the new DirectX 10 features from Cg, or am I missing something? I don’t have a DirectX 10 compatible card so I can’t really test anything, but I checked out the examples provided with the SDK and there seem to be none for DirectX 10.

The docs mention that 2.0 is compatible with 1.5, so I tried out some of my older test shaders written with the previous version (1.5) and they worked OK. Then again, those weren’t particularly extensive.

The GUI of choice for cross-platform development – wxPython.

I just finished a small project in wxPython and I must say I am pretty impressed with the library. A couple of weeks back I had to do a small assignment (not related to the game or the engine I am currently working on) which required a fair amount of GUI work. The project specification demanded that the software run on Linux, Win32 and Solaris. Solaris support, however, was later dropped due to time constraints. It was a pretty small project (considering, of course, that I have been working on the engine and the game for well over 2 years now); about 5 days of work in total. I was a bit hesitant to take on the work initially, but changed my mind once it was decided that Python and wxPython were OK.

I’ve been meaning to take wxPython for a spin for a while now, and this seemed like the perfect opportunity. If you have read my Selecting a scripting language articles (1 & 2) you will know I have a secondary interest in working with Python; I am looking to integrate Python into the engine. This gave me an opportunity to look at the wxPython code and bindings in detail. That is a totally different topic in itself, and I will go into further detail in the third part of the series (whenever I get around to writing it, probably soon). Coming back to wxPython: it’s been about 2 years since I last worked with it, and I liked it then and I like it now.

Today wxPython is, in my opinion, probably the best Python GUI toolkit. I haven’t worked with PyQt, but I have tried messing around with PyGTK. PyGTK, to be fair, is kinda OK, but it is less well supported than wxPython. The fact that wxPython is far more extensive in widget and framework support than PyGTK also makes the former the more attractive choice. PyQt was out of contention for me for two reasons: a) It has a very shady license (I think it is not LGPL-compatible); you can’t trust it enough to use it in a commercial product without breaking some license somewhere. Too controversial! And b) Qt is just such a horrible framework! I know people are going to disagree with me on this one, but Qt’s moc compiler thing just makes me cringe. I have worked on a large project involving Qt and C++, and those moc_* files can be a nightmare at compile time. Besides, Qt uses archaic and redundant C++ practices to maintain compiler compatibility. So I generally tend to “run away” from Qt and Qt projects.


A vest that lets you FEEL the Game!

A US surgeon has devised a vest that, when worn and plugged into your computer, allows you to feel the blows from virtual game characters (read here). Now that is really interesting! It allows the player to experience another dimension of gameplay never before possible: the consequences of his actions in the virtual game world are felt on his own body in the physical world. The vest lets you feel the shocks, stabs and hits occurring inside the game. Players wearing the vest will now have to deal with the implications of their actions. Ah! Responsibility! No mad shooting in an FPS game from now on. That’s refreshing for a change. It’s no secret that I am a fan of games where the player has to deal with the consequences of his actions. FPS games today just lack variety, and this vest, it seems, may provide the right impetus to what has become a rather monotonous genre.

Response to a rant: Is OpenGL being abandoned?

It is no secret that the gaming industry is dominated by Windows platforms and that the API of choice is DirectX. There are some staunch followers of the OpenGL way of life, but their numbers seem to be dwindling rather rapidly. I have read a lot of blogs claiming that OpenGL is better than DirectX or vice versa. I even got a mail (or two) from an unknown person recently claiming that my War of the graphic APIs post was rather biased towards DirectX. Let me assure anyone and everyone that this is certainly not the case. I am supporting both APIs in my engine, and let me say this again: “Both APIs are functionally equivalent. It is not the API that determines the performance of a game, but rather the underlying hardware.”

The email further went on to show some in-game screenshots to claim that OpenGL based games looked better than DirectX games. Now, anybody who has worked on a game that uses either API knows what a folly this is. “It is not the API that determines how a game looks; it is the artists who create the content, and the engine programmers who provide the technology (shaders, level builders, script support) to those artists, that determine how the game looks.” Game design also plays an important role. In any case, it’s not the API. The mail didn’t have a valid sender address, so I could not reply. In any case: dude, you could have just posted a comment and I would have been glad to respond.

No, OpenGL is not dying out. However, I am sorry to say, OpenGL is falling behind. It is being increasingly abandoned in favor of DirectX. Don’t believe me? OK, read this; at least believe him. I think I made it pretty clear why that is the case in my earlier post, so I am not going to outline the same points again.

Chipping the Bug.

There is an old saying in software development circles which goes something like this: “A bug can never be created nor destroyed. It can only be changed from one form to another.” While that is not to be taken too literally, bug fixes do often lead to regressions that are difficult to track down and fix; especially the ones that occur at the end of a release cycle, or, worst of all, on the end user’s or customer’s machine. Well, not if two researchers, Chad Sterling and Ron Olsson, from UC Davis have their way.

Their research has led to a new debugging technique which reduces a large piece of software into smaller fragments called “variants”, which are then used to track down bugs (reference). The technique is called “chipping”. They have even developed a tool (a “chipper”) called ChipperJ in Java, which they claim reduced a large program down to 20 to 35% of its size. I have no idea exactly how the program does this, but apparently it works from the original source code. While it is debatable whether and how such a system would apply to industry-scale software spanning more than a million lines of code, it is certainly something to be interested in if you are a developer. While the technique may not remove the human factor altogether from the process of debugging, it certainly is a novel idea that could push automated testing to another level.
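The paper doesn’t spell out the reduction mechanics, so take this with a grain of salt, but the general idea of shrinking a program while preserving a failure can be sketched with a simple greedy reduction loop. This is closer in spirit to classic delta debugging than to ChipperJ itself, and every name below is hypothetical:

```python
def chip(fragments, still_fails):
    """Greedily drop fragments while the bug still reproduces.

    fragments   -- ordered pieces ("chips") of the program
    still_fails -- predicate: does this subset still exhibit the bug?
    Returns a smaller list of fragments that still fails.
    """
    kept = list(fragments)
    changed = True
    while changed:
        changed = False
        for frag in list(kept):
            candidate = [f for f in kept if f is not frag]
            if still_fails(candidate):   # frag wasn't needed to trigger the bug
                kept = candidate
                changed = True
    return kept

# Toy "program": the bug reproduces whenever the buggy line is present.
program = ["setup", "read input", "buggy line", "cleanup"]
reduced = chip(program, lambda frags: "buggy line" in frags)
# reduced is ["buggy line"]
```

The expensive part in practice is the `still_fails` oracle, which has to rebuild and re-run each variant; presumably most of the cleverness in a real chipper goes into choosing fragments so that variants remain compilable.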

The authors of ChipperJ suggest that their system could be applied to large and complex projects, and their research paper does provide interesting insights into the method. They are of the view that the paper is just a preliminary draft of what promises to be a new approach to debugging software. The method, they claim, can be combined with more traditional techniques like program “slicing” in the debugger to get even better results.

Soft body physics with FastLSM.

I have always been interested in soft body physics, and this is one of those algorithms which promises to stand out from the crowd. While soft body simulation has been the subject of considerable research, its application to games in general has been limited, because such simulations are complex and time consuming; simulating a lot of soft bodies at a time can be punishing to the frame rate. The FastLSM algorithm appears to be pretty impressive in terms of speed and accuracy, even for a fair number of objects in the scene. However, I am not so sure how it will scale to the arbitrarily complex worlds often encountered in games.
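FastLSM is built on lattice shape matching: particles are pulled toward goal positions derived from a best-fit rigid transform of the rest shape. The real method extracts a rotation via polar decomposition and uses fast summation over overlapping regions; the toy sketch below does only the translation part of a single shape-matching step, and the function name and stiffness value are illustrative, not from the paper:

```python
def shape_match_step(rest, current, stiffness=0.5):
    """Pull current positions toward a rigidly translated copy of the rest shape.

    rest, current -- lists of 1D particle positions (same length)
    stiffness     -- fraction of the way each particle moves toward its goal
    """
    n = len(rest)
    rest_centroid = sum(rest) / n
    cur_centroid = sum(current) / n
    # Goal positions: the rest shape moved onto the deformed shape's centroid.
    goals = [p - rest_centroid + cur_centroid for p in rest]
    # Each particle relaxes partway toward its goal; centroid is preserved.
    return [c + stiffness * (g - c) for c, g in zip(current, goals)]

# A 1D "rod" stretched at one end relaxes back toward its rest spacing.
rest = [0.0, 1.0, 2.0]
deformed = [0.0, 1.0, 4.0]
step1 = shape_match_step(rest, deformed)
```

Iterating the step pulls the stretched gap back toward the rest spacing while leaving the centroid (i.e. overall momentum) untouched, which is what makes shape matching so stable compared to stiff springs.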

I have been particularly interested in soft body physics for the simulation of liquids ever since I first saw a soft body simulation (and that was a while back). I never got around to really “getting my hands dirty” with this stuff, though. Maybe I should give it a try one of these days. I must say the authors have been kind enough to provide fully commented source code along with their paper. That will definitely help aspiring candidates like me who want to implement such simulations.

Silverlight looks promising.

Microsoft recently released version 1.0 of its “cross-browser, cross-platform plug-in for delivering the next generation of .NET based media experiences and rich interactive applications for the Web”, called Silverlight. Last week I had a sneak peek at the software and was pleasantly surprised to see that Microsoft is also developing a Linux version of the plug-in with the help of Novell, codenamed Moonlight.

As such, Silverlight is just a plug-in that delivers interactive .NET content to your web pages. It is particularly interesting to see where this leads with regard to game development. Silverlight is almost certainly a challenger to Flash’s monopoly on web-based games. It offers a flexible programming model that supports JavaScript, .NET languages and others, so it will particularly strike a chord with programmers who already know these languages.

Are we going to see more Silverlight based games? I am sure we will. .NET applications are very easy to build now that there are great environments like Visual Studio, Visual Studio Express and #develop, which means people are naturally going to build games that run on Silverlight. This software is clearly something to watch if you develop web-based games. I hope Microsoft offers graphics hardware acceleration in some form on the Silverlight platform; that would just be amazing.

Does Google “Ad Sense”?

It seems like when it’s Google, they can find a marketing edge anywhere. Google is all set to take its advertisements-for-websites service a step further and have “AdSense for Games”. Those annoying ads that fill up websites will now appear in your favorite games too. I doubt that major development houses or AAA games will carry them, but you can pretty much expect every small indie Flash game and casual game to carry some sort of Google ads very soon. Currently Google is targeting web-based games, but it soon plans to move to console and PC games as well. Go figure! Apparently it will work much like the current system: developers put AdSense into their games, and advertisers then use Google’s system to place advertisements in the game while it is being played.

Advertising in games is not new, but the entry of heavyweights like Google and Microsoft is significant. I remember Coke tried something like this once, but without much success. However, players like Microsoft and Google have the technology engines to do it right, so advertising in games could be here to stay. If it does become a success, then we could well see most game publishers get bitten by the money bug sooner rather than later. It would mean a whole new way of generating cash inflow for developers and publishers alike, and that means ads in games could well be the norm of the future.

Games are essentially an interactive entertainment medium, so you could see yourself interacting with the ads too. Nothing is preventing developers and designers from finding innovative ways to get the player’s attention just so they can earn an extra buck. Does this mean our experience with games will soon be marred by ads? Well, frankly, I don’t know. As a player I wouldn’t want nagging ads in an otherwise intense gameplay session. But as a developer or a publisher, I could earn an extra buck every time a player interacts with an ad. It essentially comes down to who wins in the end. I hope sanity prevails.

Hmm… so the next time I am fragging Stroggs on Stroggos, I guess I shouldn’t be surprised to see a hoarding inside the game saying, “Matrimony Online! Come Find Your Life Partner!” or “Get your degree online! No need to study, just play games!”. Or, ooh, we could have a commercial break in the middle of a game: “HappyDent White Chewing-gum. Keeps your teeth shining white so you can see Alien HellSpawn in the Dark!”. Nice, can’t wait to see that day!

The war of the graphic APIs.

So it’s finally here! I mean the OpenGL 3.0 update, formerly known as Longs Peak, has officially been announced at SIGGRAPH. Read more about it here. It’s been way too long coming. I know the announcement was made a while back, but I didn’t get time to read the details and proposals until this weekend. The Birds of a Feather (BOF) presentations are interesting and, if you are an avid OpenGL follower, a must-see. While the true and detailed specs are not yet released, the discussion threads do provide some interesting insight into what can be expected. If you are already an OpenGL programmer, expect significant changes to the way you work with OpenGL. The update removes some redundant and archaic practices that have existed in OpenGL for way too long, which makes me happy. OpenGL was last rewritten in 1992 and has survived this long, arduous journey of hardware updates pretty much unchanged. Some would argue that the current mechanism of extensions has cluttered the API too much and can be a pain to work with. I don’t blame them. Let’s just hope the new update addresses these and other issues better than the previous versions did. Personally, I never found extensions too difficult to work with; generally the complaints come from DirectX folks who have switched over to OpenGL and have to adapt to a new way of working.
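For what it’s worth, most of the day-to-day mechanics of using extensions boil down to querying the space-separated GL_EXTENSIONS string and checking for whole tokens before loading entry points. A minimal sketch of the token check (the extension string here is made up for illustration; a real one comes from glGetString):

```python
def has_extension(extensions_string, name):
    """Return True if `name` appears as an exact token in a GL_EXTENSIONS-style string."""
    # A plain substring test is a classic bug: "GL_ARB_shadow" would match
    # inside "GL_ARB_shadow_ambient", so split into whole tokens first.
    return name in extensions_string.split()

# Illustrative extension string, not a real driver's output.
exts = "GL_ARB_vertex_buffer_object GL_ARB_shadow_ambient GL_EXT_framebuffer_object"
fbo_ok = has_extension(exts, "GL_EXT_framebuffer_object")  # True
shadow_ok = has_extension(exts, "GL_ARB_shadow")           # False: exact token match only
```

The rest of the boilerplate, fetching function pointers once the token is found, is exactly what loader helpers exist to hide.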

Talking about DirectX: it’s been quite a while since the release of DirectX 10, and it is apparent that DirectX has a clear advantage over OpenGL at the moment with regard to exposing newer functionality. I don’t buy the argument that DirectX is in any way superior, or for that matter inferior, to OpenGL. Many people have argued this over the years, but the argument is far from conclusive. While DirectX may expose the latest functionality faster than OpenGL, the functionality eventually gets exposed via OpenGL too. So to all the naysayers, my response is: it’s not so much the graphics API but the underlying hardware that determines how well your card performs and how much graphics throughput you get. The O2 Engine abstracts both APIs, and through the years of developing the engine I have seen that both APIs are neck and neck when compared for speed. There is no clear winner when comparing the two from a purely technical perspective. While DirectX is the favored one in game development circles, OpenGL is the API of choice in CAD and science and research circles, but that is for very different reasons, not speed or performance. What people fail to realize is the fact that DirectX and OpenGL are very similar; more similar than you would think. With the release of the 3.0 update, I think (and this is my personal opinion) the two APIs will end up being even more similar. DirectX 10 fixed the shortcomings of DirectX 9 and previous versions, and OpenGL 3.0 has addressed issues that were problematic with legacy OpenGL. So in the future, I hope, there will be little to choose between the two.

That was a zoomed-in, API- and technical-level picture; let’s stand back a bit and look at the bigger one. DirectX was first off the block, so is it winning the race? At the moment it would seem so. Programmers and developers have already begun working with the API and getting their hands dirty. Hardware for Shader Model 4.0 has been available for quite a while and is now considered reasonably cheap, which means development houses have already come up with good engines. But there is a catch to all of this: DirectX 10 is only available on Vista machines, which could prove to be somewhat of a stumbling block. Now don’t get me wrong, there is nothing wrong with Vista as such, but I can’t understand the sanity in not releasing DirectX 10 for XP! The driver architecture reason given by Microsoft is crap! Releasing it for XP would have meant an even larger market for games that use DirectX 10. The only reason I can think of for this decision is that Microsoft wants to push its OS onto end users’ desktops. Gabe Newell of Valve Software touched on this point recently, emphatically saying that tying DirectX 10 to Vista was a mistake! I don’t buy into his argument completely; I think users will eventually buy Vista anyway. We have seen this arm-twisting by Microsoft before, and it does eventually force people to buy their OS.

The question to ask is: does DirectX 10 offer truly unique graphics compared to 9.0? The answer at the moment is NO, but eventually YES; today you can get similar graphics with 9.0 or OpenGL 2.0. That said, the Xbox 360 does not support DirectX 10 yet, so cross-platform developers are going to stick with 9.0 for a while. While OpenGL 3.0 promises a lot, its true potential will only be realized when it arrives on our desktops. Also, the spec for the Mount Evans update, which promises geometry shaders, is still to be finalized, so I think it will come after, probably much after, the 3.0 release. The OpenGL committee seems to be slow again, as usual. While DirectX 10 games are here, DirectX 9 is not history just yet.

Extending the code documentation generator.

When it comes to documenting code, there is nothing better than a documentation generator, and when you talk about documentation generators, none is perhaps more widely adopted than Doxygen. I am using Doxygen in my current project and I have used it on my last 3 projects. Doxygen is pretty good at generating documentation and, I must add, does the job with little fuss. But besides its conventional use, I decided to experiment with it a little more on this project. I have always believed that unnecessary and excessive documentation is counterproductive to a project’s life cycle. Mindlessly producing Word documents and dumping them in a folder often adds no value whatsoever to a project, simply because that documentation never gets properly referenced by the developer, the designer or the team leader, especially in times of intense project pressure.

Before you brand me a development-process heretic, or deem me a free-willed do-what-you-like vagabond, let me make things clear: I am neither. I am very much in favor of proper and detailed documentation, and I myself follow strict processes (too strict, sometimes) in every cycle of development. Regarding documentation, the thing I find inconvenient is the fact that documentation gets fragmented into different files and often into different locations. Some teams are careful never to let this happen and control it by using very specialized software (Rational products offer such tools), but products like these can burn a hole in the pocket of even a large corporation, and for budget teams like ours they are simply not a viable solution.

From the very beginning I have had this gut feeling that documentation should be in one single place and accessible via one single interface. I will go a step further and say that I, as a developer, would like to see the design, the documentation and the code in a single place at any given time. This is immensely helpful when you are deep in development: you can link back to the design at the click of a button and, with the right tool, quite literally jump to the exact spot you want. This to-and-fro movement between documentation, design and code can serve as a powerful cross-check mechanism. For it to work, the docs, the code and the design have to be in perfect sync; when one changes, everything needs to be updated. I found this very helpful in managing the current project. It kept me and the team focused on both macro- and micro-level design from the very beginning.

We started off with this idea in our project and decided to implement it via Doxygen. Since Doxygen builds its docs as HTML, extending it to incorporate such functionality was, well, trivial. Believe it! I haven’t spent more than about 1 day (total) on this stuff. We have made no code changes to the Doxygen program; all our changes are superficial, needing only Doxygen’s built-in functionality and some minor (but intuitive) HTML tweaks. For this release we went a bit further and decided to link our bug-base into the design-documentation-code nexus. It has proved even better than expected. Not everything has been ironed out yet (in fact a lot remains), but it already seems like a nice way to handle things.
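I can’t share our actual setup, but the flavor of it is easy to show with stock Doxygen features alone: plain \see links can point at design pages hosted next to the generated HTML, and a custom ALIASES entry in the Doxyfile can turn a one-word command into a link straight into a bug tracker. A sketch along these lines, where the paths, the command name and the bug ID are made up for illustration:

```cpp
/// \file renderer.h
/// \brief Scene renderer interface.
///
/// \see <a href="../design/renderer.html">Renderer design notes</a>
/// \bugref{1042}
class Renderer
{
    // ...
};
```

The `\bugref` command is not built into Doxygen; it would come from a Doxyfile alias such as `ALIASES += "bugref{1}=<a href=\"http://bugs.example.com/id=\1\">Bug \1</a>"`, which is exactly the kind of change that keeps the bug-base linking a configuration tweak rather than a code change.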

Design documentation

Class documentation

Class diagrams

I am not at liberty to show more diagrams of the design, but rest assured they are there.