Code::Blocks 8.02 has been released.

The much-awaited release of Code::Blocks has finally happened. I have been waiting for this for, like, forever! It’s time to go get the new IDE. From the initial looks of it, it’s been worth the wait. More later!

Into the mind of a casual gamer.

I haven’t said or ranted about gaming in quite a while now. Quite unlike me; I usually have a lot to say, and rant about, especially when it comes to gaming. If you read the last 10 posts you would never guess I was a game developer. So I guess it’s time to put the blog back on track! The absence of any game-related posts has been for a couple of reasons. For one, I haven’t installed any new games lately; too busy finishing off Doofus 3D. Neither have I upgraded my aging NVIDIA 6200 and 6600 cards, so anything new like Bioshock or Crysis is out! I have promised myself a brand new 8800 GTS or something newer as soon as I complete and ship the game, and that’s been, like, forever now. On the other hand, for the past 6-8 months I have had the opportunity of actually watching others play games, and it’s been a learning experience as a game designer. You wouldn’t believe it, but one of them happens to be my mother. She has been doing all the playing while I have been working my ass off to finish my game, though I must say her taste is very (very) different from mine. No Half-Lifes or Oblivions for her; she enjoys the Zumas and the Insaniquariums, casual games in general.

Just watching and being a spectator can teach you volumes about how people enjoy computer games. Casual gamers can be very different from their hardcore counterparts in some respects and very similar in others. A casual gamer is generally not interested in complex gameplay. They are the ones who enjoy simple yet immersive games. Simple puzzles and not-too-complex challenges are favored over long story lines and dark alleys, and short levels are appreciated too. For instance, I tried selling a game called Caesar III to my mother, but nope, it didn’t work. For those who don’t know, Caesar III was a city-building game, kinda like Sim City, and I loved it when I played it. The game has no gore, so I was pretty sure she would love it. However, the comments I got from her were, “Too complex”, “Very difficult to manage”, “Can’t understand it”. It just goes to show that casual gamers tend to favour an easier set of “game rules”. Anything more complex than a few keystrokes is a big NO.

Over the past few months I observed and chatted with a few other casual gamers too, mostly friends and relatives who often play casual games. It’s always good to understand the market mentality, especially if you are developing a product catering to it. Most casual gamers I chatted with have played 2D games, but quite a few, I must say, have played 3D games. Interesting, since I am developing a 3D casual game myself. When asked if they would be interested in such a game, the overwhelming response was yes, provided, obviously, that it was a simple 3D game. Most said they would be very interested, though some were unsure whether they would actually like it. A few even told me they wouldn’t mind learning some extra keys to play something like Marble Blast. So it means Doofus 3D does have a market after all, but it remains to be seen whether that will indeed translate into sales.

In general I have found that casual gamers dislike having to learn a lot of rules, period! Games that involve significant learning effort quickly become unpopular, so I guess games like Oblivion are out. Gore is also not much appreciated. There is an interesting paradox here, unexpected I must say: many casual players admitted to playing arena games like Quake III. The reason: such games are easy and fast-paced. Most enjoyed them; some said the games were violent, but they generally didn’t mind. None of the casual gamers I talked to knew exactly what game genres were, and none cared. Most said they looked at the game cover or screenshots to decide whether they would play a game. 90% said they played Flash games. About half said they didn’t understand things like DirectX, OpenGL or hardware acceleration. Nearly all said they had only on-board graphics. More than half didn’t even know you could install a separate graphics card. More than 75% said they had laptops and played games on them.

OK, my survey is far from being completely accurate. All I have done is ask a few questions of people I have met in the past year or so; I have in no way used a proper methodical approach. However, it does give you an insight into the mind of a casual gamer. As a game developer I tend to forget these important lessons and sometimes do go overboard. Then I usually drag myself back to reality!

Virtualization: What is it exactly?

There has been a lot of buzz about virtualization and a lot has been said about it everywhere. Recently, anybody and everybody who has even remotely anything to do with operating systems is talking about it. So what exactly is it? And more importantly, what benefits do you gain from virtualization? Is it even that useful? A lot of questions; let’s look at them one by one. First, let’s look at what virtualization actually is. By its very definition, virtualization means “abstraction of computer resources”. To put it simply, it means sharing the physical resources of one system among several or multiple logical resources. In its true sense virtualization is a broad term and can mean different things; CD/DVD emulation software, for example, can be called one form of virtualization. However, virtualization is often mixed up and used interchangeably with “hypervisor”. A hypervisor is a virtualization platform, meaning a platform on which you can run multiple operating systems simultaneously, often under one parent or host operating system, i.e. under one operating system kernel. Every operating system under a hypervisor runs inside its own virtual world or virtual computer (if you could put it that way), completely isolated from the host or any other operating system that might also be running under the hypervisor.

[Screenshots: ReactOS running under QEMU; Ubuntu 7.10 (Gutsy Gibbon).]

So what is the real use of this whole virtualization thing? Several, and as a programmer/developer even more. My first experience with virtualization was with VMware, about 4 years ago when I was working on a freelance project. I used VMware to port an application from Windows to RH Linux. I did a lot of monkeying around with the virtualization platform back then, and soon realized that such technology can be put to good use, more so now that we have machines with multiple cores. Besides its obvious benefits for a cross-platform developer, virtualization can also be used very effectively for other purposes. Since every OS runs in its own separate world, you can pretty much do anything with it. To name a few examples: how about hacking the Linux kernel, or maybe writing a device driver? Maybe try a hand at screwing up the file system. Crazy things you would never try on your own machine are all possible using virtualization software.

On a more serious note though, besides giving you an environment to screw around in, what more does virtualization provide? The benefit that is probably of most interest to an enterprise is utilization. Under-utilized servers and machines can be made to run multiple virtual machines, each taking more responsibility and more load. People have often cited the benefits of virtualization as saving on physical resources like hardware, power consumption, management and infrastructure, all leading to obvious cost benefits for an enterprise. With machines getting more and more powerful, this could well become the norm in the future. Utilization could well be the single most important reason a technology like virtualization sees so much interest.

Also of interest is the ability of virtual systems to be used as secure, sandboxed testing environments. They could be (and already are being) used for software QA, analysis and controlled-environment test beds for applications and untested software. Tech support can simulate client-specific scenarios on virtual systems that mirror real systems. Virtualization is also excellent for carrying out performance analysis and running complex debugging scenarios that would be very difficult to do on normal systems.

The next use will probably be of more interest to the average Joe/Jane net surfer. If you haven’t already guessed it, virtualization can be used to safely surf the web and keep your system free of the viruses and worms that infest web pages, especially the “good ones” (you cheeky monkey, you!). Even if your virtual system were to get infected or hacked, it’s just a matter of reloading the entire virtual OS and everything is wiped clean. No cookies, no browsing history, no nothing! No one logging into your computer will ever know which sites you visited or what you did. Sounds interesting, doesn’t it? 😉

So where can you get your hands on something like this? VMware offers free virtualization software which you can download and use, and I would recommend it hands down, since it is probably the best one I have used. Besides that there are also free ones like QEMU, which I use quite often. If you are on Linux, also check out KVM, Xen and lguest, though I haven’t used them myself. Then there are commercial offerings from Microsoft, VMware and others that are a lot more powerful, which you are welcome to try.

[Edit] Microsoft Virtual PC is another free virtualization option for Windows.
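If you want to see what running a guest OS under QEMU looks like in practice, here is a minimal sketch that launches a virtual machine from a small Python script. The disk image, ISO name and memory size are placeholders for illustration; it assumes the qemu-system-x86_64 binary is installed on the host.

```python
# Minimal sketch: booting a guest OS inside QEMU, driven from Python.
# "disk.img" and "ubuntu-7.10.iso" are placeholder file names; substitute
# your own images. The guest runs completely isolated from the host.
import subprocess

QEMU = "qemu-system-x86_64"   # system emulator binary shipped with QEMU


def launch_vm(disk_image, cdrom_iso=None, memory_mb=512):
    """Start a virtual machine and return the running QEMU process."""
    cmd = [QEMU, "-hda", disk_image, "-m", str(memory_mb)]
    if cdrom_iso:
        # Boot from the CD image first, e.g. to install the guest OS.
        cmd += ["-cdrom", cdrom_iso, "-boot", "d"]
    return subprocess.Popen(cmd)


if __name__ == "__main__":
    vm = launch_vm("disk.img", cdrom_iso="ubuntu-7.10.iso")
    vm.wait()   # blocks until the guest is shut down
```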

Designing user-friendly GUIs.

What is a good GUI? Or should I ask, what makes a GUI good or bad? Is it the snazzy looks, the cool effects, transparent windows, bright colors, or is it really something else? Why do people hate some GUIs and love others? Or is it just a matter of personal taste? Something to really think about, I would say. GUI is actually a very broad term and can mean different things to different people. For a web site, a GUI could mean the web page. For an application it could mean the application interface. For an everyday computer user it could very well mean the windowing system. A GUI is a graphical interface to a system, and the system can be, well, anything really. Though this entry is more about application GUIs, most points discussed here are also directly or indirectly valid for other GUIs.

Most novice developers working on GUIs never really understand, or for that matter appreciate, the meaning of the phrase “good GUI”. It is often confused with “good-looking GUI”. That, however, is not the same thing. Yes, it is true that a good GUI must look good; you or the users of your application wouldn’t want to use a bad-looking GUI. However, the single most important thing any GUI must accomplish is user-friendliness. How many times have you seen a good application get a bad reputation just because it was difficult to use? I would say too many times. Sometimes a GUI can make or break your application, and often a bad GUI gives a bad reputation to an application that would otherwise have had much wider appeal. During my initial years as a developer I worked a lot with GUI libraries, everything from MFC and Qt to FLTK and wxWidgets, and learned most GUI design principles the hard way, mostly by trial and error.

If you were to Google around for GUI design principles you would no doubt get a bunch of sites that give you some very good material on the topic. However, GUI design is more about experience. Just saying “You must know your user”, or “Empower the user”, or “Know the user’s perspective” doesn’t really cut it. True, most of that advice is accurate and you are more than welcome to read it, but these are very broad phrases and often lead to nothing conclusive. As any good engineer knows, data and facts are what matter. So let’s look at some of the hits and misses in GUI design.

a) Overly bugging the user. This is probably the worst mistake any GUI can make. Remember those annoying pop-up dialogs in Vista? “Do you really want to move this file?”, “Do you want to install this program?”, and more. Nope, please don’t do that! It will make any user interface extremely unpopular. The main culprits are modal dialogs, one of the most overused GUI features. I would go so far as to say: avoid modal dialogs except in the rarest of rare cases. Only use them to warn the user about data loss, nowhere else! Even in situations where they are used, allow the user some way to turn the warning message boxes off.

b) Use tool-tips and modeless pop-ups, and use them judiciously. I can’t understand why so many UIs use modal dialogs to present information. Tool-tips are a great way to show information and can be extremely useful in enhancing the GUI. Use tool-tips instead of modal dialogs to convey simple bits of information. In one application I replaced informational modal dialogs with tool-tips that would fade away after about 10 seconds or so; they were an instant hit. Azureus comes to mind, the program uses a similar feature to warn users. Also, if possible, try to replace modal dialogs with modeless ones. Again, modal dialogs are not all that great even for user input. Compare the search (Ctrl+F) in Internet Explorer 6 with the search in Firefox. Which do you think is more intuitive?
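To make the idea concrete, here is a minimal sketch of an auto-expiring tool-tip in Tkinter (chosen only for illustration; the original application used a different toolkit). The 10-second lifetime, widget names and label text are all just placeholders.

```python
# Minimal sketch: an auto-expiring tool-tip instead of a modal "info" dialog.
import tkinter as tk


class InfoTip:
    """Shows a small borderless window near a widget; disappears on its own."""

    def __init__(self, widget, text, timeout_ms=10000):
        self.widget, self.text, self.timeout_ms = widget, text, timeout_ms
        self.tip = None
        widget.bind("<Enter>", self.show)
        widget.bind("<Leave>", self.hide)

    def show(self, _event=None):
        if self.tip:
            return
        x = self.widget.winfo_rootx() + 20
        y = self.widget.winfo_rooty() + self.widget.winfo_height() + 5
        self.tip = tk.Toplevel(self.widget)
        self.tip.wm_overrideredirect(True)          # no window decorations
        self.tip.wm_geometry(f"+{x}+{y}")
        tk.Label(self.tip, text=self.text, background="lightyellow",
                 relief="solid", borderwidth=1, padx=4, pady=2).pack()
        self.tip.after(self.timeout_ms, self.hide)  # expire automatically

    def hide(self, _event=None):
        if self.tip:
            self.tip.destroy()
            self.tip = None


if __name__ == "__main__":
    root = tk.Tk()
    button = tk.Button(root, text="Save")
    button.pack(padx=40, pady=40)
    InfoTip(button, "File saved successfully. No OK button to click!")
    root.mainloop()
```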

c) Another annoyance is leaving the GUI unresponsive. If there is a lengthy task, please delegate it to a worker thread, and see to it that there is always a way for the user to cancel a lengthy and intensive task. Never, ever leave the user in a state where he/she can’t interact with the UI. It’s really annoying.
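Here is a hedged sketch of that pattern in Tkinter: the lengthy task (a dummy loop here) runs on a worker thread, the GUI polls a queue with after() so it stays responsive, and a Cancel button sets a flag the worker checks between chunks of work.

```python
# Minimal sketch: keep the GUI responsive by delegating a lengthy task to a
# worker thread, and always give the user a way to cancel it.
import queue
import threading
import time
import tkinter as tk

progress = queue.Queue()          # worker -> GUI messages
cancel_event = threading.Event()  # GUI -> worker cancellation flag


def lengthy_task():
    """Placeholder for real work; checks the cancel flag between chunks."""
    for i in range(100):
        if cancel_event.is_set():
            progress.put("Cancelled.")
            return
        time.sleep(0.1)                       # simulate one chunk of work
        progress.put(f"Working... {i + 1}%")
    progress.put("Done.")


def start():
    cancel_event.clear()
    threading.Thread(target=lengthy_task, daemon=True).start()


def poll():
    """Runs on the GUI thread: drain the queue, then reschedule itself."""
    try:
        while True:
            status.set(progress.get_nowait())
    except queue.Empty:
        pass
    root.after(100, poll)


root = tk.Tk()
status = tk.StringVar(value="Idle")
tk.Label(root, textvariable=status, width=30).pack(padx=20, pady=10)
tk.Button(root, text="Start", command=start).pack(side="left", padx=10, pady=10)
tk.Button(root, text="Cancel", command=cancel_event.set).pack(side="right", padx=10, pady=10)
poll()
root.mainloop()
```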

d) Shortcut keys are such a help; I tend to use them all the time. A GUI application must have shortcut keys for the most commonly used tasks and must be consistent with them. For example, never use Ctrl-Z for pasting something; Ctrl-Z is always an “Undo” command in Windows applications, at least correctly designed ones, so stick to it. Not all tasks need shortcut keys; actually it’s not always prudent to have shortcut keys for commands that involve data loss. Also try to keep shortcut keys spaced apart. OK, another example here: a friend of mine was working on an application where he assigned Ctrl-S to save and Ctrl-D to delete. A few months later he got a nasty mail from a client asking him to remove Ctrl-D, since the client kept accidentally hitting the delete combination while saving. Also, overly complex shortcut keys and key combos like “Ctrl+Alt+F10” are not well appreciated.

e) “Empowering” GUIs are very, very popular. GUIs that allow the use of both hands can quickly become a hit. Consider the copy-paste commands. Initially the combos used to be Ctrl-Insert and Shift-Insert; however, for a right-handed person that would mean lifting a hand off the mouse to operate the shortcut keys. So two new shortcuts were introduced, Ctrl-C and Ctrl-V. Now the user can select content with the mouse and copy-paste with shortcuts using the left hand. For those who don’t know, Ctrl-Insert and Shift-Insert still work in Windows Notepad and most other GUI editors under X, so left-handed people can still take good advantage of them. Such design can really go a long way, because it empowers the user to work rapidly.

f) Good GUI applications strive for “Undo” and “Redo”. This is easier said than done; it requires a lot of thought and pre-planning at the design stage of an application to have fully functioning Undo-Redo. However, this is a “must” in today’s GUI design. Can’t escape it. Hint: if you want Undo/Redo in your application, apply the Command and Memento design patterns to your design.
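As a rough illustration of that hint, here is a tiny Command-pattern sketch in Python with undo and redo stacks. The “document” is just a list of strings; a real application would capture richer state (mementos) per command.

```python
# Minimal sketch of the Command pattern driving Undo/Redo.

class AppendTextCommand:
    """One undoable action: append a piece of text to the document."""

    def __init__(self, document, text):
        self.document, self.text = document, text

    def execute(self):
        self.document.append(self.text)

    def undo(self):
        self.document.pop()   # commands are undone in LIFO order


class CommandManager:
    """Keeps the undo and redo stacks for the application."""

    def __init__(self):
        self.undo_stack, self.redo_stack = [], []

    def run(self, command):
        command.execute()
        self.undo_stack.append(command)
        self.redo_stack.clear()          # a new action invalidates old redos

    def undo(self):
        if self.undo_stack:
            cmd = self.undo_stack.pop()
            cmd.undo()
            self.redo_stack.append(cmd)

    def redo(self):
        if self.redo_stack:
            cmd = self.redo_stack.pop()
            cmd.execute()
            self.undo_stack.append(cmd)


if __name__ == "__main__":
    doc, mgr = [], CommandManager()
    mgr.run(AppendTextCommand(doc, "Hello"))
    mgr.run(AppendTextCommand(doc, "World"))
    mgr.undo()      # doc == ["Hello"]
    mgr.redo()      # doc == ["Hello", "World"]
    print(doc)
```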

g) Tool-bars are friends. Tool-bars are a must for all but the most trivial applications. True, some applications are too small to have tool-bars, but the rule of thumb is: if an application has menus, it should also have tool-bars.

h) Another thing I hate is deeply nested menus. If an application has to have them, then it must also provide a tool-bar for those menu items. Deeply nested menu items can be difficult for the user to locate. I have seen this, maybe not too often, but sometimes applications do bury commonly used functionality deep in the menu hierarchy. Not too well appreciated, I must say.

i) Applications that fail to set the tab order on dialogs can quickly become unpopular, especially with laptop and notebook users. The same goes for accelerator keys on input widgets. I have seen really good, professional applications miss out on this point.

j) Good GUI designers try to conserve space, but never go overboard. A GUI should not cram widgets together; however, intelligent widget choices can go a long way. For example, a combo-box/choice-box will often require far less space than a list box while providing equivalent functionality.

k) Readability is another factor. Sometimes snazzy-looking applications with all those “skinnable” interfaces can make a mess of it. I generally try to avoid skins, custom fonts and custom colors. It is best to use system default colors and fonts, since such a GUI scales across systems that may have different settings and hardware setups. It is also best to go for flow layouts or sizer-based GUIs; this allows for full compatibility across platforms and different windowing systems.
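For illustration, a tiny Tkinter sketch of the “sizer-style” approach: no fixed pixel positions and no custom fonts or colors, just a grid whose rows and columns are told how to stretch, so the dialog adapts to different system fonts and window sizes. The widgets and labels are arbitrary placeholders.

```python
# Minimal sketch: a resize-friendly layout using grid weights instead of
# fixed pixel positions, with system-default fonts and colors throughout.
import tkinter as tk

root = tk.Tk()
root.title("Resizable form")

tk.Label(root, text="Name:").grid(row=0, column=0, sticky="w", padx=5, pady=5)
tk.Entry(root).grid(row=0, column=1, sticky="ew", padx=5, pady=5)

tk.Text(root, height=5).grid(row=1, column=0, columnspan=2,
                             sticky="nsew", padx=5, pady=5)

tk.Button(root, text="OK").grid(row=2, column=1, sticky="e", padx=5, pady=5)

# Tell the layout which row/column absorbs extra space when the window resizes.
root.columnconfigure(1, weight=1)
root.rowconfigure(1, weight=1)

root.mainloop()
```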

These are probably only some of the things that can go wrong in GUI design. There may be more, but maybe I got a little tired of typing, and maybe you have too (after reading such a long post). I will just leave you with this link; I think some very good points are addressed there. Have a peek if you are interested.

My programming language is the best!

Very recently I was reading an article on a pretty well-known site where the author downright insults a programming language for what it is. I refuse to name the site for good reason, but that doesn’t really matter. The thing that really got to me is that the author gives downright mundane reasons for why he hates a language, Java in this case. I have seen this sentiment all throughout the industry, where many (not all) programmers think the programming language “they” work with is the best and all others are either sub-par or downright foolish! I don’t intend to start a flame war about programming languages on this blog, far from it; I will try to debunk some commonly held misunderstandings and misconceptions about programming and programming languages. And no, I am not going to directly single out any particular language while doing so.

The biggest misconception of all, and one held by many, is: “You have to learn a particular language (C++, Java or any other) to become a good programmer.” The answer to that is… no! Programming is all about solving problems. If you can do that effectively, then you can become a good programmer using any programming language. The best programmers are the ones who solve difficult problems subtly and elegantly. The choice of language does not determine the quality of a programmer.

The second misconception, generally shared by a lot of budding programmers, is: “I have to learn the most difficult programming language to get an edge over others in the industry.” Not true either. Some of the best programmers I know started out with Perl, Python and shell scripting, which are some of the easiest languages around. In fact, now that I think about it, that experience actually gave them an edge over others, because they learned the holy grail of programming, the famous KISS principle (Keep It Short and Simple). If you are starting out, pick the easiest language you can understand, learn it completely, and become a good programmer with that language.

The next famous one is: “I can do anything and everything with the programming language I know. Learning or even looking at another language is a waste of time. My programming language is the best!” This is probably the most dangerous one of all, because this sentiment is shared by some senior programmers. Programming languages are like tools, and as with any good tool-chest, the more varied the tools in it, the better you will be at doing a particular job. Some jobs will require a hammer to drive in a nail, others may require a spanner, and some others may require a drill. The same goes for programming languages. Many languages are designed to solve specific, sometimes domain-related problems, and most of the time they do an excellent job at that, often better than languages that were not specifically designed to address those issues. A good (senior) programmer will often know 3, 4 or even more languages, and will use different languages for different projects when needed. Never be afraid to learn a new programming language; in fact, strive for it.

And another one: “A language doesn’t have feature XYZ, which is present in my programming language. Hence it is incomplete and therefore unsuitable for programming.” This is actually a corollary of the one above, and just as flawed. If a language doesn’t have a feature, then there is usually a good reason for omitting it in the first place: the designers, given the nature and scope of the language, felt that the feature was unnecessary. If you feel the need for that particular feature in your project, then it’s time to re-evaluate the choice of programming language for that project. It doesn’t mean the programming language in question is in any way bad or, for that matter, inferior.

Next misconception: “Programming is too complex; it’s for geeks and really clever people!” Nope! True, some programming languages are more complex than others, but some are really, really simple. Check out Python or Perl (not taking any sides here, you can just as easily use others as well). The rule of thumb is: if you can use a spreadsheet package (like Excel), you can learn to program.

Another one that creates a lot of debate is, obviously, the speed of a language. The misconception is: “Language XYZ is faster than language PQR.” Not entirely accurate. A language has no definition of speed; a language is never fast or slow, a program written in a language can be fast or slow. The speed of a program depends on a lot of factors and can be a tricky thing to evaluate. It has more to do with correct design decisions and not so much with the language. There is a phrase I have coined for this over years of debating with people. It goes like this: “The speed of a program directly depends on the intelligence and experience of the programmer.” The fact is, a good programmer will produce fast code in any language. It is true that well-written code in some languages can be faster than similar well-written code in other languages, but unless you are working on the bleeding edge of technology, this doesn’t really matter.

A similar misconception is: “Language XYZ is more powerful than language PQR!” Not quite right. It entirely depends on the domain and the task at hand. Some languages are specifically designed for certain problems (Erlang comes to mind) and can be extremely effective at addressing them, often orders of magnitude better than others. Others are kept simple on purpose, precisely so you can do very rapid programming with them. Some programming languages go the extra mile so that redundant and tedious programming tasks and bugs can be eliminated. Again, I don’t intend to take sides, but for all rapid programming tasks I use Python because of its obvious benefits. However, there are other languages, like C#, which are equally good. Every language has strengths and weaknesses; no language is all-powerful.

This is how I stand on the whole issue of programming languages: I consider most programming languages, at least the most popular ones used in the industry, as “worth learning”. You can never be a master of them all, it’s just impossible, but you can certainly be good at a couple. Learning more languages puts you in a better position to choose the right language for the right problem (or task). It will also help you understand why these languages were designed the way they were, and thus make you a better programmer in the end. Most production-quality languages aren’t bad, incompetent or useless; don’t say they are. (I am talking to experienced programmers here!) Some languages may be inappropriate for a particular task, but harsher words are unwarranted!

In search of a Python IDE.

There is simply no good Python IDE under Linux. Last night I tried searching for one and ended up empty-handed, well, almost. Under Windows the situation isn’t too good either; I mostly use PythonWin there and it gets the job done, at least most of the time. Probably not as good as I would like, but it does the job. Under Linux, however, the situation is even worse: there is no IDE that can be used for serious Python development. Maybe it’s just me, but I find it a little strange that such a popular language as Python lacks a proper IDE. To be fair, the only thing that comes close to a good Python programming environment is Komodo Edit, though it itself is rough around the edges.

KDevelop is kinda OK. Even though it is good for C++ development, it lacks proper support for Python. For one, I couldn’t get the Python debugger working under KDevelop 🙁 . Also, KDevelop uses Makefiles for its project management, and that just made me run away from it rather quickly; Makefiles are a little too much for a simple scripting language like Python. The other editors/IDEs I tried were SPE, Eric, DrPython, Editra, Boa Constructor and Emacs. While most of them are fairly decent, none are quite up to the mark. I would place good ol’ Emacs at number 3, since it does the job fairly well without crashing or major hiccups. Most of the other editors were either clunky or simply crashed way too often (I haven’t tried any commercial ones, sorry, strapped for cash here).

Komodo Edit is more like an editor with partial support for Python. I haven’t managed to get a debugger working in the short while I have used it (no idea if you can actually do such a thing) 🙁 . But it seems the best bet for Python development under Linux (if you want to use free software). The good thing about this editor is that you can run custom commands, so you basically run your script via a custom command, since the editor itself doesn’t provide a run command out of the box. The project layout? Well, there is none; you basically place everything under the project directory and the editor picks it up in its tree window. Probably a little simplistic, but come to think of it, what else do you need when it comes to Python? It’s not as if you have compiler switches or linker optimizations that need to be set up. Besides, such a setup means there are fewer complications running scripts from the command line, since in Python all module paths are relative to the script anyway. Overall, Komodo Edit is a good bet if you want to do quick and simple Python scripting under Linux.

Invasion of the multi-core machines.

They are here. They were already here, but now they are really here and can no longer be ignored. Developers and programmers, especially game developers, can no longer afford to sit back and just watch the invasion of machines with multi-core processors. While hardware manufacturers have scaled their processes to bring these machines to us, software and compilers haven’t scaled equally well. Ironically, with current programming methodologies, programming practices and compilers, programmers can’t yet take full advantage of the CPU power thrown at them. It’s not entirely the programmers’ fault, and neither are they limited by their intelligence; the fact is that current-generation programming models fall short of addressing the multi-core issue in a reliable way. Yes, there are workarounds, and you can definitely use them to get some advantage on a multi-core machine. Applications using even simple threads can benefit from multiple cores, but merely having multiple threads in an application doesn’t mean that application or those threads will run at twice the speed on a dual-core CPU. The performance boost for a “normal” multi-threaded application running on a multi-core system will be rather minuscule compared to the computing power a multi-core system provides. To be frank, all those cores are of little use from the “average Joe” programmer’s perspective. That’s because, if not programmed with care, the advantage provided by those cores is wasted, at least within any one application.

Let’s look at it from a more technical perspective. Most multi-threaded applications are not written to use multiple concurrent threads aggressively. A typical multi-threaded application delegates only a minuscule portion of its program code to a thread, often called a worker thread; generally the most CPU-intensive or blocking operations in a program are done inside this thread. Consider the example of a web-downloader application: you have one worker thread doing all the downloading while the UI thread waits and processes user input like “Cancel” or “Quit”. The program is explicitly designed this way so that the UI can respond to user input while the task of downloading a file goes on. In other situations threads may be used for a different purpose. Take the case of the Doofus game: shadow calculations are the most CPU-intensive operations, but they are not required every frame (or cycle), so they are done inside a thread at lower priority, with the calculations typically distributed so that results are obtained every 2 or 3 frames. Whatever the case may be, the fact remains that such designs are not the most optimal way to program for multi-core machines. In the web-downloader, one thread waits while the other does all the work; in the game the situation is a little better, but the work is still not optimally distributed between threads. The ideal case would be to have multiple threads share the entire workload of the application so that all the threads are busy all the time. If that were indeed the case, and if those threads were to run on separate cores, you would be able to harness the true power of a multi-core machine.
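For reference, the worker-thread downloader described above might look roughly like this in Python; the URL and file name are placeholders. Note how the main thread does almost nothing while the worker is busy, which is exactly why such a design gains little from extra cores.

```python
# Minimal sketch of the classic "web-downloader" design: one worker thread
# does all the downloading while the main (UI) thread stays free to react
# to the user. The URL and file name below are placeholders.
import threading
import urllib.request

cancelled = threading.Event()


def download(url, filename, chunk_size=64 * 1024):
    """Worker thread: stream the file to disk, checking the cancel flag."""
    with urllib.request.urlopen(url) as response, open(filename, "wb") as out:
        while not cancelled.is_set():
            chunk = response.read(chunk_size)
            if not chunk:
                print("Download finished.")
                return
            out.write(chunk)
    print("Download cancelled.")


worker = threading.Thread(target=download,
                          args=("http://example.com/bigfile.zip", "bigfile.zip"),
                          daemon=True)
worker.start()

# The "UI" thread: it only waits for the user, so one core does all the work.
input("Press Enter at any time to cancel the download...\n")
cancelled.set()
worker.join()
```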

Programming multiple concurrent threads in an application is difficult. Thread synchronization is not for the faint-hearted, and having a lot of threads in an application can create bugs that are difficult to debug. Traditional multi-threading is synchronized using lock-based methods, and lock-based synchronization is prone to problems like deadlocks, livelocks and race conditions. Experienced programmers will avoid multi-threading if and when they can and go for simpler, often single-threaded solutions. This is not what is advocated by concurrent computing and parallel programming, which clearly can take advantage of multiple cores very effectively. It is true that even with current multi-threaded designs you can benefit from a multi-core architecture: even if such programs internally can’t use the power of multiple cores, the OS can still make full use of the architecture for multi-tasking. To put it in simpler language, multiple applications running at one time will run faster (note the subtle difference there). For a typical system, applications like anti-virus programs and background processes, running alongside other user applications, will definitely benefit from additional cores. This however isn’t very helpful from a game-development perspective, since most games are single applications. Games typically take up most of the system’s resources while running, and the advantages of multi-tasking are all but useless while running a game. Games therefore must be able to harness the true power of multiple cores internally. What does that mean? Does it mean a paradigm shift in how games are built? Do we need a special language to do so? Should we shun current programming practices? Drop C/C++ and look at maybe Erlang or Haskell? Use parallel programming concepts? The questions are many and are increasingly being asked by a lot of people. The truth, and the solution, however, is not quite so simple.
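By contrast, here is a hedged sketch of what “sharing the entire workload” can look like in Python: the standard multiprocessing module splits an embarrassingly parallel job across all available cores (processes rather than threads, which sidesteps the locking issues mentioned above). The work function is a dummy stand-in for something genuinely CPU-heavy.

```python
# Minimal sketch: spread an embarrassingly parallel workload across all
# available cores instead of parking it on a single worker thread.
import multiprocessing as mp


def heavy_work(chunk):
    # Placeholder for a CPU-intensive computation on one slice of the data,
    # e.g. per-chunk shadow or physics calculations.
    return sum(i * i for i in chunk)


def split(data, parts):
    """Cut the data into roughly equal slices, one per core."""
    size = max(1, len(data) // parts)
    return [data[i:i + size] for i in range(0, len(data), size)]


if __name__ == "__main__":
    data = list(range(1_000_000))
    cores = mp.cpu_count()
    with mp.Pool(processes=cores) as pool:
        partial_results = pool.map(heavy_work, split(data, cores))
    print(f"Used {cores} cores, result = {sum(partial_results)}")
```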


An Unreal Crysis.

If you are a graphics geek and love to see those so-called next-gen effects, then recently released games like Crysis, UT3 and, to some extent, Bioshock will give you a lot to cheer about. Crysis, for one, has shown that modern top-of-the-line cards can push extraordinary amounts of detail. However, raw figures show that Crysis and UT3 sales have been anything but extraordinary; they have in fact fallen flat! Interesting figures, and to some extent I am a bit surprised by what they show. As the articles point out, both games were pretty hyped up before release and should have made far more sales than they did. True, Crysis has some crazy hardware requirements, but the game can still be played with older and less powerful graphics cards, and so can UT3. Maybe not with all the graphics effects and resolution maxed out, but they can be played nevertheless. Besides, both games have *huge* fan bases, so the figures are very surprising indeed.

Well, I can’t speak for everyone, but my personal take on the whole thing is that the vanilla FPS genre is getting kinda old. After so many games churning out the same mundane gameplay, it has pretty much lost its charm. True, the graphics have improved, but not the gameplay in general. Games like Bioshock stand apart from the crowd because they give that little bit more to the overall game, and that is exactly why they sell more. What I can tell you from my years of playing games (and I have pretty much repeated this a lot of times on this blog) is that FPS games are getting kinda boring. As a gamer I want more interesting stuff in there. That is exactly the reason I spent nearly 6 months playing Oblivion; the game gave me so much more to do than just run, kill, run, kill, collect ammo, run, kill, collect health, run, kill…..

I haven’t played UT3 myself, and I have only watched someone else play Crysis, but what I have heard people say about the games makes me wonder if they are anything more than tech demos. Maybe we should look at it from a different perspective: it’s a fact that Epic markets its engines via the UTx games, and I think to some extent Crytek does that too. So maybe that is exactly what those games are here for: to show off what their respective engines can achieve. The graphical brilliance achieved by both games/engines is amazing, there is little doubt about that, and the hardware requirements are equally demanding. But that is only for now. The same hardware will become mainstream in another 6 to 8 months, and the same engines can be used/licensed to make other games. I therefore wouldn’t count them as outright failures.

Different people have different tastes and different points of view, so naturally they have different tastes in game genres. However, the feeling I get is that, in general, game genres are beginning to overlap. This, I think, is out of necessity: game designers who strive to make their games “immersive” have started incorporating ideas and methods from other genres to make gameplay more interesting and challenging. Having an equally good engine is a must, though. Case in point: Oblivion. The game looks great because it uses Gamebryo, which is another good engine. I am pretty sure we will see more and better games using both engines in the future.

New year wishes and a look at the year gone by.

First of all, a “Very Happy New Year” to all.

Just to highlight some interesting news and events that happened in the year gone by, plus my own experiences.

  • Games:
    1. It was probably the game of the year (at least as far as I am concerned): I am talking about Bioshock. Enjoyed playing it, even though not on my PC, and I still haven’t completed it. Truly amazing graphics and a new twist on the FPS style of play.
    2. The Elder Scrolls IV: Oblivion. This game doesn’t come in first place because it was launched not this year but the last; it is here because I only managed to play and complete it this year. I played it along with the Shivering Isles and Knights of the Nine expansions for more than 5 months 😛 starting July, and I must say I have come to thoroughly enjoy the sandbox-style gameplay the game offers. Don’t be surprised if I start getting crazy ideas about creating games like this in 2008 😉 .
    3. Just when we thought nothing could tax the 8800, Crysis hit! The game takes the best-visual-graphics award of 2007. Amazing eye candy and surely a sign of things to come, though I am not sure about the overall gameplay.
    4. A couple of other interesting games came out as well, like GOW 2 and Gears of War, but I haven’t got my hands on them yet.
  • Programming and Development:
    1. The biggest disappointment was the postponement of the OpenGL 3.0 specs. I was hoping to see at least something concrete on this, but to no avail. I hope 2008 will give us more to look forward to.
    2. 2007 saw the release of Visual Studio 2008 and its Express editions. Not too much to complain about or praise there. .NET 3.5 was released along with the Studio versions.
    3. While major releases were few and far between, minor releases like Cg 2.0 and Silverlight dominated most of the programming and development news.
  • Personal projects:
    1. The biggest miss was not being able to launch Doofus 3D. Period! The game was slated to release in October/November, but inevitable delays and project pressures resulted in it not being shipped. This has been the biggest disappointment on my side.
    2. The project is however still on track, and barring time delays, the product and the engine have become stable and look more and more like a very solid platform for future projects. Most (almost all) of my ideas (some really crazy ones too) have thankfully worked!
    3. My R&D on scripting-engine integration has yielded good results. I remember my promise; I will update the blog with some statistical data on this, I’m just tied up with project pressures for now. On the whole, R&D this year from my side was lower than it was last year.
    4. Got a new website this year, migrated the blog and also have one lined up for the game release.
  • Hardware:
    1. The year belonged to NVIDIA, and the 8800 pretty much dominated the graphics scene unchallenged for most of 2007. There was a feeble attempt by AMD(/ATI) at the end of the year, but the HD 3870 and 3850 have been plagued by shipping problems, though they have shown impressive figures and amazing value for money considering their price point. However, I expect the green brigade to counter that, since they are already well ahead in the race to do so.
    2. Next was Intel, which has successfully managed to run the competition (AMD) into the ground, with its chips, the Core 2s, pretty much dominating the market. The Phenoms are here but still have to prove themselves. It’s safe to say Intel ruled 2007.
  • Operating Systems:
    1. I have done enough Vista bashing on this blog already, so no more! My sentiments however remain unchanged regarding the OS. 2007 was particularly bad for Vista; the OS was given flak in a lot of articles on the web. My recommendation: give the OS a skip for the time being and use XP and/or…
    2. Ubuntu 7.10, code-named Gutsy Gibbon (released in 2007), has been a revelation for me. I have been using this OS for a month now on my internet PC and I am more than happy with it. True, some quirks remain, but Ubuntu is a great OS for, well, everyone and anyone. I recommend this OS hands down!
  • Misc News:
    1. India wins the Twenty20 World Cup 2007.

New year resolution:

Release Doofus 3D.

A lot of plans in mind, but more on that later.

Is software development inherently unpredictable?

Last week I met a friend of mine who also happens to be a software engineer and a programmer. As we talked, he came around to complaining about how his team was riddled with problems on a project that was supposed to have been well planned and well organized from the start. It seems the project is now over schedule and over budget, and the client is not happy. Most of us in the software industry would just laugh that off, saying, “Oh, that happens everywhere, all the time, so what else is new!” But later it left me wondering why this happens to all but the most trivial software projects. Why do projects planned and organized by people who have worked in the industry for years (which, by the way, includes me) fail to deliver on time and/or on budget, and sometimes do so miserably? What is so difficult about estimating a software development cycle that it always goes wrong, time and again? Yes, there are always excuses: feature creep, attrition, inexperienced programmers, the weather; but if you look at other fields of engineering, like mechanical or construction, you won’t see similar problems occurring there. Projects do go off schedule, but the situation is much better than what we experience in software. Why? Working in software, I have seen projects go off schedule or off budget by 100% or more; worse, some end up as vaporware.

So what exactly is wrong with our time/budget estimation techniques when applied to software? Well, maybe it’s not the estimation techniques at all; maybe it’s our software development processes that are at fault, which in turn produce wrong time estimates. Over the years people have come up with several different software development models, but have never agreed on a single correct one. There was, or rather is, the waterfall model, which in my view falls short because it is too rigid. For most projects, changing requirements are a way of life, and the waterfall model is just not geared for changing requirements: if the requirements do change, the phases of the model overlap and throw the whole process into complete disarray. The waterfall model is also criticized for being too documentation-oriented (requirement, design and other process documents) and for focusing less on working methodologies and productivity. There are several other disadvantages of the waterfall model that would require maybe another blog entry, so I will refrain from going too deep here. However, proponents of the waterfall model are blind to the fact that for most projects, specifications do change and there is actually very little the clients can do about it. Sometimes a spec change is a must to deal with rapidly changing customer demands, sometimes things are not clear up front, especially in domain-specific projects, and sometimes it’s just something the management doesn’t like or wants changed. Whatever the reason may be, time estimation for a non-trivial project with changing requirements using the waterfall model is next to impossible. (You end up chasing your own tail and doing “ad hoc” things in the end.)

OK, so it may be clear by now that I hate the waterfall model. Well, I don’t. I just think the usefulness of the waterfall model is limited to projects with rigid specs; it is not something to be used everywhere and anywhere. If you read my argument above, you can see that it clearly would not fit a broad spectrum of projects. In the waterfall model’s favour, however, it is very easy to estimate time for a project, provided the model fits the project’s needs. For a project with unchanging specs, this model is probably the best. Having discounted the waterfall model leaves us with another popular model, or rather class of models, called iterative models of software development. Where the waterfall model discourages change, most iterative models embrace it: they scale very well to changing requirements and accommodate spec changes rather easily. There are a lot of different iterative models, and each one has its share of advantages and disadvantages. I don’t claim to know each and every one of them, and I have only used one in my current project, and that too with some custom modifications (hybridized with the waterfall method; more on that later). What I want to focus on is the fact that although iterative models are designed to be scalable, forecasting or predicting time-lines with them is very difficult. If a spec change does come in, it can be easily absorbed by the process, but it will still ultimately end up disrupting the total time estimates for the project.