Is software development inherently unpredictable?

Last week I met a friend of mine who also happens to be a software engineer. As we talked, he got around to complaining about how his team was riddled with problems on a project that was supposed to have been well planned and well organized from the start. The project is now over schedule and over budget, and the client is not happy. Most of us in the software industry would just laugh that off: “Oh, that happens everywhere, all the time, so what else is new!” But later it left me wondering why this happens to all but the most trivial software projects. Why do projects planned and organized by people who have worked in the industry for years (which, by the way, includes me) fail to deliver on time and/or on budget, sometimes miserably? What is so difficult about estimating a software development cycle that it goes wrong time and again? Yes, there are always excuses: feature creep, attrition, inexperienced programmers, the weather. But if you look at other fields of engineering, like mechanical or construction, you won’t see similar problems occurring there. Projects do go off schedule, but the situation is much better than what we experience in software. Why? Working in software, I have seen projects go off schedule or off budget by 100% or more; worse, some end up as vaporware.

So what exactly is wrong with our time/budget estimation techniques when applied to software? Well, maybe it’s not the estimation techniques at all; maybe it’s our software development processes that are at fault, which in turn cause the wrong time estimates. Through the years people have come up with several different software development models, but have never agreed on a single one as correct. There was, or rather is, the waterfall model, which in my opinion falls short because it is too rigid. For most projects, changing requirements are a way of life, and the waterfall model is just not geared for changing requirements. If the requirements do change, the phases of the model will overlap and throw the whole process into disarray. The waterfall model is also criticized for being too documentation oriented (requirement, design and other process documents) and for focusing less on working methodologies and productivity. There are enough disadvantages to the waterfall model to fill another blog entry, so I’ll refrain from going too deep here. However, proponents of the waterfall model are blind to the fact that for most projects, specifications do change, and there is actually very little the clients can do about it. Sometimes a spec change is a must to deal with rapidly changing customer demands; sometimes things are not clear up front, especially in domain-specific projects; sometimes it’s just something the management doesn’t like or wants changed. Whatever the reason may be, time estimation for a non-trivial project with changing requirements using the waterfall model is next to impossible. (You end up chasing your own tail and doing ad-hoc things in the end.)

OK, so it may be clear by now that I hate the waterfall model. Well, I don’t. I just think its usefulness is limited to projects with rigid specs. It is not something to be used everywhere and anywhere; if you read my argument above, you can see that it would clearly not fit a broad spectrum of projects. In the waterfall model’s favor, though, it is very easy to estimate time for a project, provided the model fits the project’s needs. For a project with unchanging specs, this model is probably the best. Now, having discounted the waterfall model leaves us with another popular model, or rather a class of models, called iterative models of software development. Where the waterfall model discourages change, most iterative models embrace it. They scale very well to changing requirements and accommodate spec changes rather easily. There are a lot of different iterative models, and each one has its share of advantages and disadvantages. I don’t claim to know every one of them; I have only used one in my current project, and that too with some custom modifications (hybridized with the waterfall method; more on that later). What I want to focus on is the fact that though iterative models are designed to absorb change, forecasting or predicting time-lines with them is still very difficult. If a spec change does come in, it can be easily absorbed by the process, but it will still end up disrupting the total time estimates for the project.
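To put a rough number on that last point, here is a deliberately simplified back-of-the-envelope model (all figures invented for illustration) of how even a process that “absorbs” changes still drifts off its original estimate, because each change adds rework to every remaining iteration:

```python
# Toy schedule-drift model: every spec change that lands adds a fixed
# amount of rework to each subsequent iteration. Numbers are made up.

iterations = 5
base_cost = 10            # person-days per iteration, as planned
rework_per_change = 2     # extra days each later iteration pays per change

def total_cost(changes_at):
    """changes_at: 0-based iteration indices where a spec change lands."""
    cost = 0
    for i in range(iterations):
        # every change that landed at or before this iteration adds rework
        pending = sum(1 for c in changes_at if c <= i)
        cost += base_cost + pending * rework_per_change
    return cost

print(total_cost([]))      # 50 -- the original estimate
print(total_cost([1]))     # 58 -- one mid-project change: +16%
print(total_cost([1, 3]))  # 62 -- two changes: +24%
```

The model is crude, but it captures the asymmetry: the process survives the change, the estimate does not.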

A modeler with a difference.

A few months back I had a friend demo me a 3D software package called Houdini, from Side Effects Software. Houdini is used extensively in film circles and not so much in the game industry (which is dominated by 3ds Max, Maya and, to some extent, XSI), but most big blockbuster films with those great special effects that make you go “whoo”, “wow”, “cool” are pretty much made using this software. I am not exactly sure what my friend was working on, but it seems he was building an extension for the package and wanted some opinions on a custom file format. The thing that got me interested in Houdini was the way you work with the whole thing. It’s a little different from conventional modelers (Max, Maya and XSI); in Houdini you basically do everything by combining operators. To tell you the truth, I am not a great 3D artist. I have done most of the artwork in the game, yet my skills leave a lot to be desired. The only modeler I have ever worked with is Blender, and as you probably know, Blender has a notorious, and perhaps unfair, reputation for being very difficult to use.

Houdini takes a very different approach to 3D modeling. The reason I liked it is that its entire flow is highly logic driven or, as they say, “procedural”. This is an amazing concept, and you have to actually see it to understand it fully. The interface is a hierarchy of node graphs with which you pretty much model everything. The node graphs form a kind of construction history that lets you go back and modify previous steps in a snap. This kind of flexibility means the overall productivity the software gives you is unbelievable. It allows artists to be as creative as they want, and at the same time it allows the entire design process to be non-monotonous or, in other words, non-linear.

I would love to have Houdini-like software for designing a game, and I mean the entire game, composition and everything. Having seen the software at work (and being a 3D game engine developer) made my mind race in 1000 different directions, and I could see so many possibilities for this type of “procedural” flow. The creative potential could be enormous when applied to game creation. Now that I have looked at it, my gut tells me a procedural work-flow for any game design/creation/composition software would be a step in the right direction. Another very interesting aspect of this package is reuse. Besides the obvious benefits of a procedural work-flow, the software encourages the use, or rather the reuse, of existing solutions and designs. This might sound like something out of a computer programming book, but it’s rather more subtle: create a work-flow once, then reuse it for several different solutions with minimum effort. That would be a game designer’s and an artist’s dream come true.
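To make the idea concrete, here is a toy sketch in Python of what such an operator network looks like (entirely hypothetical names; this is not Houdini’s actual API): each node is an operator, and re-evaluating the graph replays the whole construction history with whatever parameters you have since tweaked.

```python
# Toy "procedural" node graph: operators wired into a network, where
# editing any upstream parameter reshapes everything downstream.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function computing this node's output
        self.inputs = inputs  # upstream nodes feeding into this one
        self.params = {}      # tweakable parameters for this operator

    def evaluate(self):
        # Pull results from upstream first, then apply this operator;
        # this replays the whole construction history on every call.
        args = [n.evaluate() for n in self.inputs]
        return self.op(self.params, *args)

# Build a tiny network: generate points -> scale them -> offset them.
points = Node(lambda p: [0.0, 1.0, 2.0])
scale = Node(lambda p, pts: [v * p["factor"] for v in pts], points)
scale.params["factor"] = 2.0
offset = Node(lambda p, pts: [v + p["delta"] for v in pts], scale)
offset.params["delta"] = 1.0

print(offset.evaluate())   # [1.0, 3.0, 5.0]

# "Going back in history" is just editing an upstream parameter:
scale.params["factor"] = 10.0
print(offset.evaluate())   # [1.0, 11.0, 21.0]
```

That last edit is the whole trick: no destructive modeling steps, so nothing has to be redone by hand.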

For those interested, there is a free learning edition called Houdini Apprentice provided by Side Effects.

Gutsy as a Gibbon.

Yesterday I had to take some time off from the game to finish off a previous assignment. A couple of months back I was working on a GUI application written with wxPython. It was a rather trivial application, nothing too brain-taxing. Originally written under Windows, it was, as planned, to be ported to Linux. Anyway, the target system was Gutsy Gibbon (Ubuntu 7.10), so I had no option but to install it on my PC. Now, I am a long-time Red Hat fan and, more recently, Fedora has been my top distro preference. Gutsy was my first real experience with Ubuntu/Debian.

I had heard a lot about Ubuntu but never actually tried it before (except once, using the Live CD version to rescue a hard drive). After having used it for about 2 days, I can tell you outright: it lives up to its reputation. While it may be difficult for me to say whether it is the best, I can certainly tell you for a fact that it is good. I have heard people say it is better than most other distros, but surprisingly I found no evidence to either prove or disprove that with regard to Fedora 7. I am talking purely from a user’s point of view, not a technical one.

The first thing I liked was the installation. Hassle-free, clean, neat. OK, that’s probably true of Fedora also, so no points there. The other thing I liked about this distro is that it lets you browse the live DVD/CD version before installation, something I wish Fedora would also adopt; then again, that’s a nice-to-have, not really too important. The installation was remarkably fast for such a large OS, and the OS boots amazingly quickly. Oh yes, and it’s nice to see a distro that finally bundles the proprietary NVIDIA driver, which can be enabled with a single click. Setting up my PPPoE internet connection was surprisingly trivial. Not even on Windows was it that easy; there I had to download and install a PPPoE driver, but with Ubuntu it was just click and go. Setting it up on Fedora was a nightmare!

Ubuntu comes with GNOME as the default desktop, which is pretty quick and responsive. The system does not run too many services by default, and that could be one reason it is fast. The default installation doesn’t install too many packages, and no development packages are included: no Eclipse, no KDevelop, no debugger. You will have to install those yourself, pretty much via Synaptic. Not too much of a problem, but you do have to browse the Ubuntu forums to find out how to get things working, especially if you are a new user. I had to install a lot of packages to get a development environment going, even though it was just Python I intended to use.

On the whole, this distro is clearly made with the average user in mind and tries hard to make Linux a “friendly” OS. It does a good job, but I still found some things lacking. You do still have to fire up the good ol’ terminal every now and then. Hmm, not a problem for someone who has been using Linux for about 10 years now, but then again my question is: why the terminal at all?

Overall experience? Not very different from Fedora, believe me. If you have experience with one, you can be just as much at home on the other, period. Maybe a few searches on Google is all that is required. I would put Ubuntu marginally above Fedora because it is friendlier (and because of the internet setup thing, and the NVIDIA driver).

Oh yes, the distro never crashed on me even once; Fedora does sometimes crash. Anyway, my short stint with Linux is over; going back to Windows and the game now.

Parallel computing using .NET.

My recent entries have all been about news events, and this one is no exception. In fact, it is about the recently released Parallel computing extensions for the .NET platform (download). You will, of course, need the also recently released .NET 3.5 framework update for them to work (it comes bundled with Visual Studio 2008, also released recently). On the very same page you will find an interesting paper titled “The Manycore Shift”, which doesn’t do justice to its title, I must say. The title makes you think it has something to do with many-core programming, but all it does is outline what Microsoft intends to do with its parallel programming model. A marketing gimmick? I’ll let you decide. It looked to me like something written to sell an idea to company execs who have no knowledge of programming and/or the concept of parallel computing. The question is, why is it even shown on an MSDN developers page?

It’s been some time since I downloaded these extensions. Obviously (if you are a frequent reader of this blog, you will know) I am a sucker for any technology that has the potential to provide performance enhancements. Unfortunately, I have very little time to devote to anything right now, so I couldn’t do too much testing with the extensions. Still, I did manage to glance through the documentation provided (you can find the .chm on the same download page), and one thing that immediately caught my eye was that the documentation talks about “different approaches for handling parallelism”. The framework allows you to have data and task parallelism using imperative and declarative models. The docs for the extensions are good; hmm, well, not great, I must say. They take for granted that the reader has some knowledge of parallel computing and is comfortable with basic threading. Not the friendliest of docs, but not a very big problem for someone with some MSDN experience under his belt. The only other library I have seen that allows parallel computation across many cores is Intel’s Threading Building Blocks (TBB), which, I believe, offers task-based parallelism.

“Parallel Extensions to the .NET Framework contains a few different approaches for handling parallelism in your applications including imperative and declarative models for data and task parallelism.” – (from the documentation)
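The distinction the docs draw between data and task parallelism is easy to illustrate outside .NET too. Here is a rough sketch using Python’s standard library (purely to show the two concepts; the extensions themselves are, of course, .NET):

```python
# Data parallelism vs task parallelism, sketched with a thread pool.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    # Data parallelism: the same operation applied across a data set,
    # with the elements split among workers.
    squares = list(pool.map(square, range(8)))

    # Task parallelism: independent, unrelated units of work running
    # concurrently (the future/task style TBB also uses).
    total = pool.submit(sum, range(100))
    count = pool.submit(len, "parallel")
    results = (total.result(), count.result())

print(squares)   # [0, 1, 4, 9, 16, 25, 36, 49]
print(results)   # (4950, 8)
```

Same pool, two very different ways of carving up the work; the imperative/declarative axis in the .NET docs is then just a question of whether you spell that carving out by hand or let a query-style API do it.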

I have not had enough time to experiment with either, and I don’t think too much will change soon, but I am writing this entry because parallelism is going to become the single most important aspect of program design over the next few years, especially in game development, where hardware and conventional programming approaches are being pushed to their very limits. There will soon be processors with 8 cores, and there has been talk of next-gen game consoles having more than 20 cores. Any system being designed for tomorrow’s needs has to take parallel computing into consideration. With good documentation at hand, the Parallel computing extensions are a great way to get your hands dirty with the concept. The extensions also come with some real-world examples, invaluable for learning anything. (I am sure some of you are going to be very interested in the ray-tracer example 😉 .) Go get some!

An Open-Source marketplace.

In an interesting development, SourceForge.net has released a service for the open-source community whereby open-source projects can now commercialize their services via SourceForge (read here). This is interesting because it will allow small open-source projects listed on SourceForge to sell services, which was previously not possible. As one blogger points out, such an initiative will allow smaller open-source projects to build businesses around their work.

“It’s one thing for a venture-backed open-source startup to develop new channels. It’s quite another for a one or two-person open-source project to do so. Suddenly, however, these small projects have an outlet to the market. A global market.” – (read the entire post here.)

I have been a contributor to an open-source project once, and I can tell you from experience that managing or contributing to a project online is no trivial matter. It takes a lot of time and effort. I have great appreciation for the people who work on free, open-source projects and get almost nothing in return. SourceForge’s move in this regard is a welcome initiative indeed.

Nintendo with an interesting Operating System.

Nintendo has released an operating system named ES (read the translated version; original Japanese version) under an open-source license. What the idea behind having an OS is, is still unclear to me. If they were planning on having an OS for their consoles, there are other, more attractive alternatives like Linux. Sony has done something along these lines by having Yellow Dog Linux run on the PS3.

The website is pretty vague and doesn’t delve too deeply into what the OS is intended for, but it’s clearly something to keep an eye on. Maybe it is just a research project. Maybe the company is still toying with the idea and wants to see what the response from the community will be; who knows.

The OS itself seems pretty interesting. The kernel is written in C++, and it runs natively on x86 and under QEMU. They have a port of the Smalltalk programming language called Squeak. I am not sure what the intention really is; will it be used for game development on that platform? I guess there must be some way to do C++ programming as well, considering it’s x86 compatible.

Visual Studio Express 2008 – First thoughts.

It’s been about a week since I downloaded the Visual Studio Express editions, and I have done a fair (though not huge) bit of coding with VC++ Express and not so much with Visual Web Developer. I haven’t touched any other editions yet, so I am going to be very C++ centric with this one. Generally, the UI in the new Visual Studio Express has become more responsive. Now don’t get me wrong, I am not saying 2005 was unresponsive, but sometimes it took a noticeable amount of time to open dialog boxes. For example, the configuration dialog took some time to come up the first time it was brought up, and so did the search dialog. No such problems with this version; cool, that was a nag fixed!

Compiler speed has increased; not by too much, but it has. I must say I use the /MP option a lot, but I still generally feel compilation is a little faster. Maybe on a faster machine the difference would be bigger, but I still run a 3 GHz dual-core machine that is 2 years old now. It’s always nice to have a faster compiler, especially if your project takes about 15 minutes to compile from scratch and is heavily templated!

On the other side, I have had crashes with the express edition, 4 times to be exact. Considering it’s been only a week, that’s just too many, I must say. Not much has changed since 2005, but even so, the express edition is pretty good. It does a lot of things not easily attainable in other IDEs and C++ compilers. People have complained about not having a resource editor; that’s not too much of a hindrance to a game developer like me, who pretty much has no need for one. In any case, there are a lot of free resource editors around that you could easily use to edit your resources, or just use wxWidgets!

On the whole, the express edition of C++ is good. And best of all, the express editions are free; should we even be complaining?

What were they thinking?

It’s now being called one of the top 10 worst products of all time. Yes, I am talking about Windows Vista; just recently there has been a flurry of articles (another one here) stating what was rather obvious to anyone taking even a single look at this half-baked OS. After all this criticism, one can’t help but wonder what Microsoft was thinking when it released it. In contrast, XP was a welcome change from the then-ailing 2000, and that OS is still going strong today. I thought Microsoft had learned from its past mistakes with Me and 98, yet we see the same thing with Vista.

While some of the criticism is unduly harsh and unwarranted, Vista does seem to be a myriad of small mistakes rolled into one, not entirely from a technical point of view, but with other issues as well. The biggest put-off, and the most costly mistake as far as Microsoft is concerned, is the fact that the OS is a resource hog. It requires a stupendous amount of memory to run efficiently. Now, some might argue that about 4 GB of memory is not so much these days, but that is not entirely accurate. I would put the question the other way around: why on earth does an OS that does nothing special in particular require such a huge amount of memory? Why should I tax my memory budget so that an improperly designed OS can run?

There is also another, more serious problem which I bet is biting into Vista sales, and that is that Vista runs extremely poorly on older machines with less RAM. Enterprises generally don’t want to upgrade their hardware to support an OS that very clearly doesn’t offer anything special. They see no addition to their value chain in upgrading to Vista, and rightly so; you really can’t blame them. I have experienced this first-hand on my friend’s machine. He ended up switching back to XP after a rather unpleasant run with Vista. I was reading this Inquirer article, and it made me smile; what is written seems to be spot on.

The features that were advertised with Vista don’t do justice to its price tag. The “secure OS” stuff that was dished out looks like nothing more than a nagging nanny. The warnings and message boxes can get really annoying, and I found them too much of a hindrance while working. That’s not going to be popular with developers and programmers; it’s beside the point that they can be turned off, and programmers have always learned to adapt. The warnings just make you feel patronized. The other feeling I get is that Microsoft somehow wants to unload some of its responsibility onto the end user. It’s like, “Oh, we told you this program could damage your system (via a message box). Sorry, what happened is your problem, not ours. Don’t say we didn’t warn you!” And what exactly are they securing us from? Can I run the OS without an antivirus or anti-spyware program?

Then there is the DirectX 10 story. As you probably know, there is no DirectX 10 for XP, only for Vista. The driver reason given is utter b.s.; that’s just some arm-twisting by Microsoft, and it has resulted in DirectX 10 not being adopted as widely as it should have been. I have seen a lot of people criticizing DirectX 10, but it’s not DirectX 10 that is preventing more DirectX 10 games; it’s Vista. It seems very few people have the required hardware plus Vista to run DirectX 10. Talking about drivers, the OS has its fair share of hardware and driver related problems. Incompatibilities with hardware continue even a year after the OS’s release.

I did a fair bit of brain bashing with the OS just recently to get the game Vista compatible (read here), and I came away feeling let down. I use XP on all my PCs, and I had high expectations of Vista after having a good time with XP. I continue to face problems with Vista, however; compared to it, XP seems a very friendly OS. I spent a good 2 weeks on Vista and was pretty disappointed.

Some additions to yesterday’s update.

  • I was browsing through the Visual Studio Express page when my interest was drawn to a toolkit called Game Creators GDK. The interesting thing is that it integrates with the express edition of Visual C++, and that’s a little surprising. Or maybe not; many indie and aspiring game developers do tend to use Express a lot. Just out of curiosity I downloaded it, and it seems to be a solid beginner-level package, especially since it is free. If you are a budding game developer, you have to check this out. It also comes with a bunch of tutorials and extensive documentation. Yes, it’s a C++ SDK, but I didn’t delve in too deep, so I can’t say too much about it really.
  • I missed this one yesterday, but XNA Game Studio 2.0 Beta has been released. I tried my hand at XNA a long time back, when it was first released, but never really got fully into it. It still remains a mystery to me why MDX was discontinued in favor of XNA. Maybe XNA is more of a complete game creation toolkit rather than just a wrapper over DirectX.

Some recent news updates.

I generally have such an entry every 2-3 months so here goes.

  • Visual Studio 2008 is here!
    Microsoft recently released Visual Studio 2008 along with the Express editions. If you are like me, then head on down there and start downloading the express editions, like, now!
  • Delayed but good news from AMD/ATI:
    AMD (and ATI) have announced the availability of the HD 3870 and 3850 range of graphics cards, widely believed to be an answer to NVIDIA’s 8800 GT and GTS range of GPUs. They are based on ATI’s new RV670 graphics chip. Just how much of a challenge they will pose to the 8800 has yet to be seen, but I am guessing we could be seeing stiff competition here. Read details of the specs here and here.

    In other news, AMD also very recently released the Phenom quad-core processor, the X4, and the 7940FX chipset. Again, too early to say how things will pan out. More details here and here.

  • Updates from NVIDIA:
    As you know, NVIDIA released the Cg 2.0 beta recently. Along with that, two other releases from NVIDIA just recently: 1) PerfHUD 5.1, and 2) though I am not working with CUDA, I know some of you are, so you can grab the CUDA 1.1 beta.
  • Disappointment in the OpenGL camp.
    As announced recently, the OpenGL 3.0 specification is delayed due to… oh well, let’s start that again: it seems OpenGL 3.0 has had some last-minute changes, and the specification release was put on hold. Let me not say anything further; I don’t like getting hate mail.