When s#!t happens!

It’s painful when your graphics card dies, but even more so when it dies not because you pushed it too hard (overclocked it), but because of a power surge. I had a trusty HD 4870, and though it wasn’t the newest of cards, it served me well for over 4 years and could still push most games I play at decent frame-rates. More importantly, it worked very well for all my current graphics needs. I mostly target DX-9 to DX-10 level hardware and the 4870 was more than enough for that task. Sigh! … it’s dead now!

It so happened that while I was away one weekend, an electrical fault caused current to leak into the earthing terminal (probably busted earthing on the power company’s side). Since the earthing terminals of most electrical equipment (including most PCs) aren’t disconnected when you turn off the switch, the current flowed into my PC and destroyed one of the PSU caps. As a result the PSU’s PCIe supply shorted out, killing the graphics card. Fortunately (and thank God) no data was lost, and the HDDs seemed quite OK when I ran tests on them.

The only option left was to get a new card, and I opted for the R9 270X. No point in going for an older card now that the R9s are DX-12 compatible. Unfortunately the R9s don’t play well with older MoBos, so I had to get a new MoBo with a new CPU, a new PSU and a new cabinet, and basically build a new dev machine from scratch, not to mention another 20-hour dev setup after installing a new OS.

Well, to anyone who is reading this, my advice is to at the very least have a stabilizer for your PC. A UPS can go a long way in preventing such a thing, and the most important piece of all is a good PSU. Go for the known brands – Corsair, Cooler Master, Antec, etc.

Direct3D 10/11 coming to Linux … What about games?

No, April 1st is still more than 6 months away, and yes, you heard me right — Direct3D versions 10 and 11 are indeed coming to Linux. How is this even possible? Well, it is possible, since nouveau moved on to Gallium 3D, which allows the Direct3D API (actually any API) to be exposed via a front end called a state tracker. Interestingly (and there seems to be a lot of confusion about this on public forums), Direct3D will be a native API under Gallium, much like OpenGL is currently. It won’t be something that emulates Direct3D by using wrappers around OpenGL — meaning you will be able to write and compile Direct3D code directly on Linux or BSD based systems that support the nouveau driver. Initially I was a bit skeptical of such an approach since the Direct3D API is intertwined with the Win32 API, but the author seems to have solved this by using Wine headers. I don’t know the pitfalls (if any) of such an approach, but it seems to have worked for him and would seem a logical path to take (instead of breaking API compatibility). He clearly outlines the motivation behind doing the Direct3D port, and kudos to him for doing something that was all but inevitable given the no-show of Longs Peak.
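To make that concrete, here is a minimal sketch (my own illustration, not code from the Gallium tracker) of plain Direct3D 10 device creation: the kind of ordinary C++ that should, in principle, build against Wine’s d3d10 headers on a system where the state tracker exposes the API natively.

```cpp
// Hypothetical sketch: ordinary Direct3D 10 device creation, assuming Wine's
// d3d10.h is on the include path and the Gallium state tracker provides the
// implementation underneath.
#include <d3d10.h>

ID3D10Device* CreateD3D10Device()
{
    ID3D10Device* device = NULL;
    HRESULT hr = D3D10CreateDevice(
        NULL,                        // default adapter
        D3D10_DRIVER_TYPE_HARDWARE,  // hardware acceleration
        NULL,                        // no software rasterizer module
        0,                           // no creation flags
        D3D10_SDK_VERSION,
        &device);
    return SUCCEEDED(hr) ? device : NULL;
}
```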

Naturally a native Direct3D implementation will allow game developers to write code that is cross-platform, and even allow existing engines/games that use Direct3D versions 10 and higher to be ported across to platforms that have a Gallium driver. W00t! This is amazing, almost too good to be true, isn’t it? But before we gamers jump for joy, there are still a few things that have to fall in place before things can get up and running with regards to Direct3D on Linux. First and foremost is support. Hardware vendors like Nvidia and AMD must support Gallium in their drivers, or OSS drivers must be written (and are being written) to take their place. This is paramount, since without such an interface no front-end API (Direct3D or OpenGL) will be able to use hardware acceleration via Gallium. Second, and more importantly, the guys at Redmond must allow such an implementation of their Direct3D API. An API itself can’t be copyrighted, and the author seems to have steered clear of any Microsoft code, so theoretically this shouldn’t be a problem. But then again I am no legal eagle, so I can’t really say anything w.r.t. this. There have been rumors that there are patents on sections of Direct3D. I am not sure what that means, or for that matter if it is even possible to patent sections of an API/library. But things could get messy if Microsoft were to place a cease and desist on this new development. I doubt this would happen, but you never know.

I have to agree, having Direct3D as a native API via Gallium does open up a lot of possibilities for OSS platforms that have severely lacked games. Accelerated graphics on most systems apart from Windows has had little choice up until now, with OpenGL being the only real option. But does this really mean that all of the games that have been developed and are being developed will be ported to Linux and other OSS platforms? That’s an interesting question, and the answer isn’t quite that simple. Let’s look at the macro picture of the industry. For AAA games the PC platform isn’t a priority. Most (maybe all) AAA games today are made with consoles in mind. Yes, there may be a PC port, but it’s the consoles that are the main priority. Most (if not all) gamers who play AAA games on the PC spend a fair amount on their systems, and most of them already have Windows as their main OS. Some do have *NIX systems, but even these few keep a Windows partition around specifically for games. Porting any software to a new platform isn’t a trivial task. Even with the best coding practices and methods, it requires a lot of resources — which aren’t free. Everything from coding, testing, maintaining build setups, writing install scripts and many other things requires time and money. For a AAA game, or for that matter for any game or software, a port to a new platform should show a robust ROI (return on investment). That’s where the crux of the problem lies. There aren’t that many *NIX gamers out there, and if there are, the big studios aren’t seeing them!

Then there are the casual games, which are also a big market. Casual games represent a very different kind of audience. A typical casual gamer is a non-technical person who doesn’t even understand what a hardware driver is, let alone jargon like Gallium, Direct3D, OpenGL or, for that matter, Linux. Most casual gamers will have nothing but a moderately powerful laptop with an on-board Intel graphics chip — which came with Windows pre-installed. This is the kind of player who expects the game to install and run with a single click. They don’t understand driver updates or DirectX versions. For them it matters little which API is better or worse, or which platform supports which API and which doesn’t. Apart from these two broad segments, there are a whole lot of players who will play radical indie games, and this is probably where Linux ports have found some success. This gamer is the tech-savvy computer geek who runs Linux as his/her primary system and isn’t afraid to fire up the console now and then. I must say, some radical indie games have found success in this area. But these games are far from cutting edge. They may be very good games, but you don’t expect Crysis-like graphics from them, and it matters little which API is used, or whether the underlying API runs 5% slower, when your game never drops below the 30 FPS barrier.

There have been lots of debates about OpenGL vs Direct3D, and I’ll refrain from going into that. However, having a choice of accelerated graphics APIs for platforms other than Windows is definitely good all around. Direct3D versions 10 and 11 are well designed APIs, closely tied to current generation hardware. But whether all this will translate into more game ports to Linux and the BSDs is still an open question. The community, as always, will play a vital role, and only time will tell how things pan out.

Can parallel processing really cut it?

When Larrabee was first delayed and then “postponed”, most of us weren’t surprised (at least I wasn’t). Parallel computing, though advocated as a world saver, isn’t the easiest model to program for. Doing everything in “software” (graphics, HPC and all) might not be as easy as was anticipated. The cold hard reality is that languages like C++, Java and their derivatives (mostly OOP ones) were never really designed for parallelism. A bit of multi-threading here and something asynchronous there doesn’t really cut it. Using the full potential of parallel devices is very challenging indeed. Ironically, most of the code that runs today’s software isn’t geared for parallel computing at all. Neither are today’s programmers.
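As a small, contrived sketch of why this is hard (my own example, not from the article linked below): the first loop here is trivially parallel, while the second carries a dependency from one iteration to the next and cannot simply be split across cores by a compiler or a threading library.

```cpp
#include <cstddef>
#include <vector>

void scale(std::vector<float>& v, float s)
{
    // Independent iterations: easy to parallelize.
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] *= s;
}

void prefixSum(std::vector<float>& v)
{
    // Each iteration needs the result of the previous one (a loop-carried
    // dependency), so naively splitting this loop across threads gives
    // wrong results.
    for (std::size_t i = 1; i < v.size(); ++i)
        v[i] += v[i - 1];
}
```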

Experts nevertheless advocate a parallel computing model for the future. But is it easy to switch to? Is an innovation in hardware design, or a radical new compiler that optimizes away your “for() loop”, the real answer? A very interesting article to read (even if you are not into graphics and game programming) is:

http://www.brightsideofnews.com/news/2010/5/27/why-intel-larrabee-really-stumbled-developer-analysis.aspx?pageid=0

Very rarely do I quote articles, but this one is really worth a read. Well-written and well said.

Larrabee isn’t coming just yet.

Hmm… I am disappointed (story). No, I wasn’t expecting the first versions of the technology to be a game changer in the graphics world, or for that matter in HPC or compute, but I was very, very interested in knowing more about the Larrabee technology. Thus far Intel has only thrown out “bits and pieces” about their new tech, and that in no way gives one a clear picture. No, Intel hasn’t given up on the technology, but it seems to have postponed the release in its current form because the performance targets weren’t being met. Ironically, Intel had initially claimed that Larrabee chips would stand up to discrete solutions from ATI and Nvidia. However, it looks like the tech still needs some work to measure up to that.

At this point all we can do is speculate, but the fact is — building a chip that can do HPC and compute and graphics, and have drivers/software/optimizing compilers working perfectly, is a tall order, even for a giant like Intel. I am sure they have done most of it right, but most of it isn’t all of it, and that’s probably the reason we are seeing the launch being canceled in its current form.

Many-core computing is the next big thing, and technologies like Larrabee are the future. I am disappointed because, more than the tech itself, Larrabee would have been a window into how things are shaping up. How does software development scale to the future? Would the new optimizing compilers allow the use of current software methods? Or does it mean a radical shift in the way software systems are built? How would the new tech address task parallelism? — I guess we will have to wait a while longer to see how these (and I am sure many more) questions are answered.

Alienware M17x – The fastest alien amongst laptops.

I was invited to the preview of the Alienware M17x, unveiled by Dell to cater to the high-end, hard-core gaming enthusiast. It was my first experience with the Alienware brand, though I have often read about other high performance laptops from them. Dell is marketing the M17x as “The most powerful gaming laptop machine in the universe”. Well, that is probably correct; at least for now the machine is more than capable of pushing anything out there within its resolution limits. Alienware is known for its high-end machines, and this avatar in the Alienware series is no different, in keeping with the brand image.

The M17x packs some heavy duty, top of the line stuff in its guts, probably far more than what is required for a gaming notebook. Ergonomically the M17x is designed to please the hard-core gamer, and also to make a style statement. Complete with flashing lights, a multi-colored keyboard and scintillating sound, every effort has been made to attract your attention, and everybody else’s. The whole laptop is designed to look different and will stand out from anything else in the room. If you want to show off your “new gaming laptop” then the M17x is probably what you should be looking at.

No high-end gaming rig can be complete without a heavy duty GPU, or should I say GPUs (plural), 2 in fact. The M17x comes with either dual Nvidia GTX 260M or dual GTX 280M GPUs in SLI. I would suggest the 280M. (Well, if you are going for a high-end gaming system, you might as well get a top line GPU.) The 280M is, as of today, the highest performing GPU for notebooks. The laptop comes fitted with an Intel Core 2 Extreme mobile processor, and you have the option of choosing a dual or quad core CPU. The choice of CPU will depend on the type of games played. Games like Oblivion and Fallout 3 are more CPU intensive since a lot of data is streamed in real time, but in any case I don’t think there should be too many problems even with a dual core CPU, since most games won’t go CPU bound with a powerful GPU setup and fast 1333 MHz GDDR3 RAM. Then again, if you are the sort who plays at exceptionally high frame-rates and can’t tolerate even the slightest glitch, then by all means the quad core option is there for the M17x.

While the M17x looks like a laptop, it actually is a mobile desktop. Weighing in at more than 5 kg, it isn’t something you can lug around to every place you go. The weight must be due to the 2 GPUs, the heat dissipation hardware and the large battery needed for such a performance monster. The M17x is without a doubt a high performance gaming rig. I personally tried pushing Crysis at 1440×900 at full AA and AF and there were no visible hiccups or slowdowns; the gameplay was flawless. I bet it will be able to push every game out there without breaking a sweat. Too bad it only has a 17″ screen. For this kind of performance the 17″ screen looks a tad bit small. I would have loved to see a larger, higher resolution display, but I guess the compulsions of space and laptop dimensions made 17″ the largest choice.

The only real nag I found was the lights. At first glance you may (or may not) like the flashing lights and the multi-colored keyboard, but once you start using the machine, the lights are nothing more than a distraction, especially in a fast-paced game. Well, you have the option to turn them off, so I guess that’s not too much of a bother. Also, the only real advantage of a dual GPU setup is for systems with enormous resolutions (2560×1600) or for multi-monitor setups. SLI combos are excellent when rendering with very heavy fillrates, and even at its highest resolution the 17″ screen isn’t quite in the league for dual SLI, considering that it already has the 280/260M. Having 2 GPUs instead of one also means the machine will generate quite a lot of heat, guzzle battery power and weigh substantially more than it would with a single GPU. However, the choice seems to have been made to please the hardest of the hardcore gamers out there. The M17x makes absolutely no compromises on performance, anywhere.

Well, there isn’t much more to say about the machine. My experience with the rig was limited, but it is interesting that Dell launched the Alienware brand in India. India is not known for its hardcore gaming enthusiasts, and you won’t find too many laptops built specifically for “the gamer”, at least nothing in the league of the M17x. Kudos to Dell for that.

Supercomputer@Home

The past few weeks have seen a spate of newswires from leading GPU manufacturers showcasing GPUs with potential supercomputing capabilities, some going as far as saying that a powerful GPU could be used to achieve the power of a supercomputing cluster. Building a supercomputer at home might sound like an impossible task, but the fact is, very soon you could well have one sitting on your desk. No, this is not one of those machines that is just a bit faster than the previous or current generation PC you already have. This machine could well have a motherlode of computing power, and it might not feel all that fast while running your GNOME, KDE or Vista UI, or for that matter your Office applications. In fact it might not feel any different at all to the average user. However, hidden beneath the UI exterior will be a system that could be a power monster, at least for applications that have been designed to take advantage of what has become the latest buzzword in programming: parallel computing.

There has been a lot of talk about how a supercomputer could be built using top line GPUs, but up until very recently GPGPU was a difficult thing to achieve. This was due to the fact that most GPGPU solutions in the past involved tweaking existing graphics APIs (OpenGL and Direct3D) for GPGPU tasks. This method was fraught with problems, mainly because graphics APIs were never designed with general purpose computing in mind. That’s why GPGPU technologies like CUDA were developed, and with their advent and an almost exponential increase in GPU power, the Supercomputer@Home has become a reality.

No, this is not another entry that praises GPGPU. Well, it is, but not entirely. Let’s be fair, GPGPU is only a part of the solution. As I have repeatedly said in my earlier entries, GPGPU has tremendous potential when applied to the correct problem. If you were to believe GPU manufacturers, it would seem that the GPU is the answer to all the problems out there. The fact, however, remains that the GPU is only a part of the solution. GPUs are designed to address data parallelism very efficiently. The roots of this obviously lie in how graphics is processed. Graphics, or should I say modern game graphics, generally requires a lot of parallel data processing, and GPUs have evolved to address precisely that.

Current generation GPUs have immense computing power, and most of us already know that. However, GPUs are not all that great when dealing with task based parallelism. Simply put, GPUs are not designed to run several different tasks concurrently. So for problems involving a lot of separate tasks that could benefit from simultaneous execution, GPUs are of little help. However, if you have a large set of data made up of a lot of smaller data units, and want to perform the same operation on each of those units, then using the GPU could give you a huge performance boost. The solution would involve streaming the entire data set onto the GPU and executing the operation on all the units simultaneously. GPUs excel in such situations.
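Here is a minimal sketch of that data-parallel pattern (my own illustration): one operation applied independently to every element of a large array. It is written with OpenMP on the CPU for brevity; a GPU kernel (in CUDA, say) maps the same per-element work onto thousands of hardware threads instead.

```cpp
#include <cmath>
#include <vector>

// Apply the same operation to every element; the iterations are independent,
// which is exactly the shape of work a GPU chews through efficiently.
void attenuate(std::vector<float>& samples, float factor)
{
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(samples.size()); ++i)
        samples[i] = std::tanh(samples[i] * factor);
}
```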

Which brings us to data streaming. Streaming data onto the GPU can be a costly operation and is best done in large chunks. Streaming data too often and in small chunks can easily offset the performance benefit the GPU has to offer. While this is not a limitation of the GPU per se, and more a limitation of the bus, data transfers to and from the GPU should be kept to a minimum to achieve maximum throughput. Graphics programmers are already aware of this. OpenGL and Direct3D both encourage programmers to send as much data as possible to the GPU in one go, in what is called “batching”. Both APIs advise the programmer to stall the GPU as little as possible, and in many instances even go as far as adding a memory/resource overhead to achieve better throughput.
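In OpenGL terms, batching boils down to something like the sketch below (illustrative only, assuming GL 1.5-level buffer objects are exposed by your headers or an extension loader): one big upload into a vertex buffer object instead of pushing many small chunks across the bus every frame.

```cpp
#include <GL/gl.h>
#include <vector>

// Upload all vertex data for a batch in a single transfer. Draw calls can
// then pull from GPU-resident memory instead of re-sending data each frame.
GLuint uploadBatchedVertices(const std::vector<float>& allVertices)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 allVertices.size() * sizeof(float),
                 &allVertices[0],
                 GL_STATIC_DRAW);   // one large transfer, done up front
    return vbo;
}
```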

After reading the above paragraphs, one must invariably ask the question, “Can all problems be efficiently data parallelized?” The answer is no. A lot of problems can be, but many computing problems can’t effectively take advantage of data parallelism and therefore can’t take advantage of the GPU in general. Also, if you have a small set of data and want to perform a lot of different operations on that same set, the GPU is of little help. It is actually your good old CPU that is designed for task parallelism. While a lot of hype has been going around about the latest generation of GPUs as a potential replacement for supercomputing clusters, the CPU has been left on the sidelines. Or has it?

The next question to ask is, “What role will the CPU play in your Supercomputer@Home?” Well, make no mistake about it, a very significant role indeed. In the time the GPU has been going from strength to strength, the CPU has been undergoing a transition of its own. While things have moved on from the days when clock speeds were treated as marketing tools, the CPU has itself seen some significant development. Although it is nothing to get excited about, the CPU cannot be ignored, especially if you have to work on problems that involve multi-tasking or multi-threading. Most non-trivial programming problems require a mix of programming solutions. Some parts may call for data parallel solutions, others for task parallel ones. Then there are problems that can’t be solved by either. Therefore the CPU will continue to play a critical role in any supercomputing system you may build.

It’s completely possible to build a supercomputer at home today. But building a supercomputer is the easy part. Taking advantage of all this power is a whole other story. It would involve an approach where the programmer has to split his/her design so that the data parallel parts are executed on the GPU while, at the same time, multiple parallel tasks are executed on the CPU. Yes, it would imply a design similar to modern game engines, where data is sorted, batched and sent across to the GPU in its entirety to be processed by the rendering system. While this happens, parts, or rather tasks, of the application can be executed in parallel using conventional multi-threading or parallel programming constructs (OpenMP) to achieve maximum performance.
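A rough sketch of that split might look like the following (the function names are hypothetical stand-ins for engine subsystems, not real code from any engine): the data parallel batch is handed to the GPU, while independent tasks run concurrently on CPU cores via OpenMP sections.

```cpp
#include <cstdio>

// Hypothetical stand-ins for engine subsystems (illustrative only).
void submitBatchedDataToGPU() { std::printf("submit batch to GPU\n"); }
void waitForGPU()             { std::printf("sync with GPU\n"); }
void updatePhysics()          { std::printf("physics\n"); }
void updateAudio()            { std::printf("audio\n"); }
void updateAI()               { std::printf("AI\n"); }

void runFrame()
{
    submitBatchedDataToGPU();       // data parallel work goes to the GPU

    #pragma omp parallel sections   // independent tasks spread across CPU cores
    {
        #pragma omp section
        updatePhysics();
        #pragma omp section
        updateAudio();
        #pragma omp section
        updateAI();
    }

    waitForGPU();                   // synchronize before using the GPU results
}
```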

A machine with, say, a Tesla GPU or 2 HD 4870 GPUs connected via CrossFire, the new and upcoming Core i7 CPU from Intel and oodles of memory could very well be a machine with supercomputing capability. But where would one use so much power? Obviously computer games are one area. For a game developer like me, you just can’t have too much power. I always manage to stress out everything I have on my machines, however high end they may be. But seriously, where else could you use all this power? Someone who wants to do heavy duty scientific calculations could indeed benefit from such a machine. Another use could be compression algorithms, especially video and audio compression. People in the video/audio editing business could also benefit from such setups. However, for the average Joe, so much power is all but useless.

Gamepads, Joysticks! How do you play with those?

I integrated joystick support into the game engine a long time ago, but I never actually played the Doofus game using a joystick or a gamepad until now. One of the testers logged an issue last week saying that the game’s camera movement was a bit slow for game controllers in general, so I decided to play the game myself with a joystick. For the record, I never play any game with any accessory other than the keyboard and mouse, and after my recent experience with the gamepad, I must say I missed the mouse quite a bit. Maybe it’s just me, or maybe I have harbored a strong disdain towards game controllers ever since my days with God of War; though I am a total fan of the GOW series, the experience with game controllers while playing that game was more than a little unpleasant. I think I have been playing games with the mouse for too long, perhaps so much so that I have grown too accustomed to the keyboard and mouse. I truly don’t know. However, and this could very well just be me, I find controlling the camera using the mouse far simpler and more intuitive than using a gamepad or joystick axis.

I tried a lot of different games this week with a gamepad, which, for the better part of this year, has sat inside the cupboard. I told myself, “It’s just a matter of time before I get the hang of this thing.” No chance! With every game I try it’s the same story. I just give up after struggling with the controller for about 10 mins. It’s been like 3 days and I still can’t control the Doofus game’s third person camera, which by the way is not at fault 🙂 . For me, controlling Doofus’ third person camera just feels a lot more natural with the mouse than with the gamepad. Not that I can’t do it, it just feels a lot more comfortable with the mouse. Fortunately for people who dislike the mouse, Doofus does run perfectly well on any game controller.

Some people say game controllers are great for flight simulators and maneuvering vehicles. Sorry, I haven’t had time to play those. I can tell you FPS games are almost impossible to play: you can’t aim with these things and you get fragged pretty easily. Maybe combat games fare better, but again I haven’t had time to play those either. I ran the Tomb Raider demo I have on my system and even there I found the gamepad to be more than a challenge.

So, after this bout of testing, the gamepad goes right back into the desk it came from. OK, maybe I have ranted enough for one post!

The HD 4850 and the story with AMD/ATI.

First, the HD 4850. I was testing the game on the new HD 4850 (Palit 512MB) today and observed some interesting things about the graphics card. For one, it gives serious bang for the buck. Doofus 3D clocked at about 140 FPS at a resolution of 1024×768, AF 16x, with graphics quality set to high. Even with AA 2x, Doofus 3D clocks more than 120 FPS, and I have a strong suspicion the game was going CPU bound at those frame-rates, since the machine had a 3 year old CPU. I can tell you for a fact, the card is a serious performance monster, but then again Doofus 3D ain’t a top line game. However, for me this is the first time I have seen Doofus 3D under 4x AA and 16x AF running at a playable FPS, since up until now I have had only GeForce 6200, 6600 (and to some extent 8600) cards. There is no denying that the HD 4850 is more than worth its price for someone who is looking for a budget card and expects to run most of today’s top-line games. The card runs a little hot, but that’s to be expected given the amount of triangles it can push and the effects it can deliver. Hats off to AMD/ATI in that regard. If you are someone who is looking for a mid-range card right now, the HD 4850 is excellent value for money.

That was the overview from a non-programming point of view. Now the programmer in me has something to say. The card may be excellent, however it’s not all that cozy with ATI’s drivers. The OpenGL drivers are a mess, with the bundled driver not even supporting extensions like EXT_stencil_two_side. Even basic functionality (for example glDrawRangeElements()) seems to be broken at times, even showing messed up graphics when using vertex arrays on older cards. The exact same functionality works fine under DirectX. Let’s say it’s safe to assume that the GL drivers haven’t been updated in a while and/or AMD/ATI just isn’t interested. The only issues that were reported in this round of testing were on ATI cards, so I had to literally debug the application on ATI hardware to ascertain that these were indeed driver problems. Some of the issues I have mentioned occur on, guess what, the HD 4850 as well. The only workaround seems to be vendor-specific hacks! That doesn’t make me a happy programmer at all!
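For the curious, a vendor-specific hack of this sort usually looks something like the sketch below (my own illustration, not the actual engine code, and assuming GL 1.2-level headers or an extension loader): sniff the GL_VENDOR string and fall back from glDrawRangeElements() to plain glDrawElements() on the suspect driver.

```cpp
#include <GL/gl.h>
#include <cstring>

// Draw an indexed triangle list, dodging the broken glDrawRangeElements()
// path on ATI drivers by falling back to glDrawElements().
void drawIndexedTriangles(GLsizei indexCount, const GLuint* indices,
                          GLuint minIndex, GLuint maxIndex)
{
    const char* vendor =
        reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    const bool suspectDriver = vendor && std::strstr(vendor, "ATI");

    if (suspectDriver)
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indices);
    else
        glDrawRangeElements(GL_TRIANGLES, minIndex, maxIndex,
                            indexCount, GL_UNSIGNED_INT, indices);
}
```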

The story with Direct3D is a lot better: no issues were observed under the DirectX renderer of the game. That tells you something, doesn’t it!

It’s a cracker!

It’s probably well known that GPUs are powerful beasts, and I have repeatedly pointed out on this blog that the awesome power of the GPU can be used for more than just graphics. For tasks and computations that can be executed in parallel, GPUs are a lot faster and more powerful than CPUs. So it won’t come as a big surprise to learn that people have put GPUs to good use for all kinds of stuff. GPGPU has been more than a buzzword of late, and with technologies like CUDA and Larrabee it has become even easier to get at all this power. However, like every other piece of technology, GPGPU also has its downsides. This article I read recently briefly outlines the fact that the GPU could be put to work as a generic brute-force cracker. I am no expert in cracking, but I am a person who has played around with GPGPU long enough to understand how serious this could be. I read the article and the first thought that crossed my mind was, “Hey, you know, this is the kind of thing the GPU actually excels at!”

GPUs today can deliver computational power in teraflops. Very soon we could have hardware that can do hundreds of times that. There is also another interesting thing GPUs allow you to do: you can stack a series of these buggers together and achieve a phenomenal boost to this already awesome power. You could increase the computational power of a machine by orders of magnitude by stacking GPUs in parallel. It’s a disturbing fact that such power, until a few years ago, was only available on top-of-the-line mainframes. Today you could build a machine with the power of a supercomputer from components probably available at your nearest computer hardware store. That doesn’t bode well when you consider that anyone with a brain and time to kill can hack up a brute-force cracker and put it to work — and with enough “horsepower”, might even succeed.

As more and more powerful GPUs hit the market and as GPGPU technologies progress, we will see newer machines with unheard of computing power on our desks and laps. While this means more interesting games and faster number crunching for most of us, there are those who will put such tech to vile use. What we probably also need are better security systems and stronger encryption systems along with better games and faster number crunchers.

Tweaking the game to run on a wide range of hardware.

For the last week I have been involved in a rather uninteresting activity. I have been literally throwing the game at all possible hardware configs hoping it will run. All of this (yes, again) to find out how the game fares when exposed to different hardware configurations. If it seems like this activity is rather mundane, then let me assure you — it is. Well, not entirely 😀 . It takes some effort to get a game to scale seamlessly to all kinds of hardware, and currently I am enduring all the pain of crappy drivers and broken functionality, which, should I say, underscores some of the major headaches in real-time graphics development. It’s not like you can throw the game with its peak settings ON and expect it to run on a crappy Intel on-board graphics card. Such a thing will just end in disaster. The game must scale to different kinds of hardware, in our case especially so, and it must do that seamlessly and effectively.

Doofus 3D is uniquely placed. It doesn’t aim to be a top-line, hardware-intensive, hard-core-gamers-only, triple-A (AAA) title. Neither is it a 2D game capable of running flawlessly under software rasterized graphics on your grandma’s old school PC. It is geared more towards intermediate level hardware, the kind most people have in their work laptops and home desktops. This effectively means an extremely wide range of hardware to cater to, and that in turn means scaling the game’s software paths (internally) based on a *lot* of underlying factors. Assuming that a player has a specific piece of functionality available on his hardware setup can be catastrophic. Such an assumption could mean a total failure of the game on a machine, and the potential loss of a buyer in the end.

While drawing up the specs of Doofus 3D we were especially careful not to go overboard with graphics galore. Even with careful planning there was significant feature creep, and with each new feature that was added, new countermeasures had to be put in place so that the game would scale to lower-end hardware. Not everything was straightforward, but we still managed to push it through. If you have been following my blog for some time now, you would know that this is not the first time I have been into such activity. I (personally) run such tests after each beta (feature addition/feature freeze) of the game. That is probably why we haven’t faced too many problems this time around.

For Doofus 3D we followed a process that is a bit different from traditional software development. Every beta under this game project was actually a feature complete, runnable version of the game. Before or between betas, every release was an internal alpha version. A beta meant, “A set of features is complete enough to be tested”. After each beta, each feature was tested on various hardware setups. Something like an iterative method of software development, but not quite; I would say a process tailored specifically for our project, and more specifically for our situation, given our limitations.

Doofus 3D runs on most middle rung hardware without too many problems. It will run on on-board graphics cards too, but I find Intel on-board graphics to be an abomination. Hopeless hardware support for 3D graphics and equally crappy driver support! Enough reason for the engine to scale the game down to a low setting when it detects an Intel graphics card. The situation with NVIDIA and ATI cards is a lot better, with ATI’s low-end cards (at comparable price points) consistently outperforming NVIDIA’s. That said, NVIDIA has the most stable hardware and drivers, and most settings work uniformly across cards and driver setups, though there can be problems there as well. ATI’s drivers can be buggy at times and, in the case of OpenGL, can be totally broken. Fortunately the O2 Engine and the Doofus game can use either Direct3D or OpenGL as the rendering API. For any high-end, or for that matter even most mid-range graphics cards, Doofus 3D is not a problem at all.
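For what it’s worth, the detection-driven scaling mentioned above boils down to something like this simplified sketch (illustrative only, not the actual O2 Engine code): pick a quality preset from the reported GL vendor, dropping to low on Intel on-board chips and staying cautious on the OpenGL path with ATI.

```cpp
#include <GL/gl.h>
#include <string>

enum Quality { QUALITY_LOW, QUALITY_MEDIUM, QUALITY_HIGH };

// Choose a default quality preset based on the OpenGL vendor string.
// (A real engine would also look at extensions, VRAM and benchmark numbers.)
Quality pickDefaultQuality()
{
    const char* vendor =
        reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    const std::string v = vendor ? vendor : "";

    if (v.find("Intel") != std::string::npos)
        return QUALITY_LOW;     // on-board graphics: scale everything down
    if (v.find("ATI") != std::string::npos)
        return QUALITY_MEDIUM;  // capable hardware, but wary of GL driver bugs
    return QUALITY_HIGH;        // NVIDIA and the rest: full settings
}
```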