Over-patterning software design.

Ah! Design Patterns! Those seemingly magical concoctions of code that appear to solve all the problems plaguing software design. So profound is their initial impact that the engineer begins to believe he or she has finally found mythical scrolls of wisdom, bestowed by divine beings, and that every design problem can now be automatically deconstructed into a set of familiar patterns. Using them seems to solve every challenge software engineering has to offer, and the engineer starts to believe that all that is ever needed on the desk is a copy of those very patterns. Yes, there was a time when I was guilty of the very same thing.

There is also the misconception that patterns are drop-in replacements for traditional software design practices. It’s tempting to approach a design problem with the pre-packaged solution a pattern seems to offer. “Oh, we have a Composite, that means we need a Visitor for collaboration. So let’s use a Visitor then.” That was easy, but what was missed was the overhead of designing something as a Visitor. No one asked why a Visitor was needed, or whether it was needed at all. Often the only justification given for such design decisions is, “… because a design pattern says so.” That is not what design patterns advocate at all. Excessive use of design patterns while designing software inadvertently leads to over-engineering.
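To make that overhead concrete, here is a minimal C++ sketch (the class names are hypothetical, chosen purely for illustration) of a Composite paired with a Visitor. Notice the machinery the pattern demands: every node must implement accept(), and adding a new node type later means changing the visitor interface and every concrete visitor ever written.

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    class File;
    class Directory;

    // The visitor interface must enumerate every node type. Add a new
    // node type later (say, Symlink) and this interface, plus every
    // concrete visitor, must change -- that is the engineering cost.
    struct Visitor {
        virtual ~Visitor() = default;
        virtual void visit(const File&) = 0;
        virtual void visit(const Directory&) = 0;
    };

    // Composite base: every node carries accept() boilerplate.
    struct Node {
        virtual ~Node() = default;
        virtual void accept(Visitor& v) const = 0;
    };

    class File : public Node {
    public:
        explicit File(std::string name) : name_(std::move(name)) {}
        void accept(Visitor& v) const override { v.visit(*this); }
        const std::string& name() const { return name_; }
    private:
        std::string name_;
    };

    class Directory : public Node {
    public:
        explicit Directory(std::string name) : name_(std::move(name)) {}
        void add(std::unique_ptr<Node> child) { children_.push_back(std::move(child)); }
        void accept(Visitor& v) const override {
            v.visit(*this);
            for (const auto& c : children_) c->accept(v); // recurse into the composite
        }
        const std::string& name() const { return name_; }
    private:
        std::string name_;
        std::vector<std::unique_ptr<Node>> children_;
    };

    // One concrete visitor; a real design would accumulate several.
    struct PrintVisitor : Visitor {
        void visit(const File& f) override { std::cout << "file: " << f.name() << "\n"; }
        void visit(const Directory& d) override { std::cout << "dir: " << d.name() << "\n"; }
    };

If printing is the only operation you will ever need, a plain virtual print() on Node does the same job with a fraction of the machinery; the Visitor only pays for itself when many unrelated operations must traverse a stable node hierarchy.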

This contradicts the popular perception that patterns were created to address the most commonly occurring design problems. Yes, that is true, and no, I am not trying to be a design pattern heretic and declare that patterns are useless. Patterns are in fact very useful when applied correctly. It is true that most software designs can be broken down into sub-designs which can collectively be solved using a combination of design patterns. But just because they can be, doesn’t mean they have to be. A designer well versed in design pattern use can quickly find adaptable patterns for most design problems, and can probably get them to work together if he or she understands the modalities of pattern behavior. So there is a dichotomy here: design patterns lead to over-engineering, and yet they are useful. Which is it, then?

The truth lies somewhere in between. Most problems with “over-patterning” begin when there is an overbearing urge on the part of a designer to adapt a design, and sometimes downright bend it, to fit a design pattern. Just because a pattern fits or solves a problem doesn’t mean it has to be used. Loading a software design with patterns is a mistake. One must remember that patterns add cost, and by cost I mean engineering cost. Strange, an engineering solution adding an engineering cost? But that’s how it is with any engineering problem in any engineering domain. Ironically, if you refer to the write-up of each pattern you will often see these costs clearly pointed out by the authors. Call them disadvantages, limitations, issues or whatever other name you come up with, but the reality is that these issues aren’t trivial. An oversight, or a failure to understand their implications for the overall design of a software system, is what leads to overly complex, over-engineered solutions.

An excellent article to read on this subject is Joshua Kerievsky’s Stop Over-engineering.

Speeding up Visual Studio 2010 on XP and Vista.

UPDATE (Nov 2014): Given how many hits this and the other post are generating, I have to warn you that VS 2010 is seriously old. With the newer Visual Studio versions you can still compile for XP, so there is no point in continuing to use VS 2010 unless you are already on it and pushing a deadline. Please read the last section of this post for links to the free and commercial versions of VS.

(Nov 2010): A quick post: I was running Visual Studio Express 2010 on XP a couple of days back and found it to be rather slow. The intellisense was performing horribly and the entire system was sluggish, with a ridiculous amount of disk access, almost to the point where I had to physically shut the system down using the power switch. I initially thought it was an install problem, but realized it wasn’t after losing another half hour on a reinstall. After googling around (which I should have done earlier) I found that other people had similar problems, and the solution is rather simple: you just need to update the Automation API to version 3.0. 🙁 Windows 7 already ships with the latest API and doesn’t have this problem.

UPDATE: Another tip from a friend. Apparently you can speed things up even more by using the /SafeMode switch. Unfortunately it may create problems with any third-party plugins you have installed in Visual Studio. For Visual Studio Express, which doesn’t support plugins, you can safely try this option. I must say, however, that I didn’t notice much of a difference myself on my current project.

UPDATE 2: Apparently all my problems were solved after following steps 1, 2, 3 and 4 mentioned here: http://msdn.microsoft.com/en-us/vstudio/ff716700

Update (not really): Neither I nor most programmers I know who use VS 2010 could solve this issue satisfactorily. I would recommend moving projects to Visual Studio 2012. It is much better and more stable than VS 2010. It’s been out for a while now, and there’s no point in continuing with VS 2010, which clearly has issues with speed and disk access.

Visual Studio 2012: http://www.microsoft.com/visualstudio/eng/downloads
Visual Studio 2012 Express: http://www.microsoft.com/visualstudio/eng/products/visual-studio-express-products

It’s time to move on to the latest version of Visual Studio, or, if you are an indie/hobbyist, to the free Visual Studio Community Edition.

Direct3D 10/11 coming to Linux … What about games?

No, April 1st is still more than 6 months away, and yes, you heard me right: Direct3D versions 10 and 11 are indeed coming to Linux. How is this even possible? Well, it is possible since nouveau moved to Gallium3D, which allows the Direct3D API (actually any API) to be exposed via a front end called a state tracker. Interestingly (and there seems to be a lot of confusion about this on public forums), Direct3D will be a native API under Gallium, much like OpenGL is currently. It won’t be something that emulates Direct3D via wrappers around OpenGL, meaning you will be able to write and compile Direct3D code directly on Linux or BSD-based systems that support the nouveau driver. Initially I was a bit skeptical of such an approach, since the Direct3D API is intertwined with the Win32 API, but the author seems to have solved this by using the Wine headers. I don’t know the pitfalls (if any) of such an approach, but it seems to have worked for him, and it would seem the logical path to take (instead of breaking API compatibility). He clearly outlines the motivation behind the Direct3D port, and kudos to him for doing something that was all but inevitable given the no-show of Longs Peak.
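For the curious, the code in question is just ordinary Direct3D 10. A minimal device-creation sketch (standard D3D10 calls, nothing Gallium-specific, and assuming the Wine headers stand in for the Windows SDK on Linux) would look something like this:

    #include <d3d10.h> // on Linux this header would come from Wine

    int main() {
        // Plain, by-the-book Direct3D 10 device creation. The promise of
        // the Gallium state tracker is that exactly this kind of code
        // compiles and runs natively on a nouveau/Gallium stack.
        ID3D10Device* device = nullptr;
        HRESULT hr = D3D10CreateDevice(
            nullptr,                      // default adapter
            D3D10_DRIVER_TYPE_HARDWARE,   // hardware acceleration
            nullptr,                      // no software rasterizer module
            0,                            // no creation flags
            D3D10_SDK_VERSION,
            &device);
        if (FAILED(hr)) return 1;

        // ... create a swap chain, shaders, and draw ...

        device->Release();
        return 0;
    }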

Naturally, a native Direct3D implementation will allow game developers to write cross-platform code, and will even allow existing engines/games that use Direct3D versions 10 and higher to be ported to platforms that have a Gallium driver. W00t! This is amazing, almost too good to be true, isn’t it? But before we gamers jump for joy, there are still a few things that have to fall into place before Direct3D on Linux is up and running. First and foremost is support. Hardware vendors like Nvidia and AMD must support Gallium in their drivers, or OSS drivers must be written (and are being written) to take their place. This is paramount, since without such an interface no front-end API (Direct3D or OpenGL) will get hardware acceleration via Gallium. Second, and more importantly, the guys at Redmond must allow such an implementation of their Direct3D API. An API itself can’t be copyrighted, and the author seems to have steered clear of any Microsoft code, so theoretically this shouldn’t be a problem. But then again, I am no legal eagle, so I can’t really say. There have been rumors that there are patents on sections of Direct3D. I am not sure what that means, or whether it is even possible to patent sections of an API/library, but things could get messy if Microsoft were to place a cease-and-desist on this new development. I doubt it would happen, but you never know.

I have to agree, having Direct3D as a native API via Gallium does open up a lot of possibilities for OSS platforms that have severely lacked games. Accelerated graphics on most systems apart from Windows have had little choice up until now, with OpenGL being the only real option. But does this really mean that all the games being developed will be ported to Linux and other OSS platforms? That’s an interesting question, and the answer isn’t quite that simple. Let’s look at the macro picture of the industry. For AAA games the PC platform isn’t a priority; most (maybe all) AAA games today are made with consoles in mind. Yes, there may be a PC port, but it’s the consoles that are the main priority. Most (if not all) gamers who play AAA games on the PC spend big on their systems, and most of them already run Windows as their main OS. Some do have *NIX systems, but even those few keep a Windows partition around specifically for games. Porting any software to a new platform isn’t a trivial task. Even with the best coding practices and methods, it requires a lot of resources, which aren’t free. Everything from coding, testing and maintaining build setups to writing install scripts takes time and money. For a AAA game, or for that matter any game or software, a port to a new platform must show a robust ROI (return on investment). That’s where the crux of the problem lies: there aren’t that many *NIX gamers out there, and if there are, the big studios aren’t seeing them!

Then there are casual games, which are also a big market. Casual games represent a very different kind of audience. A typical casual gamer is a non-technical person who doesn’t even understand what a hardware driver is, let alone jargon like Gallium, Direct3D, OpenGL or, for that matter, Linux. Most casual gamers have nothing but a moderately powerful laptop with an on-board Intel graphics chip, which came with Windows pre-installed. This is the kind of player who expects the game to install and run with a single click. They don’t understand driver updates or DirectX versions. It matters little to them which API is better or worse, or which platform supports which API. Apart from these two broad segments, there are players who play radical indie games, and this is probably where Linux ports have found some success. This gamer is the tech-savvy computer geek who runs Linux as a primary system and isn’t afraid to fire up the console now and then. I must say, some radical indie games have found success here. But these games are far from cutting edge. They may be very good games, but you don’t expect Crysis-like graphics from them, and it matters little which API is used, or whether the underlying API runs 5% slower, when your game never drops below the 30 FPS barrier.

There have been lots of debates about OpenGL vs Direct3D, and I’ll refrain from going into that. However, having a choice of accelerated graphics APIs for platforms other than Windows is definitely good all around. Direct3D versions 10 and 11 are well-designed APIs, closely tied to current-generation hardware. But whether all this will translate into more game ports to Linux and the BSDs is still an open question. The community, as always, will play a vital role, and only time will tell how things pan out.

Can parallel processing really cut it?

When Larrabee was first delayed and then “postponed”, most of us weren’t surprised (at least I wasn’t). Parallel computing, though advocated as a world saver, isn’t the easiest model to program for. Doing everything in “software” (graphics, HPC and all) might not be as easy as was anticipated. The cold hard reality is that languages like C++, Java and their derivatives (mostly the OOP ones) were never really designed for parallelism. A multi-threading-here and an asynchronous-there doesn’t really cut it; using the full potential of parallel devices is very challenging indeed. Ironically, most of the code that runs today’s software isn’t geared for parallel computing at all. Neither are today’s programmers.
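To see why a multi-threading-here and an asynchronous-there don’t cut it, consider a contrived C++ sketch (hypothetical functions, purely for illustration). The first loop is embarrassingly parallel; the second carries a dependency from one iteration to the next, and neither naive threading nor today’s compilers will parallelize it for you. It has to be redesigned by hand, e.g. as a parallel prefix scan.

    #include <cstddef>
    #include <vector>

    // Each iteration touches only its own element: trivially parallel,
    // safe to split across threads or SIMD lanes.
    void independent(std::vector<float>& a, const std::vector<float>& b) {
        for (std::size_t i = 0; i < a.size(); ++i)
            a[i] = a[i] * 2.0f + b[i];
    }

    // Loop-carried dependency: iteration i needs the result of i-1.
    // Naively splitting this across threads gives wrong answers; it
    // must be rethought as a parallel prefix sum (scan) to scale.
    void running_sum(std::vector<float>& a) {
        for (std::size_t i = 1; i < a.size(); ++i)
            a[i] += a[i - 1];
    }

Code shaped like the second loop is everywhere in existing software, which is why retrofitting parallelism is so much harder than just adding threads.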

Yet experts advocate a parallel computing model for the future. Is it easy to switch to? Will an innovation in hardware design, or a radical new compiler that optimizes away your for() loop, be the real answer? A very interesting article to read (even if you are not into graphics and game programming) is:

http://www.brightsideofnews.com/news/2010/5/27/why-intel-larrabee-really-stumbled-developer-analysis.aspx?pageid=0

Very rarely do I quote articles, but this one is really worth a read. Well-written and well said.

Busy busy busy…

Damn, it’s been the longest time ever. No, I haven’t forgotten about writing, but I’ve been involved with a lot of work of late.

  • Game Art integration.
  • Testing.
  • Pushing 3 other projects (2 game related and 1 other misc project).
  • Plus one new project.

Very little time, too much to do. 🙂

It’s getting a bit better, so hopefully you will hear more of my ‘self-centered’ ‘worthless’ rants in the coming days 😀 .

Norton 360 4.0.

I received a complimentary copy of Norton 360 a couple of weeks back, but only managed to install it now. Let me start off by saying I am pretty impressed with the number of products integrated within the Norton 360 package. For a package this large, the installation was silent, quick and quite clean. Everything got installed correctly and I could start using the software in about 5 minutes flat. I tried almost everything and found nothing I could really complain about. Most things are pretty straightforward, except maybe the firewall settings, where you do have to turn to the application help if you want to customize things. However, even the firewall was configured correctly from the word go. A full thumbs-up for that. The virus/adware/spyware scans are also pretty good, albeit a bit slow.

I personally had never used a Norton product before this one. However, Norton products used to be installed on machines where I worked, and the common complaint about them was, “They seem to hog computer resources and are really slow.” Well, I can’t say this version is a resource hog, but it is still on the slower side. The problem I found with 360 is a large number of page faults, which could explain why things seem to slow down. My machine has 2 GB of memory, which may not be a lot, but 2,500,000 page faults in 3 hours is way too much. I guess this is because of all the checking that goes on while the software is running. The application does, however, have a surprisingly low memory footprint.

The software integrates with internet browsers (IE 6 and higher, Firefox 3.0 and later) to protect against phishing websites. I am, however, having trouble with the IE integration: IE sometimes becomes unresponsive, and has sometimes taken ages to start up and load since I installed the software. The Firefox integration is pretty good, but I found “Norton site safety” marking some hacker/warez and illegal sites as “safe”. This is clearly a lapse in the software, especially on sites known for malware/spyware/viruses and/or phishing. Some were even marked with “transaction protection” and “privacy protection”. At the very least such sites should have been marked “unknown”.

I can’t comment on the backup system for a simple reason: I have an elaborate backup system of my own for my projects and I don’t want to mess that up. But make no mistake about it, a backup system is integral to any good security solution, and Norton 360 does provide one.

Lastly, Norton 360 also has a module that will tune up your system, clean redundant and temporary files and optimize your disks for performance. This is an added bonus, and though these things don’t particularly fall into the category of system security, they are probably equally important.

I think Norton 360 is a good, solid, all-round package focused on the security of a system. It does everything that should be done to keep your system safe, and more. The team behind the product has taken into consideration every aspect of security, including having a backup system in place; if worse comes to worst, you have the option of restoring your data from an online source. Having said that, the product did seem a bit slow while scanning. Still, for an average user Norton 360 is a good solution.

Things I liked:

  • Comprehensive package for your computer security. Good integration of products.
  • A lot of focus on multiple levels and aspects of security, including backups. Covers almost everything you need to keep yourself safe and/or recover from a malware/security attack.
  • Good support.
  • Easy installation of a complex security solution.

Things I found that could be improved:

  • Not the fastest around; it’s a bit on the slower side and could be much faster.
  • Parts of the user interface could be daunting for a non-technical user, and some configurations could be complex for some users.
  • Does a lot of background checking, which gives the user the feeling that the system is slowing down.
  • Some warez and dangerous sites were marked as safe.
  • Browser integrations could be improved.
  • A bit pricey.

General tips to protect yourself from malware/spyware/viruses/phishing. (These do not specifically apply to Norton 360)

  • Choose the right browser and plugins. Firefox has Adblock Plus, WOT, NoScript and a host of other plugins that can reduce the number of unwanted scripts/ads that run on webpages. This has two advantages: a) it will make your browsing faster by cutting down bloated ads and flash scripts, and b) it automatically reduces the risk of running malicious scripts on unfriendly websites.
  • Avoid warez and illegal download sites like the plague. The No. 1 reason for getting malware on your desktop is visiting and downloading from such sites.
  • Never give out your password, credit card number, or for that matter any personal information, to anyone on the internet or on the phone, period! This may sound like stating the obvious, but you would be surprised how easily people are fooled into giving away their personal information. For example, most people don’t think twice about handing over their email address along with their email password when signing up on some social networking site. What if this information is used for identity theft? I am not saying it will happen, but it could happen! Remember those messages you get in your mail: “AFriendOfYours is now using SomeSocialNetworkingSite.com, come and join him/her and be a part of the community!” Many such sites will ask for your email address and your email password (to get access to your email) so they can automatically connect and invite your friends. These sites can go through your email and build a profile on you, including your habits, your friends, where you go and what you do online. God forbid you do online transactions, or have your bank statements emailed to you, because then nothing prevents them from knowing all about your finances. This is how phishing takes place.
  • Update and deep scan (antivirus and adware) your system at least once a week. A good antivirus/adware/spyware solution will auto-update regularly.
  • If you are using Windows, never turn off the firewall. XP (and higher) has a built-in firewall, and Norton and others offer more elaborate solutions. Use them, and don’t turn them off under the pretext of faster browsing; firewalls rarely affect browsing speed. On systems other than Windows it’s also always good to have a firewall.
  • Apply updates to your system regularly.
  • Avoid using your workplace computer for private stuff. Remember, computers at your workplace can be, and often are, monitored. Every key you press can be logged by a key logger, and some of these systems are extremely sophisticated and actively used by organizations to monitor employees.

Bash the Flash.

It’s almost fashionable to bash the flash these days. Everyone is doing it: the big, the small, the wise, and sometimes people who don’t seem to fully understand the argument. For a technology that has been around for almost 15 years, and was probably the only platform capable of delivering rich web content for the better part of that time, some criticisms may sound a bit too harsh. Or are they? Some of it is indeed true: flash applications have been known to slow a brand-new quad-core machine to a crawl while doing nothing more than streaming a simple video, and there has been more than one instance when an entire browser froze up because flash hogged every available resource. But before we go flash bashing, let’s look at why we are so overly dependent on the technology, and why, suddenly, after 15 years of loyal service, flash has become such a thorn that everyone likes to crib about it.

When flash first arrived on the scene, it was this cool new technology with which you could program interactive webpages, much to the delight of web designers. But as the world would soon realize, there was a downside to depending so heavily on the new technology. In those days fast internet was a luxury of the few, and flash content would take ages to download and display on dial-up connections (yes, I was in university back then and couldn’t afford a broadband connection 😀 ). So flash adoption was initially limited. But as the internet grew and speeds increased, more and more websites started adopting flash. The logical next step for a rich content platform was games, and flash game developers exploited this; we began to see more and more flash games being developed. Flash enjoyed renewed interest, and web applications started being built with it.

So why flash? Simply because there was no alternative. If you wanted to make a rich web application, there was no better solution. True, some Javascript workarounds existed, but until recently these were pretty limited compared to what flash could achieve. There was an even bigger reason flash got adopted, and is now on almost every computer that connects to the internet: streaming video. Yes, there were other competing formats, but most were closed ones and flash was favored over them. Only now does the HTML 5.0 standard talk about streaming video and sound; this revision should have been done 10 years ago, and there is no logic to the delay, but it is what it is, and flash was and still is the leading tech/plugin for watching streaming video on the internet. The story doesn’t end there: there is still a debate about which video codec/standard to use for HTML 5.0, and patent-encumbered video technologies mean this debate will last a while longer. Also, most streaming media sites still deliver content in flash (flv) format and haven’t yet switched to HTML 5.0. So before you go blaming flash for all your browser troubles, think about it: do you have a choice? As it stands today, not quite.

It’s true that flash has its problems. But these problems were there before, so what’s changed now? The answer is twofold. First, people have started watching more streaming content online and, as a result, inadvertently use the flash plugin more. Second, a new technology has silently crept up on flash, and that is your humble Javascript. As Javascript got faster, websites got faster as well. Things which were once only possible with flash could now also be done with Javascript. Developers found new ways of writing rich web content using Javascript (AJAX) and slowly started avoiding flash where equivalent Javascript functionality existed. Mind you, I am not saying Javascript is a replacement for flash; I am saying you can now do so much more with it than you could earlier, so an obvious comparison with flash was, and is, being made. Javascript continues to grow, and with the integration of technologies like WebGL it has rapidly narrowed the gap with flash and may even surpass it in some areas.

People blame flash, but it is not flash the technology that is the problem; it is the implementation. The flash plugin, and its integration with the browser, is what causes pains like system slowdowns and browser crashes. Flash today is JIT compiled, much like Javascript, so there are no problems on that front. Contrary to what some believe, ActionScript is a dialect of ECMAScript, much like JavaScript, and is in no way inferior to the latter. In my view, then, the problem with flash is an engineering one: ActionScript and flash aren’t deficient or outdated, as some would suggest; it’s the plugin implementation that needs to be looked at. If that were cured, flash wouldn’t be at all bad.

Avatar

It’s been some time since the movie was released, but I only managed to watch Avatar yesterday. Ok, before I proceed, let me put up a “spoiler alert”: if you haven’t seen the movie, go watch it first and then read the rest of this entry ;-).

I would describe the movie as “great graphics, superbly imaginative environments, great blending of live actors and CG, but a rather bland and ordinary storyline”. The movie is graphics galore, but the story itself is dull and predictable. Throughout the movie you can almost sense what’s going to happen next, and that’s exactly what happens, leaving little room for mystery. I am a James Cameron fan (who isn’t?), and in most of his movies he finds a subtle, uncanny way to weave a wacky (but believable) story around the whole action-movie concept. Unfortunately, Avatar doesn’t quite have all of that.

The dull story, however, can easily be forgiven, given that most of the time is spent admiring the visual effects, graphics and stunningly beautiful environments modern CG can achieve. I found the movie rather enjoyable. I guess Avatar is natural fodder for a 3D graphics geek like myself, but apart from that the movie does an excellent job of blending graphics with real-life actors. You would be forgiven for mistaking CG for reality, especially when live human actors interact with CG actors and the environment. I was doubly interested in how the environment behaved in response to the human actors’ actions. The most difficult part of compositing a 3D CG environment with real actors is the interaction of the CG elements with the actors: the subtle swish of the grass when an actor runs, the rustle of the leaves when an actor pushes through a bush. These are the small things that make a CG scene believable. My hunch is that all of it was done and captured in real time in a 3D studio environment.

The truly spectacular achievement of the movie and its technology, and the one that impressed me the most, is the facial animation. Any computer-modeled facial animation is bound to be hit by the uncanny-valley effect, but in Avatar the facial expressions, though not flawless, do turn a page (no, they are not fully human-like, but they are definitely believable). The technological achievement is commendable, and some critical reviews don’t do justice to what is a pretty good effort on the part of the CG team. I know how hard it is to build a seamless facial animation system (I am working on one myself), and the movie’s mocap technology for simulating facial movements does bring in a lot of realism. A lot of ideas there for future gaming projects.

I am pretty impressed with the movie as a whole. Yes, it has a linear, ordinary story, but it does push the envelope of CG technology. The graphics are stunning, but what is more interesting is the composition of graphics and human actors. For me the facial animation was probably the best part. Capturing a live actor’s facial movements on a CG character is not a new idea, but Avatar does it very well indeed.

Happy new year…

Best wishes and a happy new year to all.

Larrabee isn’t coming just yet.

Hmm… I am disappointed (story). No, I wasn’t expecting the first versions of the technology to be a game changer in graphics, or for that matter in the HPC or compute world, but I was very, very interested in knowing more about the Larrabee technology. Thus far Intel has thrown out only bits and pieces about its new tech, and that in no way gives one a clear picture. No, Intel hasn’t given up on the technology, but it seems to have postponed the release in its current form because performance targets weren’t being met. Ironically, Intel had initially claimed that Larrabee chips would stand up to discrete solutions from ATI and Nvidia. It looks like the tech still needs some work to measure up to that.

At this point all we can do is speculate, but the fact is, building a chip that can do HPC and compute and graphics, and have the driver/software/optimizing compilers working perfectly, is a tall order even for a giant like Intel. I am sure they have done most of it right, but most of it isn’t all of it, and that’s probably why we are seeing the launch canceled in its current form.

Many-core computing is the next big thing, and technologies like Larrabee are the future. I am disappointed because, more than the tech itself, Larrabee would have been a window into how things are shaping up. How does software development scale to the future? Will new optimizing compilers allow the use of current software methods, or does it mean a radical shift in the way software systems are built? How will the new tech address task parallelism? I guess we will have to wait a while longer to see how these (and I am sure many more) questions are answered.