Code.Fu: Quickly testing if a positive integer is a power of 2.

If you are a graphics programmer you will often need to test if an integer is a power of 2. To do this you can exploit a neat trick using binary arithmetic. This trick will work with all positive integers.

Let’s say the number you are testing is 4. Take a look at the simple binary arithmetic below:


  4 --> 0100 (binary)
- 1 --> 0001
------------------
  3 --> 0011

  4 --> 0100
& 3 --> 0011
------------------
  0 --> 0000

In short, you subtract 1 from the number and do a bitwise AND of the result with the original number. If the number is a power of two, the result will always be 0. This works because a power of two has exactly one bit set; subtracting 1 clears that bit and sets all the bits below it, so the two values share no set bits. The only exception is 0 itself, which also yields 0 in the above operation. Depending on your application you may or may not want that.

C/C++:

#include <stdbool.h> /* C99 and later; remove for C++, where bool is built in */

/* Returns true for powers of 2, and also for 0 (see the note above).
   An unsigned parameter avoids signed-overflow trouble in x - 1. */
bool is_powof_2(unsigned int x)
{
  return ((x & (x - 1)) == 0);
}

/* Returns true only for genuine powers of 2; 0 is excluded. */
bool is_powof_2_excl_0(unsigned int x)
{
  return (x != 0) && ((x & (x - 1)) == 0);
}
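
A quick sanity check, as a minimal sketch (it assumes the two functions above sit in the same file):

#include <stdio.h>

int main(void)
{
  unsigned int tests[] = { 0, 1, 2, 3, 4, 255, 256, 1024 };
  for (int i = 0; i < 8; ++i)
    printf("%4u -> %s\n", tests[i],
           is_powof_2_excl_0(tests[i]) ? "power of 2" : "not a power of 2");
  return 0;
}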

Python:

def is_powof_2(x):
    return (x & (x - 1)) == 0

def is_powof_2_excl_0(x):
    return (x != 0) and ((x & (x - 1)) == 0)

Visual Studio Community Edition (Free).

Microsoft has released the free Visual Studio Community Edition. It’s basically a full-featured Visual Studio IDE (apparently with everything included) that can be used to make all kinds of apps. I was even surprised to find support for Python and Git included. Even more surprising were the Apple and Android logos in the “supported platforms” section!

Link: http://www.visualstudio.com/products/visual-studio-community-vs

Visual Studio 2010 still too slow!

UPDATE (Nov 2014): Given how many hits this and the Speeding up Visual Studio post are generating, I have to warn you that VS 2010 is seriously old. With the new Visual Studio version you can still compile for XP, and there is no point in continuing to use VS 2010 anymore, unless you are already using it — and pushing a deadline. Please read the last section of this post to find the links to the free and commercial versions of VS.

(Jan 2011): Aw sh*t! After 2 months of active use I can say for sure that Visual Studio has some serious problems with speed. The IDE didn’t crash on me, but it’s just too slow for any large project. I tried everything possible: cleaned out the cache, recreated the IntelliSense files, but the program still keeps slowing down for no apparent reason. It’s really annoying when it suddenly goes into a heavy disk-access mode, to the point where even typing becomes impossible. I have racked my brains and fiddled with every tweak I could find on the web, without success. Since our entire project has now been moved over to VS 2010, it’s too late to turn back 🙁 !!

UPDATE:  I finally managed to fix Visual Studio 10. Please read the post Speeding up Visual Studio.

Update (Not really): Neither I nor most programmers I know who use VS 2010 could solve this issue satisfactorily. I would recommend moving projects to Visual Studio 2012. It is much better and more stable than VS 2010. It’s been out for a while now, and there’s no point in continuing with VS 2010, which clearly has some issues with speed and disk access.

Visual Studio 2012 : http://www.microsoft.com/visualstudio/eng/downloads
Visual Studio 2012 Express : http://www.microsoft.com/visualstudio/eng/products/visual-studio-express-products

It’s time to move on to the latest version of Visual Studio, or, if you are an indie/hobbyist, to the free Visual Studio Community Edition.

Over-patterning software design.

Ah! Design Patterns! Yes, those seemingly magical concoctions of code that appear to solve all the problems plaguing software design. So profound is their initial impact that the engineer begins to believe he/she has finally found mythical scrolls of wisdom bestowed by divine beings, so much so that, after reading through them, every design problem can apparently be deconstructed into a set of familiar design patterns. Using them seems to solve every challenge software engineering has to offer, and the engineer begins to believe that all that is ever needed on his/her desk is a copy of those very patterns. Yes, there was a time when I was guilty of the very same thing.

There is also the misconception that patterns are drop-in replacements for traditional software design practices. It’s tempting to approach a design problem with the pre-packaged solution a pattern seems to offer: “Oh, we have a Composite, that means we need a Visitor for collaboration. So let’s use a Visitor then.” That was easy, but what was missed was the overhead of designing something as a Visitor. No one asked why a Visitor was needed, or whether it was needed at all. Often the only reason given for such design decisions is, “… because a design pattern says so.” That’s not what design patterns advocate at all. Excessive use of design patterns in software design inadvertently leads to over-engineering.
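
To make that concrete, here is a minimal C++ sketch (the class names are hypothetical, chosen just for illustration): a Composite whose nodes currently need exactly one operation. A plain virtual function does the job; bolting a Visitor onto this would add a second class hierarchy and double-dispatch plumbing with no payoff.

#include <iostream>
#include <memory>
#include <vector>

/* A minimal Composite: shapes that can be nested into groups. */
struct Shape
{
  virtual ~Shape() = default;
  virtual void draw() const = 0; /* one operation: a virtual call is enough */
};

struct Circle : Shape
{
  void draw() const override { std::cout << "circle\n"; }
};

struct Group : Shape
{
  std::vector<std::unique_ptr<Shape>> children;
  void draw() const override
  {
    for (const auto& child : children)
      child->draw(); /* recurse into the composite */
  }
};

A Visitor starts paying for itself only when new operations keep arriving and the node classes must stay closed to modification; until that day, it is pure engineering cost.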

This contradicts the popular perception that patterns were created to address the most commonly occurring design problems. Yes, that is true, and no, I am not trying to be a design-pattern heretic and declare that patterns are useless. Patterns are in fact very useful when applied correctly. It is true that most software designs can be broken down into sub-designs which can be collectively solved using a combination of different design patterns. But just because they can be, doesn’t mean they have to be. A designer well versed in design patterns can quickly find adaptable patterns for most design problems, and can probably get them to work together if he or she understands the modalities of pattern behavior. There is a dichotomy here: design patterns lead to over-engineering, and yet they are useful!! Which is it then?

The truth lies somewhere in between. Most problems with “over-patterning” begin when there is an overbearing urge on the part of the designer to adapt his/her design, and sometimes downright bend it, to fit a design pattern. Just because a pattern fits or solves a problem doesn’t mean it has to be used. Loading a software design with patterns is a mistake. One must remember that patterns add cost, and by cost I mean engineering cost. Strange, an engineering solution adding an engineering cost? But that’s how it is with any engineering problem in any domain. Ironically, if you refer to the description of each pattern you will often see these costs clearly pointed out by the authors. Call them disadvantages, limitations, issues or whatever other name you come up with, but the reality is that these issues aren’t trivial. An oversight, or a failure to understand their implications for the overall design of a software system, is what leads to overly complex or over-engineered solutions.

An excellent article to read with regards to this is Joshua Kerievsky’s Stop Over-engineering.

Speeding up Visual Studio 2010 on XP and Vista.

UPDATE (Nov 2014): Given how many hits this and the other post are generating, I have to warn you that VS 2010 is seriously old. With the new Visual Studio version you can still compile for XP, and there is no point in continuing to use VS 2010 anymore, unless you are already using it — and pushing a deadline. Please read the last section of this post to find the links to the free and commercial versions of VS.

(Nov 2010): A quick press — I was running Visual Studio Express 2010 on XP a couple of days back and found it to be rather slow. IntelliSense was performing horribly and the entire system was sluggish, with a ridiculous amount of disk access — almost to the point where I had to physically shut the system down using the power switch. I initially thought it was an install problem, but ironically realized that wasn’t the case after losing another half hour on a reinstall. After googling around (which I should have done earlier) I found that some people had similar problems, and the solution is rather simple: you just need to update the Windows Automation API to version 3.0. 🙁 Windows 7 already ships with the latest API and doesn’t have this problem.

UPDATE: Another input from a friend: apparently you can speed things up even more by using the /SafeMode switch. Unfortunately it may create problems with any third-party plugins you have installed in Visual Studio. Visual Studio Express doesn’t support plugins, so there you can try this option freely. I must say, however, that I didn’t find much of a difference myself on my current project.

UPDATE 2: Apparently all my problems were solved after following steps 1, 2, 3 and 4 mentioned here. http://msdn.microsoft.com/en-us/vstudio/ff716700

Update (Not really): Neither I nor most programmers I know who use VS 2010 could solve this issue satisfactorily. I would recommend moving projects to Visual Studio 2012. It is much better and more stable than VS 2010. It’s been out for a while now, and there’s no point in continuing with VS 2010, which clearly has some issues with speed and disk access.

Visual Studio 2012 : http://www.microsoft.com/visualstudio/eng/downloads
Visual Studio 2012 Express : http://www.microsoft.com/visualstudio/eng/products/visual-studio-express-products

It’s time to move on to the latest version of Visual Studio, or, if you are an indie/hobbyist, to the free Visual Studio Community Edition.

Direct3D 10/11 coming to Linux … What about games?

No, April 1st is still more than 6 months away, and yes, you heard me right — Direct3D versions 10 and 11 are indeed coming to Linux. How is this even possible? Well, it is possible since nouveau moved on to Gallium3D, which allows the Direct3D API (actually any API) to be exposed via a front end called a state tracker. Interestingly (and there seems to be a lot of confusion about this on public forums), Direct3D will be a native API under Gallium, much like OpenGL is currently. It won’t be something that emulates Direct3D by using wrappers around OpenGL — meaning you will be able to write and compile Direct3D code directly on Linux or BSD based systems that support the nouveau driver. Initially I was a bit skeptical of such an approach, since the Direct3D API is integrated with the Win32 API, but the author seems to have solved this by using Wine headers. I don’t know the pitfalls (if any) of such an approach, but it seems to have worked for him, and it would seem the logical path to take (instead of breaking API compatibility). He clearly outlines the motivation behind the Direct3D port, and kudos to him for doing something that was all but inevitable given the no-show of Longs Peak.

Naturally, a native Direct3D implementation will allow game developers to write cross-platform code, and even allow existing engines/games that use Direct3D versions 10 and higher to be ported to platforms that have a Gallium driver. W00t! This is amazing, almost too good to be true, isn’t it? But before we gamers jump for joy, a few things still have to fall in place before Direct3D on Linux is up and running. First and foremost is support. Hardware vendors like Nvidia and AMD must support Gallium in their drivers, or OSS drivers must be written (and are being written) to take their place. This is paramount, since without such an interface no front-end API (Direct3D or OpenGL) will be able to use hardware acceleration via Gallium. Second, and more importantly, the folks at Redmond must allow such an implementation of their Direct3D API. An API itself can’t be copyrighted, and the author seems to have steered clear of any Microsoft code, so theoretically this shouldn’t be a problem. But then again I am no legal eagle, so I can’t really say anything about it. There have been rumors of patents on sections of Direct3D. I am not sure what that means, or for that matter whether it is even possible to patent sections of an API/library. But things could get messy if Microsoft were to serve a cease and desist on this new development. I doubt that would happen, but you never know.

I have to agree, having Direct3D as a native API via Gallium does open up a lot of possibilities for OSS platforms that have severely lacked games. Accelerated graphics on most systems apart from Windows have had little choice up until now, with OpenGL being the only real option. But does this really mean that all the games being developed will be ported to Linux and other OSS platforms? That’s an interesting question, and the answer isn’t quite that simple. Let’s look at the macro picture of the industry. For AAA games the PC platform isn’t a priority. Most (maybe all) AAA games today are made with consoles in mind. Yes, there may be a PC port, but it’s the consoles that are the main priority. Most (if not all) gamers who play AAA games on the PC spend big on their systems, and most of them already run Windows as their main OS. Some do have *NIX systems, but even these few keep a Windows partition around specifically for games. Porting any software to a new platform isn’t a trivial task. Even with the best coding practices and methods, it requires a lot of resources — which aren’t free. Everything from coding, testing and maintaining build setups to writing install scripts takes time and money. For a AAA game, or for that matter any game or software, a port to a new platform has to show a robust ROI (return on investment). That’s where the crux of the problem lies. There aren’t that many *NIX gamers out there, and if there are, the big studios aren’t seeing them!

Then there are casual games, which are also a big market. Casual games represent a very different kind of audience. A typical casual gamer is a non-technical person who doesn’t even understand what a hardware driver is, let alone jargon like Gallium, Direct3D, OpenGL or, for that matter, Linux. Most casual gamers have nothing but a moderately powerful laptop with an on-board Intel graphics chip — which came with Windows pre-installed. This is the kind of player who expects the game to install and run with a single click. They don’t understand driver updates or DirectX versions. It matters little to them which API is better or worse, or which platform supports which API and which doesn’t. Apart from these two broad segments, there are the players of radical indie games, and this is probably where Linux ports have found some success. This gamer is the tech-savvy computer geek who runs Linux as his/her primary system and isn’t afraid to fire up the console now and then. I must say, some radical indie games have found success in this area. But these games are far from cutting edge. They may be very good games, but you don’t expect Crysis-like graphics from them, and it matters little which API is used, or whether the underlying API runs 5% slower, when your game never drops below the 30 FPS barrier.

There have been lots of debates about OpenGL vs Direct3D, and I will refrain from going into that here. However, having a choice of accelerated graphics APIs for platforms other than Windows is definitely good all around. Direct3D versions 10 and 11 are well-designed APIs, closely tied to current generation hardware. But whether all this will translate into more game ports to Linux and the BSDs is still an open question. The community, as always, will play a vital role, and only time will tell how things pan out.

Can parallel processing really cut it?

When Larrabee was first delayed and then “postponed”, most of us weren’t surprised (at least I wasn’t). Parallel computing, though advocated as a world saver, isn’t the easiest model to program for. Doing everything in “software” (graphics, HPC and all) might not be as easy as was anticipated. The cold hard reality is that languages like C++, Java and their derivatives (mostly the OOP ones) were never really designed for parallelism. A multi-threading here and an asynchronous call there doesn’t really cut it. Using the full potential of parallel devices is very challenging indeed. Ironically, most of the code that runs today’s software isn’t geared for parallel computing at all. Neither are today’s programmers.
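
A minimal C++ sketch of the difficulty (assuming OpenMP for the parallel case): the first loop is embarrassingly parallel, but the second carries a dependency between iterations that no compiler can simply “optimize away”; the algorithm itself has to be restructured, for instance as a parallel prefix scan.

#include <cstddef>
#include <vector>

/* Embarrassingly parallel: every iteration is independent, so the
   loop can be split across cores with a single pragma. */
void scale(std::vector<float>& v, float k)
{
  #pragma omp parallel for
  for (long i = 0; i < (long)v.size(); ++i)
    v[i] *= k;
}

/* Loop-carried dependency: iteration i needs the result of i - 1.
   No compiler can hand this loop to many cores as-is; the programmer
   must rethink the algorithm (e.g. as a parallel scan). */
void prefix_sum(std::vector<float>& v)
{
  for (std::size_t i = 1; i < v.size(); ++i)
    v[i] += v[i - 1];
}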

But experts advocate a parallel computing model for the future. Is it easy to switch to? Is an innovation in hardware design, or a radical new compiler that optimizes away your “for() loop”, the real answer? A very interesting article to read (even if you are not into graphics and game programming) is:

http://www.brightsideofnews.com/news/2010/5/27/why-intel-larrabee-really-stumbled-developer-analysis.aspx?pageid=0

Very rarely do I quote articles, but this one is really worth a read. Well-written and well said.

Bash the Flash.

It’s almost fashionable to bash the flash these days. Everyone is doing it: the big, the small, the wise, and sometimes people who don’t seem to fully understand the argument. For a technology that has been around for almost 15 years, and was probably the only platform capable of delivering rich web content for the better part of that time, some criticisms may sound a bit too harsh — or are they? Yes, some of it is indeed true. Flash applications have been known to slow a brand new quad-core machine to a crawl while doing nothing more than streaming a simple video. There has been more than one instance when the entire browser froze up because flash hogged every available resource. But before we go flash bashing, let’s look at why we are so overly dependent on one technology, and why, suddenly, after 15 years of loyal service, flash has become such a thorn that everyone likes to crib about it.

When flash first arrived on the scene, it was this cool new technology with which you could program interactive webpages, much to the delight of web designers. But as the world would soon realize, there was a downside to depending too heavily on this new technology. In those days fast internet was a luxury of the few, and flash content would take ages to download and display on dial-up connections (yes, I was in university back then and couldn’t afford a broadband connection 😀 ). So flash adoption was initially limited. But as the internet grew and speeds increased, more and more websites started adopting flash. The logical next step for a rich content platform was games. Flash game developers exploited this, and we began to see more and more flash games being developed. Flash enjoyed renewed interest, and web applications started being made with it.

So why flash? Simply because there was no alternative. If you wanted to make a rich web application, there was no better solution. True, some Javascript workarounds existed, but until recently these were pretty limited compared to what flash could achieve. There was an even bigger reason flash got adopted, and is now on almost every computer that connects to the internet: streaming video. Yes, there were other competing formats, but most were closed, and flash was favored over them. Only now does the HTML 5.0 standard talk about streaming video and sound. This revision should have been done 10 years ago; there is no logic to the delay, but it is what it is, and flash was, and still is, the leading tech/plugin for watching streaming video on the Internet. The story doesn’t end there: there is still a debate about which video codec/standard to use for HTML 5.0, and patent-encumbered video technologies mean this debate will last a while longer. Also, most streaming media sites still deliver content in flash (flv) format and haven’t yet switched to HTML 5.0. So before you go blaming flash for all your browser troubles, think about it — do you have a choice? As it stands today, not quite.

It’s true that flash has its problems. But these problems were there before, so what’s changed now? The answer is twofold. A) People have started watching more streaming content online and as a result inadvertently use the flash plugin more. B) A new technology has silently crept up to flash — and that is your humble Javascript. As Javascript got faster, websites got faster as well. Things that were possible with flash could now also be done with Javascript. Developers found new ways of writing rich web content using Javascript (AJAX) and slowly started avoiding flash by using equivalent Javascript functionality. Mind you, I am not saying Javascript is a replacement for flash; I am saying you can now do so much more with it than you could earlier. As a result, an obvious comparison with flash was, and is, being made. Javascript continues to grow, and with the integration of technologies like WebGL it has rapidly narrowed the gap with flash, and may even surpass it in some areas.

People blame flash, but it’s not flash the technology that is the problem; it’s the implementation. The flash plugin, and its integration with the browser, is what causes pains like system slowdowns and browser crashes. Flash today is JIT compiled, much like Javascript, so there are no problems there. Contrary to what some believe, ActionScript is a dialect of ECMAScript, much like Javascript, and is not inferior in any way to the latter.

In my view, then, the problem with flash is one of engineering implementation. ActionScript and flash aren’t deficient or outdated, as some would suggest; it’s the implementation, the plugin itself, that needs to be looked at. If that were cured, flash wouldn’t be bad at all.

Larrabee isn’t coming just yet.

Hmm… I am disappointed (story). No, I wasn’t expecting the first versions of the technology to be a game changer in graphics, or for that matter in the HPC or compute world, but I was very, very interested in knowing more about the Larrabee technology. Thus far Intel has only thrown out “bits and pieces” about their new tech, and that in no way gives one a clear picture. No, Intel hasn’t given up on the technology, but it seems to have postponed the release in its current form because the performance targets weren’t being met. Ironically, Intel had initially claimed that Larrabee chips would stand up to discrete solutions from ATI and Nvidia. It looks like the tech still needs some work to measure up to that.

At this point all we can do is speculate, but the fact is, building a chip that can do HPC and compute and graphics, and have drivers, software and optimizing compilers working perfectly, is a tall order even for a giant like Intel. I am sure they have done most of it right, but most of it isn’t all of it, and that’s probably why we are seeing the launch canceled in its current form.

Many-core computing is the next big thing, and technologies like Larrabee are the future. I am disappointed because, more than the tech itself, Larrabee would have been a window into how things are shaping up. How does software development scale to the future? Would new optimizing compilers allow the use of current software methods? Or does it mean a radical shift in the way software systems are built? How would the new tech address task parallelism? I guess we will have to wait a while longer to see how these (and I am sure many more) questions are answered.

DirectX 11 hardware is here.

It’s almost time for Windows 7, and along with it the first lot of DirectX 11 class hardware has started to appear. This time the first off the block was, surprise surprise, ATI. The 5800 series cards were released a couple of days ago and there are already impressive reviews of the new cards all around. I am sure it won’t be long before Nvidia, which has been uncannily silent, comes out with its line-up. So it is safe to assume there will be DirectX 11 class hardware on the shelves going into the Windows 7 release (the Windows 7 RC already has DX 11 support, and DX 11 will also be available for Vista soon). It will still take a few more weeks for the initial euphoria to settle, and we should see prices of the cards drop around the holiday season, which is probably when I will go in for an upgrade as well. I have been running the HD 4850 for some time now and thus far it’s proving sufficient, not only for gaming but also for my programming needs. The HD 4850 has been surprisingly good for its price point, and one would expect the same from the 5800 series given the already positive reviews.

There are a couple of things in favour of DirectX 11. The first is the API itself. DirectX 11 offers more than a simple evolutionary upgrade (more here). DirectX 10 was mostly a non-event. The enormous success and longevity of XP and the Xbox 360 ensured that version 9 of the API far outlived most expectations (and will probably continue to live for some time to come). The story of DirectX 10 is also intrinsically connected to Vista. Vista’s low adoption meant not enough people were running a DirectX 10 capable software platform, which Microsoft stubbornly refused to port to XP for whatever reasons. Even though 10-class hardware was available during Vista’s reign, nagging hardware issues and poorly implemented drivers meant DirectX 10 never really caught on like 9 did.

That brings us to the second point in favour of DirectX 11 — Windows 7. XP is old, and I mean seriously old. I am still running a 2004 copy of XP on my machine, and though it’s doing its job admirably, it’s due for an upgrade. Windows 7 seems to have gotten over those annoying little quirks of Vista which we hated and shouted so much about. My hunch is most people who have stuck with XP will probably upgrade too. Maybe not on immediate release, but 2-3 months down the line, when things settle in, after the initial bugs have been addressed and more reviews of the OS come out, 7 should slowly see wider adoption. With Vista it seemed like things were rushed and hyped up. In contrast, Microsoft has been careful with Windows 7. The RC of Windows 7 has been somewhat of a “soft launch”, and though I haven’t had the chance to try it out myself, it would seem (from reviews and from what people are saying) that Windows 7 is much better off than Vista was. So it’s fair to assume that 7 will catch on more than Vista did, and in the process DirectX 11 will “get on” to the desktop.

Does this mean DirectX 11 will be the de facto API for coming games? For that, let’s look at the games developed today. Most games today are still developed primarily for DirectX 9.0 class hardware. Why? Consoles, that’s why. You do see AAA titles advertise DirectX 10 and 10.1 support, but even those games are developed with DirectX 9.0 class hardware in mind. Yes, some features can be found here and there, usually eye candy to impress the overzealous graphics fanboi, but the engine and the tech itself are designed for platform compatibility. Which ironically means not all the features of the newer DirectX versions are exploited. As I said before, DirectX 11 is more than a simple upgrade to the API; it’s also a new way of doing things. But since older hardware still has to be supported, compromises have to be made. There are probably no AAA titles exclusively for the PC, so even if PCs all around were to have DirectX 11 support, it’s not until the consoles catch up that you will see all the cool things the newer API has to offer come to the fore.

There is little doubt that version 11 of the API will make games look better. But there is much more to it than improved looks. Many of the features in the new API mirror hardware changes that have taken place, like the move away from the fixed-function pipeline and the evolution of GPUs into massively parallel compute devices. All this means DirectX 11 is an API to take seriously. But how quickly will games start using all these features? I guess only time will tell.