Fable, Oblivion and sandbox gameplay.

Fable is a game developed by Lionhead Studios, and to be honest I missed out on it a couple of years back when it first came out. Interestingly, it was my bout with Oblivion that first piqued my interest in this game, since the two seem very similar; and yes, Oblivion is pretty high on my list of all-time favorite games. Fable is interesting because it is so much like what I have in my head for something I might be working on: a very cartoonish backdrop, very serious and in-depth gameplay, and what can be called dark humor. I have only briefly played the game, but it seems well designed overall. I like it, and I like it a lot. While not exactly the same as Oblivion, the two do share some similarities and, well, it's hard not to compare them.

For one, although Fable is open-ended, Oblivion allows you more freedom, definitely more than Fable does. (Those of you who don't know what an open-ended or sandbox style game is, read this.) You can play Oblivion at your own pace, and the game can play out differently depending on the choices you make, the quests you complete and how you interact with the world (and NPCs). Fable allows something similar and does have sandbox style play, but its gameplay is more linear, or should I say, more linear than Oblivion's. To be fair, Fable predates Oblivion by almost two years, so it wouldn't be entirely fair to compare them outright. Then again, two years isn't such a long time after all.

The one thing I didn't like about Fable, or should I say didn't appreciate too much, is the fact that you can't deviate from your play area, meaning you can't go anywhere and everywhere in the game world. Exploration is confined to a set area and the player is not allowed to go beyond it. In Oblivion you are free to explore every corner of Cyrodiil, which, I must say, can take quite a while. I have played the game for eight months now and I still haven't had time to visit every place on the map. The world, along with every cave and dungeon, is simply huge. I can understand the technical reasons for Fable's approach, but Oblivion addresses the same problem very subtly and elegantly. Coming back to my point about exploration: I think exploration is a critical component of any sandbox style game. It gives you so much freedom, or should I say an illusion of total freedom, and that is something I have come to appreciate a lot after playing the Elder Scrolls series (Morrowind and Oblivion).

In Fable's favor, it has a fantastic combat system. I would place it above Oblivion's, and I can safely say Fable allows for a more balanced combat game. An example: both Oblivion and Fable allow the use of mêlée and ranged weapons, but for some reason I didn't find ranged weapons in Oblivion all that intuitive. I can't really explain why, can't put my finger on one particular reason, but while playing the game I used to get clobbered whenever I used a bow and arrow. In Fable I use both to an equal degree. Both games are sandbox games, and both build the player character through the choices the player makes. In Oblivion I ended up a beefy guy with little defense against ranged attacks from NPCs. In Fable my character seems to be a great balance of both: I can now take down NPCs with proper planning and lure them into traps using a combination of mêlée, ranged attacks and magic. On the whole, Fable lets you build a more well-rounded character.

Fable's graphics are top-notch. Spells and magic, the combat system, weapon augmentations and teleportation are all done wonderfully. Even the cartoonish world is built beautifully, and so are all the NPCs, especially considering the triangle budget; the game runs flawlessly on a GeForce 6200 with an impressive frame-rate, I must add. The camera navigation and the cut-scenes are also pretty good. The graphics complement the gameplay very nicely, and that's what is important. The graphics are not overdone, and that's good. You will find games galore whose graphics do nothing more than slow the game down for no apparent reason and serve no function other than to please graphics "fanbois". Fable does none of that. The only thing I really hate about Fable is its game-save system. You can't save your game in the middle of a quest; all you can do is save the skills you have learned. I can't understand the reason for this, it just defies logic. I generally play games for only 20 to 30 minutes at a stretch, and quests take significantly longer to complete, so this "feature" is a real PITA. It is probably the only real complaint I have with the game.

I am a fan of sandbox style gameplay, but my interest in Fable was twofold; it's true I like playing games, but this time around my interest was more academic in nature, as a student of game design. I wanted to see how the game was designed overall. Yeah! I have this crazy idea of actually making a sandbox style game some day (a long time in the future... or maybe not so long) and Fable seemed too hard to resist. Mind you, I haven't fully played the game yet, but I am already pretty impressed, and the same goes for Oblivion. Barring little quirks, I think both games are equally good at presenting the player with an out-of-the-box experience. Both allow you to build the player character in unique ways (sometimes not so unique), but non-monotonous ones nonetheless. Both games are "different" and it would be unfair to say that one is better than the other. True, they each have their good and bad points, but both are equally enjoyable.

Two great books from NVIDIA

NVIDIA very recently released two great books, free of charge, which are a must for anyone who tinkers with graphics, and specifically with shaders. The first is the GPU Gems book, which I happen to own as a hardcopy (from long before the free release). It's an invaluable resource of tricks that are still very much valid to this day. I would recommend it to anyone and everyone who wants to get their hands dirty with graphics. Then yesterday The Cg Tutorial was released. I don't have a hardcopy of that one, as I already have a pretty good hand at HLSL, and the two (HLSL and Cg) are essentially the same. I read through the book and was equally impressed with it. So if you haven't already read them, I would strongly recommend taking a good hard look at both.

EDIT [13th May, 2008]: Another one released today. GPU Gems 2.

STL is not slow.

Recently I was having a conversation with some former colleagues of mine, and I got the feeling that most of them were of the opinion that STL is slow and/or inefficient. If you think there is truth to this, then let me assure you it's not the case, not at all. STL is used by so many people and so many libraries that it is probably among the most heavily optimized code around. The misconception is actually a result of inappropriate use of the STL, not of any inefficiency in the library itself. It may be true that different implementations of STL run at different speeds, and I have heard that MS STL is a little slower than others like STLport, but I have no data to either prove or disprove this. I never use MS STL, even while programming under Visual Studio (find out why), so I can't really say.

As I said earlier, the perception that STL is somehow slow and/or inefficient stems from the fact that programmers tend to abuse STL containers by not picking the correct ones. STL has different containers, and each is specifically designed to address a particular problem. I don't want to get into which container to select when; I think Scott Meyers has done a far better job of that than I ever could. If you haven't read his book, you had better get down to it right now. It clearly outlines how one should go about using STL and addresses several subtleties involved in correct container selection.
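To make the point concrete, here is a toy illustration (my own example, not one from the book): the container you pick dictates the cost of the operation you perform most often, so a "slow STL" is very often just the wrong container.

#include <algorithm>
#include <set>
#include <vector>

// Membership test on a vector: a linear scan, O(n) per lookup.
// Perfectly fine for small or rarely searched data.
bool hasValue(const std::vector<int>& v, int x)
{
    return std::find(v.begin(), v.end(), x) != v.end();
}

// The same test on a set: a tree lookup, O(log n) per lookup.
// The right pick when lookups dominate; blaming the STL for the
// vector version being slow would be blaming the wrong party.
bool hasValue(const std::set<int>& s, int x)
{
    return s.find(x) != s.end();
}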

Misconceptions, discipline and a pragmatic approach.

How disciplined are you in your coding? No, seriously, are you a mad code hacker, or are you one of those who take that extra bit of care while coding? Are you paranoid about comments, or do you believe comments are not needed, especially for trivial code snippets? Why am I raising these questions? It so happens I was helping someone out very recently with porting code across platforms, and I happened to look at a piece of code, or rather pieces of code, that were an utter disgrace to coding standards. No comments, headers included in headers, crazy loop-backs across libraries, 10 people writing code all over the place, use of non-standard practices, utter disrespect for memory management and zero design (high level or low level). Can you believe somebody using malloc to create a C++ object! I mean seriously, you could hire monkeys to do a better job than that!
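For those wondering why the malloc bit is so bad, here is a minimal illustration (with a hypothetical class): malloc hands you raw bytes but never runs the constructor, so every member of the "object" is garbage.

#include <cstdlib>
#include <string>

class Player
{
public:
    Player() : name("doofus"), score(0) {}
    std::string name;
    int score;
};

int main()
{
    // The constructor is NEVER called; 'name' is an uninitialized
    // std::string and touching it is undefined behavior.
    Player* bad = (Player*)std::malloc(sizeof(Player));

    // The constructor runs and the object is in a sane state.
    Player* good = new Player;

    std::free(bad);   // likewise, free() never runs the destructor
    delete good;
    return 0;
}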

OK, enough of the rant already! I can't really disclose who the code was for, since it is production code used by a reputed organization. Yeah, believe it; I still can't, but it just goes to show how disconnected the organization is from what can be considered its most valuable asset. No, it's not the code; it's the process! It is not that they are not paying for it, they are, but the management is, well, too stupid (for lack of a better word) to understand the implications of not having proper coding discipline. On the flip side, you will find organizations where the coding discipline is so rigid that it rarely allows even simple adjustments to the existing process. When coding practices are made far too rigid, they hamper free thought and ultimately retard innovation. That is the other end of the story, where companies are paranoid about coding standards and don't realize that inflexible coding practices can, in some situations, be counterproductive. Standardization is important; having standards helps in many activities, including coding, debugging and code reviews, and can ultimately determine the quality of a product. Having standards helps maintain discipline in the team. In the case above, since the team did not maintain any standard, the code simply fell apart over time.

However, overdoing it can also lead to problems. Many times people simply don't understand what a "coding standard" is. I can cite an example here: I was once involved with a team whose coding standards didn't permit the use of RTTI. The wisdom behind that was, "RTTI breaks polymorphism". Very true, and RTTI should be avoided whenever possible. However, let's not be paranoid; in some situations it does help. RTTI, used judiciously, can solve problems that would otherwise require re-engineering the design. Not all class relationships are monolithic, and RTTI can help you in such situations. I am not saying overuse RTTI; I am just saying RTTI has its place. To make a commandment like "Never use RTTI" is just plain lunacy. In our case it led to breaking one class definition up into smaller classes, which ultimately led to over-engineering of the solution. A problem which would otherwise have had a very straightforward solution was now distributed across a bunch of classes that had no real use other than to adhere to the "Never use RTTI" rule. Come to think of it, was that even a "coding standard"? This is what I would call an invasion of standards into design: in an attempt to enforce standards and discipline, the team/project leader went overboard and ultimately encroached on what was a design decision. It's definitely not a coding standard.
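Here is the kind of situation I mean, as a minimal sketch (hypothetical class names, not the actual code in question): a capability that only some classes in a hierarchy have, where one well-placed dynamic_cast is simpler than splitting the hierarchy apart.

#include <iostream>

class SceneNode
{
public:
    virtual ~SceneNode() {}
    virtual void update() = 0;
};

// Only some nodes can be saved; forcing serialize() into SceneNode
// would bloat the base interface for every class that derives from it.
class Serializable
{
public:
    virtual ~Serializable() {}
    virtual void serialize(std::ostream& out) const = 0;
};

class MeshNode : public SceneNode, public Serializable
{
public:
    virtual void update() {}
    virtual void serialize(std::ostream& out) const { out << "mesh\n"; }
};

class ParticleNode : public SceneNode   // transient, never saved
{
public:
    virtual void update() {}
};

void saveNode(const SceneNode* node, std::ostream& out)
{
    // One well-placed RTTI query instead of a re-engineered hierarchy.
    if (const Serializable* s = dynamic_cast<const Serializable*>(node))
        s->serialize(out);
}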

Coming back to the code I was working on: the other thing I noticed was an attempt at preemptive optimization. Preemptive optimizations are attempts to increase the run-time performance of a program during coding and/or design. That's not to say you should knowingly use bad practices, but it is often a folly to preemptively optimize code based on what you think might be faster. Unless you are absolutely sure about what you are doing, you will have wasted your time or, in the worst case, actually made the code slower. What you think might be right is often not the case. One thing I remember from the code I saw was multiplication by 0.5 to halve an integer value instead of division by 2. The reason? Someone somewhere read that multiplication is faster than division on CPUs. Now this is downright crazy, because not only did it not optimize the code, it actually made it a whole lot slower, and no one bothered to verify whether it was indeed true. This is the kind of noobish oneupmanship propagated by budding programmers who clearly have no real-world experience. A division by 2 produces

mov    edx,DWORD PTR [ebp-12]
mov    eax,edx
shr    eax,0x1f
add    eax,edx
sar    eax,1

whereas a multiplication by 0.5 produces

fild   DWORD PTR [ebp-12]
fld    QWORD PTR ds:0x8048720
fmulp  st(1),st
fnstcw WORD PTR [ebp-22]
movzx  eax,WORD PTR [ebp-22]
mov    ah,0xc
mov    WORD PTR [ebp-24],ax
fldcw  WORD PTR [ebp-24]
fistp  DWORD PTR [ebp-8]
fldcw  WORD PTR [ebp-22]

The code produced by the multiplication is substantially slower than the division, since the FPU gets involved along with costly changes to the FPU control word (the fldcw instructions). Why did that happen? Simple: the compiler is a lot smarter than you give it credit for. It saw the division by 2 and knew the best way to halve a signed integer was a shift (the shr/add pair is just the fix-up for negative values). Looks like we have a winner here, and it's not the programmer. A good optimizing compiler might be smart enough to optimize even the multiplication version, but my point is that there was no need for preemptive optimization in the first place. Modern compilers are pretty smart: a for(int i = 0; i < 4; ++i) will produce the exact same code as for(int i = 0; i < 4; i++). Don't believe me? Verify it. Oh yes, and please don't use a compiler from the 1990s and complain; something like the GCC 4.x series or VC 9.0 is what all of us should be using right now. The only way to really optimize anything is via a performance analysis tool like VTune or CodeAnalyst, not by making blind assumptions about what you think is faster. Remember, 10% of the code takes 90% of the time; the other 90% of the code may require no optimization at all.
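If you want to see this for yourself, the two versions boil down to something like the following (hypothetical helper functions; listings like the ones above typically come from unoptimized x86 output). Compile with "g++ -S file.cpp" (or "/FAs" under Visual C++) and read the generated assembly.

// Halving via division: the compiler emits a shift plus a small
// sign fix-up, all in the integer pipeline.
int halve_div(int x)
{
    return x / 2;
}

// Halving via multiplication by 0.5: the int is converted to double,
// multiplied on the FPU and converted back, rounding-mode change and all.
int halve_mul(int x)
{
    return (int)(x * 0.5);
}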

The other thing that really annoyed me was that the code was poorly commented, or should I say inconsistently commented: no comments on function definitions, inconsistent inline comments, entire blocks of code with no comments at all, algorithm explanations placed out of scope, often in some other .doc file. Just a garbled tub of lard! OK, everyone knows comments are a must, but very few programmers actually understand how good comments ought to be written. Properly commented code can boost productivity significantly. That doesn't mean you have to over-comment; it's a case of working smart rather than working hard, of quality over quantity. I wanted to write more on this, but I figured the blog entry would get rather long, so instead I will provide a link to relevant info and, of course, to doxygen. Please, people, do everyone a favor and use doxygen style commenting, please please! The other thing I advocate is keeping all documentation close by, or rather easily accessible. Most documents that are created never get read, or never get read at the right time, because they sit in some obscure directory on some machine somewhere. The intention behind creating the documentation is undoubtedly noble, but none of it is of any real help if it is not accessible at the right time. With some rather trivial tweaks to doxygen, you can easily make that happen. We tried something like this and it was a great hit.
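For those who haven't seen it, this is roughly what doxygen style commenting looks like (a made-up function for illustration): the structured tags let doxygen generate browsable documentation straight from the source, so the docs live right next to the code.

/**
 * @brief Clamps a value to the inclusive range [lo, hi].
 *
 * @param value The value to clamp.
 * @param lo    The lower bound.
 * @param hi    The upper bound.
 * @return value limited to lie between lo and hi.
 */
int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}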

This is not the first time I have worked on such a piece of code, but I still find it difficult to understand how reputed organizations can work this way. Let the facts be known: once upon a time I too was guilty of writing such code, but we all learn from our mistakes. Taking a lesson out of every experience is, I think, the key.

Too busy to write?

Sorry, but I have been a little busy for the past two weeks. Too busy to blog, I guess. I have been aiming for a code freeze on the Doofus game, and it's been hard work getting all the bugs and issues sorted out. I'm going for the final push this time around, to get at least the coding issues out of the way. The good thing is there isn't too much left on the coding side, so I may be able to push out another beta by next week. Hopefully it will be the last and final beta before (wait, don't get your hopes up) at least one release candidate before Doofus goes gold. There is, I guess, still a sizable number of levels to be completed.

Unlike the release cycles of other software, the Doofus game's release cycles are a little different. I devised a new method after we found the old one rather monolithic for this particular project, and because of the obvious constraints we have as a small team (unavailability of testers at specific times and things of that nature). Traditionally, you have a set of alpha releases of a product, where each alpha release is tested in-house by the developers and/or testers. Bugs are filed against specific releases and fixed during a bug-fixing stage, whenever that may be (it generally differs from project to project). Beta releases are pushed out when alpha releases get stable enough for "general consumption". Beta releases are widely accepted as "almost complete" versions of the product, so a beta release often signifies a "feature freeze". A bug-fixed beta release can become a Release Candidate if the dev team feels confident enough, which eventually goes Gold when everyone is confident enough.

In the case of the Doofus game, things are a little different: a beta release signifies a "feature freeze" for a particular set of features. Let me explain. When we started developing the game, the O2 Engine came first (before we started on the actual game code). The name "O2 Engine" comes from the repository branch of an older game of mine that was never released because it had too many flaws! A lot was carried over, including primitive libraries and some design decisions and implementations. Anyway, since the new project was a bit complex, and our testing team small and working part-time, we decided to have specific release milestones, each containing only a limited set of completely complete features. When I say completely complete, I mean "feature frozen". Each beta release addressed different features. The first was for engine integration with geometry; the level structure was finalized and resource management was put in place. That first release looked really ugly because the renderer was only partially finished.

The second beta addressed the collision systems, basic gameplay elements like triggers and activators, and integration with third-party modules and libraries. The third was for the rendering sub-systems, which is when those screenshots were posted. This release, the fourth, will be for AI (NPCs) and physics, and it marks the end of the game features. The beta still has to go through a bug-fixing stage before I am confident enough to even look at an RC, but it does mark an end to any major modifications to the game code. Many would say these betas are actually alphas, but there are two reasons I call them beta releases: a) they are feature-freeze releases, with no features added to or removed from the already tested set; and b) our testers are not full-fledged project members, so white-box testing responsibility falls mostly on the dev team. That said, the beta testers are not just kids beating at the keyboard; they have been instrumental in testing the product.

I guess this release has got me a bit excited, and I am working on the website(s) at the same time. I have actually started quite a few blog posts in the past week but haven't had the time to polish and/or finish them yet. Maybe this week will see more posts on the blog, I hope.

A look into Code::Blocks.

This is my second entry on Code::Blocks in the past couple of weeks. I had earlier commented on the release of the IDE but restrained myself from getting too carried away and thus, purposefully, didn't get into details at the time. The reason? Well, we all know how deceptive first impressions can be, especially about something like an IDE. IDEs can be complex beasts, and it can take some time to work things out with them. However, Code::Blocks has been mostly easy to adapt to, at least for me. This is in part due to the fact that it mostly mirrors how Visual Studio works, and I work on that beast 98% of the time. So adapting to Code::Blocks was not too difficult, except for minor differences.

First of all, kudos to the Code::Blocks team. They have done a great job of bringing us this editor. It's no mean feat, but they seem to have pulled through against all odds, and that does indeed deserve praise. It's true I had been eagerly awaiting the Code::Blocks release for some time, and if you have been reading my blog, you will have seen me mention the IDE a couple of times before. To cut a long story short, I am lazy! I hate writing UI code, and Code::Blocks (wxSmith) does much of that work for you; and yes, I always tend to use wxWidgets for most of my cross-platform UI projects. I wish this release had come a year earlier, when I was working on a C++ project that used wxWidgets for the UI; it would have saved me a sh*t-load of trouble.

Even though the IDE auto-generates UI code, that code is surprisingly clean. Most editors make a mess of code generation, but not so with C::B. The UI code is generated into pure C++ files (.h and .cpp), which you can continue editing like normal text files, provided you don't insert code into the blocks C::B reserves for itself. It reminds me of the days I worked with Visual Studio 6.0 and MFC; if I am not mistaken, VC++ 6.0 used a similar method for code generation. You can even move the code around and C::B will correctly recognize it, yes, provided the blocks are kept intact. Another good thing is that you can save the resources as .xrc files, which I tend to use extensively with wxPython. For me, Code::Blocks could very well become the de-facto editor for working with wxWidgets. Too bad it doesn't have native Python support; that would have been great indeed.
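For those unfamiliar with XRC, loading such a resource at runtime is straightforward; a minimal C++ sketch (hypothetical file and dialog names) might look like this:

#include <wx/wx.h>
#include <wx/xrc/xmlres.h>

class MyApp : public wxApp
{
public:
    virtual bool OnInit()
    {
        // Register the handlers for the standard controls, then pull
        // in the designer-produced resource file.
        wxXmlResource::Get()->InitAllHandlers();
        if (!wxXmlResource::Get()->Load(wxT("main_dialog.xrc")))
            return false;

        // Instantiate the dialog that was laid out in the designer.
        wxDialog dlg;
        if (wxXmlResource::Get()->LoadDialog(&dlg, NULL, wxT("MainDialog")))
            dlg.ShowModal();
        return false;  // quit once the dialog is closed
    }
};

IMPLEMENT_APP(MyApp)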

So, besides good integration with a UI builder, what more does C::B offer? Other than UI development, can it be used for serious C++ work? Yes, it very much can. All said, my main interest in the IDE was not how easily you could build a UI. My main interest was to see whether C::B could be used for serious day-to-day development and how well it scales to full-scale projects. There are several other IDEs that look equally impressive, until you actually try to get things done with them. So what's the story with C::B? Does it live up to the standards of other professional IDEs? Well, besides some niggling quirks, C::B seems to be pretty good for full-scale projects. I have a habit of building a "hello world", or rather a "hello notepad", project with any new UI library I encounter; it gives you a fair idea of the library's capabilities. I tried the same with C::B and was pretty happy with the overall experience.

Now for some issues I had with the IDE. The first, and probably the most annoying, is that the shortcut key assignments are very different from other editors, at least the ones I use. The fact that the IDE doesn't let me set shortcuts like Ctrl-F5 or Shift-F5 is also a hindrance to quickly acclimatizing to C::B. That's one serious nag! The other thing I noticed was that the debugger can get really slow on Linux systems, though I must say that happened only twice for me and is not a frequent occurrence. On Windows, the Visual Studio 9.0 directories got messed up when I installed VC 9.0 after I had installed C::B; C::B doesn't pick up the VC 9.0 directories when you upgrade or remove older Express versions. Not a big problem though, I did manage to set them manually in the Options section. The debugger is not as extensive as others, but I guess you can generally live with that by adding "watches". Most other issues, or for that matter even these, are rather trivial, I suppose.

OK then, how does the IDE handle projects across platforms? I had almost no trouble porting applications between platforms, at least no issues that were IDE-centric, though then again my sample application was not all that extensive. Even so, it's worth mentioning that once set up right, the project written under Linux compiled without a single major change on Windows. No mucking around with Makefiles or build systems. Yes, it's true I programmed for compatibility, but still, all I really had to do was switch the compiler settings (to VC 9.0), that's all.

So can C::B be used for production-quality projects? I would have to answer "yes". It is definitely good enough for production code, and if you are working with wxWidgets, I would even go so far as to recommend this IDE over others. True, it is not as powerful as Visual Studio, at least not yet, but it still deserves more than praise. For C++ development under Linux, I recommend this IDE hands down, period!

No more T-Junctions.

I must confess, my original post on optimization of game levels was, well, incomplete and inaccurate. The optimizations were not fully complete: a lot of T-junctions were left behind (Sandeep was probably the only person to catch that). A T-junction occurs where a vertex of one triangle lies partway along an edge of a neighboring triangle without splitting it; they were causing a lot of problems with A* navigation, and I am glad they are gone! So here are the updated screens. Some extra level geometry has been added, so the screens might not look exactly the same as in the earlier post.


The updated scene (Doofus 3D).


T-Junctions removed.

A tryst with CSS and web-design.

I have been juggling my time these days, working on two things at once. Yes, of course, there is the game, and then I have also been spending some time getting the website up and ready. That also means I am getting my hands dirty with web technologies like CSS and PHP. The two things couldn't be more different. On the one hand I have this geometrically intensive, monumental algorithmic monster called the game engine, and on the other there is this woefully deep chasm called web-design. It's a fact that I would choose the monster over the chasm any given day (I can slay monsters pretty easily), but that doesn't change the fact that web-design is notoriously more difficult than I had anticipated. Yes, I have a good hand with GIMP and Inkscape, and for the record all of the game interface was created using those two packages. Creating most of the art for the web pages is easy! Yes, I am pretty good with most programming languages (if I may say so myself). Yet putting up the web-site has had me cringing with frustration more than once in the past week.

Talking with friends and colleagues who have been down this road, I always knew web development was a bit quirky. But let me just say this: web-design can be crazily non-deterministic! OK, that was a bit much, maybe I am going overboard, but sometimes web browsers do seem to have a mind of their own. It is this quirkiness that makes web development a pain in the rear. Different web browsers can interpret web markup differently, mostly the way they want to, and that, to me, coming from the stricter discipline of application programming, is rather distressing. It isn't one particular browser at fault; though some are more unreliable than others, most browsers have some sort of weirdness built into them (check out CSS compatibility, W3C DOM compatibility). IE (Microsoft), as usual, receives the most flak for being hypocritical in its approach to maintaining standards (oh, please don't even get me started on that!!). But what I found surprising was that the story is not much better with the others either.

All said, most problems are no more than a Google search away. Considering the number of people working on web development, there is always some poor unfortunate soul who has battled with a problem similar to the one you face. He has, probably after much deliberation and hair-pulling, found the solution, and, yes, has been kind enough to post it on a website or blog so that those who follow in his footsteps will not falter like he did. Bless him/her! I found Google to be an invaluable resource for web development, and with some degree of query refinement you can pretty much get exactly what you are looking for. Fortunately, when it comes to web development there are plenty of tutorials and code dumps around to get things working.

Then again, I have decided to take a shortcut and go with Joomla for the site, since it's very easy to understand and saves me a lot of work. Also weighing in was the fact that I have had a pretty good experience running my personal site on it, and it seems a solid, all-round, free CMS (Content Management System) solution. The fact that Joomla has a very active community and a myriad of plug-ins for almost anything and everything also makes it an attractive choice. I tried other CMSes as well, but couldn't get around to understanding them as well as I did Joomla. However, it would seem there is no escaping CSS and PHP to some extent, since customizing anything in the CMS also means understanding Joomla's own structure.

The work on the website continues. I hope to finish it soon, but it has (as always) been a learning experience. With all said and done, I am a person who loves challenges, and to tell you the truth, I am kinda enjoying it! 😀

Optimizations on game levels.

Just an update on the Doofus game and what I have been working on for the past couple of weeks, which have seen me seriously working at getting the triangle count down in the game levels. The tri count had been increasing steadily over the last few levels, and it started hitting the FPS really badly, which is why I had no option but to go for triangle decimation. The number of triangles in even moderately complex levels turned out to be surprisingly high, and most of them were all but useless. The reason: Doofus 3D levels use brush-based geometry, and the tris are the result of successive CSG (Constructive Solid Geometry) splits. The more detail I added to the levels, the more redundant splits occurred within the brushes, meaning the FPS started falling like a rock for arbitrarily complex levels.

The optimization technique I was working on reduces the number of triangles by a) removing redundant vertices and b) collapsing unwanted edges. Simple, right? Not quite. Triangle decimation turned out to be somewhat more complex than I had anticipated. Fortunately, after some really hard brainstorming, I managed to get it working just as I wanted. In some situations the triangle count now drops to as little as 4% of the original, though an average of around 10 to 20% is what I usually get; that is still quite significant to say the least. Thank God my effort has not been in vain after all. It was a real pain to get it working correctly.
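To give a flavor of the first half of the technique, here is a minimal sketch of redundant-vertex removal; this illustrates the general idea only and is not the actual Doofus code. Weld vertices that coincide within an epsilon, then drop any triangle that collapses into a line or a point:

#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { int a, b, c; };

// Snap a coordinate onto a grid of cell size eps so that nearly
// coincident vertices map to the same key.
static long quantize(float v, float eps)
{
    return (long)std::floor(v / eps + 0.5f);
}

void weldVertices(std::vector<Vec3>& verts, std::vector<Tri>& tris, float eps)
{
    typedef std::pair<long, std::pair<long, long> > Key;
    std::map<Key, int> lookup;              // quantized position -> new index
    std::vector<Vec3>  welded;
    std::vector<int>   remap(verts.size());

    for (size_t i = 0; i < verts.size(); ++i)
    {
        Key k(quantize(verts[i].x, eps),
              std::make_pair(quantize(verts[i].y, eps),
                             quantize(verts[i].z, eps)));
        std::map<Key, int>::iterator it = lookup.find(k);
        if (it == lookup.end())
        {
            remap[i] = (int)welded.size();  // first time at this position
            lookup[k] = remap[i];
            welded.push_back(verts[i]);
        }
        else
        {
            remap[i] = it->second;          // duplicate: reuse earlier vertex
        }
    }

    std::vector<Tri> kept;
    for (size_t i = 0; i < tris.size(); ++i)
    {
        Tri t;
        t.a = remap[tris[i].a];
        t.b = remap[tris[i].b];
        t.c = remap[tris[i].c];
        if (t.a != t.b && t.b != t.c && t.c != t.a) // drop degenerate tris
            kept.push_back(t);
    }
    verts.swap(welded);
    tris.swap(kept);
}

CSG splits tend to duplicate vertices all along the split planes, which is why even a simple pass like this can remove a surprising number of triangles; the edge-collapse half is the trickier part. Check out the images below to actually see the optimizations at work.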

Original scene.
A sample Doofus 3D scene

Triangles in the unoptimized version (click to enlarge)
Triangles in the scene before optimization.

Triangles in the optimized version (click to enlarge)
Triangles in the same scene after optimization.

I have also been working on completing the AI. Sorry, but I don't have screens for that; maybe next time. The AI still needs a fair amount of tweaking to get things working perfectly. I am not saying too much at this point; maybe in one of my next posts I will get into more detail. Hopefully I can finish this last pending piece of the game soon.

STL map/multimap woes.

I was porting someone else's C++ code from Windows to Linux. The code made heavy use of STL; no problem there. I have a good hand with STL (or so I thought), and my engine also makes heavy use of it. However, this code was written using Microsoft's version of STL. So what's the problem? STL is STL, right? It's standard across platforms, right? No, wrong! Apparently not entirely. Microsoft's version of STL is not 100% standards-compliant. I had read this before but had never actually come across a case where I found incompatibilities in code across STL libraries.

Until now, that is. The code I was porting happens to have a lot of maps and multimaps with deeply nested template code; a pain in the neck to debug, I must say. The problem started with the compiler throwing some ridiculous, almost illegible errors, which I traced back (with some difficulty) to the map<>::erase() function. MS' version of the function returns an iterator; the standard version doesn't return anything! So I checked the implementation I use in my engine, STLport, and it too returns nothing for map<>::erase(). I Googled around a bit and found that, indeed, there is no return value for that function.
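For anyone hitting the same wall, the portable C++98-era idiom is to advance the iterator before erasing instead of relying on a return value; a minimal sketch:

#include <map>

// Erase all entries with negative values, portably. For node-based
// containers like map, erasing one element leaves the other iterators
// valid, so the post-increment hands erase() the old iterator while
// 'it' has already moved on to the next element.
void eraseNegatives(std::map<int, int>& m)
{
    std::map<int, int>::iterator it = m.begin();
    while (it != m.end())
    {
        if (it->second < 0)
            m.erase(it++);
        else
            ++it;
    }
}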

Strange. I would generally agree with MS on this one. Most other containers, like vector and list, return an iterator from erase(), so map and multimap should too. I don't understand the logic behind map<>::erase() not returning an iterator. Maybe the standards committee got it wrong, or maybe I haven't fully understood their reasons. A caveat to those who use MS STL: don't. Though the erase() issue is somewhat trivial in itself, debugging template code can be really difficult. I, for one, use the standards-compliant STLport to avoid such issues; though it may be a little difficult to set up, I would recommend that others do the same.