A look into Code::Blocks.

This is my second entry on Code::Blocks in the past couple of weeks. I had commented earlier on the release of the IDE but refrained from getting too carried away and thus, purposefully, didn’t get into details at that time. The reason? Well, we all know how deceptive first impressions can be, especially about something like an IDE. IDEs can be complex beasts and it can take some time to work things out with them. However, Code::Blocks has been mostly easy to adapt to, at least for me. This is in part due to the fact that it mostly mirrors how Visual Studio works, and I work on that beast 98% of the time. So adapting to Code::Blocks was not too difficult, minor differences aside.

First of all, kudos to the Code::Blocks team. They have done a great job at bringing us this editor. It’s no mean feat, but they seem to have pulled through against all odds, and that does indeed deserve praise. It’s true I had been eagerly waiting for the Code::Blocks release for some time, and if you have been reading my blog, you will have seen me mention the IDE a couple of times before. To cut a long story short, I am lazy! I hate writing UI code, and Code::Blocks (wxSmith) does much of that work for you; and yes, I always tend to use wxWidgets for most of my cross-platform UI projects. I wish this release could have come a year earlier, when I was working on a C++ project which involved using wxWidgets for the UI; it would have saved me a sh**t load of trouble.

Even though the IDE auto-generates UI code, it’s surprisingly clean. Most editors make a mess of code generation, but not so with C::B. The UI code is generated into pure C++ files (.h and .cpp) which you can continue editing like your normal source files, provided you don’t insert code into the blocks C::B uses. Reminds me of the days I worked with Visual Studio 6.0 and MFC; if I am not mistaken, VC++ 6.0 used a similar method for code generation. You can even move the code around and C::B will correctly recognize it, yes, provided the blocks are kept intact. A good thing is the fact that you can save the resources as .XRC files, which I tend to use extensively with wxPython. For me, Code::Blocks could very well become the de-facto editor for working with wxWidgets. Too bad it doesn’t allow native Python support. That would have been great indeed.

So, besides having good integration with a UI builder, what more does C::B offer? Other than the fact that it can be used for UI development, can it be used for other, more serious C++ development? Yes, it very much can be. All said, my main interest in the IDE was not how easily you could build a UI. My main interest was to see if C::B could be used for serious day-to-day development and how well it scales to full-scale projects. There are several other IDEs that look equally impressive, until you actually try to get things done with them. So what’s the story with C::B? Does it live up to the standards of other professional IDEs? Well, besides some niggling quirks, C::B seems to be pretty good for full-scale projects. I have a habit of building a “hello notepad” project (my equivalent of “hello world”) with any new UI library I encounter. It gives you a fair idea of the capabilities of the library. I tried the same with C::B and was pretty happy with the overall experience.

Now for some issues I had with the IDE. The first and probably most annoying was that the shortcut key assignments are very different from other editors, at least the ones I use. The fact that the IDE doesn’t allow me to set shortcuts like Ctrl-F5 or Shift-F5 is also somewhat of a hindrance to quick acclimatization to C::B. That’s one serious nag! The other thing I noticed was that the debugger can get really slow on Linux systems, though I must say it happened only twice for me and is not a frequent occurrence. On Windows, the Visual Studio 9.0 directories got messed up when I installed VC 9.0 after I had installed C::B; C::B doesn’t pick up the VC 9.0 directories when you upgrade or remove older Express versions. Not a problem though, I did manage to set them manually in the Options section. The debugger is not as extensive as others, but I guess you can generally live with that by adding “watches”. Most other issues, and for that matter even these, are rather trivial, I suppose.

OK then, how does the IDE handle projects across platforms? I had almost no trouble porting applications across platforms, at least no issues that were IDE-centric. But then again, my sample application was not all that extensive. Even then, it’s worth a mention that after being set up right, the project written under Linux compiled without a single major change on Windows. No mucking around with Makefiles or build systems. Yes, it’s true I programmed for compatibility, but still, all I really had to do was switch the compiler settings (to VC 9.0), that’s all.

So can C::B be used for production quality projects? I would have to answer “yes” to that question. It definitely is good enough to be used for production code, and if you are working with wxWidgets, I would even go so far as recommending this IDE over others. True, it is not as powerful as Visual Studio, at least not yet, but it still deserves more than a little praise. For C++ development under Linux, I would recommend this IDE hands down, period!

No more T-Junctions.

I must confess, my original post on optimization of game levels was, well, incomplete and inaccurate. The optimizations were not fully complete. There were a lot of T-junctions left behind after optimization (Sandeep was probably the only person to catch that). However, I managed to remove those too. They were causing a lot of problems with A* navigation and I am glad they are gone! So here are the updated screens. Some extra level geometry has been added, so the screens might not look exactly the same as in the earlier post.

tri_opt_tjunction_small.jpg

The updated scene (Doofus 3D).

tri_opt_tjunction2_small.jpg

T-Junctions removed.
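For the curious, the core of spotting a T-junction is a simple geometric test: a vertex of one triangle lies strictly inside an edge of a neighbouring triangle, rather than on one of its endpoints. Below is a minimal 2D sketch of that test (the names `Vec2` and `isTJunction` are my own for illustration; the actual engine works in 3D and also has to re-triangulate after detection):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Returns true if p lies strictly inside segment a-b: collinear with
// the segment (cross product near zero) and strictly between the two
// endpoints (projection within an epsilon of the open interval).
bool isTJunction(const Vec2& a, const Vec2& b, const Vec2& p) {
    const double eps = 1e-9;
    double cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    if (std::fabs(cross) > eps) return false;          // not collinear
    double dot  = (p.x - a.x) * (b.x - a.x) + (p.y - a.y) * (b.y - a.y);
    double len2 = (b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y);
    return dot > eps && dot < len2 - eps;              // strictly inside
}
```

Once such a vertex is found, the fix is to either split the neighbouring triangle at that vertex or collapse the offending vertex away, so the mesh edges match up exactly; mismatched edges are exactly what confuses A* navigation over the mesh.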

A tryst with CSS and web-design.

I have been juggling my time these days, working on two things at once. Yes, of course there is the game, and then I have also been spending some time getting the website up and ready. Yes, that also means I am getting my hands dirty with web technologies like CSS and PHP. The two things couldn’t be more different. On the one hand I have this geometrically intensive and monumental algorithmic monster called the game engine, and on the other there is this woefully deep chasm in the form of web-design. It’s a fact I would choose the monster over the chasm any given day (I can slay monsters pretty easily), but that doesn’t change the fact that web-design is notoriously more difficult than I had anticipated. Yes, I have a good hand with Gimp and Inkscape, and for the record, all of the game interface was created using those two packages. Creating most of the art for the web pages is easy! Yes, I am pretty good with most programming languages (if I may say so myself). However, putting up the web-site has had me cringing with frustration more than once in the past week.

Talking with friends and colleagues who have been down this road, I always knew web development was a bit quirky. But let me just say this: web-design can be crazily non-deterministic! OK, that was a bit too much, maybe I am going a bit overboard, but sometimes web browsers do tend to have a mind of their own. It is this quirkiness that makes web-development a pain in the rear. Different web browsers can interpret web markup differently, mostly the way they want to, and to me, coming from the stricter discipline of application programming, that is rather distressing. It isn’t one particular browser at fault, though some are more unreliable than others; most browsers have some sort of weirdness built into them (check out CSS compatibility, W3C DOM compatibility). IE (Microsoft), as usual, receives the most flak for being hypocritical in its approach towards maintaining standards (oh, please don’t even get me started on that!!). But what I found surprising was that the story is no better with the others either.

All said, most problems are no more than a Google search away. Considering the number of people working on web-development, there is always some poor unfortunate soul who has battled a similar problem to the one you face. He has, probably after much deliberation and hair-pulling, found the solution, and yes, has been kind enough to post it on a website or a blog so that those who follow in his footsteps will not falter like he did. Bless him/her! I found Google to be an invaluable resource for web development, and with some degree of query refinement, you can pretty much get exactly what you are looking for. Fortunately, when it comes to web-development, there are plenty of tutorials and code dumps around to get things working.

Then again, I have decided to take a shortcut and go with Joomla for the site, since it’s obviously very easy to understand and saves me a lot of work. Also weighing in was the fact that I have had a pretty good experience running my personal site on it, and it seems a good all-round, solid, free CMS (Content Management System) solution. The fact that Joomla has a very active community and a myriad of plug-ins for almost anything and everything also makes it an attractive choice. I tried other CMSes as well but couldn’t get around to understanding them as well as I did Joomla. However, it would seem there is no escape from CSS and PHP to some extent, since customizing anything with the CMS also means understanding Joomla’s own structure.

The work on the website continues. I hope to finish it soon, but it has (as always) been a learning experience. With all said and done, I am a person who loves challenges, and to tell you the truth, I am kinda enjoying it! 😀

Are integrated graphics chips the new battlezone?

In what could be a “one-up” and almost a sucker punch to Intel, AMD announced an amazing new chipset, the 780G, which is sure to create some flutter in the industry. The 780G puts a full-fledged GPU onto the main-board, and reading the specs, it does seem to be substantially better than any of the other on-board or, in correct terminology, integrated graphics chips out there. While Intel claims to have “more than half” of the graphics market, the graphics, or should I say “craphics”, cards supported by Intel (and to some extent AMD earlier) are nothing more than a big joke. The only reason they have such a huge portion of the market is because the average joe/jane is stuck with whatever came pre-installed. I was recently quizzed by an acquaintance as to why his system could not run Bioshock, and the only answer I could really give him was, “Well, your system isn’t designed for that sort of gaming”. To that his reply was, “Oh, I just got a brand new system. How is it that it can’t run the latest games?”

It’s really disturbing for people who buy a brand new PC only to see it fail, utterly miserably I might add, to even push a last-generation game at a shallow 24 FPS. Most are clueless, and while their PCs may be brand new, with a “fast” multi-core processor and gazillions of RAM at their disposal, all they can really run is their office applications. Yes, those run faster and better! No such luck with games though. People have to realize that having a faster CPU, or for that matter more cores, doesn’t really help much with games. It does to some extent, but as it stands right now, I would rather have a top-of-the-line graphics card like the 8800 GTX than a quad-core CPU. It’s a very deceptive concept, I know, but that’s how it is.

Anyone who has worked on graphics knows how utterly lousy and what a pathetic abomination integrated graphics chips can be. I have battled all sorts of problems, everything from broken drivers to faulty implementations to near-absent feature support. I hope things are finally changing for the better. The question is, where does that leave Intel? Intel has been desperately trying to get a better graphics solution onto its boards without too much luck. The chipset that AMD has thrown up beats anything Intel can conjure up, hands down! At least in the near future, that is. While Intel may add more cores, they aren’t going to be too useful for people who want to run the latest games. With the quality of integrated graphics Intel offers, users will have to install, at the very least, a low-end graphics card. Sorry Intel, that’s how bad things are!

Then what does the Green Brigade (NVIDIA) have to say to all this? AMD’s acquisition of ATI is finally showing its advantages. While the graphics chips may not be the fastest out there, they are indeed very attractive considering the price point. Chipzilla and Graphzilla had better get their acts together, because if 2007 was the year both ruled in their respective departments, there is a new kid in town. He’s got better and faster guns, and looks more attractive than any of the old boyz!

Optimizations on game levels.

Just an update on the Doofus game and on what I have been working on for the past couple of weeks, which have seen me seriously working at getting the triangle count down in the game levels. The tri count had been increasing steadily for the last few levels, and it started hitting the FPS real bad. That is why I had no option but to go for triangle decimation. The number of triangles for even moderately complex levels started turning out to be surprisingly high, and most triangles were all but useless. The reason: Doofus 3D levels use brush-based geometry, and the tris are a result of successive CSG (Constructive Solid Geometry) splits. The more detail I added to the levels, the more redundant splits occurred with the brushes, meaning the FPS started falling like a rock for arbitrarily complex levels.

The optimization technique I was working on reduces the number of triangles by a) removing redundant vertices and b) collapsing unwanted edges. Simple, right? Not quite. Triangle decimation turned out to be somewhat more complex than I had anticipated. Fortunately, after some real hard brainstorming, I managed to get it working just as I wanted. Now in some situations the triangle count reduces to as much as 4%, but an average value of around 10 to 20% is what I usually get. That is also quite significant, to say the least. Thank God my effort has not been in vain after all. It was a real pain to get it working correctly. Check out the images below to actually see the optimizations at work.

Original scene.
A sample Doofus 3D scene

Triangles in the unoptimized version (click to enlarge)
Triangles in the scene before optimization.

Triangles in the optimized version (click to enlarge)
Triangles in the same scene after optimization.
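To give a flavour of pass (a) above, here is a minimal 2D sketch of redundant vertex removal (the names `Vec2` and `removeRedundantVertices` are my own for illustration; the real thing runs on 3D brush geometry after the CSG splits). A vertex is redundant when it lies on the straight line between its neighbours, so dropping it leaves the outline, and hence the covered area, unchanged:

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Walk a closed polygon outline and keep only "corner" vertices.
// A vertex whose two incident edges are collinear (cross product
// near zero) contributes nothing to the shape and is dropped.
std::vector<Vec2> removeRedundantVertices(const std::vector<Vec2>& poly) {
    const double eps = 1e-9;
    std::vector<Vec2> out;
    const size_t n = poly.size();
    for (size_t i = 0; i < n; ++i) {
        const Vec2& prev = poly[(i + n - 1) % n];
        const Vec2& cur  = poly[i];
        const Vec2& next = poly[(i + 1) % n];
        double cross = (cur.x - prev.x) * (next.y - prev.y)
                     - (cur.y - prev.y) * (next.x - prev.x);
        if (std::fabs(cross) > eps)
            out.push_back(cur);   // a genuine corner, keep it
    }
    return out;
}
```

Pass (b), edge collapsing, is the trickier half, since merging two vertices can flip or degenerate neighbouring triangles, which is where most of the brainstorming went.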

I have also been working on completing the AI. Sorry but I don’t have screens for those, maybe the next time. The AI still needs some amount of tweaking to get things working perfectly. I am not saying too much at this point in time; maybe in one of my next posts I will get into more details. Hopefully I can finish this last pending thing in the game soon.

More than impressed with Xfce.

I am a long-time Gnome fan, but recently I had an unexpected run-in with Xfce. I was visiting a friend of mine who had an old laptop that couldn’t be used for much of anything really. So we decided to give installing Linux on it a shot. Obviously Xubuntu was the distro of choice, since the hardware was pretty old. We got around to installing it, and I quickly noticed how fast the GUI was responding, even on such an old and rather archaic piece of hardware. The Xfce environment looked really slick indeed. I was under the wrong impression that Xfce lacked all the bells and whistles provided by Gnome or KDE. Obviously the next thing was to install it on my own desktop, which I did, and I can tell you, the Xfce desktop manager is quite a bit faster than its older and heavier cousins. I generally don’t mess around with stable OS configurations, but I happen to be a speed freak, and anything that is fast and light always tends to get my attention. Naturally, I made an exception for this one. Now Xfce is my default desktop.

With everything set to default, Xfce does take less memory than Gnome or KDE. But that wasn’t the only thing that impressed me about this desktop environment. In functionality too, it seems designed to enhance productivity. Not that other managers aren’t, but you know those little things that nag you about other windowing systems under Linux; well, they are nicely taken care of in Xfce. The desktop environment has an uncluttered interface, and though it may miss the richness of KDE, every effort is made so that the user can find his/her way around quickly. Xfce feels and looks very much like a lightweight clone of the Gnome manager (though it is not). Also, Xfce will happily work with Gnome, and the two can exist on the same machine without conflict; to some extent they are even interoperable and can share data (like Xfce being able to use Gnome icons and files). That’s a thumbs up as far as I am concerned.

The default file manager under Xfce, Thunar, is much (much) faster to open up, and though it lacks some features of Nautilus, I didn’t find anything missing that hindered my work. So, on the whole, is Xfce for you? Well, judging from this post, you can pretty much see where I stand. However, it is a matter of personal taste. If you like an uncluttered, fast desktop, or have a lot of windows open which you switch between often (I know I do), then you have to check Xfce out. I, for one, am pretty happy with Xfce, and I am not switching in a hurry.

STL map/multimap woes.

I was working on porting someone else’s C++ code from Windows to Linux. This code made heavy use of STL; no problem there. I have a good hand on STL (or so I thought), and my engine also makes heavy use of it. However, this code was written using Microsoft’s version of STL. So what’s the problem? STL is STL, right? It’s standard across platforms, right? Wrong! Apparently that’s not entirely true. Microsoft’s version of STL is not 100% standards-compliant. I had read this before, but hadn’t actually come across a case where I found incompatibilities in code across STL libraries.

Until now, that is. The code I was porting happens to have a lot of maps and multimaps, with deeply nested template code. A pain in the neck to debug, I must say. The problem started with the compiler throwing some ridiculous, almost illegible errors, which I traced back (with some amount of difficulty) to the map<>::erase() function. MS’ version of the function returns an iterator; the standard version doesn’t return anything! So I checked the one I use in my engine, STLPort, and it too doesn’t return an iterator for map<>::erase(). Googled around a bit, and found that indeed there is no return value for that function.

Strange. I would generally agree with MS on this one. Most other containers like vector and list return an iterator from erase(), so map and multimap should too. I don’t understand the logic behind map<>::erase() not returning an iterator; maybe the standards committee got it wrong, or maybe I haven’t fully understood their reasons. A caveat to those who use MS’ STL: don’t. Though the erase() issue is to some extent trivial, debugging template code can be really difficult. I for one use the standards-compliant STLPort to avoid such issues. Though it may be a little difficult to set up, I would recommend others use the same.
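The usual place this incompatibility bites is erase-while-iterating. Code written against MS’ STL does `it = m.erase(it);`, which won’t compile against a standards-compliant library where erase() returns void. The portable idiom is to post-increment the iterator in the erase call, so it has already moved on before its node is destroyed. A minimal sketch (the helper name `eraseEvenKeys` is my own):

```cpp
#include <map>

// Erase every element with an even key, portably across STL
// implementations whose map<>::erase(iterator) returns void.
int eraseEvenKeys(std::map<int, int>& m) {
    int erased = 0;
    for (std::map<int, int>::iterator it = m.begin(); it != m.end(); ) {
        if (it->first % 2 == 0) {
            m.erase(it++);   // 'it' advances before the old node goes away,
            ++erased;        // so it remains a valid iterator
        } else {
            ++it;
        }
    }
    return erased;
}
```

This works because map iterators other than the erased one stay valid after an erase; only the erased node’s iterator is invalidated, and the post-increment has already stepped past it.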