Hardy is here.

Ubuntu 8.04, code-named Hardy Heron, was released two days ago, and since my internet machine has nothing better to do while I finish up the game, I went for a full system upgrade right away. Ubuntu goes from strength to strength with each release of the OS, and the story with Hardy is no different. I have been using Gutsy for the past 6 months, and with the release of Hardy I think XP is in serious danger of losing its number one spot on my list of preferred OSes. I just don’t boot into XP these days on my internet PC, and my reservations about Vista are well known. Ubuntu at the moment is all you could want from an OS, though some nagging issues clearly remain. I do still use XP for all my programming work; unfortunately, that’s where the bulk of the gaming market is. However, I do plan to release the Doofus game for Linux once the Windows version is out.

I have praised Ubuntu before on this blog, but it is remarkable how consistently Canonical has managed to do a good job and stick to its motto of providing a simple yet capable Linux distribution that even a common, or should I say non-tech-savvy, person can use. They have successfully managed to change the “Linux is for geeks” attitude into something people can actually use in their everyday lives. Let’s be fair, there are others that are fast catching up and can be considered equally impressive, yet Ubuntu has managed to stay ahead of the curve, just that little bit. It’s those small things and annoyances that Ubuntu has managed to address successfully that have led to its popularity. Some people would argue that Ubuntu could not have stood so tall if it weren’t standing on the shoulders of teams like Fedora, SUSE and of course Debian, without whose work Ubuntu would not have been possible. Yes, that’s indeed true. However, Ubuntu has made a difference by actually using, and in some cases integrating, the great work done by all these teams and putting together a strong, stable distro that could easily be considered the best of the Linux distros out there.

Little things go a long way. Many people have heard about Linux, probably more than you might think. However, very few have actually used it. Why? It’s a headache to partition your disks to make room for a Linux partition, and an average-joe user dreads things like that. Enter Wubi! Now, some might say having Linux on an NTFS partition is nothing new. It could be done with several other distros long before Ubuntu was around, but how many of them allow it to be done with a few simple clicks? I threw the Ubuntu CD in the drive under XP and the first thing that popped up was the Wubi installer. I could install an entire Linux distro in about 4-5 clicks and a couple of restarts of the machine. I am a long-time Linux user, but even I was surprised at how trivial it was to install Linux with Wubi. I wouldn’t recommend Wubi to the experienced user, but this option is rather cool for a person who has never seen or used Linux before.

However, not every aspect of the distro is flawless. There are some issues that still need work, and it may not all be the distro’s fault either. Some things are still amiss with the community as a whole; technical issues like sound and WiFi are the ones that come to mind. There are some issues there that need to be sorted out. Such issues are no doubt small, and Ubuntu has addressed a lot of them with Hardy. The only real complaint I have is that I still can’t seem to get my front headphone jack to work, not with Gutsy and not with Hardy. I guess this is some weird ALSA problem. Fortunately the NVIDIA driver is doing a fine job. I remember a time when h/w vendors didn’t seem too interested in Linux, but I must say things are changing for the better. It wasn’t that long ago that you couldn’t find a decent driver for your graphics card; now most leading distros come bundled with one.

As a parting note, a few suggestions on download and upgrade. I would recommend using BitTorrent, since I found it far faster than the overloaded Ubuntu servers. The CD ISOs can be found on all mirrors. Try this link if you want DVD ISO torrents. Also remember, if you are upgrading from a CD, to use the “alternate” version of the install ISO. It is best to use the Update Manager to do an upgrade of the OS; it’s the safest method. If you have downloaded the alternate version of the ISO, you can upgrade without having to actually burn a CD: Linux can mount ISOs directly, and you don’t need any special software to do it. Make a directory under /mnt called “isocdrom” and use the commands

sudo mkdir /mnt/isocdrom
sudo mount -o loop -t iso9660 ubuntu-8.04-alternate-i386.iso /mnt/isocdrom/

to mount the ISO directly. Then use the command

sudo /mnt/isocdrom/cdromupgrade

to start the upgrade and follow the instructions. Remember to use the full path “/mnt/isocdrom/cdromupgrade” while starting the upgrade.

Triple booting: Vista – Linux – XP using Vista’s boot loader.

Ah, it’s Vista again, 😀 but this entry is a bit different from my other ‘Vista’ entries. Did you know you can have a multi-boot system via the Vista boot loader? OK yes, I am using Vista, or rather I am testing the Doofus game on various Windows versions and Vista just happens to be one of them. No, I am not officially using it (as yet) on dev machines, but since we are heavy into testing, and since past experience has shown us that Vista is a pretty unreliable OS, we decided on full Vista compatibility testing this time around. However, no one on the team had Vista installed on their PCs, so we had to go looking for someone who did. We found a friend with a Dell laptop who had Vista Ultimate, but unfortunately the guy had long since formatted the machine and installed a dual-boot XP-Fedora combination. After a fair bit of convincing and coaxing, I did manage to have him spare a partition on his machine for Vista.

The problem was, we had to keep the XP-Fedora setup working. The Vista install overwrites the MBR, so a GRUB loaded into the MBR is effectively wiped out, preventing a boot into an already installed Linux partition, and that is exactly what happened. I had anticipated the problem; this is not the first time I have worked with multiple OSes and multi-boot setups, and in the past GRUB had served me well in such situations. So I was pretty confident that even if the MBR were overwritten, it would just be a matter of reinstalling GRUB. That’s what I did; unfortunately, it didn’t work this time around. Maybe there was some problem with the chainloader, or maybe Vista just doesn’t like anything other than its own boot loader; I was unable to find out exactly why Vista wouldn’t boot via GRUB. So I tried something else: booting into Linux via Vista’s boot loader, and with a bit of hacking it really worked, quite nicely I must say.


FL Studio Rocks!

All of this blog has been tech stuff and more tech stuff; people must be wondering what it is I otherwise do. Actually, as the story goes, back when I was working (not on gaming, but at my other programming job), my hobby used to be working on graphics stuff and modding other games and engines. Funny, my hobby became my job now that I am working on this game, so it was time to take up another one (hobby, that is). What’s the next best thing? Creating music, of course! 😀 It so happens I ran across this software called FL Studio 6 months or so back, started fiddling with it, and was soon hooked.

I was using the demo for quite a while and was really impressed by the whole product; impressed enough, I guess, to go get the full Producer version of the software. For the clueless, FL Studio is a digital audio workstation (DAW), meaning you can produce music with it. OK, I am still a noob at the whole thing, but even then the software allowed me to create some really good tunes pretty easily. The workflow is not trivial, but you can figure your way around after reading tutorials and following online video tutorials. I am not a good music composer, not by any stretch of the imagination, yet the software allowed me to create pretty decent tracks far more quickly than I had ever thought possible.

The program has near-infinite options for authoring audio, most of which I am completely clueless about. Unfortunately I haven’t got much time right now to look at each and every one, but I hope I can get around to understanding them eventually. To someone who has never seen FL Studio, the interface might look intimidating. The sheer number of nuts and bolts on the UI makes one think it would be rather difficult to get things working; however, looks can be very deceiving. While not a walk in the park, a few searches on the internet will have even a total noob creating great audio loops in no time. All you really need to do is visit YouTube, where there are more than enough tutorials, even for advanced stuff. The FL Studio site itself has more than enough vids to get you started.

FL Studio

I kinda liked the software right from the word ‘Go’ and, yes, I am a sucker for music. Unfortunately, all my earlier attempts to produce anything audible, with any other software (or, for that matter, hardware), could only be categorized as ‘noise’. FL Studio just seemed so intuitive. True, there are other, more powerful products on the market, but I think very few can stand up to FL Studio at its price point, not to mention the lifetime of free upgrades the product offers. Cool!

UAC popups were designed to annoy.

I was shocked reading this; well, almost, maybe not quite, but it seems Microsoft built the UAC prompts into Vista to annoy people, and on purpose at that. The idea behind it was to force developers to write more secure code that would not trigger those UAC pop-ups at all. Oh wow, so this is actually a feature then! Silly me, I was really stupid to think it was just an annoyance. So it’s official then: all those popups you encounter in Vista are actually the developers’ fault and have nothing to do with MS. Dumb developers! Somebody better teach them correct programming!

To put things in perspective, it’s not uncommon to have such a system in place. Most modern Linux distros have a similar concept, i.e. either via su or sudo, or by asking for the root password when doing admin tasks. Most distros today refuse direct root logins altogether, and that’s not a bad thing: root/admin accounts are notoriously easy to hack into, and plenty of users use root as their default login, which is dangerous by any standard. However, I hate UAC because it is far more annoying than, say, sudo. It treats me like an idiot and pops up for the simplest of operations, and in most cases it is unwarranted. The other reason I hate it so much is that the messages can be really cryptic; most of the time they read to me like downright disclaimers.

I have no problems with MS trying to make Vista a more secure platform; what MS has achieved with Vista is actually very good. All previous MS OSes had a very bad reputation for security. All said, Vista is pretty good, probably the best of the MS OSes thus far when it comes to security. However, the UAC just goes overboard, and that, actually, is the flaw! Most users are not technically savvy and don’t understand what the hell the UAC dialogs are telling them. It’s like “WTF is that!” when the dialog pops up. Most people I know just turn off UAC because it’s annoying. Such systems can get pwned or infected, offsetting all the security Vista provides. A system that was put in place to prevent something ends up actually doing the reverse!

I don’t buy the argument that annoying UAC popups per se will somehow make software vendors write more secure code. Simply having UAC restrictions in place will make sure application writers take enough care that their applications run on a default Vista setup. My argument is that there is actually no need for any popups at all. Developers who want their applications to run on Vista will automatically adhere to the UAC concept. I can understand not allowing an application to write to the registry or preventing files from being placed in the system area, and all of this can be done subtly, without a popup. It’s as simple as not allowing a file write/read in the system directory. A developer is smart enough to understand that the file needs to go somewhere else, or that the registry value needs to be placed somewhere other than a restricted area. For heaven’s sake, those of us who have programmed on *NIX-based systems have been doing this for years, and I don’t remember seeing any popups!
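
To illustrate (a minimal sketch of my own; the paths and file name are made up for the example), an application can simply attempt the write and, when the OS quietly denies it, fall back to a per-user location. No dialog needed:

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Attempt the privileged location; a non-admin process simply fails here.
    std::ofstream out("C:\\Windows\\myapp.cfg");
    if (!out) {
        // Denied: fall back to the per-user, always-writable area instead.
        const char* appdata = std::getenv("APPDATA");
        std::string fallback = std::string(appdata ? appdata : ".") + "\\myapp.cfg";
        out.open(fallback.c_str());
        std::cout << "System area denied; writing to " << fallback << std::endl;
    }
    out << "setting=value\n";
    return 0;
}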

Vote for a Poll!

I am trying out something new on the blog. Some of you may have noticed a mysterious poll appear on the right-hand side. I was browsing through WordPress plugins, came across this neat little poll plug-in, and decided to give it a try. So there it is. Let me know what you think. You may need to turn on JavaScript to enable voting.

Do you like the idea of having polls on this blog?


Misconceptions, discipline and a pragmatic approach.

How disciplined are you in coding? No, seriously, are you a mad code hacker or are you one of those who take that extra bit of care while coding? Are you paranoid about comments, or do you believe comments are not needed, especially for trivial code snippets? Why am I raising these questions? It so happens I was helping someone out (very recently) with porting code across platforms, and I happened to look at a piece of code, or rather pieces of code, that were an utter disgrace to coding standards. No comments, headers included in headers, crazy loop-backs across libraries, 10 people writing code all over the place, use of non-standard practices, utter disrespect for memory management and zero design (high level or low level). Can you believe somebody using malloc to create a C++ object! I mean seriously, you could hire monkeys to do a better job than that!
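
In case the malloc horror isn’t obvious, here is a contrived example of my own (not their actual code) showing why it’s so wrong: malloc hands back raw, uninitialized bytes and never runs the constructor.

#include <cstdlib>
#include <string>

struct Widget {
    std::string name;            // non-trivial member with its own constructor
    Widget() : name("default") {}
};

int main() {
    // Wrong: raw bytes, no constructor ran; touching w->name is undefined behavior.
    Widget* w = static_cast<Widget*>(std::malloc(sizeof(Widget)));
    std::free(w);                // and free never runs the destructor either

    // Right: new allocates AND constructs, delete destructs AND frees.
    Widget* w2 = new Widget;
    delete w2;
    return 0;
}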

OK, enough of the rant already! I can’t really disclose who the code was for, since it is production code used by a reputed organization. Yeah, believe it; I still can’t, but it just goes to show how disconnected the organization is from what can be considered its most valuable asset. No, not the code: the process! It’s not that they aren’t paying for it, they are, but the management is, well, too stupid (for lack of a better word) to understand the implications of not having proper coding discipline. On the flip side, you will find organizations where the coding discipline is so rigid that it rarely allows even simple adjustments to the existing process. When coding practices are made far too rigid, they hamper free thought and ultimately retard innovation. That is the other end of the story, where companies are paranoid about coding standards and don’t realize that inflexible coding practices can in some situations be counterproductive. Standardization is important: having standards helps in many activities, including coding, debugging and code reviews, can ultimately determine the quality of a product, and helps maintain discipline in the team. In the case above, since the team did not maintain any standard, the code just fell apart over time.

However, overdoing it can also lead to problems. Many times people simply don’t understand what a “coding standard” is. I can cite an example here: I was once involved with a team whose coding standards didn’t permit the use of RTTI. The wisdom behind that was, “RTTI breaks polymorphism”. Yes, very true, and RTTI should be avoided whenever possible. However, let’s not be paranoid; in some situations it does help. RTTI, when used subtly, can solve problems that would otherwise require re-engineering the design. Not all class relationships are monolithic, and RTTI can help you in such situations. I am not saying overuse RTTI; I am just saying RTTI has its place. To make a commandment like “Never use RTTI” is just plain lunacy. In our case it led to breaking one class definition into smaller classes, which ultimately led to over-engineering of the solution. A problem which otherwise would have had a very straightforward solution was now distributed across a bunch of classes which had no real use other than to adhere to the “Never use RTTI” rule. Come to think of it, was that even a “coding standard”? This is what I would call an invasion of standards on practices: in an attempt to enforce standards and discipline, the team/project leader went overboard and ultimately intruded on what was a design decision. It’s definitely not a coding standard.
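
To show what I mean by using RTTI subtly, here is a made-up sketch (not the project’s actual code): when only some entities in a hierarchy have a given capability, one contained dynamic_cast is far cheaper than tearing the class design apart.

#include <iostream>

struct Entity     { virtual ~Entity() {} };
struct Trigger    : Entity { };                        // no visual representation
struct Renderable : Entity { void draw() { std::cout << "drawn\n"; } };

// The relationship isn't monolithic: only some entities can be drawn.
void render(Entity* e) {
    if (Renderable* r = dynamic_cast<Renderable*>(e))  // RTTI, used once, contained
        r->draw();
}

int main() {
    Trigger t;
    Renderable r;
    render(&t);   // silently skipped
    render(&r);   // prints "drawn"
    return 0;
}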

Coming back to the code I was working on: the other thing I noticed was an attempt at preemptive optimization. Preemptive optimizations are an attempt to increase the run-time performance of a program during coding and/or design. That’s not to say you should knowingly use bad practices, but it is often a folly to preemptively optimize code based on what you think might be right. Unless you are absolutely sure about what you are doing, you will have wasted your time or, in the worst case, actually made the code slower. What you think might be right is often not the case. One thing I remember from the code I saw was multiplication by 0.5 to halve an integer value, instead of division by 2. The reason? Someone somewhere read that multiplication is faster than division on CPUs. This is downright crazy, because not only did it not optimize the code, it actually made it a whole lot slower, and no one bothered to verify whether it was indeed true. This is the type of noobish one-upmanship propagated by budding programmers who clearly have no real-world experience. A division by 2 produces

mov    edx,DWORD PTR [ebp-12]
mov    eax,edx
shr    eax,0x1f
add    eax,edx
sar    eax,1

whereas a multiplication by 0.5 produces

fild   DWORD PTR [ebp-12]
fld    QWORD PTR ds:0x8048720
fmulp  st(1),st
fnstcw WORD PTR [ebp-22]
movzx  eax,WORD PTR [ebp-22]
mov    ah,0xc
mov    WORD PTR [ebp-24],ax
fldcw  WORD PTR [ebp-24]
fistp  DWORD PTR [ebp-8]
fldcw  WORD PTR [ebp-22]

The code produced by the multiplication is many times slower than the division, since the FPU gets involved. Why did that happen? Simple: the compiler is a lot smarter than you give it credit for. It saw the division by 2 and quickly worked out that the best way to halve the value was to use shift ops. Looks like we have a winner here, and it’s not the programmer. A good optimizing compiler might even manage to clean up the multiplication version, but my point is that there was no need for preemptive optimization in the first place. Modern compilers are pretty smart; a for(int i = 0; i < 4; ++i) will produce the exact same code as for(int i = 0; i < 4; i++). Don’t believe me? Verify it. Oh yes, and please don’t use a compiler from the 1990s and complain; something like the GCC 4.x series or VC 9.0 is what all of us should be using right now. The only way to really optimize anything is via a performance analysis tool like VTune or CodeAnalyst, not by making blind assumptions about what you think is faster. Remember, 10% of the code takes 90% of the time; the other 90% may require no optimization at all.
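
For reference, the two variants boil down to something like this (function and variable names are mine); compile with something like g++ -S and compare the generated assembly yourself:

// Division by 2: the compiler reduces this to the shift sequence shown above.
int halve_div(int x) {
    return x / 2;
}

// Multiplication by 0.5: x takes a round trip through the FPU
// (int -> double -> multiply -> back to int), producing the code shown above.
int halve_mul(int x) {
    return x * 0.5;   // the implicit conversions do the damage
}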

The other thing that got me really annoyed was that the code was poorly commented, or should I say inconsistently commented. No comments on function definitions, inconsistent inline comments, blocks of code without comments at all, algorithm explanations placed out of scope, often in some other .doc file. Just a garbled tub of lard! OK, everyone knows comments are a must, but very few programmers actually understand how good comments ought to be written. Properly commented code will boost productivity significantly. That doesn’t mean you have to over-comment; it’s a case of working smart rather than working hard, quality over quantity. I wanted to write more on this, but I figured the blog entry would get rather long, so instead I will provide a link to relevant info and, of course, to doxygen. Please, people, do everyone a favor and use doxygen-style commenting, please, please! The other thing I advocate is keeping all documentation close by, or rather, easily accessible. Most documents that are created never get read, or never get read at the right time, because they sit in some obscure directory on some machine somewhere. The intention behind creating the documentation is undoubtedly noble, but none of it is of any real help if it is not accessible at the right time. With some rather trivial tweaks to doxygen, you can easily make that happen. We tried something like this and it was a great hit.
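
For a quick taste of what doxygen-style commenting looks like, here is a trivial, made-up function:

/**
 * @brief  Clamps a value to a given range.
 *
 * @param value  The input value.
 * @param lo     Lower bound of the range.
 * @param hi     Upper bound of the range; must satisfy lo <= hi.
 * @return       value limited to the interval [lo, hi].
 */
int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}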

It’s not the first time I have worked on such a piece of code, but I still find it difficult to understand how reputed organizations can work this way. Let the facts be known: once upon a time I too was guilty of writing such code; however, we all learn from our mistakes. Taking a lesson out of every experience is, I think, the key.

Too busy to write?

Sorry, but I have been a little busy for the past two weeks. Too busy to blog, I guess. I have been aiming for a code freeze on the Doofus game, and it’s been hard work getting all the bugs and issues sorted out. I’m going for the final push this time around, to get at least the coding issues out of the way. The good thing is there isn’t too much left on the coding side, so I may be able to push out another beta by next week. Hopefully it will be the last and final beta before (wait, don’t get your hopes up) at least one release candidate before Doofus goes gold. There is still a sizable number of levels to be completed, though.

Unlike the release cycles of other software, the Doofus game release cycles are a little different. I devised a new method after we found the old one rather monolithic for this particular project, and because of the obvious constraints we have as a small team (unavailability of testers at specific times and things of that nature). Traditionally, you have a set of alpha releases of a product, where each alpha release is tested in-house by the developers and/or testers. Bugs are filed against specific releases and fixed during a bug-fixing stage, whenever that may be (it generally differs from project to project). Beta releases are pushed out when alpha releases get stable enough for “general consumption”; a beta is widely accepted as an “almost complete” version of the product, so a beta release often signifies a “feature freeze”. A bug-fixed beta release can become a Release Candidate if the dev team feels confident enough, which eventually goes Gold when everyone is confident enough.

In the case of the Doofus game, things are a little different: a beta release signifies a “feature freeze” for a particular set of features. Let me explain. When we started developing the game, the O2 Engine was the first to come (before we started on the actual game code). The name “O2 Engine” comes from the repository branch of my older game, which was never released because it had too many flaws! I guess a lot was carried over, including primitive libraries and some design decisions and implementations. Anyway, since the new project was a bit complex, and our testing team small and working part-time, we decided to have specific release milestones, each with only a limited set of completely complete features. When I say completely complete, I mean “feature frozen”. Each beta release addressed different features. The first was for engine integration with geometry; the level structure was finalized and resource management was put in place. That first release looked really ugly because the renderer was only partially finished.

The second beta addressed collision systems, basic gameplay elements like triggers and activators, and integration with third-party modules and libraries. The third was for the rendering sub-systems, which was when those screenshots were posted. This release, the fourth, will be for AI (NPCs) and physics, and it marks the end of the game features. The beta still has to go through a bug-fixing stage before I am confident enough to even look at an RC, but it does mark an end to any major modifications to the game code. Many would say these betas are actually alphas, but there are 2 reasons I call them beta releases: a) they are feature-freeze releases, meaning no features are added to or removed from the already tested set; and b) our testers are not full-fledged project members, so white-box testing responsibility falls mostly on the dev team. That said, the beta testers are not just kids beating at the keyboard and have been instrumental in testing the product.

I guess this release has got me a bit excited, and I am working on the website/s at the same time. I have actually started on quite a few blog posts in the past week, but haven’t had the time to polish and/or finish them yet. Maybe this week will see more posts on the blog, I hope.