Hardy is here.

Ubuntu 8.04, code-named Hardy Heron, was released two days ago, and since my internet machine has nothing better to do while I finish up the game, I went for a full system upgrade right away. Ubuntu goes from strength to strength with each release of the OS, and the story with Hardy is no different. I have been using Gutsy for the past 6 months, and with the release of Hardy, I think XP is in serious danger of losing its number one spot on my list of preferred OSes. I just don't boot into XP these days on my internet PC, and my reservations about Vista are well known. Ubuntu at the moment is all you could want from an OS, though some nagging issues clearly remain. I do still use XP for all my programming work; unfortunately, that's where the bulk of the gaming market is. However, I plan to release the Doofus game for Linux once I release a Windows version.

I have praised Ubuntu before on this blog, but it is remarkable how Canonical has consistently managed to do a good job and stick to its motto of providing a simple yet promising Linux distribution that even a common, or should I say non tech-savvy, person could use. They have successfully managed to change the "Linux is for geeks" attitude into something people can look at and use in their everyday lives. Let's be fair, there are others that are fast catching up and can be considered equally impressive, yet Ubuntu has managed to stay ahead of the curve, just that little bit. It's those small things and annoyances that Ubuntu has managed to address successfully that have led to its popularity. Some people would argue that Ubuntu could not have stood so tall if it weren't standing on the shoulders of teams like Fedora, SUSE and of course Debian, without whose support and work Ubuntu would not have been possible. Yes, that's indeed true. However, Ubuntu has made a difference by actually using, and in some cases integrating, the great work done by all these teams and putting together a strong, stable distro that could easily be considered the best of the Linux distros out there.

Little things go a long way. Many people have heard about Linux, probably more than you might think. However, very few have actually used it. Why? It's a headache to partition your disks just to have a Linux partition; an average-joe user dreads things like that. Enter Wubi! Now, some might say having Linux on an NTFS partition is nothing new. It could be done with several other distros long before Ubuntu was around, but how many of the other distros allow it to be done with a simple few clicks? I put the Ubuntu CD in the drive under XP and the first thing that popped up was the Wubi installer. I could install an entire Linux distro in about 4-5 clicks and a couple of restarts of the machine. I am a long-time Linux user, but even I was surprised how trivial it was to install Linux with Wubi. I wouldn't recommend Wubi for the experienced user, but this option is rather cool for a person who has never seen or used Linux before.

However, not every aspect of the distro is flawless. There are some issues that still need work, and it may not be all the distro's fault either. Some things are still amiss with the community as a whole. Technical issues like sound and WiFi are the ones that come to mind; there are problems there that need to be sorted out. Needless to say, such issues are small, and Ubuntu has addressed a lot of them with Hardy. The only real complaint I have is that I still can't seem to get my front headphone jack to work, not with Gutsy and not with Hardy. I guess this is some weird ALSA problem. Fortunately the NVIDIA driver is doing a fine job. I remember a time when hardware vendors didn't seem too interested in Linux, but I must say things are changing for the better. It wasn't that long ago that you couldn't find a decent driver for your graphics card; now most leading distros come bundled with one.

As a parting note, a few suggestions on downloading and upgrading. I would recommend using BitTorrent, since I found it far faster than using the overloaded Ubuntu servers. The CD ISOs can be found on all mirrors. Try this link if you want DVD ISO torrents. Also remember, if you are upgrading from a CD, use the "alternate" version of the install ISO. It is best to use the Update Manager to do an upgrade of the OS; it's the safest method. If you have downloaded the alternate version of the ISO, you can upgrade without having to actually burn a CD. Linux can directly mount ISOs and you don't need any special software to do that. Make a directory under /mnt called "isocdrom" and mount the ISO with

sudo mkdir -p /mnt/isocdrom
sudo mount -o loop -t iso9660 ubuntu-8.04-alternate-i386.iso /mnt/isocdrom/

Then use the command

sudo /mnt/isocdrom/cdromupgrade

to start the upgrade and follow the instructions. Remember to use the full path “/mnt/isocdrom/cdromupgrade” while starting the upgrade.

Triple booting: Vista – Linux – XP using Vista’s boot loader.

Ah it’s Vista again, ๐Ÿ˜€ but this entry is a bit different from my other ‘Vista’ entries. Do you know you can have a multi-boot system via the Vista boot-loader? OK yes, I am using Vista, or rather I am testing the Doofus game out on various Windows versions and Vista just happens to be one of them. No, I am not officially using it (as yet) on dev machines but since we are heavy into testing and as past experience has shown us that Vista is a pretty unreliable OS, we decided for full Vista compatibility testing this time around. However, none of the team has Vista installed on their PCs so we had to go looking for someone who has. We did find a friend with a Dell laptop who had Vista Ultimate but unfortunately the guy had long since formatted the machine and installed dual booted XP-Fedora combination. However after some fair bit of convincing and coaxing I did manage to have him share a partition on his machine for Vista.

The problem was, we had to keep the XP-Fedora setup working. The Vista install, however, overwrites the MBR, so GRUB loaded into the MBR is effectively wiped out, preventing a boot into an already installed Linux partition, and that is exactly what happened. I had anticipated the problem; this is not the first time I have worked with multiple OSes and multiple-boot options. In the past, GRUB had served me well in such situations, so I was pretty confident that even if the MBR were overwritten, it was just a matter of reinstalling GRUB. That's what I did; unfortunately, it didn't work this time around. Maybe because of some problem with chainloader, or maybe Vista doesn't like anything other than its own boot loader; I was unable to find out why exactly Vista wouldn't boot via GRUB. So I tried something else: booting into Linux via Vista's boot loader, and with a bit of hacking it really worked, quite nicely I must say.


FL Studio Rocks!

All of this blog has been tech stuff and more tech stuff. People must be wondering what it is I otherwise do. Actually, as the story goes, back when I was working (not on gaming but at my other programming job), my hobby was working on graphics stuff and modding other games and engines. Funny, my hobby became my job now that I am working on this game, so it was time to take up another one (hobby, that is). What's the next best thing? Creating music, of course! :D It so happens I ran across this software called FL Studio 6 months or so back, started fiddling with it, and was soon hooked.

I used the demo for quite a while and was really impressed by the whole product; impressed enough, I guess, to go and get the full Producer edition of the software. For the clueless, FL Studio is a digital audio workstation (DAW), meaning you can produce music with it. OK, I am still a noob at the whole thing, but even so the software allowed me to create some really good tunes pretty easily. The workflow is not trivial, but you can figure your way around after reading tutorials and following online video tutorials. I am not a good music composer, not by any stretch of the imagination, yet the software allowed me to create pretty decent tracks far more quickly than I had ever thought possible.

The program has near-infinite options for authoring audio, most of which I am completely clueless about. Unfortunately I haven't got much time right now to look at each and every one, but I hope I can get around to understanding them eventually. To someone who has never seen FL Studio, the interface might look intimidating. The sheer number of nuts and bolts on the UI makes one think it would be rather difficult to get things working; however, looks can be very deceiving. While not a walk in the park, a few searches on the internet will have even a total noob creating great audio loops in no time. All you really need to do is visit YouTube, where there are more than enough tutorials, even for advanced stuff. The FL Studio site itself has more than enough vids to get you started.

FL Studio

I kinda liked the software right from the word 'go', and, yes, I am a sucker for music. Unfortunately, all my earlier attempts to produce anything audible with any other software (or, for that matter, hardware) could only be categorized as 'noise'. FL Studio just seemed so intuitive. True, there are other, more powerful products on the market, but I think very few can stand up to FL Studio at its price point, not to mention the lifetime free upgrades the product offers. Cool!

UAC popups were designed to annoy.

I was shocked reading this; well, almost, maybe not quite. It seems Microsoft built the UAC prompts into Vista to annoy people, and on purpose at that. The idea behind it was to force developers to write more secure code that would not cause those UAC pop-ups to appear at all. Oh wow, so this is actually a feature then! Silly me, I was really stupid to think it was just an annoyance. So it's official then: all those popups you encounter in Vista are actually the developers' fault and have nothing to do with MS. Dumb developers! Somebody had better teach them correct programming!

To put things in perspective, it's not uncommon to have such a system in place. Most modern Linux distros have a similar concept, i.e. elevating via su or sudo, or asking for the root password when doing admin tasks. Most distros today disable direct root login altogether, and that's not a bad thing: root/admin accounts are notoriously easy targets, not to mention many users use root as their default login, which is dangerous by any standard. However, I hate UAC because it is far more annoying than, say, sudo. It treats me like an idiot and pops up for the simplest of operations. In most cases it is unwarranted, and the other reason I hate it so much is that the messages can be really cryptic. Most of the time they read like downright disclaimers.

I have no problem with MS trying to make Vista a more secure platform. What MS has done with Vista is actually a very good achievement; all previous MS OSes had a very bad reputation for security, and all said, Vista is pretty good, probably the best of the MS OSes thus far when it comes to security. However, the UAC just goes overboard, and that, actually, is the flaw! Most users are not technically savvy and don't understand what the hell the UAC dialogs are telling them. It's "WTF is that!" when the dialog pops up. Most people I know just turn off UAC because it's annoying. Such systems can get pwned or infected, offsetting all the security Vista provides. A system that was put in place to prevent something ends up doing the reverse!

I don’t buy the argument that annoying UAC popups per say will somehow make software vendors write more secure code. I mean just simply having a UAC will make sure that application writers will take enough care so that their application runs on a default Vista setup. My argument is, there is actually no need for any popups at all. Developers who want their applications to run on Vista, will automatically adhere to the UAC concept. I can understand not allowing an application to write to the registry or preventing files being placed in the system area, and this can all be done subtly by not having a popup. It’s as simple as not allowing a file-write/read in the system directory. A developer is smart enough to understand that the file needs to go somewhere else, or the registry value needs to be placed somewhere other than a restricted area. For heaven sakes, most of us who have programmed on *NIX based systems have been doing this for years now and I don’t remember seeing any popups!

Vote for a Poll!

I am trying out something new on the blog. Some of you may have noticed a mysterious poll appear on the right-hand side. I was browsing through WordPress plugins, came across this neat little poll plug-in, and decided to give it a try. So there it is. Let me know what you think. You may need to turn on JavaScript to enable voting.

Do you like the idea of having polls on this blog?


Misconceptions, discipline and a pragmatic approach.

How disciplined are you in coding? No, seriously, are you a mad code hacker or one of those who take that extra bit of care while coding? Are you paranoid about comments, or do you believe comments are not needed, especially for trivial code snippets? Why am I raising these questions? It so happens I was recently helping someone port code across platforms and came across a piece of code, or rather pieces of code, that were an utter disgrace to coding standards. No comments, headers included in headers, crazy loop-backs across libraries, 10 people writing code all over the place, use of non-standard practices, utter disrespect for memory management and zero design (high level or low level). Can you believe somebody using malloc to create a C++ object! I mean seriously, you could hire monkeys to do a better job than that!
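For anyone wondering why that last one is so bad, here is a tiny sketch (the class and variable names are made up): malloc only hands back raw bytes and never runs the constructor, and free never runs the destructor, so any member that needs construction is left as garbage.

#include <cstdlib>
#include <string>

struct Player {
    std::string name;              // needs its constructor to run!
    Player() : name("doofus") {}
};

int main()
{
    // Allocates raw memory only; Player's constructor is never called,
    // so 'bad->name' is an unconstructed std::string -- undefined behavior.
    Player* bad = static_cast<Player*>(std::malloc(sizeof(Player)));

    Player* good = new Player();   // allocates AND runs the constructor

    delete good;                   // runs the destructor, then frees
    std::free(bad);                // frees the raw bytes; no destructor
    return 0;
}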

OK, enough of the rant already! I can't disclose who the code was for, since it is production code used by a reputed organization. Yes, believe it; it just goes to show how disconnected the organization is from what can be considered its most valuable asset. No, not the code: the process! It is not that they are not paying for it, they are, but the management is, well, too stupid (for lack of a better word) to understand the implications of not having proper coding discipline. On the flip side, you will find organizations where the coding discipline is so rigid that it rarely allows even simple adjustments to the existing process. When coding practices are made far too rigid, they hamper free thought and ultimately retard innovation. That is the other end of the story, where companies are paranoid about coding standards and don't realize that inflexible coding practices can, in some situations, be counterproductive. Standardization is important; having standards helps in many activities, including coding, debugging and code reviews, and can ultimately determine the quality of a product. Having standards helps maintain discipline in the team. In the case above, since the team did not maintain any standard, the code just fell apart over time.

However, overdoing it can also lead to problems. Many times people simply don't understand what a "coding standard" is. I can cite an example here: I was once involved with a team whose coding standards didn't permit the use of RTTI. The wisdom behind that was, "RTTI breaks polymorphism". Yes, very true, and RTTI should be avoided whenever possible. But let's not be paranoid; in some situations it does help. RTTI, when used subtly, can solve problems that would otherwise require re-engineering the design. Not all class relationships are monolithic, and RTTI can help you in such situations. I am not saying overuse RTTI; I am saying RTTI has its place. To make a commandment like "Never use RTTI" is just plain lunacy. In our case it led to breaking one class definition into smaller classes, which ultimately led to over-engineering of the solution. A problem which otherwise would have had a very straightforward solution was now distributed across a bunch of classes that had no real use other than to adhere to the "Never use RTTI" rule. Come to think of it, was that even a "coding standard"? This is what I would call an invasion of standards into practices: in an attempt to enforce standards and discipline, the team/project leader went overboard and ultimately intruded on what was a design decision. It's definitely not a coding standard.
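Just to illustrate the kind of subtle use I mean (the class names here are made up, not the actual project's): one optional capability discovered via dynamic_cast, instead of splitting the hierarchy just to avoid RTTI.

#include <iostream>

struct Entity {                      // base class for everything in the world
    virtual ~Entity() {}
};

struct Savable {                     // optional capability, opted into by a few classes
    virtual ~Savable() {}
    virtual void save() const = 0;
};

struct Door : Entity, Savable {
    virtual void save() const { std::cout << "saving door state\n"; }
};

struct Rock : Entity {};             // most entities have nothing to save

void checkpoint(Entity& e)
{
    // RTTI used sparingly: discover the optional interface at runtime.
    if (const Savable* s = dynamic_cast<const Savable*>(&e))
        s->save();
}

int main()
{
    Door d; Rock r;
    checkpoint(d);                   // saves
    checkpoint(r);                   // silently does nothing
    return 0;
}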

Coming back to the code I was working on: the other thing I noticed was an attempt at preemptive optimization. Preemptive optimizations are an attempt to increase the run-time performance of a program during coding and/or design. While that's not to say you should outright use bad practices, it is often folly to preemptively optimize code by what you think might be right, because unless you are absolutely sure about what you are doing, you will have wasted your time, or in the worst case actually made the code slower. What you think might be right is often not the case. One thing I remember from the code I saw was multiplication by 0.5 to halve an integer value, instead of division by 2. The reason: someone somewhere read that multiplication is faster than division on CPUs. This is downright crazy, because not only did it not optimize the code, it actually made it a whole lot slower, and no one bothered to verify whether it was indeed true. This is the type of noobish one-upmanship propagated by budding programmers who clearly have no real-world experience. A division by 2 produces

mov    edx,DWORD PTR [ebp-12]
mov    eax,edx
shr    eax,0x1f
add    eax,edx
sar    eax,1

whereas a multiplication by 0.5 produces

fild   DWORD PTR [ebp-12]
fld    QWORD PTR ds:0x8048720
fmulp  st(1),st
fnstcw WORD PTR [ebp-22]
movzx  eax,WORD PTR [ebp-22]
mov    ah,0xc
mov    WORD PTR [ebp-24],ax
fldcw  WORD PTR [ebp-24]
fistp  DWORD PTR [ebp-8]
fldcw  WORD PTR [ebp-22]

The code produced by the multiplication is significantly slower than the division, since the FPU gets involved along with an int-to-float-to-int round trip. Why did that happen? Simple: the compiler is a lot smarter than you give it credit for. It saw the division by 2 and quickly worked out that the best way to halve the value was to use shift ops. Looks like we have a winner here, and it's not the programmer. An optimizing compiler might even be smart enough to clean up the multiplication version, but my point is there was no need for preemptive optimization in this case. Modern compilers are pretty smart; a for(int i = 0; i < 4; ++i) will produce the exact same code as for(int i = 0; i < 4; i++). Don't believe me? Verify it. Oh yes, and please don't use a compiler from the 1990s and complain; something like the GCC 4.x series or VC 9.0 is what all of us should be using right now. The only way to really optimize anything is via a performance analysis tool like VTune or CodeAnalyst, not by making blind assumptions about what you think is faster. Remember, 10% of the code takes 90% of the time; the other 90% of the code may require no optimization at all.
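For reference, the two listings above come from something like the following (a reconstruction, since the original variable names are lost; the exact output depends on the compiler and flags):

int halve_by_division(int x)
{
    return x / 2;               // compiler emits integer shift/add code
}

int halve_by_multiplication(int x)
{
    return (int)(x * 0.5);      // forces an int -> double -> int trip through the FPU
}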

The other thing that really annoyed me was that the code was poorly commented, or should I say inconsistently commented: no comments on function definitions, inconsistent inline comments, blocks of code with no comments at all, algorithm explanations placed out of scope, often in some other .doc file. Just a garbled tub of lard! Everyone knows comments are a must, but very few programmers actually understand how good comments ought to be written. Properly commented code boosts productivity significantly. That doesn't mean you have to over-comment; it's a case of working smart rather than working hard, of quality over quantity. I wanted to write more on this, but I figured the blog entry would get rather long; instead I will provide a link to relevant info and, of course, to doxygen. Please, people, do everyone a favor and use doxygen-style commenting, please please! The other thing I advocate is keeping all documentation close by, or rather easily accessible. Most documents that are created never get read, or never get read at the right time, because they sit in some obscure directory on some machine somewhere. The intention behind creating the documentation is undoubtedly noble, but none of it is of any real help if it is not accessible at the right time. With some rather trivial tweaks to doxygen, you can easily make that happen. We tried something like this and it was a great hit.
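For those who haven't seen it, this is roughly what doxygen-style commenting looks like (a trivial made-up function; doxygen then generates browsable docs straight from the source, which is exactly how the documentation stays close to the code):

/**
 * @brief  Clamp a value to the inclusive range [lo, hi].
 *
 * @param  value  The input value.
 * @param  lo     Lower bound of the range.
 * @param  hi     Upper bound of the range; assumed to be >= lo.
 * @return value limited to [lo, hi].
 */
int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}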

It’s not the first time I have worked on such a piece of code. But still I find it difficult to understand how reputed organizations can work this way. Let the facts be known; once upon a time I too was guilty of writing such code, however we all learn from our mistakes. Taking a lesson out of every experience is, I think, the key.

Too busy to write?

Sorry, but I have been a little busy for the past two weeks. Too busy to blog, I guess. I have been aiming for a code freeze on the Doofus game, and it's been hard work getting all the bugs and issues sorted out. I'm going for the final push this time around, to get at least the coding issues out of the way. The good thing is there isn't too much left on the coding side, so I may be able to push out another beta by next week. Hopefully it will be the last and final beta before (wait, don't get your hopes up) at least one release candidate before Doofus goes gold. There is, I guess, still a sizable number of levels to be completed.

Unlike the release cycles of other software, Doofus game release cycles are a little different. I devised a new method after we found the old one rather monolithic for this particular project, and because of the obvious constraints we have as a small team (unavailability of testers at specific times and things of that nature). Traditionally, you have a set of alpha releases of a product, where each alpha release is tested in-house by the developers and/or testers. Bugs are filed against specific releases and fixed during a bug-fixing stage, whenever that may be (it generally differs from project to project). Beta releases are pushed out when alpha releases get stable enough for "general consumption"; they are widely accepted as "almost complete" versions of the product, so a beta release often signifies a "feature freeze" of the product. A bug-fixed beta release becomes a Release Candidate when the dev team feels confident enough, which eventually goes Gold when everyone is confident enough.

In the case of the Doofus game, things are a little different: a beta release signifies a "feature freeze" for a particular set of features. Let me explain. When we started developing the game, the O2 Engine was the first thing to come (before we started on the actual game code). The name "O2 Engine" comes from the repository branch of my older game, which was never released because it had too many flaws! A lot was carried over, though, including primitive libraries and some design decisions and implementations. Anyway, since the new project was a bit complex and our testing team small and working part-time, we decided to have specific release milestones, each containing only a limited set of completely complete features. When I say completely complete, I mean "feature frozen". Each beta release addressed different features. The first was for engine integration with geometry; the level structure was finalized and resource management was put in place. That first release looked really ugly because the renderer was only partially finished.

The second beta addressed collision systems, basic gameplay elements like triggers and activators, and integration with third-party modules and libraries. The third was for the rendering sub-systems, which is when those screenshots were posted. This release, the fourth, is for AI (NPCs) and physics, and marks the end of the game features. The beta still has to go through a bug-fixing stage before I am confident enough to even look at an RC, but it does mark an end to any major modifications of the game code. Many would say these betas are actually alphas, but there are two reasons I call them beta releases: a) they are feature-freeze releases, where no features are added to or removed from the already tested features; and b) our testers are not full-fledged project members, so white-box testing responsibility falls mostly on the dev team. That said, the beta testers are not just kids beating at the keyboard; they have been instrumental in testing the product.

I guess this release has got me a bit excited, and I am working on the website(s) at the same time. I have actually started quite a few blog posts in the past week but haven't had the time to polish and/or finish them yet. Maybe this week will see more posts on the blog, I hope.

A look into Code::Blocks.

This is my second entry on Code::Blocks in the past couple of weeks. I commented earlier on the release of the IDE but refrained from getting too carried away and thus, purposefully, didn't get into details at that time. The reason? Well, we all know how deceptive first impressions can be, especially about something like an IDE. IDEs can be complex beasts, and it can take some time to work things out with them. However, Code::Blocks has been mostly easy to adapt to, at least for me. This is in part because it mostly mirrors how Visual Studio works, and I work on that beast 98% of the time, so adapting to Code::Blocks was not too difficult except for minor differences.

First of all, kudos to the Code::Blocks team. They have done a great job bringing us this editor. It's no mean feat, but they seem to have pulled through against all odds, and that does indeed deserve praise. It's true I had been eagerly awaiting the Code::Blocks release for some time; if you have been reading my blog, you will have seen me mention the IDE a couple of times before. To cut a long story short, I am lazy! I hate writing UI code, and Code::Blocks (wxSmith) does much of that work for you; and yes, I always tend to use wxWidgets for most of my cross-platform UI projects. I wish this release had come a year earlier, when I was working on a C++ project that used wxWidgets for the UI; it would have saved me a sh**t-load of trouble.

Even though the IDE auto-generates UI code, the output is surprisingly clean. Most editors make a mess of code generation, but not so with C::B. The UI code is generated into plain C++ files (.h and .cpp) which you can continue editing like normal text files, provided you don't insert code into the blocks C::B reserves for itself. It reminds me of the days I worked with Visual Studio 6.0 and MFC; if I am not mistaken, VC++ 6.0 used a similar method for code generation. You can even move the code around and C::B will correctly recognize it, again provided the blocks are kept intact. Another good thing is that you can save the resources as .xrc files, which I tend to use extensively with wxPython. For me, Code::Blocks could very well become the de facto editor for working with wxWidgets. Too bad it doesn't have native Python support; that would have been great indeed.
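As an aside, the nice thing about .xrc files is that they can also be loaded at runtime from plain C++. A minimal sketch, assuming a wxSmith-exported resource; the file name "ui.xrc" and the frame name "MainFrame" are hypothetical:

#include <wx/wx.h>
#include <wx/xrc/xmlres.h>

class MyApp : public wxApp
{
public:
    virtual bool OnInit()
    {
        // Register the standard XRC handlers, then load the resource file.
        wxXmlResource::Get()->InitAllHandlers();
        if (!wxXmlResource::Get()->Load(wxT("ui.xrc")))
            return false;

        // Instantiate a frame defined in the .xrc by name.
        wxFrame* frame = wxXmlResource::Get()->LoadFrame(NULL, wxT("MainFrame"));
        if (!frame)
            return false;
        frame->Show(true);
        return true;
    }
};

IMPLEMENT_APP(MyApp)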

So, besides good integration with a UI builder, what more does C::B offer? Other than UI development, can it be used for other serious C++ development? Yes, it very much can. All said, my main interest in the IDE was not how easily you could build a UI; it was to see whether C::B could be used for serious day-to-day development and how well it scales to full-scale projects. There are several other IDEs that look equally impressive until you actually try to get things done with them. So what's the story with C::B? Does it live up to the standards of other professional IDEs? Well, besides some niggling quirks, C::B seems pretty good for full-scale projects. I have a habit of building a "hello world", or rather a "hello notepad", project with any new UI library I encounter; it gives you a fair idea of the library's capabilities. I tried the same with C::B and was pretty happy with the overall experience.

Now for some issues I had with the IDE. First, and probably most annoying, is the fact that the shortcut key assignments are very different from other editors, at least the ones I use. The fact that the IDE doesn't let me set shortcuts like Ctrl-F5 or Shift-F5 is also a hindrance to quick acclimatization to C::B. That's one serious nag! The other thing I noticed is that the debugger can get really slow on Linux systems, though I must say it happened only twice for me and is not a frequent occurrence. On Windows, the Visual Studio 9.0 directories got messed up when I installed VC 9.0 after I had installed C::B; C::B doesn't pick up the VC 9.0 directories when you upgrade or remove older Express versions. Not a big problem though, I did manage to set them manually in the Options section. The debugger is not as extensive as others, but you can generally live with that by adding "watches". Most other issues, and for that matter even these, are rather trivial, I suppose.

OK then, how does the IDE handle projects across platforms? I found almost no trouble porting applications across platforms, at least no issues that were IDE-centric. Then again, my sample application was not all that extensive. Even so, it's worth a mention that, once set up right, the project written under Linux compiled without a single major change on Windows. No mucking around with Makefiles or build systems. Yes, it's true I programmed for compatibility, but still, all I really had to do was switch the compiler settings (to VC 9.0), that's all.

So can C::B be used for production-quality projects? I would have to answer "yes". It is definitely good enough for production code, and if you are working with wxWidgets, I would go so far as to recommend this IDE over others. True, it is not as powerful as Visual Studio, at least not yet, but it still deserves more than praise. For C++ development under Linux, I would recommend this IDE hands down, period!

No more T-Junctions.

I must confess, my original post on optimization of game levels was, well, incomplete and inaccurate. The optimizations were not fully complete: a lot of T-junctions were left behind after optimization (Sandeep was probably the only person to catch that). A T-junction is a spot where a vertex of one triangle lies on the edge of a neighboring triangle without being shared by it, which breaks mesh adjacency and can cause cracks. I managed to remove those too; they were causing a lot of problems with A* navigation and I am glad they are gone! So here are the updated screens. Some extra level geometry has been added, so the screens might not look exactly the same as in the earlier post.
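For the curious, detecting a T-junction boils down to a collinearity-plus-projection test: does some vertex lie strictly in the interior of another triangle's edge? A minimal sketch of that test (my own illustration, not the engine's actual code):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3& a, const Vec3& b)   { Vec3 r = {a.x - b.x, a.y - b.y, a.z - b.z}; return r; }
static Vec3  cross(const Vec3& a, const Vec3& b) { Vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; return r; }
static float dot(const Vec3& a, const Vec3& b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float length(const Vec3& a)               { return std::sqrt(dot(a, a)); }

// True if vertex p lies strictly inside the edge segment ab.
bool isTJunction(const Vec3& p, const Vec3& a, const Vec3& b, float eps = 1e-5f)
{
    Vec3 ab = sub(b, a);
    Vec3 ap = sub(p, a);
    if (dot(ab, ab) < eps * eps)                   // degenerate edge, ignore
        return false;
    if (length(cross(ab, ap)) > eps * length(ab))  // p is not collinear with the edge
        return false;
    float t = dot(ap, ab) / dot(ab, ab);           // projection parameter along ab
    return t > eps && t < 1.0f - eps;              // strictly between the endpoints
}

The fix is then to split the offending edge at that vertex so the triangles share it properly.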


The updated scene (Doofus 3D).


T-Junctions removed.

A tryst with CSS and web-design.

I have been juggling my time these days, working on two things at once. There is, of course, the game, and then I have also been spending some time getting the website up and ready. That also means I am getting my hands dirty with web technologies like CSS and PHP. The two things couldn't be more different. On one hand I have this geometrically intensive, monumental algorithmic monster called the game engine, and on the other there is this woefully deep chasm called web design. It's a fact that I would choose the monster over the chasm any given day (I can slay monsters pretty easily), but that doesn't change the fact that web design is notoriously more difficult than I had anticipated. Yes, I have a good hand with Gimp and Inkscape, and for the record, all of the game interface was created using those two packages; creating most of the art for the web pages is easy! Yes, I am pretty good with most programming languages (if I may say so myself). Yet putting up the website has had me cringing with frustration more than once in the past week.

Talking with friends and colleagues who have been down this road, I always knew web development was a bit quirky, but let me just say this: web design can be crazily non-deterministic! OK, that was a bit much, maybe I am going overboard, but sometimes web browsers do seem to have a mind of their own. It is this quirkiness that makes web development a pain in the rear. Different web browsers can interpret web markup differently, mostly the way they want to, and to me, coming from the stricter discipline of application programming, that is rather distressing. It isn't one particular browser at fault; though some are more unreliable than others, most browsers have some sort of weirdness built into them (check out CSS compatibility and W3C DOM compatibility). IE (Microsoft), as usual, receives the most flak for being hypocritical in its approach to maintaining standards (oh, please don't even get me started on that!!). But what I found surprising was that the story is not much better with the others either.

All said, most problems are no more than a Google search away. Considering the number of people working on web development, there is always some poor unfortunate soul who has battled a problem similar to yours, who has, probably after much deliberation and hair-pulling, found the solution to it, and who, yes, has been kind enough to post it on a website or blog so that those who follow in his footsteps will not falter like he did. Bless him/her! I found Google to be an invaluable resource for web development, and with some degree of query refinement you can pretty much get exactly what you are looking for. Fortunately, when it comes to web development, there are plenty of tutorials and code dumps around to get things working.

Then again, I have decided to take a shortcut and go with Joomla for the site, since it's obviously very easy to understand and saves me a lot of work. Also weighing in was the fact that I have had a pretty good experience with it running my personal site, and it seems a good all-round, solid, free CMS (Content Management System) solution. The fact that Joomla has a very active community and a myriad of plug-ins for almost anything and everything also makes it an attractive choice. I tried other CMSes as well but couldn't get around to understanding them as well as I did Joomla. However, it would seem there is no escape from CSS, and to some extent PHP, since customizing anything within the CMS also means understanding Joomla's own structure.

The work on the website continues. I hope to finish it soon, but it has (as always) been a learning experience. With all said and done, I am a person who loves challenges, and to tell you the truth, I am kinda enjoying it! :D