Code::Blocks 8.02 has been released.

The much-awaited release of Code::Blocks has finally happened. I’ve been waiting for this for, like, forever! It’s time to go get the new IDE. From the initial looks of it, it’s been worth the wait. More later!

Into the mind of a casual gamer.

I haven’t said or ranted about gaming in quite a while now. Quite unlike me; I usually have a lot to say and rant about, especially when it comes to gaming. If you read the last 10 posts you would never guess I was a game developer, so I guess it’s time to put the blog back on track! The absence of any game-related posts has been for a couple of reasons. For one, I haven’t installed any new games of late. Too busy with finishing off Doofus 3D. Neither have I upgraded my aging NVIDIA 6200 and 6600 cards, so anything new like Bioshock or Crysis is out! I have promised myself a brand new 8800 GTS or something newer as soon as I complete and ship the game, and that’s been, like, forever now! On the other hand, for the past 6-8 months I have had the opportunity to actually watch others playing games, and it’s been a learning experience as a game designer. You wouldn’t believe it, but one of them happens to be my mother. She has been doing all the playing while I have been working my ass off to finish my game, though I must say her taste is very (very) different from mine. No Half-Life or Oblivion for her; she kinda enjoys the likes of Zuma and Insaniquarium, casual games in general.

Just watching and being a spectator can teach you volumes about how people enjoy computer games. Casual gamers can be very different from their hardcore counterparts in many ways and very similar in others. A casual gamer is generally not interested in complex gameplay. They are the ones who enjoy simple yet immersive games. Simple puzzles and not overly complex challenges are favored over long storylines and dark alleys. Short levels are also appreciated. For instance, I tried selling a game called Caesar III to my mother, but nope, didn’t work. For those who don’t know, Caesar III was a city-building game, kinda like Sim City, and I loved it when I played it. The game has no gore, so I was pretty sure she would love it. However, the comments I got from her were, “Too complex”, “Very difficult to manage”, “Can’t understand it”. It just goes to show casual gamers tend to favor an easier set of “game rules”. Anything that is more complex than a few keystrokes is a big NO.

Over the past few months I observed and chatted with a few other casual gamers too, mostly friends and relatives who often play casual games. It’s always good to understand the market mentality, especially if you are developing a product catering to it. Most casual gamers I chatted with have played 2D games, but quite a few, I must say, have played 3D games. Interesting, since I am developing a 3D casual game myself. When asked if they would be interested in such a game, the overwhelming response was yes, to an obviously simple 3D game. Most of them said they would be very interested, but were unsure whether they would actually like it. Some even told me they wouldn’t mind learning a few extra keys to play something like Marble Blast. So it means Doofus 3D does have a market after all, but it remains to be seen whether that would indeed translate into sales.

In general I have found casual gamers dislike having to learn a lot of rules, period! Games that involve significant learning effort quickly become unpopular, so I guess games like Oblivion are out. Also not so much appreciated is gore. There is an interesting paradox here, unexpected I must say: many casual players admitted to playing arena games like Quake III. The reason: such games are easy and fast-paced. Most enjoyed them; some said the games were violent but they generally didn’t mind. None of the casual gamers I talked to knew exactly what game genres were, and none cared. Most said they looked at the game cover or screenshots to decide whether they would play a game. 90% said they played Flash games. About half said they don’t understand things like DirectX or OpenGL or h/w acceleration. Nearly all said they had only on-board graphics cards. More than half said they didn’t even know you could install a separate graphics card. More than 75% said they have laptops and do play games on them.

OK, my survey is far from accurate. All I have done is ask a few questions of people I have met in the past year or so; I have in no way used a proper, methodical approach for the survey. However, it does give you an insight into the mind of a casual gamer. As a game developer I tend to forget these important lessons and sometimes do go overboard. Then I usually drag myself back to reality!

W’ow’ubi!

Finally someone did it! I mean, this could have, and should have, been done a long time ago, but no one really bothered. OK, what the hell am I talking about here? Oh well, it seems that with the upcoming release of Ubuntu 8.04, code-named Hardy Heron, you will finally have the option of installing Linux on a PC with Windows installed without tinkering with the file-system and disk partitions. A new installation mechanism called Wubi will allow Windows users to install Ubuntu alongside their Windows OS using nothing but a simple Windows installer. Click and install! No partitions, no data loss, no headaches. It would seem Wubi installs the entire OS into a disk image which sits neatly in a folder on your existing Windows drive. Uninstalling is equally trivial: simply use Add/Remove Programs from Windows, just as you would for any normal uninstaller. Cool!

Linux installations are typically non-trivial, at least for the average home user who doesn’t understand things like partition tables and dual-boot options. Yes, there will be those of us who will continue to have separate partitions for Linux, but most non-tech-savvy users will skip such headaches if they can. If Wubi does achieve what it says it can (it still remains to be seen whether it indeed can), anyone will be able to install and use Linux just like they would any other software. It’s still an open question whether this will mean more people signing on to Linux. Though debatable, Linux as an OS is still not as “user-friendly”, or as some would say “Windows-friendly”, as Windows. Yet this is an interesting development indeed!

Wha’zup with Microsoft?

In an almost uncanny act, which has surely taken a lot of people by surprise, Microsoft very recently pledged that it would open up its applications and platforms to allow for greater interoperability, even pledging to work with open-source communities and developers. All this seems to be part of its compliance with the EU anti-trust decision. Interesting, very interesting indeed. Microsoft has been getting cozy with open-source communities for some time now, but this is probably more than anyone had expected. However, it remains to be seen whether Microsoft genuinely believes in open standards. I was reading through the published article and there are some things in the fine print that still seem a little bit vague (or maybe I am limited by my intelligence). Some things do appear rather subjective. A phrase like “low royalty rates” is open to debate. What does “low” mean? How low is “low”? Ah, and what’s a “covenant”? It will be interesting to see how these new initiatives are adopted. Does it mean all projects that opt for interoperability with Microsoft products end up paying royalties in some form to Microsoft? Reading some portions of the newsletter sure seems to suggest so.

However, with everything said, it will be interesting to see and explore the possibilities for FOSS projects under these new initiatives by Microsoft. For whatever it’s worth, it’s definitely a step in the right direction. So let’s hope this new gesture by Microsoft does pave the way for better interop solutions between FOSS and MS applications.

Virtualization: What is it exactly?

There has been a lot of buzz about virtualization, and a lot has been said about it everywhere. Recently everybody and anybody who has something to do even remotely with operating systems has been talking about it. So what exactly is it? And more importantly, what benefits do you gain from virtualization? Is it even that useful? A lot of questions; let’s look at them one by one. First, let’s look at what virtualization actually is. By its very definition, virtualization means “abstraction of computer resources”. To put it simply, it means sharing the physical resources of one system across several or multiple logical resources. In its true sense virtualization is a broad term and can mean different things; CD/DVD emulation software, for instance, can also be called a form of virtualization. However, virtualization is often mixed up and used interchangeably with the term hypervisor. A hypervisor is a virtualization platform, meaning it is a platform on which you can run multiple operating systems simultaneously, often under one parent or host operating system, i.e. under one operating system kernel. Every operating system under a hypervisor runs inside its own virtual world or virtual computer (if you could put it that way), completely isolated from the host or any other operating system that might also be running under the hypervisor.

[Screenshots: ReactOS running under QEMU; Ubuntu 7.10 (Gutsy Gibbon).]

So what is the real use of this whole virtualization thing? Several, and as a programmer/developer even more. My first experience with virtualization was with VMware, about 4 years ago when I was working on a freelance project. I used VMware to port an application from Windows to RH Linux. I did a lot of monkeying around with the virtualization platform back then, and soon realized that such technology can be put to good use. Besides its obvious benefits for a cross-platform developer, virtualization can also be used very effectively for other purposes, even more so now that we have machines with multiple cores. Since every OS runs in its own separate world, you can pretty much do anything with it. To name a few examples: how about hacking the Linux kernel, or maybe writing a device driver, or trying your hand at screwing up the file system? Crazy things you would never try on your real machine are all possible using virtualization software.

On a more serious note though, besides giving you an environment to screw around in, what more benefits does virtualization provide? The one that is probably of most interest to an enterprise is utilization. Underutilized servers and machines can be made to run multiple virtual machines, each taking more responsibility and more load. People have often cited the benefits of virtualization in saving physical resources like hardware, power consumption, management and infrastructure, all leading to obvious cost benefits for an enterprise. With machines getting more and more powerful, this could well be the norm in the future. Utilization could well be the single most important reason a technology like virtualization sees so much interest.

Also of interest is the ability of virtual systems to be used as secure, sandboxed testing environments. They could be (and already are being) used for software QA, analysis, and controlled-environment test beds for applications and untested software. Tech support can simulate client-specific scenarios on virtual systems that mirror real systems. Virtualization is also excellent for carrying out performance analysis on systems and running complex debugging scenarios that would be very difficult on normal systems.

The next use will probably be of more interest to the average joe/jane net surfer. If you haven’t already guessed it, virtualization could be used to safely surf the web and keep your system free of the viruses and worms that infest web pages, especially the “good ones” (you cheeky monkey, you!). Even if your virtual system were to get infected or hacked, it’s just a matter of reloading the entire virtual OS and everything is wiped clean! No cookies, no browsing history, no nothing! No one logging into your computer will ever know which sites you visited or what you did. Sounds interesting, doesn’t it? 😉

So where can you get your hands on something like this? VMware offers free virtualization software which you can download and use, and I would recommend it hands down, since it is probably the best one I have used. Besides that there are also free ones like QEMU, which I use quite often. If you are on Linux, also check out KVM, Xen and lguest, though I haven’t used them myself. Then there are commercial ones offered by Microsoft, VMware and others that are a lot more powerful, which you are welcome to try.

[Edit] Microsoft Virtual PC is another free virtualization option for Windows.

Spying at the workplace. The things you should know.

I was intrigued by an article I read in a magazine recently. The article was about employers spying on their employees using the computers and networks within an organization. A tad bit disturbing, I must say, though not completely unexpected. All the previous organizations I have worked for were paranoid about security, so I guess I kinda always knew “the boss was looking” while I was working. The article gave some really good insight into the whole matter, apparently good enough to have piqued my curiosity, at least enough to write about it.

How do they do it? For those who aren’t tech-savvy, let’s look at the technical details. Apparently the easiest and most powerful way to spy on anyone using a computer is by installing a key/data logger. There are hardware and software keyloggers. The hardware ones can easily be seen just by peeking around your computer’s back; they sit between the keyboard plug and the computer. A software keylogger is a program that logs every key you hit on your keyboard, so all your passwords and websites are basically open for scrutiny. A keylogger program is surprisingly easy to write; a programmer like me could probably do it in about a day or so. Google around and you can probably get a dozen free ones on the internet. A data logger is something similar, but more advanced: it maintains a history of data interactions, including keyboard and mouse activity. Data loggers are often more difficult to write. However, don’t be mistaken, such programs are available. What’s more, such programs are available from professional software development companies focused on security, and you might have one running on your PC right now! No, most key/data loggers are not caught by anti-spyware or antivirus programs. Don’t bother trying. So if there is one running on your work PC, there is a good chance you have no knowledge of it.

Spying using keyloggers is pretty easy; however, that is not the only way an organization can spy on its employees. Your email is also subject to scrutiny. If you are using your work email address to send personal messages to friends and family, or maybe sending insults about your boss to your friends, there is a good chance they have already been read. Don’t expect to get a raise very soon! It’s child’s play to archive emails from a message queue on a mail server, and those can be read during weekends or holidays. Forget about even reading all of them; the system can be configured to run an automatic script to isolate mails that contain specific words or phrases. Your laptop isn’t spared either. If you do dock it in when you come to work, it leaves a door open for the sysadmin to log on to your machine and install whatever he/she wants, and if for some reason it has had a recent unexpected trip to the IT department, you probably should be asking the question, “why?”, right about now!
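Just to show how trivial that kind of keyword filtering is, here is a minimal Python sketch. Everything in it is hypothetical: the archive folder, the file layout and the watch list are made up purely for illustration, and it is not taken from any real monitoring product.

```python
# Hypothetical sketch: scan a folder of archived mail files for flagged phrases.
# The folder path and watch list below are invented for the example.
import os

WATCH_LIST = ["confidential", "resume", "salary", "my boss"]
ARCHIVE_DIR = "/var/mail/archive"   # assumed location of archived messages

def flagged_messages(archive_dir, watch_list):
    """Yield (filename, matched phrases) for every mail containing a watched phrase."""
    for name in os.listdir(archive_dir):
        path = os.path.join(archive_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "r", errors="ignore") as f:
            body = f.read().lower()
        matches = [w for w in watch_list if w in body]
        if matches:
            yield name, matches

if __name__ == "__main__":
    for name, matches in flagged_messages(ARCHIVE_DIR, WATCH_LIST):
        print("%s flagged for: %s" % (name, ", ".join(matches)))
```

A sysadmin could schedule something like this to run nightly; the point is that it takes minutes to write, not days.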

Pretty much anything can be monitored, from the sites you visit to the friends you chat with. If you think your organization is a little bit overprotective about security and is probably doing this, rest assured, it is! So the ethical question to ask is: are such organizations violating employee privacy? Is spying on employees even allowed? You will be surprised to know the answer. It is! There is no explicit law that forbids employers from spying on employees. Privacy laws are pretty murky when it comes to things like email and chatting. True, no one can barge into your house and violate your privacy there, but your workplace isn’t your house, and anything said, written or emailed there doesn’t explicitly fall into the category of personal privacy. There can also be serious legal complications associated with seemingly innocent practices. For example, forwarding pornographic material in an email can make you the subject of a sexual harassment lawsuit. Even seemingly innocent jokes, the kind generally forwarded at the click of a button, can be taken as racist or radical remarks. Copying or emailing copyrighted material can land you in jail, even if you were to do it innocently.

As technology advances, so does the need for organizations to protect themselves. Having personal data of an employee on office machines can lead to complications for the organization. With viruses and data mining rampant, organizations are left with little choice but to have ever more stringent monitoring policies. I for one believe organizations should make their policies clear. If they do want to monitor their employees, then there is no harm in letting people know. Being spied on secretly is not appreciated by anyone; it leaves people rather distrustful of the management and the organization. However, rest assured, spying at the workplace is all too common and is here to stay. So the next time you have an urge to forward that great joke or poke fun at your boss, remember “Every breath you take, Every move you make …..”

Designing user-friendly GUIs.

What is a good GUI? Or should I ask, what makes a GUI good or bad? Is it the snazzy looks, the cool effects, transparent windows, bright colors, or is it really something else? Why do people hate some GUIs and love others? Or is it just a matter of personal taste? Some things to really think about, I would say. GUI is actually a very broad term and can mean different things to different people. For a web site, a GUI could mean the web page. For an application, it could mean the application interface. For an everyday computer user it could very well mean his windowing system. A GUI is a graphical interface to a system, and the system can be, well, anything really. Though this entry is more about application GUIs, most points discussed here are also directly or indirectly valid for other GUIs.

Most novice developers working on GUIs never really understand, or for that matter appreciate, the meaning of the phrase “good GUI”. It is often confused with “good-looking GUI”. That, however, is not the case. Yes, it is true, a good GUI must look good; you or the users of your application wouldn’t want to use a bad-looking GUI. However, the single most important thing any GUI must accomplish is user-friendliness. How many times have you seen a good application get a bad reputation just because it was difficult to use? I would say too many times. Sometimes a GUI can make or break your application, and oftentimes a bad GUI gives a bad reputation to an application that would otherwise have had much wider appeal. During my initial years as a developer I worked a lot with GUI libraries, everything from MFC and Qt to FLTK and wxWidgets, and learned most of the GUI design principles the hard way, mostly by trial and error.

If you Google around for GUI design principles you will no doubt get a bunch of sites that give you some very good material on the topic. However, GUI design is more about experience. Just saying “You must know your user”, or “Empower the user”, or “Know the user’s perspective” doesn’t really cut it. True, most of that advice is accurate and you are more than welcome to read it. However, these are very broad phrases and often lead to nothing conclusive. As any good engineer knows, data and facts are what matter. So let’s look at some of the hits and misses in GUI design.

a) Overly bugging the user. This is probably the worst mistake any GUI can make. Remember those annoying pop-up dialogs in Vista! “Do you really want to move this file”, “Do you want to install this program”, and more. Nope, please don’t do that! It will make any interface extremely unpopular. The main culprits are modal dialogs, one of the most overused GUI features. I would go so far as to say, avoid modal dialogs except for the rarest of rare cases. Only use them to warn the user about data loss, nowhere else! Even in situations where they are used, allow the user some way to turn off the warning message boxes.

b) Use tooltips and modeless pop-ups, and use tooltips judiciously. I can’t understand why so many UIs use modal dialogs to present information. Tooltips are a great way to show information and can be extremely useful in enhancing the GUI. Use tooltips instead of modal dialogs to convey simple bits of information. In one application I replaced informational modal dialogs with tooltips that would fade away after about 10 seconds or so; they were an instant hit. Azureus comes to mind; the program uses a similar feature to warn users. Also, where possible, try to replace modal dialogs with modeless ones. Again, modal dialogs are not all that great even for user input. Compare the search (Ctrl+F) in Internet Explorer 6 and the search in Firefox. Which do you think is more intuitive?

c) Another annoyance is leaving the GUI unresponsive. If there is a lengthy task, please delegate it to a worker thread (see the first sketch after this list). See to it that there is always a way for the user to cancel a lengthy and intensive task. Never, ever leave the user in a state where he/she can’t interact with the UI. It’s really annoying.

d) Shortcut keys are such a help; I tend to always use them. A GUI application must have shortcut keys for the most commonly used tasks and must be consistent in how it uses them. For example, never use Ctrl-Z for pasting something; Ctrl-Z is always the “Undo” command in all Windows applications, at least correctly designed ones, so stick to it. Not all tasks need shortcut keys. Actually, it’s not always prudent to have shortcut keys for commands that involve data loss. Also try to keep shortcut keys spaced apart. OK, another example here: a friend of mine was working on an application where he assigned Ctrl-S to save and Ctrl-D to delete. A few months later he got a nasty mail from a client asking him to remove Ctrl-D, since the client used to accidentally hit the delete shortcut while saving. Overly complex shortcuts and key combos like “Ctrl+Alt+F10” are not well appreciated either.

e) “Empowering” GUIs are very, very popular. GUIs that allow the use of both hands can quickly become a hit. Consider the copy-paste command. Initially the combo used to be Ctrl-Insert and Shift-Insert; however, for a right-handed person that would mean lifting his hand from the mouse to operate the shortcut keys. So two new shortcuts were introduced: Ctrl-C and Ctrl-V. Now the user can select content with the mouse and copy-paste with shortcuts using his left hand. For those who don’t know, Ctrl-Insert and Shift-Insert still work in Windows Notepad and most other GUI editors under X, so left-handed people can still take good advantage of them. Such designs can really go a long way because they empower the user to work rapidly.

f) Good GUI applications strive for “Undo” and “Redo”. This is easier said than done. It requires a lot of thought and pre-planning in the design stage of an application to have fully functioning Undo/Redo. However, this is a must in today’s GUI design; you can’t escape it. Hint: if you want Undo/Redo in your application, apply the Command and Memento design patterns to your designs (see the second sketch after this list).

g) Toolbars are friends. Toolbars are a must for all but the most trivial applications. True, some applications are too small to have toolbars, but the rule of thumb is: if an application has menus, it should also have toolbars.

h) Another thing I hate is deeply nested menus. If an application has to have them, then it must have a separate toolbar for those menus. Deeply nested menu items can be difficult for the user to locate. I have seen this, maybe not too often, but sometimes applications do have commonly used functionality deeply embedded in the menu hierarchy. Not too well appreciated, I must say.

i) Applications that fail to set the tab order on dialogs can quickly become unpopular, especially with laptop and notebook users. The same goes for accelerator keys on input widgets. I have seen really good, professional applications miss out on this point.

j) Good GUI designers try to conserve space, but never go overboard. A GUI should not cram widgets together; however, intelligent choices when selecting widgets can go a long way. For example, oftentimes a combo box/choice box will require far less space than a list box and provide equivalent functionality.

k) Readability is another factor. Sometimes snazzy-looking applications with all those “skinnable” interfaces can make a mess of it. I generally try to avoid skins, custom fonts and custom colors. It is best to use the system default colors and fonts, since such GUIs scale across systems that may have different settings and hardware setups. It is also best to go for flow layouts or sizer-based GUIs. This allows for full compatibility across platforms and with different windowing systems.
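To make point c) a little more concrete, here is a bare-bones sketch in Python/Tkinter. It is my own quick illustration, not code from any application mentioned above, and the widget layout and timings are arbitrary: the lengthy task runs on a worker thread, the UI thread polls a queue for status updates, and the user can cancel at any time.

```python
# Sketch for point c): keep the GUI responsive by delegating a lengthy task
# to a worker thread, and always give the user a way to cancel it.
import queue
import threading
import time
import tkinter as tk

class App:
    def __init__(self, root):
        self.root = root
        self.cancel_flag = threading.Event()
        self.messages = queue.Queue()          # worker -> UI thread messages
        self.status = tk.Label(root, text="Idle")
        self.status.pack(padx=20, pady=5)
        tk.Button(root, text="Start long task", command=self.start_task).pack(pady=2)
        tk.Button(root, text="Cancel", command=self.cancel_flag.set).pack(pady=2)
        self.poll_messages()

    def start_task(self):
        self.cancel_flag.clear()
        self.status.config(text="Working...")
        # Never block the UI thread; hand the heavy work to a worker thread.
        threading.Thread(target=self.long_task, daemon=True).start()

    def long_task(self):
        for _ in range(50):
            if self.cancel_flag.is_set():
                self.messages.put("Cancelled")
                return
            time.sleep(0.1)                    # stand-in for real work
        self.messages.put("Done")

    def poll_messages(self):
        # The UI thread periodically checks for updates from the worker.
        try:
            self.status.config(text=self.messages.get_nowait())
        except queue.Empty:
            pass
        self.root.after(100, self.poll_messages)

if __name__ == "__main__":
    root = tk.Tk()
    root.title("Responsive GUI sketch")
    App(root)
    root.mainloop()
```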
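And for point f), a stripped-down sketch of the Command half of that advice. Again this is just to illustrate the idea rather than any particular framework; the document model and command names are invented for the example.

```python
# Command-pattern sketch for point f): every user action knows how to execute
# and undo itself; the application keeps undo and redo stacks of commands.
class AppendTextCommand:
    def __init__(self, document, text):
        self.document = document              # document modelled as a list of text chunks
        self.text = text

    def execute(self):
        self.document.append(self.text)

    def undo(self):
        self.document.pop()                   # remove the chunk we appended

class CommandHistory:
    def __init__(self):
        self.undo_stack = []
        self.redo_stack = []

    def do(self, command):
        command.execute()
        self.undo_stack.append(command)
        self.redo_stack.clear()               # a new action invalidates old redos

    def undo(self):
        if self.undo_stack:
            command = self.undo_stack.pop()
            command.undo()
            self.redo_stack.append(command)

    def redo(self):
        if self.redo_stack:
            command = self.redo_stack.pop()
            command.execute()
            self.undo_stack.append(command)

if __name__ == "__main__":
    document = []
    history = CommandHistory()
    history.do(AppendTextCommand(document, "Hello "))
    history.do(AppendTextCommand(document, "world"))
    history.undo()
    print("".join(document))                  # -> "Hello "
    history.redo()
    print("".join(document))                  # -> "Hello world"
```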

These are probably only some of the things that can go wrong with GUI design. There may be more, but maybe I got a little bit tired of typing, and maybe you have too (after reading such a long post). I will just leave you with this link. I think some very good points are addressed there. Have a peek if you are interested.

My programming language is the best!

Very recently I was reading an article on a pretty well-known site where the author just downright insults a programming language for what it is. I refuse to quote the site for good reason, but that doesn’t really matter. The thing that really got to me is that the author gives downright mundane reasons as to why he hates a language, Java in this case. I have seen this sentiment all throughout the industry, where many (not all) programmers think the programming language “they” work in is the best and all others are either sub-par or downright foolish! I don’t intend to start a flame war on this blog about programming languages, far from it; I will try to debunk some commonly held misunderstandings and misconceptions about programming and programming languages. And yes, I am not going to directly touch on any particular language while doing so.

The biggest misconception of all, held by many, is: “You have to learn a particular language (C++, Java or any other) to become a good programmer.” The answer to that is… no! Programming is all about solving problems. If you can do that effectively, then you can become a good programmer using any programming language. The best programmers are the ones who solve difficult problems subtly and elegantly. The choice of language does not determine the quality of a programmer.

The second misconception, generally shared by a lot of budding programmers, is: “I have to learn the most difficult programming language to get an edge over others in the industry.” Not true either. Some of the best programmers I know started out with Perl, Python and shell scripting, which are some of the easiest languages around. In fact, now that I think about it, that experience actually gave them an edge over others, because they learned the holy grail of programming, the famous KISS principle (Keep It Short and Simple). If you are starting out, pick the easiest language you can understand, learn it completely, and become a good programmer with that language.

The next famous one is: “I can do anything and everything with the programming language I know. Learning or even looking at another language is a waste of time. My programming language is the best!” This is probably the most dangerous one of all, because this sentiment is shared by some senior programmers. Programming languages are like tools, and as with any good tool chest, the more varied the tools you have in it, the better you will be at doing a particular job. Some jobs will require a hammer to force in a nail, others may require a spanner, and some others may require a drill. It’s the same with programming languages. Many languages are designed to solve specific, sometimes domain-related problems, and most of the time they do an excellent job at that, often better than languages that were not specifically designed to address those issues. A good (senior) programmer will often know 3, 4 or even more languages, and will use different languages for different projects when needed. Never be afraid to learn a new programming language; in fact, strive for it.

And another one: “A language doesn’t have XYZ feature that is present in my programming language. Hence it is incomplete and therefore unsuitable for programming.” This is actually a corollary to the one above, and just as flawed. If a language doesn’t have a feature, then there must be a good reason for omitting it in the first place: the designers, given the nature and scope of the language, felt that the feature was unnecessary. Again, if you feel the need for that particular feature in your project, then it’s time to re-evaluate the choice of programming language for that project. It doesn’t, however, mean the programming language in question is in any way bad or, for that matter, inferior.

Next misconception: “Programming is too complex; it’s for geeks and really clever people!” Nope! True, some programming languages are more complex than others, but some are really, really simple. Check out Python or Perl (not taking any sides here, you can just as easily use others as well). The rule of thumb is: if you can use a spreadsheet package (like Excel), you can learn to program.

Another one that creates a lot of debate is, obviously, the speed of a language. The misconception is: “XYZ language is faster than PQR language.” Not entirely accurate. A language has no definition of speed. A language is never fast or slow; a program written in a language can be fast or slow. The speed of a program depends on a lot of factors and can be a tricky thing to evaluate. It has more to do with correct design decisions and not so much with the language. There is a phrase I have coined for this over the years of debating with people. It goes like this: “The speed of a program directly depends on the intelligence and experience of the programmer.” The fact is, a good programmer will produce fast code in any language. It is true that well-written code in some languages can be faster than similarly well-written code in other languages, but unless you are working on the bleeding edge of technology, this doesn’t really matter.
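Here is a tiny, contrived Python sketch of what I mean (the numbers are arbitrary and this is not a formal benchmark): the same membership test written two ways in the same language differs enormously in speed, purely because of a data-structure decision.

```python
# The same task, two designs, one language. The data-structure choice,
# not the language, decides the speed here.
import time

N = 20000
items = list(range(N))
lookups = list(range(0, N, 2))

# Design 1: linear search through a list, O(n) per lookup.
start = time.time()
hits = sum(1 for x in lookups if x in items)
print("list lookups: %.2f s, %d hits" % (time.time() - start, hits))

# Design 2: hash lookup in a set, O(1) per lookup.
item_set = set(items)
start = time.time()
hits = sum(1 for x in lookups if x in item_set)
print("set lookups:  %.2f s, %d hits" % (time.time() - start, hits))
```

The second version is typically orders of magnitude faster, and the equivalent mistake can be made, and fixed, in pretty much any language.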

A similar misconception is: “XYZ language is more powerful than PQR language!” Not quite right. It entirely depends on the domain and the task at hand. Some languages are specifically designed for certain problems (Erlang comes to mind) and can be extremely effective at addressing them, often orders of magnitude better than others. Others are made simple on purpose, just so you can do very rapid programming with them. Some programming languages go the extra mile so that redundant and tedious programming tasks and bugs can be eliminated. Again, I don’t intend to take sides, but for all rapid programming tasks I use Python because of its obvious benefits. However, there are other languages, like C#, which are equally good. Every language has strengths and weaknesses; no language is all-powerful.

This is where I stand on the whole issue of programming languages: I consider most programming languages, at least the well-known ones used in the industry, as worth learning. You can never be a master of all; it’s just impossible. However, you can certainly be good at a couple. Learning more languages puts you in a better position to choose the right language for the right problem (or task). It also lets you understand why these languages were designed the way they were, and thus makes you a better programmer in the end. Most production-quality languages aren’t bad, incompetent or useless; don’t say they are. (I am talking to experienced programmers here!) Some languages may be inappropriate for a particular task, but harsher words are unwarranted!

Ultraportables.

If you haven’t already heard of the Eee PC (website), then you have probably been on the wrong side of the internet. The subnotebook has left other, more powerful machines biting the dust when it comes to sales figures. The Eee PC seems to be the latest ‘in thing’ when it comes to cool gadgets and might even rival the iPod in the future. I have actually never seen or used an Eee PC first-hand, but I was browsing through its specs just to see what the gadget holds in its guts. The specs look pretty impressive for its size, but the point of interest as far as I am concerned is the graphics chipset. The Eee PC houses an Intel GMA 900 graphics chip, a little bit disappointing I must say; I would have preferred an NVIDIA or ATI chip over an Intel one. The GMAs have bugged me all throughout the development of the game and I am not particularly fond of them. However, given the size and the target audience of the Eee PC, it isn’t too bad.

My primary interest was of course to see the capability of the Eee PC as a gaming platform, especially since ASUS has explicitly stated in its motto that one of the “e”s in Eee PC stands for “Easy to play”. From the raw spec data, a Celeron processor, 512 MB of RAM and an Intel graphics chipset make a pretty ordinary setup even for casual games. Anything that is even slightly heavy on the graphics side will fail to run on the Eee PC. It may run last-generation 3D games like Quake 3 or Half-Life, but I have serious doubts about anything of a later generation running on this setup. Also, at least 4 GB of disk space is, I think, a must, though you could live with 2 GB.

Another interesting thing is that the setup comes bundled with Linux (Xandros) installed. Logical, considering some other OSes today are particularly heavy on systems, and a proprietary OS would have driven up the cost of the machine. From what I could find, the Linux OS runs KDE as its default windowing system. Surprising, since KDE is among the most resource-heavy (when compared to Gnome and Xfce). I would have preferred Xfce since it is very light on system resources; however, KDE looks much better and snazzier. I guess installing Xubuntu on the Eee PC would do wonders. You can apparently install other OSes on the Eee PC as well. Coming back to gaming: Linux has very little to no market for games, so users of the Eee PC might be limited to the few games that have native Linux support, but you always have Wine to run all those Windows games.

All in all, a good setup to carry around for the common casual user who wants to surf the net and chat with friends. The popularity the system has achieved in such a short time just goes to show how solid a setup it is. Also, so many people having Eee PCs with Linux installed means Linux will finally get the attention it deserves. As someone on some internet board said, and I quote, “… and people will finally see, they can run thousands of applications… for free!”