Designing user-friendly GUIs.

What is a good GUI? Or should I ask, what makes a GUI good or bad? Is it the snazzy looks, the cool effects, transparent windows and bright colors, or is it really something else? Why do people hate some GUIs and love others? Or is it just a matter of personal taste? Some things to really think about, I would say. GUI is actually a very broad term and can mean different things to different people. For a web-site, a GUI could mean the web-page. For an application, it could mean the application interface. For an everyday computer user it could very well mean his windowing system. A GUI is a graphical interface to a system, and the system can be, well, anything really. Though this entry is more about application GUIs, most points discussed here are also directly or indirectly valid for other GUIs.

Most novice developers working on GUIs never really understand, or for that matter appreciate, the meaning of the phrase “good GUI”. It is often confused with “good-looking GUI”. That, however, is not the same thing. Yes, it is true that a good GUI must look good; neither you nor the users of your application would want to use a bad-looking one. However, the single most important thing any GUI must accomplish is user-friendliness. How many times have you seen a good application get a bad reputation just because it was difficult to use? Too many times, I would say. Sometimes a GUI can make or break your application, and often a bad GUI gives a bad reputation to an application that would otherwise have had much wider appeal. During my initial years as a developer I worked a lot with GUI libraries, everything from MFC and Qt to FLTK and wxWidgets, and learned most of the GUI design principles the hard way, mostly by trial and error.

If you Google around for GUI design principles, you will no doubt get a bunch of sites with some very good material on the topic. However, GUI design is more about experience. Just saying “You must know your user”, or “Empower the user”, or “Know the user’s perspective” doesn’t really cut it. True, most of that advice is accurate and you are more than welcome to read it, but these are very broad phrases and often lead to nothing conclusive. As any good engineer knows, data and facts are what matter. So let’s look at some of the hits and misses in GUI design.

a) Overly bugging the user. This is probably the worst mistake any GUI can make. Remember those annoying pop-up dialogs in Vista? “Do you really want to move this file?”, “Do you want to install this program?”, and more. Please don’t do that! It will make any interface extremely unpopular. The main culprits are modal dialogs, one of the most overused GUI features. I would go so far as to say: avoid modal dialogs except for the rarest of rare cases. Only use them to warn the user about data loss, nowhere else! Even in situations where they are used, allow the user some way to turn the warning message boxes off.

b) Use tool-tips and modeless pop-ups, and use tool-tips judiciously. I can’t understand why so many UIs use modal dialogs to present information. Tool-tips are a great way to show information and can be extremely useful in enhancing a GUI. Use them instead of modal dialogs to convey simple bits of information. In one application I replaced informational modal dialogs with tool-tips that would fade away after about 10 seconds or so, and they were an instant hit. Azureus comes to mind; the program uses a similar feature to warn users. Also, where possible, try to replace modal dialogs with modeless ones. Modal dialogs are not all that great even for user input. Compare the search (Ctrl+F) in Internet Explorer 6 with the search in Firefox. Which do you think is more intuitive?
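To make that concrete, here is a minimal sketch of the pattern using Tkinter (the same idea carries over to wxWidgets, Qt or any other toolkit). The fade effect from my anecdote is approximated with a plain timed dismissal, and names like show_transient_message are my own inventions for illustration:

```python
import tkinter as tk

def show_transient_message(parent, text, timeout_ms=10000):
    """Modeless, self-dismissing notification instead of a modal dialog."""
    note = tk.Toplevel(parent)
    note.overrideredirect(True)          # no title bar; behaves like a tool-tip
    note.attributes("-topmost", True)
    tk.Label(note, text=text, bg="#ffffe0", relief="solid",
             borderwidth=1, padx=8, pady=4).pack()
    # position near the bottom-right corner of the parent window
    parent.update_idletasks()
    x = parent.winfo_rootx() + parent.winfo_width() - 220
    y = parent.winfo_rooty() + parent.winfo_height() - 60
    note.geometry(f"+{x}+{y}")
    # dismiss after the timeout (a real implementation could fade it out)
    note.after(timeout_ms, note.destroy)

root = tk.Tk()
tk.Button(root, text="Save",
          command=lambda: show_transient_message(root, "Document saved.")
          ).pack(padx=40, pady=40)
root.mainloop()
```

The point is that the user is informed but never blocked; there is no OK button to dismiss before getting back to work.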

c) Another annoyance is leaving the GUI unresponsive. If there is a lengthy task, please delegate it to a worker thread, and see to it that there is always a way for the user to cancel a long and intensive task, as in the sketch below. Never, ever leave the user in a state where he/she can’t interact with the UI. It’s really annoying.
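A toolkit-agnostic sketch of the pattern, assuming the GUI event loop stays in the main thread and a Cancel button merely sets a flag:

```python
import threading
import time

cancel_requested = threading.Event()

def lengthy_task():
    """Worker: does the heavy lifting, checking the cancel flag between work units."""
    for step in range(100):
        if cancel_requested.is_set():
            print("Task cancelled by user.")
            return
        time.sleep(0.1)  # stand-in for one unit of real work

    print("Task finished.")

worker = threading.Thread(target=lengthy_task)
worker.start()

# In a real GUI the event loop keeps spinning here; a Cancel button
# handler would simply call cancel_requested.set() and return.
time.sleep(1.5)           # pretend the user clicks Cancel after 1.5 s
cancel_requested.set()
worker.join()
```

The UI thread never blocks, and cancellation is cooperative: the worker checks the flag at safe points rather than being killed mid-operation.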

d) Shortcut keys are such a help; I tend to always use them. A GUI application must have shortcut keys for the most commonly used tasks, and must be consistent with them. For example, never use Ctrl-Z for pasting something; Ctrl-Z is always the “Undo” command in all Windows applications, at least the correctly designed ones, so stick to it. Not all tasks need shortcut keys, and it’s actually not always prudent to have shortcuts for commands that involve data loss. Also try to keep shortcut keys spaced apart on the keyboard. Another example here: a friend of mine was working on an application where he assigned Ctrl-S to save and Ctrl-D to delete. A few months later he got a nasty mail from a client asking him to remove Ctrl-D, since the client would accidentally hit the delete combination while trying to save. Also, overly complex key combos like “Ctrl+Alt+F10” are not well appreciated.

e) “Empowering” GUIs are very, very popular. GUIs that allow the use of both hands can quickly become a hit. Consider the copy-paste command. Initially the combos were Ctrl-Insert and Shift-Insert. However, for a right-handed person that would mean lifting his hand off the mouse to operate the shortcut keys. So two new shortcuts were introduced: Ctrl-C and Ctrl-V. Now the user can select content with the mouse and copy-paste with his left hand. For those who don’t know, Ctrl-Insert and Shift-Insert still work in Windows Notepad and most other GUI editors under X, so left-handed people can still take good advantage of them. Such designs can really go a long way, because they empower the user to work rapidly.

f) Good GUI applications strive for “Undo” and “Redo”. This is easier said than done; it requires a lot of thought and pre-planning in the design stage of an application to have fully functioning Undo/Redo. However, it is a “must” in today’s GUI design. Can’t escape it. Hint: if you want Undo/Redo in your application, apply the Command and Memento design patterns.
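A minimal sketch of the Command half of that hint (Memento would instead snapshot and restore state); the AppendText command and the two stacks are illustrative names of my own:

```python
class Command:
    """Base class: every user action knows how to do and undo itself."""
    def execute(self): ...
    def undo(self): ...

class AppendText(Command):
    def __init__(self, document, text):
        self.document, self.text = document, text
    def execute(self):
        self.document.append(self.text)
    def undo(self):
        self.document.pop()

undo_stack, redo_stack = [], []

def run(command):
    command.execute()
    undo_stack.append(command)
    redo_stack.clear()          # a new action invalidates the redo history

def undo():
    if undo_stack:
        cmd = undo_stack.pop()
        cmd.undo()
        redo_stack.append(cmd)

def redo():
    if redo_stack:
        cmd = redo_stack.pop()
        cmd.execute()
        undo_stack.append(cmd)

doc = []
run(AppendText(doc, "hello"))
run(AppendText(doc, " world"))
undo()                           # doc == ["hello"]
redo()                           # doc == ["hello", " world"]
```

The pre-planning part is exactly this: every user-visible action has to be funneled through run() as a command object from day one, or retrofitting Undo later becomes a nightmare.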

g) Tool-bars are friends. Tool-bars are a must for all but the most trivial applications. True, some applications are too small to have tool-bars, but the rule of thumb is: if an application has menus, it should also have tool-bars.

h) Another thing I hate is deeply nested menus. If an application has to have them, then it should have a separate tool-bar for those menu items, since deeply nested menu items can be difficult for the user to locate. I have seen this, maybe not too often, but sometimes applications do bury commonly used functionality deep in the menu hierarchy. Not well appreciated, I must say.

i) Applications that fail to set the tab order on dialogs can quickly become unpopular, especially with laptop and notebook users. The same goes for accelerator keys on input widgets. I have seen really good, professional applications miss out on this point.

j) Good GUI designers try to conserve space, but never go overboard. A GUI should not cram widgets together; however, intelligent widget choices can go a long way. For example, a combo-box/choice-box will often require far less space than a list box while providing equivalent functionality.

k) Readability is another factor. Sometimes snazzy-looking applications with all those “skinnable” interfaces can make a mess of it. I generally try to avoid skins, custom fonts and custom colors. It is best to use the system default colors and fonts, since such GUIs scale across systems that may have different settings and hardware setups. It is also best to go for flow layouts or sizer-based GUIs. This allows for full compatibility across platforms and different windowing systems.
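Here is a tiny Tkinter sketch of that sizer/flow idea: no pixel positions are hard-coded, widgets keep their system default fonts and colors, and the layout manager absorbs resizes (wx sizers and Qt layouts work the same way):

```python
import tkinter as tk

root = tk.Tk()
root.title("Resizable layout, no hard-coded pixels")

# let row 0 / column 1 absorb the extra space when the window is resized
root.columnconfigure(1, weight=1)
root.rowconfigure(0, weight=1)

tk.Listbox(root).grid(row=0, column=0, sticky="ns", padx=4, pady=4)
tk.Text(root).grid(row=0, column=1, sticky="nsew", padx=4, pady=4)
tk.Button(root, text="OK").grid(row=1, column=1, sticky="e", padx=4, pady=4)

root.mainloop()
```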

These are probably only some of the things that can go wrong with GUI design. There may be more, but maybe I got a little tired of typing, and maybe you have too (after reading such a long post). I will just leave you with this link; I think some very good points are addressed there. Have a peek if you are interested.

My programming language is the best!

Very recently I was reading an article on a pretty well-known site where the author just downright insults a programming language for what it is. I refuse to quote the site for good reason, but that doesn’t really matter. The thing that really got to me is that the author gives downright mundane reasons as to why he hates a language, Java in this case. I have seen this sentiment all throughout the industry, where many (not all) programmers think the programming language “they” work with is the best and all others are either sub-par or downright foolish! I don’t intend to start a flame war on programming languages on this blog, far from it. Instead, I will try to debunk some commonly held misconceptions about programming and programming languages. And no, I am not going to directly touch on any particular language while doing so.

The biggest misconception of all, held by many, is: “You have to learn a particular language (C++, Java or any other) to become a good programmer.” The answer to that is… no! Programming is all about solving problems. If you can do that effectively, then you can become a good programmer using any programming language. The best programmers are the ones that solve difficult problems subtly and elegantly. The choice of language does not determine the quality of a programmer.

The second misconception, generally shared by a lot of budding programmers, is: “I have to learn the most difficult programming language to get an edge over others in the industry.” Not true either. Some of the best programmers I know started out with Perl, Python and shell scripting, which are some of the easiest languages around. In fact, now that I think about it, that experience actually gave them an edge over others, because they learned the holy grail of programming, the famous KISS principle (Keep It Short and Simple). If you are starting out, pick the easiest language you can understand, learn it completely, and become a good programmer with that language.

The next famous one is: “I can do anything and everything with the programming language I know. Learning or even looking at another language is a waste of time. My programming language is the best!” This is probably the most dangerous one of all, because this sentiment is shared by some senior programmers. Programming languages are like tools, and as with any good tool-chest, the more varied the tools in it, the better you will be at doing a particular job. Some jobs will require a hammer to force in a nail, others may require a spanner, and some others a drill. It is the same with programming languages. Many languages are designed to solve specific, sometimes domain-related problems, and most of the time they do an excellent job at that, often better than languages that were not specifically designed to address those issues. A good (senior) programmer will often know 3, 4 or even more languages, and will use different languages for different projects when needed. Never be afraid to learn a new programming language; in fact, strive for it.

And another one: “That language doesn’t have feature XYZ, which my programming language has. Hence it is incomplete, and therefore it is unsuitable for programming.” This is actually a corollary to the one above, and just as flawed. If a language doesn’t have a feature, then there is usually a good reason for omitting it in the first place: the designers, given the nature and scope of the language, felt that the feature was unnecessary. If you feel the need for that particular feature in your project, then it’s time to re-evaluate the choice of programming language for that project. It doesn’t mean the programming language in question is in any way bad or, for that matter, inferior.

Next misconception: “Programming is too complex; it’s for geeks and really clever people!” Nope! True, some programming languages are more complex than others, but some are really, really simple. Check out Python or Perl (not taking any sides here; you can just as easily use others). The rule of thumb is: if you can use a spreadsheet package (like Excel), you can learn to program.

Another one that creates a lot of debate is, obviously, the speed of a language. The misconception is: “Language XYZ is faster than language PQR.” Not entirely accurate. A language has no definition of speed; a language is never fast or slow, a program written in a language can be fast or slow. The speed of a program depends on a lot of factors and can be a tricky thing to evaluate. It has more to do with correct design decisions than with the language. There is a phrase I have coined for this over years of debating with people: “The speed of a program directly depends on the intelligence and experience of the programmer.” The fact is, a good programmer will produce fast code in any language. It is true that well-written code in some languages can be faster than similar well-written code in other languages, but unless you are working on the bleeding edge of technology, this doesn’t really matter.

A similar misconception is: “Language XYZ is more powerful than language PQR!” Not quite right. It entirely depends on the domain and the task at hand. Some languages are specifically designed for certain problems (Erlang comes to mind) and can be extremely effective at addressing them, often orders of magnitude better than others. Others are kept simple on purpose, just so you can do very rapid programming with them. Some programming languages go the extra mile to eliminate redundant, tedious programming tasks and whole classes of bugs. Again, I don’t intend to take sides, but for all rapid programming tasks I use Python because of its obvious benefits. However, there are other languages, like C#, which are equally good. Every language has strengths and weaknesses; no language is all-powerful.

This is where I stand on the whole issue of programming languages: I consider most programming languages, at least the famous ones used in the industry, “worth learning”. You can never be a master of all; it’s just impossible. However, you can certainly be good at a couple. Learning more languages will put you in a better position to choose the right language for the right problem (or task). It will also let you understand why these languages were designed the way they were, and thus make you a better programmer in the end. Most production-quality languages aren’t bad, incompetent or useless; don’t say they are. (I am talking to experienced programmers here!) Some languages may be inappropriate for a particular task, but harsher words are unwarranted!

Ultraportables.

If you haven’t already heard of the Eee PC (website), then you have probably been on the wrong side of the internet. The subnotebook has left other, more powerful machines biting the dust when it comes to sales figures. The Eee PC seems to be the latest ‘in thing’ when it comes to cool gadgets and might even rival the iPod in the future. I have actually never seen or used an Eee PC first hand, but I was browsing through its specs just to see what the gadget holds in its guts. The specs look pretty impressive for its size, but the point of interest as far as I am concerned is the graphics chipset. The Eee PC houses an Intel GMA 900 graphics chip; a little disappointing, I must say. I would have preferred an NVIDIA or ATI chip over an Intel one. The GMAs have bugged me all throughout the development of the game and I am not particularly fond of them. However, given the size and the target audience of the Eee PC, it isn’t too bad.

My primary interest was, of course, to see the capability of the Eee PC as a gaming platform, especially since ASUS has explicitly stated in its motto that one of the “e”s in Eee PC stands for “Easy to play”. From the raw spec data, a Celeron processor, 512 MB of RAM and an Intel graphics chipset make a pretty ordinary setup even for casual games. Anything that is even slightly heavy on the graphics side will fail to run on the Eee PC. It may run last-generation 3D games like Quake 3 or Half-Life, but I have serious doubts about anything of a later generation running on this setup. Also, at least 4 GB of disk space is, I think, a must, though you could live with 2 GB.

Another interesting thing is that the setup comes bundled with Linux (Xandros) installed. Logical, considering some other OSes today are particularly heavy on systems, and having a proprietary OS would have driven up the cost of the machine. From what I could find, the Linux OS runs KDE as its default desktop environment. Surprising, since KDE is among the most resource-heavy (when compared to Gnome and Xfce). I would have preferred Xfce since it is very light on system resources; however, KDE looks much better and snazzier. I guess installing Xubuntu on the Eee PC would do wonders. Apparently you can install other OSes on the Eee PC as well. Coming back to gaming: Linux has very little to no market for games, so users of the Eee PC might be limited to the few games that have native Linux support, but you always have Wine to run all those Windows games.

All in all, a good setup to carry around for the common casual user who wants to surf the net and chat with friends. The popularity the system has achieved in such a short time just goes to show how solid a setup it is. Also, so many people owning Eee PCs with Linux installed means Linux will finally get the attention it deserves. As someone on some internet board said, and I quote, “… and people will finally see, they can run thousands of applications… for free!”

The showdown! OpenOffice.org vs OOXML.

It’s no secret that Microsoft has been pushing its Office Open XML (OOXML) standard with the International Organization for Standardization (ISO) for quite some time now, some might say against the already standardized Open Document Format (ODF) used by OpenOffice.org. It would seem MS is trying for a final push at the ballot resolution meeting this February. I must admit this topic is not new, and I had earlier refrained from commenting on it, since there are already too many blogs with similar content and because I hate to be on any one side of the fence. The reason given by Microsoft for favoring its format over its rival seems to be that OOXML is more application-friendly than ODF and, more importantly, far more compatible with legacy Microsoft Office formats. The argument put forth by Microsoft is, “There could be more than one XML standard and more than one document format in the market.” Yes, there could be, but then again, why does Microsoft want its format standardized? The fact is users (like you and me) couldn’t care less what the internals of XML formats are made up of as long as they get the work done. They could just be happy with the OOXML format used by Office 2007 or, for the “poor” people (like me) who obviously can’t afford MS Office, be happy with OpenOffice.org. So why push for standardization now, when MS Office suites already use and do a pretty good job with proprietary formats, and MS already has 90% of the market?

The MS vs. OSS battle is nothing new, and this just adds more to what has already become an endless debate. However, I like to look at this from a neutral position, as someone who is sitting on the fence and not on any one side. To me, the reasons given by Microsoft for standardizing OOXML seem downright selfish. The OOXML format is clearly tailored for Microsoft products and interoperability with MS Office packages; it doesn’t take into account any other vendors or Office suites. People have often said that the OOXML specification is more complex, and for products other than the MS line, OOXML is more difficult to adhere to accurately. There is even some talk about the OOXML format being encumbered with patents, which might not allow its adoption in OSS products without infringement of some sort. This to me looks like typical market muscling by Microsoft. Its argument over ODF holds little ground, even though its Office products are far superior to OpenOffice.org or any of its other competitors.

It would seem that this post is more about bashing Microsoft, but it is not. Standardization is a complex process that requires serious thought. The very definition of the term implies party- and vendor-neutral standards, which is where the OOXML format fails. It may be true that the format is superior to ODF (I don’t really know, or care for that matter), but should it be made a standard? You can guess the answer to that one yourself!

Nokia taking over Trolltech/Qt.

In some interesting news, at least for cross-platform and Open-Source developers and particularly for Linux/KDE enthusiasts, Trolltech was acquired by Nokia. I have worked on 2 projects using Qt, but I generally favour wxWidgets, since the moc-compiler thingie is too much of a compile burden when it comes to complex UIs, not to mention the really suspicious licenses for Qt. However, that is beside the point. The question to ask is: what does Nokia gain by having a framework like Qt under its belt? Very clearly, Nokia is interested in Qtopia. I first remember reading about it 2 years ago when I was still working with Qt, and it looked pretty impressive at that time. It seems to have all the bells and whistles for serious mobile development. However, the thing that bothers me is the future of the toolkit/framework in general, and of all the projects using Qt. Where does Nokia take the toolkit from here? Nokia isn’t a company that licenses toolkits or frameworks, so that is a question one is obviously tempted to ask. My strong suspicion is that Nokia is feeling the pressure from platforms like Windows Mobile and the iPhone, and if it wants at least some semblance of a development environment to compete with these obviously more developer-friendly platforms, Qt was the obvious choice. But that still leaves me with one more question: why acquire Trolltech when you could just use, well, the Qt framework?

In the above-linked article they mention that they intend to continue working with the OSS community and intend to port Qt to their mobile devices. That is smart. It would obviously mean a myriad of existing and upcoming Open-Source applications for Nokia devices at no cost. In any case, KDE itself is quite secure, no need to worry there, but the thing that strikes me as odd (and I myself didn’t know this until I read the article) is the fact that most Nokia Linux devices use Gnome and may well continue to do so.

In search of a Python IDE.

There is simply no good Python IDE under Linux. Last night I tried searching for one and ended up empty-handed, well, almost. Under Windows the situation isn’t too good either; I mostly use PythonWin over there, and it gets the job done, at least most of the time. Probably not as good as I would like, but it does the job. Under Linux, however, the situation is even worse. There is no IDE that can be used for serious Python development. Maybe it’s me, but I found it a little strange that such a popular language as Python would lack a proper IDE. To be fair, the only thing that comes close to a good Python programming environment is Komodo Edit, though it too is rough around the edges.

KDevelop is kinda OK. Even though it is good for C++ development, it lacks proper support for Python. For one, I couldn’t get the Python debugger working under KDevelop 🙁 . Also, KDevelop uses Makefiles for its project management, and that just made me run away from it rather quickly; Makefiles are a little too much for a simple scripting language like Python. The other editors/IDEs I tried were SPE, Eric, DrPython, Editra, Boa Constructor and Emacs. While most of them are fairly decent, none are up to the mark. I would place good ol’ Emacs at number 3, since it does the job fairly well without crashing or major hiccups. Most of the other editors were either clunky or simply crashed way too often (I haven’t tried any commercial ones; sorry, strapped for cash here).

Komodo Edit is more like an editor with partial support for Python. I haven’t managed to get a debugger working with it in the short while that I have used it (no idea if you actually can) 🙁 . But it seems the best bet for Python development under Linux (if you want to use free software). The good thing about this editor is that you can run custom commands, so you basically run your script via a custom command, since the editor itself doesn’t provide a run command out of the box. Project layout? Well, there is none; you basically place everything under the project directory and the editor just picks it up in its tree window. Probably a little trivial, but come to think of it, what else do you need for Python? It’s not like you have compiler switches or linker optimizations that need to be performed. Besides, such a setup means there are fewer complications running scripts from the command line, since in Python all module paths are made relative to the script anyway. Overall, Komodo Edit is a good bet if you want to do quick and simple Python scripting under Linux.

Invasion of the multi-core machines.

They are here. They were already here, but now they are really here and can no longer be ignored. Developers and programmers, especially game developers, can no longer afford to sit back and just watch the invasion of machines with multi-core processors. While hardware manufacturers have scaled their processes to bring these chips to us, software and compilers haven’t scaled equally well. Ironically, with current programming methodologies, programming practices and compilers, programmers can’t yet take full advantage of the CPU power thrown at them. It’s not entirely the programmer’s fault, and neither are they limited by their intelligence. The fact is, current-generation programming models fall short of addressing the multi-core issue in a reliable way. Yes, there are workarounds, and you can definitely use them to get some advantage on a multi-core machine. Applications using even simple threads can benefit from multiple cores, but merely having multiple threads in an application doesn’t mean that application or its threads will run at twice the speed on a dual-core CPU. The performance boost for a “normal” multi-threaded application running on a multi-core system will be rather minuscule compared to the computing power such a system provides. To be frank, all those cores are of little use from the “average joe” programmer’s perspective, because, if not programmed with care, the advantages provided by those cores are wasted, at least within one given application.

Let’s look at it from a more technical perspective. Most multi-threaded applications are not written to use multiple concurrent threads aggressively. A typical multi-threaded application delegates only a minuscule portion of its program code to a thread, often called a worker thread. Generally, the most CPU-intensive or blocking operations in a program are done inside this thread. Consider the example of a web-downloader application: one worker thread does all the downloading while the UI thread waits and processes user input like “Cancel” or “Quit”. Here the program was explicitly designed this way so that the UI can respond to user input while the task of downloading a file goes on. In other situations threads may be used for a different purpose. Take the case of the Doofus game. In the game, shadow calculations are the most CPU-intensive operations, but they are not required every frame (or cycle), so they are done inside a thread at lower priority, with the calculations distributed so that results are obtained every 2 or 3 frames. Whatever the case may be, the fact remains: such designs are not the most optimal way to program for multi-core machines. In the web-downloader application, one thread waits while the other does all the work. In the game the situation is a little better, but the task is still not optimally distributed between threads. The ideal case would be to have multiple threads share the entire workload of the application, so that all the threads are busy all the time. If that were indeed the case, and if those threads were to run on separate cores, you would be able to harness the true power of a multi-core machine.
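A rough sketch of that ideal: split the whole job into independent chunks and hand one to each core. I’ll use Python’s multiprocessing module here (processes rather than threads, since Python’s interpreter lock keeps threads from running Python code on two cores at once; in C++ the same shape would be a thread pool). The heavy function is a made-up stand-in for a real work unit:

```python
import multiprocessing
import math

def heavy(n):
    """Stand-in for one CPU-bound work unit (e.g. a batch of shadow calculations)."""
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 8              # the whole workload, pre-split into chunks
    with multiprocessing.Pool() as pool:  # one worker process per core by default
        results = pool.map(heavy, chunks) # every core stays busy until the work runs out
    print(sum(results))
```

Contrast that with the downloader: here no worker sits idle waiting on another; the workload itself is divided.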

Programming multiple concurrent threads in an application is difficult. Thread synchronization is not for the faint-hearted, and having a lot of threads in an application can create bugs that are difficult to debug. Traditional multi-threading is synchronized using lock-based methods, and lock-based synchronization is prone to problems like deadlocks, livelocks and race conditions (see the sketch below). Experienced programmers will avoid multi-threading when they can and go for simpler, often single-threaded solutions. This is the opposite of what is advocated by concurrent computing and parallel programming, which can clearly take advantage of multiple cores very effectively. It is true that even with current multi-threaded designs you can benefit from multi-core architecture: even if such programs internally can’t use the power of multiple cores, the OS can still make full use of the architecture for multi-tasking. To put it in simpler language, multiple applications running at one time will run faster (note the subtle difference there). For a typical system, applications like anti-virus programs and background processes, running alongside other user applications, will definitely benefit from additional cores. This, however, isn’t very helpful from a game development perspective, since most games are single applications. Games typically take up most of the system resources while running, and the advantages of multi-tasking are all but useless to them. Games must therefore be able to harness the true power of multiple cores internally. What does that mean? Does it mean a paradigm shift in how games are built? Do we need a special language to do so? Should we shun current programming practices? Drop C/C++ and look at, maybe, Erlang or Haskell? Use parallel programming concepts? The questions are many and are increasingly being asked by a lot of people. The truth, and the solution, is not quite so simple.
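To illustrate just one of those lock-based pitfalls, here is the classic two-lock deadlock in miniature (a deliberately broken sketch, not a pattern to copy): each thread grabs one lock, then blocks forever waiting for the lock the other thread holds.

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker_one():
    with lock_a:
        time.sleep(0.1)      # give the other thread time to grab lock_b
        with lock_b:         # blocks forever: worker_two holds lock_b
            pass

def worker_two():
    with lock_b:
        time.sleep(0.1)
        with lock_a:         # blocks forever: worker_one holds lock_a
            pass

# Running this hangs the program: each thread holds the lock the other
# one needs. The standard fix is to always acquire locks in one global order.
t1 = threading.Thread(target=worker_one)
t2 = threading.Thread(target=worker_two)
t1.start(); t2.start()
```

Bugs like this are timing-dependent, which is exactly why they are so hard to reproduce and debug.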


Comedy of errors.

A mishap happened last night. I was typing out a rather long blog entry regarding multi-processing and multi-core machines when there was a nasty power surge and the machines went down. When the machines came back online, I had lost a good 80% of the post to a comedy of errors, which just left me feeling rather frustrated! First, let me assure you it was no one’s fault, not even mine. WordPress saves blog entries every 2 minutes or so while you are typing, which should have meant I wouldn’t lose more than about 2 to 4 lines of whatever I was typing. However, what happened was really strange. When the machines did come back online, I restarted Firefox and it prompted me to restore the session, which I did. I shouldn’t have, but I did! That loaded a stale version of the page with about 80% of the post missing, and it just so happened that WordPress automatically saved that post, overwriting my current copy with the earlier version loaded by Firefox!

Damn! I had been typing that out during breaks for a week. What happened was really frustrating!

Is there a server in the house!!?!!

It would seem that when all else fails, you load a crap OS onto a super-expensive machine and start marketing it as a server to kids and moms! I am talking about, well, Windows Home Server. Now Microsoft is all set to make children understand the Stay-At-Home Server by using a children’s book. Yes, you heard that right! OK, hold on a sec there, back up a bit. First of all, can someone please explain to me this whole concept of a home server? What is a home server, and what exactly will it do, or rather, what extra functionality is it going to provide that is not already provided by the good ol’ desktop? I was reading through the feature list, and what a bunch of b**l s**t. A server for backup and photo sharing! You could do that with your laptops and desktops as well, and yeah, automatically too. They even go on to imply it could be used as a web-server. The last time I checked, home internet plans explicitly forbade the use of their IPs for web servers. Oh yeah, on the same page, please read the disclaimers in small print: “Please contact your broadband service provider”. Yeah, right! It’s not Microsoft’s problem, it’s the service provider’s problem. Wonder why the service providers are so paranoid about security? Maybe because such a server could be used for all sorts of illegal stuff, but hey, that’s just the service provider’s problem.

Ports, IP addresses, service providers, TOCs, subnets, DNS servers, name resolvers, firewalls, web-servers, hand-shakes, packet fragmentation, VoIP, streaming media. Kids, wasn’t all this taught to you in kindergarten? Hmm… maybe it should be; then you could be CCIEs by the time you graduate.

An Unreal Crysis.

If you are a graphics geek and love to see those so-called next-gen effects, then recently released games like Crysis, UT3 and, to some extent, Bioshock will give you a lot to cheer about. Crysis, for one, has shown that modern top-line cards can push extraordinary amounts of detail. However, raw figures show that Crysis and UT3 sales have been anything but extraordinary. They have, in fact, fallen flat! Interesting figures there, and to some extent I am a bit surprised by what they show. As the articles point out, both games were pretty hyped up before release and should have made flat-out more sales than they did. True, Crysis has some crazy hardware requirements, but the game can still be played with older and less powerful graphics cards, and so can UT3. Maybe not with all the graphics effects and resolution maxed out, but they can be played nevertheless. Besides, both games have *huge* fan bases, so the figures are very surprising indeed.

Well, I can’t speak for everyone, but my personal take on the whole thing is that the vanilla FPS genre is getting old. After so many games churning out the same mundane gameplay, it has pretty much lost its charm. True, the graphics have improved, but not the gameplay in general. Games like Bioshock stand apart from the crowd because they give that little bit more to the overall game, and that is exactly why they sell more. What I can tell you from my experience over the years of playing games (and I have pretty much repeated this a lot of times on this blog) is that FPS games are getting kinda boring. As a gamer I want more interesting stuff in there. That is exactly the reason I spent nearly 6 months playing Oblivion. The game gave me so much more to do than just run, kill, run, kill, collect ammo, run, kill, collect health, run, kill…..

I myself haven’t played UT3, and for that matter have only watched someone else play Crysis, but what I have heard people say about the games makes me wonder if they are nothing more than tech demos. Maybe we should look at it from a different perspective: it’s a fact that Epic markets its engines via the UTx games, and I think to some extent Crytek does that too. So maybe that is exactly what those games are there for: to show off what their respective engines can achieve. The graphical brilliance achieved by both games/engines is amazing, there is little doubt about that, and the hardware requirements are equally demanding. But that is for now. The same hardware will become mainstream in another 6 to 8 months, and the same engines can be used/licensed to make other games. I therefore wouldn’t count them as outright failures.

Different people have different tastes and different points of view, and so naturally different tastes in game genres. However, the feeling I get is that, in general, game genres are beginning to overlap. This, I think, is out of necessity. Game designers that strive to make their games “immersive” have started incorporating ideas and methods from other genres to make gameplay more interesting and challenging. However, having an equally good engine is a must. Case in point: Oblivion. The game looks great because it uses Gamebryo, which is another good engine. I am pretty sure we will see more and better games using both engines in the future.