It’s no secret that Microsoft has been pushing its Office Open XML (OOXML) standard with the International Organization for Standardization (ISO) for quite some time now, some might say against the already standardized Open Document Format (ODF) used by OpenOffice.org. It would seem MS is making a final push at the ballot resolution meeting this February. I must admit this topic is not new, and I had earlier refrained from commenting on it, since there are already too many blogs with similar content, and for the fact that I hate to be on any one side of the fence. The reason given by Microsoft over its rival seems to be that OOXML is more application friendly than ODF and, more importantly, far more compatible with legacy Microsoft Office formats. The argument put forth by Microsoft is, “There could be more than one XML standard and more than one document format in the market.” Yes, there could be, but then again why does Microsoft want its format standardized? The fact is users (like you and me) couldn’t care less what the internals of XML formats are made up of as long as they get the work done. They could be happy with the OOXML format used by Office 2007 or, for the “poor” people (like me) who obviously can’t afford MS Office, with OpenOffice.org. So why push for standardization now, when the MS Office suites already use and do a pretty good job with proprietary formats, and MS already has 90% of the market?
The MS vs. OSS battle is nothing new, and this just adds more to what has already become an endless debate. However, I like to look at this from a neutral position, as someone sitting on the fence and not on any one side. To me the reasons given by Microsoft for standardization of OOXML seem downright selfish. The OOXML format is clearly tailored for Microsoft products and interoperability with MS Office packages. It doesn’t take into account any other vendors or Office suites. People have often said that the OOXML specification is overly complex, and for products outside the MS line it is difficult to adhere to accurately. There is even some talk about the OOXML format being encumbered with patents, which might not allow its adoption in any OSS products without infringement of some sort. This to me looks like typical market muscling by Microsoft. Its argument over ODF holds little ground, even though its Office products are far superior to OpenOffice.org or any of its other competitors.
It would seem that this post is more about bashing Microsoft, but it is not. Standardization is a complex process that requires serious thought. The very definition of the term implies party- and vendor-neutral standards, which the OOXML format fails to be. It may be true that the format is superior (I don’t really know, or care for that matter) to ODF, but should it be made a standard? You can guess the answer to that one yourself!
In some interesting news, at least for cross-platform and Open-Source developers and particularly for Linux/KDE enthusiasts, Trolltech was acquired by Nokia. I have worked on 2 projects using Qt, but I generally favour wxWidgets, since I find the moc-compiler thingie to be too much of a compile burden when it comes to complex UI, not to mention the really suspicious licenses for Qt. However, that is beside the point. The question to ask is: what does Nokia gain by having a framework like Qt under its belt? Very clearly Nokia is interested in Qtopia. I first remember reading about it 2 years ago when I was still working with Qt, and it looked pretty impressive at the time. It seems to have all the bells and whistles for serious mobile development. However, the thing that bothers me is the future of the toolkit/framework in general and of all the projects using Qt. Where does Nokia take the toolkit from here? Nokia isn’t a company that licenses toolkits or frameworks, so that is a question one is obviously tempted to ask. My strong suspicion is that Nokia is feeling the pressure from platforms like Windows Mobile and the iPhone, and if they want to have at least some semblance of a development environment to compete with these obviously more developer-friendly platforms, Qt was the obvious choice. But that still leaves me with one more question: why acquire Trolltech when you could just use, well, the Qt framework?
In the above linked article they mention that they intend to continue working with the OSS community and intend to port Qt to their mobile devices. That is smart. It would obviously mean a myriad of existing and upcoming Open-Source applications for Nokia devices at no cost. In any case, KDE itself is quite secure, no need to worry there, but the thing that strikes me as odd is the fact that most Nokia Linux devices (and I myself didn’t know this until I read it) use Gnome, and may well continue to do so.
There is simply no good Python IDE under Linux. Yesterday night I tried searching for one and ended up empty handed, well, almost. Under Windows the situation isn’t too good either. I mostly use PythonWin there and it gets the job done, at least most of the time. Probably not as good as I would like, but it does the job. Under Linux, however, the situation is even worse. There is no IDE that can be used for serious Python development. Maybe it’s me, but I found it a little strange that a language as popular as Python would be lacking a proper IDE. To be fair, the only thing that comes close to a good Python programming environment is Komodo Edit, though it too is rough around the edges.
KDevelop is kinda OK. Though it is good for C++ development, it lacks proper support for Python. For one, I couldn’t get the Python debugger working under KDevelop 🙁 . Also, KDevelop uses Makefiles for its project management, and that made me run away from it rather quickly. Makefiles are just a bit too much for a simple scripting language like Python. The other editors/IDEs I tried were SPE, Eric, DrPython, Editra, Boa Constructor and Emacs. While most of them are fairly decent, none are up to the mark. I would place good ol’ Emacs at number 3, since it does the job fairly well without crashing or major hiccups. Most of the other editors were either clunky or simply crashed way too often (I haven’t tried any commercial ones, sorry, strapped for cash here).
Komodo Edit is more like an editor that has partial support for Python. I haven’t managed to get the debugger working in the short while that I have used it (no idea if you can actually do such a thing) 🙁 . But it seems the best bet for Python development under Linux (if you want to use free software). The good thing about this editor is that you can run custom commands, so you basically run your script via a custom command, since the editor itself doesn’t provide a run command out of the box. The project layout; well, there is none: you basically place everything under the project directory and the editor just picks it up in its tree window. Probably a little trivial, but come to think of it, what else do you need when it comes to Python? It’s not like you have compiler switches or linker optimizations that need to be performed. Besides, such a setup means there are fewer complications running scripts from the command line, since Python puts the script’s own directory on the module path anyway. Overall, Komodo Edit is a good bet if you want to do quick and simple Python scripting under Linux.
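As a quick illustration of why that flat “everything under the project directory” layout just works (a minimal sketch using a throwaway project in a temp directory; the file names are made up for the demo): when you run a script, Python prepends the script’s own directory to sys.path, so sibling modules import without any extra configuration.

```python
import os
import subprocess
import sys
import tempfile

# Build a tiny throwaway "project": one helper module, one main script.
project = tempfile.mkdtemp()
with open(os.path.join(project, "helper.py"), "w") as f:
    f.write("GREETING = 'hello from helper'\n")
with open(os.path.join(project, "main.py"), "w") as f:
    f.write("import helper\nprint(helper.GREETING)\n")

# Run main.py from a *different* working directory; the import still
# resolves, because the interpreter puts the script's directory first
# on sys.path.
out = subprocess.run(
    [sys.executable, os.path.join(project, "main.py")],
    capture_output=True, text=True, cwd=tempfile.gettempdir(),
)
print(out.stdout.strip())  # -> hello from helper
```

So a plain directory of scripts needs no build system or project file at all, which is exactly why an editor with a tree window and a custom run command gets you most of the way there.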
They are here. They were already here, but now they are really here and can no longer be ignored. Developers and programmers, especially game developers, can no longer afford to sit back and just watch the invasion of machines with multi-core processors. While hardware manufacturers have scaled their processes to bring multiple cores to us, software and compilers haven’t scaled equally well. Ironically, with current programming methodologies, programming practices and compilers, programmers can’t yet take full advantage of the CPU power thrown at them. It’s not entirely the programmer’s fault, and neither are they limited by their intelligence. The fact is that current-generation programming models fall short of addressing the multi-core issue in a reliable way. Yes, there are currently workarounds, and you can definitely use them to get some advantage on a multi-core machine. Applications using even simple threads can benefit from multiple cores, but merely having multiple threads in an application doesn’t mean the application or its threads will run at twice the speed on a dual-core CPU. The performance boost for a “normal” multi-threaded application running on a multi-core system will be rather minuscule compared to the computing power a multi-core system provides. To be frank, all those cores are of little use from the “average joe” programmer’s perspective, because unless programmed with care, the advantage provided by those cores goes to waste, at least within any one given application.
Let’s look at it from a more technical perspective. Most multi-threaded applications are not written to use multiple concurrent threads aggressively. A typical multi-threaded application delegates only a minuscule portion of its program code to a thread, often called a worker thread. Generally the most CPU-intensive or blocking operations in a program are done inside this thread. Consider the example of a web-downloader application. You have one worker thread doing all the downloading while the UI thread waits and processes user input like “Cancel” or “Quit”. Here the program was explicitly designed this way so that the UI can respond to user input while the task of downloading a file goes on. In other situations threads may be used for a different purpose. Take the case of the Doofus game. In the game, shadow calculations are the most CPU-intensive operations but are not required every frame (or cycle), so they are done inside a thread at lower priority; typically the calculations are distributed so that results are obtained every 2 or 3 frames. Whatever the case may be, the fact remains that such designs are not the most optimal way to program for multi-core machines. In the web-downloader application, one thread waits while the other does all the work. In the game, the situation is a little better, but the task is still not optimally distributed between threads. The ideal case would be to have multiple threads share the entire workload of the application so that all the threads are busy all the time. If that were indeed the case, and if these threads were to run on separate cores, you would then be able to harness the true power of a multi-core machine.
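The worker-thread pattern described above can be sketched in a few lines of Python (a toy model, not a real downloader: download() just sleeps in place of a network fetch, and the “UI” is a simple polling loop):

```python
import queue
import threading
import time

progress = queue.Queue()
seen = []

def download(chunks=5):
    # Worker thread: does the blocking work and reports progress back
    # to the UI thread through a thread-safe queue.
    for i in range(1, chunks + 1):
        time.sleep(0.01)                    # stand-in for fetching a chunk
        progress.put(i * 100 // chunks)

worker = threading.Thread(target=download)
worker.start()

# "UI" thread: stays free to handle input; it only polls for progress.
while worker.is_alive() or not progress.empty():
    try:
        pct = progress.get(timeout=0.05)
        seen.append(pct)
        print("downloaded %d%%" % pct)
    except queue.Empty:
        pass                                # idle: a real UI would pump events here
worker.join()
```

Note the asymmetry: one thread does all the work while the other mostly waits, which is exactly why this design gains so little from a second core.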
Programming multiple concurrent threads in an application is difficult. Thread synchronization is not for the faint-hearted, and having a lot of threads in an application can create bugs that are difficult to debug. Traditional multi-threading is synchronized using lock-based methods, and lock-based synchronization is prone to problems like deadlocks, livelocks and race conditions. Experienced programmers will avoid multi-threading when they can and opt for simpler, often single-threaded solutions. This is the opposite of what is advocated by concurrent computing and parallel programming, which clearly can take advantage of multiple cores very effectively. It is true that even with current multi-threaded designs you can benefit from multi-core architecture. Even if such programs internally can’t use the power of multiple cores, the OS can still make full use of the architecture for multi-tasking. To put it in simpler language, multiple applications running at one time will run faster (note the subtle difference there). For a typical system, applications like anti-virus programs and background processes running alongside other user applications will definitely benefit from additional cores. This however isn’t very helpful from a game-development perspective, since obviously most games are single applications. Games typically take up most of the system resources while running, and the advantages of multi-tasking are all but useless to them. Games therefore must be able to harness the true power of multiple cores internally. What does that mean? Does it mean a paradigm shift in how games are built? Do we need a special language to do so? Should we shun current programming practices? Drop C/C++ and look at maybe Erlang or Haskell? Use parallel programming concepts? The questions are many and are increasingly being asked by a lot of people. The truth, and the solution, is not quite so simple.
A mishap happened last night. I was typing out a rather long blog entry regarding multi-processing and multi-core machines when there was a nasty power surge and the machines went down. When the machines came back online I had lost a good 80% of the entry to a comedy of errors, which just left me feeling rather frustrated! First, let me assure you it was no one’s fault, not even mine. WordPress saves blog entries every 2 minutes or so while you are typing, which should have meant I wouldn’t lose more than about 2 to 4 lines of whatever I was typing. However, what happened was really strange. When the machines did come back online, I restarted Firefox and it prompted me to restore the session, which, incidentally, I did. I shouldn’t have, but I did! That loaded the saved version of the page, which had about 80% of the post missing, and it just so happened that WordPress automatically saved that post, overwriting my current copy with the earlier version loaded by Firefox!
Damn! I had been typing that out during breaks for like a week. What happened was really frustrating!
It would seem that when all else fails, you load a crap OS onto a super expensive machine and start marketing it as a server to kids and moms! I am talking about, well, Windows Home Server. Now Microsoft is all set to make children understand the Stay-At-Home Server by using a children’s book. Yes, you heard that right! OK, hold on a sec there, back up a bit. First of all, can someone please explain to me this whole concept of a home server? What is a home server, and what exactly will it do, or rather, what extra functionality is it going to provide that is not already provided by the good ol’ desktop? I was reading through the feature list, and what a bunch of b**l s**t. A server for backup and photo sharing! You could do that with your laptops and desktops as well, and yeah, automatically too. They even go on to imply it could be used as a web server. The last time I checked, home internet plans explicitly forbid the use of their IPs for web servers. Oh yeah, on the same page, please read the disclaimers in small print. “Please contact your broadband service provider”, yeah right! It’s not Microsoft’s problem, it’s the service provider’s problem. Wonder why the service providers are so paranoid about security? Maybe because such a box could be used for all sorts of illegal stuff, but hey, that’s just the service provider’s problem.
Ports, IP addresses, service providers, terms of service, subnets, DNS servers, name resolvers, firewalls, web servers, handshakes, packet fragmentation, VoIP, streaming media. Kids, wasn’t all this taught to you in kindergarten? Hmm… maybe it should be; then you could be CCIEs by the time you graduate.
If you are a graphics geek and love to see those so-called next-gen effects, then recently released games like Crysis, UT3 and, to some extent, Bioshock will give you a lot to cheer about. Crysis for one has shown that modern top-of-the-line cards can push extraordinary amounts of detail. However, raw figures show that Crysis and UT3 sales have been anything but extraordinary. They have in fact fallen flat! Interesting figures there, and to some extent I am a bit surprised by what they show. As the articles point out, both games were pretty hyped up before release and should have made flat out more sales than they did. True, Crysis has some crazy hardware requirements, but the game can still be played on older and less powerful graphics cards, and so can UT3. Maybe not with all the graphics effects and resolution maxed out, but they can be played nevertheless. Besides, both games have *huge* fan bases, so the figures are very surprising indeed.
Well, I can’t speak for everyone, but my personal take on the whole thing is that the vanilla FPS genre is getting kinda old. After so many games churning out the same mundane gameplay, it has pretty much lost its charm. True, the graphics have improved, but not the gameplay in general. Games like Bioshock stand apart from the crowd because they give that little bit more to the overall game, and that is exactly why they sell more. What I can tell you from my years of playing games (and I have pretty much repeated this a lot of times on this blog) is that FPS games are getting kinda boring. As a gamer I want more interesting stuff in there. That is exactly the reason I spent nearly 6 months playing Oblivion. The game gave me so much more to do than just run kill, run kill, collect ammo, run kill, collect health, run kill …..
I myself haven’t played UT3, and for that matter have only watched someone else play Crysis, but what I have heard people say about the games makes me wonder if they are nothing more than tech demos. Maybe we should look at it from a different perspective: it’s a fact that Epic markets its engines via the UTx games, and I think to some extent Crytek does that too. So maybe that is exactly what those games are here for, to show off what their respective engines can achieve. The graphical brilliance achieved by both games/engines is amazing, there is little doubt of that, and the hardware requirements of the games are equally demanding. But that is for now. The same hardware will become mainstream in another 6 to 8 months, and the same engines can be used/licensed to make other games. I therefore wouldn’t count them as outright failures.
Different people have different tastes and different points of view, and so naturally different tastes in game genres. However, the feeling I get is that, in general, game genres are beginning to overlap, and I think that is out of necessity. Game designers who strive to make their games “immersive” have started incorporating ideas and methods from other genres to make gameplay more interesting and challenging. Having an equally good engine is a must, though. Case in point: Oblivion. The game looks great because it uses Gamebryo, another good engine. I am pretty sure we will see more and better games using both engines in the future.
First of all, a “Very Happy New Year” to all.
Just to highlight some interesting news and events from the year gone by, plus my own experiences.
- It was probably the game of the year (at least as far as I am concerned); I am talking about Bioshock. I enjoyed playing it, even though not on my PC, and I still haven’t completed it. Truly amazing graphics and a new twist on the FPS style of play.
- The Elder Scrolls IV: Oblivion. This game doesn’t come in first place because it was launched not this year but the last. It is here since I only managed to play and complete the game this year. I played it along with the Shivering Isles and Knights of the Nine expansions for more than 5 months 😛 starting July, and I must say I have come to thoroughly enjoy the sandbox-style gameplay the game offers. Don’t be surprised if I start getting crazy ideas about creating games like this in 2008 😉 .
- Just when we thought nothing could tax the 8800, Crysis hit! The game takes away the best visual graphics award of 2007. Amazing eye candy and surely the sign of things to come, though I am not sure about the overall gameplay.
- There were a couple of other interesting games as well, like God of War II and Gears of War, but I haven’t got my hands on them yet.
- Programming and Development:
- Biggest disappointment was the postponement of OpenGL 3.0 specs. I was hoping to see at least something concrete on this, but to no avail. I hope 2008 will give us more to look forward to.
- 2007 saw the release of Visual Studio 2008 and its Express editions. Not too much to complain about or praise there. .NET 3.5 was released along with the studio versions.
- While major releases were few and far between, minor releases like Cg 2.0 and Silverlight dominated most of the programming and development news.
- Personal projects:
- The biggest miss was not being able to launch Doofus 3D. Period! The game was slated to release October/November, but inevitable delays and project pressures resulted in it not being shipped. This has been the biggest disappointment from my side.
- The project is however still on track, and barring delays the product and the engine have become stable and look more and more like a very solid platform for future projects. Most (almost all) of my ideas (some really crazy ones too) have thankfully worked!
- My R&D on scripting-engine integration has yielded good results. I remember my promise; I will update the blog with some statistical data on this, I’m just tied up with project pressures for now. On the whole, R&D this year from my side was lower than it was last year.
- Got a new website this year, migrated the blog and also have one lined up for the game release.
- The year belonged to NVIDIA; the 8800 pretty much dominated the graphics scene unchallenged for most of 2007. There was a feeble attempt by AMD(/ATI) at the end of the year, but the HD 3870 and 3850 have been plagued with shipping problems, though they have shown impressive figures and amazing value for money considering their price points. I expect the green brigade to counter that soon; they are already well ahead in the race to do so.
- Next was Intel, which has successfully managed to run the competition (AMD) into the ground, with its chips, the Core 2s, pretty much dominating the market. The Phenoms are here but still have to prove themselves. It’s safe to say Intel ruled 2007.
- Operating Systems:
- I have done enough Vista bashing on this blog already, so no more! My sentiments regarding the OS, however, remain unchanged. 2007 has been particularly bad for Vista; the OS was given flak in a lot of articles on the web. My recommendation: give the OS a skip for the time being and use XP and/or…
- Ubuntu 7.10, code named Gutsy Gibbon (released 2007), has been a revelation for me. I have been using this OS for a month now on my internet PC and I am more than happy with it. True, some quirks remain, but Ubuntu is a great OS for, well, everyone and anyone. I recommend this OS hands down!
- Misc News:
- India wins the Twenty20 World Cup 2007.
New year resolution:
Release Doofus 3D.
A lot of plans in mind, but more on that later.