Be aware while signing up on social networking sites.

Do you get those annoying mails asking you to join social networking sites? (Names intentionally left out.) Most are along these lines: “Your very special friend Joe Sasquatch has invited you to join the Big Foot community. Please sign up so you can share your precious and rare photos with your buddies and build a gigantic network of friends.” Hmm… maybe not so annoying if you like a little bit of flattery. It all depends on how you take it, really. Considering there are literally hundreds of social networking sites opening up, I am sure most people get similar emails from time to time. Well, you won’t believe it, but in the past two days alone I received 10 such requests. These included requests from former colleagues, friends and even some distant acquaintances whom I barely know. Some were from people with whom I had only briefly interacted in the past, maybe just via a couple of emails. Now, 10 requests in 2 days may be purely coincidental, but what I’d like to point out is that it made me believe (and rightly so) that those people didn’t always willingly send me those requests.

I have nothing against social networking in general, but I decided to find out more. So I went ahead and began the sign-up procedure on one of those sites, and there it asked me for my email address and the password for that email account. “What the…!” That left me a little stumped. The official reason given was “Automatically link with all your friends”, which is a euphemism for, “We will open your email, go through your address book (and emails), find out who you have communicated with since the dawn of time and then send them all invitation requests.” Wow! Do you realize what just happened there? You handed them your email address and password. Their spider probably went through every email and contact you ever had and sent each of them an invitation request on your behalf. Is it just me who sees a problem here? Hell, I wouldn’t want someone sending emails on my behalf to every Joe and Jane I have ever had contact with! My God, that would be a catastrophe 😉 . In simple terms it means, “Dude! Your email and its password are with them.”

OK, let me make it clear that this happens on some sites, not all. So the point of this blog entry is to make people aware (and it would seem most aren’t) of what takes place behind the scenes when you fill in that little email box and punch in its password. Read their privacy statements; no explicit guarantee of the safety of your data is made. Data exposed to any organization could be used to build a profile of you, your habits and the people you communicate with. Again, I am not saying they will or are doing it; I am trying to make everyone aware of what could happen. Fear not, if you have been a victim of this, it’s just a matter of changing the passwords on your email accounts. Do it now. Oh yes, and don’t be crazy and use your work email on any of these sites, ever!

Larrabee isn’t necessarily a means to a custom graphics API.

Has the graphics world come full circle now that we have seen Intel’s first tech presentations of Larrabee? Will we see a resurgence of people writing custom software rasterizers? Is the heyday of the GPU truly coming to an end? Are APIs like OpenGL and Direct3D going to become redundant? I have seen these and a lot of similar questions being asked over the past couple of days, with people even going as far as saying that technologies like Larrabee could be used to write custom graphics APIs. This has been, in part, due to the huge emotional response to the OpenGL debacle a couple of days back, and partly due to the fact that Intel recently unveiled portions of its (up until now mysterious) Larrabee technology. Some people seem to have drawn the conclusion that soon we may not require the currently used graphics APIs at all. Larrabee does promise freedom from the traditional hardware-based approach. Rendering APIs today are closely tied to the underlying hardware, and the graphics programmer using them is thus limited to what the hardware offers.

Technologies like Larrabee do offer immense flexibility and power. There is no doubt in my mind that, if needed, one could create a custom graphics API on top of them. Unfortunately, writing custom APIs might not be the answer, or even an option, and there are good reasons not to do it. The first, and probably what people see as the less important reason, is the fact that APIs like OpenGL and Direct3D are standards, and it is not advisable to dismiss them outright. What if code needs to be ported to platforms where Larrabee is not available? How would a custom API scale to that hardware? One could argue that you could get more performance by cutting out any layer that sits in between and accessing the Larrabee hardware directly. Call me a skeptic, but I see issues here as well. It may be very easy to hack up a simple rasterizer, but it’s a completely different thing to produce a vector-optimized one, even for a technology like Larrabee. It’s not a trivial task even with the best vector-optimizing compilers from Intel. I would lay my bets on the star team working at Intel producing a better rasterizer than I probably can, and I am pretty sure that rasterizer will be exposed via Direct3D and/or OpenGL interfaces. Yes, you could probably make certain specific portions of your engine highly optimal using the generic Larrabee architecture, but a custom rendering API may not necessarily be the best option.
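
To make that concrete, here is roughly what I mean by “hack up a simple rasterizer”: a brute-force, bounding-box, edge-function triangle fill of the sort anyone can write in an afternoon. This is purely an illustrative C++ sketch (the vertex and frame-buffer types are made up for the example); the hard part, and the part I would happily leave to Intel’s team, is turning loops like this into tiled, incrementally stepped, fully vectorized code.

    // Naive triangle rasterizer: test every pixel in the bounding box against
    // the triangle's three edge functions. Simple to write, hard to make fast.
    #include <algorithm>

    struct Vec2 { float x, y; };

    // Signed area test: > 0 when p lies to the left of the directed edge a->b.
    inline float EdgeFunction(const Vec2& a, const Vec2& b, const Vec2& p)
    {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // Fills a counter-clockwise triangle into a 32-bit frame buffer.
    void RasterizeTriangle(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                           unsigned color, unsigned* frame, int width, int height)
    {
        // Clamp the triangle's bounding box to the frame buffer.
        int minX = std::max(0,          (int)std::min(v0.x, std::min(v1.x, v2.x)));
        int maxX = std::min(width  - 1, (int)std::max(v0.x, std::max(v1.x, v2.x)));
        int minY = std::max(0,          (int)std::min(v0.y, std::min(v1.y, v2.y)));
        int maxY = std::min(height - 1, (int)std::max(v0.y, std::max(v1.y, v2.y)));

        for (int y = minY; y <= maxY; ++y)
        {
            for (int x = minX; x <= maxX; ++x)
            {
                Vec2 p = { x + 0.5f, y + 0.5f };   // sample at the pixel centre
                float w0 = EdgeFunction(v1, v2, p);
                float w1 = EdgeFunction(v2, v0, p);
                float w2 = EdgeFunction(v0, v1, p);
                if (w0 >= 0.0f && w1 >= 0.0f && w2 >= 0.0f)   // inside all three edges
                    frame[y * width + x] = color;
            }
        }
    }

A production-quality rasterizer would step the edge functions incrementally, process pixels in 4- or 16-wide SIMD groups, interpolate attributes with perspective correction, and do all of it per tile to stay cache-friendly; that is where the real engineering effort goes.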

As a piece of technology Larrabee is very interesting, especially for real-time graphics. For the first time you will have the capacity to be truly (well, maybe not completely) free from the shackles of hardware. There are so many more things you could accomplish with it. There are other things you could use Larrabee for too, like parallel processing or running intensive, highly vectorized computations very efficiently.

OpenGL 3.0 is finally released, and it disappoints.

The ARB has released the much anticipated OpenGL 3.0 spec, and if you have been following OpenGL’s development for some time, you would know that hopes were riding high that OpenGL 3.0 would be a revolutionary redesign of an ailing and rather old API. Apparently it’s none of that, and even worse, it’s actually nothing at all. OpenGL has been dragging along for the past 15 years, adding layer upon layer of messy extensions, to the point that many expected the ARB to really go ahead and make radical changes in the 3.0 specification. None of that has happened. Most of the radical changes promised have not been delivered. All that seems to have happened is the standardization of already existing extensions by making them part of the standard. Sad, really sad.

As a game developer, and more so as someone who has been using OpenGL for the past 8 years, I am pretty disappointed. I was hoping to see a refreshing change to OpenGL. I am at a loss for words here; no, really, I am. There is really nothing more to say. The changes are so shallow that I wonder why they called for a major version number change in the first place. 2.1 to 3.0, phooey, it should have been 2.1.1 instead. Let me put it another way: my current OpenGL renderer, which is based on OpenGL 2.x, could probably be promoted to 3.0 with 4 or 5 minuscule changes, or maybe none at all! Where is the Direct3D 10+ level functionality that was hyped? Where is the “radically forward looking” API?
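
To put that in perspective, here is a minimal sketch (assuming a Windows setup and a driver exposing the new WGL_ARB_create_context extension; the token values below are taken from that extension’s spec and are normally found in wglext.h) of essentially the only new code the jump requires: asking the driver for a 3.0 context instead of a legacy one. Everything else in a 2.x renderer can stay exactly as it is.

    #include <windows.h>
    #include <GL/gl.h>

    // Tokens from the WGL_ARB_create_context extension (normally in wglext.h).
    #define WGL_CONTEXT_MAJOR_VERSION_ARB 0x2091
    #define WGL_CONTEXT_MINOR_VERSION_ARB 0x2092

    typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARB)(HDC, HGLRC, const int*);

    // Upgrades an existing legacy (2.x) context to an OpenGL 3.0 context.
    HGLRC CreateGL3Context(HDC hDC, HGLRC legacyContext)
    {
        // The extension entry point can only be queried while a context is current.
        PFNWGLCREATECONTEXTATTRIBSARB wglCreateContextAttribsARB =
            (PFNWGLCREATECONTEXTATTRIBSARB)wglGetProcAddress("wglCreateContextAttribsARB");
        if (!wglCreateContextAttribsARB)
            return legacyContext;               // no 3.0 support; keep the old context

        const int attribs[] = {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
            WGL_CONTEXT_MINOR_VERSION_ARB, 0,
            0                                   // attribute list terminator
        };

        HGLRC gl3Context = wglCreateContextAttribsARB(hDC, 0, attribs);
        if (!gl3Context)
            return legacyContext;

        wglMakeCurrent(hDC, gl3Context);        // switch over and drop the old context
        wglDeleteContext(legacyContext);
        return gl3Context;
    }

And that really is the point: the “new” API is the old API with a different context-creation call.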

What does this say for the future of OpenGL? Sadly not very much, at least in the gaming arena. It was already losing ground, and there was a lot of anticipation that the ARB would deliver a newer OpenGL to “take on” Direct3D. I must say that a powerful Direct3D (thanks to DirectX 11) looks all set to become the unequivocal champion when it comes to gaming graphics. OpenGL will clearly take a back seat to DirectX here. While some may argue that OpenGL will continue to flourish in the CAD arena, I am not so sure Direct3D won’t find favor there as well. OpenGL drivers from most vendors already fall short of their Direct3D counterparts. That’s to be expected, and it’s not their fault either. What else can they do when you have a 15-year-old API to support whose legacy functionality is out of touch with modern-day reality?

EDIT: The major thing missing from OpenGL 3.0 is a clean API rewrite. When you compare OpenGL 3.0 with Direct3D 11, it’s how things look from here forward that bothers me. Direct3D is more streamlined to address developments in hardware, and while vendors could expose similar functionality via OpenGL using vendor-specific extensions, the whole situation doesn’t look too good. Making a driver that is fully OpenGL compatible will cost more in terms of manpower, simply because the specification is so large. Yes, there is an opportunity to deprecate things, but I am not too sure how that will pan out either. Supporting older features on newer hardware means compromises and sacrifices in quality and performance. Driver writers cannot optimize for everything, and that is why, in the end, performance suffers; or, in the worst case, drivers ship broken.

It’s true, trueSpace is indeed free.

Update: Microsoft has taken down the Caligari website and terminated trueSpace. Don’t bother looking for it, trueSpace is dead. If you are looking for a free, powerful 3D modeling package, try Blender 3D.

I couldn’t believe it at first, but after the acquisition of Caligari, Microsoft has released the fully featured 3D authoring package trueSpace for free. Simply put, trueSpace is a 3D modeler, and it seems a pretty good one looking at the features it supports. It may not dethrone Maya or Max anytime soon, but for nada it packs a lot of punch, especially if you are an indie game studio or a budding 3D artist and don’t have the finances to invest in something along the lines of the top modelers mentioned above. I am not saying trueSpace is the best; quite frankly, I haven’t even given the package a complete look-through yet. It takes a considerable amount of time and a sizable investment of effort to fully grasp any 3D authoring package, and probably a lot more before you can become truly productive with it. trueSpace is no different. I haven’t personally modeled anything with it yet, nor do I currently have the time to invest in such an endeavor (maybe after the game ships).

However, from the looks of it, a free trueSpace is something that can’t be ignored. The next thing I wanted to find out was whether the modeler could be integrated into a game’s development cycle. That would require the package to have some sort of scripting system and/or an SDK through which custom export scripts and engine functionality can be hooked into the authoring system. Browsing the website, it looks like C/C++ SDKs and Python scripting are in fact offered by trueSpace. Again, I haven’t had a good look at them, but the fact that they are there should be reason enough to take a look if you are interested. The most important factor in selecting any authoring package is the availability of tutorials, and that’s another reason trueSpace stands out: the documentation and video tutorials are made available along with the package. Yes, I know it seems too good to be true. Video tutorials are invaluable when learning any 3D modeling package. I remember, years ago, it was video tutorials that really got me going with Blender. While my 3D skills leave a lot to be desired, most of the current game wouldn’t have been possible without those tutorials.

All of the above points make trueSpace a serious option to consider if you are a beginner or an indie game developer. While not the best, trueSpace is very attractive given its feature set and its price (which is zero). To be fair, I have only given the package a fleeting glimpse, and that’s not how I would like to evaluate trueSpace, or any 3D package for that matter. So make your own assessment of trueSpace’s strengths and weaknesses by using the package yourself. I would recommend having a go at the videos and tutorials first.

Doofus gets a dose of Optimizations.

Ah! It’s the optimization phase of the project and I am knee deep in both CodeAnalyst and NVIDIA PerfHUD. As far as memory-leak testing goes, most, no, all of it is done by my own custom memory manager built directly into the engine core, so no third-party leak detectors are needed by the game. AMD’s CodeAnalyst is an invaluable utility when it comes to profiling applications for CPU usage, and the fact that it’s free makes it even better. NVIDIA PerfHUD is probably the champion among graphics performance utilities and is, I think, vital when it comes to bulletproofing any graphics application for GPU performance. Too bad it doesn’t support OpenGL yet, but the O2 Engine’s renderers mirror each other almost to the point where a performance enhancement under the Direct3D renderer carries over almost identically to the OpenGL renderer. I would have really liked PerfHUD to support OpenGL though. There are some issues under GL; for instance, FBOs under OpenGL perform a tad slower than render targets under Direct3D (on the same hardware), which I must admit has left me a little dumbfounded. Maybe it is just my GPU (yes, my GPUs are a bit old, I must say), or maybe the drivers are at fault, but I have noticed a performance variance between the two even after considerable experimentation and optimization. It would have been good to have a utility like PerfHUD to probe directly at the draw calls and/or FBO switches. I am trying my luck with GLExpert, but I am not there yet. I must say, however, that GLExpert is nothing compared to PerfHUD.
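
The memory manager itself probably deserves a post of its own, but the core trick behind in-engine leak tracking is simple enough to sketch. The following is a minimal, illustrative C++ version of the general technique (not the O2 engine’s actual code, written in 2008-era C++03 style and deliberately not thread-safe): overload the global allocation operators, record every allocation in a table, and dump whatever is still in the table at shutdown.

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    namespace
    {
        struct AllocRecord { void* ptr; std::size_t size; };

        const std::size_t kMaxRecords = 65536;   // fixed table: no allocation inside the tracker
        AllocRecord g_records[kMaxRecords];
        std::size_t g_recordCount = 0;

        void Track(void* p, std::size_t size)
        {
            if (g_recordCount < kMaxRecords)
            {
                g_records[g_recordCount].ptr  = p;
                g_records[g_recordCount].size = size;
                ++g_recordCount;
            }
        }

        void Untrack(void* p)
        {
            for (std::size_t i = 0; i < g_recordCount; ++i)
            {
                if (g_records[i].ptr == p)
                {
                    g_records[i] = g_records[--g_recordCount];   // swap with the last entry
                    return;
                }
            }
        }
    }

    void* operator new(std::size_t size) throw(std::bad_alloc)
    {
        void* p = std::malloc(size ? size : 1);
        if (!p) throw std::bad_alloc();
        Track(p, size);
        return p;
    }

    void operator delete(void* p) throw()
    {
        if (!p) return;
        Untrack(p);
        std::free(p);
    }

    // Call at shutdown: anything still recorded here was never freed.
    void ReportLeaks()
    {
        for (std::size_t i = 0; i < g_recordCount; ++i)
            std::printf("Leak: %u bytes at %p\n",
                        (unsigned)g_records[i].size, g_records[i].ptr);
    }

A real engine version would also cover the array forms (new[]/delete[]), record file/line information via a macro, and use a proper hash table with locking, but the principle is the same.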

AMD CodeAnalyst

Doofus running under NVIDIA PerfHUD

DirectX 9 to DirectX 11, where did 10 go?

This week there was a lot of buzz about DirectX 11. Yes, the newest version of the graphics API was unveiled by Microsoft at the XNA Gamefest, and it has an interesting feature set that, I think, was long overdue. Most of DirectX 11 doesn’t diverge from version 10 (and the almost uneventful version 10.1), but I think DirectX 11 should see renewed interest from game developers since it provides features that were desperately needed in light of recent hardware developments. Version 11 (which of course includes the features of 10 and 10.1) now looks like a more complete API for addressing issues related to game and graphics development, and a better solution for the future.

What is really interesting is the emergence of what Microsoft terms the “Compute Shader”, no doubt marketing speak for GPGPU, which they claim will allow the GPU, with its awesome power, to be used for “more than just graphics”; which smells a lot like CUDA (Compute Unified Device Architecture) to me. I wouldn’t be surprised if both turned out to be very similar (remember Cg/HLSL). In any case, what is important is the fact that such technology will be available to game developers under version 11. Technologies like CUDA (GPGPU) are the need of the hour, and this could be the reason version 11 sees a lot more interest than the earlier (10.x) versions did.

There is a lot of talk about hardware-based tessellation, but frankly I haven’t seen too many details on it, at least not enough to make a detailed comment. From what little is being said, DirectX 11 hardware-based tessellation could be used to make models appear “smoother”. How this ultimately translates into an actual implementation will become clear when more details come out. I am hazarding a guess here, but it should be something along the lines of a technology that allows sub-surf LODs to be calculated in real time and/or displacement/bump/normal mapping to be done on the fly. I am not too sure yet, but it could be something along those lines, something in between, or a combination of those techniques. Whatever it is, it should mean really good-looking games in the future.

Issues like multi-threaded rendering and resource handling were a long time coming, and yes, it’s a good thing we will finally see them in the newer version. It just makes my job as a game developer a whole lot easier. Most details on Shader Model 5.0 are pretty sketchy, so I won’t go into things like shader length and function recursion. However, I hope such issues are addressed satisfactorily in the newer shader model.

So will DirectX 11 succeed where DirectX 10 failed? Will it get mass adoption like DirectX 9? Difficult to say. While most cutting-edge games have adopted DirectX 10, its usage remains low because of several factors. For one, many people still use XP, which (for whatever reason) doesn’t support version 10 or greater of the API, which means most developers have to adopt the lowest common denominator of the alternatives available, and that generally is DirectX 9.0. Also, many people still don’t have DirectX 10-class hardware, which is another reason not to go for 10.x. The situation with DirectX 10.1 is a total mess. Interestingly, there is even talk that NVIDIA might skip 10.1 entirely and aim directly for version 11-class hardware. There is logic to that decision, given that most games (except the really high-end ones) don’t even bother with DirectX 10, let alone 10.1. All this makes adoption of 10.x an unattractive proposition for game developers.

Version 11 does bring some really good features to gaming in general, but that is not necessarily the reason the API will succeed. As a game developer, I think version 11 holds some serious promise and could be a success if Microsoft plays its cards right. However, there are some issues (mentioned above) that still bother me. Microsoft is still fixated on releasing version 11 only for Vista, so don’t expect your XP machines to ever run DirectX 11, even if you buy brand new hardware. That said, like most previous versions, DirectX 11 is backward compatible with versions 10 and 10.1, and even 9.0. It would be impossible for Microsoft to ignore the thousands of games that already use DirectX 9, so it’s almost a given that newer versions of the API will continue to be backward compatible until a sizable number of games diverge to newer versions, and that could be a long way away, since many games even today are still being built on version 9.0.

XP is dead, long live ???

Yes, the Windows XP OS is dead. Today is the last day the OS will be officially shipped by Microsoft, and I can’t help but feel a tad bit sad that it has finally been put down. I have used the OS for a long time, too long I must say, and it did the job pretty well. I hate to see it go like this, especially when it is at the top of the table and probably giving its younger, flashier brother a run for its money. But like all good things, this too must come to an end and balance must be restored. It’s a pity the way things panned out, and though there have been rumors that the OS might have one last gasp left and that the “cruel” echelons of power at Microsoft will indeed put the aging OS on a ventilator, those are just that, rumors.

While the OS will remain an integral part of my desktop for some more time, it seems the time has come to say adieu to XP after all. Let’s see what really ends up taking its place; for the time being Xubuntu seems to be the most likely contender, but you never can tell.

Wanted: More than a simple Add/Remove.

There are some flawed assumptions in the Windows vs Linux debate, and one of them is, “It’s easier to install applications on Windows than it is on a Linux distro.” A few weeks back I was attending a seminar with some rather uninteresting technical presentations. That’s beside the point; what is the point is that in one of the presentations the speaker actually stressed that Linux is difficult for mass adoption because it is very awkward for a new user to install applications on it. That’s laughable, because it’s clear the speaker has neither done his homework nor has any experience with a modern Linux distro. This archaic argument has its roots in the time when you had to compile almost everything under Linux to get it to work. Although *NIX veterans may still do that, for most of us times have changed. On the contrary, I would argue the opposite: installation on Linux is steadily becoming easier, and in some cases it’s almost trivial. Unfortunately the presentation did not offer a Q&A session (strange, I know), or I would have had some “really good questions” for that particular speaker. Anyway, I have my blog to rant on 😀 .

After using Ubuntu for about 9 months now, I have grown extremely fond of the Synaptic package manager. While there are other package managers under other distros (and I don’t want to belittle any of those), what would really be interesting is to see something similar on other operating systems, maybe Windows too. For those who have little clue as to what Synaptic does, and for the Windows(-only) users: Synaptic is the Windows Add/Remove feature on steroids. It goes a step further and combines some very crucial functionality that is not present in the normal Add/Remove. Contrary to what was said and is popularly believed, Synaptic is so much more than a simple add/remove. It manages the download, setup and full installation of an application or library, including automatic handling of its dependencies, with the click of a button. It’s almost a no-brainer. All available applications are listed and categorized on the distro’s servers, and you can use Synaptic to query and search them as required.

Synaptic package manager.

One of the greatest strengths of Synaptic is probably the categories and filters it allows on installed and installable packages. It lets the user browse through all packages in a particular category, and thus see a variety of similar or related packages, before deciding to install or remove a particular application or library. To a veteran Debian and/or Ubuntu user this may seem trivial, but it is not. On platforms like Windows, where such a facility is unavailable, hunting down applications often means a trip to Google. Now there is nothing wrong with that. However, very rarely do Google results throw up exactly what is required, unless of course you are an “absolute nerd” at search-engine queries, or you are extremely lucky. Oftentimes it’s only through a lot of query refinement that you get to the results you require. Queries like “comparison of paint applications”, “best photo editing software” and “list of best mp3 players” are all too common. This doesn’t always give you what you are looking for and may not provide the best possible alternative out there. For example, a query for “best photo editing software” returns links to reviews, and it’s only after some refinement that you really get to a software download. Under Synaptic, it’s just a matter of a simple search. I did a “C++ IDE” search under Synaptic and it returned a list of available IDEs in a snap. Everything from Code::Blocks and Eclipse to Anjuta was listed. All I had to do was right-click and install the one I wanted, and Synaptic took care of all the dependencies and every other headache.
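
As an aside, Synaptic is essentially a friendly front end over APT, so the same search-and-install flow can be reproduced on the command line on any Debian/Ubuntu system (the search term and package name below are just examples along the lines of the IDE search above):

    apt-cache search "integrated development"   # search package names and descriptions
    sudo apt-get install codeblocks             # install one hit; APT pulls in the dependencies

Either way, the dependency resolution and the actual download happen without the user ever visiting a website.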

Synaptic is interesting, but like most things with Linux, it is distro-specific. You will find Synaptic on most Debian-based distros. Ubuntu takes it a step further and also features a separate, interactive Add/Remove tool where the user can browse entire categories of applications with a brief explanation of each. (I am not too sure which other distros support Synaptic or how far it works with other packaging systems like yum.) However, there is an interesting project in the works called PackageKit. While it looks very much like Ubuntu’s Add/Remove feature, it also works on other distros, and in fact works towards “providing a common set of abstractions that can be used by standard GUI and text mode package managers”.

There are a lot of software applications out there. So many, in fact, that you would probably never know most of them existed: you never go out and look for them, and even if you did, you would still miss most. Which is exactly why applications like Synaptic and PackageKit go a long way in advertising these apps. I would say such functionality is a feather in the cap of distros like Ubuntu and Debian. It actually expands the distros by giving them a reach beyond a normal CD/DVD install and frees the user from the shackles of “using only what the OS provides”.

Opera is impressive, but back to Firefox.

Ah, for the last 4 days I have been using Opera, and that’s after a pretty long time. I must admit, for the past 5 years I have been a loyal Firefox fan. There was a time when Opera used to be my browser of choice, but somehow Firefox managed to squeeze it out of the number one position. However, the recent hype around Opera 9.5 was just enough to pique my interest in the browser once again. I decided to give it a try and found Opera to be surprisingly good. The interface is more streamlined and the browser gives a lot of screen real estate to work with. No doubt other people have mentioned the very same points.

Firefox 3 was also released about 2 days back, so one can’t help but compare the two. Actually, I have been using the Firefox 3 betas for the past couple of months, so the final release added very little to what I was already using. True to its tradition, Firefox 3 has been an excellent release, at least for me. Firefox 3 also boasted a considerable list of new features, and I must say it does deliver on them. Oh yes, and there was an equal amount of hype, if not less, surrounding the Firefox 3 release.

So what’s the truth? Which browser is better? There are things both browsers have and don’t have. I would have loved to have a speed dial in Firefox by default. I know there is a plug-in for that, but it’s such a nice feature to have. For its part, Opera should have something like NoScript (by default); that thing has saved me countless times. Then again, Opera’s Dragonfly is equally impressive. Firefox also has a lot more plug-ins, and using Opera just makes me miss all of them. I am not too big on themes, so those don’t bother me one way or the other. Apart from these and other small things, most of the hype created around the release of both browsers is largely unwarranted. Neither browser brings in revolutionary changes, and neither is deficient in any particular area.

For me the most important factor in adopting any application, particularly one that I use on a daily basis, is “productivity”. It’s “how fast can I adapt” and “how fast can I get things done”. It’s also about “how much does it nag” and “how annoying is it to use”. That’s why I generally take a considerable amount of time deciding whether an application is worth the effort to switch to and adapt to. Browsers absolutely fall in that category.

I guess most of the people who surf the net, of course myself included, are similarly fanatical about their choice of browser. While Opera is going to stay on my box for some more time, Firefox retains the numero uno position as far as I am concerned. Not because it is revolutionary or because it is the best. It’s because, well, I am simply used to it!

A long silence.

OK, I have been silent on the blog for some time now. I know, I know, but there were some pressing issues that needed 200% involvement, both on the game and elsewhere. So it has been an unusually quiet two weeks. Actually, I have a list of posts that I am halfway through and haven’t been able to finish and/or polish up for publishing. I hope the next few weeks won’t be this quiet, since I see the work easing up from now on. Also, the game is mostly there. Even though beta testing continues, no breaking bugs have been reported thus far.