It’s been some time since the movie was released, but I only managed to watch Avatar yesterday. OK, before I proceed, let me put up a “spoiler alert”. If you haven’t seen it yet, go watch the movie and then read the rest of this entry.
I would describe the movie as, “great graphics, superbly imaginative environments, great blending of live actors and CG, but a rather bland and ordinary storyline”. The movie is a visual feast, but the story itself is rather dull and predictable. Throughout the movie you can almost sense what’s going to happen next, and that’s exactly what happens — leaving little room for mystery. I am a James Cameron fan (who isn’t?), but in most of his movies he finds a subtle and uncanny way to weave a wacky (but believable) story around the whole action-movie concept. Unfortunately, Avatar doesn’t quite have all of that.
The whole dull-story thing, however, can be easily forgiven, given that most of the time is spent admiring the visual effects, graphics and stunningly beautiful environments modern CG can achieve. I found the movie rather enjoyable. I guess Avatar is natural fodder for a 3D graphics geek like myself, but apart from that the movie does an excellent job of blending graphics with real-life actors. You would be forgiven for mistaking CG for reality, especially when live human actors interact with CG actors and the environment. I was doubly interested in how the environment behaved in response to the human actors’ actions. The most difficult part of compositing a 3D CG environment with real actors is the interaction of the CG elements with the actors. The subtle swish of the grass when an actor runs, the rustle of the leaves when an actor goes through a bush: these are the small things that make a CG scene believable. My hunch is that all of this was done and captured in real time in a 3D studio environment.
The truly spectacular achievement of the movie/technology, and the one that impressed me the most, is the facial animation. Any computer-modeled facial animation is bound to be hit by the uncanny-valley effect, but in Avatar the facial expressions, though not flawless, do mark a turning point (no, they are not fully human-like, but they are definitely believable). The technological achievement is commendable, and some critical reviews don’t do justice to what is a pretty good effort on the part of the CG team. I know how hard it is to build a seamless facial animation system (I am working on one myself), and the movie’s mocap technology for simulating facial movements does bring in a lot of realism. A lot of ideas there for future gaming projects.
I am pretty impressed with the movie as a whole. Yes, it has a linear and ordinary story, but it does push the envelope in CG technology. The graphics are stunning, but what is more interesting is the composition of graphics and human actors. For me the facial animation was probably the best part. It’s not a new idea, i.e., capturing a live human actor’s facial movements on a CG character, but Avatar does it so very well.
Best wishes and a happy new year to all.
Hmm… I am disappointed (story). No, I wasn’t expecting the first versions of the technology to be a game changer in graphics, or for that matter in the HPC or compute world, but I was very, very interested in knowing more about the Larrabee technology. Thus far Intel has only thrown out “bits and pieces” about their new tech, and that in no way gives one a clear picture. No, Intel hasn’t given up on the technology, but it seems to have postponed the release in its current form because the performance targets weren’t being met. Ironically, Intel had initially claimed that Larrabee chips would stand up to discrete solutions from ATI and Nvidia. However, it looks like the tech still needs some work to measure up to that.
At this point all we can do is speculate, but the fact is — building a chip that can do HPC, compute and graphics, and have the driver/software/optimizing-compiler stack working perfectly, is a tall order, even for a giant like Intel. I am sure they have done most of it right, but most of it isn’t all of it, and that’s probably the reason we are seeing the launch canceled in its current form.
Many-core computing is the next big thing, and technologies like Larrabee are the future. I am disappointed because, more than the tech itself, Larrabee would have been a window into how things are shaping up. How does software development scale to the future? Would the new optimizing compilers allow the use of current software methods? Or does it mean a radical shift in the way software systems are built? How would the new tech address task parallelism? — I guess we will have to wait a while longer to see how these (and I am sure many more) questions are answered.
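Task parallelism, for the record, means running different, independent jobs concurrently, rather than splitting one job across many cores (data parallelism). A minimal sketch of the idea in Python — the task functions here are purely illustrative placeholders, and have nothing to do with Larrabee's actual programming model:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical, independent jobs standing in for real work.
def simulate_physics(step):
    return step * step        # placeholder computation

def update_ai(step):
    return step + 100         # placeholder computation

def run_frame(step):
    # Task parallelism: two unrelated jobs dispatched concurrently,
    # as opposed to data parallelism (the same operation over many elements).
    with ThreadPoolExecutor(max_workers=2) as pool:
        physics = pool.submit(simulate_physics, step)
        ai = pool.submit(update_ai, step)
        return physics.result(), ai.result()

print(run_frame(3))   # (9, 103)
```

How a many-core chip and its compilers would schedule this kind of heterogeneous work automatically is exactly the sort of question Larrabee might have answered.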
OK, I have been off the radar for too long. Unfortunately I have been plagued by some hardware problems and have had limited access to the Internet. Most of it is resolved now, so it’s back to normal from today.
It’s almost time for Windows 7, and along with that the first lot of DirectX 11 class hardware has started to appear. This time the first off the block was, surprise surprise, ATI. The 5800 series cards were released a couple of days ago and there are already impressive reviews of the new cards all around. I am sure it won’t be long before Nvidia, which has been uncannily silent, comes out with its line-up. So it is safe to assume that there will be DirectX 11 class hardware on the shelves going into the Windows 7 release (the Windows 7 RC already has DX 11 support, which will also be available for Vista soon). It will, however, still take a few more weeks for the initial euphoria to settle, and we should see prices of the cards drop around the holiday season, which is probably when I will go in for an upgrade as well. I have been running the HD 4850 for some time now, and thus far it’s proving to be sufficient, not only for gaming but also for my programming needs. The HD 4850 has been surprisingly good given its price point, and one would expect the same from the 5800 series given the already positive reviews.
There are a couple of things in favour of DirectX 11. The first is the API itself. DirectX 11 offers more than just a simple evolutionary upgrade (more here). DirectX 10 was mostly a non-event. The enormous success and longevity of XP and the Xbox 360 ensured that version 9 of the API far outlived most expectations (and probably will continue to live for some time to come). The story of DirectX 10 is also intrinsically connected to Vista. Vista’s low adoption meant not enough people were running a DirectX 10 capable software platform, and Microsoft stubbornly refused to port the API to XP for whatever reasons. Even though DirectX 10 class hardware was available during Vista’s reign, nagging hardware issues and poorly implemented drivers meant DirectX 10 never really caught on like 9 did.
That brings us to the second point in favour of DirectX 11 — Windows 7. XP is old, and I mean seriously old. I am still running a 2004 copy of XP on my machine and, though it’s doing its job admirably, it’s due for an upgrade. Windows 7 seems to have gotten over those little annoying quirks of Vista which we hated and shouted so much about. My hunch is that most people who have stuck with XP will probably upgrade too. Maybe not on immediate release, but 2-3 months down the line, when things settle in, after those initial bugs have been addressed and more and more reviews of the OS come out, 7 should slowly see wider adoption. With Vista it seemed like things were rushed and hyped up. In contrast, Microsoft has been careful with Windows 7. The RC of Windows 7 has been somewhat of a “soft launch” and, though I haven’t had the chance to try it out myself, it would seem (from reviews and from what people are saying) that Windows 7 is much better off than Vista was. So it’s fair to assume that 7 will catch on more than Vista did, and in the process DirectX 11 will “get on” to the desktop.
Does that mean DirectX 11 will be the de facto API for coming games? For that, let’s look at the games developed today. Yes, most of the games developed today are still developed primarily for DirectX 9.0 class hardware. Why? Consoles, that’s why. You do see AAA titles advertise DirectX 10 and 10.1 support, but even those games are developed with DirectX 9.0 class hardware in mind. Yes, some features here and there (usually eye-candy to impress your overzealous graphics fanboi) can be found, but the engine and tech itself is designed for platform compatibility. Which, ironically, means not all the features of the newer DirectX versions are exploited. As I said before, DirectX 11 is more than just a simple upgrade to the API; it’s also a new way to do things. But since the older hardware still has to be supported, compromises have to be made. There are probably no AAA titles exclusively for the PC, so even if PCs everywhere had DirectX 11 support, it’s not until the consoles catch up that you will see all the cool things the newer API has to offer come to the fore.
There is little doubt that version 11 of the API will make games look better. But there is so much more to it than just improving how games look. Many of the features in the new API mirror hardware changes that have taken place, like the move away from the fixed-function pipeline and the evolution of GPUs into massively parallel compute devices. All of this means that DirectX 11 is an API to take seriously. But how quickly will games start using all these features? I guess only time will tell.
It’s interesting, but our good old menu-bar, yes, the one with “File Edit View …” on it, is slowly disappearing from some of the most popular software apps. Today I happened to wander across to the Firefox proposed UI design page, and the thing I immediately noticed was the absence of the menu-bar in the new designs. Good riddance! No, seriously, how often do you use the browser menu-bar? For me the browser is the second most frequently used application, and 99% of the time I never bother with the menu-bar. I, however, would love every millimetre of screen space while browsing and am more than happy to see the menu-bar go.
There have been some subtle changes in UIs over the past couple of years. No, I am not talking about glass effects and pretty pictures and icons; I am talking about design. Though not the first, it was MS Office that got the world’s attention by replacing the menu-bar with the “Ribbon Control”. A bold step, but the idea was to combine the tool-bar and menu into a single tabbed interface. To be honest, yes, the ribbon idea is cool; maybe not innovative, but definitely cool. The interface had mixed reactions initially, but as people got more and more familiar with it, things started to get comfortable, and soon other applications followed suit. At first glance, having no menu-bar is disorientating for a long-time computer user. I did find it a bit “unusual” to navigate my way around Office 2007 (the one and only time I used it). On the other hand, I never missed the menu-bar (even once) while using the Chrome browser. I guess the whole idea takes a bit of getting used to, but apart from that, I really did like the whole concept of combining the menus and tools into one single compact unit. Makes sense. Tool-bars, after all, only complement the menus. It is therefore logical that both be combined into one.
I personally feel this use of the tab interface is a step in the right direction when it comes to exposing complex UI. The resurgence of the tab control, and its innovative use in hiding complexity whilst combining two separate controls (the menu-bar and the tool-bar) into a single entity, is intuitive and resourceful. A similar usage of the tab control has also found its way into the mobile device world, where it is going to be the mainstay of the Moblin Linux platform. That is interesting indeed. So will we see our “Desktop Application Panels” and “Start Menus”, which are basically menus too, being replaced by tabs soon?
Holy cr*p! It’s been like 40 days since I posted anything! That’s probably the longest the blog has gone silent. Well, that’s partly because I have been working on a new game and, yes, a new game engine. The new game will not be a 3D game but a 2D one. No, I haven’t abandoned the O2 game engine; on the contrary, the O2 engine has had quite a few updates. So why another game engine?
Long, long ago (though maybe not far, far away), when I was designing/prototyping the O2 engine, I had plans for it to be both a 2D and a 3D game engine. Overambitious? Yeah, maybe. But tell me, who hasn’t had overambitious design plans? Anyway, during that time a lot of design went into the 2D part. I even went ahead and created a mock prototype of a 2D game-builder UI. In any case, that was soon put on the back burner once we started in earnest on the 3D game. The design, however, lingered in the archives until I had the time to revisit it after the Doofus game was completed. Instead of letting the design rot, I decided to pursue and finish the 2D part.
A couple of problems immediately arose when I took a second look. The O2 engine is designed to be a 3D engine, and a lot of abstraction has been put in to ensure that the engine does 3D properly and efficiently. That’s not really required for 2D. 2D is a lot less complicated than 3D, and most of the sections of the 3D game engine felt over-engineered when considered in the context of 2D. Most of the book-keeping required for 3D is downright unnecessary for 2D. In the end the two designs had a significant overlap, so I decided to rehash the 2D concept using a similar design but starting over from the engine core. Basically, that meant a rewrite of a significant section of the game engine code. To my surprise the process was completed far sooner than anticipated, and that too without significant regressions. Yes, the new engine uses a lot of the same code-base as the O2 engine, but the engine itself is very different from the 3D engine.
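To sketch the kind of split I mean (the class names below are purely illustrative, not the actual O2 code), the shared services sit in a common core that both engines reuse, while the scene handling is rewritten per engine; the 2D side drops the 3D book-keeping entirely:

```python
class EngineCore:
    """Services both engines share: resources, timing, logging, etc.
    (Illustrative sketch only; not the real O2 architecture.)"""
    def __init__(self):
        self.resources = {}

    def load(self, name, data):
        self.resources[name] = data
        return data


class Engine2D(EngineCore):
    """2D engine: a flat draw list sorted by layer. No scene graph,
    no frustum culling -- the 3D book-keeping is simply not needed."""
    def __init__(self):
        super().__init__()
        self.sprites = []               # list of (layer, name) pairs

    def draw_order(self):
        # Painter's algorithm: lowest layer drawn first.
        return [name for _, name in sorted(self.sprites)]


engine = Engine2D()
engine.sprites = [(2, "player"), (1, "background"), (3, "hud")]
print(engine.draw_order())    # ['background', 'player', 'hud']
```

The point of the split is that a rewrite of the scene layer (`Engine2D` here) can proceed without touching the shared core, which is roughly why the rework finished sooner than I expected.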
Next came the game. There was an old prototyped 2D game (more details soon) that had been lying with me for ages. Most of the game was polished up and ported to the new engine (lesson: never throw away your old experimental code; ever), and that’s about it. I am pretty happy with the results of my 2D experiment thus far. Completing a game is a complicated process. It requires a lot of things to come together and work correctly. True, 2D is a lot easier than 3D, but there is still work to be done finishing the game, and I hate to give out a release date just yet (for fear of being proved abjectly wrong yet again).
I was invited to the preview of the Alienware M17x, unveiled by Dell to cater to the high-end, hard-core gaming enthusiast. It was my first experience with the Alienware brand, though I have often read about other high-performance laptops from them. Dell is marketing the M17x as “The most powerful gaming laptop machine in the universe”. Well, that is probably correct; at least for now the machine is more than capable of pushing anything out there within its resolution limits. Alienware is known for its high-end machines, and this avatar in the Alienware series is no different, in keeping with the brand image.
The M17x packs some heavy-duty, top-of-the-line stuff in its guts, probably far more than what is required for a gaming notebook. Ergonomically the M17x is designed to please the hard-core gamer, and also to make a style statement. Complete with flashing lights, a multi-colored keyboard and scintillating sound, every effort has been made to attract your attention and everybody else’s. The whole laptop is designed to look different and will stand out from anything else in the room. If you want to show off your “new gaming laptop”, then the M17x is probably what you should be looking at.
No high-end gaming rig can be complete without a heavy-duty GPU, or should I say GPUs (plural), 2 in fact. The M17x comes with either dual Nvidia GTX 260M or GTX 280M GPUs in SLI. I would suggest the 280M. (Well, if you are going for a high-end gaming system, you might as well get a top-of-the-line GPU.) The 280M is, as of today, the highest-performing GPU for notebooks. The laptop comes fitted with the Intel Core 2 Extreme mobile processor, and you have the option of choosing a dual- or quad-core CPU. The choice of CPU will depend on the type of games played. Games like Oblivion and Fallout 3 are more CPU-intensive since a lot of data is streamed in real time, but in any case I don’t think there should be too many problems even with a dual-core CPU, since most games won’t go CPU-bound with a powerful GPU setup and fast 1333 MHz GDDR3 RAM. Again, if you are one to play at exceptionally high frame-rates and can’t tolerate even the slightest glitch, then by all means the quad-core option is also provided for the M17x.
While the M17x looks like a laptop, it’s actually a mobile desktop. Weighing in at more than 5 kg, it isn’t something you can lug around everywhere you go. The weight must be due to the 2 GPUs, the heat-dissipation hardware and the large battery needed for such a performance monster. The M17x is without a doubt a high-performance gaming rig. I personally tried pushing Crysis at 1440×900 at full AA and AF, and there were no visible hiccups or slowdowns; the gameplay was flawless. I bet it will be able to push every game out there without breaking a sweat. Too bad it only has a 17″ screen. For this kind of performance the 17″ screen looks a tad small. I would have loved to see a larger, higher-resolution display, but I guess the compulsions of space and laptop dimensions made 17″ the largest choice.
The only real niggle I found was the lights. At first glance you may (or may not) like the flashing lights and the multi-colored keyboard, but once you start using the machine the lights are nothing more than a distraction, especially in a fast-paced game. Well, you have the option to turn them off, so I guess that’s not too much of a bother. Also, the only real advantage of a dual-GPU setup is for systems with enormous resolutions (2560×1600) or for multi-monitor setups. SLI combos excel at very heavy fill-rates, and even at its highest resolution the 17″ screen isn’t quite in the league for dual SLI, considering that the machine already has the 280/260M. Having 2 GPUs instead of one also means the machine will generate quite a lot of heat, guzzle battery power and weigh substantially more than it would with a single GPU. However, the choice seems to have been made to please the hardest of the hardcore gamers out there. The M17x makes absolutely no compromises on performance, anywhere.
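A quick back-of-the-envelope comparison of the raw pixel counts involved (a rough sketch that ignores AA samples and overdraw) shows why dual SLI pays off mainly at huge resolutions:

```python
# Pixels per frame at each resolution (ignoring AA samples and overdraw).
laptop_panel = 1440 * 900        # the resolution I tested Crysis at
sli_target = 2560 * 1600         # the kind of display dual SLI is built for

print(laptop_panel)              # 1296000
print(sli_target)                # 4096000
print(round(sli_target / laptop_panel, 2))   # 3.16
```

A 30″ panel pushes roughly three times the pixels per frame, which is the fill-rate regime where a second GPU actually earns its keep; on a 17″ screen, a single 280M already has headroom to spare.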
Well, there isn’t much more to say about the machine. My experience with the rig was limited, but it is interesting that Dell has launched the Alienware brand in India. India is not known for its hardcore gaming enthusiasts, and you won’t find too many laptops built specifically for “the gamer”, at least nothing in the league of the M17x. Kudos to Dell for that.
Yes, the blog has been neglected again, but I have been very busy for a while. I have been pulling 3 projects in all for the last couple of months. Some consulting work got thrown my way, and in these times of crisis every bit helps (not that it was too much to begin with). Nothing related to games or 3D, but you can’t be too picky these days. I am glad it’s finally over. Not only has the blog suffered, but work on the engine has been on the slower side as well. The May and June targets for the engine tech haven’t been met, and I have a huge list of pending items to put in. Everything from soft shadows and deferred shading to integration of new Blender features has yet to be done. The 2D game has also been kept on the back burner, and I hope I can finish it soon so I can get back to the 3D engine at least by August. On the whole it’s been a very hectic 6 months or so. But consulting is a good way to sustain oneself, since games typically take time to develop and deploy, and in the meantime contract work can keep things rolling. OTOH, it can suck up time like a vacuum cleaner. Good and bad; one has to learn to balance these things.
Ah yes, I can get back to Fallout. It’s been ages since I fired that game up. Still so many games to catch up to. Haven’t even installed Left 4 Dead.
What! Google is releasing a new Chrome OS. Wow! Hold on there… Read carefully: it’s just another Linux distro, maybe with a new UI. But just read what’s written on the blog…
Google Chrome OS will run on both x86 as well as ARM chips and we are working with multiple OEMs to bring a number of netbooks to market next year. The software architecture is simple — Google Chrome running within a new windowing system on top of a Linux kernel. For application developers, the web is the platform. All web-based applications will automatically work and new applications can be written using your favorite web technologies. And of course, these apps will run not only on Google Chrome OS, but on any standards-based browser on Windows, Mac and Linux thereby giving developers the largest user base of any platform.
Emphasis mine. What about apps that already work on other standards-compliant browsers? I bet they will work too. I read that post and it made me smile; no, it made me laugh. Talk about marketing hype. Come on, it’s just the Chrome browser ported to Linux, probably using GTK+ or something else. But branding it as another OS, sheesh! At the very least Google could have called it a Chrome distro.
Well, the general move everywhere is away from the desktop to fully online apps, so it doesn’t come as a surprise to me. In fact, after ranting about web applications taking over the desktop just yesterday, this almost makes me feel vindicated!