Qt to go LGPL.

That’s really great news! Qt, the open-source and cross-platform toolkit/framework from Nokia (formerly from Trolltech), is going to be released under the more liberal LGPL license. What this means is that you can now use Qt in any of your projects provided you comply with the LGPL. Well, does it also mean that we could finally see all those wonderful KDE apps ported across platforms? I sure hope so. KDE is built on top of Qt and shares a lot with it, so it’s fair to assume that KDE too could benefit from this move.

Nokia states that having Qt under the LGPL will allow “wider adoption”, and it may very well turn out that way. The earlier GPL license was, in my opinion, hindering the adoption of the toolkit, so this is a welcome development indeed. Qt is a very polished GUI toolkit, there is no denying that; however, the license may not be the only reason why developers choose other toolkits/frameworks over Qt.

I have used Qt quite a lot in the past, both for commercial and open-source development. However, it’s been some time since I last dabbled with the toolkit/framework, and over the years I have slowly moved on to other toolkits like wxWidgets. I haven’t been too fond of Qt’s moc-compiler thing, which can be a pain to work with when the project size gets large. Having said that, one can’t dismiss the fact that Qt is probably the leading cross-platform toolkit out there. It provides a huge number of widgets and a myriad of functionality that would have to be rewritten or re-invented if one were to use any other toolkit. It offers a strong development environment and an equally strong GUI designer, something often missing in other toolkits. It has a proven legacy and is used by companies big and small for almost all types of GUI work.

Would I switch to Qt if it were LGPL? No, probably not. I am perfectly happy with the Code::Blocks and wxWidgets combo and I don’t see any reason to move to Qt. Most of my projects use fairly complex but consistent UI, and wxWidgets serves me well in that regard. The game builder I am currently working on fits nicely with the existing wxWidgets framework, and the toolkit offers me more than what I need. So I personally see no reason to switch.

Is it another year already?

A very Happy New Year to all. A bit belated I know, but I was kinda busy doing nothing. Well, not really. Yeah, I have been taking time off, but I was also busy with other activities, most importantly marketing the game.

So what was 2008 like? Well, for me it was pretty uninteresting. Not that I didn’t enjoy it, it’s just that there was precious little in the way of what I like to do best: research. Most of 2008 was spent on fixing bugs, play testing, hardware testing, level creation and solving some insanely complicated issues, issues that shouldn’t have been there in the first place, plus some unavoidable circumstantial problems. Most of the coding that was done was equally uninteresting. The majority of the time was spent on getting things working right with gameplay and design. Not the most pleasurable of things I must say, at least not for me. That said, a lot of groundwork has been done with respect to the engine, most of which will not have to be repeated for some time to come. So that’s a big positive, something I can take away from 2008 as being extremely productive.

Having said that, the biggest hit of the year for me is of course the release of the game, which took far more time than I had initially anticipated. True, it turned out OK (great 😉 ) given the budget, time and resource constraints, but I would have liked to do more. Maybe all that was missed in this one can quickly be added to the next one. A causal analysis is due, however I would like to hold on to that a bit longer, at least till we finish up with the final marketing parts, which I am currently focusing on. A part of last year was also spent in starting 3D Logic Software, and there were a lot of things that had to be done before we went online. Unfortunately they accounted for a pretty big delay in the launch of the game.

On the tech front, 2008 was equally low-key. Very little in the way of interesting developments. Most of the things that happened were evolutionary rather than revolutionary. On the OS front XP still rules and will probably do so in 2009 as well. However, the year belonged to the underdog, Apple. Both their OS and their products gained significant market share and will probably continue to do so in 2009. Linux has always been interesting and 2009 will be no different. Linux grows from strength to strength in some areas and remains the same in others. If anything, I am looking forward to Linux in 2009; there are some interesting developments on the horizon.

In 2008 we saw a resurgence of the GPU battles, with ATI throwing in some impressive technology, and that’s a good thing. For the first time I am an owner of an ATI card (HD 4850), and though NVIDIA held on to the top spot (barely), ATI was close behind, even edging out in front at times during the year. Then again, we can’t forget general purpose computing on the GPU. The year has been interesting for the GPU and GPGPU. Powerful cards with supercomputing capability were unveiled, and this year will see even more power being packed into cards as the GPU titans clash with better and more powerful weapons at their disposal. Oh, and let’s not forget Intel here. Intel finally unveiled Larrabee, so you could very well have another titan joining those battles.

Personal wish list for 2009.

  • Intel finally comes around to putting a proper on-board GPU with at least decent hardware T&L, and releases moderately good drivers.
  • Microsoft releases DirectX 11 for XP along with Vista and Windows 7.
  • OpenGL spec gets an overha….. well, forget it!
  • Linux gets a single package-management/installer system that everyone across the board adopts, and most importantly is easy to use and deploy.
  • The economic downturn ends.
  • All people in the world become sane and killing of innocent people stops completely.

That’s all for now 😀

Once again a Happy New Year.

Supercomputer@Home

The past few weeks have seen a spate of newswires from leading GPU manufacturers showcasing GPUs with potential supercomputing capabilities, some going as far as to say that a powerful GPU could be used to achieve the power of a supercomputing cluster. Building a supercomputer at home might sound like an impossible task, but the fact is, very soon you could well have one sitting on your desk. No, this is not one of those machines that is just a bit faster than the previous or current generation PC you already have. This machine could well have a mother lode of computing power, yet it might not feel all that fast while running your GNOME, KDE or Vista UI, or for that matter your Office applications. In fact it might not feel any different at all to the average user. However, hidden beneath the UI exterior will be a system that could be a power monster, at least for applications that have been designed to take advantage of what has become the latest buzzword in programming: parallel computing.

There has been a lot of talk about how a supercomputer could be built using top-line GPUs, but up until very recently GPGPU was a difficult thing to achieve. This was due to the fact that most GPGPU solutions in the past involved tweaking existing graphics APIs (OpenGL and Direct3D) for GPGPU tasks. This method was fraught with problems, mainly because graphics APIs were never designed with general purpose computing in mind. That’s why GPGPU technologies like CUDA were developed, and with their advent and an almost exponential increase in GPU power, the Supercomputer@Home has become a reality.

No, this is not another entry that praises GPGPU, well it is, but not entirely. Let’s be fair, GPGPU is only a part of the solution. As I have repeatedly said in my earlier entries, GPGPU has tremendous potential when applied to the correct problem. If you were to believe GPU manufacturers, it would seem that the GPU is the answer to all the problems out there. The fact however remains that the GPU is only a part of the solution. GPUs are designed to address data parallelism very efficiently. The roots of this obviously lie in how graphics is processed. Graphics, or should I say modern game graphics, generally require a lot of parallel data processing, and GPUs have evolved to address precisely that.

The current generation GPUs have immense computing power, and most of us already know that. However, GPUs are not all that great when dealing with task based parallelism. Simply put, GPUs are not designed to run several different tasks concurrently. So for problems involving a lot of separate tasks that can benefit from simultaneous execution, GPUs are of little help. However, if you have a large set of data made up of a lot of smaller data units, and want to perform the same operation on each of those units, then using the GPU can give you a huge performance boost. The solution would involve streaming the entire data set on to the GPU and executing the operation on all the units simultaneously. GPUs excel in such situations.
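To make that distinction concrete, here is a contrived little C++ sketch (not from any GPGPU API, and certainly not engine code). The first loop applies the same independent operation to every element of a big array, which is exactly the shape of work a GPU spreads across thousands of lightweight threads; the second has a dependency between iterations and cannot simply be split up that way.

#include <cstddef>
#include <vector>

// Data parallel friendly: each element is processed the same way,
// independently of all the others. This maps well to the GPU.
void scaleAndBias(std::vector<float>& data, float scale, float bias)
{
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] = data[i] * scale + bias;   // no dependency between iterations
}

// Not data parallel friendly: every value depends on the previous result,
// so the iterations cannot all be executed at the same time.
void smoothInPlace(std::vector<float>& data)
{
    for (std::size_t i = 1; i < data.size(); ++i)
        data[i] = 0.5f * data[i] + 0.5f * data[i - 1];
}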

Which brings us to data streaming. Streaming data on to the GPU can be a costly operation and is best done in large chunks. Streaming data too often and in small chunks can easily offset the performance benefit the GPU has to offer. While this is not a limitation of the GPU per se and more a limitation of the bus, data transfers to and from the GPU should be kept to a minimum to achieve maximum throughput. Graphics programmers are already aware of this. OpenGL and Direct3D both encourage programmers to send as much data across to the GPU at one time as possible, in what is called “batching”. Both APIs advise the programmer to stall the GPU as little as possible, and in many instances even go as far as adding a memory/resource overhead to achieve better throughput.
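To give a rough idea of what batching looks like on the OpenGL side, here is a bare-bones sketch (illustrative only; it assumes the buffer object entry points of OpenGL 1.5+ are available and leaves out all error handling). The vertex data crosses the bus once, in one big transfer, and every frame after that the draw call works off memory that already lives on the GPU.

#include <GL/gl.h>
#include <vector>

// Upload the whole vertex batch to the GPU once, at load time.
GLuint createStaticBatch(const std::vector<float>& vertices)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // One large transfer; GL_STATIC_DRAW hints that we won't touch it again.
    glBufferData(GL_ARRAY_BUFFER,
                 vertices.size() * sizeof(float),
                 &vertices[0],
                 GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}

// Draw the batch every frame; no per-frame streaming across the bus.
void drawBatch(GLuint vbo, GLsizei vertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);          // offset into the bound VBO
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // one call, many triangles
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}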

After reading the above paragraphs, one must invariably ask the question, “Can all problems be efficiently data parallelized?” The answer is no. Not every problem is solved by efficient data parallelism. A lot of problems can be, but many computing problems can’t effectively take advantage of data parallelism and therefore can’t take advantage of the GPU in general. Also, if you have a small set of data and want to perform a lot of different operations on that same set, the GPU is of little help. It is actually your good old CPU that is designed for task parallelism. While a lot of hype has been going on about the latest generation GPUs as a potential replacement for supercomputing clusters, the CPU has been left on the sidelines. Or has it?

The next question to ask is, “What role will the CPU play in your Supercomputer@Home?” Well, make no mistake about it, a very significant role indeed. In the time the GPU has been growing from strength to strength, the CPU has been going through a transition of its own. While things have moved on from the days when clock speeds were treated as marketing tools, the CPU itself has seen some significant development. Although nothing to get excited about, the CPU cannot be ignored, especially if you have to work on problems which involve multi-tasking or multi-threading. Most non-trivial programming problems require a mix of programming solutions. Some parts may require data parallel solutions, others task parallel ones. Then there are problems that can’t be solved by either. Therefore the CPU will still continue to play a critical role in any supercomputing system you may build.

It’s completely possible to build a supercomputer at home today. But building a supercomputer is the easy part. Taking advantage of all this power is a whole other story. It would involve an approach where the programmer splits his/her design so that data parallel parts are executed on the GPU while multiple parallel tasks are executed on the CPU. Yes, it would imply a design that is similar to modern game engines, where data is sorted, batched and sent across to the GPU in its entirety to be processed by the rendering system. While this happens, other parts, or rather tasks, of the application can be executed in parallel by conventional multi-threading or by using a parallel programming API (OpenMP) to achieve maximum performance.
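A very rough sketch of that split, assuming OpenMP for the CPU side (the function names here are made up purely for illustration; this is not the actual engine code):

#include <omp.h>

void submitRenderBatches();    // hypothetical: hands the batched data to the GPU
void updatePhysics();          // hypothetical CPU side tasks
void updateAI();
void mixAudio();
void waitForGpuAndPresent();   // hypothetical end-of-frame sync point

void updateFrame()
{
    submitRenderBatches();     // one big, batched transfer to the GPU

    // While the GPU chews on the render data, run independent tasks
    // concurrently on the CPU cores.
    #pragma omp parallel sections
    {
        #pragma omp section
        { updatePhysics(); }

        #pragma omp section
        { updateAI(); }

        #pragma omp section
        { mixAudio(); }
    }

    waitForGpuAndPresent();
}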

A machine with, say, a Tesla GPU or two HD 4870 GPUs connected via CrossFire, a new and upcoming Core i7 CPU from Intel and oodles of memory could very well be a machine with supercomputing capability. But where would one use so much power? Computer games are obviously one area. For a game developer like me, you just can’t have too much power. I always manage to stress out everything I have on my machines, however high-end it may be. But seriously, where else could you use all this power? Someone who wants to do heavy duty scientific calculations could indeed benefit from such a machine. Another use could be compression algorithms, especially video and audio compression. People in the video/audio editing business could also benefit from such setups. However, for the average Joe user so much power is all but useless.

Angle between 2 vectors.

Anybody and everybody who has worked with 3D has, at one point or the other, run into a situation where the angle between 2 vectors was needed. The solution is rather straightforward; take the dot product of the (normalized) vectors and take the arccosine of it. That’s it, you have the angle between the two. Well yes, but then again I wouldn’t be writing this blog entry if there wasn’t a story behind this. Yes, the above method will definitely give you the angle between the vectors; well, most of the time anyway. It’s mathematically correct. However, taking the arccosine (acos) is prone to floating point trouble and can cause simulations and calculations to go out of whack unexpectedly. That’s because the acos function in the standard library is only defined on the domain [-1, 1], and due to rounding the dot product of two unit vectors can drift slightly outside that range. Values very near the extremes can produce wildly inaccurate angles, and anything beyond them gives you a NaN (a domain error) instead of an angle. You can’t always clamp the input to prevent the error, especially in situations where a smooth transition of angles is required. Clamping the values can cause a jerk in the simulation and costs precision right where it matters.

So what do we do in such a case? Don’t worry, there is a solution. Instead of using acos, use atan2. atan2 takes two arguments (y, x) and finds the counterclockwise angle in radians between the positive x-axis and the point (x, y) in 2-dimensional Euclidean space. More importantly, it is valid for all values of x and y except (0, 0), which is the origin btw. It uses the signs of both arguments to determine the quadrant of the result. The y and x in our case are simply the magnitude of the cross product (the perp-dot-product in 2D) and the dot product respectively, or simply put

angle = atan2( magnitude(cross(a,b)),  dot(a,b) );

Just remember one thing: atan2 in general gives results in the range -pi to pi radians, and since the magnitude of the cross product is never negative, the call above will hand you angles in the range 0 to pi. So you may (or may not) have to adapt your code to handle that.
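Here is a small self-contained C++ sketch of both approaches, with a bare-bones Vec3 stand-in for whatever vector class you actually use. The atan2 version never steps outside any function’s domain, while the acos version needs an explicit clamp to survive floating point error:

#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 cross(const Vec3& a, const Vec3& b)
{
    Vec3 c = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return c;
}

double magnitude(const Vec3& v)
{
    return std::sqrt(dot(v, v));
}

// Robust version: atan2(|a x b|, a . b), result in [0, pi].
// Zero-length vectors are not handled here.
double angleBetween(const Vec3& a, const Vec3& b)
{
    return std::atan2(magnitude(cross(a, b)), dot(a, b));
}

// acos version: needs the clamp, or a slightly out-of-range value gives NaN.
double angleBetweenAcos(const Vec3& a, const Vec3& b)
{
    double c = dot(a, b) / (magnitude(a) * magnitude(b));
    if (c >  1.0) c =  1.0;
    if (c < -1.0) c = -1.0;
    return std::acos(c);
}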

The HD 4850 and the story with AMD/ATI.

First the HD 4850. I was testing the game on the new HD 4850 (Palit 512MB) today and observed some interesting things about the card. For one, it gives serious bang for the buck. Doofus 3D clocked in at about 140 FPS at a resolution of 1024×768, AF 16x, with graphics quality set to high. Even with AA 2x Doofus 3D clocks more than 120 FPS, and I have a strong suspicion the game was going CPU bound at those frame-rates, since the machine had a 3-year-old CPU. I can tell you for a fact, the card is a serious performance monster, but then again Doofus 3D ain’t a top-line game. However, for me, this is the first time I have seen Doofus 3D under 4x AA and 16x AF running at a playable FPS, since up until now I have had only GeForce 6200, 6600 (and to some extent 8600) cards. There is no denying that the HD 4850 is more than worth its price for someone who is looking for a budget card and expects to run most of today’s top-line games. The card runs a little hot, but that’s to be expected given the number of triangles it can push and the effects it can deliver. Hats off to AMD/ATI in that regard. If you are someone who is looking for a mid-range card right now, the HD 4850 is excellent value for money.

That was the overview from a non-programming point of view. Now the programmer in me has something to say. The card may be excellent, however things are not all that cozy with ATI’s drivers. The OpenGL drivers are a mess, with the bundled driver not even supporting extensions like EXT_stencil_two_side. Even basic functionality (glDrawRangeElements() for example) seems to be broken at times, even showing messed up graphics when using Vertex Arrays on older cards. This exact same functionality is available under DirectX. Let’s say it’s safe to assume that the GL drivers haven’t been updated in a while and/or AMD/ATI just isn’t interested. The only issues that were reported in this round of testing were on ATI cards, so I had to literally debug the application on ATI hardware to ascertain that these were indeed driver problems. Some of the issues I have mentioned occur on, guess what, the HD 4850 too. The only workaround seems to be vendor specific hacks! That doesn’t make me a happy programmer at all!
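Just to illustrate the kind of vendor specific hack I mean, here is a simplified sketch (not the engine’s actual workaround; it assumes the glDrawRangeElements entry point has already been resolved on platforms that need an extension loader):

#include <GL/gl.h>
#include <cstring>

// Crude vendor check; a real implementation would also look at the
// GL_RENDERER and GL_VERSION strings and cache the result.
bool isAtiDriver()
{
    const char* vendor = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    return vendor != 0 && std::strstr(vendor, "ATI") != 0;
}

void drawIndexedRange(GLuint start, GLuint end,
                      GLsizei count, const GLushort* indices)
{
    if (isAtiDriver())
    {
        // Fallback path: plain glDrawElements, avoiding the ranged variant
        // that misbehaved on some ATI driver versions during testing.
        glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, indices);
    }
    else
    {
        glDrawRangeElements(GL_TRIANGLES, start, end, count,
                            GL_UNSIGNED_SHORT, indices);
    }
}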

The story with Direct3D is a lot better, and no issues were observed under the DirectX renderer of the game. That just tells you something, doesn’t it!

It’s a cracker!

It’s probably well known that GPUs are powerful beasts, and I have repeatedly pointed out on this blog that the awesome power of the GPU can be used for more than just graphics. For tasks and computations that can be executed in parallel, GPUs are a lot faster and more powerful than CPUs. So it won’t come as a big surprise to learn that people have put GPUs to good use to do all kinds of stuff. GPGPU has been more than a buzzword of late, and with technologies like CUDA and Larrabee, it has become even easier to get at all this power. However, like every other piece of technology, GPGPU also has its downsides. This article I read recently briefly outlines the fact that the GPU could be put to work as a generic brute-force cracker. I am no expert in cracking, but I am a person who has played around with GPGPU long enough to understand how serious this could be. I read the article and the first thought that crossed my mind was, “Hey, you know, this is the kind of thing the GPU excels at actually!”

GPUs today can deliver computational power in teraflops. Very soon we could have hardware that can do hundreds of times that. There is also another interesting thing GPUs allow you to do: you can stack a series of these buggers together and achieve a phenomenal boost to this already awesome power. You could increase the computational power of a machine by several orders of magnitude by stacking GPUs in parallel. It’s a disturbing fact that such power, until a few years ago, was only available on top-of-the-line mainframes. Today you could build a machine with the power of a supercomputer from components probably available at your nearest computer hardware store. That doesn’t sit well with the fact that anyone with a brain and time to kill can hack up a brute-force cracker and put it to work, and with enough “horsepower” might even succeed.

As more and more powerful GPUs hit the market and as GPGPU technologies progress, we will see newer machines with unheard-of computing power on our desks and laps. While this means more interesting games and faster number crunching for most of us, there are those who will put such tech to vile use. What we probably also need are better security systems and stronger encryption, along with better games and faster number crunchers.

Tweaking the game to run on a wide range of hardware.

For the last week I have been involved in a rather uninteresting activity. Well, I have been literally throwing the game at all possible hardware configs hoping it will run. All of this (yes, again) to find out how the game fares when exposed to different hardware configurations. If it seems like this activity is rather mundane, then let me assure you, it is. Well, not entirely 😀 . It takes some effort to get a game to scale seamlessly to all kinds of hardware, and currently I am enduring all the pain of crappy drivers and broken functionality, which, should I say, underscores some of the major headaches in real-time graphics development. It’s not like you can throw the game with its peak settings ON and expect it to run on a crappy Intel on-board graphics card. Such a thing will just end in disaster. The game must scale to different kinds of hardware, in our case especially so, and it must do it seamlessly and effectively.

Doofus 3D is uniquely placed. It doesn’t aim to be a top-line, hardware intensive, hard-core-gamers-only, triple A (AAA) title. Neither is it a 2D game capable of running flawlessly under software rasterized graphics on your grandma’s old school PC. It is geared more towards intermediate level hardware, hardware that most people have on their work laptops and home desktops. This effectively means an extremely wide range of hardware to cater to, and that in turn means scaling the game’s software paths (internally) based on a *lot* of underlying factors. Assuming a player has a specific piece of functionality available on his hardware setup can be catastrophic. Such an assumption could mean a total failure of the game on a machine, and the potential loss of a buyer in the end.

While drawing up the specs of Doofus 3D we were especially careful not to go overboard with graphics galore. Even with careful planning there was significant feature creep, and with each new feature that was added, new countermeasures had to be put in place so that the game would scale to lower-end hardware. Not everything was straightforward, but we still managed to push it through. If you have been following my blog for some time now, you will know that this is not the first time I have been at such an activity. I (personally) run such tests after each beta (feature addition/feature freeze) of the game. That is probably why we haven’t faced too many problems this time around.

For Doofus 3D we followed a process that is a bit different from traditional software development. Every beta under this game project was actually a feature complete, runnable version of the game. Between betas, every release was an internal alpha version. A beta meant, “A set of features is complete enough to be tested”. After each beta, each feature was tested on various hardware setups. Something like an iterative method of software development, but not quite. I would say, a process tailored specifically for our project and more specifically for our situation, given our limitations.

Doofus 3D runs on most middle-rung hardware without too many problems. It will run on on-board graphics cards too, but I find Intel on-board graphics to be an abomination. Hopeless hardware support for 3D graphics and equally crappy driver support! Enough reason for the engine to scale the game down to a low setting when it detects an Intel graphics card. The situation with NVIDIA and ATI cards is a lot better, with ATI’s low-end cards (at comparable price points) consistently outperforming NVIDIA’s. That said, NVIDIA has the most stable hardware and drivers, and most settings work uniformly across cards and driver setups, though there can be problems there as well. ATI’s drivers can be buggy at times and in the case of OpenGL can be totally broken. Fortunately the O2 Engine and the Doofus game can use either Direct3D or OpenGL as the rendering API. For any high-end, or for that matter most mid-range, graphics cards, Doofus 3D is not a problem at all.
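The detection itself boils down to something as mundane as the sketch below (highly simplified; the engine’s real logic looks at a lot more than the vendor string, and the card names here are only illustrative):

#include <GL/gl.h>
#include <string>

enum QualityTier { QUALITY_LOW, QUALITY_MEDIUM, QUALITY_HIGH };

QualityTier detectDefaultQuality()
{
    const char* v = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    const char* r = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    std::string vendor   = v ? v : "";
    std::string renderer = r ? r : "";

    // Intel on-board parts default to the low path.
    if (vendor.find("Intel") != std::string::npos)
        return QUALITY_LOW;

    // Older low-end discrete cards (an illustrative, incomplete list).
    if (renderer.find("6200") != std::string::npos)
        return QUALITY_MEDIUM;

    // Everything else starts on high; the player can always dial it down.
    return QUALITY_HIGH;
}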

Selecting a scripting engine – Part 4.

This is the fourth installment in the series (here are 1, 2, and 3) and this time it was the turn of Javascript. Yes, I know a lot of buzz has been going on about the sudden jump in speed of the scripting language, thanks to the introduction of the blazingly fast V8 and, yes, an even faster SquirrelFish Extreme (SFX) engine (oh, and not to forget Mozilla’s TraceMonkey). Javascript has been at the back of my mind and I have been fiddling around with it for some time now. I had actually tried out Javascript as a scripting language earlier for a completely different project altogether, but that prototype was left on the back-burner since there were more pressing issues to solve. Then, of course, I left the project and the experiment didn’t get beyond the prototyping phase. Too bad really. That was the time when Javascript was pretty slow, or should I say slower than it is today. However, the story with the new generation of Javascript engines is very different from the old ones. Most of these engines compile Javascript code to “fast native code” on the platform they run on, and that’s the reason for the sudden spike in speed. According to the SFX blog, SFX goes a step further and does something they call “polymorphic inline caching” (explained on the link provided above), which they claim is an exciting new optimization. I haven’t taken SFX for a ride yet, but I have had a peek at V8 and I must say I am more than impressed by Google’s implementation.

If you haven’t already guessed by now, my main interest in any scripting language is obviously its integration with our game engine (code-named O2 Engine). From my previous posts it’s pretty clear; I am on a quest for an elusive perfect solution, a please-all, perfect scripting language (which I know I will not find). Unfortunately ideal solutions are impossible, and most of the time it’s a compromise between what is available and/or what you can afford with the resources you have. With Javascript things are no different. All kinds of remarks have been made about the language, everything from “Javascript is the worst language ever!” to “Javascript: The world’s most misunderstood language.” Google around and you will see what I mean. But there are two sides to every story. While most programmers who work with stricter languages frown upon Javascript, the fact remains that Javascript is easier to understand than most other languages. It is used by programmers and non-programmers alike and by countless web-sites to deliver Internet content.

Javascript shares its syntax to some extent with C and Java, but the similarities end there. It is not a very “strict” language, and this is in fact by design. Less strictness means non-programmers can adapt to the language more easily. Javascript is probably the least strict language among those I have used. This can be good and bad, but while considering a scripting language, I can probably live with that given the rapid prototyping capabilities it offers. If you look at the language itself, Javascript shares most of the common properties found in other scripting languages (Lua, Python and others). The interesting thing about Javascript is the function-object duality the language proposes. This can be confusing for a programmer coming from a language with a classical imperative, class-based approach (something like C++ or Java), maybe not so if you started out with Javascript or already know functional programming. The thing that is really interesting is that functions are first-class citizens in Javascript. This concept is not unique to Javascript; Lua has a similar concept, and both languages can be used for functional (style) programming. Inheritance under Javascript is prototype based, which is again similar to how Lua implements inheritance, but different from the classical inheritance that defines object-oriented languages like C++ and Java. I don’t see any particular problem with that, given that there is a way to export a class hierarchy.

Integrating or embedding Javascript in an application is no different than it is for most other scripting languages. I found Google’s V8 code to be the easiest and cleanest to understand and integrate among all the languages/embedding libraries I have encountered. However, the entire process can be extremely repetitive and time consuming if a large set of libraries/classes/functions has to be exported. Again, this is no different for most other scripting languages (except maybe Python, where you have an automatic process). I haven’t had time to look at SFX and TraceMonkey yet, but I am sure the process should be similar.
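To give an idea of what the embedding side looks like, here is a minimal sketch along the lines of V8’s early embedder examples: a single native log() function exposed to script, and one script string compiled and run. The exact API differs between V8 versions, so treat this as illustrative rather than definitive.

#include <v8.h>
#include <cstdio>

using namespace v8;

// Native callback exposed to script as log().
Handle<Value> LogCallback(const Arguments& args)
{
    String::AsciiValue text(args[0]);
    std::printf("script says: %s\n", *text);
    return Undefined();
}

int main()
{
    HandleScope handle_scope;

    // Build a global object template with our native function bound to "log".
    Handle<ObjectTemplate> global = ObjectTemplate::New();
    global->Set(String::New("log"), FunctionTemplate::New(LogCallback));

    // Create a context that uses the template, then run a bit of Javascript
    // that calls back into C++.
    Persistent<Context> context = Context::New(0, global);
    Context::Scope context_scope(context);

    Handle<Script> script = Script::Compile(String::New("log('hello from JS');"));
    script->Run();

    context.Dispose();
    return 0;
}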

So what is special about Javascript? If you consider the language and the embedding platform/technique, nothing actually. Head-to-head it’s probably equivalent to Lua or any other embeddable scripting language out there. However, what sets Javascript apart from the rest of the pack today is actually mentioned in the first paragraph of this article. Yes, it’s the new generation of scripting engines from Mozilla, Google and the WebKit team. They give Javascript a huge advantage in terms of speed and performance. JIT compilation to native code and the optimizations that have been introduced do result in a significant increase in the speed of the script code being executed.

I know most game developers (including myself) are performance junkies, but hold on there before you jump in and start integrating Javascript into your favourite game engine. There are some points here worth noting. For native code compilation to work, the Javascript engine needs an assembler to convert the compiled code to native code instructions. So on platforms for which the Javascript engine has no assembler, you are basically out of luck. V8 has assemblers for x86 and ARM as of now, so for any other architecture the V8 engine will be of no help, unless, that is, you implement a version of the built-in assembler for that platform yourself. I am not too sure how this will scale to platforms like the PS3 and Xbox, and it could very well be a sticking point. The other thing that caught my attention is that JIT compilation to native code means Javascript code has to be distributed as plain text files. Such code could be subject to manipulation and is a potential cause for concern if your code has to adhere to security specifications.

That said, the fact that so much R&D is going into Javascript makes the language a top contender for the post of scripting engine.

Doofus 3D hits Code-Freeze.

Whew! The game has officially (finally) hit the last code-freeze today; actually yesterday, but I tagged the repository today, so it’s officially code freeze today. All the levels have been done and most of the internal (alpha) testing is complete. Yeah, the blog has been silent for a while, but that’s to be understood, I have been really hard pressed to finish this on time. Even so, I overshot my (mostly self-imposed) deadline by a good 15 days, in part due to unforeseen circumstances, in part due to my own faults, but that’s the way it goes (generally 😀 ). I am really glad it’s finally done and all but the most trivial issues remain. There is some artwork left though; I still have to finish up the initial screens, and some sound-tracks and sound-effects need to be incorporated into the game as well. Some parts of the demo version also need to be finished up. However, I am going to hold on to that till I release the RC for one final bout of testing; that way I will have good feedback on which levels to include in the demo and which to hold off for the full version.


In-game advertising.

I was recently asked what I thought about in-game advertisements. Given the fact that I am a game developer, the whole concept of in-game advertisements does seem like an intriguing subject. There are two ways to look at the whole idea of in-game advertisements. The first and most obvious one is of course as a potential money spinning tool for developers/publishers, and that is quite an interesting idea to explore. But there is also a view that such ads inside games will end up polluting and degrading the overall game experience, could attract ire from players or the community as a whole, and in the worst case have an adverse effect on game sales. The last thing you want is a commercial break or a cutscene marketing something inside a fast-paced game. Such a thing would be a disaster and I am sure players will frown on games that take this road.

My feeling is, game designs don’t need to be this drastic to have the whole in-game advertising thing working for them. It would be a horrendous mistake to have advertisements inside the game that degrade gameplay. However, having interactive ads that are properly integrated into the game may not necessarily be a bad prospect. That is exactly why ads in games need to be more than just inert props and simple banners. To a player such things make little impact and might or might not get noticed. The impact of inert 2D items will be limited, especially if the game features a 3D world (2D games could still make use of banners more effectively than 3D ones, but even they could do better if the ads were interactive). Games typically differ from other traditional forms of digital entertainment in that they are an interactive medium, and therefore, like every other gameplay element, ads within games will be most effective when the player interacts with them.

Gamers (especially hard-core gamers) might frown on the idea of having in-game ads, but it may not be all bad. Not all games are ideal for in-game ads, and I will touch on that point a little later. I personally haven’t seen ads in most of the games that I have played, but I have seen them in some, and the example I can cite is Second Life. Second Life is a great example of how ads can make it into games without being immediately frowned upon. The player is given a choice of whether he/she wants to interact with and view the content (of the ad) instead of having something forced on him/her. A player will appreciate this, and the ad campaign will thus be successful. In the brief time I played Second Life (here), I visited a couple of interesting places where the in-game advertisements looked really great. Music seemed to be the best appreciated, followed by fashion, when it came to ads there, but I am sure people must be advertising all sorts of stuff over there. As I said, my stint with Second Life was pretty brief.

As a game developer I look at the whole scenario of in-game advertisements positively. I am sure that with proper game design, advertisements can be made sufficiently interactive to “fit into” a game without actually nagging the player. I would even go further and say that, used effectively, ads could actually enhance a game (case in point: Second Life), so that marketing inside games becomes something a player can look forward to, with a genuine interest in the whole idea of an online try-before-you-buy opportunity. Can all games be effective as in-game advertising mediums? No, some might do a better job at it while others might not be as effective. MMOGs like Second Life will probably be far better at it than, say, an FPS set on an alien planet. Some games, like shoot-them-up tournaments, might not be effective at all. Having said that, we can never be too sure about marketing ideas and how “genius minds” work. So someone might just find some way to insert a “Matrimony Online” banner inside the Strogg Nexus on Stroggos, you never can tell.