Are integrated graphics chips the new battlezone?

In what could be a “one-up”, almost a sucker punch to Intel, AMD announced an amazing new chipset, the 780G, which is sure to create some flutter in the industry. The 780G puts a full-fledged GPU onto the mainboard, and reading the specs, it does seem to be substantially better than any of the other on-board or, in correct terminology, integrated graphics chips out there. While Intel claims to have “more than half of the graphics market”, the graphics (or should I say “craphics”) chips supported by Intel (and to some extent AMD, earlier) are nothing more than a big joke. The only reason they have such a huge portion of the market is that the average joe/jane is stuck with whatever came pre-installed. I was recently quizzed by an acquaintance as to why his system could not run Bioshock, and the only answer I could really give him was, “Well, your system isn’t designed for that sort of gaming”. To that his reply was, “Oh, I just got a brand new system. How is it that it can’t run the latest games?”

It’s really disturbing for people who buy a brand new PC only to see it fail, utterly miserably I might add, to push even a last-generation game at a shallow 24 FPS. Most are clueless; while their PCs may be brand new, with a “fast” multi-core processor and gazillions of RAM at its disposal, all they can really run are their Office applications. Yes, those run faster and better! No such luck with games though. People have to realize that having a faster CPU, or for that matter more cores, doesn’t really help much with games. It does to some extent, but as it stands right now, I would rather have a top-line graphics card like the 8800 GTX than a quad-core CPU. It’s a very deceptive concept, I know, but that’s how it is.

Anyone who has worked on graphics knows how utterly lousy and what a pathetic abomination integrated graphics chips can be. I have battled with all sorts of problems, everything from broken drivers to faulty implementations to near-absent feature support. I hope things are finally changing for the better. The question is, where does that leave Intel? Intel has been desperately trying to get a better graphics solution onto its boards without too much luck. The chipset that AMD has thrown up beats anything Intel can conjure up, hands down! At least in the near future, that is. While Intel may add more cores, they aren’t going to be too useful for people who want to run the latest games. With the quality of integrated graphics on offer from Intel, users will have to install, at the very least, a low-end graphics card. Sorry Intel, that’s how bad things are!

So what does the Green Brigade (NVIDIA) have to say to all this? AMD’s acquisition of ATI is finally showing its advantages. While the graphics chips may not be the fastest out there, they are indeed very attractive considering the price point. Chipzilla and Graphzilla had better get their acts together, because if 2007 was the year both ruled in their respective departments, there is a new kid in town. He’s got better and faster guns, and he’s looking more attractive than any of the old boyz!

Optimizations on game levels.

Just an update on the Doofus game and on what I have been working on for the past couple of weeks, which have seen me seriously working at getting the triangle count down in the game levels. The tri count had been increasing steadily over the last few levels and it had started hitting the FPS value real bad, which is why I had no option but to go for triangle decimation. The number of triangles for even moderately complex levels was turning out to be surprisingly high, and most triangles were all but useless. The reason: Doofus 3D levels use brush-based geometry, and the tris are a result of successive CSG (Constructive Solid Geometry) splits. The more detail I added to the levels, the more redundant splits occurred with the brushes, meaning the FPS started falling like a rock for arbitrarily complex levels.

The optimization technique I was working on reduces the number of triangles by a) removing redundant vertices and b) collapsing unwanted edges. Simple, right? Not quite. Triangle decimation turned out to be somewhat more complex than I had anticipated. Fortunately, after some real hard brainstorming, I managed to get it working just as I wanted it to. In some situations the triangle count now reduces to as little as 4% of the original, though an average reduction of around 10 to 20% is what I usually get. That is still quite significant, to say the least. Thank God my effort has not been in vain after all; it was a real pain to get it working correctly. Check out the images below to actually see the optimizations at work.

Original scene.
A sample Doofus 3D scene

Triangles in the unoptimized version (click to enlarge)
Triangles in the scene before optimization.

Triangles in the optimized version (click to enlarge)
Triangles in the same scene after optimization.

I have also been working on completing the AI. Sorry but I don’t have screens for those, maybe the next time. The AI still needs some amount of tweaking to get things working perfectly. I am not saying too much at this point in time; maybe in one of my next posts I will get into more details. Hopefully I can finish this last pending thing in the game soon.

More than impressed with Xfce.

I am a long-time Gnome fan, but recently I had an unexpected run-in with Xfce. I was visiting a friend of mine who had an old laptop that couldn’t really be used for much, so we decided to give installing Linux on it a shot. Obviously Xubuntu was the distro of choice since the hardware was pretty old. We got around to installing it, and I quickly noticed how fast the GUI was responding even on such an old and rather archaic piece of hardware. The Xfce environment looked really slick indeed. I had been under the wrong impression that Xfce missed all the bells and whistles provided by Gnome or KDE. Obviously the next thing was to install it on my own desktop, which I did, and I can tell you, the Xfce desktop manager is quite a bit faster than its older and heavier cousins. I generally don’t mess around with stable OS configurations, but I happen to be a speed freak, and anything fast and light always tends to get my attention, so naturally I made an exception for this one. Xfce is now my default desktop.

With everything set to default, Xfce does take less memory than Gnome or KDE, but that wasn’t the only thing that impressed me about this desktop environment. In functionality too it seems designed to enhance productivity. Not that other managers aren’t, but you know those little things that nag you about other windowing systems under Linux; well, they are nicely taken care of in Xfce. The desktop environment has an uncluttered interface, and though it may miss the richness of KDE, every effort is made so that the user can find his/her way around quickly. Xfce feels and looks very much like a lightweight clone of the Gnome manager (though it is not). Also, Xfce will happily work alongside Gnome: the two can exist on the same machine without conflict, and are to some extent even interoperable, sharing data between them (like Xfce being able to use Gnome icons and files). That’s a thumbs up as far as I am concerned.

Thunar, the default file manager under the Xfce desktop, is much (much) faster to open up, and though it lacks some features of Nautilus, I didn’t find anything work-hindering missing. So on the whole, is Xfce for you? Well, judging from this post you can pretty much see where I stand; however, it is a matter of personal taste. If you like an uncluttered, fast desktop, or have a lot of windows open which you switch between often (I know I do), then you have to check Xfce out. I, for one, am pretty happy with Xfce, and I am not switching in a hurry.

STL map/multimap woes.

I was porting someone else’s C++ code from Windows to Linux. The code made heavy use of STL; no problem there. I have a good handle on STL (or so I thought), and my engine also makes heavy use of it. However, this code was written against Microsoft’s version of STL. So what’s the problem? STL is STL, right? It’s standard across platforms, right? Wrong! Apparently not entirely true: Microsoft’s version of STL is not 100% standards compliant. I had read this before, but hadn’t actually come across a case where I found incompatibilities in code across STL libraries.

Until now, that is. The code I was porting happens to have a lot of maps and multimaps, with deeply nested template code. A pain in the neck to debug, I must say. The problem started with the compiler throwing some ridiculous, almost illegible errors, which I traced back (with some amount of difficulty) to the map<>::erase() function. MS’ version of the function returns an iterator; the standard version doesn’t return anything! So I checked the one I use in my engine, STLport, and it too doesn’t return an iterator for map<>::erase(). Googled around a bit, and found that indeed there is no return value for that function.

Strange. I would generally agree with MS on this one. Most other containers, like vector and list, return an iterator from erase(), so map and multimap should too. I don’t understand the logic behind map<>::erase() not returning an iterator; maybe the standards committee got it wrong, or maybe I haven’t fully understood their reasons. A caveat to those who use MS STL: don’t. Though the erase() issue is to some extent trivial, debugging template code can be really difficult. I for one use the standards-compliant STLport to avoid such issues. Though it may be a little difficult to set up, I would recommend it.

Code::Blocks 8.02 has been released.

The much-awaited release of Code::Blocks has finally happened. I was waiting for this for, like, forever! It’s time to go get the new IDE. From the initial looks of it, it’s been worth the wait. More later!

Into the mind of a casual gamer.

I haven’t said or ranted about gaming in quite a while now, which is quite unlike me; I usually have a lot to say, and rant about, especially when it comes to gaming. If you read the last 10 posts you would never guess I was a game developer, so I guess it’s time to put the blog back on track! The absence of any game-related posts has been for a couple of reasons. For one, I haven’t installed any new games lately; too busy finishing off Doofus 3D. Neither have I upgraded my aging NVIDIA 6200 and 6600 cards, so anything new like Bioshock or Crysis is out! I have promised myself a brand new 8800 GTS or something newer as soon as I complete and ship the game, and that’s been, like, forever! On the other hand, for the past 6-8 months I have had the opportunity of actually watching others play games, and it’s been a learning experience as a game designer. You wouldn’t believe it, but one of them happens to be my mother. She has been doing all the playing while I have been working my ass off to finish my game, though I must say her taste is very (very) different from mine. No Half-Life or Oblivion for her; she kinda enjoys the Zumas and the Insaniquariums; casual games in general.

Just watching from the sidelines can teach you volumes about how people enjoy computer games. Casual gamers can be very different from their hardcore counterparts in many ways, and very similar in others. A casual gamer is generally not interested in complex gameplay; they are the ones who enjoy simple yet immersive games. Simple puzzles and not-overly-complex challenges are favored over long story lines and dark alleys. Short levels are also appreciated. For instance, I tried selling a game called Caesar III to my mother, but nope, didn’t work. For those who don’t know, Caesar III was a city-building game, kinda like Sim City, and I loved it when I played it. The game has no gore, so I was pretty sure she would love it. However, the comments I got from her were, “Too complex”, “Very difficult to manage”, “Can’t understand it”. It just goes to show that casual gamers tend to favor an easier set of “game rules”. Anything more complex than a few keystrokes is a big NO.

Over the past few months I have observed and chatted with a few other casual gamers too, mostly friends and relatives who often play casual games. It’s always good to understand the market mentality, especially if you are developing a product catering to it. Most casual gamers I chatted with have played 2D games, but quite a few, I must say, have played 3D games. Interesting, since I am developing a 3D casual game myself. When asked if they would be interested in such a game, the overwhelming response was yes, to a simple 3D game obviously. Most said they would be very interested, though unsure whether they would actually like it. Some even told me they wouldn’t mind learning a few extra keys to play something like Marble Blast. So it means Doofus 3D does have a market after all, but it remains to be seen whether that would indeed translate into sales.

In general I have found casual gamers dislike having to learn a lot of rules, period! Games that involve significant learning effort quickly become unpopular, so I guess games like Oblivion are out. Gore is also not much appreciated. There is an interesting, and I must say unexpected, paradox here: many casual players admitted to playing arena games like Quake III. The reason: such games were easy and fast-paced. Most enjoyed them; some said the games were violent, but they generally didn’t mind. None of the casual gamers I talked to knew exactly what game genres were, and none cared. Most said they looked at the game cover or screenshots to decide whether they would play a game. 90% said they played Flash games. About half said they didn’t understand things like DirectX or OpenGL or h/w acceleration. Nearly all said they had only on-board graphics. More than half said they didn’t even know you could install a separate graphics card. More than 75% said they had played, and do play, games on laptops.

OK, my survey is far from accurate. All I have really done is ask a few questions of people I have met over the past year or so; I have in no way used a proper methodical approach. However, it does give you an insight into the mind of a casual gamer. As a game developer I tend to forget these important lessons and sometimes go overboard. Then I usually drag myself back to reality!

W’ow’ubi!

Finally someone did it! I mean, this could have, and should have, been done a long time ago, but no one really bothered. OK, what the hell am I talking about here? Well, it seems that with the upcoming release of Ubuntu 8.04, code-named Hardy Heron, you will finally have the option of installing Linux on a Windows PC without tinkering with the file system and disk partitions. A new installation mechanism called Wubi will allow Windows users to install Ubuntu onto their Windows OS using nothing but a simple Windows installer. Click and install! No partitions, no data loss, no headaches. It would seem Wubi installs the entire OS on a disk image which sits neatly in a folder on your existing Windows drive. Uninstalling is equally trivial: simply use Add/Remove from Windows, just as you would a normal uninstaller. Cool!

Linux installations are typically non-trivial, at least for the average home user who doesn’t understand things like partition tables and dual-boot options. Yes, there will be those of us who continue to keep separate partitions for Linux, but most non-tech-savvy users would skip such headaches if they could. If Wubi achieves what it claims (it still remains to be seen whether it can), anyone will be able to install and use Linux just like any other piece of software. It’s still an open question whether this will mean more people signing on to Linux; though debatable, Linux as an OS is still not as “user-friendly”, or as some would say “Windows-friendly”, as Windows is. Yet this is an interesting development indeed!

Wha’zup with Microsoft?

In an almost uncanny act, which has surely taken a lot of people by surprise, Microsoft very recently pledged to open up its applications and platforms to allow for greater interoperability, even pledging to work with open-source communities and developers. All this seems to be part of its compliance with the EU anti-trust decision. Interesting, very interesting indeed. Microsoft has been getting cute with open-source communities for some time now, but this is probably more than anyone expected. However, it remains to be seen whether Microsoft genuinely believes in open standards. I was reading through the published article, and there are some things in the fine print that still seem a little vague (or maybe I am limited by my intelligence). Some things appear rather subjective. A phrase like “low royalty rates” is open to debate. What does “low” mean? How low is “low”? Ah, and what’s a “covenant”? It will be interesting to see how these new initiatives are adopted. Does it mean all projects that opt for interoperability with Microsoft products end up paying royalties to Microsoft in some form? Reading some portions of the newsletter sure seems to suggest so.

However with everything said, it will be interesting to see and explore possibilities for FOSS projects under these new initiatives by Microsoft. For whatever it’s worth, it’s definitely a step in the right direction. So let’s hope the new gesture by Microsoft does pave the way for better interop solutions between FOSS and MS applications.

Virtualization: What is it exactly?

There has been a lot of buzz about virtualization, and a lot has been said about it everywhere. Recently, everybody and anybody who has anything to do, even remotely, with operating systems has been talking about it. So what exactly is it? And more importantly, what benefits do you gain from virtualization? Is it even that useful? A lot of questions; let’s look at them one by one. First, what virtualization actually is. By its very definition, virtualization means “abstraction of computer resources”. To put it simply, it means sharing the physical resources of one system via several or multiple logical resources. In its true sense virtualization is a broad term and can mean different things; a CD/DVD emulation program, for example, is also one form of virtualization. However, virtualization is often used interchangeably with hypervisor. A hypervisor is a virtualization platform, meaning a platform on which you can run multiple operating systems simultaneously, often under one parent or host operating system, i.e. under one operating system kernel. Every operating system under a hypervisor runs inside its own virtual world or virtual computer (if you could put it that way), completely isolated from the host or any other operating system that might also be running under the hypervisor.

ReactOS
ReactOS running under QEMU
Ubuntu (Gutsy Gibbon) 7.10.

So what is the real use of the whole virtualization thing? Several, and as a programmer/developer, even more. My first experience with virtualization was with VMware, about 4 years ago when I was working on a freelance project. I used VMware to port an application from Windows to RH Linux. I did a lot of monkeying around with the virtualization platform back then, and soon realized that such technology can be put to good use. Besides its obvious benefit for a cross-platform developer, virtualization can also be used very effectively for other purposes, more so now that we have machines with multiple cores. Since every OS runs in its own separate world, you can pretty much do anything with it. To name a few examples: how about hacking the Linux kernel, or maybe writing a device driver? Maybe try a hand at screwing up the file system? Crazy things you would never try on your own machine are all possible using virtualization software.

On a more serious note though, besides giving you an environment to screw around in, what more does virtualization provide? The benefit probably of most interest to an enterprise is utilization. Underutilized servers and machines can be made to run multiple virtual machines, each taking more responsibility and more load. People have often cited virtualization’s savings on physical resources like hardware, power consumption, management and infrastructure, all leading to obvious cost benefits for an enterprise. With machines getting more and more powerful, this could well become the norm. Utilization could well be the single most important reason a technology like virtualization sees so much interest.

Also of interest is the ability of virtual systems to serve as secure sandbox testing environments. They could be (and already are being) used for software QA, analysis, and as controlled test beds for applications and untested software. Tech support can simulate client-specific scenarios on virtual systems that mirror real ones. Virtualization is also excellent for carrying out performance analysis and running complex debugging scenarios that would be very difficult on normal systems.

The next use will probably be of more interest to the average joe/jane net surfer. If you haven’t already guessed it, virtualization can be used to safely surf the web and keep your system free of the viruses and worms that infest web pages, especially the “good ones” (you cheeky monkey, you!). Even if your virtual system were to get infected or hacked, it’s just a matter of reloading the entire virtual OS and everything is wiped clean! No cookies, no browsing history, no nothing! No one logging into your computer will ever know which sites you visited or what you did. Sounds interesting, doesn’t it? 😉

So where can you get your hands on something like this? VMware offers free virtualization software which you can download and use; I would recommend VMware hands down, since it is probably the best one I have used. Besides that there are also free ones like QEMU, which I use quite often. If you are on Linux, also check out KVM, Xen and lguest, though I haven’t used them myself. Then there are commercial offerings from Microsoft, VMware and others that are a lot more powerful, which you are welcome to try.

[Edit] Microsoft Virtual PC is another free virtualization offering, for Windows.

Spying at the workplace. The things you should know.

I was intrigued by an article I read in a mag recently about employers spying on their employees using the computers and networks within an organization. A tad bit disturbing I must say, though not completely unexpected. All the organizations I have worked for were paranoid about security, so I guess I kinda always knew “the boss was looking” while I was working. The article gave some really good insight into the whole matter; good enough, apparently, to have piqued my curiosity, at least enough to write about it.

How do they do it? For those who aren’t tech savvy, let’s look at the more technical details. Apparently the easiest and most powerful way to spy on anyone using a computer is by installing a key/data logger. There are hardware and software keyloggers. The hardware ones can be spotted just by peeking around your computer’s back; they sit between the keyboard plug and the computer. A software keylogger is a program that logs every key you hit on your keyboard, so all your passwords and websites are basically open for scrutiny. A keylogger program is surprisingly easy to write; a programmer like me could probably do it in about a day or so, and you can Google a dozen free ones on the internet. A data logger is something similar, but more advanced: it maintains a history of data interactions, including keyboard and mouse. Data loggers are often more difficult to write. However, don’t be mistaken, such programs are available, and what’s more, they are available from professional software development companies focused on security; you might have one running on your PC right now! Most key/data loggers are not caught by spyware or antivirus programs, so don’t bother trying. If there is one running on your work PC, there is a good chance you have no knowledge of it.

Spying using keyloggers is pretty easy, but that is not the only way an organization can spy on its employees. Your email is also subject to scrutiny. If you are using your work email address to send personal messages to friends and family, or maybe sending insults about your boss to your friends, there is a good chance they have already been read. Don’t expect a raise very soon! It’s child’s play to archive emails from a message queue on a mail server, and those can be read during weekends or holidays. Forget about even reading all of them; the system can be configured to run an automatic script that isolates mails containing specific words or phrases. Your laptop isn’t spared either. If you dock it in when you come to work, it leaves a door open for the sysadmin to log on to your machine and install whatever he/she wants, and if for some reason it has had a recent unexpected trip to the IT department, you should probably be asking the question “why?” right about now!

Pretty much anything can be monitored, from the sites you visit to the friends you chat with. If you think your organization is a little over-protective about security and is probably doing it, rest assured, it is! So the ethical question to ask is: are such organizations violating employee privacy? Is spying on employees even allowed? You will be surprised to know the answer: it is! There is no explicit law that forbids employers from spying on employees, and privacy laws are pretty murky when it comes to something like email and chatting. True, no one can barge into your house and violate your privacy there, but your workplace isn’t your house, and anything said, written or emailed there doesn’t explicitly fall into the category of personal privacy. There can also be serious legal complications associated with seemingly innocent practices. For example, forwarding pornographic material in an email can be grounds for a sexual harassment lawsuit. Even seemingly innocent jokes, the sort generally forwarded at the click of a button, can be taken as racist or radical remarks. Copying or emailing copyrighted material can land you in jail, even if you did it innocently.

As technology advances, so does the need for organizations to protect themselves. Having an employee’s personal data on office machines can lead to complications for the organization. With viruses and data mining rampant, organizations are left with little choice but to adopt more stringent monitoring policies. I for one believe organizations should make their policies clear; if they do want to monitor their employees, there is no harm in letting people know. Secret spying is not well appreciated by anyone, and it leaves people rather distrustful of the management and the organization. Rest assured, however, spying at the workplace is all too common, and it is here to stay. So the next time you have an urge to forward that great joke or poke fun at your boss, remember: “Every breath you take, every move you make…”