Visual Studio Community Edition (Free).

Microsoft has released a free Visual Studio Community Edition. It’s basically a full-featured Visual Studio IDE (apparently with everything included) that can be used to make all kinds of apps. I was even surprised to find support for Python and Git included. Even more surprising were the Apple and Android logos in the “supported platforms” section!

Link: http://www.visualstudio.com/products/visual-studio-community-vs

Can parallel processing really cut it?

When Larrabee was first delayed and then “postponed”, most of us weren’t surprised (at least I wasn’t). Parallel computing, though advocated as a world saver, isn’t the easiest model to program for. Doing everything in “software” (graphics, HPC and all) might not be as easy as was anticipated. The cold hard reality is that languages like C++, Java and their derivatives (mostly OOP ones) were never really designed for parallelism. A bit of multi-threading here and an asynchronous call there doesn’t really cut it. Using the full potential of parallel devices is very challenging indeed. Ironically, most of the code that runs today’s software isn’t geared for parallel computing at all. Neither are today’s programmers.
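To make that concrete, here is a minimal sketch of mine (modern C++ with std::thread; not something from the article, and the function name is made up) of what even a trivially parallel task, summing an array, demands once threads are involved: explicit work partitioning and per-thread partial results, rather than naively updating a shared total.

```cpp
#include <cassert>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Sum a vector in parallel. Each worker owns one slice of the data and
// writes only to its own slot in `partial`, so no locking is needed.
// Merging the partial sums happens after all workers have joined.
long long parallelSum(const std::vector<int> &data, unsigned numThreads)
{
    std::vector<long long> partial(numThreads, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / numThreads;

    for (unsigned t = 0; t < numThreads; ++t) {
        std::size_t begin = t * chunk;
        // Last worker picks up any remainder left by the integer division.
        std::size_t end = (t == numThreads - 1) ? data.size() : begin + chunk;
        workers.emplace_back([&, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto &w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```

Even this toy case needs partitioning, private accumulators and a join barrier; a naive `total += x` from every thread would be a data race. Scaling that discipline up to a whole renderer is the hard part.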

Yet experts advocate a parallel computing model for the future. But is it easy to switch to? Is an innovation in hardware design, or a radical new compiler that optimizes away your “for() loop”, the real answer? A very interesting article to read (even if you are not into graphics and game programming) is:

http://www.brightsideofnews.com/news/2010/5/27/why-intel-larrabee-really-stumbled-developer-analysis.aspx?pageid=0

Very rarely do I quote articles, but this one is really worth a read. Well-written and well said.

Excuse me, but where’s the menu?

It’s interesting, but our good old menu-bar, yes, the one with “File Edit View …” on it, is slowly disappearing from some of the most popular software apps. Today I happened to wander across the Firefox proposed UI design page, and the thing I immediately noticed was the absence of the menu-bar in the new designs. Good riddance! No, seriously, how often do you use the browser menu-bar? For me the browser is the second most frequently used application, and 99% of the time I never bother with the menu-bar. I, however, would love to have every millimetre of screen space while browsing and am more than happy to see the menu-bar go.

There have been some subtle changes in UIs over the past couple of years. No, I am not talking about glass effects and pretty pictures and icons; I am talking about design. Though not the first, it was MS Office that got the world’s attention by replacing the menu-bar with the “Ribbon Control”. A bold step, but the idea was to combine the tool-bar and menu into a single tabbed interface. To be honest, yes, the ribbon idea is cool, maybe not innovative, but definitely cool. The interface got mixed reactions initially, but as people grew more familiar with it, things started to get comfortable, and soon other applications followed suit. At first glance, having no menu-bar is disorienting for a long-time computer user. I did find it a bit “unusual” to navigate my way around Office 2007 (the one and only time I used it). On the other hand, I never missed the menu-bar (even once) while using the Chrome browser. I guess the whole idea takes a bit of getting used to, but apart from that, I really did like the concept of merging the menus and tools into one single compact unit. Makes sense. Tool-bars, after all, only complement the menus. It is therefore logical that both be combined into one.

I personally feel this use of the tab interface is a step in the right direction when it comes to exposing complex UI. The resurgence of the tab control, and its innovative use in hiding complexity whilst combining two separate controls (the menu-bar and the tool-bar) into a single entity that serves both purposes, is intuitive and resourceful. A similar usage of the tab control has also found its way into the mobile device world, where it is going to be the mainstay of the Moblin Linux platform. That is interesting indeed. So will we see our “Desktop Application Panels” and “Start Menus”, which are basically menus too, being replaced by tabs soon?

A richer, faster web.

Firefox 3.5 was released a few days ago, and it’s not surprising that the browser adds a host of features to the new release, some innovative and some maybe not so much. Yes, we do see some ideas from Chrome and other browsers carried over (as we expected they would be), and yes, we also see a faster browser, mostly due to a faster Javascript engine (TraceMonkey) being deployed. (Though I have to say that Chrome still beats Firefox on my machine on both the SunSpider and the V8 benchmarking suites.) Firefox is something I always use, mainly because I don’t run Win32 on my Internet machine and there is no real replacement for Firefox on other platforms when it comes to a good professional browser (except maybe on the Mac, which I don’t happen to have). So predictably, I am glad to see the new version up to speed with Chrome.

Recently there has been some sort of a competition between browsers, each trying to outperform the other. Whatever you may call it, the browser wars show one clear trend: the increase of Javascript speed to allow client-side code to run at par with, if not at the speed of, desktop applications. The wars started with the launch of Chrome. Though Firefox started early, it was beaten to the punch, probably because Firefox runs across so many platforms and it took more time to get everything running at par on everything supported. In any case, V8 outperformed its rivals by JIT compiling Javascript code (which is similar to what’s been done in SquirrelFish Extreme and now by TraceMonkey). But why this sudden obsession with Javascript speed? What’s so special about Javascript? As you probably already know, Javascript is the de facto client-side language embedded in web content (HTML). It is used by almost all websites that display dynamic content, and by extension, faster Javascript means faster and consequently more powerful web applications. So as websites like Google push for more Javascript in their online services, the plan is to offset the use of conventional desktop applications, replacing them with mobile/online services. Take the example of Gmail, Google’s email service. There was a time when I couldn’t live without POP3 (Outlook Express/Outlook/Thunderbird). I even got pissed when Yahoo stopped POP3 support for their free service. Now, since Gmail and the advent of fast Javascript browsers, I don’t even bother. Gmail was the first to push the bar, and soon everybody followed. There was a time, however, when everything was not as speedy. Gmail used to take a long time to load, and one did question the sanity behind having a Javascript-heavy email service.
However, what is more subtle is the fact that because of Chrome and the subsequent across-the-board Javascript enhancements that followed, Gmail is now comparable to any desktop mail client you can find. And remember, Gmail messages occupy zero space on the hard-drive, are accessible from any place that has an Internet connection, and you don’t have to upgrade the application when a new version comes out. Log in and you are there! With Chrome, and now all others following suit, we could see even more enhancements to existing Google services and might even see some new applications as well. Whether this progression was evolutionary or revolutionary is up to you to speculate, but it does come across as a clever strategy from Google.

Coming back to Firefox: interestingly, it also brings in some other changes. Apart from the obvious Javascript engine enhancement, it includes support for HTML 5. Though HTML 5 isn’t formally ratified by the W3C, it is, as they say, “a game-changer for web application development”. The support for the Ogg container and patent-free multimedia formats like Theora and Vorbis means an end to proprietary technologies like Flash and Silverlight. Or does it? Well, at least that was the general idea with HTML 5. We will still have to see how it is actually implemented. Predictably, not every browser vendor is happy about adopting open formats, but with HTML 5 the road is pretty much set. The push is on for a more dynamic and richer web. But wasn’t that happening all along? Well, not entirely. Most of the sound and video on the web today is played via third-party plug-ins like Flash. They are not natively supported by HTML, and therein lies the problem. Come to think of it, it’s ridiculous that a multimedia container like a browser, which allows rich content like pictures (.gif, .png, .jpg …) to be displayed, doesn’t, even in this day and age, support video and sound. Today video is a pretty standard way of displaying content on the web, and almost every site/portal that wants to demo its content shows a video clip somewhere. The question then is: when are we going to see browsers end their dependence on proprietary technologies like Flash and Silverlight and implement a standard for video and audio as well? Interestingly, Firefox 3.5 does allow the use of audio and video tags. Kudos to Firefox for incorporating the new changes, though it may be a while before websites support them. I hope they do sooner rather than later; it’s almost trivial.
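To show just how trivial, here is a minimal sketch of what the new tags look like in markup (the file names are made up for illustration); Firefox 3.5 can play Ogg Theora video and Vorbis audio like this natively, with no plug-in at all:

```html
<!-- Hypothetical file names; Theora video and Vorbis audio in Ogg containers -->
<video src="demo.ogv" width="640" height="480" controls>
  Your browser does not support the video element.
</video>

<audio src="theme.ogg" controls>
  Your browser does not support the audio element.
</audio>
```

The fallback text between the tags is shown by browsers that don’t support the new elements, so a page can degrade gracefully while the rest of the browsers catch up.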

Mostly the changes in Firefox 3.5 were along expected lines. Nevertheless, it does bring in some new enhancements and is a noticeably faster browser. I think the (earlier) release of Chrome did take some shine off the speed increase, but on the whole it’s still a welcome development. Let’s see when others like Opera catch up; I am sure they will soon, it’s just a matter of time.

Interestingly, PHP 5.3.0 has been released, adding namespaces, late static binding, lambda functions and closures to the language. I’d say these features were a long time coming to such an immensely popular language as PHP. These are welcome enhancements to the language indeed. While this may not have any direct bearing on the client-side speed of web applications, it does close some of the gap with other popular server-side web languages.

trueSpace has been put down!

Well, what can you say? The once great modeler trueSpace has been killed. Some may remember it being acquired by Microsoft and then, in what seemed too good to be true, offered as freeware not long ago. Not entirely unexpected, I must say. Not much was being done with it, so to speak, but TS did have a sizable community, which is now understandably disappointed. My first experience with TS was a long time ago, when I was taking baby steps towards learning 3D and 3D modeling. I confess I am not a 3D artist, but I work with 3D tool-chains a lot, and I had to switch over to Blender rather quickly since TS back then wasn’t free. When the application was released as freeware, I did try my hand at some of the tutorials that accompanied the installation and was pretty successful at doing things, mostly because my experience with Blender could be carried over to trueSpace in some way. True, its interface is a bit weird, but that’s something that could have been worked on. It may not have had all the bells and whistles of the truly top modelers, but it wasn’t all that bad. It’s unfortunate that it got terminated the way it did.

There is something to think about here. Had trueSpace been open source, it would have been forked and the project would have continued to live on via enthusiasts and developers, maybe even via contributions from the existing developers. The unfortunate thing is: it isn’t, and therefore its fate is sealed. It’s really sad to see such a product go down, and the anguish of the community members who helped the product grow is justifiable. Believe me, it’s not easy to see software that you have worked with for years go down like that. 3D software, especially 3D modelers, takes a considerable amount of time to learn, and it takes even more time to become truly productive with. My guess is there are a lot of 3D artists who are really pissed off right now.

Now that I think back, it’s a good thing we decided to use Blender as our primary content creation tool. Not that we were going to use TS in the first place. However, had we used some closed-source software like TS, we would have burned down with it had it ever gone the way TS did. Most of the content pipelines would have been lost, and a lot of code would have had to be re-engineered for another application. This has been precisely my argument from the very beginning. However popular an application may be, it could very well end up being put down to protect the bottom-lines of large corporations. With closed-source applications, one can never discount such occurrences.

Then again, could we see a resurrection of TS in some way? I sure hope so. Maybe there will be a release of it in a new incarnation: an integrated game creation tool for XBox/PC? Or are we going to see a new tool built from the existing codebase to rival Sketchup? Maybe there will be a community buyout? Or is Microsoft going to release TS as OSS? That would be interesting indeed! No, I am not keeping my fingers crossed.

Netbeans for C++?

I am a sucker for IDEs, and I have been meaning to take Netbeans out for a test ride for some time now. I have been hearing a lot of good things about Netbeans ever since version 6.0 came out, and more so since version 6.5 arrived. OK, before I proceed further, let me point out that I have been using Netbeans for C++ development and have never used the IDE for anything other than C++. So my comments may be somewhat inaccurate for other languages.

For one, the IDE seems solidly built, and you can find your way around the place easily. Everything is where it should be, and there is no second-guessing as to what functionality a particular window, menu item or option provides. Clean and sweet. Netbeans probably has the cleanest interface among IDEs. This is one of its strongest points, and the fact that it is available on multiple platforms means it could be used by people who do cross-platform development.

I ran the IDE on Windows using the MinGW and MSYS systems, and it wasn’t very difficult to set up given that I already had MinGW and MSYS installed. The IDE builds via your native Makefiles (*shiver*). I severely dislike the taste of Makefiles, especially maintaining them for large, cross-platform projects that have multiple dependencies. But the IDE manages Makefile issues nicely, and I can live with that.

For those who don’t already know, Netbeans is actually a framework and a platform for building applications. The IDE is an application built on top of this platform. The strength of the Netbeans platform is its ability to host modules. Platform modules are basically Java classes that interface with the Netbeans open API. The IDE, too, can be extended via modules to add and enhance functionality. An open API like that also means Netbeans can be turned into almost any type of IDE by simply programming in functionality for a language.

The IDE has an excellent code completion feature, and I have to say it is surprisingly fast. The intellisense of the IDE is top-notch, probably better than most free IDEs out there, including my current hot favorite, Code::Blocks. I would even go as far as saying that in some situations it is better than even Visual Studio Express. The Navigator and the class display windows are pretty snappy. Any addition or change made to the code is reflected very quickly. On the intellisense front, it deserves a 7 out of 10. The real-time syntax checker also deserves praise. Oh, how I miss these things in VS Express 🙁 !! Small things go a long way in enhancing productivity, and Netbeans is by far the best among the free IDEs in that regard.

Netbeans has a lot of modules with which you can extend the functionality of the IDE, and the community support is equally good. I would seriously recommend this IDE to all those who want a free IDE for C++. Note that Netbeans by default supports only the GNU toolset, meaning you won’t be able to use compilers from Microsoft, Intel, Borland and others. The debugger used is gdb, but the layout and setup of the debugging GUI under the IDE can probably rival any other IDE for completeness.

So what’s to complain about? Nothing really; but, just to nitpick, the fonts look a bit messy. There is no hinting on the fonts, and the display does look a bit drab. No hinting also means I have to use a larger font size than I normally do under VS and Code::Blocks. Talking about Code::Blocks, for me C::B still edges out in front of Netbeans just because it has a built-in GUI designer for wxWidgets, and Netbeans doesn’t; but that is probably just me. I hope someone writes one soon 😀 . Netbeans is too good an IDE to ignore, and I must say I am impressed with it. It sure seems OSS IDEs are rapidly closing the gap with commercial ones.

Tryst with video recording.

Shooting a movie for the Doofus game turned out to be more than a headache; a bad case of migraine, I must say. Well, it all began soon after releasing the game. The logical next step was to shoot a movie/video to put on Youtube. What was supposed to be a 2-hour job turned out to be a lot harder than I had anticipated. Most screen-capture utilities do a pretty good job of capturing screen movies; however, what I failed to realize is that most of them are hopeless at capturing Direct3D or OpenGL rendered visuals within a game. I am extremely disappointed with the capture software that is available for recording an in-game movie. I tried several applications, both free and commercial, but all of them turned out to be poor: either extremely slow or extremely buggy.

In the end I had to manually write an AVI capture facility into the engine code; i.e. physically grab the back-buffer, StretchRect it into a texture, download it off the GPU and store its contents into an AVI file via a bitmap, frame by frame. Similarly with the music and game sounds, for which I had to code in wave capture in OpenAL. Whew, done! Unfortunately not all went as planned. I soon realized that the video and audio streams in the recorded AVI file went completely out of sync. That’s because the game’s frame-rate varies considerably while playing, whereas the sound is always played at the same rate. The problem, unfortunately, is that unlike the game, the AVI file’s frame-rate is always fixed. So after a one-minute shoot, I could clearly notice a mismatch between video and sound. I tried to correct the problem, but it still persists. That said, at least the results of the video capture were better than any 3rd-party application I had tried before. So it wasn’t a total waste of time.
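For what it’s worth, one common way to tame that drift (a sketch of mine, not the engine’s actual code; the function and variable names are made up) is to duplicate or drop captured frames so the fixed-rate AVI always covers the wall-clock time each rendered frame spent on screen:

```cpp
#include <cassert>

// Decide how many copies of the just-captured frame to write into a
// fixed-rate AVI stream. `accumulator` carries the fractional frame debt
// between calls, `frameTime` is the wall-clock seconds the rendered frame
// was on screen, and `outputFps` is the AVI's fixed frame rate.
int framesToWrite(double &accumulator, double frameTime, double outputFps)
{
    accumulator += frameTime * outputFps;   // output frames owed so far
    int n = static_cast<int>(accumulator);  // whole frames we can emit now
    accumulator -= n;                       // keep the fractional remainder
    return n;
}
```

At a fixed 25 fps output, a slow frame that stayed on screen for 80 ms gets written twice, while a burst of fast frames yields occasional zeros; either way the video track’s length stays matched to the steadily recorded audio.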

So yeah, I could shoot video clips, albeit not as good as I would have liked. I wanted a 1024×768 video, and all I could manage was a 640×480 one at pretty moderate quality, given that all the streaming was done into an MPEG-4 compressed stream and there was a noticeable loss in quality. Then came the next challenge: editing the video into a full streaming movie. Movie Maker was the only free option available, and the app is not too difficult to use. However, the app encodes videos only in WMV format, and I couldn’t locate an MPG, AVI or FLV option. That meant I needed to convert the movie to a Flash movie (FLV) so it could be streamed off the Internet using an SWF flash plugin. Bah! WTF! Well, it turns out ffmpeg can re-encode movie files to most formats, including FLV, and it’s free. Thank you, ffmpeg.

Then it was Youtube. Well, it seems that when you upload a video to Youtube, the server converts and re-encodes the uploaded video file using really poor quality compression. I am not sure which settings the FLV encoder on Youtube uses, but the results turned out to be a blocky, pixelated mess. I guess, after so many conversions and format switches, the video quality on Youtube turned out to be pretty poor. You can compare the quality with the ones on the Doofus website (the larger one here) and you will understand what I mean.

Bah! The next time I am directly streaming content into an external HD video recorder via the TV-out option of the video card to avoid such craziness!

Qt to go LGPL.

That’s really great news! Qt, the open-source and cross-platform tool-kit/framework from Nokia (formerly from Trolltech), is going to be released under the more liberal LGPL license. What this means is that you can now use Qt in any of your projects provided you agree with the LGPL. Well, does it also mean that we could finally see all those wonderful KDE apps ported across platforms? I sure hope so. KDE was built on top of Qt and shares a lot with it, so it’s fair to assume that KDE too could benefit from this move.

Nokia states that having Qt under the LGPL will allow “wider adoption”, and it may very well turn out that way. The earlier GPL license was, in my opinion, hindering the adoption of the toolkit, and this is a welcome development indeed. Qt is a very polished GUI toolkit, there is no denying that; however, the license may not be the only reason why developers choose other tool-kits/frameworks over Qt.

I have used Qt quite a lot in the past, both for commercial and open-source development. However, it’s been some time since I last dabbled with the toolkit/framework, and over the years I have slowly moved on to other tool-kits like wxWidgets. I haven’t been too fond of Qt’s moc-compiler thing, which can be a pain to work with when the project size gets large. Having said that, one can’t dismiss the fact that Qt is probably the leading cross-platform toolkit out there. It provides a huge number of widgets and a myriad of functionality that would have to be rewritten or re-invented if one were to use any other toolkit. It offers a strong development environment and an equally strong GUI designer, often missing in most other tool-kits. It has a proven legacy and is used by companies big and small for almost all types of GUIs.

Would I switch to Qt if it were LGPL? No, probably not. I am perfectly happy with the Code::Blocks and wxWidgets combo, and I don’t see any reason to move to Qt. Most of my projects use pretty complex but consistent UIs, and wxWidgets serves me pretty well in that regard. The game builder I am currently working on fits pretty nicely with the existing wxWidgets framework, and the toolkit offers me more than what I need. So I personally see no reason to switch.

Is it another year already?

A very Happy New Year to all. A bit belated, I know, but I was kinda busy doing nothing. Well, not really. Yeah, I have been taking time off, but I was also busy with other activities, most importantly the marketing of the game.

So what was 2008 like? Well, for me it was pretty uninteresting. Not that I didn’t enjoy it; it’s just that there was precious little in the way of what I like to do best: research. Most of 2008 was spent on fixing bugs, play testing, hardware testing, level creation and solving some insanely complicated issues, issues that shouldn’t have been there in the first place, plus some unavoidable circumstantial problems. Most of the coding that was done was equally uninteresting. The majority of the time was spent on getting things working right with gameplay and design. Not the most pleasurable of things, I must say, at least not for me. That said, a lot of groundwork has been done w.r.t. the engine, most of which will not have to be repeated for some time to come. So that’s a big positive, something I can take away from 2008 as being extremely productive.

Having said that, the biggest hit of the year for me is of course the release of the game, which took far more time than I had initially anticipated. True, it turned out OK (great 😉 ) given the budget, time and resource constraints, but I would have liked to do more. Maybe all that was missed in this one can quickly be added to the next one. A causal analysis is due; however, I would like to hold off on that a bit longer, at least till we finish up with the final marketing parts, which I am currently focusing on. A part of last year was also spent in starting 3D Logic Software, and there were a lot of things that had to be done before we went online. Unfortunately, they accounted for a pretty big delay in the launch of the game.

On the tech front, 2008 was equally low-key. There were very few interesting developments. Most of the things that happened were evolutionary rather than revolutionary. On the OS front, XP still rules and will probably do so in 2009 as well. However, the year belonged to the underdog, Apple. Both their OS and their products have gained significant market share and will probably continue to do so in 2009. Linux has always been interesting, and 2009 will be no different. Linux grows from strength to strength in some areas and remains the same in others. If anything, I am looking forward to Linux in 2009; there are some interesting developments on the horizon.

In 2008 we saw a resurgence of the GPU battles, with ATI throwing in some impressive technology, and that’s a good thing. For the first time I am an owner of an ATI card (HD 4850), and though NVIDIA held on to the top spot (barely), ATI was close behind, even edging out in front at times during the year. Then again, we can’t forget general-purpose computing on the GPU. The year has been interesting for the GPU and GPGPU. Powerful cards with supercomputing capability were unveiled, and this year will see more power being packed into cards as the GPU titans clash with ever more powerful weapons at their disposal. Oh, and let’s not forget Intel here. Intel finally unveiled Larrabee, so you could very well have another titan arising in those battles.

Personal wish list for 2009.

  • Intel finally comes around to shipping a proper on-board GPU with at least good hardware T&L, and releases moderately good drivers.
  • Microsoft releases DirectX 11 for XP along with Vista and Windows 7.
  • OpenGL spec gets an overha….. well, forget it!
  • Linux gets a single package-management/installer system that everyone across the board adopts, and most importantly is easy to use and deploy.
  • The economic downturn ends.
  • All people in the world become sane and killing of innocent people stops completely.

That’s all for now 😀

Once again a Happy New Year.

It’s New, it’s Chrome, it’s Fast.

I know I have written too much about browsers on this blog, but I could not resist writing about the newest member of the Google family, aka the new browser, Chrome. OK, so the browser is pretty fast, no kidding. I know how deceptive this can all be, but you can benchmark the Javascript speed here and see that Chrome leaves the competition biting the dust. The above test will freeze IE; for me it froze IE 7.0 for well over a minute, with a dismal score of 26. Safari clocked 93 on the test, Firefox 116, Opera 161, but Chrome clocked an amazing 1183. That just shows you how much speed the Javascript engine of Chrome delivers. You can feel it when you browse other sites as well. The rendering engine is on par with Firefox’s, and I couldn’t notice much difference between the two browsers when Javascript was not around.

The other thing I love about this browser is the amount of real estate it gives you. Gone is the menu-bar, and the status-bar has shrunk down to a small strip which appears only when needed. These are the kinds of GUI innovations that I really appreciate. Logical, since most of us rarely use browser menus when we surf; at least I don’t. A rare trip to the “Preferences” section to clear browsing history is all I need the menus for anyway, so I am not too bothered by the fact that the menus in Chrome have been shrunk down to two little corner drop-down buttons. As a matter of fact, I have customized FF (Firefox) to do the same using an extension. As expected, the navigation bar has tight integration with the search engine, and you will find suggestions popping up as you type, drawn directly from Google search. There is still some work to be done on this front, though; it would be wonderful to see something along the lines of what Yahoo has on its search.

The things that I really miss in Chrome are AdBlock and NoScript. I hope Google will allow some way of adding extensions to Chrome so people can come up with all those amazing things that FF currently has. In fact, I miss the whole extensions ecosystem that is so popular under FF. I also could not find an RSS reader or any method of importing RSS/ATOM links into the browser. I also felt that the status bar was a bit short and could not display longer web-links. Chrome crashed on me once, and ironically it was on Google’s very own site, Google Books; yes, I was browsing Chrome’s very own book, Google Chrome, when that happened. It somewhat flies in the face of the claim that the application won’t crash if one instance crashes. While Google has touted the memory management of Chrome as great, the browser does take up a significant chunk of memory with even a modest number of tabs open. It’s certainly a lot more than FF, about 20-40% or so more for the same number of tabs.

For some reason the browser continuously uses about 60 to 70% of both CPU cores on my machine, and that happens even when the machine is sitting idle and there are no interactions with the browser. Interestingly, I also found a lot of memory swapping taking place after the browser ran for 2 hours or more, and the memory usage keeps increasing steadily. That would somewhat contradict what was written in the Chrome book about low memory usage. The thing worth mentioning here is that each tab of the browser is a separate process, meaning every tab is actually a separate instance of the browser running.

In the end, Chrome is still a beta product, and I would expect significant improvements in the coming versions. I would also hope that Google will at the very least release a Linux version of the browser. The speed of the browser is truly impressive; however, it is still rough around the edges. The interface is good, but that’s just my taste; I always prefer a minimalistic approach when it comes to GUI. I am, however, not sure how others may view this. I would conclude by saying Chrome is a browser worth trying, and certainly a browser to keep your eye on.