Netbeans for C++?

I am a sucker for IDEs and I have been meaning to take Netbeans out for a test ride for some time now. I have been hearing a lot of good things about Netbeans ever since version 6.0 came out, and even more since version 6.5 arrived. OK, before I proceed further, let me point out that I have been using Netbeans only for C++ development and have never used the IDE for anything other than C++. So my comments may be somewhat inaccurate for other languages.

For one, the IDE seems solidly built and you can find your way around the place easily. Everything is where it should be, and there is no second-guessing as to what functionality a particular window, menu item or option provides. Clean and sweet. Netbeans has probably the cleanest interface among IDEs. This is one of its strongest points, and the fact that it is available on multiple platforms means it can be used by people who do cross-platform development.

I ran the IDE on Windows using the MinGW and MSYS systems, and it wasn't very difficult to set up given that I already had MinGW and MSYS installed. The IDE's build system runs through native Makefiles (*shiver*). I severely dislike the taste of Makefiles, especially maintaining them for large, cross-platform projects that have multiple dependencies. But the IDE manages Makefile issues nicely and I can live with that.

For those who don't already know, Netbeans is actually a framework and a platform for building applications. The IDE is an application built on top of this platform. The strength of the Netbeans platform is its module system. Platform modules are basically Java classes that interface with the Netbeans Open API. The IDE, too, can be extended via modules to add and enhance functionality. An open API like that also means Netbeans can be turned into almost any type of IDE simply by programming in functionality for a language.

The IDE has an excellent code completion feature, and I have to say it is surprisingly fast. The intellisense of the IDE is top notch, probably better than most free IDEs out there, including my current hot favorite, Code::Blocks. I would even go as far as saying that in some situations it is better than Visual Studio Express. The Navigator and the class display windows are pretty snappy; any addition or change made to the code is reflected very quickly. On the intellisense front, it deserves a 7 out of 10. The real-time syntax checker also deserves praise. Oh, how I miss these things in VS Express 🙁 !! Small things go a long way in enhancing productivity, and Netbeans is by far the best among the free IDEs in that regard.

Netbeans has a lot of modules with which you can extend the functionality of the IDE, and the community support is equally good. I would seriously recommend this IDE to anyone who wants a free IDE for C++. Netbeans by default supports only the GNU toolset, meaning you won't be able to use compilers from Microsoft, Intel, Borland and others. The debugger used is gdb, but the layout and setup of the debugging GUI under the IDE can probably rival any other IDE for completeness.

So what's to complain about? Nothing really; but, just to nitpick, the fonts look a bit messy. There is no hinting on the fonts and the text does look a bit drab. No hinting also means I have to use a larger font size than I normally do under VS and Code::Blocks. Talking about Code::Blocks, for me C::B still edges out in front of Netbeans simply because it has a built-in GUI designer for wxWidgets and Netbeans doesn't, but that is probably just me. I hope someone writes one soon 😀 . Netbeans is too good an IDE to ignore, and I must say I am impressed with it. It sure seems OSS IDEs are rapidly closing the gap with commercial ones.

Tryst with video recording.

Shooting a movie for the Doofus game turned out to be more than a headache; a bad case of migraine, I must say. Well, it all began soon after releasing the game. The logical next step was to shoot a movie/video to put on Youtube. What was supposed to be a two-hour job turned out to be a lot harder than I had anticipated. Most screen-capture utilities do a pretty good job of capturing screen movies; however, what I failed to realize is that most of them are hopeless at capturing Direct3D or OpenGL rendered visuals within a game. I am extremely disappointed with the capture software that is available for recording an in-game movie. I tried several applications, both free and commercial, but all of them turned out to be poor — either extremely slow or extremely buggy.

In the end I had to manually write an AVI capture facility into the engine code; i.e. physically grab the back-buffer, StretchRect it into a texture, download it off the GPU and store its contents into an AVI file via a bitmap, frame by frame. Similarly with the music and game sounds, for which I had to code in wave capture in OpenAL. Whew, done! Unfortunately not all went as planned. I soon realized that the video and audio streams in the recorded AVI file went completely out of sync. That's because the game's frame-rate varies considerably while playing, whereas the sound is always played at the same rate, and unlike the game, the AVI file's frame-rate is fixed. So after a one-minute shoot, I could clearly notice a mismatch between video and sound. I tried to correct the problem, but it still persists. That said, at least the results of the video capture were better than any 3rd-party application I had tried before, so it wasn't a total waste of time.
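For the curious, the per-frame capture path looks roughly like the sketch below under Direct3D 9. This is only a minimal illustration of the idea, not the engine's actual code; the function name and the AVI-writer call are hypothetical placeholders.

```cpp
// Hypothetical per-frame capture sketch for Direct3D 9.
// Assumes smallRT is a render-target surface at the capture resolution and
// sysMemSurface is an offscreen-plain surface in D3DPOOL_SYSTEMMEM with the
// same size and format.
#include <d3d9.h>

void CaptureFrame(IDirect3DDevice9 *device,
                  IDirect3DSurface9 *smallRT,
                  IDirect3DSurface9 *sysMemSurface)
{
    IDirect3DSurface9 *backBuffer = NULL;
    if (FAILED(device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer)))
        return;

    // Downscale/copy the back-buffer on the GPU.
    device->StretchRect(backBuffer, NULL, smallRT, NULL, D3DTEXF_LINEAR);

    // Pull the frame off the GPU into system memory.
    if (SUCCEEDED(device->GetRenderTargetData(smallRT, sysMemSurface)))
    {
        D3DLOCKED_RECT lr;
        if (SUCCEEDED(sysMemSurface->LockRect(&lr, NULL, D3DLOCK_READONLY)))
        {
            // lr.pBits now points at the pixel data; hand it to the AVI
            // writer (e.g. a VfW AVIStreamWrite path) as one frame.
            // WriteAviFrame(lr.pBits, lr.Pitch);   // hypothetical helper
            sysMemSurface->UnlockRect();
        }
    }
    backBuffer->Release();
}
```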

So yeah, I could shoot video clips, albeit not as good as I would have liked. I wanted a 1024×768 video and all I could manage was a 640×480 one at pretty moderate quality, given that everything was streamed into an MPEG-4 compressed stream and there was a noticeable loss in quality. Then came the next challenge: editing the video into a full streaming movie. Movie Maker was the only free option available, and the app is not too difficult to use. However, it encodes videos only to WMV, and I couldn't locate an MPG, AVI or FLV option. That meant I needed to convert the movie to a Flash video (FLV) so it could be streamed off the Internet using a SWF flash plugin. Bah! WTF! Well, it turns out ffmpeg can re-encode movie files to most formats, including FLV, and it's free. Thank you, ffmpeg.
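For reference, the conversion itself is basically a one-liner. Something along these lines should do it (the file names are placeholders; the -ar 44100 resamples the audio because the FLV container only accepts a few specific sample rates):

```
ffmpeg -i doofus_movie.avi -ar 44100 doofus_movie.flv
```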

Then it was Youtube. Well, it seems when you upload a video to Youtube, the server converts and re-encodes the uploaded file using really poor quality compression. I am not sure which format the FLV encoder on Youtube uses, but the result turned out to be a blocky, pixelated mess. I guess, after so many conversions and format switches, the video quality on Youtube was bound to end up pretty poor. You can compare the quality with the videos on the Doofus website (the larger one here) and you will understand what I mean.

Bah! The next time I am directly streaming content into an external HD video recorder via the TV-out option of the video card to avoid such craziness!

Qt to go LGPL.

That's really great news! Qt, the open-source and cross-platform toolkit/framework from Nokia (formerly from Trolltech), is going to be released under the more liberal LGPL license. What it means is that you can now use Qt in any of your projects provided you comply with the LGPL. Well, does it also mean that you could finally see all those wonderful KDE apps ported across platforms? I sure hope so. KDE was built on top of Qt and shares a lot with it, so it's fair to assume that KDE too could benefit from this move.

Nokia states that having Qt under the LGPL will allow "wider adoption", and it may very well turn out that way. The earlier GPL license was, in my opinion, hindering the adoption of the toolkit, and this is a welcome development indeed. Qt is a very polished GUI toolkit, there is no denying that; however, the license may not be the only reason why developers choose other toolkits/frameworks over Qt.

I have used Qt quite a lot in the past, both for commercial and open-source development. However, it's been some time since I have dabbled with the toolkit/framework, and over the years I have slowly moved on to other toolkits like wxWidgets. I haven't been too fond of Qt's moc-compiler, which can be a pain to work with when the project size gets large. Having said that, one can't dismiss the fact that Qt is probably the leading cross-platform toolkit out there. It provides a huge number of widgets and a myriad of functionality that would have to be rewritten or re-invented if one were to use any other toolkit. It offers a strong development environment and an equally strong GUI designer, often missing in most other toolkits. It has a proven legacy and is used by companies big and small for almost all types of GUIs.

Would I switch to Qt if it were LGPL? No, probably not. I am perfectly happy with the Code::Blocks and wxWidgets combo and I don't see any reason to move to Qt. Most of my projects use pretty complex but consistent UIs, and wxWidgets serves me well in that regard. The game builder I am currently working on fits nicely with the existing wxWidgets framework, and the toolkit offers me more than what I need. So I personally see no reason to switch.

Is it another year already?

A very Happy New Year to all. A bit belated, I know, but I was kinda busy doing nothing. Well, not really. Yeah, I have been taking time off, but I was also busy with other activities, most importantly marketing the game.

So what was 2008 like? Well, for me it was pretty uninteresting. Not that I didn't enjoy it; it's just that there was precious little in the way of what I like to do best: research. Most of 2008 was spent on fixing bugs, play testing, hardware testing, level creation and solving some insanely complicated issues that shouldn't have been there in the first place, plus some unavoidable circumstantial problems. Most of the coding that was done was equally uninteresting. The majority of the time was spent on getting things working right with gameplay and design. Not the most pleasurable of things, I must say, at least not for me. That said, a lot of groundwork has been done w.r.t. the engine, most of which will not have to be repeated for some time to come. So that's a big positive, something I can take away from 2008 as being extremely productive.

Having said that, the biggest hit of the year for me is of course the release of the game, which took far more time than I had initially anticipated. True, it turned out OK (great 😉 ) given the budget, time and resource constraints, but I would have liked to do more. Maybe all that was missed in this one can quickly be added to the next one. A causal analysis is due; however, I would like to hold on to that a bit longer, at least till we finish up the final marketing parts I am currently focusing on. A part of last year was also spent starting 3D Logic Software, and there were a lot of things that had to be done before we went online. Unfortunately they accounted for a pretty big delay in the launch of the game.

On the tech front, 2008 was equally low-key. There were very few interesting developments; most of what happened was evolutionary rather than revolutionary. On the OS front, XP still rules and will probably do so in 2009 as well. However, the year belonged to the underdog, Apple. Both their OS and their products gained significant market share and will probably continue to do so in 2009. Linux has always been interesting and 2009 will be no different. Linux grows from strength to strength in some areas and remains the same in others. If anything, I am looking forward to Linux in 2009; there are some interesting developments on the horizon.

In 2008 we saw a resurgence of the GPU battles, with ATI throwing in some impressive technology, and that's a good thing. For the first time I am an owner of an ATI card (HD 4850), and though NVIDIA held on to the top spot (barely), ATI was close behind, even edging out in front at times during the year. Then again, we can't forget general-purpose computing on the GPU. The year has been interesting for the GPU and GPGPU. Powerful cards with supercomputing capability were unveiled, and this year will see more power being packed into cards as the GPU titans clash with better and more powerful weapons at their disposal. Oh, and let's not forget Intel here. Intel finally unveiled Larrabee, so you could very well have another titan arising in those battles.

Personal wish list for 2009.

  • Intel comes around to finally putting a proper on-board GPU with at least good hardware T&L and releases moderately good drivers.
  • Microsoft releases DirectX 11 for XP along with Vista and Windows 7.
  • OpenGL spec gets an overha….. well, forget it!
  • Linux gets a single package-management/installer system that everyone across the board adopts, and most importantly is easy to use and deploy.
  • The economic downturn ends.
  • All people in the world become sane and killing of innocent people stops completely.

That's all for now 😀

Once again a Happy New Year.

Finally some free time.

Ah, finally some free time on my hands these days, and you bet I am putting it to no use! 😀 For the past couple of days it's been no coding, no reading, no working on the game. I must say it's something really strange for me, and I feel almost guilty being away from the computer and doing nothing. However, it's something I am forcing myself to do. Yeah, forcing myself to do nothing 😀 ! I am also making it a point to sleep on time; early to bed and early to rise. OK, late to rise, which will perhaps turn me into a total slob soon, but I am kinda enjoying it for the moment.

It's strange that I am currently not playing any games either. I know Fallout and Fable II are out, not to mention Left 4 Dead, and I have a new HD 4850 in my PC, but I haven't gathered up the interest to go out and play anything. Very strange; I must be coming down with something. The only thing I am currently enjoying is listening to music. Knowing myself, this probably won't continue much longer. I am sitting on a pile of games that are just shouting out to be played, and I will probably start with something very soon.

Hmm.. which one should I pick? Maybe an FPS; haven't played one of those in quite a while. Maybe shoot some brains out of some poor alien bas***ds. It would be a kinda nice end to the year!

…7..6…5..4..3.2.1……Launched!

Doofus Longears The Game.

😀 Yes, we have launched the game. Find its downloadable demo at its very own website (www.doofuslongears.com).

10…9…8…

A lot has happened on the game front as well. First, let me start off by letting people know… I have launched 3D Logic Software. That will be the business name under which the Doofus game/s will be released.

As far as the game release goes, the countdown has begun! I have not had too much time to update the blog since we were all working hard at the final push towards the finish line. Yes, the Doofus Game is to be released very soon. Keeping my fingers crossed.

Sorry for the long absence.

First, apologies for being off the blog radar for a few days. I don't think I have been this silent on the blog for so long. I guess people must be wondering what I was up to. I was very busy with a lot of stuff on the game and business front. Well, a lot has happened in the past two weeks. That's really an understatement considering I live in Mumbai, and I think the world knows what the city has seen in the past 2-3 weeks.

The city is still coming to terms with the attacks on 26/11 and is still on edge, but as with everything else life goes on. Maybe for nothing else, but for mere necessity; because it has to. I only hope and pray that the authorities and the people in charge take due and adequate measures to prevent such dastardly acts from happening again. Condolences to the families who lost their loved ones.

Supercomputer@Home

The past few weeks have seen a spate of newswires from leading GPU manufacturers showcasing GPUs with potential supercomputing capabilities, some even going as far as saying that a powerful GPU could be used to achieve the power of a supercomputing cluster. Building a supercomputer at home might sound like an impossible task, but the fact is, very soon you could well have one sitting on your desk. No, this is not one of those machines that is just a bit faster than the previous or current generation PC you already have. This machine could well pack a mother lode of computing power, yet it might not feel all that fast while running your GNOME, KDE or Vista UI, or for that matter your Office applications. In fact it might not feel any different at all to the average user. However, hidden beneath the UI exterior will be a system that could be a power monster, at least for applications that have been designed to take advantage of what has become the latest buzzword in programming: parallel computing.

There has been a lot of talk about how a supercomputer could be built using top-of-the-line GPUs, but until very recently GPGPU was a difficult thing to achieve. This was because most GPGPU solutions in the past involved tweaking existing graphics APIs (OpenGL and Direct3D) for GPGPU tasks, a method fraught with problems, mainly because graphics APIs were never designed with general-purpose computing in mind. That's why GPGPU technologies like CUDA were developed, and with their advent and an almost exponential increase in GPU power, the Supercomputer@Home has become a reality.

No, this is not another entry that praises GPGPU. Well, it is, but not entirely. Let's be fair: GPGPU is only a part of the solution. As I have repeatedly said in earlier entries, GPGPU has tremendous potential when applied to the correct problem. If you were to believe the GPU manufacturers, it would seem that the GPU is the answer to all the problems out there. The fact, however, remains that the GPU is only a part of the solution. GPUs are designed to address data parallelism very efficiently. The roots of this obviously lie in how graphics is processed. Graphics, or should I say modern game graphics, generally requires a lot of parallel data processing, and GPUs have evolved to address precisely that.

Current-generation GPUs have immense computing power, and most of us already know that. However, GPUs are not all that great at dealing with task-based parallelism. Simply put, GPUs are not designed to run several different tasks concurrently. So for problems involving a lot of separate tasks that would benefit from simultaneous execution, GPUs are of little help. However, if you have a large data set comprising many smaller data units and want to perform the same operation on each of those units, then using the GPU can give you a huge performance boost. The solution involves streaming the entire data set onto the GPU and executing the operation on all the units simultaneously. GPUs excel in such situations.

Which brings us to data streaming. Streaming data onto the GPU can be a costly operation and is best done in large chunks. Streaming data too often and in small chunks can easily offset the performance benefit the GPU has to offer. While this is not a limitation of the GPU per se, but rather of the bus, data transfers to and from the GPU should be kept to a minimum to achieve maximum throughput. Graphics programmers are already aware of this. OpenGL and Direct3D both encourage programmers to send as much data across to the GPU at one time as possible, in what is called "batching". Both APIs advise the programmer to stall the GPU as little as possible, and in many instances even go as far as adding a memory/resource overhead to achieve better throughput.
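As a rough illustration of batching, here is a minimal OpenGL sketch; it assumes an extension loader such as GLEW has been initialized to provide the buffer-object entry points, and it is purely illustrative rather than a complete renderer. The vertex data for many objects is uploaded to the GPU once, up front, and then drawn with a single call per frame instead of being pushed across the bus piecemeal.

```cpp
// Hypothetical batching sketch: one big upload, one draw call per frame.
#include <GL/glew.h>
#include <vector>

GLuint CreateBatchedBuffer(const std::vector<float> &vertices) // x,y,z triplets
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // One large transfer instead of many small ones every frame.
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(float),
                 &vertices[0], GL_STATIC_DRAW);
    return vbo;
}

void DrawBatch(GLuint vbo, int vertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);          // data already lives on the GPU
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // one call for the whole batch
    glDisableClientState(GL_VERTEX_ARRAY);
}
```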

After reading the above paragraphs, one must invariably ask the question, "Can all problems be efficiently data-parallelized?" The answer is no. Not every problem is amenable to efficient data parallelism. A lot of problems can be, but many computing problems can't effectively take advantage of data parallelism and therefore can't take advantage of the GPU in general. Also, if you have a small set of data and want to perform a lot of different operations on that same set, the GPU is of little help. Actually, it is your good old CPU that is designed for task parallelism. While a lot of hype has been going around about the latest-generation GPUs as a potential replacement for supercomputing clusters, the CPU has been left on the sidelines. Or has it?

The next question to ask is, "What role will the CPU play in your Supercomputer@Home?" Well, make no mistake about it: a very significant role indeed. In the time the GPU has been going from strength to strength, the CPU has been having a transition of its own. While things have moved on from the days when clock speeds were treated as marketing tools, the CPU itself has seen some significant development. Although it is nothing to get excited about, the CPU cannot be ignored, especially if you have to work on problems that involve multi-tasking or multi-threading. Most non-trivial programming problems require a mix of programming solutions. Some parts may require data-parallel solutions, others may require task-parallel ones. Then there are problems that can't be solved by either. Therefore the CPU will continue to play a critical role in any supercomputing system you may build.

It's completely possible to build a supercomputer at home today. But building a supercomputer is the easy part; taking advantage of all this power is a whole other story. It would involve an approach where the programmer splits his/her design so that the data-parallel parts are executed on the GPU while multiple parallel tasks are executed on the CPU. Yes, it would imply a design similar to modern game engines, where data is sorted, batched and sent across to the GPU in its entirety to be processed by the rendering system. While this happens, parts, or rather tasks, of the application can be executed in parallel via conventional multi-threading or parallel programming (OpenMP) to achieve maximum performance.
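To make the CPU side of that concrete, here is a minimal OpenMP sketch. It is purely illustrative, assuming hypothetical task functions rather than code from any real engine: a few coarse, independent tasks run in parallel, followed by a data-parallel loop, while the GPU chews on the frame data that has already been submitted.

```cpp
// Minimal sketch (not actual engine code): CPU-side task and data parallelism
// with OpenMP, running alongside GPU work. Compile with OpenMP enabled
// (e.g. -fopenmp with GCC).
#include <omp.h>
#include <vector>

static void UpdatePhysics() { /* ... hypothetical placeholder ... */ }
static void UpdateAI()      { /* ... hypothetical placeholder ... */ }
static void StreamAssets()  { /* ... hypothetical placeholder ... */ }

void UpdateFrameOnCPU(std::vector<float> &particleEnergies)
{
    // Task parallelism: a few coarse, independent jobs run concurrently.
    #pragma omp parallel sections
    {
        #pragma omp section
        UpdatePhysics();
        #pragma omp section
        UpdateAI();
        #pragma omp section
        StreamAssets();
    }

    // Data parallelism on the CPU: the same operation over many items.
    #pragma omp parallel for
    for (int i = 0; i < (int)particleEnergies.size(); ++i)
        particleEnergies[i] *= 0.99f;   // e.g. damp each particle's energy
}
```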

A machine with, say, a Tesla GPU or two HD 4870 GPUs connected via CrossFire, a new and upcoming Core i7 CPU from Intel and oodles of memory could very well be a machine with supercomputing capability. But where would one use so much power? Obviously computer games are one area. For a game developer like me, you just can't have too much power; I always manage to stress out everything I have on my machines, however high-end they may be. But seriously, where else could you use all this power? Someone who wants to do heavy-duty scientific calculations could indeed benefit from such a machine. Another use could be compression algorithms, especially video and audio compression. People in the video/audio editing business could also benefit from such setups. However, for the average Joe user, so much power is all but useless.

Gamepads, Joysticks! How do you play with those?

I integrated joystick support into the game engine a long time ago, but I never actually played the Doofus game using a joystick or a gamepad until now. One of the testers logged an issue last week saying that the game's camera movement was a bit slow with game controllers in general, so I decided to play the game myself with a joystick. For the record, I never play any game with any accessory other than the keyboard and mouse, and after my recent experience with the gamepad, I must say I missed the mouse quite a bit. Maybe it's just me, or maybe I have taken a strong disdain towards game controllers of any kind ever since my days with God of War; though I am a total fan of the GOW series, the experience with game controllers while playing that game was more than a little unpleasant. I think I have been playing games with the mouse for too long, maybe so much so that I have grown too accustomed to the keyboard and mouse. I truly don't know. However, and this could very well just be me, I find controlling the camera using the mouse far simpler and more intuitive than using a gamepad or joystick axis.
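As an aside, a "camera too slow on controllers" issue usually comes down to how the stick axis is mapped to camera rotation. Here is a small illustrative sketch; the names, constants and the squared-response curve are hypothetical, not the engine's actual code.

```cpp
// Illustrative sketch: map a normalized joystick axis value (-1..1) to a
// camera rotation speed, with a dead zone and a sensitivity multiplier.
float MapAxisToCameraSpeed(float axisValue,          // -1.0f .. 1.0f from the controller
                           float deadZone    = 0.15f,
                           float sensitivity = 3.0f)  // radians/sec at full deflection
{
    float magnitude = (axisValue < 0.0f) ? -axisValue : axisValue;
    if (magnitude < deadZone)
        return 0.0f;                                  // ignore small stick noise

    // Re-scale so movement starts smoothly just past the dead zone.
    float scaled = (magnitude - deadZone) / (1.0f - deadZone);
    float sign   = (axisValue < 0.0f) ? -1.0f : 1.0f;
    return sign * scaled * scaled * sensitivity;      // squared response for finer control
}

// Per frame, something like: cameraYaw += MapAxisToCameraSpeed(rightStickX) * deltaTime;
```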

I tried a lot of different games this week with a gamepad which, for the better part of this year, has sat inside the cupboard. I told myself, "It's just a matter of time before I get the hang of this thing." No chance! With every game I try it's the same story: I just give up after struggling with the controller for about 10 minutes. It's been like 3 days and I still can't control the Doofus game's third-person camera, which, by the way, is not at fault 🙂 . For me, controlling Doofus' third-person camera just seems a lot more natural with the mouse than with the gamepad. Not that I can't do it, it just feels a lot more comfortable with the mouse. Fortunately for people who dislike the mouse, Doofus does run perfectly well on any game controller.

Some people say game controllers are great for flight simulators and maneuvering vehicles; sorry, I haven't had time to play those. I can tell you FPS games are almost impossible to play: you can't aim with these things and you get fragged pretty easily. Maybe combat games fare better, but again I haven't had time to play those either. I ran the Tomb Raider demo I have on my system, and even there I found my gamepad to be more than a challenge.

So, after this bout of testing, the gamepad goes right back where it came from. OK, maybe I have ranted enough for one post!