Beyond C++

C++ has been the most successful, and probably the most widely used, programming language in the gaming industry to date. Looking at the software industry as a whole, countless projects owe their existence to the language, and many more will no doubt be built with it. Universities around the world still churn out thousands of CS grads with C++ knowledge, so there will be no shortage of programmers who know the language, at least for some time to come. So why is C++ so popular? The reasons are many (countless blogs touch on this), but at the end of the day it boils down to one simple fact: it gets the job done! Yet while employing C++ made good business sense some years ago, it makes far less sense when we consider how development will need to scale in the future. C++ in its current form is, well, simply inadequate. Many will agree that C++ has probably outlived its time and that the programming community as a whole should move away from it. That is easier said than done, though, and the real question is: what options do we actually have that radically change the way C++ operates? Very few, I would say. Before you raise your hand and call out <insert your favourite language here>, let’s look at some of the challenges facing future game development, and indeed future software development as a whole.

Let’s first look at the C++ language itself. It is well known that C++ has its faults. It is not an easy language to learn, and it is an even more difficult one to master compared to most other languages. It takes a substantial amount of time and experience to understand the intricacies of the language, and even more time and effort to fully grasp the quirks and subtleties involved in building software with it. It typically takes quite a while before an average programmer becomes truly productive with C++; the learning curve is steep and not for the faint-hearted. Beyond this, the language has increasingly come under flak for allowing seemingly undefined behavior without complaining too much: it does very little to deter flawed assumptions about some very basic constructs that contradict how things actually work. Even with proper planning, strict development cycles and stringent coding practices, software development in C++ is difficult. Memory management is a bane and can cause unexpected and unwarranted catastrophes, as is known all too well in the industry.
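
To illustrate the point, here is a contrived example of my own (not from any particular codebase) showing how little the compiler objects to a classic lifetime mistake:

    // A contrived example of a bug C++ compiles without complaint: keeping a
    // reference into a vector and then appending to the vector.
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> scores = {10, 20, 30};
        const int& first = scores[0];      // reference into the vector's buffer

        for (int i = 0; i < 1000; ++i)
            scores.push_back(i);           // may reallocate, invalidating 'first'

        // Undefined behavior: 'first' may now dangle. The program might print
        // 10, print garbage, or crash, and the compiler need not warn us.
        std::cout << first << "\n";
        return 0;
    }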

There is a growing number of languages that address the shortcomings of C++. Be it Java, C#, Python or most newer languages, all try to fill the gaps C++ leaves open, and most of them do a good job of it. Yet, for all its faults, C++ still stands out as a viable game-development choice, for two primary reasons. First, it has vastly more libraries, code dumps (I am talking about game development only), engines, examples and everything else, really; that alone nearly wins the argument. True, many libraries have bindings to other languages, but most of those seem inadequate or poorly maintained. Besides, there are far more examples of cutting-edge technology (especially graphics) written in C++ than in all other languages put together. Second, it is easier to find programmers for C++ than for any other language, Java being the exception. Things are changing, though, and there are concerted efforts to promote other languages and platforms (XNA, PyGame) as viable game-development alternatives. For now, however, all of them remain some distance from challenging C++ for the number-one position.

The points above in support of C++ are non-trivial; they go a long way toward outweighing the demerits of building a game with the language. So the real question is: do we have any viable options beyond C++? The answer lies somewhere between a complete YES and a total NO. As things stand today, the best scenario is probably to build the core engine in C++ and put a scripting system on top of it, be it Lua, Squirrel, Python, or whatever. That way you find a middle ground between reusing existing code and enabling rapid development and prototyping. Many engines and games take this route, and there is little doubt that it pays off: plenty of shipped games already use a scripting language for rapid prototyping and, in some cases, for building large sections of the game. Having a scripting language on top of the engine core is clearly a step in the right direction.
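
A minimal, hypothetical sketch of the idea using the Lua C API (the engine function and the inline script are invented for illustration; error handling is kept to a bare minimum):

    // Minimal sketch: a C++ "engine" function exposed to Lua script code.
    // Assumes Lua 5.x is installed; spawn_enemy and the inline script are
    // invented for illustration.
    #include <iostream>
    #include <lua.hpp>

    // Engine-side function callable from scripts.
    static int spawn_enemy(lua_State* L) {
        double x = luaL_checknumber(L, 1);   // first script argument
        double y = luaL_checknumber(L, 2);   // second script argument
        std::cout << "engine: spawning enemy at " << x << ", " << y << "\n";
        return 0;   // number of values returned to the script
    }

    int main() {
        lua_State* L = luaL_newstate();
        luaL_openlibs(L);
        lua_register(L, "spawn_enemy", spawn_enemy);

        // Gameplay logic lives in script and can change without recompiling.
        if (luaL_dostring(L, "for i = 1, 3 do spawn_enemy(i * 10, 5) end") != 0)
            std::cerr << "script error: " << lua_tostring(L, -1) << "\n";

        lua_close(L);
        return 0;
    }

The point is the division of labour: the engine keeps the performance-critical code, while the script layer gets the fast iteration.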

Scripting languages solve only some of the problems. They do their part of the job, and they do it well. However, there are issues in game development that call for newer, more radical approaches. The challenge facing game development in the future is building an engine that can effectively and efficiently use parallel programming/computing techniques (Invasion of the multi-core machines). Current-generation programming techniques fall short of addressing this, and, more importantly, most of the newer approaches that could address the multi-core problem are simply too complicated to implement effectively in C++. Efforts are under way to find radical new solutions (C++ STM), but so far they look good only on paper and still seem too cryptic for production use. Using a CPU’s multiple cores effectively will probably be the biggest challenge for the next-generation engine designer. The more natural fit for the multi-core and parallel-programming problem is a functional programming language. Because functional code is typically free of side effects, it is far easier to parallelize than imperative code. Mixing functional and imperative styles, however, can be an equally daunting task, and, as the paragraphs above suggest, there will still be a lot of C++ code that needs some way of interacting with whatever functional language is used.
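
As a small illustration of why the side-effect-free style parallelizes so readily, here is a hypothetical sketch; it uses the parallel algorithms that only arrived later, in C++17, and the function name is invented:

    // A pure function mapped over independent data needs no locks: because
    // shade() touches no shared state, the library may split the work across
    // any number of cores without changing the result.
    #include <algorithm>
    #include <cmath>
    #include <execution>
    #include <vector>

    float shade(float x) {                  // output depends only on input
        return std::sqrt(x) * 0.5f + std::sin(x);
    }

    int main() {
        std::vector<float> in(1000000);
        for (std::size_t i = 0; i < in.size(); ++i)
            in[i] = static_cast<float>(i);

        std::vector<float> out(in.size());
        std::transform(std::execution::par, in.begin(), in.end(),
                       out.begin(), shade);   // parallel, order-independent
        return 0;
    }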

It’s debatable whether going “strictly functional” will solve all the challenges the future throws at us. A more plausible scenario is to parallelize specific portions of the engine or game, either by using a functional language or through subsystem-level parallelism. That would allow existing (C/C++) code to be reused, but there are still challenges to overcome even with such approaches. Subsystem parallelism means running each subsystem (physics, renderer, logic, AI, sound, network…) in its own thread or threads. This is a very optimistic approach, since subsystems tend to overlap and in some cases critically depend on one another. Such a system could be achieved with existing code too, but I remain very skeptical that it would actually work on the ground. Another approach is job-based parallelism: divide your most CPU-intensive tasks into jobs and have a kernel system marshal them by priority, something similar to what an OS does (a bare-bones sketch follows this paragraph). This seems the best way to shoot for parallelism with existing mechanisms, although it requires you to recast your design in terms of jobs, and that can prove challenging. Having a functional language as the scripting system (on top of a C/C++ engine) is another idea to think about. I am not really sure how helpful it would be and can’t comment, since I haven’t tried such a radical approach myself (maybe I ought to give it a shot). But it seems very much possible to have a functional language as a scripting system, and in theory it could be used to parallelize sections of the game or engine.
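
Here is a bare-bones sketch of the job-kernel idea described above. All names are invented; a production job system would add job dependencies, work stealing and lock-free queues:

    // CPU-heavy work is split into jobs, and a small "kernel" dispatches
    // them to worker threads by priority.
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    struct Job {
        int priority;                 // higher runs first
        std::function<void()> work;
        bool operator<(const Job& o) const { return priority < o.priority; }
    };

    class JobKernel {
    public:
        explicit JobKernel(unsigned workers) {
            for (unsigned i = 0; i < workers; ++i)
                threads_.emplace_back([this] { run(); });
        }
        ~JobKernel() {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                done_ = true;
            }
            cv_.notify_all();
            for (auto& t : threads_) t.join();
        }
        void submit(int priority, std::function<void()> work) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                jobs_.push({priority, std::move(work)});
            }
            cv_.notify_one();
        }
    private:
        void run() {
            for (;;) {
                Job job;
                {
                    std::unique_lock<std::mutex> lock(mutex_);
                    cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                    if (done_ && jobs_.empty()) return;
                    job = jobs_.top();
                    jobs_.pop();
                }
                job.work();   // e.g. a physics step or a pathfinding batch
            }
        }
        std::priority_queue<Job> jobs_;
        std::mutex mutex_;
        std::condition_variable cv_;
        bool done_ = false;
        std::vector<std::thread> threads_;
    };

Usage then reduces to something like kernel.submit(10, [] { step_physics(); }); (step_physics being a hypothetical engine function), with priorities deciding which work the cores pick up first.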

So it would seem C++ might just survive in the game engine of tomorrow, although in a less prominent form than it holds today. Over the years C++ may eventually fade out, but its part can’t be totally ruled out. The transition from C++ to any other language will be slow, and perhaps tumultuous, with teams opting for a hybrid approach rather than rebuilding existing functionality from scratch. As technology progresses and CPUs with hundreds of cores become commonplace, we should see the popularity of C++ waning, replaced by some other (maybe functional) language. As time goes on, C++ may well become increasingly irrelevant as more and more libraries are ported to newer, more modern languages, or as newer, more efficient ones take their place.

10 thoughts on “Beyond C++”

  1. Pingback: Pages tagged "languages"

  2. I agree with Susheel. C++ is assumed to be a tough language, mostly because it is unmanaged. The world is moving towards managed languages (namely Java and the .NET languages), though when it comes to sheer performance C++ is still preferred and chosen over the easier-to-use managed languages.

    Not only will the gaming industry continue using C++; other domains will too. The financial industry (investment banks, traders and other investment houses) prefers C++ for its servers to this day.

  3. continued….

    The acceptable latency for notifying traders of any trade in the financial industry is now fixed at 200 ms. With such high expectations, the managed languages are simply unable to deliver the performance, owing to their wrappers over the system APIs.

    With the world, and the younger generation, turning to managed development for its ease of memory management and its security, good C++ developers may become scarce. But C++ will surely stay for years to come, until hardware improves enough to erase the performance gap between the two modes of development, managed and unmanaged.

  4. (A general note on distributed computing and C++, as it relates very much to what is said in the two comments above.)
    Even with technologies like asynchronous IO, fast real-time network stacks and distributed computing, we are still locked into the shortcomings of C++. As experience tells us, building a distributed, networked, real-time system is non-trivial, and that’s putting it mildly. For pure server-side solutions I would definitely look at other approaches. Any solution that uses C++ for such a task will be fraught with issues, and I am sure most of them will be C++-centric. There is a good reason for that: C++ fares very badly when it comes to concurrency, any concurrency. Distributed computing is a form of parallel computing, so my arguments above on parallel computing apply verbatim here. I would strongly recommend Erlang for anything that requires a distributed model. It is far, far easier to build highly efficient distributed solutions in Erlang than in C++. Why? Simple: Erlang was designed from the ground up to address the distributed-computing issues that are a bane in C++, and yes, it is a functional language, which allows it to be parallelized more easily. Things like symmetric multiprocessing are also very natural in Erlang. So as CPU cores scale, so does the efficiency of your application, and it’s all for free!

    @ Vaionline
    Yes, you are probably correct: in most real-time distributed and/or networked applications, latency is probably the biggest issue. That is exactly why I mentioned Erlang for server-side/distributed solutions. For a GUI client (like the ones we used to work on), a hybrid approach of C++ integrated with a scripting language or C# can prove beneficial. Most scripting languages can be tied to C++ (and so can C#), and this pays off in the long run. Integrating a scripting language is not very difficult, and you can do it in stages, so the entire application need not be scripted all at once.

    As you say, a fully managed-code application might not be the ideal way to go. However, it would be wrong to discount managed code entirely. While I don’t deny that efficiently written managed code can be slower than efficiently written C/C++ code, the benefits of technologies like managed code and JIT, or for that matter many scripting languages, are not something to ignore, especially since they allow rapid prototyping and faster development cycles.

    That is exactly why I lean strongly towards hybrid approaches, where a solution uses more than one technology. Instead of having ideologies fight each other (C++ vs C#, or C++ vs Java…), we use them to complement each other. It’s basically simple: build the regions of your solution where speed is paramount with something like C++ or Erlang (which will eventually be a small fraction of the whole) and the other parts (the GUI, for instance) with a scripting language (Python, C#, Lua, whatever). If you ever find a section of your program or GUI to be slow (say, an update window showing real-time data), nothing prevents you from reimplementing just that piece in C++ for a speed boost; a sketch of this follows below. It’s a “prototype, build and enhance” system, and it is also excellent where you need to churn out rapid prototypes for demonstration purposes. The best of both worlds, I would say.
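
    As a purely illustrative sketch of that last point, here is how a slow, numeric piece of a Python GUI could be pushed down into C++ (using pybind11, one of several binding libraries, with invented module and function names throughout):

        // A hot loop reimplemented in C++ and exposed back to Python.
        #include <pybind11/pybind11.h>
        #include <pybind11/stl.h>
        #include <vector>

        // The numeric hot spot: e.g. smoothing a real-time data window.
        std::vector<double> smooth(const std::vector<double>& samples) {
            std::vector<double> out(samples);   // endpoints kept as-is
            for (std::size_t i = 1; i + 1 < samples.size(); ++i)
                out[i] = (samples[i - 1] + samples[i] + samples[i + 1]) / 3.0;
            return out;
        }

        PYBIND11_MODULE(fastpath, m) {
            m.def("smooth", &smooth, "Three-point moving average in C++");
        }

        // From Python, the scripted GUI stays as it is and simply calls:
        //   import fastpath
        //   smoothed = fastpath.smooth(raw_samples)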

    In the end we also have to learn to adapt. As you rightly point out, really good C++ programmers are scarce, and with the demand for software increasing every day, this is only going to get worse. We have to live with that, and I think such a hybrid approach is a partial solution to the problem of programmer scarcity, albeit not a complete one.

    Nice to see your comments on the blog! 😀

  5. Bingo!! The hybrid approach will prevail till we get a definite solution. Hardware should soon play an important role in the determination. What’s your say on the same?

  6. It’s interesting with hardware as well; I have a couple of points here. We still haven’t reached a firm upper limit on CPU clock speeds, but we are pretty close. There was a time when hardware manufacturers used clock speed as a marketing tool, but that has slowly waned, and for good reason: the ’90s clock-rate race was meaningless, and Intel dug itself into a marketing hole with it. When it comes to packing transistors, Intel is already at 45 nm, with AMD hitting the 45 nm node any time now. We are also precariously close to the 16 nm boundary, beyond which the effects of quantum tunneling become significant enough to deter any further reduction in transistor size. In plain English: we don’t know if we can go further than 16 nm. We might, but it is as yet unknown, at least to me. That is exactly why multiple cores were introduced, so that chip manufacturers could, in theory, keep increasing computing power regardless of clock speed or transistor size. This is nothing new: the idea of multiple processors working in parallel has been around since the 1980s (supercomputers), and it is what the chip manufacturers are advocating for the future as well. You could very well see a processor with 16+ cores in the near future. What we don’t have is a way to scale our software development, using languages like C++, to take advantage of all that power. There was a time when upgrading the CPU provided an immediate speed boost; when we moved from 330 MHz Pentium IIs to 500 MHz Pentium IIIs, everything just magically sped up. It’s no longer that simple: on multi-core machines, more cores do not guarantee an automatic speed-up, not even for traditional multi-threaded applications (a sketch of why follows below). I have written a more detailed blog entry on this, so I won’t go into it too much in this comment.
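
    The underlying limit is Amdahl’s law (a standard result, stated here for illustration): if only a fraction p of a program can be parallelized, n cores give a speed-up of at most 1 / ((1 - p) + p / n). A quick sketch of what that means in practice:

        // Amdahl's law: the serial fraction of a program caps the speed-up,
        // no matter how many cores are added.
        #include <cstdio>

        double amdahl_speedup(double parallel_fraction, int cores) {
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
        }

        int main() {
            // With 90% of the work parallelized, 16 cores give only ~6.4x,
            // and even infinitely many cores could never exceed 10x.
            for (int cores : {2, 4, 8, 16, 64})
                std::printf("p = 0.90, %2d cores -> %.2fx\n",
                            cores, amdahl_speedup(0.90, cores));
            return 0;
        }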

    Clock speed and transistor size can be a little deceptive, and are only some of the benchmarks of computing power. OK, so which is the most powerful chip out there? If you answered CPU xyz, you are dead wrong! The most powerful chips out there are GPUs. As of 2007, the most powerful Intel CPU, the quad core, could manage something over 30 GFLOPS (theoretical), whereas the NVIDIA GeForce 8 series flagship, the GeForce 8800 Ultra, could do over 576 GFLOPS (theoretical) on 128 stream processors. Sorry, I don’t have the latest data, but I am sure you get the general idea. You get so much computing power because computations on the GPU can be very efficiently parallelized across shaders, again a form of parallel computing; graphics manufacturers have already successfully harnessed parallel computing. This awesome power can also be put to general-purpose computing, although not as generically as on a traditional CPU (read more on CUDA). GPGPU is a whole other story (maybe later). The more interesting fact is that, even with such power, latest-generation games like Crysis will make the 8800 bite the dust!

    Well, it’s clear then, isn’t it? Hardware manufacturers have already taken the multi-core approach, so it is pretty clear what they want to tell us developers. Intel is actively experimenting with technologies like STM and, yes, Larrabee. The future is shouting “parallel computing” from all directions.

    A word on C++ STM: I got the distinct feeling, reading comments on other sites, that many developers think C++ STM is somehow a magic silver bullet; some seem to believe it is as simple as recompiling your existing C++ program with an STM-enabled compiler. Bad news, guys: STM is a form of parallel computing, and it requires breaking a programming problem down into parallel units, which is very difficult in C++, or for that matter in any imperative approach.

  7. We know C++ is better in performance. Why? The main reason is that its output is in OS-native form (ready for execution), whereas the .NET and Java languages compile to an intermediate language that is compiled again at runtime, which hurts startup speed. A few other reasons are the managed heap with its garbage collection, and the wrappers around security, threading and other system APIs and features.
    Contrary to the above, .NET does provide utilities like NGEN to compile code directly to OS-native form, and there should be something similar for Java.
    This gives developers the simplicity of .NET or Java while perhaps maintaining performance. It would be interesting to know the impact on performance when this is done; that would be an interesting topic to blog about.

  8. @ C++ being better (faster) in performance
    Probably true to an extent; you may be correct. Interestingly, people have argued otherwise:
    http://www.idiom.com/~zilla/Computer/javaCbenchmark.html
    Similar arguments can be made for C# and .NET too.

    I generally refrain from evaluating a language based on speed, because most of the time benchmarks give you an incomplete picture (and because I don’t intend to start a language flame war on this blog). There are countless resources on the Internet with tests supporting both sides of the argument, each side with valid yet contradictory claims. It doesn’t matter: those ten-line timing benchmarks are actually meaningless. I had a run-in with this a couple of months back, and it really shows how misleading benchmarks can be:

    The Java is Faster than C++ and C++ Sucks Unbiased Benchmark.
    The Java (not really) Faster than C++ Benchmark

    I don’t intend to take sides, because frankly I don’t have enough data to support or debunk either claim. It just goes to show that benchmarks depend on who writes them. In a full-fledged application there are many more issues to consider, even when judging “raw application speed”, and there are other factors, like the inherent limitations of a language, that have to be weighed as well.

    As you can see, there are claims and counter-claims on both sides. Shootouts and language wars are all too common on public forums and boards. I don’t really care; I have stopped reading them, since most are subjective at best. Both sides are firmly convinced that they are the ones who are correct, and to my mind most of this is hubris on the part of the programmers (on either side of the argument). I have touched on this extensively in “My programming language is the best”.

    I have learned to evaluate every programming problem objectively, and I do use different languages for different projects. Yes, the Doofus 3D game is written almost entirely in C++, but there have been times when I yearned for something simpler, like Python, especially while programming game AI, where speed wasn’t paramount. On a project about eight months back I used Python (wxPython) to build a GUI app, and making it cross-platform was almost trivial.

    @ generating native code
    Yes, it would be interesting to evaluate that on both Java and C#. Would it lead to a speed enhancement? That’s an open question. Personally I think it may cut the runtime-compilation cost and, as you rightly say, may improve startup speed to some extent, though I have no data to back this up; the rest shouldn’t make much of a difference. There has been discussion of speeding up Python by compiling it to native code as well, and some research on that front too (can’t seem to find the link).

    I was trying out the D language the other day and was pretty impressed. It compiles to native code yet has all the makings of a modern language. Even better, D is very similar to C++, and some of its features lead me to believe it could also make a great extension language. I don’t want to comment further until I have something substantial.

  9. Pingback: Can parallel processing really cut it? | Susheel's Blog
