The war of the graphics APIs.

So it's finally here! I mean the OpenGL 3.0 update, formerly known as Longs Peak, has officially been announced at Siggraph. Read more about it here. It's been way too long already. I know the announcement was made a while back, but I didn't get time to read the details and proposals till this weekend. The Birds of a Feather (BOF) presentations are interesting, and if you are an avid OpenGL follower they are a must-see. While the true and detailed specs are not yet released, the discussion threads do provide some interesting insight into what can be expected. If you are already an OpenGL programmer, expect significant changes to the way you work with OpenGL. The update removes some redundant and archaic practices that have existed in OpenGL for way too long, which makes me happy. OpenGL was last designed from scratch back in 1992 and has survived this long, arduous journey of hardware updates pretty much unchanged. Some would argue that the current mechanism of extensions has cluttered the API just too much and can be a pain to work with. I don't blame them. Let's just hope the new update addresses these and other issues better than the previous versions. Personally I never found extensions too difficult to work with. Generally the complaints come from DirectX guys who have switched over to OpenGL and have to adapt to a new way of working.

Talking about DirectX, it's been quite a while since the release of DirectX 10. It is apparent that DirectX has a clear advantage over OpenGL at the moment with regard to exposing newer functionality. I don't buy the argument that DirectX is in any way superior, or for that matter inferior, to OpenGL. Many people have argued this over the years, but the argument is far from conclusive. While DirectX may be able to expose the newest functionality faster than OpenGL, that functionality eventually does get exposed via OpenGL too. So to all the naysayers, my response is: it's not so much the graphics API but the underlying hardware that determines how well your card performs and how much graphics throughput you get. The O2 Engine abstracts both APIs, and through the years of developing the engine I have seen that both APIs are neck and neck when it comes to speed. There is no clear winner when comparing the two from a purely technical perspective. While DirectX is the more favored one in game development circles, OpenGL is the API of choice in CAD and science and research circles, but that is clearly for very different reasons and not speed or performance. What people fail to realize is the fact that DirectX and OpenGL are very similar, more similar than you would think. With the release of the 3.0 update, I think (and this is my personal opinion) that the two APIs will end up being even more similar. The DirectX 10 update fixed shortcomings of DirectX 9 and previous versions, and OpenGL 3.0 has addressed issues that were problematic with legacy OpenGL. So in the future, I hope, there will be little to choose between the two APIs.
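To make the point about abstracting both APIs a little more concrete, here is a minimal sketch of the kind of render-device abstraction I am talking about. The names (IRenderDevice, GLRenderDevice, D3DRenderDevice, CreateRenderDevice) are hypothetical and are not the actual O2 Engine classes; this is only an illustration of the general idea, not the real interface.

// Minimal sketch of an API-agnostic render device. All names here are
// illustrative only, not the actual O2 Engine interfaces.
class IRenderDevice {
public:
    virtual ~IRenderDevice() {}
    virtual bool Initialize(int width, int height, bool fullscreen) = 0;
    virtual void BeginFrame() = 0;
    virtual void DrawIndexed(unsigned vertexBuffer, unsigned indexBuffer,
                             unsigned indexCount) = 0;
    virtual void EndFrame() = 0;
};

class GLRenderDevice  : public IRenderDevice { /* OpenGL implementation  */ };
class D3DRenderDevice : public IRenderDevice { /* Direct3D implementation */ };

// Game code talks only to IRenderDevice, so switching APIs is a matter of
// constructing a different concrete device at startup.
IRenderDevice* CreateRenderDevice(bool useOpenGL);

With a layer like this in place, benchmarking the same scene on both back-ends is straightforward, which is exactly how the "neck and neck" observation above comes about.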

That was a rather zoomed-in, API-level technical picture. Let's stand back a bit and look at the bigger picture. DirectX was first off the block, so is it winning the race? At the moment it would seem so. Programmers and developers have already begun working with the API and getting their hands dirty. Hardware for Shader Model 4.0 has been available for quite a while and is now considered reasonably cheap. It simply means that development houses have already come up with good engines. But there is a catch to all of this. DirectX 10 is only available on Vista machines, and that could prove to be somewhat of a stumbling block. Now don't get me wrong, there is nothing wrong with Vista as such, but I can't understand the sanity in not releasing DirectX 10 for XP! The driver-architecture reason given by Microsoft is crap! If they had released it for XP, that would have meant an even larger market for games that used DirectX 10. The only reason I can think of as to why this decision was made is that Microsoft wants to push its OS onto end users' desktops. Gabe Newell of Valve Software touched on this point recently when he emphatically said that making DirectX 10 Vista-only was a mistake! I don't buy into his argument completely. I think users will eventually buy Vista anyway; we have seen this arm-twisting by Microsoft before, and it does eventually force people to buy their OS.

The question to ask is, does DirectX 10 offer truly unique graphics compared to 9.0? The answer at the moment is NO, but eventually YES. You can get similar graphics with 9.0 or OpenGL 2.0. That said, the Xbox 360 does not support DirectX 10 yet, so cross-platform developers are going to stick with 9.0 for a while. While OpenGL 3.0 promises a lot, its true potential will only be realized when it arrives on our desktops. Also, the spec for the Mt. Evans update, which promises geometry shaders, is still to be finalized, so I think it will come after, probably much after, the 3.0 release. The OpenGL committee seems to be slow again, as usual. While DirectX 10 games are here, DirectX 9 is not history just yet.

What’s going on with the game?

Doofus Game

I have had a lot of questions from people on the status and progress of the game. The favorite question is "When is the game going to be released?" I have a separate blog open for this, but since I don't put any technical stuff on that blog, I decided to throw in some more details here on what is happening with the game. First, the game is on track and we are doing great thus far. Most of the team has little left to contribute at the moment, except the beta testers, who are busy at work. So for the time being I am pretty much pulling the programming side of things alone. There isn't much left anyway, but there are just too many small things to take care of.

First, the AI. It is not yet up to my satisfaction. Enemies sometimes get stuck in walls, fall off cliffs or behave weirdly. I don't think this is going to be too much of an issue. Such issues are generally solved by clever heuristics and by eliminating obvious redundant conditions. All entities in the O2 Engine's AI module are handled by individual state machines and are fully modeled on state-transition logic. These redundant conditions are in fact eliminated quite nicely with the state logic, i.e. by having a specific state to handle a specific condition. Also some tweaks here and there are due. I had initially feared that the AI would take significant CPU cycles, and since I am doing all shadow-silhouette calculations on the CPU, that the game would become significantly CPU bound. That is clearly not the case. The game does not become CPU bound except on some low-end single-core machines, and you generally do not have any problems with CPUs of 1.8 GHz and above. With shadows turned OFF, the game can run on a P III 1 GHz easily.
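To illustrate the idea of a dedicated state for a problem condition, here is a minimal sketch of a per-entity state machine. This is not the actual O2 Engine AI code; the Enemy class, the state names and the query methods are all made up for the example.

// Minimal sketch of a per-entity state machine, with a dedicated state for a
// problem condition (an enemy stuck against a wall). Names are illustrative
// only, not the actual O2 Engine AI classes.
enum EnemyState { STATE_PATROL, STATE_CHASE, STATE_ATTACK, STATE_UNSTICK };

class Enemy {
public:
    Enemy() : m_state(STATE_PATROL) {}

    void Update(float dt) {
        switch (m_state) {
        case STATE_PATROL:
            if (SeesPlayer())      m_state = STATE_CHASE;
            else if (IsStuck())    m_state = STATE_UNSTICK;
            break;
        case STATE_CHASE:
            if (InAttackRange())   m_state = STATE_ATTACK;
            else if (IsStuck())    m_state = STATE_UNSTICK;
            break;
        case STATE_ATTACK:
            if (!InAttackRange())  m_state = STATE_CHASE;
            break;
        case STATE_UNSTICK:
            // Back away from the obstruction, then resume normal behavior.
            if (!IsStuck())        m_state = STATE_PATROL;
            break;
        }
    }

private:
    bool SeesPlayer() const;
    bool IsStuck() const;
    bool InAttackRange() const;
    EnemyState m_state;
};

The nice thing about handling a redundant condition this way is that the fix lives in exactly one place (the dedicated state) instead of being sprinkled as special-case checks across every other state.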

In fact I am finding that things are quite the opposite. The game has difficulty pushing geometry on lower-end graphics cards and is probably more fill-rate intensive than I would have liked. Of course this is when running with shadow volumes ON. That was expected, since stenciling, especially for stencil shadows, is the most time-consuming activity for the GPU. The engine does aggressively cull out shadows when it can, and just yesterday I finished a new optimization technique that decimates shadow volumes for static geometry at the cost of some CPU overhead. Even so, stenciling can be suicidal on lower-end cards with limited fill rate, especially when the occlusion geometry has a high polygon count. Having said that, I am getting an FPS of about 30-34 on a card like the old GeForce 6200 TC with everything from glow effects to full-screen bloom turned on. For any higher-end card, and for that matter even a mid-range card, pushing Doofus 3D level geometry along with volume shadows is a walk in the park.
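For readers wondering why stenciling eats so much fill rate, here is a rough sketch of the classic depth-pass (z-pass) stencil shadow loop in OpenGL. This is the textbook version of the algorithm, not the O2 Engine's actual shadow code, and DrawAmbientScene, DrawShadowVolumes and DrawLitScene are hypothetical helpers. The point to notice is that every shadow volume is rasterized twice before the lit pass even starts.

// Rough sketch of the depth-pass (z-pass) stencil shadow algorithm.
// Assumes face culling is enabled and a single light for simplicity.
void RenderFrameWithStencilShadows()
{
    // 1. Lay down depth and ambient color for the whole scene.
    DrawAmbientScene();

    // 2. Rasterize every shadow volume twice, touching only the stencil
    //    buffer. This is where the fill-rate cost piles up.
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    glCullFace(GL_BACK);                      // draw front faces: increment
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    DrawShadowVolumes();

    glCullFace(GL_FRONT);                     // draw back faces: decrement
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    DrawShadowVolumes();

    // 3. Add the light's contribution only where the stencil count is zero,
    //    i.e. where the pixel is not inside any shadow volume.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glCullFace(GL_BACK);
    glStencilFunc(GL_EQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    DrawLitScene();

    glDisable(GL_STENCIL_TEST);
    glDepthMask(GL_TRUE);
}

Decimating the volumes for static geometry shrinks the area covered in step 2, which is exactly where the optimization mentioned above pays off.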

However, we are marketing the game as a casual game, and as with most casual games it must run on bare-minimum hardware. Most casual gamers don't have a clue what a graphics card is or what a driver update means. They have nothing more than an on-board card. True, these days motherboards do ship with ATI and NVIDIA chipsets, but unfortunately not all of them. My main concern is the game running on on-board Intel cards. Intel cards are lousy, and that is putting it mildly. These cards can sometimes exhibit ridiculous behavior and are giving me maximum headaches as of now. A few days back I was running the game on a machine with an on-board Intel card; the game ran at 10 FPS at a resolution of 640×480, then at a nice 34 FPS at 800×600, then dropped again at higher resolutions. Explain that! No, it wasn't some program running in the background. I stopped all programs and some services and disconnected from the internet. Still this weird behavior.

The fact is the game needs to scale to the hardware so that it can run even on crappy GPUs. This is easier said than done, and my current task is to see that the game detects and adjusts itself to even the most basic graphics hardware. There are also issues which are too specific to cover here, like driver issues, render-to-texture issues and things like that. I also need to work on further optimizations to squeeze out higher FPS values. In the meantime the beta testing is also throwing up new bugs. Fortunately there have been no major bugs like GPFs or crashes reported by any of the beta testers. The game ran on most of the systems tested thus far. Sometimes a little slower than expected, sometimes with a lot of z-fighting in the shadows, but the main thing is it ran.
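As an illustration of what I mean by detecting and adjusting, here is a minimal sketch of start-up capability detection driving a quality preset. The QualityPreset enum, the thresholds and the exact checks are hypothetical; the real logic in the engine has to cope with far more driver quirks than this.

// Minimal sketch: query the GL implementation at startup and pick a preset.
// Assumes a valid OpenGL context is already current.
#include <GL/gl.h>
#include <string>

enum QualityPreset { QUALITY_LOW, QUALITY_MEDIUM, QUALITY_HIGH };

QualityPreset DetectQualityPreset()
{
    const char* rendererStr = (const char*)glGetString(GL_RENDERER);
    const char* extStr      = (const char*)glGetString(GL_EXTENSIONS);
    if (!rendererStr || !extStr)
        return QUALITY_LOW;                     // no usable context, play safe

    std::string renderer(rendererStr);
    std::string extensions(extStr);

    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);

    bool hasFragmentPrograms =
        extensions.find("GL_ARB_fragment_program") != std::string::npos;
    bool isIntelOnboard = renderer.find("Intel") != std::string::npos;

    // On-board Intel parts and shaderless cards: shadows and post-processing
    // off, conservative resolution.
    if (isIntelOnboard || !hasFragmentPrograms || maxTextureSize < 1024)
        return QUALITY_LOW;

    // Shader-capable but likely fill-rate limited: shadows on, bloom off.
    if (maxTextureSize < 4096)
        return QUALITY_MEDIUM;

    return QUALITY_HIGH;
}

The preset then decides things like whether stencil shadows, glow and bloom are enabled at all, rather than letting the player discover a slideshow on their own.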

That’s about it. The march continues, and I hope I can wind things down slowly and come out of this almost infinite beta.

Extending the code documentation generator.

When it comes to documenting code, there is nothing better than a documentation generator, and when you talk about documentation generators, none is perhaps more widely adopted than Doxygen. I am using Doxygen in my current project and I have used it on my last three projects. Doxygen is pretty good when it comes to generating documentation and, I must add, does the job pretty well too. But besides its conventional use, I decided to experiment with it a little more for this project. I have always believed that unnecessary and excessive documentation is counterproductive to a project's life cycle. Mindlessly making Word documents and adding them to a folder often adds no value whatsoever to a project, simply because that documentation never gets properly referenced by the developer, designer or team leader, especially during times of intense project pressure.

Before you brand me as a development-process heretic, or deem me a free-willed do-what-you-like vagabond, let me make things clear. I am neither of those. I am very much in favor of proper and detailed documentation, and I myself follow strict processes (too strict, sometimes) in every cycle of development. Regarding documentation, the thing I find inconvenient is the fact that documentation gets fragmented into different files and often into different locations. Some teams are careful never to let this happen and control it by using very specialized software (Rational products do offer such tools). But products like these can leave a hole in the pocket of large corporations, and for budget teams like ours this is simply not a viable solution.

From the very beginning I had this gut feeling that documentation should be in one single place and accessible via one single interface. I will go a step further and say that I, as a developer, would like to see the design plus the documentation plus the code in a single place at any given time. This is immensely helpful when you are into heavy development. You can link back to the design at the click of a button and, if you are intuitive enough to use the right tool, quite literally link to the exact spot you want to go. This to-and-fro movement between documentation, design and code can serve as a powerful cross-checking mechanism. The docs, the code and the design have to be in perfect sync for it to work: when one changes, everything needs to be updated. I found this very helpful in managing the current project. It kept me and the team focused on both macro- and micro-level design from the very beginning.

We started off with this idea in our project and decided to implement it via Doxygen. Since Doxygen builds its docs in HTML, extending it to incorporate such functionality was, well, trivial. Believe it! I haven't spent more than about a day (total) on this stuff. We have made no code changes to the Doxygen program. All our changes are superficial, needing only the already built-in functionality of Doxygen and some minor (but intuitive) HTML tweaks. For this release we went a bit further and decided to link our bug-base into the design-documentation-code nexus. It has proved to be even better than expected. Not all things have been ironed out yet, in fact a lot remain, but it already seems like a nice way to handle things.
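To give a flavour of the kind of thing I mean (a simplified illustration, not our actual setup; the class name, file path and bug-tracker URL below are made up), ordinary Doxygen comments can carry the links back to the design documents and the bug-base, since Doxygen happily passes embedded HTML through to the generated pages:

/**
 * Handles collision response for dynamic entities.
 *
 * Design: link to the HTML design document that is copied into the
 * generated docs (the path here is just an example).
 * <a href="../design/collision.html">Collision design notes</a>
 *
 * Bug-base: link straight to the tracker entry for a known issue
 * (hypothetical tracker URL and bug id).
 * <a href="http://bugs.example.com/show_bug.cgi?id=142">Bug #142</a>
 *
 * \see PhysicsWorld, RigidBody
 */
class CollisionResolver {
    // ...
};

Because everything ends up as plain HTML in one generated tree, jumping from a class to its design section to the open bug against it is a matter of a couple of clicks.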

Design documentation

Class Documentation

Class Diagrams

I am not at liberty to show more diagrams from the design, but rest assured they are there.

The challenge of open-ended gameplay.

Ever since I delved into the misty world of Oblivion (The Elder Scrolls IV: Oblivion), I could not help but admire the game and its design as a whole. Now I feel my initial comments and observations (except on the technical aspects) were somewhat misplaced. I had seriously underestimated the depth of the gameplay in a game like Oblivion. Though Oblivion looks like a first-person hand-to-hand combat game, something like the Riddick game in a medieval backdrop, it is nothing of the sort. It is very much an open-ended game. Let me come clean, this is my first true experience with a game that has open-ended or sandbox gameplay. I am an FPS-RTS fan and my initial experiences with Oblivion were frustrating. I was like, "Why the hell do I have to talk to so many people, give me something to slash and hack". But that is where an open-ended game differs from the normal run-of-the-mill game. An open-ended game often makes you build a truly unique identity for yourself as you play. Meaning, in the world of Oblivion you could end up being a hero, a thief, a vampire, a magician or a complete nobody if you so choose.

I really started to enjoy the game when I began to let go of preconceived notions about how a game should be played. While most FPS-RTS games allow the player to make micro-level decisions, i.e. where to hide, how to attack, which weapon to use, an open-ended game demands that the player make decisions that affect the progress of the entire game. Based on your decisions, the game plays out differently. The game is modeled on decision-consequence behavior, which is perhaps why I initially found it difficult to adapt to. It is very difficult to explain exactly what I am trying to convey; maybe you can't make head or tail of what I am saying. It's basically an experience, and you have to play the game to understand it. The game is radically different from a purely goal-scripted game. It is not totally like an MMO, say Second Life, nor for that matter anything like The Sims; Oblivion is very different. The only game I have played that I could say comes somewhat close is Heretic II. For one, you could play Oblivion almost "forever". I have heard people compare it to WoW and EverQuest, but I have never played WoW or EverQuest, so I don't know.

The game has had a profound impact on me. That's rare. As I said before, it is not the very best game I have played, but it's not the game as such but the whole concept that has made me look back and wonder. It is very different from the regular "Hey! Here is a monster we have seen so many times before, pump his guts full of lead! Oh OK, we know there is nothing new to that, we have like done that 10000000 times before, but look, look, he has shining eyes, and see the bump mapping and the parallax mapping, and did you notice the shadows, and look at his tail and the x-y-z graphics that we have put in... and then there is the next monster behind the next bend... and the next... woooh!" Bah! Boring! Done that, been there, not once, but again and again. I am tired of games that follow stereotypical gameplay. Give me something more. Give me more experience. Let me explore regions where I have never been before. Let me experience something new. OK, the FPS genre was great 10 years ago, but move on guys! Putting new graphics on top of old gameplay is just like having remixes of old songs with dancing half-nude women; phooey, maybe even worse. I find Oblivion interesting because it allows me to experience something I have never experienced before, not some mundane redundant crap dished out on a graphically attractive platter.

I find open-ended gameplay both fascinating and challenging from a designer's point of view. To deliver an open-ended experience, the game design needs far more scalability. The designer needs to plan for a far greater set of unknowns than is possible in a scripted-style game. In a scripted game you generally have a single unfaltering goal; you have to complete it before you proceed to the next one. Conversely, open-ended gameplay allows you to approach your goal in an infinite number of ways. You can have smaller goals or sub-goals, which can be just as non-monotonous as your ultimate goal. While this may be easier said than done, it really got me thinking about how one could approach designing such a game from a game designer's point of view. Interesting, since every unknown you place in a game increases design complexity substantially. In Oblivion the entire game is divided into quests, where the quests can be thought of as goals. While there is a main quest, there is an almost infinite series of sub-quests that branch off the main quest. You have the choice of doing the quests at your discretion. But the game does play very, very differently depending on the choices you make and the quests (goals) you complete. The game doesn't force you to do anything in particular; it's just one giant simulation, which plays on you just as much as you play on it!

Having played Oblivion (there are still hundreds of quests left), I can tell you I have become a fan of the open-ended style of gameplay. Don't be surprised if you find more ramblings on similar topics on this blog. I can now see why people hijack their lives to play something like WoW incessantly. These games are truly a new experience, and if you have played them you will understand what I mean. If you haven't, you should!

Bah! My Flatron finally went dead!

After almost five years of continuous service, my trusty old LG Flatron CRT monitor finally bid farewell yesterday. I had been having problems with it for the past three months or so, and only recently I got the thing fixed. But I had this weird feeling that it was on its final lap. The display had gone visibly blurry and I had been using the monitor on my test PC for the past nine months. Too bad; I had played some really cool games on it, and most of the development of the current engine and game was done on that same monitor. (A little bit emotional, I know :'( ).

So now I am running the PC with an old 15-inch monitor which is a decade older. I know I have to get a new one soon, but which one is the question. Should I go for a CRT or an LCD? After seeing the new 19-inch LCD monitors you can't help but drool. They do leave a hole in the pocket, but they also give good bang for the buck.

wxWidgets + Code::Blocks, are we there yet?

wxWidgets is a mature and very powerful cross-platform GUI framework with a very liberal, non-restrictive license. It has a very big community behind it and also has a huge support base. There are tons of tutorials and excellent examples, and many companies large and small have successfully used wxWidgets to develop their applications. When it comes to GUI frameworks, wxWidgets is among the best out there.

Code::Blocks is more of a "new kid on the block" among Integrated Development Environments. If you look at the design and layout of the IDE, you can unmistakably see traces of Microsoft Developer Studio and the KDE development environment in it. It reminds me more of the old Visual Studio 6.0 IDE; programmers who have worked on both the VC++ 6.0 IDE and Code::Blocks will know what I mean. Code::Blocks is written using wxWidgets and shares a very close relationship with the wxWidgets framework.

wxWidgets is very similar in design to MFC (Microsoft Foundation Classes), so developers who have worked with MFC tend to gravitate towards wxWidgets more than any other GUI framework when they have to develop a cross-platform solution. I don't blame them in the least. Learning a brand new framework from scratch can be intimidating even for experienced developers. Being a long-time MFC programmer myself, I personally like wxWidgets more than any other GUI framework out there.
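To show what I mean by the MFC resemblance, here is a bare-bones wxWidgets application (the class names are made up for the example): the event-table macros work very much like MFC's message maps, and the app/frame split mirrors CWinApp and CFrameWnd.

// Bare-bones wxWidgets application. The event table below mirrors the
// BEGIN_MESSAGE_MAP/END_MESSAGE_MAP style familiar from MFC.
#include <wx/wx.h>

class MyFrame : public wxFrame {
public:
    MyFrame() : wxFrame(NULL, wxID_ANY, wxT("Hello wxWidgets")) {}
private:
    void OnQuit(wxCommandEvent& event) { Close(true); }
    DECLARE_EVENT_TABLE()
};

BEGIN_EVENT_TABLE(MyFrame, wxFrame)
    EVT_MENU(wxID_EXIT, MyFrame::OnQuit)
END_EVENT_TABLE()

class MyApp : public wxApp {
public:
    virtual bool OnInit() {
        MyFrame* frame = new MyFrame();
        frame->Show(true);
        return true;
    }
};

IMPLEMENT_APP(MyApp)

Anyone who has wired up a message map in MFC will feel right at home with the event table above, which is a big part of why the transition is so painless.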

So where does Code::Blocks fit into all of this? Code::Blocks has an integrated GUI editor and code manager for wxWidgets called wxSmith (much like the famous "ClassWizard" of VC++ 6.0), and it is very good for rapid GUI development. Code::Blocks is licensed under the GNU GPL, so it is the perfect solution for budget development and for programmers who can't afford to pay top dollar for an IDE. The combination of wxWidgets + Code::Blocks brings rapid cross-platform GUI development into the realm of possibility.

A GUI framework that is very similar to MFC and an IDE that is custom-made for it, not to mention all this for zilch/nada/nothing, seems too good to be true. But wait a minute before you go and make your brand new mega-budget office suite; there are some things that are not up to the mark just yet. As I said earlier, Code::Blocks is pretty new and it has some way to go. The IDE has had more than a jagged development path. Its official 1.0RC release is dodgy to say the least, not to mention it was released more than a year ago. The IDE has had a major redesign since then; why, your guess is as good as mine. New projects don't need a redesign this early in their life cycle. Maybe the dev team thought the IDE could not stand up to others like Eclipse, or maybe it was the plug-in system they added or some other feature, I don't know.

There hasn't been a major release of the IDE in a long time, and that's a fact. What the dev team does release are nightly snapshots, and I must say these days the builds of Code::Blocks have been pretty impressive. The product has been in beta for way too long, and I wouldn't put my production projects on it just yet. There are also some issues with stability that the dev team is working on. The last time I checked, the IDE was great when using GCC, but the MS compilers need some work. Though the IDE claims to work with many more compilers, I haven't checked them all out yet. For Linux development I would recommend this IDE hands down, but for Windows I think I will stick with Visual Studio Express a while longer.

I suspect there is going to be a major release of the IDE soon; I hope there is, and I am eagerly looking forward to it. But one thing is clear: wxWidgets + Code::Blocks is a win-win combination. I think wxWidgets will also greatly benefit from a stable Code::Blocks IDE. More people will use it, I am sure, and more companies will adopt wxWidgets as a development environment of choice. In any case, what the Code::Blocks team has achieved is commendable, especially since most other similar projects have long gone cold.

References

wxWidgets
Code::Blocks

Fear factor.

Just yesterday I was rummaging through my article archives when I had a run-in with my Doom 3 DVD. It's been a while since I played Doom 3; actually it's been almost a year and a half since I abandoned the game halfway. Interesting, since I hate leaving games mid-way. I generally have two particular reactions to games: (a) I don't like the game in the first few levels and never look at it again, which can be due to several reasons like bad controls, bad graphics, crashes and generally the things that would put off any normal person; or (b) I take an interest in the game and play it till the very end. Doom 3, however, is an exception, and that got me thinking as to what it was about the game that made me turn away from it.

I got my hands on the game a while back, about six months after its release. I was excited about the graphics and wanted to see what all the talk about stencil shadows and Carmack's new, and then controversial, algorithm was about. Unfortunately a series of accidents (hardware failures) and incidents (job) kept me busy. When I did finally fire up the game, it was mind-blowing. Most of id's games are, nothing new about that. (For the record, I have played every one of their games except the Commander Keen series.) However, after playing a couple of levels I was already turning away from the game. For one, the game is set against a very dark backdrop and the whole gameplay revolves around getting the player shit scared. Dark alleys, claustrophobic environments and very little to no light. Everything is designed to play on basic human fear. It's as if the whole game was designed with only one intention in mind: FEAR.

Along with the environment, there are some subtle tweaks to the FPS style of play that add to the overall fear factor. Small things that you may or may not have noticed. For example, notice that when you hold the torch, you can't hold your gun at the same time. This is an immensely powerful psychological disadvantage that has been designed in. It leaves you virtually helpless against an attacking enemy. Take away the light and hold the gun, and you can't see what is approaching. The game tries to lead you into a dark void of the unknown! This "unknown" and the "fear of the unknown" is the gameplay of Doom 3. The best way to generate fear is to have a set of unknowns. The game is very smartly designed to have you believe, subconsciously of course, that the environment is fearful. You start off on a Mars base where everyone is jumpy and fearful. Again, no one knows anything (an unknown there). The base is flush with rumors and information is intentionally kept vague. The PDA you like so much? Ah huh, again just used to play on your fears. The PDA messages are intentionally vague and misleading, all done deliberately to keep the unknowns piling up.

The overall game progress is slow, sometimes very slow. You might not have noticed this, but try adding up the amount of time you spend reading your PDA, listening to conversations, switching between the torch and the gun, and loading up saved levels, and you will see what I mean. This is very contrary to most FPS-style games, where the action is fast paced, and certainly contrary to id's games, which are known for their shoot-and-frag style. All well and good, so what made me dump the game? For one, I got tired of getting scared. Yes I did. I mean, after playing the game for 6-7 hours (total), that fear factor kind of gets boring. All due respect to the game designers, but the game has very few and too short "cool-off" periods. A cool-off period is an interval just after an intense fight sequence where the player is given time to cool off. This is an integral part of gameplay design, especially for action-oriented games. There are small and large cool-off periods, and their placement in the overall game flow is critical. While there are a number of small cool-off periods, the game misses out on the larger ones, which generally occur after a level is completed. Yes, they are there, but they are short.

Another thing that bugged me a lot is the spawning of enemies. In the later stages of the game, enemies get spawned when you pick up a health pack or an ammo kit. This is somewhat controversial. Ammo, health packs and other goodies are there to give the player a reward, and often a well-deserved one. To have a monster spawn when you pick up a health pack makes me feel cheated! Period! In the end, I just feel the game lacked variety. Doom 3 is an old run-of-the-mill fire-and-forget FPS, but we have played such games too many times already. I was coming over from playing Half-Life 2, so the comparison couldn't have been starker. I switched over to Quake 4, completed it and loved it. That's because the game has well-balanced gameplay and great variety, and it never got me bored once. Maybe I had expected too much from Doom 3, since I am such a great fan of the legacy Doom and Doom 2 series. They were the ultimate technology adrenaline boosters when I played them. Maybe my expectations of Doom 3 were a bit too high. Maybe it's just me getting older, or maybe my tastes have changed over the years. I don't know.

Selecting a scripting engine – Part 2.

My quest for a scripting system for the O2 Engine continues. This post is a continuation of the "Selecting a scripting engine" series; you can check out the first part here. I tried a couple of different approaches to attack the problem. I am currently in the prototyping phase, and decisions can and do change drastically. I also have to test a lot of different techniques and scripting languages before deciding upon a final solution.

Python

I continued my experiments with Python by trying to integrate it into a moderately sized project, something I had worked on before beginning the O2 Engine. It has a similar structure to the O2 Engine but is comparatively far smaller. Python is easy to bind (easier than I initially thought), but the party ends there. I must say I am a bit disappointed; extending the classes is turning out to be far more difficult than I had anticipated. As I mentioned in my earlier post, to build a game successfully with the engine you not only have to work with the engine classes but also with the engine design. Exporting the design was not very intuitive while working with the Python bindings. I am not ready to give up just yet; I have decided to look into how others are approaching the problem, and maybe study how the bindings for wxWidgets (wxPython) are written.
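To give an idea of the kind of binding work involved, here is an illustrative sketch using Boost.Python, which is only one possible binding route and not necessarily the one the engine will end up using; Entity and its methods are hypothetical. Exposing a class is the easy part, but letting a Python script override the engine's virtual methods needs an extra wrapper layer per class, and that sort of plumbing is roughly where the pain starts.

// Illustrative Boost.Python sketch; Entity is a hypothetical engine class.
#include <boost/python.hpp>
using namespace boost::python;

class Entity {
public:
    virtual ~Entity() {}
    virtual void Update(float dt) { /* default engine behavior */ }
};

// Wrapper that forwards Update() to a Python override when one exists.
struct EntityWrap : Entity, wrapper<Entity> {
    void Update(float dt) {
        if (override f = this->get_override("Update"))
            f(dt);                    // Python subclass provided an override
        else
            Entity::Update(dt);       // fall back to the C++ default
    }
    void DefaultUpdate(float dt) { this->Entity::Update(dt); }
};

BOOST_PYTHON_MODULE(o2)
{
    class_<EntityWrap, boost::noncopyable>("Entity")
        .def("Update", &Entity::Update, &EntityWrap::DefaultUpdate);
}

Multiply that wrapper pattern by every class and virtual method a game script is supposed to touch, and you can see why exporting the engine design, and not just the classes, is the real work.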

New scripting language

I have no experience in creating a scripting language, but recently, after a long conversation with an old friend of mine, I was persuaded to look into ANTLR. ANTLR is a parser generator which could be used to create a brand new scripting language. Before I proceed, I must warn you that I am no expert in lexers or parsers and have no clue how to make a scripting language. My knowledge in this area is limited and I am just about getting my feet wet. Treat my comments here with a pinch of salt.

OK, coming back to the main topic. Some people I chatted with recently were of the view that it would be better in the long run to implement a new scripting language from scratch. Let me say this remains a point of debate between us, and I have not reached a conclusion yet. Having said that, a scripting language specific to the engine would be excellent for scalability. Grammar to tackle game-development-specific issues could be built into the scripting language itself. This would avoid the awkward constructs that often arise out of bindings for languages like Python or Lua. With off-the-shelf languages you do have to make design compromises to get all of your functionality into the scripting language. Again, I am not making tall claims. As I said, I am still working with this whole concept of bindings and scripting languages.

I tried my hand at ANTLR and could not make much out of what needs to be done. But I am very much a newbie at this and, besides, I didn't spend a lot of time with it. A couple of hours on a Sunday afternoon aren't really enough to make conclusive claims. I did a lot of googling around and I can see people have good things to say about it. So let's see how this turns out after some more experimentation. At the moment I have very little time to devote to this.

Busy busy week.

It's been a very busy week. Not only on the coding side of things; I also had to finalize some major business-related issues. Progress on the game has been a tad slow on the coding side, but nothing that can't be picked up in the coming week. I also caught up with a couple of senior colleagues/friends and had some interesting discussions on the future of the engine in general (things like an editor/scene builder for the engine, future projects and so on; maybe in coming posts I will elaborate further on that). I generally take Sundays off, but I am working today on some long-pending issues with the game.