Maintaining and refactoring C++

Last week was my last day working with C++ (for a while). It’s been quite fun to revisit both the programming language and the source code that kicked off my development career over 12 years ago, and I have enjoyed the experience a lot. There are also a few things worth noting, so I put together a short list of things I found interesting during this short maintenance assignment.

Introducing a source control system

The code was originally written in 1999 and the executable files have been running in production ever since. Today the programs are owned by a group in the enterprise operations team. Their focus is to keep the systems up and running and they have little interest in the development process. There was no source control system available when I originally developed the code so, before making any changes to the existing source, I was determined to correct that fact. A few months ago I taught myself Git and have never looked back since. Git is an excellent tool and this was an appropriate opportunity to introduce Git as a suitable source control system for this code base. Being the sole maintenance developer of these programs I was happy just to add Git to aid my own productivity and give me the ability to safely abort a change should the need arise (and it did), but it will also pay off in the long run.

Updating to new IDE

Once a source control system was in place, the next step was to pick out the correct file candidates from each project to be checked in to the repository. I didn’t want every project file source controlled and this was a good occasion to get a bit more familiar with some of the lesser known project files used by the IDE, and also how to configure Git to filter file names/paths. Originally, the projects were all developed using Microsoft Visual C++ version 6 so the first step was to get them updated to a newer C++ IDE, which just happened to be Visual Studio 2008. Once the project files I needed were identified, these were checked in to the repository and tagged as the base version. Safe and ready to go!
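
As a sketch of the filtering I mention, the file names/paths to ignore live in a .gitignore file at the repository root. For a Visual Studio 2008 C++ project, the user-specific and generated files typically include something like this (the exact set will vary from project to project):

```
# user-specific IDE state and IntelliSense caches
*.ncb
*.suo
*.user

# build output directories
Debug/
Release/
```

With that in place, only the files that actually define the project end up in the repository.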

Automatically updating the Visual C++ 6.0 projects to Visual Studio 2008 solutions went ahead problem free – the IDE handled it all. My job was then to remove the unnecessary project files used only by the old IDE. The (newer) Visual Studio C++ compiler has grown a lot “smarter”, so a few syntax bugs had to be ironed out before the old code would build. There were also warnings about calls to C++ standard library functions now deemed unsafe. In most cases a safer alternative was suggested.

Visual Studio 2008 is not unfamiliar to me, and those following this blog will know that I have used it for C# development, but never for C++. I was surprised how it lagged its C# cousin in functionality. Among other things there is little or no support for MSBuild, and the IDE has no refactoring functionality. The latter was a real letdown since refactoring C++ proved to be notoriously more difficult than in any other modern language I have encountered. However, a few things made the update worth it: a better compiler and some IDE features like folders for structuring the source files. Visual Studio 2008 also has line numbering, which I’m pretty sure was missing from the Visual C++ 6 source code editor.

Documentation and getting familiar with the source code

By chance, I came across Doxygen when googling for free C++ tools. Since Doxygen can also be used for C#, Java and Python (untried, but according to the documentation) I thought it would be worth the time to take a closer look at this tool, and that proved to be a wise decision. Doxygen is brilliant! I have not used it for the other languages it supports, but I plan to for my next project. Its syntax may remind you of JavaDoc, but with the correct dependencies installed it can create useful illustrations for viewing code and dependencies. Also, when generating the documentation you can configure it to include the source code itself. For me the output was HTML, and I actually found it easier to browse through the generated Doxygen documentation with my web browser than the source code itself using the IDE! Also useful is the fact that Doxygen can tell you which functions a particular function calls, and which functions call it. This proved useful when looking for things to refactor while attempting to simplify the code.

Beautiful code

I had never really had the need for a beautifier before, but this time I wanted to make the source easier to read, and also replace tabs with spaces and a few other things. I found a beautifier named UniversalIndentGUI which also works with more than one programming language, which I think is a plus. I fed all the source files to it and out popped “beautifully formatted” C++ source code. Voilà!

Unit testing and mocking framework

In Java development, unit testing is part of everyday life and has been for quite some time. However, where JUnit is the de facto standard for unit testing in Java, there is no single tool with similarly widespread adoption for C++ development. There are many tools available, but I had a hard time picking the one I thought had the most potential and the most active user community. In the end my choice fell on Google Test, which proved to be a useful tool. Along with Google Mock, a mocking framework for C++, it provides functionality for unit testing and creating mock objects.
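
As a rough sketch of what a Google Test case looks like (assuming the library is installed and linked, and using a hypothetical Add function as the code under test):

```cpp
#include <gtest/gtest.h>

// hypothetical function under test
int Add(int a, int b)
{
    return a + b;
}

TEST(AddTest, HandlesPositiveOperands)
{
    EXPECT_EQ(5, Add(2, 3));
}

TEST(AddTest, HandlesNegativeOperands)
{
    EXPECT_EQ(-1, Add(2, -3));
}
```

Linking against gtest_main supplies the main function that discovers and runs the tests, so the test file can stay this small.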

I spent a lot of the project time trying to refactor the code to use these tools. Unfortunately the code was riddled with references to a third-party library, the Lotus Domino C++ API, which I could not get working with GTest. Therefore a lot of the work went into narrowing the usage of this library to only certain parts of the code. Although this was always in my plans, I never got quite that far and ran out of time, which was a shame. Refactoring can be time-consuming…

Project improvements

I added a simple readme file and change log to each project and moved any comments referring to changes from the source code into the change log. I hope this will prove useful to any future developers for getting a head start and will save them from starting off with the source itself. With a simple attribute, Doxygen let me include the contents of each of these files in the generated documentation, which I thought was a nice touch.

Lasting impressions

As I said earlier, I will miss working with C++. That said, I feel I can better appreciate the syntax improvements of languages such as C#, Java and Python. I think these languages better facilitate the creation of object-oriented code without the syntax getting in the way, so to speak. C++ does make you work harder, but supplies more power in return (if you need it!). It is useful to keep in mind that trying to write C++ code in a Java or C# style may well leave you with unwanted memory leaks. In C++ you use the new and delete operators to create and destroy object instances on the heap, whereas Java and C# provide garbage collection to handle the deletion of objects no longer being referenced, as you probably know. Take this example: a Java method for fetching a bucket of water could look something like this:

public Bucket createBucketOfWater() {
    Bucket b = new BucketImpl();
    return b;
}

Inside the method a new instance of a Bucket class is created and initialised. The memory used for this object will be reclaimed by garbage collection once the myBucket reference to the object is invalidated. The caller does not need to think about this – it happens automatically.

// someObjectInstance creates and initialises a Bucket instance; the garbage
// collector reclaims the memory when the myBucket reference goes out of scope
Bucket myBucket = someObjectInstance.createBucketOfWater();

Doing something similar in C++ may not be a good idea. You may end up with something like:

// create a new Bucket of water, return a pointer to the memory on the heap
Bucket* CreateBucketOfWater() {
    Bucket* b = new BucketImpl();
    return b;
}

This code works, but it burdens the caller with deleting the memory used for the Bucket when done. If, for some reason, the caller forgets, the memory is lost once the pointer variable is invalidated. We then have a memory leak.

// create a new Bucket of water, return a pointer to the memory on the heap
Bucket* b = CreateBucketOfWater();

// must remember to delete memory on heap
delete b;

A useful rule of thumb to remember is that objects should be created and deleted by the same part of the code, not spread around. In other words a function or method should not create an object on the heap and then leave it up to the caller to tidy up when done. So how do we avoid this scenario? A more suitable C++ approach could be something like this:

// function body not relevant
void FillBucketWithWater(Bucket*);

// create a Bucket instance and pass the object pointer to the function
Bucket* b = new WaterBucket();
FillBucketWithWater(b);

// the creator must remember to delete the memory on the heap when done
delete b;

So to conclude, where in Java you would ask the method for a bucket of water, in C++ you would supply your own bucket and then use another method to fill it with water! When you are done with the bucket you are responsible for deleting it since you created it.

However, although this is a clear division of responsibilities, it does make me wonder how to properly create a factory method without burdening the caller with deleting the heap objects that the factory creates.

The Apple iPad will rock your world

On a recent trip to London I purchased an Apple iPad. I have been using an iPhone for years so I kind of knew what to expect, but the iPad is really something else. It is such a useful tool and has really changed my life for the better in many ways in only a short space of time.

The MobileRSS app and a Google Reader account enable me to follow my RSS news feeds more easily and frequently. The Read It Later app allows me to follow up on links that I have previously marked for later viewing when I come across them on my PC using the Firefox plugin. I have been using Read It Later for PC/iPhone for a while now, but until now have never been able to find the time to read. However, the thing that has impressed me the most about the iPad is its usefulness as an e-book reader.

I started off reading PDF books using the iBooks app – which was OK, but that was before I discovered the ePub format, which makes the whole digital reading experience a lot more enjoyable. iBooks offers more functionality when using ePub, including backlight and font adjustment, animated paging, bookmarking (also available for PDFs), text highlighting, notes, a dictionary and more. Now that I have discovered that both O’Reilly and Manning offer ePub for a lot of their books I really don’t see the need for buying paper books anymore. Yes, it really is that good. I am going fully digital from now on and hopefully saving a few trees in the process – maybe also a bit of money and some shelf space 🙂

O’Reilly also have a pretty good offer in place to buy digital formats for a reduced price if you already own the paper version of the book and have registered it online at their website. It’s an offer which I have been using to “upgrade” some of my most frequently used O’Reilly books to the ePub format and have them easily accessible on my iPad… or iPhone should I become really desperate. They also have an ebook deal of the day offer in place which I follow.

(99% written and posted using the WordPress application on my iPad)

Remember me? I’m your old C++ code…

Just recently I was called upon to fix some code that I had written while working as a consultant “way back” in 1998. It’s not that long ago really, but a lifetime in software developer years. At the time I was fresh out of university so this was my first proper assignment as a professional. I was the sole developer on this project and the code was written in C++ using the Lotus Notes/Domino C++ API. This was kind of the norm back then.

In essence this old code was broken into four programs. They were all server-based batch-type jobs, run at scheduled intervals. Their common goal was to maintain the people groups in the Lotus Notes Name & Address book to reflect the structure of the organization (which was, and still is, a large organization). That means creating new groups, removing empty groups and adding/moving group members to and from groups. For a user to gain access to the Lotus Domino servers (for mail and other databases) you had to be a member of a group in the hierarchy, since only the top node (and thereby its children) was granted access to the servers. The groups were also used as mailing groups for parts of the organization. It would be kind of “critical” for a user if the program made a mistake, and it goes without saying that with so much application business logic you wouldn’t choose C++ for this type of task today.

I was amazed that these old programs were still running!! Sure, one program had been altered by someone else a few times some years ago, but the remaining three were running just as I last compiled them back in October of 1999. I thought that was kind of fun and also made me a bit proud. Of course, I believe there are two reasons why this code has run unaltered for so long:

  1. It was written properly and there was no need to alter it
  2. Nobody understood the source code and therefore dared not make a change

I choose to believe reason one. I guess that’s a shocker! However, I was actually able to confirm this when I started to work with the source code once again after all these years. It was tidy and easy to read, although I was amazed just how much of the C++ syntax was now strange to me after many years of programming Java, Python and C#. I would not have made all the same architectural choices that I did back then, but in general I was kind of impressed. There was also valid documentation, written by me, which I found very useful when trying to get back into the problem mindset. Not bad!! 🙂

It was strange to use Visual C++ 6.0 again which was the IDE/compiler I worked with originally. I did actually try to upgrade the project to Visual Studio 2008, but the Visual Studio C++ compiler wouldn’t compile the original source code so in the end I gave up trying. It was never part of the new assignment and the C++ syntax was just too unfamiliar to me. The customer didn’t care so I stuck to VC++ for the time being. Maybe in the future if I get the opportunity again I will give it another attempt.

Of course, it goes without saying that the actual source code – released in 1999 – was lost, but luckily I found a copy on a CD-ROM at home, which was a relief. It made the job a lot easier, but I guess it also shoots my reason one (above) to bits a little 🙂

Encouraging signs for web development on the Microsoft ASP.NET 4.0 platform

This really seems like a good time to be working with Microsoft web technologies. Not only has ASP.NET 4.0 just shipped along with a new version of Visual Studio, but there seems to be a focus on more openness and willingness to adhere to web standards and co-operate with the community. Coming from an open-source world this is a familiar mindset to me, and although I only recently have crossed over to the Microsoft platform, the idea of community driven development still appeals to me. I just downloaded the 2010 Express versions of Microsoft Visual Web Developer and Microsoft Visual C# and my initial impressions are good.

I prefer doing my client-side scripting using jQuery and have done so successfully for a few years now. Followers of this blog will know that I recently completed my ASP.NET 3.5 certification. What I found a little annoying when studying for the exam was having to delve into the details of the Microsoft AJAX library knowing full well that I would probably never use any of it. Yesterday I came across Stephen Walther’s article regarding Microsoft’s contribution to the jQuery project. I was encouraged to read that Microsoft will be further shifting its investment towards contributing to the jQuery project and moving away from Microsoft client-side Ajax. However, although I will probably never use the Microsoft AJAX library in any of my projects, I consider it a benefit that I am aware of the “old ways” of doing client browser scripting from an ASP.NET perspective. I’m sure there will be plenty of code that will need to be refactored and upgraded to jQuery in years to come :-).

Another encouraging and very interesting project is Microsoft’s ASP.NET MVC. The ASP.NET MVC templates are now part of the Visual Studio 2010 IDE, and from what I have been reading, this will be the preferred way forward for web development on the Microsoft platform. Coming from an open source Java based web development world, this is music to my ears and something I am looking forward to learning more about in the months ahead.

With the release of ASP.NET 4.0, my understanding is that there has been a focus on getting the generated XHTML to adhere to web standards, thereby simplifying CSS styling. This applies to both MVC and WebForms development. I think this is good news, since there have been a few times in the last few months when my jaw has dropped to the floor while viewing some of the XHTML source code generated by the ASP.NET 3.5 controls – especially the data-bound controls. In today’s world of correct web semantics I’m glad this is finally on the agenda and I look forward to reaping the benefits in the future.

Long time, no see?

So what gives? It’s been seven months since my last posting. Have I really been all that busy that I couldn’t find the time to create a new posting?

Well, I guess it’s partly true. I have been busy, but I’m sure I could have found the time if the motivation was there. Rest assured that my guilty conscience has been forever weighing me down for not following up my ‘promising introduction’ to the blogging world. However, I think it’s fair to say that the main reason for my absence can best be described as “self-inflicted censorship” – if we can call it that. I’ve been a little uncertain of what to write that would be of interest to others, while also trying to avoid pissing off the people I work with. I’ve also been a little rundown at work at times, so my eagerness to share my views has not been at its peak. Now, a few months the wiser, I guess I’ve gained a little perspective, so the picture has become clearer.

What’s been going on?
A lot has happened during the past seven months. The department I work in has grown substantially from just two people (my boss and I) to around eleven, and I guess I’ve been a part of that. My new boss sets targets and does her very best to reach them. Although she is a few years my junior, I’ve learned a lot from her, and for the most part I think she is great to work with. I like her positive attitude and have found it contagious at times. I also take a personal interest in management, good management that is, so I do a little reading on the subject on the side, and have been able to share my views with her on occasion. She is keen to keep everyone in the department happy and find us work that we find interesting. It’s truly great to have a boss who cares and can relate to what I’m doing. Her coming from a Java development background herself helps tremendously and means there has been a lot more focus on Java and open source technology than earlier. As you may have guessed, that suits me fine, and I feel I have grown a lot, both on a personal and professional level. Compared to where I was a year ago, things are looking good, although the business markets have taken a turn for the worse during the last few months, so who knows what may happen in future? Fingers crossed.

Focus change
Everyone at my present workplace seems to think I live and breathe for Java, but I’ve noticed that my main interest actually lies more towards web development based on open source technology than Java development. I haven’t really been following the Java scene actively for quite some time and feel I have fallen behind on the latest APIs and frameworks. At present Java just happens to be the vehicle I use to extract content for web development. My main goal is usually the end result, which usually portrays itself in the form of a web application or customer website.

For most of my career I’ve been working behind the scenes on the backend systems, but for a long time I’ve had an interest in web frontend technology. However, it seems that web frontend technology still isn’t taken all that seriously. HTML, CSS and JavaScript are considered technologies that you are expected to pick up as you go along and not really spend a lot of time learning. To a certain extent this is true, but I still can’t help but find it odd that this is the case in 2008 considering just how much web development goes on in the world today. Your traditional senior programmer speaks in the language of design patterns and architecture, and although I can appreciate good backend architecture, I sometimes find the frontend a bit more fulfilling and challenging. Maybe because it’s easier to explain to family and friends what I do for a living? Easier for them to visualize, I guess. 🙂

So, needless to say and in line with my current interests, the last few months have been dedicated to working on web applications, creating company websites and the like. At times it’s been great fun and I’ve learned a lot, especially about CSS, which was something I always seemed to deprioritize and found “hard” to get comfortable with. I’m not sure what I really mean when I say “hard”, but for some reason I never really got into it – mostly down to the fact that I had read a lot about it, but never really practiced it. I could never remember all the property names and their values, which is kind of half the point, I guess. This has now changed and within the last half year I have become more fluent in CSS and have grown to like it and appreciate its power. I’ve also noticed just how bad it can get when more junior developers mess it all up, or don’t think in advance. The resulting CSS becomes a nightmare. I feel there is a great deal to be done on this frontier, but a lot of senior developers don’t want to touch it. Not challenging enough, I presume, which is a shame.

Just before the summer I got assigned to a project as a backend programmer. We were given the task of creating a company web site for a larger Norwegian gas company. The customer’s technology of choice was IBM’s Web Content Management (WCM). WCM, if you are unfamiliar, is a Java portlet based product that sits on top of WebSphere Portal Server. Although I had worked with both WebSphere Portal Server and WebSphere Application Server in the past, this was a different ball game.
We were two developers assigned to the project and luckily for me the other developer was fluent in CSS and other frontend technologies. He was a couple of years my junior, but I learned a lot from him. We both struggled with WCM at first and had to overcome a relatively high learning curve trying to find a good structure and extract our content before styling, but the end result was very good. The site looks beautiful today and the customer is happy. This was a relatively new experience for me in many ways. I don’t mean to offend anyone, but this was one of the first projects I was a part of where I actually felt I learned something of interest from someone else. Looking back, it was a great experience to follow a project from beginning to end and be part of the entire process. I’m not saying everything went smoothly and we had our problems along the way, but in retrospect we did a good job. It’s just a shame that the technology, WCM in this case, is not much in use in my neck of the woods. The HTML, CSS and jQuery parts, on the other hand, are. I also learned a bit about a few other web-related things, like browser compatibility, and became somewhat bemused that the tools for frontend development are still relatively poor. Was there really life before Firefox and Firebug?

The second project I was part of was to help refactor and expand a Java web application that is part of a company service desk for employee support. Although the technology in question was once again something odd, SAP EP using Java and SAP HTMLB in this case, I was happy to be able to introduce jQuery as a frontend alternative to help create some good looking stuff on the frontend. I was also happy to refactor some of the code, which was in dire need of some attention. Parts of it still is, I’m afraid. Old style JSP code with scriptlets etc. really do suck.

The third and final project I’ve been working on this autumn (and now nearly completed) is based on the open source Java portal, Liferay. Liferay has been a kind of baby of mine for the last 12 to 15 months. A colleague introduced me to it and ever since it seems I have been associated with the product, or at least that’s what everyone thinks at the company. In this project we created a web site for a customer to help their end users recycle materials and goods. We created a great deal of Java portlets in the process using Java, JSP and JSTL. We had to use a few of the new features of JSR-286 to get things working. In this project I also introduced jQuery into my frontend code which most certainly made some code a lot easier to both read and maintain. My CSS skills came in handy as well.

So there you have it and that’s it for now. A brief summary of what I’ve been up to for the past few months. Hopefully, I’ll have more for you soon, but don’t be surprised if it has more to do with web development than backend programming, since that’s where my interests are at present. I’ve been reading a lot lately so expect a few book reviews soon. 🙂

Take care!

A few good reasons why I prefer open source software

[Ed note: I changed the title of this article from ‘Why I prefer open source software’ to ‘A few good reasons why I prefer open source software’. My boss read my posting and correctly pointed out that there are many other good reasons why someone would prefer an open source model beyond just the choice/freedom point I make. In hindsight I happen to agree with her. Please keep that in mind when reading.]


Apparently I’m seen as a bit of an open source software advocate within my company. I admit that I can’t say I’m displeased with that description, but I caught myself asking why that is, and why I am happy to be seen as such.

Sure, I have purchased a “few” T-shirts from the Mozilla foundation and CafePress that help reinforce an indisputable image of my beliefs among colleagues, but still. Why do I prefer open source software over proprietary alternatives?

Choice and freedom

Open source software means different things to different people and there are many other good aspects of adopting an open source software strategy. However, I think one of the main reasons I like it boils down to promoting choice and freedom. In general, I don’t like being forced to do anything I don’t want to do. I like to make my own choices.

Choice is good

When developing software the goal is usually to create components that have high cohesion and low coupling. Well designed software enables you to react easily to change, and the lower the coupling between components, the easier it is to alter behavior. Choice is good, so when picking the software I want to use in my everyday life, or within the systems I want to build, I want to experience the same kind of freedom. I want the freedom to use a set of software components that match my specific needs, and not ones forced upon me because they coincidentally just happen to be the ones my operating system supports. I want the freedom to replace any of these components at a later date with better alternatives, should I wish to do so, for whatever reason. And should the person, project or company behind a particular software component on which I depend decide to abandon support or further development, then I have the freedom to carry on development on my own merit since I have the source code available. That is my prerogative. The choice is mine.

You can use the same analogy in other parts of life. If you are a car owner you wouldn’t accept having to fill up at only one brand of petrol station because your car happened to be incompatible with other pumps. Such a car just wouldn’t hit the market because nobody would buy it. The reason is apparent. No, you want the choice to shop around for the best petrol price or just buy the first thing that comes along. You have the freedom to make that choice.

Paying for software

It’s not about price. Yes, free sounds great and it’s beneficial to have the option to try something for free instead of paying for a trial license, but in general I don’t mind paying for software and have done so many times in the past. However, I’m finished paying for things I no longer need. For example, I have followed Microsoft Windows since 1991 and have purchased licenses for Windows 3.0, Windows 95, Windows 98 and Windows XP among other things. However, I can state with a high degree of certainty that Microsoft Windows XP will be the last Windows license I will ever buy. My company PC happens to use Microsoft Vista and there is absolutely nothing there that I feel I really need. 98% of my everyday needs are covered by using Kubuntu at home. Now, if only Adobe would consider open sourcing some of their products or at least offer their full portfolio on Linux…

M$ basher

So I guess this means I hate Microsoft? I don’t really. I dislike some of their business methods and the FUD they spread, but Microsoft is a corporation that exists to make money. That is its purpose – it is not a charity. Many people are unaware that when I left college my idea was to work for a company that developed Microsoft Win32 applications using C. I saw that as a great challenge and something I really wanted to do. I read many books on the subject. However, that never happened for me and I can’t say I lose sleep over it. I think I have gone on to better things, but it’s fair to say that I can see the view from both sides of the fence.
I don’t really dislike the Microsoft software portfolio, but I think some of the people using and promoting the software need to take a good, long look at some of the great open source alternatives available out there and assess whether the proprietary software they are recommending is really worth the price. Just what is the total cost of ownership for the paying customer?

One thing that does annoy me is when people can’t distinguish between a PC, the Microsoft Windows operating system and the Microsoft Office suite. Of course, this is more down to their own ignorance than anything Microsoft has done [can be disputed]. It’s a shame, but the market for good software alternatives has been so poor for the last 10 years or so that people have become accustomed to seeing these components packaged together, and they just see them as one and the same. That’s a tough nut to crack.

Moving along

The open source world is not what it once was. It’s still a movement, a rebellion in a way, but it is definitely growing. Open source software has reached the boardrooms, and more and more companies are reaping the benefits of developing products under an open source license. But let’s not beat about the bush. There is a lot more money involved in open source development today than ever before. Large corporations like Sun and IBM aren’t giving away software to be nice. It is clear that the mindset has changed and so have the business models.

As I said earlier, there are many other good reasons why open source software is preferable, but I can only cover so much in one posting. I think the steady rise of open source software is good news for developers, corporations and consumers alike. For the first time in many years they now have the freedom to choose between several viable alternatives, and more and more of them seem to be breaking free of their shackles.

    Why here, why now, why not?

    The short presentation

    Ever since I read chapter 37 of the book “My job went to India (And all I got was this lousy book)” I have been thinking of creating my own blog. This particular chapter of the book is titled “Let your voice be heard” and contains information on how and why you should want to share your ideas with your peers online.

    However, I read this particular book back in 2006 and have read many other books worth mentioning since, so why now? No single good explanation comes to mind, but the chapter does offer one relevant piece of advice: get familiar with weblogs and weblog syndication before creating your own. At the time of reading the book I wasn’t totally unfamiliar with the concept of blogs, but I decided to take the author seriously, and my browser’s start page has been set to DZone ever since. I therefore consider myself an avid reader of other people’s blogs when time permits. I guess I needed the time until now to digest it all.

    There is also a growing amount of quality content out there in the blogosphere, and you have to ask yourself whether there is anything you can contribute that may be of interest to others. I guess, in my case, only time will tell.

    A bit on myself

    So who am I? Generally, I think I am considered a quiet person. That statement will probably cause a frown amongst my closer friends and family, who most likely do not recognize me in that description at all. Professionally I have no problem giving my opinion when asked, but I never really push it publicly. I guess that part of me is about to change somewhat, as I am now taking a more proactive approach. So what can a reader expect to find here?

    Mission objective

    I work as a software consultant in the Norwegian computer industry. I started my professional career back in 1998 so I am closing in on my 10 year anniversary this summer. During the past 10 years I have worked with a wide range of technology for different companies and customers. Three of my previous employers were also in the software consulting business.

    I am interested in the open source community and hope to one day be in a position to work with open source code and solutions for a living. Unfortunately, I have yet to really succeed in combining open source software development with my day job. Today I use a lot of open source software both at home and in the workplace, although I am not associated with any particular open source project or technology. I am simply an end user of the many open source systems and components available.

    In my neck of the woods, open source software products have still not managed to gain a substantial foothold in the marketplace, which is a shame for both developers and customers. Today I find myself behind enemy lines in primarily Microsoft-occupied territory. Hopefully that will change sometime in the near future, and I am hoping to help lead a legion of troops in a brave outflanking maneuver to gain the upper hand over this evil empire [joke].

    Technology dialysis

    To cut a long story short, I started my career working with C++ and Lotus Domino servers. I didn’t really care much for Lotus Domino/Notes and could never really take part in, or understand, the unequivocal devotion to the product shown by its “fanatical” community. I wasn’t convinced then and I’m still not today. I later moved on to Java in its many variations and forms, focusing on both back-end integration and web/Swing front-end development, sometimes in parallel with the occasional Lotus Domino server or IBM WebSphere Portal server. There have been a few odd things in between, but currently I’m doing some Python development, something I have been longing to do for many years.

    With this blog I am hoping to share some of the information, thoughts and advice I have accumulated during my professional career for others to read and hopefully learn from. Maybe professionals or other interested parties with more experience than I have will leave the occasional comment to supplement, support or correct my ramblings. I hope that by putting this information in writing online I can demonstrate that the last 10 years have not been a total waste of my time.

    Diagnosis in closing

    You may be wondering why I started reading the aforementioned book in the first place. Well, you see, my job did in fact go to India, and the advice in the book helped me through that process. At the time I was working for a large oil company which, against all odds, one day decided to outsource its IBM WebSphere Application Server environment to a foreign contractor. Having read the book ahead of the process, I felt prepared for the change. I was, at the time, working as one of the administrators of this particular environment, although I was starting to get a bit restless and needed a new challenge. I think it’s fair to say that the actual outsourcing decision did not bring me to tears. The market was booming and I knew there would be other, hopefully better, opportunities.

    If you are in a similar situation today, this book may offer you some guidance on how to deal professionally with a tricky outsourcing situation, or help you focus your attention on avoiding the scenario in the future. I still consider it a very good book.