Google Test (GTest) setup with Microsoft Visual Studio for C++ unit testing

Introduction

[Links now include solution files for both 2008 and 2010 versions of Visual Studio]

I’m going to be nice to you today and save you some time. What I am about to describe took me the better part of two (half) workdays, with a few hours’ sleep in between. Setting up Google Test with Microsoft Visual Studio can be a bit tricky, but if you really want unit testing for C++ in Visual Studio (and I hope you do) then this is for you. Most of the challenges can be overcome by configuring the compiler and linker correctly.

It’s worth mentioning that before settling on Google Test, or GTest as it’s also known, I did take a look at a few of the other unit test frameworks for C++, but things don’t seem any easier anywhere else. GTest doesn’t seem like a bad choice: it’s open source, it’s used to test the Google Chromium projects (Chrome) and, more importantly, it seems to be actively maintained.

There is a fair bit of documentation available on the project site, but sometimes you just want to get a feel for something before committing yourself to it. This posting should help you do that, and if you want more, the project has you covered: in my quest for documentation I found several guides, a FAQ, a wiki and a mailing list. In other words, there are good sources of information available if you choose to dive in.

Disclaimer

I suppose a disclaimer is in order for those wondering:

  • I only work with C++ in passing. It’s not something I do much of these days and my working knowledge of Microsoft Visual Studio for C++ is limited.
  • I used Visual Studio 2008 Professional Edition for this work. I also updated the project using Microsoft Visual Studio 2010 Professional Edition (see links below). Maybe the Express versions will work too?
  • I am not affiliated with Google in any way. The reason I am looking into this particular framework is that I am currently maintaining some older C++ programs I wrote 10 years ago. I want to introduce unit testing for them before making changes, and GTest seems a good choice.

So, in this posting I want to share with you how I configured Visual Studio 2008 to work with the GTest framework. After spending a fair bit of time getting this to work, I want to write it all down while it’s still fresh in my mind.

The GTest binaries for unit testing

First things first: you need to download the Google Test framework. I use version 1.5.0, which seems to be the current stable release. I unpacked the GTest project to a folder named C:\Source\GTest-1.5.0\ which I then refer to from other projects in need of the unit testing library. I call this directory %GTest% in the text that follows. Be aware that I think I may have read that Google recommends adding the GTest project to your own solution and building it together with your own code, but this is how I do it for this sample project.

If you are coming from the Java world then this may be where you hit your first snag. It may be a bit different from what you have grown accustomed to with Eclipse, JUnit and the like, but you will have to build the unit test binaries from the downloaded C++ source code. Yes, you will actually have to compile and build the GTest libraries yourself, but before you lose heart, let me add that the download comes with project files for many popular C++ IDEs, Visual Studio (an older version) being one of them. In the msvc/ folder of the download you will find two Visual Studio solution files, which VS 2008 will ask you to upgrade when you open them.

I had no trouble building the binaries. In fact, I can’t remember actually having to configure anything, so don’t be put off by this step. There is one thing to watch, however: there are two solution files and you must choose the correct one for your project. The solution file with the -md suffix uses the DLL versions of the Microsoft runtime libraries, while the solution without a suffix uses the static versions. The important point is that the C++ Code Generation setting for the Debug and Release configurations in your own project must exactly match the setting used when building GTest. If you experience linker problems somewhere down the line in your project then this might be the cause; most of the trouble I have had while building has been due to this setting being incorrect. The project’s README file does a better job of explaining all this, so be sure to have a look. My code uses the static versions of the runtime libraries, so for me that’s /MT for the Release configuration and /MTd for the Debug configuration, and I use the GTest solution without the -md suffix.

In any case, if you plan on using both Debug and Release configurations in your own project then you should remember to also build the GTest solution for both Debug and Release configurations. Among other things, the Release configuration will build two files, gtest.lib and gtest_main.lib, and similarly, the Debug configuration will also build two files, namely gtestd.lib and gtest_maind.lib (notice the extra -d- character in the file names).

Project setup

Now that you have successfully generated the libraries for unit testing, we need to incorporate them into a C++ project. The GTest documentation will show you some simple examples of how to create unit tests using the framework, but it won’t say much about how to set up a good project structure for unit testing. I guess that is only to be expected, since it can be very environment specific.

My preference is to avoid making the unit tests part of the resulting binary (EXE file), and I don’t want to have to restructure my existing project (too much) to add unit testing. I simply want to add unit tests to my project while keeping my existing project code unaware that it’s now being unit tested. So, my solution is based on what I’ve grown accustomed to with Java development in Eclipse, or C# development in Visual Studio. Maybe this is also the norm in other C++ projects? The idea is to split the solution into three separate projects:

  1. One project containing the base code which will function as a library for the others
  2. One project used for running main(), the application entry point, which makes calls to functionality in the library
  3. One project for running unit tests, which also makes calls to the same library functionality. With GTest you don’t even have to write the main() entry point yourself if you link against gtest_main.lib (the sketch just below shows what you would otherwise write).
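
For reference, if you choose not to link against gtest_main.lib, the entry point you would otherwise write yourself is only a few lines. This is a minimal sketch and not specific to this sample project:

   // TestMain.cpp - only needed if you do NOT link against gtest_main.lib / gtest_maind.lib
   #include <gtest/gtest.h>

   int main(int argc, char **argv)
   {
       ::testing::InitGoogleTest(&argc, argv);   // let GTest parse its own command line flags first
       return RUN_ALL_TESTS();                   // runs every TEST and returns 0 if they all pass
   }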

The screenshot below shows what this may look like in Visual Studio:

Solution view in Visual Studio 2008

This setup requires the BaseCode project to be built as a library (LIB) file. The two other projects will build as EXE files that both depend on the LIB file, so each of their project dependencies must be set to depend on the BaseCode project. When attempting to build the solution using this project structure, these are the things to watch for:

  • The BaseCode project must be configured to build as a library. For both configurations, Release and Debug, you must set the project’s Configuration Type to Static Library (.lib). Its Code Generation setting must be Multi-threaded (/MT) for the Release configuration and Multi-threaded Debug (/MTd) for the Debug configuration (identical to the GTest project settings explained earlier).
  • The RunBaseCode project is used to create the EXE for the resulting application, so its Configuration Type is set to Application (.exe), which is the default. It depends on the BaseCode library, so its project dependencies must be set to depend on the BaseCode project. The Code Generation setting should also be set as explained above.
  • The TestBaseCode project is also used to create an EXE, but only for running the test cases – it’s not something you ship. It also depends on the BaseCode library, so its project dependencies must be set to depend on the BaseCode project. As before, its Code Generation setting should be set as explained above.
  • Since the TestBaseCode project needs to run the unit tests it must refer to the GTest libraries. Of the three projects, it is the only one which needs this. Therefore, for both Release and Debug configurations, set the Additional Include Directories setting to refer to the %GTest%\include directory.
  • The TestBaseCode Release configuration’s Additional Library Directories setting should refer to the %GTest%\msvc\gtest\Release directory, and the Additional Dependencies setting should list the libraries gtest.lib and gtest_main.lib. Similarly, for the Debug configuration the Additional Library Directories setting should refer to the %GTest%\msvc\gtest\Debug directory and the Additional Dependencies should list gtestd.lib and gtest_maind.lib (notice the extra -d- character in the file names). Of course, if you have set up your GTest libraries somewhere else then you will have to refer to those directories instead.
  • The Command Line setting for TestBaseCode’s Post-Build Event can be set to “$(TargetDir)$(TargetFileName)” for both Release and Debug configurations. This will run the unit tests automatically and display the results in the Build output window after building the project.
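
To make the setup concrete, here is a rough sketch of what a test source file in the TestBaseCode project might look like. The header BaseCode.h and the Add() function are made-up placeholders for whatever your own library actually exposes; the GTest macros themselves are used as documented:

   // TestBaseCode.cpp - compiled into the test EXE; no main() needed when linking gtest_main.lib
   #include <gtest/gtest.h>
   #include "BaseCode.h"   // hypothetical header from the BaseCode library project

   // Each TEST below is discovered and run automatically by the GTest runner
   TEST(BaseCodeTest, AddReturnsSumOfTwoNumbers)
   {
       EXPECT_EQ(5, Add(2, 3));    // Add() is a made-up BaseCode function, used only as an example
   }

   TEST(BaseCodeTest, AddHandlesNegativeNumbers)
   {
       EXPECT_EQ(-1, Add(2, -3));
   }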

If you are successful, the build output should look something like this:

Screenshot of the build log

You will notice that the unit tests are run automatically and results displayed. The build creates two EXE files as expected, one for the application and one for the unit tests:

Screenshot of running the code and tests

If you get this far you might also want to check out the gtest-gbar project, which is a graphical UI for the unit tests. It’s a simple, one-file .NET application. By pointing it at the unit test EXE file you can get output like this:

Screenshot of gtest-gbar

Closing

For simplicity, I’m linking to the Visual Studio 2008 solution I used to create the example so you can have a look at my solution settings. If you are using Visual Studio 2010 then use this solution instead. Have a look, build it and see if it works for you! You will also need to download, build and refer to the GTest framework LIB files and include folder as described above. Tell me how you get on and which Visual Studio version you are using (2008, 2010, Express etc.). Your feedback would be greatly appreciated!

Now that I’ve got this set up, the next step for me is to incorporate GTest unit testing into my current C++ projects. There’s a lot to learn…


Javascript mouseover effects on table rows using jQuery

Introduction
Lately I’ve been getting into jQuery. At first sight the syntax can look a bit strange, but I get it now – for the most part, that is. On a related note, for a while now I’ve been “wondering” about how you do the mouseover effects you sometimes see on table rows. I’m talking about the effects where you move your mouse over a cell in a table row and the full row’s background colour changes, giving the impression of highlighting the whole row for selection. I really hadn’t given it much thought, but I recently came across some code in a company application that got me thinking about it again, so I decided the time was right to take a closer look.

Code
Originally, before the use of jQuery, your markup would look something like this:



   <table>
      <tbody>
         <tr style="cursor:pointer;"
             onmouseover="this.style.backgroundColor='blue';"
             onmouseout="this.style.backgroundColor='';"
             onclick="alert('row 1 clicked');">
            <td>some table cell content here</td>
         </tr>
         <tr style="cursor:pointer;"
             onmouseover="this.style.backgroundColor='blue';"
             onmouseout="this.style.backgroundColor='';"
             onclick="alert('row 2 clicked');">
            <td>some table cell content here</td>
         </tr>
         <tr>
            ...
            more rows and cells containing more of the same here
            ...
         </tr>
      </tbody>
   </table>


You’ll notice the obtrusive JavaScript event handling code in the onmouseover, onmouseout and onclick attributes. This code repeats itself on every row of the table, which is just annoying. The only difference between the code attached to the individual table rows is the onclick event handler (here a simple alert message), which simulates some functionality specific to a click on that particular row. As a whole this use of JavaScript makes the markup more difficult to read and generally untidy. Of course, it is usually the backend server code generating this kind of markup in a for-loop or similar – nobody writes this stuff by hand – but that’s not the point.
So how can you use jQuery to replace this mess? The following shows a first attempt at a replacement using jQuery.



   
   
   <script type="text/javascript" src="jquery.js"></script>
   <script type="text/javascript">
      $(function() {
         $('table tbody tr').mouseover(function() {
            $(this).addClass('selectedRow');
         }).mouseout(function() {
            $(this).removeClass('selectedRow');
         }).click(function() {
            alert($('td:first', this).text());
         });
      });
   </script>
   <style type="text/css">
      .selectedRow {
         background-color: blue;
         cursor: pointer;
      }
   </style>
   


   <table border="1">
      <thead>
         <tr>
            <th>First column</th>
            <th>Second column</th>
            <th>Third column</th>
         </tr>
      </thead>
      <tbody>
         <tr>
            <td>This</td>
            <td>That</td>
            <td>The other</td>
         </tr>
         <tr>
            <td>Second</td>
            <td>line</td>
            <td>here</td>
         </tr>
      </tbody>
   </table>


So – I guess a brief explanation is in order. You’ll of course need the jQuery JavaScript library available in the location referred to by the src attribute of the first script tag (adjust that path to wherever your copy of jQuery lives). Apart from that, all the custom jQuery code sits in the second script tag at the top of the page. We first create a jQuery selector that selects all tr tags inside a tbody tag, which itself must reside within a table tag. When these elements are found we bind the mouseover, mouseout and click events to the rows. The mouseover and mouseout handlers simply add or remove a CSS class named selectedRow on the row, which produces the highlighting effect. The CSS class itself is defined in the style tag just after the scripts. The click handler just returns the value of the first cell in the selected row as an example; this could be a unique row id or similar.
Because the selector only matches tr tags inside a tbody tag, the handlers are never attached to the header row in the thead, so the highlighting and click effects stay off the column headings.

So that’s all there is to it really – not too bad. It was a lot simpler than I first imagined, and the jQuery code keeps things nice and tidy once you get used to the chained (fluent) method syntax.

Long time, no see?

Introduction
So what gives? It’s been seven months since my last posting. Have I really been all that busy that I couldn’t find the time to create a new posting?

Well, I guess it’s partly true. I have been busy, but I’m sure I could have found the time if the motivation had been there. Rest assured that my guilty conscience has been forever weighing me down for not following up my ‘promising introduction’ to the blogging world. However, I think it’s fair to say that the main reason for my absence is best described as “self-inflicted censorship”, if we can call it that. I’ve been a little uncertain about what to write that would be of interest to others, while also trying to avoid pissing off the people I work with. I’ve also been a little run down at work at times, so my eagerness to share my views has not been at its peak. Now, a few months the wiser, I guess I’ve gained a little perspective and the picture has become a little clearer.

What’s been going on?
A lot has happened during the past seven months. The department I work for has grown substantially, from just two people (my boss and I) to around eleven, and I guess I’ve been a part of that. My new boss sets targets and does her very best to reach them. Although she is a few years my junior, I’ve learned a lot from her, and for the most part I think she is great to work with. I like her positive attitude and have found it contagious at times. I also take a personal interest in management, good management that is, so I do a little reading on the subject on the side, and have been able to share my views with her on occasion. She is keen to keep everyone in the department happy and to find us work that we find interesting. It’s truly great to have a boss who cares and can relate to what I’m doing. The fact that she comes from a Java development background herself helps tremendously and means there has been a lot more focus on Java and open source technology than before. As you may have guessed, that suits me fine, and I feel I have grown a lot, both on a personal and a professional level. Compared to where I was a year ago, things are looking good, although the business markets have taken a turn for the worse during the last few months, so who knows what may happen in future? Fingers crossed.

Focus change
Everyone at my present workplace seems to think I live and breathe Java, but I’ve noticed that my main interest actually lies more in web development based on open source technology than in Java development itself. I haven’t really been following the Java scene actively for quite some time and feel I have fallen behind on the latest APIs and frameworks. At present Java just happens to be the vehicle I use to extract content for web development. My main goal is usually the end result, which usually takes the form of a web application or customer website.

For most of my career I’ve been working behind the scenes on backend systems, but for a long time I’ve had an interest in web frontend technology. However, it seems that web frontend technology still isn’t taken all that seriously. HTML, CSS and JavaScript are considered technologies that you are expected to pick up as you go along, not something you spend a lot of time learning. To a certain extent this is true, but I still can’t help finding it odd that this is the case in 2008, considering just how much web development goes on in the world today. Your traditional senior programmer speaks in the language of design patterns and architecture, and although I can appreciate good backend architecture, I sometimes find the frontend a bit more fulfilling and challenging. Maybe because it’s easier to explain to family and friends what I do for a living? Easier for them to visualize, I guess. 🙂

So, needless to say, and in line with my current interests, the last few months have been dedicated to working on web applications, creating company web sites and the like. At times it’s been great fun and I’ve learned a lot, especially about CSS, which was something I always seemed to deprioritize and found “hard” to get comfortable with. I’m not sure what I really mean by “hard”, but for some reason I never really got into it – mostly down to the fact that I had read a lot about it but never really practiced it. I could never remember all the property names and their values, which is kind of half the point, I guess. This has now changed: within the last half year I have become more fluent in CSS and have grown to like it and appreciate its power. I’ve also noticed just how bad it can get when more junior developers mess it all up, or don’t think ahead. The resulting CSS becomes a nightmare. I feel there is a great deal to be done on this frontier, but a lot of senior developers don’t want to touch it. Not challenging enough, I presume, which is a shame.

Projects
Just before the summer I was assigned to a project as a backend programmer. We were given the task of creating a company web site for a large Norwegian gas company. The customer’s technology of choice was IBM’s Web Content Management (WCM). WCM, if you are unfamiliar with it, is a Java portlet based product that sits on top of WebSphere Portal Server. Although I had worked with both WebSphere Portal Server and WebSphere Application Server in the past, this was a different ball game.
We were two developers assigned to the project, and luckily for me the other developer was fluent in CSS and other frontend technologies. He was a couple of years my junior, but I learned a lot from him. We both struggled with WCM at first and had to overcome a relatively steep learning curve trying to find a good structure and extract our content before styling, but the end result was very good. The site looks beautiful today and the customer is happy. This was a relatively new experience for me in many ways. I don’t mean to offend anyone, but this was one of the first projects where I actually felt I learned something of interest from someone else. Looking back, it was a great experience to follow a project from beginning to end and be part of the entire process. I’m not saying everything went smoothly, and we had our problems along the way, but in retrospect we did a good job. It’s just a shame that the technology, WCM in this case, is not much in use in my neck of the woods. The HTML, CSS and jQuery parts, on the other hand, are. I also learned a bit about a few other web related things, like browser compatibility, and became somewhat bemused that the tools for frontend development are still relatively poor. Was there really life before Firefox and Firebug?

The second project I was part of was helping to refactor and expand a Java web application that is part of a company service desk for employee support. Although the technology in question was once again something of an oddity, SAP EP using Java and SAP HTMLB in this case, I was happy to be able to introduce jQuery as a frontend alternative to help create some good looking stuff. I was also happy to refactor some of the code, which was in dire need of attention. Parts of it still are, I’m afraid. Old style JSP code with scriptlets etc. really does suck.

The third and final project I’ve been working on this autumn (and have now nearly completed) is based on the open source Java portal, Liferay. Liferay has been a baby of mine for the last 12 to 15 months. A colleague introduced me to it, and ever since then I seem to have been associated with the product, or at least that’s what everyone at the company thinks. In this project we created a web site to help a customer’s end users recycle materials and goods. We created a good number of Java portlets in the process using Java, JSP and JSTL, and we had to use a few of the new features of JSR-286 to get things working. In this project I also introduced jQuery into my frontend code, which most certainly made some of it a lot easier to both read and maintain. My CSS skills came in handy as well.

Conclusion
So there you have it, and that’s it for now: a brief summary of what I’ve been up to for the past few months. Hopefully I’ll have more for you soon, but don’t be surprised if it has more to do with web development than backend programming, since that’s where my interests lie at present. I’ve been reading a lot lately, so expect a few book reviews soon. 🙂

Take care!

A few good reasons why I prefer open source software

[Ed note: I changed the title of this article from ‘Why I prefer open source software’ to ‘A few good reasons why I prefer open source software’. My boss read my posting and correctly pointed out that there are many other good reasons why someone would prefer an open source model beyond just the choice/freedom point I make. In hindsight I happen to agree with her. Please keep that in mind when reading.]

Introduction

Apparently I’m seen as a bit of an open source software advocate within my company. I admit I can’t say I’m displeased with that description, but I caught myself asking: why is that? And why am I happy to be seen as such?

Sure, I have purchased a “few” T-shirts from the Mozilla foundation and CafePress that help reinforce an indisputable image of my beliefs among colleagues, but still. Why do I prefer open source software over proprietary alternatives?

Choice and freedom

Open source software means different things to different people, and there are many good aspects to adopting an open source software strategy. However, I think one of the main reasons I like it boils down to choice and freedom. In general, I don’t like being forced to do anything I don’t want to do. I like to make my own choices.

Choice is good

When developing software the goal is usually to create components that have high cohesion and low coupling. Well designed software enables you to react easily to change, and the lower the coupling between components, the easier it is to alter behavior. Choice is good, so when picking the software I want to use in my everyday life, or within the systems I want to build, I want to experience the same kind of freedom. I want the freedom to use a set of software components that match my specific needs, not ones forced upon me because they coincidentally happen to be the ones my operating system supports. I want the freedom to replace any of these components at a later date with better alternatives, should I wish to do so, for whatever reason. And should the person, project or company behind a particular software component on which I depend decide to abandon support or further development, then I have the freedom to carry on development on my own, since I have the source code available. That is my prerogative. The choice is mine.

You can apply the same analogy to other parts of life. If you are a car owner you wouldn’t accept having to fill up at only one brand of petrol station because your car happened to be incompatible with the other pumps. Such a car just wouldn’t hit the market because nobody would buy it, and the reason is obvious. No, you want the choice to shop around for the best petrol price, or just take the first station that comes along. You have the freedom to make that choice.

Paying for software

It’s not about price. Yes, free sounds great and it’s beneficial to have the option to try something for free instead of paying for a trial license, but in general I don’t mind paying for software and have done so many times in the past. However, I’m finished paying for things I no longer need. For example, I have followed Microsoft Windows since 1991 and have purchased licenses for Windows 3.0, Windows 95, Windows 98 and Windows XP among other things. Even so, I can state with a high degree of certainty that Microsoft Windows XP will be the last Windows license I will ever buy. My company PC happens to use Microsoft Vista and there is absolutely nothing there that I feel I really need. 98% of my everyday needs are covered by using Kubuntu at home. Now, if only Adobe would consider open sourcing some of their products or at least offer their full portfolio on Linux…

M$ basher

So I guess this means I hate Microsoft? Not really. I dislike some of their business methods and the FUD they spread, but Microsoft is a corporation that exists to make money. That is its purpose – it is not a charity. Many people are unaware that when I left college my plan was to work for a company that developed Microsoft Win32 applications using C. I saw that as a great challenge and something I really wanted to do, and I read many books on the subject. However, it never happened for me and I can’t say I lose sleep over it. I think I have gone on to better things, but it’s fair to say that I can see the view from both sides of the fence.
I don’t really dislike the Microsoft software portfolio, but I think some of the people using and promoting the software need to take a good, long look at the great open source alternatives available out there and assess whether the proprietary software they are recommending is really worth the price. Just what is the total cost of ownership for the paying customer?

One thing that does annoy me is when people can’t distinguish between a PC, the Microsoft Windows operating system and the Microsoft Office suite. Of course, this is more down to their own ignorance than anything Microsoft has done [can be disputed]. It’s a shame, but the market for good software alternatives has been so poor for the last 10 years or so, and people have become so accustomed to seeing these components packaged together, that they just see them as one and the same. That’s a tough nut to crack.

Moving along

The open source world is not what it once was. It’s still a movement, a rebellion in a way, but it is definitely growing. Open source software has recently reached the boardroom, and more and more companies are reaping the benefits of developing products under an open source license. But let’s not beat about the bush: there is a lot more money involved in open source development today than ever before. Large corporations like Sun and IBM aren’t giving away software to be nice. It is clear that the mindset has changed, and so have the business models.

As I said earlier, there are many other good reasons why open source software is preferable, but I can only cover so much in one posting. I think the steady rise of open source software is good news for developers, corporations and consumers alike. For the first time in many years they have the freedom to choose between several viable alternatives, and more and more of them seem to be breaking free of their shackles.