Remember me? I’m your old C++ code…

Just recently I was called upon to fix some code that I had written while working as a consultant “way back” in 1998. It’s not that long ago really, but a lifetime in software developer years. At the time I was fresh out of university so this was my first proper assignment as a professional. I was the sole developer on this project and the code was written in C++ using the Lotus Notes/Domino C++ API. This was kind of the norm back then.

In essence this old code was broken into four programs. They were all server-based batch jobs, run at scheduled intervals. Their common goal was to maintain the people groups in the Lotus Notes Name & Address book to reflect the structure of the organization (which was, and still is, a large organization). That means creating new groups, removing empty groups and adding/moving group members to and from groups. For a user to gain access to the Lotus Domino servers (for mail and other databases) you had to be a member of a group in the hierarchy, since only the top node (and thereby its children) was granted access to the servers. The groups were also used as mailing groups for parts of the organization. It would be kind of “critical” for a user if the program made a mistake, and it goes without saying that with so much application business logic you wouldn’t choose C++ for this type of task today.

I was amazed that these old programs were still running!! Sure, one program had been altered by someone else a few times some years ago, but the remaining three were running just as I last compiled them back in October of 1999. I thought that was kind of fun, and it also made me a bit proud. Of course, I believe there are two reasons why this code has run unaltered for so long:

  1. It was written properly and there was no need to alter it
  2. Nobody understood the source code and therefore dared not make a change

I choose to believe reason one. I guess that’s a shocker! However, I was actually able to confirm this when I started to work with the source code once again after all these years. It was tidy and easy to read, although I was amazed just how much of the C++ syntax was now strange to me after many years of programming in Java, Python and C#. I would not have made all the same architectural choices that I did back then, but in general I was kind of impressed. There was also valid documentation, written by me, which I found very useful when trying to get back into the problem mindset. Not bad!! 🙂

It was strange to use Visual C++ 6.0 again which was the IDE/compiler I worked with originally. I did actually try to upgrade the project to Visual Studio 2008, but the Visual Studio C++ compiler wouldn’t compile the original source code so in the end I gave up trying. It was never part of the new assignment and the C++ syntax was just too unfamiliar to me. The customer didn’t care so I stuck to VC++ for the time being. Maybe in the future if I get the opportunity again I will give it another attempt.

Of course, it goes without saying that the actual source code – released in 1999 – was lost, but luckily enough I found a copy on a CD-ROM at home, which was a relief. It made the job a lot easier, but I guess it also shoots my reason one (above) to bits somewhat 🙂

Don’t make me think!

During a recent long-weekend trip with my family I finally got around to reading the second edition of Steve Krug’s book Don’t make me think – A common sense approach to web usability. Being a developer who does a fair share of front-end web development during a normal work day, I think it’s only good and proper to learn more about the softer side of web site development, such as graphical design, interaction design and information architecture. I don’t normally work in these areas on a daily basis, but will usually have to communicate with someone who does – for at least a small part of a project. That has become more common in recent years.

As the title says, this book is about web usability. Before reading it I felt I knew a lot about the subject, and this is not the first book I have read on web usability. However, it is one of the better reads: pretty quick and fun – around 200 pages with a nice, user-friendly layout. I found Chapter 9 on practical usability testing very useful and something I may try to put into practice if I get the chance. If you have the interest and need a quick primer then this could be the book for you!

The second edition was written in 2005 and much has happened since – especially on the web. One of the book’s final chapters, on CSS and accessibility, might be considered dated today: CSS is now the norm for web design (or should be). Developers know that HTML tables should not be used for controlling layout, and support for CSS is now well in place in most of today’s browsers. Of course, there is still the odd bunch of developers who are too lazy to separate their content from the design – or just don’t know any better. How far we have come regarding accessibility is more debatable; it is still a “work in progress” for all but the major sites on today’s Internet.

Being published in 2005, the book does not mention web 2.0 or AJAX technologies. However, I don’t feel the book is missing much. I guess common sense in 2010 is still pretty much the same as it was in 2005 and this book does a good job of explaining the basics of good web usability. It still applies today.

I noticed the author published a book in late 2009 called Rocket Surgery Made Easy: The Do-it-yourself Guide to Finding and Fixing Usability Problems. Given how well I liked Don’t make me think, I think this book may soon find its way to my bookshelf. You can find more information and free chapters on both books at the author’s own site.

Encouraging signs for web development on the Microsoft ASP.NET 4.0 platform

This really seems like a good time to be working with Microsoft web technologies. Not only has ASP.NET 4.0 just shipped along with a new version of Visual Studio, but there seems to be a focus on more openness, a willingness to adhere to web standards, and co-operation with the community. Coming from an open-source world this is a familiar mindset to me, and although I have only recently crossed over to the Microsoft platform, the idea of community-driven development still appeals to me. I just downloaded the 2010 Express versions of Microsoft Visual Web Developer and Microsoft Visual C# and my initial impressions are good.

I prefer doing my client-side scripting using jQuery and have done so successfully for a few years now. Followers of this blog will know that I recently completed my ASP.NET 3.5 certification. What I found a little annoying when studying for the exam was having to delve into the details of the Microsoft AJAX library knowing full well that I would probably never use any of it. Yesterday I came across Stephen Walther’s article regarding Microsoft’s contribution to the jQuery project. I was encouraged to read that Microsoft will be shifting its investment towards the jQuery project and away from its own client-side Ajax library. Still, although I will probably never use the Microsoft AJAX library in any of my projects, I consider it a benefit that I am aware of the “old ways” of doing client browser scripting from an ASP.NET perspective. I’m sure there will be plenty of code that will need to be refactored and upgraded to jQuery in years to come :-).

Another encouraging project is Microsoft’s ASP.NET MVC. The MVC templates are now part of the Visual Studio 2010 IDE and, from what I have been reading, this will be the preferred way forward for web development on the Microsoft platform. Coming from an open-source, Java-based web development world, this is music to my ears and something I am looking forward to learning more about in the months ahead.

With the release of ASP.NET 4.0, my understanding is that there has been a focus on getting the generated XHTML to adhere to web standards, thereby simplifying CSS styling. This applies to both MVC and WebForms development. I think this is good news, since there have been a few times in the last few months when my jaw has dropped to the floor while viewing the XHTML source generated by the ASP.NET 3.5 controls – especially the data-bound controls. In today’s world of correct web semantics I’m glad this is finally on the agenda and look forward to reaping the benefits.

Be careful with your jQuery selectors!

I love jQuery! On my current ASP.NET 3.5 project I have a form which contains several input fields. I wanted to disable them given a condition. I created the following jQuery code:

$(document).ready(function() {
    $("input#ASP_Generated_User_Control_Prefix_Here_uxMyNumberField").keyup(function() {
        // select all non-submit inputs (except this one), plus the fxButton links
        var elements = $(":input:not(:submit)")
                       .not($(this))
                       .add("a.fxButton");

        if ($.trim($(this).val()) != "") {
            elements.attr('disabled', true);
        } else {
            elements.removeAttr('disabled');
        }
    });
});

In short, this code applies to the keyup() event of the input field uxMyNumberField (the ID prefix is autogenerated by ASP.NET). As we will see in a moment, it is the jQuery selector that plays the star role of this blog entry. To explain the selector: select all input fields that are not submit buttons, excluding the current input field, and add links with the CSS class fxButton to the selection. So if a value is entered in uxMyNumberField, the code immediately disables every element in the set, and otherwise enables them (e.g. when the user presses backspace to clear the field).

Everything worked as expected in the browser. The UI elements were being enabled and disabled as a value was entered or removed. However, after entering a value in uxMyNumberField and clicking the form’s submit button, the form was no longer submitted as it had been previously. The ASP.NET event handler for my submit button was never called, even though I could confirm that a server request was being generated from the browser and the ASP.NET Page’s Load event was being triggered. Curiously, the error only occurred when a value was entered in the uxMyNumberField field; the remaining form fields were submitted correctly when uxMyNumberField was empty.

To make a long story short, what was happening was that the selector was also including hidden ASP.NET input fields in the element set – fields used for internal page state and event handling:

<input type="hidden" name="__EVENTTARGET" id="__EVENTTARGET" value="" />
<input type="hidden" name="__EVENTARGUMENT" id="__EVENTARGUMENT" value="" />
<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="..." />

In practice this disabled the hidden state fields as well, and since disabled inputs are not submitted with a form, ASP.NET never received the state it needed to fire my submit button’s event handler. When I corrected the selector to include only my own form elements, everything worked as expected.
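As a sketch of the fix – the field names are from the example above, and the exact filtering rule is my own reconstruction rather than the original code – the selection boils down to excluding hidden inputs as well as submit buttons and the current field. Expressed as plain JavaScript over element descriptors, so the rule can be exercised outside the browser:

```javascript
// Keep only the form inputs that should be toggled: skip submit buttons,
// skip the field currently being typed in, and skip hidden inputs –
// which is what excludes ASP.NET's state fields (__VIEWSTATE and friends).
function selectTogglable(inputs, currentId) {
    return inputs.filter(function (el) {
        return el.type !== "submit" &&
               el.type !== "hidden" &&
               el.id !== currentId;
    });
}

// Mock of the rendered form, including the ASP.NET state fields.
var fields = [
    { id: "__EVENTTARGET",    type: "hidden" },
    { id: "__EVENTARGUMENT",  type: "hidden" },
    { id: "__VIEWSTATE",      type: "hidden" },
    { id: "uxMyNumberField",  type: "text" },
    { id: "uxOtherField",     type: "text" },
    { id: "uxSubmit",         type: "submit" }
];

// Only uxOtherField survives: the hidden state fields, the submit button
// and the current field are all filtered out.
var togglable = selectTogglable(fields, "uxMyNumberField");
```

In jQuery terms the equivalent would be something like `$(":input:not(:submit):not([type=hidden])")` – or, more robustly, scoping the selector to your own container element so platform fields are never in play at all.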

This nasty mistake took me some time to locate, and only with the aid of Firebug – what would we do without it! I was certain the error was due to an ASP.NET validator control change I had made earlier, which applied only to uxMyNumberField.

This was also my first experience with the Firebug Javascript debugger. When stepping through my jQuery function, the debugger showed me which elements were part of the resulting element set, which in turn immediately led me towards correcting the selector and excluding the hidden ASP input fields.

So the moral of this experience is be careful not to include fields used by the platform when creating your jQuery selectors! 🙂

Microsoft Certified Technology Specialist – .NET Framework 3.5 ASP.NET Applications

So I finally reached my goal. Today I completed exam 70-536 Microsoft .NET Framework – Application Development Foundation, and with the earlier completion of 70-562 Microsoft .NET Framework 3.5, ASP.NET Application Development (see my prior posting), I can now call myself a “Microsoft Certified Technology Specialist (MCTS) – .NET Framework 3.5 ASP.NET Applications”. Yes, I am really glad this is over for now – and so are my wife and children 🙂

The test today (70-536) was harder than I initially anticipated. Like last time, I prepared by reading through the Self-paced Training Kit and taking notes in Notepad++. After reading through the book I took a closer look at the test’s online curriculum and used MSDN to supplement my notes with relevant information. In the end I had exactly 4662 lines of notes! Given the chance again I would probably look at the curriculum before reading each chapter of the book, since it clearly tells you which skill set the chapter covers; you can then see whether it covers everything you need.

For some odd reason the book doesn’t cover every skill mentioned in the curriculum – the classes ProtectedData and ProtectedMemory come to mind, although I can’t recall getting any questions about either of them. Apart from an early run, I didn’t use the practice tests on the CD-ROM, since I consider the question quality lacking, sometimes annoying, and sometimes plain wrong! With more time I might have given them another chance, but I think playing with MSDN and testing examples gives higher value. Let me add that the CD-ROM practice tests I had installed applied to .NET 2.0 – some of the answers had changed for .NET 3.5. The test itself claims not to be .NET version specific.

My overall experience of taking this exam was a positive one. I learned an awful lot, but the questions on the test are designed to make you uncertain: simple things you thought you knew well suddenly become sources of doubt. Compared to the Certified Java Programmer exam (which I took a few years ago) this test does not delve deep into the details or syntax of the C# language, but stays in the world of the .NET Framework – as the title says.

C# Delegates and Events

Readers of this blog will know that I’m in the process of learning the ins and outs of C#. Coming from a Java/C++ background, I had a bit of trouble getting to grips with C# delegates and events. None of the examples I looked at really made sense to me. Isn’t this pretty much the same as interfaces?

I guess the correct answer is both yes and no. I’m pretty sure you can achieve similar results using plain Interfaces, but now after playing around with Delegates and Events, I kind of see where using them might seem like a better approach.

So, how to get started? Let’s look at a simple example. You have a class that should be able to call back its caller when something happens – an event, in other words. You have a vague idea of what type of information you want to send to the caller when the event occurs. This is where the delegate steps in, and it should be your first stop:

    public delegate void MyDelegate1(object sender, EventArgs e);

The delegate is usually declared at the same “level” as you would an interface or class. The parameters I have used seem to be the C# norm but, based on my experimenting, they are not required: you can declare as few or as many parameters as you like, of any type. The return type can also be something other than void but, as we will see below, this makes little sense for most common scenarios. For now, the important thing to notice is the delegate keyword.

So, how to use it? You need to declare an event:

public class Worker
{
    public event MyDelegate1 MyEvent1;

    public void DoIt()
    {
        if (MyEvent1 != null)
        {
            EventArgs e = new EventArgs();
            MyEvent1(this, e);
        }
    }
}

By creating an instance of the Worker class and calling the DoIt() method we trigger the event. Well, nearly – we still need to register an event handler, but more on that in a moment. For now, notice the event’s declaration and the relationship between our event and delegate. Notice also the signature of the call to MyEvent1() in the method body: it must match the delegate’s signature and return type exactly, otherwise you get a compiler error. You will also see that we check whether the event is null before raising it. Not all events will have registered handlers, and a NullReferenceException will be thrown if MyEvent1 is null – and we don’t want that.

The code compiles, but nothing happens. We need to register an event handler. Let’s register two of them just to make the point:

public class Runner
{
    public void SomeMethod()
    {
        Worker w = new Worker();
        w.MyEvent1 += new MyDelegate1(RespondToEvent1Alt1);
        w.MyEvent1 += new MyDelegate1(RespondToEvent1Alt2);
        w.DoIt();
    }

    public void RespondToEvent1Alt1(object sender, EventArgs e)
    {
        Console.WriteLine("Responding to event 1, alternative 1");
    }

    public void RespondToEvent1Alt2(object sender, EventArgs e)
    {
        Console.WriteLine("Responding to event 1, alternative 2");
    }
}

We are doing a few things here. We register two methods as handlers for our event; each handler’s signature must match the delegate’s signature. When the event is triggered in DoIt(), the methods are called in the order they were registered.

What happens if we declare a new delegate with a non-void return type and different signature?

    public delegate int MyDelegate2();

This delegate requires any event handler to return an int when done processing the event, and the handlers receive no parameters. We modify the Worker class to look like this:

public class Worker
{
    public event MyDelegate1 MyEvent1;
    public event MyDelegate2 MyEvent2;

    public void DoIt()
    {
        if (MyEvent1 != null)
        {
            EventArgs e = new EventArgs();
            MyEvent1(this, e);
        }

        if (MyEvent2 != null)
        {
            int x = MyEvent2();
        }
    }
}

The important thing to notice is the return type and parameters of the call to the new delegate. What happens if we also register two handlers for this event?

public class Runner
{
    public void SomeMethod()
    {
        Worker w = new Worker();
        w.MyEvent1 += new MyDelegate1(RespondToEvent1Alt1);
        w.MyEvent1 += new MyDelegate1(RespondToEvent1Alt2);
        w.MyEvent2 += new MyDelegate2(RespondToEvent2Alt1);
        w.MyEvent2 += new MyDelegate2(RespondToEvent2Alt2);
        w.DoIt();
    }

    // skipped RespondToEvent1Alt1() and RespondToEvent1Alt2() here - same as above

    public int RespondToEvent2Alt1()
    {
        Console.WriteLine("Responding to event 2, alternative 1");
        return 24;
    }

    public int RespondToEvent2Alt2()
    {
        Console.WriteLine("Responding to event 2, alternative 2");
        return 42;
    }
}

When the new event is triggered in DoIt() it calls the two new event handlers in order, as expected. However, the value of the local variable x will be the return value of the last event handler called – here, that’s 42. This is why returning a value from a delegate used as an event doesn’t make much sense: there is no way of telling how many handlers will be registered, if any – which is kind of the point.

Maybe there is a way to process the return value of each event handler, but I haven’t seen it mentioned in any of the C# documentation I’ve read – not that I’ve been looking actively 😉
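As it happens, the framework does provide a hook for this: every delegate exposes GetInvocationList(), which returns the registered handlers individually so the caller can invoke each one and capture its return value. A minimal sketch, reusing MyDelegate2 from above – note that the collecting version of DoIt() is my own variation, not the original code:

```csharp
using System;
using System.Collections.Generic;

public delegate int MyDelegate2();

public class Worker
{
    public event MyDelegate2 MyEvent2;

    // Instead of one combined call (which only yields the last return
    // value), walk the invocation list and collect what each registered
    // handler returns, in registration order.
    public List<int> DoIt()
    {
        List<int> results = new List<int>();
        if (MyEvent2 != null)
        {
            foreach (Delegate d in MyEvent2.GetInvocationList())
            {
                results.Add(((MyDelegate2)d)());
            }
        }
        return results;
    }
}

public class Program
{
    public static void Main()
    {
        Worker w = new Worker();
        w.MyEvent2 += delegate { return 24; };
        w.MyEvent2 += delegate { return 42; };
        foreach (int r in w.DoIt())
        {
            Console.WriteLine(r); // prints 24, then 42
        }
    }
}
```

Note that GetInvocationList() can only be called on the event from inside the declaring class – outside it, subscribers can only use += and -=, which is exactly the encapsulation the event keyword is there to provide.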

Passed my Microsoft ASP.NET certification exam!

Today, after months of steady reading, running small code examples and, not least, relevant project experience, I passed my ASP.NET certification exam. The formal title of this exam is Microsoft .NET Framework 3.5, ASP.NET Application Development (70-562). I can’t remember being quite so happy in a long time. I felt awfully tired when I finally finished.

During my career I have taken 14 certification tests (and failed once), but this was by far both the longest (3 hours) and most difficult… or maybe I’m just getting old. I did find the questions a bit long (I had to scroll a lot on a small monitor) and focused mainly on topics related to WCF and/or the UpdatePanel. I wasn’t expecting such an emphasis on WCF, to be honest…

I didn’t sleep well last night and this morning, although I felt prepared, I was worried about the sheer size of the curriculum. Would I remember the details under pressure? When studying for the test I used the Microsoft self-paced training kit (all 950+ pages of it!). When reading through this “bible” I took notes in my text editor. When I finally completed the book I had over 5000 lines of notes (no wordwrap)… yeah, I guess that’s a lot of notes.

I’m sure most people have seen the size of the average Microsoft self-paced training kit book, but that’s only half of it really – a basis to get you started. You have to use the online practice tests that accompany the book to stand a chance of passing the exam – that and a lot of clicking around on MSDN, which, by the way, is a service I have come to appreciate an awful lot – now that I’m more used to its layout.

This was my first Microsoft exam. It is supposed to be the second part of a two-part certification program leading to the MCTS title. I kind of skipped past the first framework exam since I wanted to delve into ASP.NET quickly to help with my current project at work. Looking back now, I think this was a smart move.

So, the next stop for me is the Microsoft .NET Framework – Application Development Foundation (70-536).  I’ve just got a hold of the self-paced training kit for this exam and it’s just over 700 pages so I’m guessing there will be a few notes here too…

Let’s hope it will be worth it. I’m lucky to have a patient wife and family. 🙂