Using Fiddler as a simple HTTP development server

Fiddler’s AutoResponder

Lately I’ve been playing around with Fiddler (version 2.3.4.4). Fiddler is a free packet analyzer for debugging HTTP and HTTPS traffic. It acts as a proxy between your web browser and the web server: you point your browser at it (using a plugin for Firefox) and it lets you analyze and tamper with the HTTP requests and responses passing between client and server. Very cool and very useful.

Screenshot of Fiddler user interface

Although Fiddler is packed with useful functionality for analyzing and tampering with HTTP traffic, I also found a new use for it when working with jQuery and JSON.

Just recently I was looking into the jQuery UI Autocomplete plugin and wanted to play around with the functionality that uses JSON returned from the server. To simplify, my idea was to test the Javascript client code without writing any server-side code. Of course I could have simulated something similar by creating the JSON in the Javascript code, but that’s not what I wanted here since it was important to get familiar with the code making the JSON request over the wire. The client code that I ended up with can easily be deployed to a proper web server without modification.

Fiddler’s AutoResponder functionality

Fiddler includes something called the AutoResponder. As the name gives away, you use it to automatically send a response to a calling browser client. The idea is to get Fiddler to intervene and return something useful when the browser makes a request for a particular URI. In my case I was aiming at making Fiddler return static JSON to the browser when a call was made to http://server.anywhere.com/json. I created the contents of an HTTP response that I wanted returned to the browser and stored it in a file on disk. Then I configured the AutoResponder to return this file whenever the browser made that request. I made no modifications to my hosts file.

Screenshot of the configured AutoResponder in Fiddler

Very simple really, as you can see from the screenshot above: I have configured two URIs. One is for the index.html file and the other for the JSON “service”. When one of the URIs is hit by the browser, the corresponding file on the file system is returned. The AutoResponder also lets me set latency, so I simulated a 3000 ms sleep for the service before responding.

The tricky part was actually making a valid HTTP response file for the JSON service. In my case it looked as shown below and was saved as UTF-8. For this purpose I found Notepad++ to be very useful: when you select the actual HTTP content text in the file it tells you exactly how many bytes are needed as the value of the Content-Length header. In my case it was 87 bytes.
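
To give an idea, a minimal version of such a response file could look something like the following. The JSON payload here is just a stand-in (the jQuery UI autocomplete widget accepts an array of label/value objects), and the Content-Length value must match the exact byte count of your own payload; for the body below that is 79 bytes.

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 79

[{"label":"jQuery","value":"jQuery"},{"label":"jQuery UI","value":"jQuery UI"}]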

A screenshot of calculating the number of bytes in the html response with Notepad++

Now, when opening the browser and making a call to http://server.anywhere.com/json, the AutoResponder will step in and return the JSON to the browser. The code I used for invoking the call is shown below. Of course this code ignores what is typed in the input field and returns the same JSON regardless, but for my purpose that’s okay.

<html>
    <head>
        <title>jQuery Autocompletion with JSON call</title>
        
        <link rel="stylesheet" 
              type="text/css" 
              href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.5/themes/ui-lightness/jquery-ui.css" />

        <script type="text/javascript" 
                src="http://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js"></script>
              
        <script type="text/javascript" 
                src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.5/jquery-ui.min.js"></script>
        
        <script type="text/javascript">
            // Wire up the jQuery UI autocomplete widget; each keystroke
            // triggers a JSON request to the relative URL "json", i.e. the
            // same origin the page itself was served from.
            $(document).ready(function() {
                $( "#tags" ).autocomplete({
                    source: function(request, response) {
                        var url = "json";
                        var param = "";

                        // Fetch the static JSON and hand it to the widget
                        $.getJSON(url, param, function(data) {
                            response(data);
                        });
                    }
                });
            });
        </script>
    </head>

    <body>
        <div class="demo">
            <div class="ui-widget">
                <label for="tags">Tags: </label>
                <input id="tags" />
            </div>
        </div>
    </body>
</html>

Same origin policy

When requesting data using Javascript there are some security limitations that the browser enforces, one being the same origin policy. This policy restricts Javascript from accessing JSON from a different domain than the one hosting the script making the call. So if I want to request JSON from http://server.anywhere.com/json then the Javascript which makes the JSON call needs to originate from the same domain, http://server.anywhere.com/.

Again, the AutoResponder to the rescue. I set up the AutoResponder to serve my HTML page (the file which includes my Javascript) and mapped this file to a known URI in the same domain as the simulated JSON service. When the browser makes the initial request to http://server.anywhere.com/index.html, Fiddler’s AutoResponder intercepts the request and returns a file from my local drive to the browser. The browser thinks it’s getting files from the web, but in fact Fiddler is just serving files from my local hard drive. When the time comes to trigger the script requesting the JSON from the simulated service at http://server.anywhere.com/json, the AutoResponder steps in and returns my static JSON file. Notice the host name in the screenshot below and the jQuery autocomplete plugin in action.

Screenshot of the resulting page in the browser

Maybe a little clumsy to set up, but once done you can tweak everything in the files with no need to deploy any code or install any servers. I thought it was a nice touch that the AutoResponder can simulate latency, so you can test any timeout functionality on the client side without having to add thread sleeps, which is usually the case in service development.

Conclusion

I am really happy with Fiddler. I have used Wireshark in the past, but for working with HTTP traffic it is a little too heavyweight. Fiddler has a lot of interesting features for web development and analysis work.


Jumping over LDAP hurdles

LDAP is nothing new, but until recently I had never had the need to create LDAP lookups from an application to a directory server. Most of today’s platforms are LDAP compliant and handle this themselves, thereby abstracting developers and administrators from the LDAP server internals. Under normal circumstances you just have to read the platform/framework configuration documentation, create a system account for the application to use when binding to the LDAP server, and the platform takes care of the rest itself… well, more or less :-).

Context

In the past I have read a lot about LDAP, and for some reason I had got the idea in my head that working with LDAP was difficult. In practice it turned out to be pretty simple once I got past a few hurdles. The ASP.NET application I was maintaining needed to switch from the LDAP services supplied by Lotus Domino to Microsoft Active Directory. The application was using Active Directory for user authentication, but for all other services it was using the LDAP directory supplied by Lotus Domino. There were historic reasons why this was the case, but it didn’t make sense anymore. To make matters worse, the data in the two directory services was not synchronized, so users were complaining that their user data was displayed incorrectly in the application UI. Of course this was true, since it was being read from a mix of directory servers.

From my brief experience working with this technology, there are three basic hurdles you need to jump over to get something working.

Binding

The first thing you need to do is bind to an LDAP directory, which is LDAP jargon for authenticating to the LDAP server. It sounds simple enough… However, when creating a URL I usually write the protocol specifier in lowercase characters. Doing so was giving me a very cryptic error from the Microsoft .NET runtime, since I was using a protocol specifier that looked similar to ldap://server/query… Luckily Google was my friend on this occasion and I soon found out why I was getting complaints from the .NET runtime environment. It seems the .NET System.DirectoryServices.DirectoryEntry class does not accept the protocol specifier in lowercase characters when connecting to Active Directory, so ldap:// has to be converted to LDAP://, which I still find a bit strange. Is this a platform specific bug? I could not find any information in any LDAP documentation that says this is necessary…
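
In other words (the server name and query path here are just placeholders):

ldap://myserver/OU=Users,DC=example,DC=com   <-- rejected by DirectoryEntry
LDAP://myserver/OU=Users,DC=example,DC=com   <-- accepted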

Directory structure

The next thing to do is get familiar with your specific directory structure. I used the free Softerra LDAP browser to help me here. It lets you query the directory and display the results, and it also lets you traverse the tree using the GUI, which is useful for getting a feel for the LDAP tree. Your tree will probably be site specific and you will need to know where to look in order to create a meaningful and efficient LDAP query. For lookup efficiency you should avoid starting your query at the directory’s root node. I was working for a large enterprise with thousands of users and groups, and starting a query at the wrong place would kill performance. In my case an auto-complete function in the UI was calling a backend lookup service, so it had to be fast.
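
To illustrate the point with made-up DNs, scoping the search base to the branch you actually care about saves the server from considering every entry under the root:

LDAP://server/DC=example,DC=com                <-- starts at the root: slow
LDAP://server/OU=Employees,DC=example,DC=com   <-- scoped to one branch: faster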

LDAP query syntax

Building the LDAP query is where you will probably spend most of your time. The query syntax may seem difficult to read at first, but you get used to it fast. If you’ve ever worked with a scientific calculator then I guess you can think of it working much the same way, only backwards :-). So to find an object that has objectclass=person and a shortname attribute starting with “joe” you could write (&(objectclass=person)(shortname=joe*)). The “&” works as a logical AND and the objectclass is the LDAP object type to return (there are other types of standard LDAP objects). Also note the wildcard. To expand our search example to find people with a shortname starting with “joe” or “kent”, the query could be written as (&(objectclass=person)(|(shortname=joe*)(shortname=kent*))). Notice the “|” symbol for the logical OR. Also note the parentheses in the query delimiting the AND and OR clauses. It may be hard to read at first, and this is where an LDAP query tool like the one mentioned above comes in handy for testing your query.

Maintaining and refactoring C++

Last week was my last day working with C++ (for a while). It’s been quite fun to revisit both the programming language and the source code which kicked off my development career over 12 years ago, and I have enjoyed the experience a lot. There are also a few things worth noting, so I put together a short list of things I found interesting during this short maintenance assignment.

Introducing a source control system

The code was originally written in 1999 and the executable files have been running in production ever since. Today the programs are owned by a group in the enterprise operations team. Their focus is to keep the systems up and running and they have little interest in the development process. There was no source control system available when I originally developed the code so, before making any changes to the existing source, I was determined to correct that fact. A few months ago I taught myself Git and have never looked back. Git is an excellent tool and this was an appropriate opportunity to introduce it as a suitable source control system for this code base. Being the sole maintenance developer of these programs I was happy just to add Git to aid my own productivity and give me the ability to safely abort a change should the need arise (and it did), but it will also pay off in the long run.

Updating to new IDE

Once a source control system was in place, the next step was to pick out the correct file candidates from each project to be checked in to the repository. I didn’t want every project file source controlled, and this was a good occasion to get a bit more familiar with some of the lesser known project files used by the IDE, and also with how to configure Git to filter file names/paths. Originally, the projects were all developed using Microsoft Visual C++ version 6, so the first step was to get them updated to a newer C++ IDE, which just happened to be Visual Studio 2008. Once the project files I needed were identified, these were checked in to the repository and tagged as the base version. Safe and ready to go!
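
I won’t list every rule here, but a minimal .gitignore for a Visual Studio 2008 C++ project could look something like this (the patterns cover the IDE’s standard per-user files and build output):

# Per-user IDE files that should not be versioned
*.ncb
*.suo
*.user
# Default build output directories
Debug/
Release/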

Automatically updating the projects from Visual C++ 6.0 projects to Visual Studio 2008 solutions went ahead problem free – the IDE handled it all. My job was then to rid myself of the unnecessary project files only used by the old IDE. The (newer) Visual Studio C++ compiler has grown a lot “smarter”, so a few syntax bugs had to be ironed out before the old code would build. There were also warnings about calls to C++ standard library functions that were now deemed unsafe; in most cases a safer alternative was suggested.
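
The warnings themselves aren’t listed in this post, but a typical example of this class is the classic strcpy, which the newer compiler flags as potentially unsafe, suggesting the Microsoft-specific strcpy_s instead:

#include <cstring>

void CopyName(char* dest, size_t destSize, const char* src) {
    // strcpy(dest, src);            // old call: flagged as potentially unsafe
    strcpy_s(dest, destSize, src);   // suggested replacement takes the buffer size
}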

Visual Studio 2008 is not unfamiliar to me, and those following this blog will know that I have used it for C# development, but never for C++. I was surprised how it lagged its C# cousin in functionality. Among other things there is little or no support for MSBuild, and the IDE has no refactoring functionality. The latter was a real letdown, since refactoring C++ proved to be notoriously more difficult than in any other modern language I have encountered. However, a few things made the update worth it: a better compiler and some IDE features like folders for structuring the source files. Visual Studio 2008 also has line numbering, which I’m pretty sure was missing in the Visual C++ 6 source code editor.

Documentation and getting familiar with the source code

By chance, it just so happened that I came across Doxygen when googling for free C++ tools. Since Doxygen can also be used for C#, Java and Python (untried, but according to the documentation), I thought it would be worth the time to take a closer look at this tool, and that proved to be a wise decision. Doxygen is brilliant! I have not used it for the other languages it supports, but I plan to for my next project. Its syntax may remind you of JavaDoc, but with the correct dependencies installed it can create useful illustrations for viewing code and dependencies. Also, when creating the documentation you can configure it to include the source code in the output. For me the output was HTML, and I actually found it easier to browse through the generated Doxygen documentation with my web browser than the source code itself using the IDE! Also useful is the fact that Doxygen can tell you which functions a particular function calls, and which functions call your function. This proved to be useful when looking for things to refactor while attempting to simplify the code.
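
As a taste of the syntax, a JavaDoc-style Doxygen comment on a C++ function looks something like this (the function is a made-up example, not from the actual code base):

/**
 * \brief Fills the given bucket with water.
 * \param bucket Pointer to the bucket to fill; must not be null.
 * \return true if the bucket was filled successfully.
 */
bool FillBucketWithWater(Bucket* bucket);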

Beautiful code

I had never really had the need for a beautifier before, but this time I wanted to make the source easier to read, and also replace tabs with spaces among a few other things. I found a beautifier named UniversalIndentGUI which works with more than one programming language, which I think is a plus. I fed all the source files to it and out popped “beautifully formatted” C++ source code. Voilà!

Unit testing and mocking framework

In Java development, unit testing is part of everyday life and has been for quite some time. However, where JUnit is the de facto standard for unit testing in Java, there is no single tool with similarly widespread adoption for C++ development. There are many tools available, but I had a hard time picking the one I thought had the most potential and the most active user community. In the end my choice fell on Google Test, which proved to be a useful tool. Along with Google Mock, a mocking framework for C++, it provides functionality for unit testing and creating mock objects.
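
None of the actual tests are shown here, but for reference, a minimal Google Test case looks something like this (the Add function is a made-up unit under test):

#include <gtest/gtest.h>

// A made-up unit under test.
int Add(int a, int b) { return a + b; }

// Each TEST macro registers an independent test case.
TEST(AddTest, AddsTwoPositiveNumbers) {
    EXPECT_EQ(5, Add(2, 3));
}

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();  // runs all registered tests; returns 0 on success
}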

I spent a lot of the project time trying to refactor the code to use these tools. Unfortunately the code was riddled with references to a third-party library, the Lotus Domino C++ API, which I could not get working with GTest. Therefore a lot of the work went into narrowing the usage of this library to only certain parts of the code. Although this was always in my plans, I never got quite that far and ran out of time, which was a shame. Refactoring can be time-consuming…

Project improvements

I added a simple readme file and change log to each project, and moved any comments referring to changes from the source code to the change log. I hope this will prove useful for giving future developers a head start, saving them from starting off with the source itself. With a simple attribute, Doxygen let me include the contents of each of these files in the generated Doxygen documentation, which I thought was a nice touch.
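
The post doesn’t record exactly which command was used, but one way to pull a text file verbatim into the generated documentation is Doxygen’s \verbinclude command (README.txt is a placeholder name; the file needs to be somewhere Doxygen can find it):

/**
 * \mainpage Project notes
 * \verbinclude README.txt
 */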

Lasting impressions

As I said earlier, I will miss working with C++. That said, I feel I can better appreciate the syntax improvements of languages such as C#, Java and Python. I think these languages better facilitate the creation of object-oriented code without the syntax getting in the way, so to speak. C++ does make you work harder, but supplies more power in return (if you need it!). It is useful to keep in mind that trying to write C++ code in a Java or C# style may well leave you with unwanted memory leaks. In C++ you use the new and delete operators to create and destroy object instances on the heap, whereas Java and C# provide garbage collection to handle the deletion of objects no longer being referenced, as you probably know. Take this example: a Java method for fetching a bucket of water could look something like this:

public Bucket createBucketOfWater() {
    Bucket b = new BucketImpl(); // allocated on the heap
    b.fill();
    return b;                    // caller never has to free this object
}

Inside the method a new instance of a Bucket class is created and initialised. The memory used for this object will be reclaimed by garbage collection once the myBucket reference to the object is invalidated. The caller does not need to think about this; it happens automatically.

// someObjectInstance creates and initialises the Bucket; the garbage collector
// reclaims the memory when the myBucket reference goes out of scope
Bucket myBucket = someObjectInstance.createBucketOfWater();
myBucket.doSomething();

Doing something similar in C++ may not be a good idea. You may end up with something like:

// create a new Bucket of water, return a pointer to the memory on the heap
Bucket* CreateBucketOfWater() {
    Bucket* b = new BucketImpl();
    b->FillWithWater();
    return b;
}

This code works, but it burdens the caller with deleting the memory used for the Bucket when done. If, for some reason, the caller should forget, the memory is lost once the pointer variable is invalidated. We then have a memory leak.

// create a new Bucket of water, return a pointer to the memory on the heap
Bucket* b = CreateBucketOfWater();
b->DoSomething();

// must remember to delete memory on heap
delete b;

A useful rule of thumb to remember is that objects should be created and deleted by the same part of the code, not spread around. In other words a function or method should not create an object on the heap and then leave it up to the caller to tidy up when done. So how do we avoid this scenario? A more suitable C++ approach could be something like this:

// function body not relevant
void FillBucketWithWater(Bucket*);

// create a Bucket instance ourselves and pass a pointer to the function;
// since we created the object, we also remember to delete it when done
Bucket* b = new BucketImpl();
FillBucketWithWater(b);
b->DoSomething();
delete b;

So to conclude, where in Java you would ask the method for a bucket of water, in C++ you would supply your own bucket and then use another method to fill it with water! When you are done with the bucket you are responsible for deleting it since you created it.

However, although this is a clear division of responsibilities, it does make me wonder how to properly create a factory method without burdening the caller with deleting the heap objects the factory creates.
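
One answer, not covered above, is to have the factory return a smart pointer, so that ownership is handed over explicitly and cleanup happens automatically. Here is a minimal sketch using std::auto_ptr, the standard option in the C++03/Visual Studio 2008 era (std::unique_ptr replaces it in modern C++), with stand-in Bucket types:

#include <memory>

// Stand-in types for the example.
struct Bucket {
    virtual ~Bucket() {}
    virtual void FillWithWater() = 0;
    virtual void DoSomething() = 0;
};
struct BucketImpl : Bucket {
    void FillWithWater() {}
    void DoSomething() {}
};

// The factory returns a smart pointer: ownership transfers to the caller,
// and the Bucket is deleted automatically when the pointer goes out of scope.
std::auto_ptr<Bucket> CreateBucketOfWater() {
    std::auto_ptr<Bucket> b(new BucketImpl());
    b->FillWithWater();
    return b;
}

int main() {
    std::auto_ptr<Bucket> bucket = CreateBucketOfWater();
    bucket->DoSomething();
    return 0;
}   // no explicit delete: ~auto_ptr frees the Bucket here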

Useful Visual Studio keyboard shortcut

I found myself doing a lot of refactoring today. I was working through some terrible code with multiple ifs, elses and everything else bar the kitchen sink!! Maybe there are tools that can help out with this kind of thing, but a simple keystroke came in very handy: pressing Ctrl+Å (Norwegian keyboard) jumps to the matching opening/closing parenthesis of a code block (check this link for other keyboards). In my case some of the methods are hundreds of lines long, so this saved me a fair bit of scrolling…

Remember me? I’m your old C++ code…

Just recently I was called upon to fix some code that I had written while working as a consultant “way back” in 1998. It’s not that long ago really, but a lifetime in software developer years. At the time I was fresh out of university so this was my first proper assignment as a professional. I was the sole developer on this project and the code was written in C++ using the Lotus Notes/Domino C++ API. This was kind of the norm back then.

In essence this old code was broken into four programs. They were all server based batch type jobs, run at scheduled intervals. Their common goal was to maintain the people groups in the Lotus Notes Name & Address book to reflect the structure of the organization (which was, and still is, a large organization). That means creating new groups, removing empty groups and adding/moving group members to and from groups. For a user to gain access to the Lotus Domino servers (for mail and other databases) you had to be a member of a group in the hierarchy, since only the top node (and thereby its children) was granted access to the servers. The groups were also used as mailing groups for parts of the organization. A mistake by the program would be kind of “critical” for a user, and it goes without saying that with so much application business logic you wouldn’t choose C++ for this type of task today.

I was amazed that these old programs were still running!! Sure, one program had been altered by someone else a few times some years ago, but the remaining three were running just as I last compiled them back in October of 1999. I thought that was kind of fun, and it also made me a bit proud. Of course, I believe there are two reasons why this code has run unaltered for so long:

  1. It was written properly and there was no need to alter it
  2. Nobody understood the source code and therefore dared not make a change

I choose to believe reason one. I guess that’s a shocker! However, I was actually able to confirm this when I started to work with the source code once again after all these years. It was tidy and easy to read, although I was amazed just how much of the C++ syntax was now strange to me after many years of programming Java, Python and C#. I would not have made all the same architectural choices that I did back then, but in general I was kind of impressed. There was also valid documentation, written by me, which I found very useful when trying to get back into the problem mindset. Not bad!! 🙂

It was strange to use Visual C++ 6.0 again, which was the IDE/compiler I worked with originally. I did actually try to upgrade the project to Visual Studio 2008, but the Visual Studio C++ compiler wouldn’t compile the original source code, so in the end I gave up trying. It was never part of the new assignment and the C++ syntax was just too unfamiliar to me. The customer didn’t care, so I stuck with VC++ for the time being. Maybe in the future, if I get the opportunity again, I will give it another attempt.

Of course, it goes without saying that the actual source code – as released in 1999 – was lost, but luckily enough I found a copy on a CD-ROM at home, which was a relief. It made the job a lot easier, but I guess it also shoots my reason one (above) to bits a little 🙂

Encouraging signs for web development on the Microsoft ASP.NET 4.0 platform

This really seems like a good time to be working with Microsoft web technologies. Not only has ASP.NET 4.0 just shipped along with a new version of Visual Studio, but there seems to be a focus on more openness and a willingness to adhere to web standards and co-operate with the community. Coming from an open-source world this is a familiar mindset to me, and although I have only recently crossed over to the Microsoft platform, the idea of community driven development still appeals to me. I just downloaded the 2010 Express versions of Microsoft Visual Web Developer and Microsoft Visual C# and my initial impressions are good.

I prefer doing my client-side scripting using jQuery and have done so successfully for a few years now. Followers of this blog will know that I recently completed my ASP.NET 3.5 certification. What I found a little annoying when studying for the exam was having to delve into the details of the Microsoft AJAX library knowing full well that I would probably never use any of it. Yesterday I came across Stephen Walther’s article regarding Microsoft’s contribution to the jQuery project. I was encouraged to read that Microsoft will be further shifting their investment towards contributing to the jQuery project and moving away from Microsoft client-side Ajax. However, although I will probably never use the Microsoft AJAX library in any of my projects, I consider it a benefit that I am aware of the “old ways” of doing client browser scripting from an ASP.NET perspective. I’m sure there will be plenty of code that will need to be refactored and upgraded to jQuery in years to come :-).

Another encouraging project that seems very interesting is Microsoft’s ASP.NET MVC. The ASP.NET MVC templates are now part of the Visual Studio 2010 IDE and, from what I have been reading, this will be the preferred way forward for web development on the Microsoft platform. Coming from an open source, Java based web development world, this is music to my ears and something I am looking forward to learning more about in the months ahead.

With the release of ASP.NET 4.0, my understanding is that there has been a focus on getting the generated ASP.NET XHTML to adhere to web standards, thereby simplifying CSS styling. This applies to both MVC and WebForms development. I think this is good news, since there have been a few times in the last few months when my jaw has dropped to the floor on viewing some of the XHTML source code generated by the ASP.NET 3.5 controls – especially the data bound controls. In today’s world of correct web semantics I’m glad this is finally on the agenda and I look forward to reaping the benefits in the future.

Visual Studio 2008 default keyboard shortcuts and customisation

There is an old “saying” in the world of software development that you should learn to “use a single text editor well” in order to obtain maximum efficiency [see The Pragmatic Programmer]. In the case of Microsoft development the only real choice is Visual Studio and that’s what I’ve focused my energy on the last couple of months.

I have forced myself to find, use and hopefully remember keyboard shortcuts for Visual Studio 2008. On my project I’m in the process of getting familiar with some legacy code so my focus has been on things that help me understand and view code. There are a lot of shortcuts in Visual Studio 2008, but here are a few I find myself using a bit:

Comments:

  • Comment code: ‘Ctrl’+’K’, ‘Ctrl’+’C’
  • Uncomment code: ‘Ctrl’+’K’, ‘Ctrl’+’U’

Bookmarks:

  • Set/unset bookmark on line: ‘Ctrl’+’K’, ‘Ctrl’+’K’
  • Go to next bookmark: ‘Ctrl’+’K’, ‘Ctrl’+’N’
  • Go to previous bookmark: ‘Ctrl’+’K’, ‘Ctrl’+’P’

Navigation:

  • Navigate backward: ‘Ctrl’+’-’
  • Navigate forward: ‘Ctrl’+’Shift’+’-’
  • Go to line: ‘Ctrl’+’G’
  • Go to definition: ‘F12’
  • Go to declaration: ‘Ctrl’+’F12’
  • Find all references: ‘Shift’+’F12’
  • Find symbol: ‘Alt’+’F12’

Collapsing/Expanding code regions:

  • Expand all regions: ‘Ctrl’+’M’, ‘Ctrl’+’L’
  • Collapse all regions: ‘Ctrl’+’M’, ‘Ctrl’+’O’
  • Collapse / Expand current region: ‘Ctrl’+’M’, ‘Ctrl’+’M’

Breakpoints:

  • Set/unset breakpoint: ‘F9’
  • Enable/disable breakpoint: ‘Ctrl’+’F9’
  • Show breakpoints window: ‘Ctrl’+’Alt’+’B’

Stepping through code:

  • Step over: ‘F10’
  • Step into: ‘F11’
  • Step out: ‘Shift’+’F11’
  • Run to cursor: ‘Ctrl’+’F10’

I’m not sure if these are global shortcuts for Visual Studio 2008 IDE irrespective of development language, but they work for me when coding ASP.NET using C#.

You can also download a nice PDF of the Microsoft Visual C# default keybindings here. However, I didn’t get all of them to work “out of the box”.

Customising the editor’s keybindings

If the default keybindings don’t cover your needs then you can customise the editor to your liking. The “Tools | Customize” menu opens the customize dialog box, which in turn lets you customise the default keybindings.

I found myself building my project often, but found no default keybinding for it apart from the one that applied to the full solution, which was too much in my case. According to the PDF mentioned above, the ‘Shift’+’F6’ keybinding should have done this, but in my case it wasn’t assigned. The screenshot below illustrates what I did to set it up to suit my needs.

The customize keyboard options