Archive for the ‘Development’ Category

Ordering pizza online – baking the code that made it happen

For the past 2-3 months we have been working hard to deliver (no pun intended) an online ordering system for Debonairs. Basically, we were tasked with producing a website and mobi site which would enable Debonairs’ customers to order a pizza using their favourite internet browser. The sites went live a few weeks ago and I thought it would be useful to put together a short blog post describing the solution from a technical perspective.

Debonairs Pizza

Both the website and mobi site were developed using Microsoft’s ASP .NET MVC 2.0 framework with some slight modifications to the vanilla setup. We chose to use Spark as our view engine. Spark’s syntax is declarative and resembles HTML, which enables us to integrate code seamlessly (i.e. our views don’t evolve into spaghetti code). This is not the sole advantage of using Spark; I would recommend taking a read through the documentation to discover the more important benefits. This is not the first project we’ve integrated Spark with, and we continue to enjoy working with it.

In terms of client side scripting, we used jQuery as our JavaScript library. Combres (an open source project hosted on Codeplex) was implemented at the backend to ensure that resources such as JavaScript and CSS were minified and compressed before being served to a customer’s browser. Furthermore, Combres enables us to easily update our JavaScript/CSS references which ensures that the latest version of the relevant resources are always loaded.

For the mobi site, we stuck with MVC but ensured that the markup produced by the server was XHTML-MP compliant. Given the nature of the mobile landscape, we were required to support a large number of internet-enabled devices ranging from smart phones (such as the iPhone) to the hugely popular Samsung E250. For Mobile Researcher we use a library called Device Atlas which enables us to detect the type of device accessing the site and provide information regarding key capabilities. Given our experience with the product, we used the same library on the Debonairs mobi site. Images are automatically scaled and cached for devices accessing the site, which ensures the site’s look and feel remains consistent no matter what resolution the relevant device sports. In time, we may begin enhancing the site for specific devices such as iPhones and BlackBerrys.

For persistence we chose SQL Server 2008 R2 Web Edition. We have extensive experience working with both Microsoft SQL Server and MySQL, however, we wanted to utilise Windows Workflow 4.0 as part of the solution and we didn’t have time to look at writing a custom MySQL persistence service for the workflow engine. In any event, our selected ORM (LLBLGEN) enables us to target any number of database technologies so theoretically, we could switch over to MySQL in the long term without too much work. From a code perspective, we implemented the repository pattern using custom LLBLGEN templates. At the repository level, queries were developed using LLBLGEN’s LINQ engine.

Windows Workflow 4.0 was used to handle longer running processes such as those invoked when placing an order. Once an order is validated, the order is serialised into XML and posted to a third-party POS gateway which performs further validation and propagates the order down to the relevant store. There are a number of failure points involved in this process, hence the use of Windows Workflow. If the order is not successful (whether it reached the third-party gateway or not) an SMS is sent to the relevant customer informing them of the failure. As you can imagine, there are a number of issues that could cause this to happen, including network connectivity failures to the third-party gateway, connectivity problems between the stores and the gateway etc. We wrote a custom Workflow Application Manager that manages persistence and execution of Workflows, which means the system is resilient against system reboots etc.

A Windows Service was developed to host the workflow runtime and provide the environment for workflow execution. Besides order processing, the service handles menu synchronisation, store connectivity tracking and communication (email/SMS) services. To facilitate communication between the web/mobi sites and the Windows Service, we used NServiceBus, which leverages MSMQ. NServiceBus enables us to deliver messages to the Windows Service in a transactional context, which is obviously critical.

As far as unit testing goes, we used a number of frameworks and libraries to facilitate the unit testing process. For the core testing framework, we used NUnit. For our mocking library we chose Moq. Moq simplifies the mocking process considerably as it exposes a fluent API. If you haven’t heard of Moq I seriously recommend checking it out.

Right now both sites are humming away on IIS 7.5. There is a lot more to come, with a number of key features still in development/testing, the main one being the ability to pay for an order using your credit card. All in all, the implementation process has been an exciting experience which has allowed us to leverage a number of new and useful technologies. Without resorting to butchering the idiom ‘the proof is in the pudding’, please think about placing your next order for a pizza using one of the sites mentioned – we would really appreciate your feedback.

If you have any questions, comments or suggestions, please feel free to give me a shout.


Workflow 4.0 – Custom activities not appearing in the VS2010 toolbox

I thought this particular issue warranted a blog post because it took me a good couple of hours to track down the problem. Essentially, as the title suggests, my custom Workflow 4.0 activities were not appearing in the Workflow 4.0 toolbox on a few projects I am working on.

If I created a new Workflow 4.0 project in Visual Studio 2010, everything worked as it should. However, if I integrated Workflow 4.0 with an existing project I ran into problems. After numerous Google/Bing searches I found a few forum/Stack Overflow posts about the problem, but usually in the context of the VS2010 Beta.

It turned out that the problem in my case was that I had two projects in my solution whose project files (*.csproj) were in the same directory. VS2010 did not like that for some reason. After moving the two projects into separate folders, everything worked! Absolutely bizarre.

If you are interested, the reason I had two project files in a single directory was because it’s the default configuration when using LLBLGEN (adapter mode). It generates two projects, one database specific and the other generic. It is possible to get LLBLGEN to generate the projects in separate folders, you can find the instructions here.

Feel free to play the sad trombone if you wasted as much time as I did.


Backup your Subversion repository offsite (Windows Guide)

If you work in a development environment, there’s a good chance you are using Subversion as your code repository of choice. If that’s the case, the usual suggestion for backing up is to dump the repositories onto a DVD or external drive to be stored offsite. We have been doing this for a while and have found the process painful (to say the least!). If you run Subversion and don’t have your data backed up frequently offsite, you might find yourself hitting the panic button sooner or later!

Near the end of last year I started looking at offsite backup options that didn’t require user intervention, and was very excited to discover the svnsync command. The benefit of svnsync is that only new revisions are mirrored, rather than the full repository each time. This is absolutely critical if you have a repository that is quite active. Needless to say, I decided to forge ahead and try my hand at implementing automated scripts to take care of backing up our repositories online utilising the svnsync tool. For reference, I have documented the setup process below.

It’s important to note that this guide assumes you are working in a Windows environment and that you have access to a server offsite. I have referenced a few articles and other blog posts I discovered along the way to help you if you are working in a Linux environment.

Step 1 – Setup Subversion on your remote server

Create a Windows user account on your remote server which you will use to remotely access the backup repository from your main Subversion server. Take note of the account name and password you use. Once you have created the account, install VisualSVN on the server where you want to host your mirrored repositories, making sure you select Windows Authentication on the security dialog during the installation process. Once completed, verify that Subversion is running correctly on the remote machine by opening the VisualSVN manager and clicking on the repository address displayed. Now ensure you can access the repository from your host Subversion server. If your backup server’s name is not addressable from your host server, use the remote server’s IP address, or simply add an entry to your DNS server or Windows hosts file. If you opted for a DNS entry, you should be able to ping your backup server using the server’s name. Try accessing the repository again; when prompted for a username and password, use the credentials of the user account you created.
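If you go the hosts-file route, the entry is a single line mapping the backup server’s address to a name. The sketch below (the address and server name are placeholders, and it writes to a local sample file rather than the real hosts file) shows the shape of the entry:

```shell
# Hypothetical entry for C:\Windows\System32\drivers\etc\hosts (or /etc/hosts).
# The address and the name "svn-backup" are placeholders.
echo "192.0.2.10   svn-backup" >> hosts.sample
cat hosts.sample
```

Once the real entry is in place, pinging the name (e.g. `ping svn-backup`) should resolve, and the mirrored repository should be reachable via that name from the host server.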

Step 2 – Configure permissions

Before setting up the repositories etc. we need to define which users have access rights to the backup repository. To configure this, open VisualSVN manager on the remote server, right-click on the Repositories folder and choose Properties from the drop-down menu. Revoke all access for the BUILTIN\Users role and then add the user account you set up in Step 1. Ensure this user has full Read/Write access.

Step 3 – Create the destination repository

Now that you have fully configured the Subversion server hosted on your remote server, we can start setting up the synchronisation process. To do this, we need to ensure that we have a destination repository to mirror your existing repository to (if you have more than one, you need to create a destination repository for each repository you want to mirror). To keep things simple, I gave the destination repository the same name as the source repository. Take note that any repository you create on the destination server should be empty (i.e. do not tick the “Create default structure” checkbox when creating the repository).

Step 4 – Configure the repository

The next step involves setting the Pre-revision Property Change Hook, and it is an important one: svnsync works by writing revision properties on the mirror, and Subversion rejects revision property changes unless this hook exists and exits successfully. Right click on the repository you created on the destination server and select All Tasks > Manage Hooks. Click on the “Pre-revision property change hook” entry and click Edit. Enter a few blank lines (the hook just needs to exist), then click OK and Apply.
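If you prefer to create the hook by hand rather than through the VisualSVN dialog, it is just a script in the repository’s hooks folder that exits with status 0. A minimal sketch (the file name follows Subversion’s hook-naming convention; the hooks folder path on your server will differ):

```shell
# Write a minimal pre-revprop-change hook. On a Windows/VisualSVN server this
# would live as pre-revprop-change.bat inside the repository's hooks folder.
# Exiting 0 permits the revision property changes that svnsync performs.
cat > pre-revprop-change.bat <<'EOF'
@echo off
exit /b 0
EOF
cat pre-revprop-change.bat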

Step 5 – Configure SSL

We need to configure the host server (which acts as the svnsync client) to accept the SSL certificate generated by the VisualSVN installer. If you wish to use a properly signed certificate, or already have one, follow this guide and ignore the rest of this step. If you want to continue using the auto-generated certificate, follow Mark Wilson’s guide on how to trust the default certificate.

Step 6 – Initialise your repositories for synchronisation

Before you can synchronise your repository, you need to initialise it. To do this, run the following command on the host server (note that you need to replace the keys in CAPS with the relevant repository URLs and credentials – svnsync works with repository URLs rather than plain file system paths):

svnsync init PATH_TO_REMOTE_REPO PATH_TO_LOCAL_REPO --sync-username REMOTE_USERNAME --sync-password REMOTE_PASSWORD --source-username HOST_USERNAME --source-password HOST_PASSWORD
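With concrete names substituted it looks something like the sketch below. Every URL, username and password is a placeholder; the script only prints the command so you can review it before running it for real:

```shell
# Placeholder names throughout: the mirror URL points at the VisualSVN server
# from Step 1, the source is the repository on the host server.
REMOTE_REPO="https://svn-backup/svn/projectx"
LOCAL_REPO="file:///C:/Repositories/projectx"
# Print the command for review rather than executing it here:
echo "svnsync init $REMOTE_REPO $LOCAL_REPO --sync-username backupuser --sync-password secret"
```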

Step 7 – Initialise remote repositories from a previous backup

Only run through this step if you have a relatively large repository and don’t want to mirror it (the sync process is quite slow) from revision 0 all the way to revision xxxx. If you are running through these steps for a brand new repository you want mirrored, ignore this step. Also, please take note that if you are using PowerShell to execute these scripts, “>” is equivalent to | Out-File -encoding Unicode (thanks Keith). If you aren’t careful, you might end up with the Malformed dumpfile header error. To be safe, use the standard command prompt.

Dump your existing repository on your host machine by running the following script:

svnadmin dump "FILE_PATH_TO_REPO" > "REPO_NAME.db"

Once the repository dump has completed, upload it to your backup server and then run the following script on the backup/mirror server:

svnadmin load "FILE_PATH_TO_BACKUP_REPO" < "REPO_NAME.db"

Now, the next step is critical. You need to update the svn:sync-last-merged-rev property on the remote repository to the current revision number of the source repository (you can get this information by running “svn info REPO_PATH”). To do this, run the following script:

svn propset svn:sync-last-merged-rev --revprop -r0 REV_NUMBER "PATH_TO_REMOTE_REPO"
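Putting Step 7 together, the whole seed-from-a-dump sequence looks like the sketch below. The repository name, paths and revision number are all placeholders, and the script only prints the commands so you can review them and run each one on the right machine:

```shell
# Placeholders throughout; LAST_REV is the revision reported by "svn info"
# against the source repository.
REPO=projectx
LAST_REV=1482

echo "svnadmin dump \"C:/Repositories/$REPO\" > \"$REPO.db\""   # on the host server
echo "svnadmin load \"C:/Repositories/$REPO\" < \"$REPO.db\""   # on the backup server
echo "svn propset svn:sync-last-merged-rev --revprop -r0 $LAST_REV \"https://svn-backup/svn/$REPO\""
```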

Step 8 – Synchronise!

Basically you are done; you simply need to run the following script on a frequent basis (best set up as a scheduled task in Windows):

svnsync sync PATH_TO_REMOTE_REPO --sync-username REMOTE_USERNAME --sync-password REMOTE_PASSWORD --source-username HOST_USERNAME --source-password HOST_PASSWORD
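If you mirror several repositories, the scheduled script can simply loop over them. A sketch (repository names, URL and credentials are placeholders; it prints the commands rather than running them):

```shell
# Placeholder names throughout: one svnsync call per mirrored repository.
BACKUP="https://svn-backup/svn"
for repo in projectx projecty projectz; do
  echo "svnsync sync $BACKUP/$repo --non-interactive --sync-username backupuser --sync-password secret"
done
```

Wrap the real calls in a .cmd file and register it with the Windows Task Scheduler (something along the lines of `schtasks /Create /SC HOURLY /TN SvnMirror /TR C:\Scripts\sync-repos.cmd`).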

Hope you found this useful. I might follow this post up with another entry on the steps I took to set up an automated script that emails me when a repository on the host machine is missing its mirrored counterpart. This is really helpful for detecting cases where a repository was set up locally but not configured for synchronisation; the ability to automatically generate the relevant scripts is quite useful :)



Implementing the Repository Pattern with LLBLGEN

This post has been sitting in draft for a while, but I finally managed to get round to completing it. I started it back in 2009. Apologies for the delay :)

It was the start of 2009, and I was investigating ORM tools for a new project we were working on at Clyral. We had been using LINQ to SQL as our core database access layer for some time but felt we had outgrown it and were looking for something a bit more powerful and flexible. It didn’t take long for us to discover LLBLGEN. Whilst not the most intuitive acronym for an O/R mapping framework, LLBLGEN (Lower Level Business Logic Layer Generator) impressed the team from the outset. After downloading the demo version and playing around with it on a test project we committed to purchasing it and haven’t looked back since.

We started off using the Self Servicing model of the framework as it was earmarked for beginners; in time though, we began to see that we would get more mileage out of the adapter model and adopted it as the de facto standard for projects. It was at this point we began looking at ways to implement the repository pattern, which simplifies the testing process and ensures the implementation (which is often technology specific) does not get mangled with the domain model. To achieve this we needed every entity to implement an interface (or contract if you will). The problem with this, of course, is that generic variance was not supported in C# at the time. This posed a bit of a problem because we still wanted the full representation of a given entity graph to be available through our defined interfaces. To get around this, we needed to update the LLBLGEN templates to allow us to inject our own custom implementation of collections which would match our interface definitions. I have provided a few example snippets to illustrate what I am talking about. Essentially, we added properties wrapping a Todos collection (a property of a TodoList entity) such as defined below:

public virtual EntityCollection<TodoEntity> Todos
{
	get
	{
		if (_todos == null)
		{
			_todos = new EntityCollection<TodoEntity>(EntityFactoryCache2.GetEntityFactory(typeof(TodoEntityFactory)));
			_todos.SetContainingEntityInfo(this, "Todolists");
		}
		return _todos;
	}
}

and added the code below to support our interface definition:

public EntityList<ITodoEntity, TodoEntity> TodosCollection
{
	get
	{
		if (_TodosCollection == null)
			_TodosCollection = new EntityList<ITodoEntity, TodoEntity>(this.Todos);
		return _TodosCollection;
	}
}

private EntityList<ITodoEntity, TodoEntity> _TodosCollection;

where “EntityList” is a custom wrapper we wrote to get around the generic variance issue (note that EntityList understands that a TodoEntity is an implementation of ITodoEntity). This allowed us to define our entity contracts as follows:

    /// <summary>
    /// Interface for the entity 'TodoList'.
    /// </summary>
	public partial interface ITodoListEntity
	{
		EntityList<ITodoEntity, TodoEntity> TodosCollection {get;}

		System.Int32 Id {get;set;}
		System.String Title {get;set;}
		System.String Description {get;set;}
		System.Int32 ProjectId {get;set;}
		System.Int16 Position {get;set;}
		System.Boolean Billable {get;set;}
		System.DateTime CreatedOn {get;set;}
		System.DateTime ModifiedOn {get;set;}
		System.String CreatedBy {get;set;}
		System.String ModifiedBy {get;set;}
		System.Guid CreatedUserId {get;set;}
		System.Guid ModifiedUserId {get;set;}
	}

As you can see, a convention was adopted whereby the original list’s name was simply extended with the word “Collection”. After modifying the adapter’s templates and adding templates to generate the entity contracts, everything fell into place and we had our repository pattern implemented. Our repository definitions ensured that only interfaces were passed around (of course implemented using LLBLGEN’s entities), which in turn ensured that our UI (or business logic) knew nothing about the underlying implementation. One benefit of doing this is that the chaps working on the UI never had to deal with the copious number of properties and methods that hang off an LLBLGEN entity by default. Of course, these properties and methods are useful in some cases and can still be used within the repository itself.

I have attached a zip file to this post with the implementation of the EntityList class as well as the templates that we modified and added to make this all happen. Let me know what you think, any comments or suggestions regarding the implementation are certainly welcome!


Templates and supporting files

C# HTML Diff Algorithm

I have finally launched my first Codeplex project, very exciting :) I was inspired to find some way of implementing an HTML difference viewer in an internal application I was developing. Essentially, I was looking for a way to take two blocks of HTML and compare them in a way that highlights the differences. This is extremely useful for CMS-type systems where WYSIWYG/Textile/Wiki markup is used to populate content. In most web systems where content is authored dynamically, a history of the content is tracked over time. When collaborating with a few people, this feature is critically important. What makes it extremely useful is the capability to detect what has changed between versions. This post focuses on a project I have launched to do exactly that – track the difference between two versions of HTML markup.

The application I was building was developed on ASP .NET MVC (C#), so naturally I was looking for some C# code I could use to implement the difference algorithm. In my search, I could not find any libraries that were worth adopting. I did come across one or two command line utilities but nothing spectacular. I widened my search to other languages and came across a neat implementation in Ruby. The algorithm was developed by Nathan Herald, who generously made the code available to everyone via the MIT license.

So, I had the algorithm I was looking for, but I didn’t speak Ruby! This was an excellent opportunity to roll up my sleeves and learn some Ruby, so I fired up my browser, downloaded the Windows one-click installer and got a simple environment up and running. After toying with the code for a bit, scratching my head at one or two alien Ruby constructs, I got the gist of how things worked. I fired up Visual Studio, created a new project and began the process of porting the algorithm. I must admit that the process was relatively painless and I got something working in a few hours. It took about another hour or two to iron out some bugs I picked up, but essentially, in a relatively short space of time, I had the C# diff library that I was originally looking for! Below is a demo of how it is used, followed by one or two screenshots demonstrating the functionality when rendered in your browser.

            string oldText = @"<p>This is some sample text to demonstrate the capability of the <strong>HTML diff tool</strong>.</p>
                                <p>It is based on the Ruby implementation found <a href=''>here</a>. Note how the link has no tooltip</p>
                                <table cellpadding='0' cellspacing='0'>
                                <tr><td>Some sample text</td><td>Some sample value</td></tr>
                                <tr><td>Data 1 (this row will be removed)</td><td>Data 2</td></tr>
                                </table>";

            string newText = @"<p>This is some sample text to demonstrate the awesome capabilities of the <strong>HTML diff tool</strong>.</p><br/><br/>Extra spacing here that was not here before.
                                <p>It is based on the Ruby implementation found <a title='Cool tooltip' href=''>here</a>. Note how the link has a tooltip now and the HTML diff algorithm has preserved formatting.</p>
                                <table cellpadding='0' cellspacing='0'>
                                <tr><td>Some sample <strong>bold text</strong></td><td>Some sample value</td></tr>
                                </table>";

            HtmlDiff diffHelper = new HtmlDiff(oldText, newText);
            string diffOutput = diffHelper.Build();

Using the sample web application provided with the project in Codeplex, the following is rendered based on the code above:



Updated HTML

HTML diff output

You can see that the algorithm as originally developed takes care of the nasty HTML parsing to figure out how to highlight the differences. The changes are marked up using “ins” and “del” tags. You can easily style these tags as I have done. The CSS below is responsible for rendering the differences as per the example.

ins {
	background-color: #cfc;
	text-decoration: none;
}

del {
	color: #999;
}
I hope you find the library useful. I wish I had more time to add tests and more documentation to the Codeplex project, but for now I think the implementation is reasonably solid and easy to follow. If you spot any bugs, let me know and I’ll try to attend to them. Given that I am not responsible for the original Ruby implementation, it might be a bit tricky to solve some of the fundamental issues with the algorithm, but I will certainly have a crack at it since I have quite a good understanding of how it works after porting it.

Link to C# implementation:
Link to Ruby implementation: