I was bored yesterday and haven’t had time to write much code recently; the result is Fogshot – a FogBugz plugin for Greenshot. Greenshot is an awesome screenshot tool which allows you to annotate, highlight and obfuscate your screenshots. It’s open source and free.

From what I can tell, the latest builds of Greenshot don’t seem to stem from code in the live SVN repo for the project. With no documentation on how to develop a plugin, I relied heavily on ILSpy to get the job done. In addition, I used Fiddler to reverse engineer how FogCreek’s screen capture utility uploads screenshots to new and existing cases. FUN!
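For anyone curious what that sniffing boils down to: an upload is essentially two calls against the FogBugz XML API – a logon to get a token, then a multipart POST to create (or edit) a case with the image attached. Here is a rough C# sketch; the host, credentials and title are placeholders, and while the field names follow the documented API (cmd=logon, cmd=new, File1/nFileCount), treat this as an approximation of what Fogshot does rather than a reference.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;

// Rough sketch of attaching a screenshot to a new FogBugz case via the
// XML API (api.asp). Host, credentials and title are placeholders;
// verify the field names against your FogBugz version before relying on this.
class FogBugzUploadSketch
{
    const string ApiUrl = "https://example.fogbugz.com/api.asp";

    static async Task Main()
    {
        using var http = new HttpClient();

        // 1. Log on to obtain an API token.
        var logonXml = await http.GetStringAsync(
            ApiUrl + "?cmd=logon&email=user@example.com&password=secret");
        var token = XDocument.Parse(logonXml).Root.Element("token").Value;

        // 2. Create a new case with the screenshot attached (cmd=edit with
        //    an ixBug parameter attaches to an existing case instead).
        using var form = new MultipartFormDataContent
        {
            { new StringContent("new"), "cmd" },
            { new StringContent(token), "token" },
            { new StringContent("Screenshot from Greenshot"), "sTitle" },
            { new StringContent("1"), "nFileCount" },
            { new ByteArrayContent(File.ReadAllBytes("screenshot.png")),
              "File1", "screenshot.png" }
        };
        var response = await http.PostAsync(ApiUrl, form);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}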

Configuration

There is a configuration screen for your FogBugz account settings which you can get to by clicking Edit > Preferences > Plugins. Don’t stress, your password is encrypted in the Greenshot INI file. The screen looks like this:

[Screenshot: the FogBugz account settings screen]
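For the curious, these settings land in Greenshot’s INI file alongside its other sections. Something along these lines – the section and key names here are illustrative guesses, not the actual ones the plugin uses:

; Hypothetical example of what the plugin's section might look like.
; Key names are illustrative; the password value is stored encrypted,
; not in plain text.
[Fogshot]
Url=https://example.fogbugz.com
Email=user@example.com
Password=AQAAANCMnd8BFdERjHoAwE...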

Take a screenshot (hit PrintScreen):

[Screenshot: capturing a screenshot with Greenshot]

Click File > Upload to FogBugz:

[Screenshot: the File > Upload to FogBugz menu option]

Choose whether to generate a new case or attach to an existing case:

[Screenshot: choosing between a new case and an existing case]

Manage the case in FogBugz!

[Screenshot: the case with the attached screenshot in FogBugz]

And that’s all there is to it. I’ve uploaded the source code to GitHub (https://github.com/Rohland/Fogshot) if you’re interested in adding new functionality or developing your own Greenshot plugin.

You can grab the latest version to install from here: https://github.com/Rohland/Fogshot/downloads. To install, simply extract the archive into your Greenshot installation directory.

Caution:

I’ve only tested this plugin with the following versions of Greenshot:

  • Greenshot-INSTALLER-UNSTABLE-0.8.1.1486.exe
  • Greenshot-RC7-INSTALLER-0.8.1.1427.exe

Which you can grab from here: http://getgreenshot.org/version-history/

Hope this is useful.

FogBugz – Email edge detection


A while ago I was involved in a project at Clyral where we developed a web-based support ticketing system (similar to Zendesk). One of the things we focused on was dealing with the detection of new content in an email conversation. Typically, emails would bounce back and forth between a customer and an agent, and we wanted to ensure that our web interface wasn’t cluttered with the full email thread for every reply the customer sent in (the original conversation was always included in outbound emails for convenience).

When we looked at implementing the mechanism which would extract only new content from an inbound email, we were surprised that there was no real standard for demarcating the beginning/end of an email in an email conversation. It seemed obvious that it would be useful for email clients to stick to a common standard for formatting existing content when replying to an email, if only to be able to separate the conversation view when viewing an email after a number of exchanges. Off the top of my head, both Outlook and Gmail support this functionality. If you use one of the newer versions of Microsoft Outlook, you’ll notice that the client is capable of detecting the boundaries between emails (take a peek at the screenshot below). The Gmail web client also provides a mechanism to collapse an email conversation into a logical group of messages by detecting new content in an email conversation.

[Screenshot: Outlook detecting the boundaries between emails in a conversation]

While there didn’t seem to be a standard way of achieving this, we managed to get a decent solution in place for our ticketing system. Since we had control over the format of the outbound email, we could standardise it such that we could easily detect new content when a customer responded. This was not a foolproof solution, so in the end we implemented a relatively simple heuristic method that could deal with most of the common mail clients out there. Implementing this feature made it far easier to manage conversations with customers. Obviously, we always kept the original email as customers would sometimes reply with changes to the original email content (such as answering questions inline).
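The production heuristic itself isn’t something I can publish here, but the gist is easy to sketch: scan the inbound message for the first line that looks like the start of the quoted conversation, and cut there. A simplified C# illustration – the delimiter list below is my own, not our production set:

using System;
using System.Text.RegularExpressions;

// Simplified illustration of an email edge heuristic: cut an inbound
// message at the first line that looks like the start of the quoted
// conversation. The patterns cover common clients (Outlook, Gmail);
// a production list would be more extensive.
static class EmailEdgeDetector
{
    static readonly Regex[] EdgePatterns =
    {
        new Regex(@"^From: .+$", RegexOptions.Multiline),                 // Outlook reply header
        new Regex(@"^-+ ?Original Message ?-+$", RegexOptions.Multiline), // older Outlook
        new Regex(@"^On .+ wrote:$", RegexOptions.Multiline),             // Gmail-style attribution
        new Regex(@"^>", RegexOptions.Multiline)                          // plain quoted lines
    };

    public static string ExtractNewContent(string body)
    {
        var cut = body.Length;
        foreach (var pattern in EdgePatterns)
        {
            var match = pattern.Match(body);
            if (match.Success && match.Index < cut)
                cut = match.Index;
        }
        return body.Substring(0, cut).TrimEnd();
    }
}

Something like ExtractNewContent would run over the plain-text part of the inbound message, with both the trimmed and the original content stored – as mentioned above, customers sometimes answer questions inline within the quoted thread.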

Last year, we started using FogBugz as our general case and project management system. What we discovered was that FogBugz isn’t that smart at managing email conversations. Outbound emails do not include the full email conversation, and if you don’t use the web interface to respond, the boundary between the new content and previous communication isn’t detected at all. This usually leads to a very cluttered case view where you need to scroll over copious amounts of duplicate text.

Thankfully, FogBugz has a nifty feature which allows you to customise the front end with JavaScript and CSS. To deal with this problem, I implemented a very simple JavaScript customisation which scans over the content in a case and hides any email text which is superfluous. You always have the capability to toggle the content if you need to inspect it. I’ve included the code for the customisation below. We use Microsoft Outlook (and most of our clients do as well), so the solution works well for us. Replies from Gmail should be supported too. The code simply scans each email for a new line starting with ‘From: ’ and splits the email there. It’s not rocket science.

$(function(){
    $('.emailBody').each(function(index, element){
        var body = $(element);
        // Find the edge: a new line starting with 'From: ' marks the
        // beginning of the quoted conversation.
        var edgeIndex = body.html().indexOf('\nFrom: ');
        if (edgeIndex == -1){
            return;
        }
        var mainBody = body.html().substring(0, edgeIndex);
        var quotedBody = body.html().substring(edgeIndex);
        // Show only the new content, and tuck the quoted thread away
        // behind a 'show quoted text' toggle.
        body.html(mainBody);
        body.append('<div class="showQuote" style="padding-top: 5px;"><a class="dotted" onclick="$(this).parent().parent().find(\'.emailThreadBody\').toggle();" href="javascript:void(0);">- show quoted text -</a></div>');
        body.append('<div class="emailThreadBody" style="display: none;"></div>');
        body.find('.emailThreadBody').html(quotedBody);
    });
});

We’ve recently started using FogBugz to track the work we do. It’s early days, but we’re hoping FogBugz’s Evidence Based Scheduling (EBS) feature will be able to assist us in scheduling our sprints better. One thing I found particularly annoying when we first started using the product is that there is no easy way to customise the email notifications you receive. I believe it is possible to update the template when using the self-hosted option, but that requires fiddling with their code. This is the typical notification one receives when someone edits a case that’s assigned to you (or when someone notifies you of a change to a case you’re not necessarily assigned to):

[Screenshot: the default plain-text FogBugz notification email]

The notification has all the information you need; however, it’s pretty hard to pick out the message someone may have included. I’m not a fan of plain-text emails. When you start receiving a number of these notifications every day, it gets a bit annoying trying to scroll and find the meat of the notification. On Saturday I spent a few hours developing a plugin for Outlook 2010 that could assist with the issue. While this was not my only option (I could have developed a FogBugz plugin), I was intrigued to see if it was indeed possible to update the format of a message in Outlook. As it turns out, it’s pretty easy.
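The attached source has the full story, but the core of the approach is small enough to sketch. In a VSTO add-in you can hook the NewInspector event, detect a FogBugz notification and swap the plain-text body for generated HTML. The helper names below (LooksLikeFogBugzNotification, BuildHtml, CasePriority) are crude placeholders for the parsing and templating in the attached code, not its real method names:

using Outlook = Microsoft.Office.Interop.Outlook;

// Sketch of the core idea: when an email is opened in Outlook, detect a
// FogBugz notification and re-render its plain-text body as HTML.
public partial class ThisAddIn
{
    private Outlook.Inspectors inspectors;

    // Wired up by the VSTO designer's InternalStartup.
    private void ThisAddIn_Startup(object sender, System.EventArgs e)
    {
        inspectors = Application.Inspectors;
        inspectors.NewInspector += OnNewInspector;
    }

    private void OnNewInspector(Outlook.Inspector inspector)
    {
        if (!(inspector.CurrentItem is Outlook.MailItem mail)
            || !LooksLikeFogBugzNotification(mail))
            return;

        var plainText = mail.Body;          // capture before re-rendering
        mail.HTMLBody = BuildHtml(plainText);
        // Flag urgent cases (priority 1 or 2) as high importance.
        if (CasePriority(plainText) <= 2)
            mail.Importance = Outlook.OlImportance.olImportanceHigh;
    }

    private static bool LooksLikeFogBugzNotification(Outlook.MailItem mail)
        => mail.Subject != null && mail.Subject.Contains("(Case");  // crude guess

    private static string BuildHtml(string plainText)
        => "<html><body><pre>"
           + System.Net.WebUtility.HtmlEncode(plainText)
           + "</pre></body></html>";  // the real template is far richer

    private static int CasePriority(string body)
    {
        // Assumes the notification contains a line like "Priority: 1 (...)".
        var match = System.Text.RegularExpressions.Regex
            .Match(body ?? "", @"Priority:\s*(\d)");
        return match.Success ? int.Parse(match.Groups[1].Value) : 3;
    }
}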

This is the result:

[Screenshot: the same notification reformatted as HTML]

I’m no designer (and Outlook’s HTML support is crap) but I’m relatively happy with what I was able to achieve in a short time frame. The plugin updates the email’s format as you open the email, so you basically never have to deal with the plain-text version outlined in the first screenshot. Obviously, the downside to this approach is that notifications received on your phone won’t render in the format above (unless you are using Exchange or similar and have opened the email at least once in Outlook).

Other features I was able to add relatively easily included setting the priority flag on the email depending on the priority of the case (hinted at in the sketch above). If the case is really urgent (priority 1 or 2), the priority flag is set appropriately.

[Screenshot: an urgent case notification flagged as high importance in Outlook]

If for some reason the parsing fails, the default email format is kept.

I’ve included the source code as an attachment to this post. Feel free to customise it for your needs. It should be relatively easy to update the template, as it is bound to a model which is instantiated as a result of parsing the plain-text email.

Hope this is useful.

Rohland

It’s inevitable. If you are a web developer, at some point in time, you are going to be stuck with the problem of migrating a system between internet service providers. At that point you will discover (if you haven’t already) that it’s a bit of a pain in the ass. Not just because you have to move stuff, but because of DNS management and the infamous propagation lag.

[Image courtesy of Michael Witthaus: localrhythms.wordpress.com]

I’ve given this quite a bit of thought lately as we were in the process of migrating many of our systems to a new hosting provider. There are two main areas you need to focus on to minimise downtime for your clients:

  • Time required to take the site down, back up, move and restore everything to the new server/s.
  • The DNS record update.

The second item on this list is the major issue. You have little control over the amount of time your DNS update is going to take to propagate to your clients due to DNS record caching. Reducing the TTL is not a foolproof solution, since many ISPs won’t obey unusually low TTL settings that you may configure. People have come up with various strategies for dealing with the issue, such as implementing a temporary sub-domain while the DNS record update slowly propagates around the globe. Either of these options may be your best bet in many circumstances; however, if you are making use of a load balancer or proxy (such as Nginx), migrating your site is less of an issue.
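For reference, the TTL is just a field on the record itself. In a BIND-style zone file, dropping it ahead of a move looks something like this (record name and address are hypothetical):

; Hypothetical BIND-style zone entry: TTL dropped to 5 minutes (300s)
; ahead of a migration. Many resolvers honour this; as noted above,
; some won't.
www   300   IN   A   203.0.113.10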

Before migrating to our new service provider, we were not making use of a load balancer such as Nginx/HAProxy; however, we had one set up at our new hosting centre (we went with Nginx) before we began the migration. Whilst the rationale for implementing Nginx was more out of the need to create a resilient hosting infrastructure for our clients, we also gained the ability to have more control over where our content is served from.

For those of you not familiar with the concept of a load balancer in the web context, it simply allows you to proxy requests through one endpoint to a dynamic number of back-end web servers. In the Nginx context, the back-end servers are configured as upstream servers. When an HTTP request is received, Nginx forwards the request to one of the back-end web servers. For successive requests, a round-robin balancing algorithm is employed to fairly distribute the load. You can control the weighting of each upstream server to deal with a mismatch in server capabilities (think of 3 web servers where 1 is far more capable than the others due to better hardware). While Nginx sufficiently handles balancing in our case, HAProxy does have far more capabilities in this area.
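A minimal sketch of what this looks like in Nginx configuration terms, with hypothetical back-end addresses (the weight directive handles the mismatched-hardware case just mentioned):

# Inside the http { } block: three upstream web servers, one weighted
# higher because it runs on better hardware. Requests to the public
# endpoint are proxied to the group round-robin.
upstream backend {
    server 10.0.0.11 weight=3;  # the more capable box
    server 10.0.0.12;
    server 10.0.0.13;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}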

Back to the DNS update issue (if the solution hasn’t become apparent yet, it will now). In our new environment, we configured our Nginx load balancer to point to all our existing web servers (i.e. not our new web servers). Once we completed the configuration of Nginx, we updated all of our DNS settings. We then waited a few days. As you can imagine, over the course of time each site began resolving to our new hosting provider, but users experienced no downtime since Nginx was simply proxying the updated requests to the original servers. This meant that clients who had not yet received the updated DNS record were hitting the original servers directly, while clients that did receive the DNS update were also hitting them, but through Nginx at our new hosting provider. Once we were comfortable that everyone was hitting the various sites through our new setup, we simply turned everything off and performed the real migration (i.e. physically moved the databases etc. across).

With this approach we minimised downtime for our clients. The 2-3 hours for the physical migration of databases and systems was acceptable to our clients; 2-3 days for DNS resolution was not. This solution mitigated days of potential downtime and worked well for us. The only real downsides to the approach were that our sites operated marginally slower (due to added latency between the new and old hosts) and that we had to pay for bandwidth twice during the transition period. Both of these issues were tolerable given the alternatives.

That’s it basically. I have documented the steps below for anyone who’s interested.

Cheers,
Rohland

Setup


- A. Existing web server (DNS currently points here)
- B. New server (Nginx server. This server proxies requests to C, D and E, which are the new web servers).

Migration


Step 1. Update the Nginx configuration (Server B) in our new setup such that the upstream servers are set to A (not the new web servers C, D and E) – see the sketch after this list.
Step 2. Update the DNS record to point to B.
Step 3. Wait 2-3 days while DNS resolves to B.
Step 4. Take your app offline and install the relevant systems on C, D and E (presumably there is also a DB server).
Step 5. Update Nginx configuration such that the upstream servers are set back to C, D, E.
Step 6. Bring the application back online.
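To make steps 1 and 5 concrete, the only thing that changes on B is the upstream block in the Nginx configuration (addresses hypothetical):

# Step 1: during DNS propagation, B proxies everything back to the
# existing web server (A).
upstream backend {
    server 192.0.2.10;      # A - the old host
}

# Step 5: after the physical migration, the same block points at the
# new web servers instead.
# upstream backend {
#     server 10.0.0.11;     # C
#     server 10.0.0.12;     # D
#     server 10.0.0.13;     # E
# }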

Notes:


1. Parts of Step 4 could be achieved before you even start the process. With the exception of the database, the rest of the application could be deployed and tested in the new environment to minimise downtime (only the database needs to be backed up and restored in the final step).
2. As described earlier, during the two to three days while your DNS update is propagating, all requests to B will be proxied to A, which means the latency between server B and A will be added onto every request. This shouldn’t be a big problem, depending on what the latency between your hosts is.
3. If you make use of SSL, you need to set up your SSL certificate on the Nginx machine before you switch your DNS settings (a minimal sketch follows below). I’ll probably need to cover this procedure in a separate blog post.
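A minimal sketch of the Nginx side, assuming the certificate and key have already been copied onto B (paths and names hypothetical):

# Terminate SSL on the Nginx box and proxy to the upstream group as
# before. This must be in place before the DNS switch, or HTTPS clients
# resolving to B will get certificate/connection errors.
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://backend;
    }
}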

If you’re an ASP .NET developer (Webforms or MVC) and haven’t picked up on the recent undercurrent of negative sentiment towards the Microsoft development platform then I’m not sure where you’ve been. In any case, welcome back. To fill you in, there have been a number of developers who have raised various concerns about the efficiency of an ASP .NET web developer, and how as a group, they compare to those developing with other languages (such as Ruby) and frameworks (such as Rails). I thought I would add my thoughts to the conversation.

I want to be as objective as possible about my thoughts so as to add value to the discussion, but I guess I need to confess that my development experience is heavily weighted to the .NET platform. Over the years I’ve developed web, mobile and desktop applications using various languages, platforms and frameworks. That said, by far, I am most proficient in C#.

Firstly, I think it is important to note that it is somewhat difficult to accurately compare development contexts. It’s easy to confuse platforms, frameworks and languages. I once looked over the shoulder of a developer typing this into Google:

[Screenshot: the search query]

Say what?

Huh?

What makes Ruby on Rails so popular? Can we attribute its success to the language (Ruby), the framework (Rails), or both? Some people may say that what makes developing with Rails so awesome is Ruby itself. Others may say it’s the framework, in which case the same developers could be just as efficient on other similar frameworks such as CakePHP. If I had to venture a guess, I think most experienced developers would go with the third option: both. It’s the entire ecosystem that makes Ruby and Rails so popular, just as it’s the .NET ecosystem that makes ASP .NET a popular option. This makes it difficult to compare aspects of each.

I have never had the opportunity to develop an application using Ruby on Rails, but I think I have enough experience on various other web platforms to raise the following point: no single language or framework defines a developer, especially a web developer. Right now, most of the applications I develop are web applications, and this means I need to be fairly proficient in a number of languages, not just C# (HTML/XHTML, CSS, JavaScript and SQL to name a few). When it comes to assessing the efficiency of developing on a platform, we need to address all of these aspects. Sure, Rails addresses the SQL aspect (Active Record) right off the bat, but how does a Rails developer approach HTML, CSS and JavaScript development? As web developers, we spend a lot of time in our server-side language of choice, developing code to dynamically output HTML. That said, a lot of time is also spent on the client side – designing, developing and debugging JavaScript. How does the Rails environment stack up in this department?

We make use of Spark as our view engine on MVC projects, and I guess Rails developers use something as slick (or better?). I’m not saying Visual Studio offers a better story for developers; I’m just interested to hear more about the Rails context. From a server-side perspective, what is the debugging experience like in Rails? I’ve used Eclipse and a few other popular IDEs and I’m yet to find one that matches Visual Studio’s debugging capabilities. Within a minute, a developer can write some code, set a breakpoint and have the breakpoint hit. No drama. The real point I am trying to make is that there is far more than just the server-side language or the framework that impacts the efficiency of a developer in the web context. There are a lot of other factors, including the IDE, view engine and development web server.

I mentioned earlier that I haven’t ever had the opportunity to develop a Ruby on Rails application. That said, I have started learning Ruby and have recently developed an application that routinely synchronises our Basecamp data with a local SQL Server database. I specifically wanted to learn Ruby out of the Rails context first. I was delighted that within a relatively short amount of time I had some functional code. It bodes well for my continued personal investment in Ruby. I took a crack at developing my Ruby application using NetBeans on Windows. While I enjoyed developing the code (and learning more about chunky bacon), I really didn’t enjoy the environment. Debugging was a pain in the ass and I resorted to debugging by logging. I hate this approach. I’m not sure if my experience is unique, and would really appreciate feedback from seasoned Ruby/Rails developers on this.

One of the topics that constantly repeats itself in these types of conversations is that .NET developers are stuck. They aren’t good at venturing out of their comfort zone and learning about new languages on other platforms. While this is a gross generalisation, I fear that the recent converts to Ruby from C# are right (for the most part). I think familiarity plays a key role in this, and the problem does not only apply to developers using .NET. In the book The Pragmatic Programmer, Hunt and Thomas recommend that a developer should strive to learn a new language every year. I must say that when I read this, I was astounded. Every year? That’s crazy! Then I realised that not only is it possible, but they are right. You see, developing on a single platform using the same language and framework, year in and year out, has some advantages, but in general, your solutions to problems tend to take the general form of your environment. My recent foray into Ruby has proved this. Your mind becomes boxed into approaching problems in a single way, when there may be a number of better solutions that reveal themselves wonderfully once you are forced into a different pattern of programming. All in all, I would say that even though I haven’t been using Ruby for long, I’m a better C# developer for it. Note that I am not suggesting Ruby itself was the reason; you could replace Ruby with Python in the paragraph above and I believe the same would hold true.

When it comes to comparing .NET (and Microsoft solutions generally) to alternative free and open source solutions, cost and licensing are often cited as reasons to migrate away from the Microsoft platform. This seems to be an important part of the discussion. The argument that always seems to be tossed around is that as a startup, you don’t want to sink your limited cash into technologies when the alternatives are free. I think this argument is flawed for a few reasons. If you have no experience using the alternative language, framework and platform, then the cost comparison only makes sense if your time is free. Let’s face it: in the startup world, your time is not free. Whether it’s because being first to market is important, or because you only have a small window of opportunity in which you can sustain yourself from the back of your garage, you will want to focus on getting your application deployed as soon as possible. Having to sit and learn something new and foreign when you should be focusing on building your app seems a weird approach to me, especially considering the factors described above. Even if you could get up to speed with the new language/framework/platform relatively quickly, once you have your application up and running for the world to see and use, you will have no experience dealing with the runtime operational issues that usually crop up as people start using your app. Even when you have experience on a certain platform, dealing with edge-case operational issues can be tricky. In an ideal world, time wouldn’t be a factor: you could perform a systematic analysis of all languages, frameworks and platforms and assess the pros and cons of each. Unfortunately, I doubt anyone has the luxury of doing this.

So given that it makes sense to build your app using technologies familiar to you, what are you in for? Microsoft haters will most certainly start quoting licensing costs for SQL Server Enterprise Edition and Windows Server Enterprise Edition, and how the costs will sink you in a month (or less). Again, I think one needs to look at licensing objectively. I just deployed an environment with four servers:

  • 1 x load balancer running Nginx on Ubuntu
  • 2 x web servers running Windows Server 2008 Web Edition
  • 1 x database server running Windows Server 2008 Web Edition and SQL Server 2008 Web Edition

Hosting environment

In terms of the cost of bandwidth, hardware, offsite backup etc., the licensing costs associated with the Microsoft operating systems and servers accounted for just 6% of the monthly cost. I would hardly call that prohibitive. Sure, if I had to use the Enterprise (or even Standard) editions of the software, the costs would increase quite dramatically. That said, I don’t see why a web startup would require these versions. In any event, if you want to compare apples with apples, one should consider that the enterprise version of MySQL is not free. At this point, NoSQL fans will probably be butting their heads against a wall, muttering that neither SQL Server nor MySQL is a good option. That’s fine. Use what works. The likes of MongoDB, CouchDB and RavenDB are available on the Windows platform (the latter developed in C# exclusively for the platform).

What about Visual Studio? Isn’t it quite expensive? Well, unfortunately, yes it is. That said, I think the one thing people don’t mention when discussing the cost of Visual Studio is what you are actually paying for. Have a look here. The Professional Edition with MSDN Essentials ($799) unlocks the following software for you:

  • Visual Studio 2010 Professional
  • Windows 7 *
  • Windows Server 2008 *
  • SQL Server 2008 *

* Per-user license allows unlimited installations and use for designing, developing, testing, and demonstrating applications.

So for essentially $800, you should be sorted for all the development software you will need to develop, test and demonstrate your application. It’s not free, but the deal isn’t too bad either.

This post is getting a bit long so I’m going to end here by saying this. While I’m a proficient C# developer, there is no reason why I should not dedicate time to learning about alternative languages, frameworks and platforms. It’s only going to make me a better developer, whether I stick to the Microsoft platform or not.