Slick inline trace logging in ASP.NET

I'm going to show you a slick way to configure log4net to write to the ASP.NET trace log, and then make it extremely simple to view the trace log for a page while viewing that page. Ultimately, we'll end up with something that looks like this:


It may be difficult to see from the screenshot, but I'm looking at a standard ASP.NET page with tracing information at the bottom. This tracing information is the same information you would see if you configured tracing in your web.config and then used the trace.axd page. However, we're displaying it right along with the page request. To allow tracing to be enabled with a simple URL parameter, you can add the following code to your Global.asax file:

protected void Application_BeginRequest(object s, EventArgs e)
{
    if (Context.Request.QueryString["trace"] == "true")
    {
        Context.Trace.IsEnabled = true;
    }
}

Of course, in a public environment, it may be wise to add security, or at least obfuscate the parameter. The trace information contains server details that could be helpful to someone trying to compromise the server.
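As a minimal hardening sketch, you could require a shared secret instead of a simple boolean. The key value below is a placeholder of my own, not anything from the original code:

```csharp
// Hypothetical example: require a secret key rather than "trace=true".
// The secret value is a placeholder; substitute your own mechanism.
protected void Application_BeginRequest(object s, EventArgs e)
{
    // Only enable tracing when the caller supplies the shared secret
    if (Context.Request.QueryString["trace"] == "s3cret-k3y")
    {
        Context.Trace.IsEnabled = true;
    }
}
```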

Now that we have tracing information, how can we get our log messages to show up? Log4net provides an appender for logging to the ASP.NET tracing feature:

<appender name="AspNetTraceAppender" type="log4net.Appender.AspNetTraceAppender">
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="[Thread #%thread] %-5level - %message%newline" />
    </layout>
</appender>

Don't forget to wire up the appender in the root logger so that log4net knows to use it:

<root>
    <level value="DEBUG" />
    <appender-ref ref="AspNetTraceAppender" />
</root>

Now, in your code, you can simply log messages as you normally would:

_log.Debug("Loading User Data");
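For context, `_log` here would typically be a log4net `ILog` obtained from `LogManager`. A minimal sketch (the page class name is illustrative):

```csharp
using log4net;

public partial class Default : System.Web.UI.Page
{
    // One static logger per class is the usual log4net convention
    private static readonly ILog _log = LogManager.GetLogger(typeof(Default));

    protected void Page_Load(object sender, EventArgs e)
    {
        _log.Debug("Loading User Data");
    }
}
```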


I've found this type of configuration very useful in my ASP.NET applications. It lets me analyze how long each portion of the page generation is taking so that I can find bottlenecks. It also motivates me to write a fair amount of logging, since I'll see a benefit during development, as well as after deploying it into the wild.


Azure – Performance, IoC, and Instances

Ever since the Google App Engine was released, I've been fascinated with cloud computing frameworks. The vision is to have a website that can scale from nothing to infinity, without having to worry about servers, viruses, uptime, etc. I've finally gotten a chance to play around with Azure, and I must say that I'm in love with the concept, but disappointed by the current reality.



I've taken a site that I consider a "playground site", and converted it over to run in Azure. One of the metrics I wanted to look at was the responsiveness of the deployed application. I run the main version of the site on a dedicated server, and I don't think it's unreasonable to use that as a baseline. After all, the purpose of Azure is to have the advantages of all the different types of hosting, yet have less to worry about.

To gauge performance, I used the Firefox add-on called Firebug. This let me see the amount of time that each requested element took to be transferred from the server. It also gives some insight into the amount of time it takes for the page to render. In the future, I'm going to use some server tracing to find specific operations that may be taking longer.

This is the baseline data from my dedicated server. As you can see, the page is served up very quickly. The page takes less than 100ms to render (1/10 of a second), and the entire page comes through in less than half a second.

Now take a look at the same code running on Azure:

To render the page, instead of 89ms, it now takes ~650ms. It takes a full second for the entire page and its elements to be sent down to the client.

Running both pages several times started to give me interesting results. The dedicated server was giving me extremely consistent results (even with other users hitting it). Azure however, was all across the board. It was typically around 1 second for the entire page to render, but would spike up to 5 seconds occasionally. Personally, I think this is completely unacceptable performance. Hopefully this is not indicative of the performance I can expect once it's released.


Azure is designed so that if you have an application that runs in medium trust, it shouldn't require any conversion to run straight in Azure (in most cases). If you're using a database, there are other restrictions because Azure doesn't use a standard SQL database. In addition to these obvious issues, a non-obvious issue is that if you're using an IoC container, it probably won't run in medium trust.

My application uses the IoC container Spring.NET, which immediately failed. I suspected (incorrectly) that Windsor might work better, but couldn't tell from the documentation. To make it easy to plug in different IoC containers, I started using the Common Service Locator. If you're doing IoC without the Common Service Locator, I really recommend you check it out.
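The appeal of the Common Service Locator is that calling code resolves services through one abstraction, so the container underneath can be swapped without touching consumers. A rough sketch (the `IUserRepository` interface is a made-up example):

```csharp
using Microsoft.Practices.ServiceLocation;

// IUserRepository is a hypothetical service interface for illustration
public interface IUserRepository { }

public class UserService
{
    public void LoadUsers()
    {
        // Resolve through the abstraction; Spring.NET, Unity, etc.
        // can each sit behind ServiceLocator.Current
        var repository = ServiceLocator.Current.GetInstance<IUserRepository>();
        // ... use repository ...
    }
}
```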

I was then fortunate enough to find this page, which has great information on the different IoC containers and their Azure compatibility:

  • Castle Windsor - My preferred IoC container, but it won't run under medium trust. Out!
  • StructureMap - My second favorite IoC container. Runs under medium trust locally, but not under Azure. Submitted bug report to Jeremy Miller. Reading through the StructureMap user's group, it looks like he's going to try to fix that early this year.
  • Ninject - I didn't really monkey around with Ninject much. The sample code I saw was riddled with [Inject] attributes, which kinda turned me off. Apologies to @nkohari if I dismissed it too early.
  • Autofac - Works great in medium trust under Azure, easy to configure, but doesn't support registering arguments for constructor injection at configuration time. You have to specify them when you resolve the service.
  • Unity - No problems at all! Worked great in medium trust on Azure, easy to configure, supports everything I need! I gotta say I'm really impressed by how far Unity has come in such a short time.
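To illustrate the Autofac limitation mentioned above: constructor arguments had to be supplied at resolve time rather than at registration time. A sketch using Autofac's `NamedParameter` (the service type and connection string are illustrative placeholders):

```csharp
using Autofac;

public class ReportService
{
    public ReportService(string connectionString) { /* ... */ }
}

var builder = new ContainerBuilder();
builder.RegisterType<ReportService>();
var container = builder.Build();

// The constructor argument is passed when resolving, not when registering
var service = container.Resolve<ReportService>(
    new NamedParameter("connectionString", "Server=...;"));
```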

My only reasonable option was Unity, which is Microsoft's IoC container. After another fun conversion, I was up and running! I honestly don't have any complaints about their IoC offering.


The Azure team decided to introduce the concept of "Instances". You have to decide how many virtual instances of a web server you want running. I really don't understand the logic here. Their sales pitch is all about handling unpredictable traffic patterns, yet an instance-based approach just gives me another aspect of the application that I have to worry about. They're promising that a configurable heuristics system will eventually be in place to handle the management of the number of instances. In effect, they are putting a band-aid on a problem that they've created even before release.

Contrast this design with the Google App Engine. With their system, you don't have to worry about configuring instances at all. It automagically scales from nothing to infinity.

Instances on the worker roles do make sense. Worker roles are not public facing; they are there to process data. By configuring the number of worker role instances, I can change the rate at which my data gets processed.


I realize that Azure isn't even in beta yet, so I shouldn't expect the world. I had my fingers crossed that their CTP would be production quality (wouldn't that be nice?). I think that Microsoft will eventually have a great cloud platform on their hands, it's simply a question of timing. Personally, I really don't want to have to worry about uptime, scaling, RAID, drivers, viruses, etc. so I think cloud computing is the inevitable solution.


Convenient Synchronization with Mesh and DropBox

A couple of weeks ago, I finally signed up for DropBox. If you're unfamiliar with the service, it's a file synchronization service. You install a client on multiple machines, and you get a special folder (aka a dropbox). When you make changes on any computer, it's synchronized with a central server, as well as the other clients.


Now that I've gotten the chance to put DropBox through its paces, I have to say that I'm very impressed. I've done a lot of operations that can sometimes choke file monitoring software like moving and renaming files, copying files while synchronizing, and in-use files. DropBox powered through like a champ, never giving me any errors, and without any noticeable mistakes.

In addition to simply synchronizing your files, their service also keeps a copy of your files on their server. Better yet, it automatically revisions the files. It seems to be fairly efficient, even considering all my files and revisions. Right now I'm only using 7.8% of the 2GB of space they give you for free.

One of the applications that I use the most is OneNote. Pretty much all of my disconnected thoughts go into OneNote until I can get them organized. I figured it was a great application to test the responsiveness of DropBox. I opened OneNote on two different computers. When I changed the text on one machine, the changes showed up on the other in 10-15 seconds. Perfect for keeping my notes in sync!

My one and only complaint about DropBox is that I can't create multiple DropBoxes. A single DropBox is simple and efficient, but it would be nice to have a little more flexibility.

Live Mesh

A few nights ago, I got a demo of the Azure platform by a Microsoft Evangelist. Azure is a huge blanket term for a group of confusing technologies. Even the name itself is confusing, since Azure is a cloud computing platform and is also the color of the sky when there are no clouds.


More importantly, one great thing to come out of the "Live Services" portion is a free product called "Live Mesh". It's essentially a competitor to DropBox. The nice thing about Live Mesh is its flexibility. I can make any number of synchronized folders, and they all seem to be as reliable as DropBox. Thanks to a sophisticated permissions system, you can even share folders with other people. For example, you can have a folder set up to distribute your photos to your family.

The Microsoft Azure Evangelist showed us a demo with the client installed on his laptop, and another client installed on his Windows Mobile phone. When he takes a picture on his phone, it's immediately pushed over to the other clients. It's a neat trick, and does make my mobile device more useful.


As far as I can tell, Live Mesh doesn't have plans to support a revision system like DropBox's. I think this is a horrible, horrible mistake. Having a file on multiple machines provides nice redundancy, yet if you accidentally delete a file on one computer, Live Mesh will happily delete every copy of it. It even happened to Scott Hanselman. In my opinion, this completely destroys any hope it has of competing with DropBox (at least for me). I'm hoping that they'll add a backup feature, or someone will use their API to add it for them.


One service I have yet to try is SugarSync. It looks promising because it syncs multiple folders, stores revisions, and even has a Windows Mobile version (although it's missing real-time sync). On paper, it looks like it has all the options you would expect from this type of service.

Syncplicity looks respectable, but with so many alternatives, I'm just not sure if they have anything unique that sets them apart.


I think this type of application is going to have a huge market. This is one of those few killer apps that, if done well, will be on everyone's computer. Obviously Microsoft's offering will be positioned to dominate, but we all know that they don't always have the absolute best product.

For now, I'll be using DropBox for my main document folder. It suits my needs, and until it messes up, I won't need to look elsewhere.


I'm Jason Young, software engineer. This blog contains my opinions, which my employer, Microsoft, may not share.
