Author Archives: christianmeilke

Tracking Code Base Over Time With NDepend

Recently I started exploring NDepend. After looking into the basics I wanted to explore the comparison features. Being able to report on how the quality of your code base changes over time is very attractive, and luckily NDepend makes this very easy.

Let us assume you have already set up an NDepend project and finished your first analysis. I included not only the assemblies I wanted analyzed but also a dotCover code coverage report (so I can track code coverage as well). Under Project Properties you choose that analysis result as the Baseline for Comparison:

baseline_comparison

Once you have done that, you sit back and wait for your code base to change. I waited a few days and luckily it did! I rebuilt the assemblies (which are referenced in the NDepend project, so I did not have to re-import them) and created another code coverage report. After that you just run a new analysis, and the dashboard, in addition to giving you a brief overview, now shows some interesting details about what has changed:

overview

In my case it informed me about:

  • increased number of lines of code, namespaces, types, methods etc.
  • a slight increase in average method complexity
  • decreased code coverage (I have more covered lines now, but even more uncovered lines were added)
  • new rule violations

When I looked at the rule violation results, there was an interesting category named API Breaking Changes. It informed me about possible problems because some public interfaces had been changed. It marked them as critical, which I do not think they are (I still have to find a way to ignore certain rules). If you develop a closed product and you are analyzing all there is to it, this should not pose a problem. If I were developing a framework that others use, I could see the point; NDepend has no way of knowing the difference. But there were also some clearer rules in there, like the one checking for changes to serializable types.

So the comparison feature is really nice. If you run this analysis frequently and observe the results over time, NDepend can give you both

  • a meaningful and quick impression of whether your code base is changing for the better or the worse, as well as
  • tools to find out more about the areas important to you where problems grew or shrank.
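If you want to drill into such a diff yourself, NDepend's query language (CQLinq) offers predicates for comparing the current analysis against the baseline. A hedged sketch of such a query follows; the exact predicate and property names should be checked against the CQLinq documentation:

```
// Flag methods whose code changed since the baseline
// and whose complexity is now above a threshold.
warnif count > 0
from m in Application.Methods
where m.CodeWasChanged() && m.CyclomaticComplexity > 10
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }
```

A query like this narrows the "what got worse?" question down to concrete methods instead of a dashboard-level trend.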

Just Wrote My First Javascript Unit Tests

I have written a lot of unit tests in C# and finally even came to terms with test-driven development (which I now think is awesome, after being afraid of it for such a long time, I admit). It was never my task to write much Javascript code, and I had only tried bits and pieces here and there. So, naturally, I only cared about getting the job done (and getting back to my shiny C# world), not about readability, maintainability, or code coverage. Only recently did Javascript become a bigger part of my developer life (mostly client-side, browser-interpreted). I never liked (reading) the jQuery soup, and I also had access to frameworks like AngularJS and Backbone.js, so I was in the very fortunate position to not just write the code but also divide it into modules and, greatest of all, easily test it.

After I wrote a couple of smaller and bigger Angular apps, this post finally pushed me in the right direction. The approach described in it uses Jasmine and the Angular Mock Module.

Let’s say we have this silly little service:

my.namespace.services = angular.module('my.namespace.services', []);

my.namespace.services.MenuService = function () {};
my.namespace.services.MenuService.prototype = {

    entries: [],

    updateEntries: function (entries, selected) {
        // ...
    },

    chooseEntry: function (entry) {
        // ...
    }
};

my.namespace.services.factory('$menuService', function() {
    return new my.namespace.services.MenuService();
});

It helps me create and update a dynamic menu and communicate menu selections between different apps (for instance, between the actual menu app and whatever the main app is). To test it, I write something like this:

describe('MenuService Tests', function () {
    var menuService;

    beforeEach(function () {
        module('my.namespace.services');
        inject(function ($menuService) {
            menuService = $menuService;
        });
    });

    it('should initially have no entries', function () {
        expect(menuService.entries.length).toBe(0);
    });

    it('should have some entries after adding some', function () {
        var newEntries = [{ title: 'some title' }, { title: 'some other title'}];
        menuService.updateEntries(newEntries);
        expect(menuService.entries).toBe(newEntries);
    });
});

The describe function call sets up the whole suite of tests; here it is called MenuService Tests. The beforeEach function call sets up the menu service using Angular Mocks. The it function calls are the actual tests; the function name combined with the first string argument reads almost like an English sentence ("it should initially have no entries").

The first test just checks that the entries array is empty at the beginning, again using the very human-readable assert call expect(something).toBe(somethingIExpect). The second test checks that entries pushed to the service via updateEntries are actually available on it.

You run the tests via an HTML page like this (just open it in a browser; Jasmine runs the tests for you and presents the results nicely, like at the bottom of the Jasmine website):

<!DOCTYPE html>
<html>

  <head>
    <meta charset="utf-8" />
    <title>MenuService Tests</title>
    
    <!-- Jasmine -->
    <link rel="stylesheet" href="path/to/jasmine.css" />
    <script src="path/to/jasmine.js"></script>
    <script src="path/to/jasmine-html.js"></script>

    <!-- Angular -->
    <script src="path/to/angular.min.js"></script>
    <script src="path/to/angular-mocks.js"></script>

    <!-- Code under test -->
    <script src="path/to/menuService.js"></script>

    <!-- Test code -->
    <script src="path/to/menuServiceSpec.js"></script>

    <!-- Bootstraps Jasmine -->
    <script src="path/to/jasmineBootstrap.js"></script>
  </head>

  <body>
    <div id="HTMLReporter" class="jasmine_reporter"></div>
  </body>

</html>

The Jasmine bootstrap code looks like this:

(function () {
    var jasmineEnv = jasmine.getEnv();
    jasmineEnv.updateInterval = 250;

    // Report the results into the #HTMLReporter div on the page.
    var htmlReporter = new jasmine.HtmlReporter();
    jasmineEnv.addReporter(htmlReporter);
    jasmineEnv.specFilter = function (spec) {
        return htmlReporter.specFilter(spec);
    };

    // Run the suite once the page (and any previous onload handler) has loaded.
    var currentWindowOnload = window.onload;
    window.onload = function () {
        if (currentWindowOnload) {
            currentWindowOnload();
        }
        execJasmine();
    };

    function execJasmine() {
        jasmineEnv.execute();
    }
})();

Voilà!

Starting to work with NDepend

I got my hands on a copy of NDepend. Don’t worry, it is a legitimate professional license… NDepend is a .NET code analysis tool that offers a lot of metrics and information on your code base. Here are some of my first impressions.

It starts out easy enough. You can use it from Visual Studio (it comes with an add-in installer), from the command line (for analysis during automated builds, I guess), or from a standalone desktop application. I chose the desktop application and was prompted to specify the code base. It analyzes assemblies, and you have multiple ways of importing them, including some wizards. This is where it started to get tricky, because the wizardry did not work that well, so I had to import the DLLs by hand. This is a tool for the sophisticated developer, and that shows right from the start. After I imported the main assemblies of a project I am working on, I generated my first report.

Here are a few items from the report.

I have seen NDepend before and the thing that scared me then and still scares me now is a thing called Treemap Metric View (which is presented very prominently):

Image

It is supposed to provide a single visual of the complexity and state of your code base. The report offers a few other visuals of the relationships between namespaces and assemblies, which are nice but overly complex and too much of a 10,000-foot view.

The main part of the report consists of the results of checks applied to your code base. Don't freak out: you will see a lot of rule violations! Take them for what they are meant to be: these rules represent opinions. But a lot of developers have found them very helpful over a long period of time, and thus they have hardened into something like facts. Here are a few examples.

There is a critical warning called Types too big, of which the tool found two violations, both around 800 lines of code long. It is common knowledge that big types should be broken down into parts that are each small and ideally do just one thing. This makes your components much more maintainable and reusable, but the main advantage, in my opinion, is increased readability. So good job, NDepend! There are many other useful rules, like Methods that could have a lower visibility.

Rules often make use of metrics like Cyclomatic Complexity or Nesting Depth. These give you a lot of insight, but they are complex and at times hard to interpret.
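To give an intuition for the first of these: cyclomatic complexity essentially counts the independent paths through a method (one plus the number of decision points). A small illustrative C# method of my own (not from the analyzed code base):

```csharp
// Each decision point (if, else if, case, loop, && / ||) adds a path.
// Base path (1) + two decision points = cyclomatic complexity of 3.
public static string Classify(int value)
{
    if (value < 0)
        return "negative";
    if (value == 0)
        return "zero";
    return "positive";
}
```

Tools like NDepend compute this per method and typically flag values above a configurable threshold.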

Some rules are debatable, especially ones that are declared as Critical but are only critical in the eye of the beholder. Yes, it is not good to violate the rule Potentially dead Methods, but should it be critical? Who out there does not have unused methods or types sitting around, waiting to be revived? An interesting one is Constructors of abstract classes should be declared as protected or private. I agree, and my Resharper lets me change that in a heartbeat, but is it a critical issue? Rules come with commented source code of the rule algorithm so that you can learn more about them. In this case it says that a public constructor on an abstract class is simply useless. Useless is not critical, at least not to me.

I think there is a dilemma here. You need to be a sophisticated developer to correctly use and interpret NDepend. But the very fact that you are that advanced means you already know what you are doing wrong and what you are doing right. The bad developer will get way more violations, but does that point him in the right direction? Does it change him?

I like the tool. It gives me something so that I can get into an informed discussion about the state of my code.

Things I would like to check out in the future:

  • Baselining: even if you do not buy into some of the rules and metrics, you can watch your code base evolve.
  • Incorporating unit test code coverage results into the reports.
  • Creating my own rules.
  • Disabling, enabling, and adjusting existing rules.
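As a taste of what creating my own rules might look like: NDepend rules are CQLinq queries, roughly LINQ run over the code model. The following is only a sketch; the exact property names should be verified against the CQLinq documentation:

```
// <Name>My custom rule: types over 500 lines</Name>
warnif count > 0
from t in JustMyCode.Types
where t.NbLinesOfCode > 500
orderby t.NbLinesOfCode descending
select new { t, t.NbLinesOfCode }
```

The warnif count > 0 prefix is what turns a plain query into a rule that shows up as a violation in the report.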

Running Pushqa push messaging service in stand-alone application

Introduction

Recently, I was asked to look into a push-capable .NET network messaging framework at work. We already had an in-process system that helped us monitor progress on long-running processes (using IObservable/IObserver). It just needed to be extended so that you could listen to updates from other processes.

Naturally, I looked into SignalR. But we wanted to be able to do server-side message filtering (for instance, to have a central messaging server but only receive updates on certain long-running processes in a given listener application). And maybe also have LINQ-capable filtering. Core SignalR does not bring these things to the table. But I found Pushqa which uses SignalR under the hood and adds the functionality we were looking for. Pushqa introduces itself the following way:

Pushqa is a .Net library that allows the filtering of incoming push events from a server to be performed server-side.

It allows the consumer to define queries over event streams so that events that are being emitted server side can be filtered and constrained by client side code. The queries are serialized and executed server side rather than sending all the events to the client for client side filtering.

Pushqa uses Microsoft’s Reactive Extensions (Rx) expressions over an HTTP connection with the queries serialized using the oData URI specification.

A Pushqa service is, at this point, designed to work within an ASP.NET MVC web application (see the Pushqa website or this blog post for examples). This was not the kind of environment I was looking for. SignalR itself is not limited in this regard, so I went looking for a way to create a self-hosted setup for Pushqa (the full source code for the following demo can be found here: https://dl.dropboxusercontent.com/u/31759146/ProgressUpdate.zip).

Service

I created a simple console application that uses in-process OWIN hosting, pulling in the necessary NuGet packages for OWIN self-hosting, SignalR, and Pushqa.

Here is the simple setup:

using (WebApp.Start<Startup>("http://localhost:8080/"))
{
    Console.WriteLine("Server running at http://localhost:8080/");
    Console.ReadLine();
}

The Startup class defines the actual service. Here we map Pushqa’s own PersistentConnection implementation (QueryablePushService) to our endpoint:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapConnection<QueryablePushService<UpdateContext>>(
            "/update",
            new ConnectionConfiguration { EnableCrossDomain = true }
        );
    }
}

The Pushqa connection needs a context in which we define the observable property that clients access. For this demo I am simply using a Subject (which implements both IObservable and IObserver) into which my demo service puts messages while running:

public class UpdateContext
{
    internal static Subject<Update> InternalUpdates = new Subject<Update>();

    public IQbservable<Update> Updates { get { return InternalUpdates.AsQbservable(); } }
}
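To complete the picture, the demo service pushes messages into the Subject via OnNext; each message is then run through every subscriber's serialized query on the server. A sketch follows; the Update class shown here is my assumption, since the post only reveals that it carries a ProcessId:

```csharp
// Assumed shape of the message type; the client code later in this
// post filters on ProcessId, so the real class has at least that.
public class Update
{
    public int ProcessId { get; set; }
    public string Message { get; set; }
}

// Somewhere in the demo service's worker loop:
UpdateContext.InternalUpdates.OnNext(
    new Update { ProcessId = 42, Message = "Step 3 of 10 done" });
```

Because Subject<T> is also an IObserver<T>, OnNext is all it takes to fan a message out to all current subscribers.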

Listener

For the client I also created a simple console application. It mainly uses the Pushqa.Client NuGet package. The client takes a long-running process's ID and filters messages by that ID. It is also only interested in the next 10 messages. All of this filtering is done server-side! This is great because you can have a ton of messages going through the service while only the interesting ones actually get sent to clients. For comfortable client-side filtering and processing you can use Reactive Extensions.

var inputProgressId = Int32.Parse(args[0]);
var updateProvider = new UpdateEventProvider();

var subscription = updateProvider.Updates
    .Where(u => u.ProcessId == inputProgressId)
    .Take(10)
    .AsObservable();

using(subscription.Subscribe(WriteUpdateToConsole))
{
    Console.WriteLine("Listening to updates from process {0}", inputProgressId);
    Console.ReadLine();
}
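The WriteUpdateToConsole handler is not shown in the post; a minimal version might look like this (assuming the Update type exposes the process ID and some message text):

```csharp
// Print each pushed update as it arrives from the server.
private static void WriteUpdateToConsole(Update update)
{
    Console.WriteLine("Update from process {0}: {1}",
        update.ProcessId, update.Message);
}
```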

Here is the glue code for connecting the local listener observable property to the server-side one:

public class UpdateEventProvider : EventProvider
{
    public UpdateEventProvider()
        : base(new Uri("http://localhost:8080/update"))
    {
    }

    public EventQuerySource<Update> Updates
    {
        get { return CreateQuery<Update>("Updates"); }
    }
}

Conclusion

You can now start a service and have multiple listeners receive push updates through the comfortable IObservable/IObserver pattern! The service is hosted in a simple console application (which at this point is not officially supported by Pushqa), and the message filtering can be done server-side!

I am not totally happy with Pushqa, though… Here is why:

  • It uses a LINQ-like interface, but it is not LINQ! You get basic operators like Where and Take, but not the full LINQ spectrum.
  • Not many LINQ expressions are supported. For instance, even a trivial Where(u => true) is not supported!
  • I would have expected Pushqa to use an official OData library (ODataLib?) for the OData plumbing, but it relies on its own implementation.
  • Passing the context to QueryablePushService is kind of awkward.

But all in all it is a good framework that

  • Gets the job done!
  • Uses more or less well-known technology under the hood
  • Is in good shape source-code-wise

Again: the full source code for this demo can be found here: https://dl.dropboxusercontent.com/u/31759146/ProgressUpdate.zip.