Testing the IoC Container

Monday, May 31 2010         No Comments

Previously On My Blog

In my previous post, I altered a simple ASP.NET Web Forms application so that instead of using Poor Man’s Dependency Injection and making constructor calls directly on concrete types, it depends solely on interfaces by using an Inversion of Control (IoC) container. You can download the source for the application here. In this post I am going to add two testing projects to the application: a unit test project and an integration test project. Using IoC is great, but how can one be sure that it is working correctly? Running the application and making sure it does not hit any resolution-related exceptions is one option, but that is really not the best way to test the IoC. The preferred method is to have automated tests assure that one’s classes work properly with the IoC container, so tests should be created in one of our test projects.

Where Do the IoC Container Tests Go?

The next question that comes to mind is: in which test project should the IoC container be tested? Putting the tests in the unit test project is one option, but it presents several problems. The IoC container in this instance is a static class, and static classes are in general much harder to test, as injecting fakes or mocks in place of a static class is difficult. When it comes to unit testing, all external dependencies should be mocked, faked, or otherwise eliminated from the context of the test. In this project’s business logic implementation, all classes expose their dependencies through a single constructor. The IoC container automatically resolves these dependencies at runtime, but at test time it would be ideal to mock them out.

It sounds like unit tests are not the place to test the IoC Container. This leaves the developer with only one option: integration tests. Integration tests are a perfect candidate for IoC container testing. A typical integration test makes sure that a particular method or class works when all of the real pieces are in use (i.e. all of its dependencies are using what will be used in the application). Therefore, using the IoC container to resolve an instance of the class that is being tested is a logical way to assure that the container will correctly resolve an instance of the class. The application is dependent on this as well, and rather than letting the application bomb, the tests can instead fail, which will immediately expose any flaws in the container’s implementation.

A First Integration Test for a Business Logic Class

In order to begin testing, we need a test project. As was discussed above, the IoC container will be tested within the body of integration tests, so that will be the project to create. A reference to the NUnit framework has been added, as can be seen in the image below.

IntegrationTests Project

Next, a test file needs to be created. In this example, the SuperRepository will be tested. Only one test will be written. This test will show that GetById does in fact return a SuperDomainObject with the given ID.

First Integration Test

The ISuperRepository object is constructed by using the IoC container. Note that we have to call EnsureDependenciesRegistered first, because the tester will not be calling Global.asax’s Application_Start() method. This test fixture has no knowledge of which implementation it’s actually using; it uses whatever IoC returns, which also happens to be what the application will depend on. Because of this, the integration test will always make sure that the implementation of ISuperRepository in use does what it’s supposed to do, even if a different implementation is used in the future.
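The screenshot is not reproduced here, but based on the description the test likely looks something like the following sketch (NUnit; the ID value, member names, and assertion are assumptions):

```csharp
// Hypothetical reconstruction of the integration test described above.
using NUnit.Framework;

[TestFixture]
public class SuperRepositoryIntegrationTests
{
    [Test]
    public void GetByIdReturnsSuperDomainObjectWithGivenId()
    {
        // The tester never runs Global.asax, so the dependencies
        // must be registered explicitly.
        DependencyRegistrar.EnsureDependenciesRegistered();

        // No knowledge of the concrete type: whatever IoC resolves
        // is exactly what the application will use.
        var repository = IoC.Resolve<ISuperRepository>();
        SuperDomainObject result = repository.GetById(1);

        Assert.AreEqual(1, result.Id);
    }
}
```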

Running this test shows that the repository is doing its job correctly:

Passing Test

Suppose, however, that ISuperRepository was not registered with the IoC container. Now the integration test will break, letting the developer know that this registration does not yet exist. This is a much better time to find out about the slip than waiting to run the application. In the case of Unity, an exception is thrown:

“The current type, SuperWebApplication.Core.Interfaces.ISuperRepository, is an interface and cannot be constructed. Are you missing a type mapping?”

As a result, the test fails (it actually fails during setup, since IoC is called during the test’s construction):

Failing Test

Adding the registration back to the DependencyRegistrar fixes the problem and makes the test pass. Now there is a little more confidence built into the application that the dependencies are registered successfully.

Conclusion

IoC containers make dependency injection really nice, but the type mappings have to be set up correctly in order for IoC to be effective. Instead of letting IoC fail at runtime, tests can be written to use IoC in the same manner. Unit tests should not have to worry about how an implementation is reached, but rather that an implementation is set up correctly. This means that IoC should be tested only within the integration tests project. With integration tests set up to use the IoC container, they will be simulating how the classes will be called in the application itself. When the tests are passing, it shows that the IoC container is configured correctly and can be used.

Downloads

Super Web Application Before Tests Were Added
Super Web Application After Test Was Added

Using Inversion of Control to Sever Dependencies

Saturday, April 24 2010         1 Comment

Introduction

Inversion of Control is a great tool that can be used to help separate the various facets of a project. It can completely separate knowledge of interfaces from knowledge of implementations, make the UI completely persistence-ignorant (to the point of not even referencing the Infrastructure portion of a project), and so on. In this post I will show you how to easily set up an Inversion of Control container so that you may also enjoy the pleasantries of keeping your project dependencies in check.

You can download the complete source code for this blog post both before and after IoC is introduced at the end of this post.

A Simple Super Web Application

UI’s Dependencies On Concrete Implementations

Let’s start with a sample Web Forms application that does not have any kind of inversion of control to speak of:

Solution Layout

It is a simple setup: there is a UI project (Web Forms), a Core project for the business logic, and an Infrastructure project that handles communication with external resources (files, a database, etc).

Here is what Default.aspx looks like when run:

Page Before IoC

Let’s take a look at what the UI project currently depends on:

Default's Code Behind Newing Up Implementations

UI's Dependencies

The code behind of Default.aspx is invoking constructors of the implementations directly, so the project requires a reference to Infrastructure. While it is programming against an interface, the code behind still depends directly on existing implementations of that interface. At any point these implementations could become more complex, requiring dependencies or other necessities in their constructors. It is time-consuming to update every usage of an implementation when the code could instead rely solely on the interface definition.

Business Logic In the Wrong Place

Another problem Inversion of Control solves is the issue of having to put business logic in non-Core projects:

SuperFactory Living in Infrastructure

SuperFactory In Solution Explorer

The SuperFactory is only in the Infrastructure project due to its default constructor knowing about an implementation that only exists in Infrastructure. This type of constructor layout is known as Poor Man’s Dependency Injection: since no Inversion of Control container exists, the class provides the default values for the constructor parameters in a parameterless constructor. However, the downside is clear. The business logic can no longer live in Core where it belongs. Instead, it has to live in Infrastructure due to its dependency on implementation.

Getting Inversion of Control Working

The IoC

It’s time now to introduce Inversion of Control to sever these dependencies and put this dependency-laden pesky project in its place. Let’s start by adding a simple static class to the root of the Core project:

IoC.cs

This lives in Core so that any portion of the project may reference it. Notice, however, that the job of resolving the dependencies is left to the IDependencyResolver (also defined in Core), which is initialized in the given Initialize method. This action could happen in the constructor, but since this is a static class it would have to happen the instant the application started instead of the developer being in control of when initialization happens. Here’s the IDependencyResolver interface:

IDependencyResolver.cs
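The screenshots above aren’t reproduced here, so here is a minimal sketch of what the two Core files might contain; the exact method signatures are assumptions based on the description:

```csharp
// IDependencyResolver.cs - the abstraction the rest of Core programs against.
public interface IDependencyResolver
{
    T Resolve<T>();
    void Register<TInterface, TImplementation>() where TImplementation : TInterface;
}

// IoC.cs - a static gateway; all actual resolution work is delegated.
public static class IoC
{
    private static IDependencyResolver _resolver;

    // Called explicitly at startup so the developer controls when
    // initialization happens, rather than a static constructor firing
    // the instant the application starts.
    public static void Initialize(IDependencyResolver resolver)
    {
        _resolver = resolver;
    }

    public static T Resolve<T>()
    {
        return _resolver.Resolve<T>();
    }

    public static void Register<TInterface, TImplementation>()
        where TImplementation : TInterface
    {
        _resolver.Register<TInterface, TImplementation>();
    }
}
```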

DependencyResolver

Now that IoC exists, there needs to be an implementation of IDependencyResolver. This is where the actual resolution and registration of dependencies will occur. It can be done by creating a custom Inversion of Control container or by using an existing one. The important thing is that the rest of the project should not know what is being used to resolve dependencies; the project only cares about using IoC to resolve any dependencies it may have. Thus, DependencyResolver will live in a new project called DependencyResolution:

DependencyResolution Project

DependencyResolver.cs 

In this example, Microsoft’s Unity is the Inversion of Control container of choice. Since the container lives in a separate project, the required DLL references are known in only one place. If at any point it is decided that a different container should be used, it can be swapped out without affecting any other portion of the project. DependencyResolver simply maintains an instance of UnityContainer and calls its Resolve and Register methods.
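A sketch of what DependencyResolver might look like, assuming the interface shape described above; only this project needs to reference the Unity assemblies:

```csharp
using Microsoft.Practices.Unity;

// The only class in the solution that knows Unity exists.
public class DependencyResolver : IDependencyResolver
{
    private readonly IUnityContainer _container = new UnityContainer();

    public T Resolve<T>()
    {
        return _container.Resolve<T>();
    }

    public void Register<TInterface, TImplementation>()
        where TImplementation : TInterface
    {
        _container.RegisterType<TInterface, TImplementation>();
    }
}
```

Swapping containers later means rewriting only this one class.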

Registering Dependencies

Now that there exists an implementation for IDependencyResolver, there needs to be a class responsible for registering the project’s dependencies. Enter the DependencyRegistrar:

DependencyRegistrar.cs

Like DependencyResolver, it also resides in the DependencyResolution project. The boolean field is to prevent extra work from happening if EnsureDependenciesRegistered is called more than once. Notice also how this uses IoC.cs to do the work of registration. Even in small projects, this file will get rather large, and breaking the registrations up into separate methods or even separate classes is usually a good idea. For now, however, it can all remain in a single method.
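A sketch of the registrar; the concrete type names registered here (and where Initialize is called) are assumptions based on the rest of the post:

```csharp
public static class DependencyRegistrar
{
    private static bool _dependenciesRegistered;

    public static void EnsureDependenciesRegistered()
    {
        // Guard against doing the registration work twice.
        if (_dependenciesRegistered)
        {
            return;
        }

        IoC.Initialize(new DependencyResolver());

        // In a larger project these registrations would be split into
        // separate methods or classes per area of the application.
        IoC.Register<ISuperRepository, SuperRepository>();
        IoC.Register<ISuperFactory, SuperFactory>();

        _dependenciesRegistered = true;
    }
}
```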

In order for IoC.cs to work properly in the web application, a call to EnsureDependenciesRegistered has to be made. This is accomplished in the Application_Start method in the Global class:

Global.asax.cs

Now the web forms can safely use IoC’s Resolve method to get its dependencies.
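That wiring is a one-liner in Global.asax.cs (sketched here; the handler signature is the standard ASP.NET one):

```csharp
using System;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Register everything before any page tries to resolve a dependency.
        DependencyRegistrar.EnsureDependenciesRegistered();
    }
}
```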

One of the benefits of Unity is that it recursively resolves dependencies, choosing the greediest constructor of the resolved type to instantiate the object. For example, if the interface ISuperFactory is resolved, then the SuperFactory constructor that takes an ISuperRepository will be called, and ISuperRepository will automatically be resolved as well. Because of this, there is no need for the default constructor. Now that that dependency is out of the way, SuperFactory and its business logic can move back to Core where it belongs!
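In other words, SuperFactory can now be sketched like this in Core (member details are assumed):

```csharp
// Lives in Core again: no parameterless "Poor Man's DI" constructor needed.
public class SuperFactory : ISuperFactory
{
    private readonly ISuperRepository _repository;

    // Unity picks this (greediest) constructor and resolves
    // ISuperRepository recursively when ISuperFactory is requested.
    public SuperFactory(ISuperRepository repository)
    {
        _repository = repository;
    }

    // ...business logic that uses _repository...
}
```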

Eliminate References to Concrete Types

Let’s go ahead and replace those constructor calls in Default.aspx’s code behind with calls to IoC instead:

Default.aspx.cs After IoC
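A sketch of the code behind after the change (the page member names are assumptions):

```csharp
using System;

public partial class Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Interfaces only - no constructor calls, no Infrastructure reference.
        ISuperFactory factory = IoC.Resolve<ISuperFactory>();
        ISuperRepository repository = IoC.Resolve<ISuperRepository>();

        // ...use factory and repository to populate the page...
    }
}
```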

Now that no more implementations are being used, the UI project no longer needs to reference Infrastructure at all. Now UI only depends on Core and DependencyResolution to get the job done. The infrastructure layer can change all it wants to and the UI project will not be affected in any way. With IoC set up, does the page still render correctly? Ideally, it should look just like the picture at the beginning of this post.

Page After IoC

Success!

Conclusion

What has been accomplished here? Any business logic that has been created or will be created can now live in Core regardless of where implementations of the interfaces it depends on live. Code that lives in Core is the easiest kind of code to test, since it by definition has no external dependencies. The interfaces can simply be mocked, faked, or stubbed and the logic itself can be thoroughly tested. Likewise, the UI project now has no business implementation details hiding away in code behind files and the like. Web Forms simply get the data they need through interface-based calls and output the data to the web browser. When new features are added to the project, one merely resolves an interface to start using that interface’s implementation without caring about how the interface is implemented. This makes the project incredibly flexible and free to grow.

Testing the Container

So how can one be sure that IoC is actually working correctly? Loading up the Web Forms to see is rather tedious. Furthermore, unrelated issues could crop up before being able to test IoC effectively (e.g. ASP.NET errors). In my next post I’ll show how integration testing can be used to easily prove that Inversion of Control is working properly.

Download Source

Super Web Application Before IoC
Super Web Application After IoC

Let’s Unit Test Some LINQ to SQL

Wednesday, March 24 2010         No Comments

I have recently begun working on a project that utilizes LINQ to SQL as the layer of persistence. The project has a set of repositories that are used to access the data from LINQ to SQL, which then convert the objects generated by LINQ to SQL to domain objects. An initial implementation of a common repository method looks something like this:

public class LegoRepository
{
    private readonly IMapper<Lego, LinqToSqlObjects.Lego> _legoMapper;
 
    // Repository initialization.
 
    public Lego GetById(int legoId)
    {
        using (var context = new ChristiansenDataContext())
        {
            LinqToSqlObjects.Lego legoEntity = context.Legos.
                SingleOrDefault(lego => lego.Id == legoId);
            if (legoEntity == null)
            {
                throw new LegoDoesNotExistException(legoId);
            }
            return _legoMapper.Map(legoEntity);
        }
    }
    
    // Rest of class.
}

In this example, the code is searching the database for a Lego with the given legoId. If it finds one, it creates a Lego domain object via the IMapper object and returns it. If it does not find one, however, it throws an exception. Simple integration tests can be set up to show that all of the pieces are working together (domain, mapper, repository, and database) in order to successfully fetch a Lego from persistence. We could also write an integration test to assure that the LegoDoesNotExistException gets thrown when a Lego with a given ID does not exist in the database. However, if we could control what context.Legos returns, we wouldn’t be dependent on any external persistence layer to test this.

How do we unit test something like this? The above example is creating a new ChristiansenDataContext in the method, with no way to change it. The code will always try to connect to the database in this case. The first thing we need to do is pull that dependency out of our repository. Let us create an interface IAdapter.

public interface IAdapter : IDisposable
{
    Table<LinqToSqlObjects.Lego> Legos { get; }
}

This interface is responsible for exposing the behavior we expect from a DataContext object. In this example, we only need the collection of Lego objects. Down the road we can add whatever other behaviors we need (data from other tables, methods like SubmitChanges(), and so on). Since IAdapter extends IDisposable, we can use it in place of the DataContext in the repository. Here’s what the implementation of IAdapter would look like:

public class Adapter : IAdapter
{
    private readonly ChristiansenDataContext _context = new ChristiansenDataContext();
 
    public void Dispose()
    {
        _context.Dispose();
    }
 
    public Table<Lego> Legos
    {
        get { return _context.Legos; }
    }
}

However, the repository can’t just hold on to a single instance of an IAdapter (i.e. a database connection), so we need a way to get a new IAdapter whenever we want it. Let’s create an interface that will get us an instance of IAdapter whenever we request it:

public interface IPersistenceLayer
{
    IAdapter GetAdapter();
}
 
public class PersistenceLayer : IPersistenceLayer
{
    public IAdapter GetAdapter()
    {
        return new Adapter();
    }
}

Our repository can be instantiated with an IPersistenceLayer object, which can be injected in testing so that we can return a mock IAdapter.

public class LegoRepository
{
    private readonly IPersistenceLayer _persistenceLayer;
 
    // Repository initialization.
 
    public Lego GetById(int legoId)
    {
        using (var adapter = _persistenceLayer.GetAdapter())
        {
            LinqToSqlObjects.Lego legoEntity = adapter.Legos.
                SingleOrDefault(lego => lego.Id == legoId);
            if (legoEntity == null)
            {
                throw new LegoDoesNotExistException(legoId);
            }
            return _legoMapper.Map(legoEntity);
        }
    }
    
    // Rest of class.
}

Now let’s write a test that will expect the exception to be thrown when a Lego with the given ID is not found in the database (in this example I’m using NUnit and Rhino Mocks):

[Test]
[ExpectedException(typeof(LegoDoesNotExistException))]
public void GetByIdThrowsExceptionWhenLegoWithGivenIdDoesNotExist()
{
    // Error! The Table class does not expose constructors.
    var legos = new Table<LinqToSqlObjects.Lego>(); 
    Expect.Call(_mockedPersistenceLayer.GetAdapter()).Return(_mockedAdapter);
    Expect.Call(_mockedAdapter.Legos).Return(legos);
    
    _mockRepository.ReplayAll();
    _legoRepository.GetById(0);
    _mockRepository.VerifyAll();
}

As indicated by the comment, we cannot instantiate a new instance of Table<T>. Additionally, Table<T> is a sealed class, so we cannot inherit from it and use that to create fake data. What do we do? The whole purpose of this was to be able to create fake data, but because Table is locked down, we can’t… Or can we?

public interface ITableWrapper<TEntity> 
    where TEntity : class
{
    IEnumerable<TEntity> Collection { get; }
    void InsertOnSubmit(TEntity entity);
}
 
public class TableWrapper<TEntity> : ITableWrapper<TEntity> 
    where TEntity : class
{
    private readonly Table<TEntity> _table;
 
    public TableWrapper(Table<TEntity> table)
    {
        _table = table;
    }
 
    public IEnumerable<TEntity> Collection
    {
        get { return _table; }
    }
 
    public void InsertOnSubmit(TEntity entity)
    {
        _table.InsertOnSubmit(entity);
    }
}

To get around the issues caused by Table<T>, we wrap Table<T> in a wrapper interface. Just like with IAdapter, we expose only the behaviors we need; InsertOnSubmit(), for example, would only be added once we actually needed to call it. We’ll have to alter IAdapter slightly: instead of returning Table<Lego>, we’re going to return ITableWrapper<Lego>.

public interface IAdapter : IDisposable
{
    ITableWrapper<LinqToSqlObjects.Lego> Legos { get; }
}
 
public class Adapter : IAdapter
{
    /* ... */
 
    public ITableWrapper<Lego> Legos
    {
        get { return new TableWrapper<Lego>(_context.Legos); }
    }
}

In the repository, instead of saying “adapter.Legos” to access the collection, we have to say “adapter.Legos.Collection” instead.

using (var adapter = _persistenceLayer.GetAdapter())
{
    LinqToSqlObjects.Lego legoEntity = adapter.Legos.Collection.
        SingleOrDefault(lego => lego.Id == legoId);
    // Rest of method
}

The good news, however, is that from this one extra word we can now easily test GetById():

[Test]
[ExpectedException(typeof(LegoDoesNotExistException))]
public void GetByIdThrowsExceptionWhenLegoWithGivenIdDoesNotExist()
{
    // An IEnumerable<T> is very easy to fake - an empty sequence will do. :)
    IEnumerable<LinqToSqlObjects.Lego> legos = Enumerable.Empty<LinqToSqlObjects.Lego>();
    
    Expect.Call(_mockedPersistenceLayer.GetAdapter()).Return(_mockedAdapter);
    Expect.Call(_mockedAdapter.Legos).Return(_mockedTableWrapper);
    Expect.Call(_mockedTableWrapper.Collection).Return(legos);
    
    _mockRepository.ReplayAll();
    _legoRepository.GetById(0);
    _mockRepository.VerifyAll();
}

When the test is run, Collection will return an empty enumerable of Legos. legoEntity will therefore be null and the exception will be thrown. Congratulations, we’ve successfully created testable LINQ to SQL code! We have decoupled where our data originates from how we manipulate the data after the fact. Because we have done this, unit testing these repositories is incredibly simple.

Exposing List<T> Creates Concrete Dependencies

Tuesday, February 23 2010         No Comments

It recently came to my attention that exposing generic lists is not something you want to do. Throughout my C# development career, I had thought that using List<T> was acceptable and that it could be used everywhere since it is part of the .NET framework. It turns out, however, that even exposing generic lists ties down your code to a concrete implementation. The primary issue here is that if functionality has to change, you’re stuck with the concrete List type. For example, say you want to add an event to fire when an element is added or removed from the list. List’s Add and Remove methods cannot be overridden, giving you no way to accomplish this task. At that point, you have to change from List to a different type and break your contract with your clients. Another issue is that the List type exposes a lot of properties and methods. Most of the time only a handful of these are actually needed. The rest just give you more opportunities to introduce code smells.

There do exist alternatives, however: Collection<T>, ReadOnlyCollection<T>, and KeyedCollection<T> can be extended via inheritance. Furthermore, they only expose a few methods, leaving a lot of the bloat out. IList<T> is also acceptable, although it still exposes a lot of methods that may not end up being used. However, it does allow you to supply your own versions of adding and removing elements: Add() and Remove() reach IList<T> through ICollection<T>, so each implementation provides its own, rather than being stuck with List<T>’s non-virtual versions.
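As a sketch of the add/remove event scenario above, Collection<T> exposes protected virtual InsertItem and RemoveItem hooks that List<T> lacks (the class and event names here are illustrative):

```csharp
using System;
using System.Collections.ObjectModel;

// Collection<T> lets us intercept insertion and removal; List<T> cannot,
// because its Add and Remove methods are not virtual.
public class ObservedCollection<T> : Collection<T>
{
    public event Action<T> ItemAdded;
    public event Action<T> ItemRemoved;

    protected override void InsertItem(int index, T item)
    {
        base.InsertItem(index, item);
        if (ItemAdded != null) ItemAdded(item);
    }

    protected override void RemoveItem(int index)
    {
        T removed = this[index];
        base.RemoveItem(index);
        if (ItemRemoved != null) ItemRemoved(removed);
    }
}
```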

The question of whether or not exposing List<T> as a method return type, parameter, etc. is acceptable came to mind when working with our company’s new product Nitriq. One of the many queries that come with the product searches for exposures of List<T>. Not fully grasping why this was a smell, I looked into it, and I now know that my coding habits have changed for good when it comes to using List<T> versus one of the more acceptable alternatives. Because I was not previously aware of this smell, it shows up a lot in code that I’ve written. It is not the most obvious smell to pick out either, since refactoring tools such as ReSharper do not point it out (and manually scanning for it while developing won’t yield many results). Fortunately, Nitriq can spot these smells in seconds.

Finding this smell using Nitriq is a very simple task. Let’s have a look at the query “Do not expose generic lists”:

var listOfT = Types.FullNameIs("System.Collections.Generic.List`1").FirstOrDefault();
var results = from type in Types
              from method in type.Methods
              where method.ReturnType == listOfT && !method.IsPrivate &&
                    !method.IsProtected && method.Type.IsInCoreAssembly
              select new { method.MethodId, method.Name, Type = method.Type.FullName };
WarnGreaterThan(results, 0);

The query first acquires an instance of the type List<T> (which is what List`1 refers to). It then queries all the methods in all of the types to see if List<T> is being exposed. Finally, it will generate a warning for this query if any results are returned. As you can see from the above sample, Nitriq’s LinqToCode allows you to quickly write your own custom queries. All of the queries that come with Nitriq are written in this manner and can be easily modified to suit your individual needs.

This post is part of a series exemplifying how Nitriq can assist in the identification of code smells.

Beware the Small Smells

Monday, February 22 2010         No Comments

There are varying degrees of code smells. Some can be complete showstoppers for the development and maintenance process, while others are merely minor nuisances that might slow you down a bit. These minor smells can range from violating agreed-upon naming schemes to using global variables to exposing a member variable more than necessary. By themselves, these code smells probably won’t cause you to break down in tears when it’s time to refactor or add a new feature to your code. After all, there are more important things to do: new features to add, larger smells to tackle, and the like. Someone will get around to fixing them, right?

Letting the small smells persist in your code base leads to larger smells moving into the neighborhood. Small smells can become large smells, leaving the door open for more small smells to fester. The idea is akin to that of a house with a couple of broken windows - so what if I break a third window? It’s not like the house is in worse shape because of it. It already has other broken windows! If a bunch of small smells gather around areas of your code, they’ll grow into full-fledged bugs and other nasty smells that will cause maintenance nightmares. Small code smells need to be nipped in the bud and taken care of as quickly as possible. The longer they remain, the longer your code rots.

Not all small smells are left unattended by choice; for example, you have deadlines. Unfortunately, developers do not have an infinite amount of time to work on code. Features need to be developed in a reasonable amount of time, or else the clientele would not remain the clientele for much longer. Thus, you wait to fix smells until later. You keep track of the places you want to go back to. You’ll have time to come back and finish what you started later, right? Again, while ideal, this is not going to happen. I think I’m repeating myself here, so I’ll move on.

The bottom line is that you have to spend some time clearing out the code smells. The question is this: how can you achieve the maximum benefit from refactoring while spending the least amount of time looking for where to clean up your code? One option is to keep track of all the smells you came across but did not have time to fix. There are several ways you can achieve this: place a TODO comment by every smell in your code; keep a separate listing available in a text document; create tasks in your project management software, and so on. TODOs can be easy to search for and take you right to where you need to go, but this litters your code with comments. As we all know: comments are code failures. If you have a separate list, it will invariably become outdated: smells that are fixed won’t be removed and newly-discovered smells won’t be added. Creating stories, issues, bugs, etc. in your project management system will make sure you don’t forget smells. This does, however, suffer the same fate as maintaining a separate list because you will also have to add a new story every time a smell is discovered.

Basically, maintaining a list of smells to fix as you’re developing is not the best way to go. Fortunately, you’re not out of luck just because you don’t have time to fix the small smells as you are developing. Nitriq can lead you straight to all of your smells, big and small. The predefined queries sniff out the code smells and tell you exactly where to find them. The rest of this post will focus on the small smells in this example usage of one of our team’s newest products. We’ll take a look at some code smells, use Nitriq to locate them, fix the smells and re-analyze the code to verify that the smells have been eliminated.

Let’s first examine a simple rule: do not declare protected members in sealed types. In C#, when a class is sealed it cannot be inherited. Protected members are accessible within their declaring class and from any class that derives from it. If the parent class is sealed, there’s no reason not to make the member private. These members could have been declared in classes that were unsealed at one point, or in classes that were about to become unsealed but never were. In either case, the accessibility of the members should change so that future developers working on the project will not have any ambiguity as to what the accessibility of the class or its member variables should be.

Here’s a code snippet I inserted into my Project Euler solution:

public sealed class IHaveNoSon
{
    protected int _age;
}

Now let’s run the relevant query in Nitriq:

 Protected Member In Sealed Class

Success! It has found the class IHaveNoSon and has determined that it has a single protected member. Now let’s go in and change the accessibility of _age:

public sealed class IHaveNoSon
{
    private int _age;
}

And now we re-analyze the assemblies and run the query again:

Protected Member In Sealed Class Fixed

As is evidenced in the screenshot, we have eliminated all instances of this rule being broken from our code base. Excellent!

A second rule available out of the box in Nitriq is “properties should not be write only”. I don’t really have to elaborate on why this is a code smell: what’s the point of storing a value that you don’t have access to? Running the query to catch this smell works just like the above example, only the query “Properties should not be write only” would be run instead of “Do not declare protected members in sealed types”. Another nice rule that needs no explanation is making sure that all created exceptions are public. If an exception type is not public, code outside its assembly can never throw it or catch it by type, leaving dead code that creates rot in your code base. The last thing the aforementioned house needs is a door that leads to nowhere. In Nitriq, the “Exceptions should be public” query will easily find all the non-public exceptions in your code.
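For illustration, here are hypothetical snippets that would trip those two queries (the type and member names are made up):

```csharp
using System;

public class Configuration
{
    private string _apiKey;

    // Write-only property: the value can be set but never read back.
    public string ApiKey
    {
        set { _apiKey = value; }
    }
}

// Non-public exception: code outside this assembly can neither throw it
// nor catch it by its specific type.
internal class WidgetNotFoundException : Exception
{
}
```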

Letting small smells continue to survive is not a good practice. Not only can small smells become big smells, but it also lets other smells pile up in your code base. While it is not always possible to fix every small smell you come across due to time constraints, it is not practical to maintain a list of smells to be fixed some time in the future. It is a lot easier to let a tool do the work for you. Nitriq can automatically find the common smells (and some not-so-common smells) in your code base quickly and point you directly to the offending class, method, field, etc. Using this tool can speed up refactoring while also easing your conscience for leaving the smelly code around to begin with.

This post is part of a series exemplifying how Nitriq can assist in the identification of code smells.

Introducing the Nitriq Console Application

Monday, February 22 2010         No Comments

One of the nice things about Nitriq is that you can easily run all of your queries against your code base as many times as you want. This falls into the same boat as unit testing: tests should be able to be run quickly at any time. If tests could not be run at any time or if they took a long time to run, then they would be ignored. That would defeat the purpose of having tests. Needing to open a separate application to run a body of tests can introduce this hindrance into the development process. Therefore, we here at NimblePros have also released a console runner for Nitriq in addition to the Nitriq GUI. With this cool new application you can run all of your queries for any Nitriq project in a console window. You can then run your queries against your projects in a quick reliable fashion, which means that you will get into the good habit of running them (along with all of your other tests) on a regular basis. With a small bit of effort you can even have the queries run against your latest build via continuous integration. In this post I’ll show you the basics of running this Nitriq console application with the prepackaged queries and a Nitriq project referencing my Project Euler assemblies and executable.

Here is an instance of the Windows command prompt opened to the directory with the console runner. It also contains the queries to run (the *.nq file) and the Nitriq project to analyze (*.nitriqProj):

 

The Console Application

Running the application only requires three parameters: the Nitriq project file, the queries file, and the directory to save the generated output file. In this example, I’ll be using the following command:

Nitriq.Console.exe euler.nitriqProj myQueries.nq ./

Since I want the output file to be generated in the current directory, I used “./” to do just that. Let’s run the application and see what we get:

Running the Console Application

It looks like we had some errors in our analysis. In addition to generating the output file, NitriqReport.txt, the application also returns an exit code of 1 to indicate that the analysis reported errors. This exit code can be checked in continuous integration builds to flag success or failure. Let's take a peek into the generated output file:

Nitriq Report

The output file tells us precisely which queries failed and for what reason. In this example, several queries threw warnings (i.e. found instances of broken rules) and one query reported an error (an exception was thrown). Now let’s fix these issues and run the application once more:

Running the Console Application Again

Now that the errors have been taken care of, we get “Nitriq Analysis Completed” in the command prompt and a successful return code of 0. As you can see, we can easily re-run the Nitriq console application with the same project and queries, encouraging us to run these queries against our code on a regular basis. Integrating this body of tests into the development cycle will allow even more bugs to be squashed and code smells to be extinguished, yielding a cleaner code base and happier clients. If you make this a part of your continuous integration setup, you can even have the build be considered broken until this application returns a successful message.

Copious Cyclomatic Complexity Creates Confusing Code

Friday, February 19 2010         No Comments

Cyclomatic complexity is a neat metric to measure. In short, it counts the number of linearly independent paths through a program or method. Thus, a program containing a single entry and exit point and no loops, conditional statements, try-catch blocks, etc. will have a cyclomatic complexity of 1. The name itself threw me at first; I did not understand why "cycle" was in it. It made more sense, however, after reading the Wikipedia article and looking at the graphical representation of the metric: cyclomatic complexity essentially counts the number of distinct cycles in the control flow graph of a given program. A quick and easy way to calculate a program's cyclomatic complexity is to count the number of decision points in the program and then add one. This calculation is accurate only if the program has a single entry and exit point.
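The decision-points-plus-one rule can be sketched as a naive keyword count. This is only an illustration under simplifying assumptions; real tools analyze the control flow graph, and a text scan like this is easily fooled by comments, strings, and multiple exit points:

```csharp
using System.Linq;
using System.Text.RegularExpressions;

static class ComplexityEstimate
{
    // Naive sketch of the "decision points + 1" rule for a single-entry,
    // single-exit C# method body given as text. Each branching keyword and
    // each extra condition joined by && or || counts as one decision point.
    static readonly string[] DecisionKeywords =
        { "if", "while", "for", "foreach", "case", "catch" };

    public static int Approximate(string methodBody)
    {
        int decisions = DecisionKeywords.Sum(keyword =>
            Regex.Matches(methodBody, $@"\b{keyword}\b").Count);
        decisions += Regex.Matches(methodBody, @"&&|\|\|").Count;
        return decisions + 1;
    }
}
```

For instance, a body with no branches at all yields 1, and a body containing one loop plus one if/else yields 3, matching a hand count.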

Let’s have a look at a simple method:

void DoSimpleStuff()
{
    a();
    b();
}

This method has no decisions to make so it has a cyclomatic complexity of 1. You can see in the associated graph that there is only one linearly independent path. In the following graphs, the blue node represents the entry point and the red node the exit point.

DoSimpleStuff() Graph

A more complex method yields a higher cyclomatic complexity. Let's have a look at a method that contains a couple of decision points; its corresponding graph immediately follows.

void DoStuff()
{
    for (int i = 0; i < 1000; i++)
    {
        a();
    }
    
    if (condition())
    {
        b();
    }
    else
    {
        c();
    }
}

DoStuff() Graph

Note that a loop only generates one extra branch whether it loops once, twice, or 100 times. Also note the difference in the directed edges between the for loop and the if statement. DoStuff() has a cyclomatic complexity of 3. While this measurement starts off small, it can grow rapidly. For example, if a conditional statement has multiple conditions then each of those conditions adds to the complexity:

void DoMoreStuff()
{
    if(c1() && c2())
    {
        a();
    }
    else
    {
        b();
    }
}

DoMoreStuff() Graph

DoMoreStuff() also has a cyclomatic complexity of 3. If c1() returns false, execution goes to b(). If c1() is true, execution proceeds to c2(), which then leads either to a() or b() depending on its truth value. As you can see, even a little extra decision making can rapidly increase the complexity of a seemingly simple method. High cyclomatic complexity is usually a good indicator of a code smell: if the code has a lot of decisions to make, is it adhering to the Single Responsibility Principle? How easy or difficult would it be to mock or fake its behavior in a testing environment? Speaking of tests, a good guideline when creating them is that a method has good coverage if the number of tests equals the method's cyclomatic complexity. The above examples would be relatively easy to test: a couple tests are written and the job is done. What if the measurement were 10? 15? 37? Getting such a messy method under test would certainly be more difficult! Such methods can make maintenance difficult as well: in addition to all the tests that have to be written and maintained, each path through the code opens a new door for bugs to crawl in.
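The three tests for DoMoreStuff() can be made concrete by injecting its conditions as delegates, one test per linearly independent path. This is a sketch for illustration only; the real method would call a() and b() rather than return labels:

```csharp
using System;

static class PathCoverageDemo
{
    // DoMoreStuff() with its conditions injected as delegates so each path
    // can be forced in a test; returns which branch ran in place of calling
    // the real a() and b().
    public static string DoMoreStuff(Func<bool> c1, Func<bool> c2)
    {
        if (c1() && c2())
            return "a"; // the a() branch
        return "b";     // the b() branch
    }
}
```

Three tests then cover the three paths: c1 false (c2 never runs, thanks to short-circuiting) reaches b(); c1 true and c2 false reaches b(); both true reaches a().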

So how does one find smelly code that reeks of cyclomatic complexity? While it is simple to calculate this measurement for methods that only have single entry and exit points, throwing multiple entry/exit points into the equation complicates matters significantly. Furthermore, it is tough to sort through a large code base to find these complicated code clusters. This is where an analysis tool comes into play. The company I work for has recently released a cool new tool called Nitriq that makes analyzing .NET code really easy. With it, you can locate these complex bits of code in your project in a snap. For the remainder of my post, I’m going to show you how easy it is to find portions of your code containing high cyclomatic complexity.

After selecting the assemblies and executables you want to analyze, you can overview your entire codebase at a glance:

Treemap View of Cyclomatic Complexity

You can see the methods with the largest cyclomatic complexity in your code base by selecting the “Method – Cyclomatic Complexity” metric within the Treemap window. As exemplified in the above screenshot, hovering over a particular region with the mouse gives you the pertinent information. In addition to the Treemap, queries that come with Nitriq also take advantage of this metric to give you a detailed listing of methods that have a high cyclomatic complexity:

Cyclomatic Query

The Treemap window highlights the methods returned in the executed query (the assembly I analyzed in this example is far larger than the one in the previous example, so its Treemap window contains quite a bit more depth). The red portion of the Treemap indicates the class I had double-clicked within the CodeTree window. This functionality also exists when double-clicking icons in the Grid view. The cyclomatic complexity is given in addition to other useful information such as physical line count and out types (i.e. all of the types that the method depends on). The columns can be easily sorted by clicking on the column titles, so finding the methods with the highest cyclomatic complexity is a cinch.

Maintaining a reasonable amount of cyclomatic complexity in your code is essential. It is too easy to let a simple block of code get complicated very quickly. Letting that measurement grow too large can lead to maintenance nightmares, difficult testing, and a potential for splitting headaches. Refactoring out conditionals into their own methods or classes, removing loops where possible, and adhering to good software design principles are all steps that can be taken to help lower the complexity. It can be quite time-consuming to find these code smells in a manual fashion, so the usage of a solid code analysis tool is a great approach to take when setting out to lower the cyclomatic complexity of your code.

This post is part of a series exemplifying how Nitriq can assist in the identification of code smells.

Using Metrics to Promote Agile Software Development Practices

Thursday, February 18 2010         No Comments

Testing is vital. I have previously written how writing tests is important in your code and that you should never put off writing them. Having a solid body of unit and integration tests takes away the fear of refactoring code, as behavior preservation is ensured. Production tests also exist, which can be run against live information to make sure that data integrity is maintained. UI testing is also available; frameworks such as Selenium allow you to test the UI portions of your application with ease.

When writing code it is important to keep good coding principles in mind. Testing helps enforce and maintain good principles in your code; however, it is also important to review the big picture. This allows you to identify questionable areas of code (methods that have grown large, classes that have too many responsibilities, and so on). When up against a deadline sometimes there is not time to write the most optimal code. Some less-than-perfect code has to be hacked in just to complete the task. Ideally, you would want to get back to those points later and clean them up. It would be nice to have some sort of program to analyze your code base and identify those points in your code. Well, I recently got a hold of a product that will do just that.

We here at NimblePros have released a great new .NET code analyzer: Nitriq. You merely load in your project's assemblies or executables, and it immediately gives you feedback that can identify code smells.

Nitriq

The Treemap window shows a graphical representation of the largest chunks of code in the selected category. At a glance, you can find out where the most complex components of your code live, find the code that contains the largest methods, locate the classes that contain the most lines of code, and so on. Then you can go in and refactor the code as necessary. The real meat and potatoes of Nitriq, however, lies in the queries that can be executed against the loaded assemblies. In addition to the many out-of-the-box queries at your disposal, creating your own rules is incredibly simple: Nitriq uses LINQ to define its rules, so in no time at all you can implement your own rules as simple LINQ queries. Take a look at the following query:

var results =
    from type in Types
    let ConstructorCount = type.Methods.Where(m => m.IsConstructor).Count()
    where type.IsAbstract && ConstructorCount > 0 && type.IsInCoreAssembly
    select new { type.TypeId, type.Name, ConstructorCount };

WarnGreaterThan(results, 0);

Here it is in action:

QueryExecutionInNitriq 

As you can see, a class is flagged when it is abstract and contains a non-default constructor. In the Grid and CodeTree windows you can discern which classes were found, along with other relevant information (in this case, the number of defined constructors). The Treemap window also highlights the portions of your code base that were flagged. This query is one of the many that come with Nitriq, and like the other queries, it can easily be modified to suit individual needs.
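As a hypothetical sketch of such a modification, built only from the query surface shown above (the threshold of three is an arbitrary choice of mine, not a Nitriq default), the same shape could flag core types that declare many constructors:

```csharp
// Hypothetical variant of the query above: warn on core-assembly types
// that declare more than three constructors.
var results =
    from type in Types
    let ConstructorCount = type.Methods.Where(m => m.IsConstructor).Count()
    where type.IsInCoreAssembly && ConstructorCount > 3
    select new { type.TypeId, type.Name, ConstructorCount };

WarnGreaterThan(results, 0);
```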

What these metrics and queries give you is another body of tests to run against your code. As with unit and integration tests, these can be used to preserve good coding principles and lead to fewer bugs in production code. As you can see, many rules come prepackaged with Nitriq, and these rules promote agile software practices. In future posts, I will detail why violations of these rules are indeed code smells and how removing instances of broken rules will lead to a better overall code base.

Telling Unity Which Constructor to Use when Initializing a Class

Sunday, January 31 2010         No Comments

I’ve been using Unity for a fair amount of time now, and it has really grown on me. In my previous post, I linked to a solution which used it (although the rest of the project didn’t depend on that particular Inversion of Control container). For basic dependency injection, it has been working really well for me. However, I recently ran into an interesting issue. Say I have a class Foo (implementing IFoo) that I want resolved at runtime. Foo has the following constructors:

public interface IFoo
{
    // ...
}
 
class Foo : IFoo
{
    public Foo(IDependency dependency)
        : this(dependency, 5)
    { }
 
    public Foo(IDependency dependency, int someNumberWhoseDefaultIs5)
    {
        // ...
    }
 
    // ...
}

By default, Unity selects the greediest constructor (the one with the most parameters) and attempts to resolve each of its parameters. That's fine for parameters whose types it can resolve, but what about that int? It's clear that we want it to default to 5, but Unity will ignore the first constructor and try to satisfy the second, causing a runtime exception on resolution.

Unity provides a relatively simple solution to this problem. You can tell Unity which constructor to resolve to when registering a type. In this example, we would like Unity to resolve the first constructor, so that our default value of 5 is used (and no exceptions are thrown, of course):

UnityContainer unityContainer = new UnityContainer();
unityContainer.Configure<InjectedMembers>()
    .ConfigureInjectionFor<Foo>(
        new InjectionConstructor(unityContainer.Resolve<IDependency>()));
unityContainer.RegisterType<IFoo, Foo>();

(As you can see, I prefer to use code to register types as opposed to configuration files.) InjectionConstructor takes a params object[] array, since it does not know ahead of time what parameters the constructor has. It then takes what it is given and tries to find a matching constructor. Now Unity will resolve the constructor we want it to. As the example shows, IDependency will still be resolved via the UnityContainer, but when Unity calls the first constructor, it will also use our default value of 5.
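The greedy default can be illustrated with a small reflection sketch. This is a simplified illustration of the selection rule only, not Unity's actual implementation:

```csharp
using System;
using System.Linq;
using System.Reflection;

interface IDependency { }

class Foo
{
    public Foo(IDependency dependency) : this(dependency, 5) { }
    public Foo(IDependency dependency, int someNumberWhoseDefaultIs5) { }
}

static class GreedyConstructorDemo
{
    // Simplified illustration (not Unity's actual code): by default the
    // container picks the constructor with the most parameters and then
    // tries to resolve every parameter, including ones like int that
    // have no registration.
    public static ConstructorInfo PickGreediestConstructor(Type type) =>
        type.GetConstructors()
            .OrderByDescending(c => c.GetParameters().Length)
            .First();
}
```

For Foo above, this picks the two-parameter constructor, which is exactly the one Unity cannot satisfy because of the unregistered int.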

This brings up a question: Where should we put default values? I like to keep them in the class they belong to, but we could also put them where we register types with Unity. If we kept up this practice, then we’d have all of our default values in one place, but we wouldn’t necessarily know what the values mean. We could extract variables, but then the registration file(s) would get messy, and I like to keep those as simple as possible.

My Visual Studio Solution for Project Euler

Saturday, January 23 2010         No Comments

I’ve recently been getting back into completing problems found at Project Euler. There are a lot of good problems on there that help me hone my programming skills (especially when it comes to test driven development). When starting into these problems again, I decided to create a solution where I could easily start up the TDD process without having to concern myself with other tasks that I found myself doing over and over again. I presently have this solution hosted in a Google Code repository. Feel free to check it out.

I created a solution with a simple architecture:

- The Core project houses things such as IEulerProblem, a simple interface under which I define Euler problems, as well as the problems themselves (along with the other classes needed to solve those particular problems).
- The DependencyResolution project covers my Inversion of Control needs. Currently I'm using Unity, but as this is the only project that depends on it, I can easily switch.
- The UI project houses a simple console application which outputs both the problem and the solution. Now that this part is written, I don't have to touch it: all I do to show a different problem is change which Problem class IEulerProblem resolves to in the DependencyResolution project.
- The UnitTests project is where all the TDD magic happens, naturally.
- The IntegrationTests project houses the acceptance tests for the solutions I've written, just to make sure that the implementations are actually correct. I write these tests after I have submitted my answer.
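The IEulerProblem interface itself isn't shown here, so the following is a hypothetical sketch of the shape such an interface and a problem class might take; the names and members are my assumption, not the repository's actual code:

```csharp
using System.Linq;

// Hypothetical sketch: the real interface and class names in the
// repository may differ from these.
public interface IEulerProblem
{
    long Solve();
}

// Project Euler problem #1: the sum of the multiples of 3 or 5 below 1000.
public class ProblemOne : IEulerProblem
{
    public long Solve() =>
        Enumerable.Range(1, 999)
                  .Where(n => n % 3 == 0 || n % 5 == 0)
                  .Sum();
}
```

Under this shape, the UI project would resolve IEulerProblem through the container and print the result of Solve(), while an acceptance test in IntegrationTests would simply assert the known answer.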

I have a ClickToBuild file set up for this solution, which builds the project and runs all of the tests within a command prompt window. This allows me to easily verify that my solution is in good working order before checking in any more changes. Another goal I have for this project is to set up some sort of Continuous Integration for it (most likely through TeamCity). I have a somewhat-old machine that I’m planning to turn into a little server so I can tinker around with this concept.

Definitions of classes common to all problems, such as IEulerProblem, are subject to change. In fact, I changed the interface simply because moving from the first problem I attempted (#8) to the next one exposed something that didn’t belong there. I believe that this solution will continue to evolve as I complete more problems with it. This is probably a bit overzealous for such a simple project, but I wanted to tinker around with designing a solution architecture that I’m satisfied with.