Tag Archives: TDD

Asserting without Equals

Arnis suggested that implementing Equals just for NUnit was wrong, so I thought I’d try doing without it.

The CollectionAssert.AreEqual method accepts an optional IComparer implementation. If specified, that will be used instead of Equals.

So I put together a class called PartComparer. Since I switched to comparing the state of the objects outside their classes, I had to expose some of that state via read-only properties. I think I can live with that.

I then deleted all of my Equals and GetHashCode methods (I wasn’t really using GetHashCode anyways).
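
I never showed PartComparer itself, so here’s a sketch of what a comparer along these lines might look like. This is a reconstruction, not the actual Nustache class: the Part and LiteralText stand-ins (and the read-only Text property) are assumptions based on the description above.

```csharp
using System.Collections;

// Stand-ins for the real classes (shapes assumed for illustration):
public abstract class Part { }

public class LiteralText : Part
{
    private readonly string _text;

    public LiteralText(string text) { _text = text; }

    // The read-only property that exposes the state being compared.
    public string Text { get { return _text; } }
}

// Compares Parts by type and exposed state instead of relying on Equals.
public class PartComparer : IComparer
{
    public int Compare(object x, object y)
    {
        if (ReferenceEquals(x, y)) return 0;
        if (x == null || y == null) return -1;
        if (x.GetType() != y.GetType()) return -1;

        var a = x as LiteralText;
        if (a != null)
            return string.CompareOrdinal(a.Text, ((LiteralText)y).Text);

        return 0; // other Part subclasses would be compared similarly
    }
}
```

Note that IComparer only needs to return 0 for “equal” here; NUnit uses it purely as an equality check, so the ordering of unequal parts doesn’t matter.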

Here’s what the test changed to:

[Test]
public void It_scans_literal_text()
{
    var scanner = new Scanner();

    var parts = scanner.Scan("foo");

    CollectionAssert.AreEqual(new Part[]
                                  {
                                      new LiteralText("foo"),
                                  },
                              parts,
                              new PartComparer());
}

It works the same as before. It just requires the extra argument.

Custom comparers work even with the newer Assert.That syntax:

[Test]
public void It_scans_literal_text()
{
    var scanner = new Scanner();

    var parts = scanner.Scan("foo");

    Assert.That(parts, Is.EqualTo(new Part[]
                                      {
                                          new LiteralText("foo"),
                                      })
                         .Using(new PartComparer()));
}

The verbosity of constructing the expected Part array is a bit much. If only C# could construct lists like JavaScript, Python, and Ruby…

I decided to try to hide that in a custom assertion method:

private static void AssertThatPartsAreEqual(
    IEnumerable<Part> actualParts,
    params Part[] expectedParts)
{
    Assert.That(actualParts, Is.EqualTo(expectedParts)
                               .Using(new PartComparer()));
}

Now my test looks like this:

[Test]
public void It_scans_literal_text()
{
    var scanner = new Scanner();

    var parts = scanner.Scan("foo");

    AssertThatPartsAreEqual(parts, new LiteralText("foo"));
}

One really nice thing about having the custom assertion method is that I can modify how parts are compared in just one spot. For example, Scan is really an iterator. NUnit’s failure messages when the collections aren’t arrays are less than ideal. With the comparisons being done in just one spot, I can modify it to convert the enumerable into an array:

private static void AssertThatPartsAreEqual(
    IEnumerable<Part> actualParts,
    params Part[] expectedParts)
{
    Assert.That(actualParts.ToArray(), Is.EqualTo(expectedParts)
                                         .Using(new PartComparer()));
}

Since I’m using .NET 3.5, I can take this one step further and use an extension method:

internal static class EnumerablePartExtensions
{
    public static void IsEqualTo(
        this IEnumerable<Part> actualParts,
        params Part[] expectedParts)
    {
        Assert.That(actualParts.ToArray(), Is.EqualTo(expectedParts)
                                             .Using(new PartComparer()));
    }
}

With that in place, my test now looks like this:

[Test]
public void It_scans_literal_text()
{
    var scanner = new Scanner();

    var parts = scanner.Scan("foo");

    parts.IsEqualTo(new LiteralText("foo"));
}

Wow, it’s like Ruby but without the monkey patching!

One thing I did leave in my code is all of the ToString overrides. Without those, NUnit’s failure messages would be much less helpful. They’re also very useful while debugging.

Thanks, Arnis. Your comment helped me find an alternative (and quite possibly better!) way to get what I want.

I fail at TDD?

I actually think I’m pretty good at TDD. Every now and then I get reminded that I’m not as good as I think I am.

I’ve been working on a new project (an implementation of the Mustache template language in C# that I’m calling Nustache) and have been having a lot of fun with it. This is the project I’m going to use as an example of how I fail at TDD.

Since this project involves parsing a string, I decided I would probably need a class to scan the string for tokens so those tokens could be parsed and then evaluated.

While writing the test for my Scanner class, I wrote it so that it would assert on the sequence of tokens it returns. I decided the tokens would be instances of a class called Part. One specific subclass of Part would be LiteralText. It represents a span of characters from the source template that is not supposed to be evaluated and just rendered directly to the output. I figured this would be the easiest way to start testing my Scanner class.

The test probably looked like this (I’m writing this way after the fact):

[Test]
public void It_scans_literal_text()
{
    var scanner = new Scanner();

    var parts = scanner.Scan("foo");

    CollectionAssert.AreEqual(new Part[]
                              {
                                  new LiteralText("foo"),
                              },
                              parts);
}

At this point, the test didn’t compile because I hadn’t defined my Scanner, Part, and LiteralText classes yet.

Having written the test first, I learned a few things about the Scanner class it was trying to test:

  • It has a default constructor
  • It has a method named Scan
  • Its Scan method takes in a string
  • Its Scan method returns an IEnumerable<Part> (I know this because of the parameters for CollectionAssert.AreEqual)

I also learned something about the LiteralText class:

  • It derives from Part (because I’m adding it to a Part array)
  • It has a constructor that accepts a string
  • It must override the Equals method or this will never work

Since this test is describing the Scanner class, I decided to work on it first:

public class Scanner
{
    public IEnumerable<Part> Scan(string template)
    {
        return null;
    }
}

This wouldn’t compile until I defined Part:

public class Part
{
}

The test still needed LiteralText to be defined:

public class LiteralText : Part
{
    public LiteralText(string text)
    {
    }
}

At this point, I was able to compile and run my test. When I did, NUnit said this:

Test 'Nustache.Tests.Describe_Scanner_Scan.It_scans_literal_text' failed: 
  Expected: < <Nustache.Core.LiteralText> >
  But was:  null

I liked that failure message, but I wanted to go a bit further and see what the failure message would be if I returned an empty array instead of null, since it didn’t make sense to me for Scan to return null. Scan changed to this:

public IEnumerable<Part> Scan(string template)
{
    return new Part[] { };
}

The failure message became this:

Test 'Nustache.Tests.Describe_Scanner_Scan.It_scans_literal_text' failed: 
  Expected is <Nustache.Core.Part[1]>, actual is <Nustache.Core.Part[0]>
  Values differ at index [0]
  Missing:  < <Nustache.Core.LiteralText> >

OK, that wasn’t too bad. Next, I wanted to see if I could get it to pass by doing the simplest thing I could possibly do so I changed Scan to this:

public IEnumerable<Part> Scan(string template)
{
    return new Part[]
               {
                   new LiteralText("foo")
               };
}

After seeing that pass, I would have implemented it a little more realistically, but I was in for a surprise. Instead of passing (which is what I expected), I got this failure message:

Test 'Nustache.Tests.Describe_Scanner_Scan.It_scans_literal_text' failed: 
  Expected and actual are both <Nustache.Core.Part[1]>
  Values differ at index [0]
  Expected: <Nustache.Core.LiteralText>
  But was:  <Nustache.Core.LiteralText>

Uh… Oh, yeah! LiteralText needs an override of the Equals method or NUnit will never be able to tell if one instance is “equal to” another.

In order to implement that, I needed to make sure the string passed in to the LiteralText constructor got saved in some sort of field or property. Then I could write my Equals override by hand or let ReSharper generate it for me.

I decided to let ReSharper do it (I’m lazy) and got three methods: bool Equals(LiteralText other), bool Equals(object obj), and int GetHashCode().

After getting that to work, I added a ToString method to LiteralText to make the failure message even clearer.
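
For reference, a hand-written equivalent of those methods might look roughly like the sketch below. This is my reconstruction, not ReSharper’s output verbatim, and the Part base class is a stand-in:

```csharp
using System;

// A stand-in base class (assumed shape):
public abstract class Part { }

public class LiteralText : Part
{
    private readonly string _text;

    public LiteralText(string text) { _text = text; }

    // Typed overload: two LiteralTexts are equal if their text matches.
    public bool Equals(LiteralText other)
    {
        if (ReferenceEquals(null, other)) return false;
        if (ReferenceEquals(this, other)) return true;
        return Equals(other._text, _text);
    }

    public override bool Equals(object obj)
    {
        // Non-LiteralText objects (including null) are never equal.
        return Equals(obj as LiteralText);
    }

    public override int GetHashCode()
    {
        return _text != null ? _text.GetHashCode() : 0;
    }

    // Makes NUnit's failure messages (and debugging) much clearer.
    public override string ToString()
    {
        return string.Format("LiteralText(\"{0}\")", _text);
    }
}
```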

See the problem? I went off and started implementing code in LiteralText when I was in the middle of trying to get a test for Scanner to pass! Sure, it’s just the Equals and GetHashCode methods, but it’s still code!

I did all of this in response to a test I was trying to get to pass, so I was still doing TDD, right?

Right?

At the time I was doing this, I didn’t even notice this “problem”. It wasn’t until much later when I decided to run my tests under NCover to see how I was doing. I was practicing TDD, so my coverage should have been pretty good, if not perfect. Sadly, I found I had a bunch of Equals, GetHashCode, and ToString methods that weren’t fully covered and were ruining my flawless code coverage report!

So what’s the big deal? Everybody agrees that 100% code coverage isn’t sufficient to ensure the correctness of your code. I absolutely agree with that. Many people also agree that getting 100% code coverage isn’t even worth it. That, I disagree with. As does Patrick Smacchia (author of NDepend) who described why 100% code coverage is a worthwhile goal here. It’s a great article and I highly recommend you all read it.

To rectify this predicament, I forced myself to write tests for my LiteralText class (writing tests after the fact is so boring!).

I discovered that, since I originally defined it, Part had grown a Render method and LiteralText was overriding it. The method was being covered by other tests, but there was nothing directly testing LiteralText. That might not be such a big deal, but one of the oft-touted benefits of unit tests is that they can also act as executable documentation. Since I had no unit tests for my LiteralText class, I had no executable documentation for it! How would I ever re-learn (months from now) how it’s supposed to behave without that?

OK, I’m being a bit silly, but I went for it anyways and I really liked the result. Here’s what I came up with:

[TestFixture]
public class Describe_LiteralText
{
    [Test]
    public void It_cant_be_constructed_with_null_text()
    {
        Assert.Throws<ArgumentNullException>(() => new LiteralText(null));
    }

    [Test]
    public void It_renders_its_text()
    {
        var a = new LiteralText("a");
        var writer = new StringWriter();
        var context = new RenderContext(null, null, writer, null);

        a.Render(context);

        Assert.AreEqual("a", writer.GetStringBuilder().ToString());
    }

    [Test]
    public void It_has_a_useful_Equals_method()
    {
        object a = new LiteralText("a");
        object a2 = new LiteralText("a");
        object b = new LiteralText("b");

        Assert.IsTrue(a.Equals(a));
        Assert.IsTrue(a.Equals(a2));
        Assert.IsTrue(a2.Equals(a));
        Assert.IsFalse(a.Equals(b));
        Assert.IsFalse(a.Equals(null));
        Assert.IsFalse(a.Equals("a"));
    }

    [Test]
    public void It_has_an_Equals_overload_for_other_LiteralText_objects()
    {
        var a = new LiteralText("a");
        var a2 = new LiteralText("a");
        var b = new LiteralText("b");

        Assert.IsTrue(a.Equals(a));
        Assert.IsTrue(a.Equals(a2));
        Assert.IsTrue(a2.Equals(a));
        Assert.IsFalse(a.Equals(b));
        Assert.IsFalse(b.Equals(a));
        Assert.IsFalse(a.Equals(null));
    }

    [Test]
    public void It_has_a_useful_GetHashCode_method()
    {
        var a = new LiteralText("a");

        Assert.AreNotEqual(0, a.GetHashCode());
    }

    [Test]
    public void It_has_a_useful_ToString_method()
    {
        var a = new LiteralText("a");

        Assert.AreEqual("LiteralText(\"a\")", a.ToString());
    }
}

As you can probably tell, I’m using a non-standard (for .NET developers) naming scheme for my tests. It’s inspired by RSpec and I really like it.

If you take away all the code, the ugly underscores, and the weird prefixes, you get the documentation:

  • LiteralText
    • can’t be constructed with null text
    • renders its text
    • has a useful Equals method
    • has an Equals overload for other LiteralText objects
    • has a useful GetHashCode method
    • has a useful ToString method

Could that get any clearer? (Seriously, leave a comment if you think it could.)

I formatted that list by hand, but generating it could easily be automated by processing NUnit’s XML output or using reflection on the test assembly. RSpec has a feature built in to it that can generate this kind of output. (Don’t worry, I don’t plan on switching to Ruby like every other .NET weblogger out there seems to be doing. I do think they have some great ideas, though, and have been really enjoying reading the beta version of the RSpec Book which is what made me try out this new naming scheme.)
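
As a sketch of the reflection approach (the reporter and all the names in it are hypothetical, not part of Nustache), something like this could walk a test assembly and produce that outline from the Describe_/It_ naming convention:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// A dummy fixture so the example has something to report on:
public class Describe_LiteralText
{
    public void It_renders_its_text() { }
    public void It_has_a_useful_ToString_method() { }
}

public static class SpecReporter
{
    // Turns Describe_X fixtures and their It_y test methods into
    // an indented, spec-style outline.
    public static IEnumerable<string> Outline(Assembly testAssembly)
    {
        foreach (var fixture in testAssembly.GetTypes()
            .Where(t => t.Name.StartsWith("Describe_")))
        {
            yield return fixture.Name.Substring("Describe_".Length);

            foreach (var test in fixture.GetMethods()
                .Where(m => m.Name.StartsWith("It_")))
            {
                yield return "  " + test.Name
                    .Substring("It_".Length)
                    .Replace('_', ' ');
            }
        }
    }
}
```

Apostrophes are lost along the way (“cant” comes out as “cant”, not “can’t”), but for generated documentation that seems like an acceptable trade.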

Even though this particular class is trivial, doing this helped set up an example to follow for the other Part subclasses which aren’t as simple. Also, I feel much more confident about this class knowing that there is a suite of tests in a single, well-named fixture that provides 100% code coverage for it.

I’m not saying that every class should be covered by one and only one fixture. If the class demands it, I’ll happily break its tests up into multiple fixtures. I could have one fixture per method or one fixture per context. I’m flexible about that. I’d treat the desire to do that as a possible smell that the class might be trying to do too much, though.

You can also see that I’m pretty flexible about not limiting myself to just one assertion per test. I strongly believe that most tests should only have one assertion but, in this case, it would have been ridiculous to have written a test case for each of the different ways Equals could be invoked.

I was also a little lax on the Arrange/Act/Assert format. This is another practice that I try to consistently follow. Some tests just don’t need anything to be set up! And NUnit’s Assert.Throws syntax kind of forces you to act and assert at the same time. There’s not a lot I can do about that.

Here’s the one thing that I’m firm on: Ensuring that I have a set of tests that fully cover 100% of the unit that they directly test is a Good Thing.

And here’s the million dollar question: What can I do to prevent these kinds of mistakes in the future? Is it even possible?

To be honest, I’m OK making mistakes as long as I can catch and fix them quickly enough. Did I know about the Part and LiteralText classes before starting work on Scanner? I can’t actually remember. Let’s pretend I didn’t. As soon as I saw that the test for Scanner was referencing other classes, should I have stopped what I was doing, put an Ignore attribute on the test, and started working on tests for those other classes first? I’m not so sure about that. I feel like doing that might have negatively impacted the journey I had already embarked on.

So maybe “failing” at TDD in this way is expected? It’s rare when a class has no dependencies. If I’m using tools like NUnit, NCover, and NDepend, I should be able to catch my “mistakes” pretty quickly. This means that my unit tests have to be fast or I’ll rarely run them and, if that happens, my mistakes won’t be caught until much later. By that point, I’ll have too many mistakes to fix, I’ll be out of time, forced to move on to the next task, and, there I go, abandoning everything I believe in (about developing and testing)!

And my coworkers wonder why I hate our “unit tests” that read from and write to the database so much…

Why won’t my iterator throw?

I ran into a spot of confusion last night doing some TDD on a method that returns an IEnumerable<T>. I was using multiple yield return statements in it which made the method an iterator and not just a normal method.

Even though I know how iterators work, I don’t use them enough to remember their idiosyncrasies, the main one being that it’s easy to “invoke” them without actually executing any of the code inside them!

To illustrate this using a highly contrived example, imagine you wrote the following test:

[Test]
public void It_throws_when_you_pass_in_null()
{
    Assert.Throws<ArgumentNullException>(
        () => MyObject.MyMethod(null));
}

And then implemented the method it tests like so:

public static IEnumerable<object> MyMethod(object arg)
{
    if (arg == null)
        throw new ArgumentNullException("arg");

    yield return "whatever";
}

Surprise! Your test fails.

Why? Because the code in your iterator doesn’t start executing until you invoke GetEnumerator on its return value then invoke MoveNext on that.

One really quick way to force these method calls to happen is to use the ToArray extension method:

[Test]
public void It_throws_when_you_pass_in_null()
{
    Assert.Throws<ArgumentNullException>(
        () => MyObject.MyMethod(null).ToArray());
}

ToList or manually iterating over the result with foreach would work just as well.

A better way to fix this is to change your implementation so that it uses two methods:

public static IEnumerable<object> MyMethod(object arg)
{
    if (arg == null)
        throw new ArgumentNullException("arg");

    return MyMethodHelper(arg);
}

private static IEnumerable<object> MyMethodHelper(object arg)
{
    yield return "whatever";
}

Written this way, MyMethod isn’t an iterator anymore. It’s just a plain old method that gets executed the way you’d expect it to. MyMethodHelper becomes the iterator. Its code won’t get executed until you start calling MoveNext, but that’s OK because the validation code you care about already ran.

Problem solved, right? Unfortunately, this wasn’t quite my exact problem.

My method was actually throwing after it did its argument checking, while it was yielding values. There’s really no way (that I know of) to handle this without invoking MoveNext (or something like ToArray that invokes it for you) until whatever condition triggers your exception is reached.
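
To make that concrete with another contrived example (mine, not the original Nustache code): here the exception fires in the middle of yielding, so even the two-method split above wouldn’t help, and only actually iterating triggers it:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TokenExample
{
    public static IEnumerable<string> Tokens(string input)
    {
        if (input == null)
            throw new ArgumentNullException("input");

        foreach (var c in input)
        {
            // This throw happens between yields, so it's deferred
            // until enumeration actually reaches the bad character.
            if (c == '!')
                throw new FormatException("unexpected '!'");

            yield return c.ToString();
        }
    }
}
```

Calling Tokens("a!") by itself throws nothing; only when something like ToArray drains the sequence does the FormatException surface.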

Adding ToArray to my test wasn’t a big deal, but it took me a bit of time to figure out why it was failing. I was actually setting breakpoints in my method, running the test in the debugger, and tripping out when my breakpoints weren’t hitting.

Maybe going through the trouble of writing this post will save me 10 minutes next time I write an iterator.

Autotest for Python

I recently went looking for an autotest equivalent for Python.

This question on StackOverflow pointed me to autonose. It wasn’t that easy to install using easy_install, since one of its dependencies (snakefood) failed to install, so I had to install that one manually.

Unfortunately, autonose has a few issues, especially when running on Windows. Since it doesn’t appear to be updated anymore, I went searching for an alternative and found pyautotest, part of the Modipyd project. It doesn’t use nose, but that’s OK, because I wasn’t using anything that required it.

I had to install Modipyd by downloading its source from this GitHub repository and running python setup.py install.

Pyautotest is exactly what I was looking for: simple, and it works right out of the box without a nest of dependencies. The only part I was missing was support for Growl for Windows.

Ian Lewis and his co-workers released some custom test runners that can be used via pyautotest. The Growl version uses the growlnotify tool, but that didn’t work with the version of growlnotify.exe that works on Windows.

I started modifying their runner to work with growlnotify.exe, but it really bothered me how they copied and pasted the entire contents of the unittest.TextTestRunner.run method into their derived class so I threw together my own version which doesn’t contain such a flagrant disregard for object-oriented principles. You can clone/fork it here.

For the icons, I used Jamie Hill’s pass/fail smilies which I found here.

I hope this helps others trying to do TDD with Python. I absolutely love saving in Vim and seeing a green smiley face immediately appear. =)

AutoRunner Downloads

I took some time tonight to throw together a build script for producing proper releases of AutoRunner. If you don’t feel like compiling it yourself, you can get a pre-compiled version here.

I used ILMerge to merge the Growl for Windows assemblies into the executable so it’s basically a single file now.

By the way, last night on Twitter, Steve Bohlen pointed me to this port of the original autotest to .NET. I took a look at the code and it was much more complex than I was looking for. It actually builds and runs your tests every time you save which is much more often than I want.

I had AutoRunner turned on all day at work today and was loving how it would catch me breaking the tests when I wasn’t expecting it to. =)

AutoRunner

I recently came across this awesome code kata performance by Corey Haines here.

Besides enjoying and learning from his actual performance, I was really impressed by his use of a Ruby tool called autotest. (I’m not sure, but it looks like it has become autospec.)

Not being a Ruby developer, I wanted the same thing for .NET. I did some searching, but my Google-fu failed me so I spent an hour hacking together my own.

The result is called AutoRunner (I know–way creative) and its source is available on GitHub.

If you want it, you have to download the source and compile it yourself for now. (UPDATE: You can now download it here!) Run it from your favorite console (PowerShell, right?) without any arguments to see what options it accepts.

AutoRunner is a little more general purpose than autotest/autospec is. Basically, it can run any executable when any file changes.

What I wanted it for was to run nunit-console.exe whenever my current tests assembly was rebuilt. To do that, I just invoke it with the right arguments.

If you have Growl for Windows running, it will send it a notification which is pure eye candy and not necessary to actually get it to run your tests.

It’s not a Visual Studio add-in. It’s just a plain old console application. Using Visual Studio’s External Tools feature, however, it’s almost as good as an add-in. I set up an external tool with the appropriate arguments and it’s good to go for all of my projects.

To set this up for yourself, you’d create a new external tool with its command set to the path where you built AutoRunner.exe and its arguments set to something like the following (I’ve separated the options on their own lines, but you wouldn’t do that in Visual Studio):

--target $(BinDir)\$(TargetName)$(TargetExt)
--exe C:\path\to\nunit-console.exe
--pass "$(TargetName) FTW!"
--fail "Oh noes! $(TargetName) is FAIL!"

You can use whatever test runner you like, of course. Please note that you must have a file from your tests project open or selected in Solution Explorer when you activate the tool or your AutoRunner instance will be watching the wrong DLL!

It doesn’t support plug-ins the way autotest does and most of its functionality is hard-coded for now. If anybody finds it useful, let me know and maybe we can work on improving it together.

SharpTestsEx

I’ve been using Fabio Maulo‘s NUnitEx project to get fluent assertions on a personal project recently and have been loving it.

He then moved on to a new project called SharpTestsEx, which he intended to be framework-agnostic but which initially worked only with MSTest. That prevented me from being able to use it (since I never saw a compelling reason to switch to MSTest).

Fabio was kind enough to let me make the changes necessary to remove the dependency on MSTest. I also made framework-specific versions of SharpTestsEx for MSTest, NUnit, and xUnit. The framework-specific versions aren’t really necessary, but they make the error messages a tiny bit prettier if you use the right one for the test framework you’re using.

You can read Fabio’s announcement here.

I plan on updating my personal project to using SharpTestsEx next.

I just realized I never posted about that project; I’ll have to get around to that soon. If you’re curious, it’s a behaviour-driven development framework for .NET called BehaveN. I’m using it at my work and we’re loving it.

TestDriven.NET Keyboard Shortcuts

I’m so sick of Ctrl+Tab’ing back to my test file, finding the test I want to run, right-clicking it with the mouse, and selecting Run Test(s) or Test With.

How come nobody told me that TestDriven.NET came with keyboard shortcuts? They have to be mapped manually, but once I did that, I found them indispensable.

  • Alt+1 : TestDriven.NET.RunTests
  • Alt+2 : TestDriven.NET.RerunWithDefault
  • Alt+3 : TestDriven.NET.Debugger
  • Alt+4 : TestDriven.NET.RerunWithDebugger

I especially love the two shortcuts that repeat the most recent test run for me. Yeah, I know you can right-click in any file and select Repeat Test Run, but if I want to alternate between running inside or outside the debugger, I have to go find the test again. With my shortcuts, I can just hit Alt+2 or Alt+4 no matter where I am.