Thursday, May 17th, 2018

Is XP Unsuitable for a Diverse Team?

The discussion started on Twitter with this comment from Sarah Mei:

Wondering why all the agile/XP stuff (like pairing, TDD, etc) doesn’t seem to work for a heterogenous team?

It’s because they were developed by a bunch of white dudes. The practices assume the practitioners all have A LOT of built-in privilege.

This got a lot of reaction, as you might expect, with more heat than light evident. Much of that discussion was about what is and isn’t civil behavior on Twitter. I’m not going there because I think that ship sailed a long time ago and it’s not coming back.

I will say that I don’t believe issues of diversity in our software teams are a matter of politics. That is, “flaming liberal” I may be, but I nevertheless don’t expect intelligent conservatives to be opposed to inclusiveness, equal opportunity and respect for everyone in our companies and teams. Of course, I’m sometimes disappointed, but that happens with people who claim to share my political views as well. In the wild, neither liberals nor conservatives have cornered the market on treating people badly.

Back on point… eventually I managed to find a Twitter essay by Sarah Mei that explained her position more fully. Someone has kindly put it together in one place at https://github.com/retrosight/learning/blob/master/why-agile-xp-so-often-fails-heterogenous-teams-sarah-mei.md and I suggest reading it in full.

There’s stuff in there that strikes me as wrong or mis-stated, but other points that strike a chord. In any case, I think it’s a point of view that anyone taking XP seriously has to deal with.

A few minor things first:

1. The folks at Snowbird invented Agile as a term to describe the principles they held in common even though their individual methodologies were quite diverse.

2. Most of the problems Sarah Mei describes seem to be around pairing, which is pretty much exclusively associated with XP. In this post, I’m sticking with what I know, which is XP.

So what about pairing? Does the creation of a diverse team make pairing more difficult? I’d have to say “Of course it does!” When we began to move testers out of separate organizations and into XP teams, that made things more difficult. Same goes for database folks, designers, UX experts, etc. It has never been possible to just tell people to pair together and have it work. Somebody has to sit down and show the team members how to do it, just as for any new skill. Failure to show people how to pair is an anti-pattern I saw for years as an outside coach brought into companies that imagined they were doing XP.

That said, diversity in race and gender is a different matter, because here we are talking about privilege and disadvantage, not just within the team but in society as a whole. If the team is 100% in agreement with the need for such diversification, then we “merely” have to teach them how to deal with imbalances of power – real or perceived. Ideally, we would make those imbalances disappear, at least within the team. We would make the team a safe place for its members.

On the other hand, the existing team may not be in 100% agreement. Some may oppose efforts to employ more women or people of color, or to give them equal status. In that case, I don’t think it’s any longer a question of software methodology but of the company’s willingness to enforce standards of behavior.

I didn’t write this post to offer a recipe or a remedy but to call for a discussion of the issue among members of the XP community. Next week at XP 2018 in Porto would be good for me!

Tuesday, May 1st, 2018

What’s With NUnit 2.6.5?

Some folks have expressed surprise at my release of NUnit 2.6.5. Their surprise is no surprise, given that the NUnit framework is now at version 3.10.1!

So why release a 2.6.5 version now?

For an answer, we have to go back to the first release of NUnit 3.0. At that time, we felt that NUnit 3 was going to be so attractive that people would rush to convert. For that reason, NUnit 2.6.4 was to be the last release of the NUnit V2 series. We archived the code and stopped accepting bugs on it.

We weren’t entirely wrong. NUnit 3 is a great improvement, so great that even those who are unable to convert to it wish that they could! And there lies the problem we didn’t fully consider: there are many, many external factors that can keep people from being able to convert. We knew this would be a factor but seriously underestimated how many people would be affected.

To make matters worse, as NUnit 3 progresses through new releases, these folks get farther and farther behind, making their eventual conversion that much harder. The list of things you have to change in your code when converting from NUnit V2 to the latest NUnit 3 version just keeps getting bigger. It would be very convenient if folks on V2 could keep up with at least some of the ongoing changes.

Enter the NUnit Legacy Project!

The aim of this project is to support users of older NUnit V2 software in two specific ways:

1. By giving them information about exactly how much of their code may need changing in order to convert to NUnit 3.

2. By providing them with ways to keep their code more closely conformant with NUnit 3, where doing so doesn’t require a complete re-architecting of NUnit V2.

NUnit 2.6.5 is the first software release from the NUnit Legacy Project. It provides enhancements in both of the categories just listed, including a compatibility report that points out the places in your code where changes may be needed in order to upgrade. Additional enhancements in 2.6.5 provide compatible replacements for classes and methods that are no longer supported in NUnit 3, allowing you to immediately reduce the list of compatibility issues you may have to deal with.

Documentation and downloads for NUnit 2.6.5 and coming releases may be found at http://nunitsoftware.com/nunitv2.

Wednesday, January 24th, 2018

Microtests and Test Frameworks

I’m a big fan of microtests – both the term and the thing itself. My friend Hill coined the term quite a while back and I felt it completely solved the problem of ambiguity we agile folks were having when we talked about unit tests in front of people who understood the term in the way it was used 30 or more years ago.

I’m not going to describe what microtests are here. If you aren’t familiar with the term, go watch Hill’s video clip about them right now. We’ll talk more later.

Back already? Great! So… a while back I started to wonder if the NUnit test framework, which I have worked on since 2004, might be leading people to write bigger and more complex tests by having too many features. What would happen, I asked myself, if a framework had only those features that support microtests? And by the way… which features would those be?

I put the question to a group of experienced coaches and – as you might expect – got varying answers. Some people thought it was a good idea while others felt that having a separate tool for microtests would be a burden. There was a fair amount of agreement about what features were needed but also some disagreement on specific items.

I’ve decided to start out with a specification of features that will be most useful in microtests. There could be separate frameworks to support those features or you could just use a standard test framework and limit yourself to a particular subset. A framework could even support a setting that would warn you if you got out of the usual territory for microtests.

So here’s my short list of features, at least for now; a sketch of the sort of test I have in mind follows the list. Let me know what you think of it.

1. A full set of assertions, such as those supported by NUnit or JUnit. Assertions designed for access to the file system or databases, however, would be excluded.

2. A full set of test-identification attributes, including those that support data-driven tests.

3. Some way to create shared setup and teardown for tests. This is controversial as some people think it’s an antipattern to use separate setup methods. In the end, I decided it should be available but de-emphasized. Higher-level setup (fixture, namespace, assembly) would not be supported, however.

4. Simple reporting of test results without adding on any extra components.

5. NOTHING else, at least initially. In particular, NO way to order tests or define dependencies between them.
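
To make the list concrete, here’s a minimal sketch of the kind of test the subset would support. I’m borrowing NUnit’s attribute and assertion names purely for illustration; a dedicated microtest framework might spell them differently.

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StackMicrotests
{
    private Stack<int> stack;

    [SetUp]
    public void CreateStack() // shared setup (feature 3), kept deliberately small
    {
        stack = new Stack<int>();
    }

    [Test]
    public void NewStackIsEmpty()
    {
        Assert.That(stack.Count, Is.EqualTo(0)); // plain assertions (feature 1)
    }

    [TestCase(1)]
    [TestCase(42)] // data-driven cases (feature 2)
    public void PushThenPopReturnsSameValue(int value)
    {
        stack.Push(value);
        Assert.That(stack.Pop(), Is.EqualTo(value));
    }
}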

Tell me what you think of the list. Would you use such a framework if it existed?

Tuesday, January 23rd, 2018

Future of NUnit

A while back I began to have some concern about the future of NUnit. I was entering my 70s and I knew I wanted to spend more time on other things. NUnit had been very much my project for a few years and I didn’t want it to die when I was no longer maintaining it.

Starting about three years ago, we began the process of phasing me out of the project. It was difficult and time-consuming but now the transition is pretty well complete. Today, we have a much larger group of contributors and team members than ever before. I’m still involved but my level of participation is much reduced.

The NUnit Project is now run by a team (the Core Team) rather than an individual and is supported by the .NET Foundation. The Core Team consists of Rob Prouse (chair), Terje Sandstrom, Chris Maddock, Joseph Musser and myself. The Core Team makes the big decisions, similar to a board of directors, rather than running individual projects. Each of the projects under NUnit has its own team and team leader, although there is a fair amount of overlap. I think that’s a great formula for future success.

With a decentralized structure like this, there is both an opportunity and a need for more people to step up into leadership positions. I hope more people will join the developers of NUnit as time goes on and that some of you who have been involved for a while will consider taking responsibility for some of the projects or subprojects we operate.

For myself, I’ll continue to be a member of the Core Team and will continue to contribute to the codebase. But most of my open source work will be in other projects, some related to NUnit and some more independent. I’ll be posting here about some of the things I’m working on as they come closer to fruition, so please follow me here if you are interested.

Thursday, September 22nd, 2016

NUnit-Summary Becoming an “Official” NUnit Application

UPDATE: I’m leaving the post here but the action described has been reversed and the project continues to live at https://github.com/charliepoole/nunit-summary

NUnit-Summary is an “extra” that I’ve maintained personally for some time. It uses built-in or user-supplied transforms to produce summary reports based on the results of NUnit tests.

I have contributed it to the NUnit project and we’re working on updating it to recognize NUnit 3 test results. The program has never had a 1.0 release, but we expect to produce one soon.

This old post talks about the original nunit-summary program.

Thursday, September 22nd, 2016

An Engine Extension for Running Failed Tests – Part 1: Creating the Extension

In a recent online discussion, one of our users talked about needing to re-run the NUnit console runner, executing just the failed tests from the previous run. This isn’t a feature in NUnit but it could be useful to some people. So… can we do this by creating an Engine Extension? Let’s give it a try!

The NUnit Test Engine supports extensions. In this case, we’re talking about a Result Writer extension: one that takes the output of a test run from NUnit and creates an output file in a particular format. We want that output to be a text file with each line holding the full name of a failed test case. Why that format? Because it’s exactly the format that the console runner already recognizes for the --testlist option, so the file we create can be used as input to a subsequent test run.
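
For example, a run with two failures might produce a file like this (the test names are hypothetical):

MyTests.AccountTests.CanWithdraw
MyTests.AccountTests.CannotOverdraw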

Information about how to write an extension can be found on the Writing Engine Extensions page of the NUnit documentation. Details of creating a ResultWriter extension can be found on the Result Writers page.

To get started, I created a new class library project called failed-tests-writer. I made sure that it targeted .NET 2.0, because that allows it to run under the widest range of runtime versions, and I added a package reference to the NUnit.Engine.Api package. That package will be published on nuget.org with the release of NUnit 3.5. Since that’s not out yet, I used the latest pre-release version from the NUnit project MyGet feed by adding https://www.myget.org/F/nunit/api/v2 to my NuGet package sources.

Next, I created a class to implement the extension. I called it FailedTestsWriter. I added using statements for NUnit.Engine and NUnit.Engine.Extensibility and implemented the IResultWriter interface. I gave my class Extension and ExtensionProperty attributes. Here is what it looked like when I was done.

using System;
using System.IO;
using System.Text;
using System.Xml;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

namespace EngineExtensions
{
    [Extension, ExtensionProperty("Format", "failedtests")]
    public class FailedTestsWriter : IResultWriter
    {
        // Throws if the output path is not writable.
        public void CheckWritability(string outputPath)
        {
            using (new StreamWriter(outputPath, false, Encoding.UTF8)) { }
        }

        public void WriteResultFile(XmlNode resultNode, string outputPath)
        {
            using (var writer = new StreamWriter(outputPath, false, Encoding.UTF8))
            {
                WriteResultFile(resultNode, writer);
            }
        }

        public void WriteResultFile(XmlNode resultNode, TextWriter writer)
        {
            // Write the full name of each failed test case, one per line.
            foreach (XmlNode node in resultNode.SelectNodes("//test-case[@result='Failed']"))
                writer.WriteLine(node.Attributes["fullname"].Value);
        }
    }
}

The ExtensionAttribute marks the class as an extension. In this case, as in most cases, it’s not necessary to add any arguments: the engine can deduce how the extension should be used from the fact that it implements IResultWriter.

As explained on the Result Writers page, this type of extension requires use of the ExtensionPropertyAttribute so that NUnit knows the name of the format it implements. In this case, I chose to use “failedtests” as the format name.

The CheckWritability method is required to throw an exception if the provided output path is not writable. We do that very simply by trying to create a StreamWriter. The empty using statement is merely an easy way to ensure that the writer is closed.

The main point of the extension is accomplished in the second WriteResultFile method. A foreach statement selects each failing test, which is then written to the output file.

Testing the Extension

That explains how to write the extension. In Part 2, I’ll explain how to deploy it. Meanwhile, I’ll tell you how I tested my extension in its own solution, using nunit3-console.

First, I installed the package NUnit.ConsoleRunner from nuget.org. I used version 3.4.1. Next, I created a fake package subdirectory in my packages folder, so it ended up looking like this:

packages
    NUnit.ConsoleRunner.3.4.1
    NUnit.Engine.Api.3.5.0-dev-03211
    NUnit.Extension.FailedTestsWriter
        tools
            failed-tests-writer.dll

Note that the new extension “package” directory name must start with “NUnit.Extension.” in order to trick the console-runner and engine into using it.

With this structure in place, I was able to run the console with the --list-extensions option to see that my extension was installed and I could use a command like

nunit3-console mytests.dll --result:FailedTests.lst;format=failedtests

to actually produce the required output.
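
The resulting file can then be fed straight back into the runner using the --testlist option it was designed for:

nunit3-console mytests.dll --testlist=FailedTests.lst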

Wednesday, September 21st, 2016

Back to Blogging!

My blog has been offline for a long time, as you can see. The last prior post was in 2009!

Recently, I found a backup copy of the old blog and was able to re-establish it. Watch for some new posts in the near future.

Saturday, May 2nd, 2009

Using Lambdas as Constraints in NUnit 2.5

Let’s say you have an array of ints representing years, all of which should be leap years.

One way to test this would be to write a custom constraint, LeapYearConstraint. You could then use it with the Matches syntax element to write your test as

Assert.That( array, Is.All.Matches( new LeapYearConstraint() ) );
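
For reference, here’s a sketch of what that constraint might look like, as I recall the NUnit 2.5 Constraint base class; the exact failure message is up to you:

using NUnit.Framework.Constraints;

// Illustrative only: matches years that are leap years.
public class LeapYearConstraint : Constraint
{
    public override bool Matches(object actualValue)
    {
        this.actual = actualValue; // saved for use in the failure message
        int year = (int)actualValue;
        return year % 4 == 0 && year % 100 != 0 || year % 400 == 0;
    }

    public override void WriteDescriptionTo(MessageWriter writer)
    {
        writer.Write("a leap year");
    }
}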

But creating a new constraint for this ad hoc problem seems like a bit of overkill. Instead, assuming you are working with C# version 3, try this:

Assert.That( array, Is.All.Matches( (int x) => x % 4 == 0 && x % 100 != 0 || x % 400 == 0 ) );

If it fails, it will give a generic message: “Expected: matching lambda expression”, since NUnit is actually built with .NET 2.0, but for a quick test it may be just the tool you need.

Wednesday, April 29th, 2009

Ten Reasons to Try NUnit 2.5

NUnit 2.5 has so many new features (see the release notes) that I thought I’d try to come up with my top-ten favorites. It was hard to get down to ten, but here’s what I came up with…

Reason 1: Data-Driven Tests

Users of mbUnit and xUnit.net have enjoyed the flexibility that data-driven (aka parameterized) tests provide for some time. NUnit implements this paradigm in its own way, with its own set of attributes. Test methods may have arguments, and the data for them may be supplied in a number of ways: inline, from a separate method or class, or randomly. This feature gives you a succinct way to express a set of examples to be used in running individual test cases.
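
For example, supplying the data inline is as simple as this:

[TestCase(12, 3, 4)]
[TestCase(12, 2, 6)]
[TestCase(12, 4, 3)]
public void DivideTest(int n, int d, int q)
{
    Assert.AreEqual(q, n / d);
}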

Reason 2: Theories

As used in NUnit, a Theory is a generalized statement of how a program should operate, like “For any positive number, the square root is defined as the positive or negative number, which, when multiplied by itself, gives the original number.” Traditional, example-based testing allows you to select one or more sets of values to use in testing such a program. A Theory, on the other hand, allows you to express the generalization itself, writing a test that will pass for whatever values are passed to it, provided they meet the stated constraints. David Saff has written a number of papers about the use of Theories in testing and has implemented this construct as a part of JUnit. Now you can use the same construct in any .NET language.
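
Here’s a sketch of the square root theory in NUnit 2.5 terms: Datapoints supplies candidate values, and Assume filters out those that don’t satisfy the theory’s preconditions.

using System;
using NUnit.Framework;

[TestFixture]
public class SquareRootTheory
{
    [Datapoints]
    public double[] values = new double[] { 0.0, 1.0, 2.0, -1.0, 42.0 };

    [Theory]
    public void SquareRootDefinition(double num)
    {
        Assume.That(num >= 0.0); // the theory applies only to non-negative numbers

        double sqrt = Math.Sqrt(num);

        Assert.That(sqrt >= 0.0);
        Assert.That(sqrt * sqrt, Is.EqualTo(num).Within(0.000001));
    }
}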

Reason 3: Inline Expected Exception Tests

Testing that an expected exception is thrown correctly has always been an issue in NUnit. The ExpectedExceptionAttribute has been available since early releases but has a number of problems. It tests that the exception was thrown somewhere in the test, without specifying the exact place in the code, and it is subject to the syntactic limitations that apply to use of an attribute. With the introduction of the Assert.Throws assertion method and the even more powerful constraint expressions Throws.Exception, Throws.InstanceOf and Throws.TypeOf, exception testing logic can now be moved right into the test along with any other necessary assertions.
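
As a sketch, with a hypothetical service object whose DoWork method rejects null:

// Assert.Throws returns the caught exception, so further assertions can follow.
ArgumentNullException ex = Assert.Throws<ArgumentNullException>(() => service.DoWork(null));
Assert.That(ex.ParamName, Is.EqualTo("request")); // "request" is the hypothetical parameter name

// The same test, using the constraint syntax
Assert.That(() => service.DoWork(null), Throws.TypeOf<ArgumentNullException>());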

Reason 4: Generic Support

NUnit 2.5 provides separate framework assemblies for .NET 1.x and 2.0+. Up to 2.4, NUnit avoided any use of generics in order to maintain backward compatibility. In 2.5, the framework assembly used under .NET 2.0 or higher provides a number of generic Asserts and Constraint expressions for convenience. More significantly, your test methods and classes may now be generic, and NUnit will specialize them using the types you provide.
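
As a sketch, a single generic fixture can be specialized for several types by supplying them through TestFixture attribute arguments:

using System.Collections;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture(typeof(ArrayList))]
[TestFixture(typeof(List<int>))]
public class ListTests<TList> where TList : IList, new()
{
    [Test]
    public void CanAddToList()
    {
        IList list = new TList();
        list.Add(1);
        Assert.That(list.Count, Is.EqualTo(1));
    }
}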

Reason 5: Lambda Support

If you write your tests using C# 3.0, you may use Lambda expressions in a number of places where NUnit expects a delegate. This is particularly useful in providing a custom definition of equality without explicitly defining an IComparer<T> and can even be used to apply an arbitrary predicate to the members of a collection.

Reason 6: Out-of-Process Execution and Runtime Selection

NUnit 2.4 ran all tests within the same process, using one or more AppDomains for isolation. This works fine for many purposes, but has some limitations. NUnit 2.5 extends this concept to running tests under one or more separate processes. Aside from the isolation it provides, this allows running the tests under a different version of the .NET runtime from the one NUnit is currently using.
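
On the command line that looks something like this, assuming the option spellings from the NUnit 2.5 console documentation, where /process selects the isolation level and /framework the target runtime:

nunit-console mytests.dll /process=Separate /framework=net-2.0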

Reason 7: PNUnit

PNUnit stands for “parallel NUnit” and is an extension developed by Pablo Santos Luaces and his team at Codice Software and contributed to NUnit. It’s a new way to test applications composed of distributed, communicating components. Tests of each component run in parallel and use memory barriers to synchronize their operation. Currently, pNUnit uses a special executable to launch its tests. In the future, you will be able to run pNUnit tests from the standard NUnit console or gui runner.

Reason 8: Source Code Display

The new stack trace display in the Errors and Failures tab of the Gui is able to display the source code at the location where a problem occurred, provided the source is available and the program was compiled with debug information. Currently, syntax coloring for C# is provided and other languages are treated as simple text, but additional syntax definitions will be available in the future.

Reason 9: Timeout and Delayed Constraints

These are two separate features, but they are related. Besides, I’m working hard to keep this down to only ten points! It’s now possible to set a timeout, which will preemptively fail a test if it is exceeded. This may be done on a method, fixture or assembly, or as a global default. On the other hand, if you need to wait for an action to take place after a delay, you can use the After syntax to delay the application of the constraint. NUnit will subdivide a long delay and apply your test repeatedly until the constraint succeeds or the specified amount of time is up!
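
A sketch of both features; the timeout value, delay and polling interval are arbitrary, and the helper methods are hypothetical:

private volatile bool workIsDone;

[Test, Timeout(2000)] // fail the test if it runs longer than 2 seconds
public void CompletesQuickly()
{
    DoSomethingThatShouldBeFast(); // hypothetical
}

[Test]
public void SeesDelayedResult()
{
    StartBackgroundWork(); // hypothetical: sets workIsDone when finished
    // Retry the constraint for up to 5 seconds, polling every 200 ms
    Assert.That(() => workIsDone, Is.True.After(5000, 200));
}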

Reason 10: Threading Attributes

In past releases, if any test needed to run in the STA, the entire test run had to use the STA. With 2.5, any method, fixture or assembly may be given an attribute that causes it to run on a separate thread in the STA. Other attributes allow requiring an MTA or simply running on a separate thread for isolation. This can eliminate a lot of the boilerplate code previously required to create a separate thread, launch it and capture the results for NUnit.
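
A sketch using the 2.5 attribute names as I recall them (RequiresSTA, RequiresMTA and RequiresThread):

[Test, RequiresSTA]
public void TestNeedingSTA()
{
    // runs on a separate thread in the Single Threaded Apartment
}

[Test, RequiresThread]
public void TestNeedingIsolation()
{
    // runs on its own thread, simply for isolation
}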

This is my own list, of course. Yours may vary. Download the release, try it out and let me know what your own favorites are.

Wednesday, January 7th, 2009

Code Generation in NUnit

The latest code for NUnit 2.5 includes seven generated files, including the Assert class and most of the classes that allow you to write constraint expressions using the NUnit fluent syntax. Some people have asked if generating these files is worth the effort, since the code created is very simple anyway.

There are two reasons for generating this code. The first relates to the syntactic constructs. While it’s relatively straightforward to create a custom constraint, and various people have done so, such constraints must be used by invoking their constructors rather than by use of a simple keyword. So, for example, if you have written an OddNumberConstraint that tests whether a number is odd and displays an appropriate failure message, you are still not able to write Assert.That(num, Is.Odd) without directly modifying NUnit.

It turns out, based on experience of several people who have tried, that the syntactic modification has a lot of places where you can go wrong. You have to modify at least three additional files, even after you have written the constraint. Using NUnit’s code generation facility, you would simply add a line like this to NUnit’s SyntaxElements.txt:

Gen3: Is.Odd=>new OddConstraint()

Then, after running NUnit’s code generation tool, the files Is.cs, ConstraintFactory.cs and ConstraintExpression.cs would be updated. After rebuilding NUnit – or just the framework – the statement Assert.That(num, Is.Odd) would compile and work correctly. If you wanted a classic assert, you could add the line

Gen: Assert.IsOdd(int num)=>Assert.That(num, Is.Odd)

and Assert.IsOdd would become available for your use, including overloads with an error message and optional arguments.
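
To give a feel for what the generator emits, the Gen line above would expand to something along these lines. This is my reconstruction for illustration, not the literal generated code:

public static void IsOdd(int num)
{
    Assert.That(num, Is.Odd);
}

public static void IsOdd(int num, string message)
{
    Assert.That(num, Is.Odd, message);
}

public static void IsOdd(int num, string message, params object[] args)
{
    Assert.That(num, Is.Odd, message, args);
}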

So, one good reason for generating code is to make it easier to extend NUnit. But an even more important reason is reliability. Take the Assert class as an example. Some of the methods have as many as 24 overloads. In the past, we have seen hidden bugs that affected only one infrequently used overload. By generating the code, we can ensure that the same logic is used in each overload. This doesn’t prevent errors, but it does make it likely that the error will be caught, since it will generally impact many of the overloads in the same way. What’s more, the layout of the SyntaxElements file puts things that need to be updated together right next to one another, so it’s much harder to forget a step.

The NUnit code generation program, GenSyntax.exe, is distributed with the NUnit source, in the tools directory.