At the AltNetSeattle Open Spaces conference at the end of February, Aaron Jensen convened a session on how the AutoMockingContainer that they created is not working for them and where to go from here. I went into the session expecting an overview of Machine.Container, but it turned out to be more of a discussion of the issues that people are having using mock objects in Test Driven Development.
Basically, the problem is that refactoring breaks tests, either because the tests are brittle or because they are no longer relevant after the refactoring. So you spend your time maintaining tests instead of maintaining code.
Aaron described the evolution of their process: they started with manual mocking in fine-grained unit tests, but found that they were creating a lot of mocks. In response, they came up with the AutoMockingContainer. It worked great, but mocking entities was not as useful because the entities were persistence ignorant anyway.
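The core idea of an auto-mocking container is that it builds the class under test and supplies a mock for every constructor dependency automatically, so the test doesn't have to wire them up by hand. The session didn't show the AutoMockingContainer's actual API, so here is a hypothetical minimal sketch of the technique in Python (the AutoMocker class, its method names, and the OrderService example are all made up for illustration):

```python
import inspect
from unittest.mock import MagicMock

class AutoMocker:
    """Hypothetical minimal auto-mocking container: resolves every
    constructor parameter of the class under test to a fresh mock."""

    def __init__(self):
        self._mocks = {}

    def create(self, cls):
        sig = inspect.signature(cls.__init__)
        kwargs = {}
        for name in sig.parameters:
            if name == "self":
                continue
            mock = MagicMock(name=name)
            self._mocks[name] = mock   # remember it so the test can stub/verify
            kwargs[name] = mock
        return cls(**kwargs)

    def get_mock(self, name):
        return self._mocks[name]

# Example: the class under test declares its dependencies in its constructor.
class OrderService:
    def __init__(self, repository, notifier):
        self.repository = repository
        self.notifier = notifier

    def place(self, order):
        self.repository.save(order)
        self.notifier.send("order placed")

container = AutoMocker()
service = container.create(OrderService)   # no manual mock wiring
service.place("widget")
container.get_mock("repository").save.assert_called_once_with("widget")
```

The payoff is that adding a dependency to the constructor doesn't break every existing test's setup, which is exactly the kind of brittleness the session was about.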
This led to a move toward fluent fixtures, or what some people call fluent builders (e.g. new Student("Bob").EnrolledIn(new Course("101")) ). The goal of fluent builders is to make it easy to create object graphs without necessarily specifying business behavior. However, even this approach has problems: refactoring or decomposing a service breaks the tests because they are no longer relevant.
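The fluent style works because each builder method returns the object itself, so calls chain. The C# example above can be sketched in Python like this (Student and Course here are hypothetical stand-ins, not code from the session):

```python
class Course:
    def __init__(self, number):
        self.number = number

class Student:
    """Hypothetical fluent builder in the style of the post's C# example."""

    def __init__(self, name):
        self.name = name
        self.courses = []

    def enrolled_in(self, course):
        self.courses.append(course)
        return self  # returning self is what makes the API "fluent"

# Building a small object graph reads almost like a sentence:
bob = Student("Bob").enrolled_in(Course("101")).enrolled_in(Course("202"))
```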
Introducing a Container
Jimmy Bogard asked about introducing an Inversion of Control (IoC) container. Aaron said that the problem with containers is that registration is slow and they can get in the way of testing. For example, a container caches objects, but in tests you may want more control than a traditional container allows. For testing, you need a container that does fast registration, lets you override registrations with mocks, and allows for a lifetime reset between tests. Machine.Container addresses some of these issues, but he did not go into detail on it. Instead, the session became more of a discussion about how different people deal with the brittleness.
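The three requirements above (fast registration, mock overrides, lifetime reset) can be illustrated with a minimal sketch. This is not Machine.Container's API, which the session didn't show; TestContainer and its methods are hypothetical names:

```python
from unittest.mock import MagicMock

class TestContainer:
    """Hypothetical minimal test-friendly container: register factories,
    override any of them with a mock, and reset cached lifetimes."""

    def __init__(self):
        self._factories = {}
        self._instances = {}

    def register(self, key, factory):
        self._factories[key] = factory

    def override(self, key, instance):
        # A test can replace any registration with a mock or stub directly.
        self._instances[key] = instance

    def resolve(self, key):
        if key not in self._instances:        # cache (singleton-style lifetime)
            self._instances[key] = self._factories[key]()
        return self._instances[key]

    def reset(self):
        # Lifetime reset: drop cached instances so each test starts clean.
        self._instances.clear()

c = TestContainer()
c.register("clock", lambda: object())
first = c.resolve("clock")
assert c.resolve("clock") is first      # cached within a lifetime
c.reset()
assert c.resolve("clock") is not first  # fresh instance after reset

fake = MagicMock()
c.override("clock", fake)               # test swaps in a mock
assert c.resolve("clock") is fake
```

The point of the sketch is the shape of the API, not the implementation: a production container caches for performance, while a test container must make that cache easy to override and throw away.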
No real answers, yet
Aaron went through a couple of examples of starting with the controller and adding checkpoints as you flesh out the implementation. He also brought up the idea of having tests that fail (warning tests?) but don't stop the rest of the test run. Ian Cooper mentioned their use of FitNesse tests. (Read more about the session in Ian's blog post about AltNetSeattle.) Aaron also mentioned the Ruby Synthesis project, which I would like to investigate further.
I came away from the session with a lot of new ideas, but no real answers, yet. Still, it is nice to have the opportunity to listen to people who are further down the path than I am. Thanks to all who participated.
You can watch the video here:
Here are my Session Notes.