We like small stories. If we keep stories to a size we can finish in two or three days, there is less likelihood of a story “blowing up” because we vastly underestimated it. Doing lots of small stories every iteration helps us get into a steady rhythm. We can manage our work in process more efficiently and keep cranking stories through.
Some pieces of functionality are too big to fit into a small story, so we have to break them down. This is a hard skill to learn, but trust me, everything can be broken down into smaller chunks somehow. This can feel somewhat artificial, and it goes against the principle that a story ought to deliver something of value to the business that we can release. However, we think the tradeoff is worthwhile. When we work on a big new piece of functionality, we simply hide the new stuff behind properties (feature toggles) until it’s ready for prime time. I guess we could branch the code too, but we prefer to work on the main trunk.
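To make the property idea concrete, here is a minimal sketch in Java of hiding an unfinished feature behind a property. The property name and class are my inventions for illustration, not the team’s actual code:

```java
import java.util.Properties;

// A minimal sketch of hiding unfinished functionality behind a property
// (a "feature toggle"). The property key and class name are hypothetical.
public class RolloverFeature {
    private final Properties config;

    public RolloverFeature(Properties config) {
        this.config = config;
    }

    // The new Roth roll-in screens would be shown only when the toggle is on;
    // it defaults to off, so unfinished work stays hidden on the main trunk.
    public boolean rothRollInEnabled() {
        return Boolean.parseBoolean(
                config.getProperty("feature.rothRollIn.enabled", "false"));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        RolloverFeature feature = new RolloverFeature(props);
        System.out.println(feature.rothRollInEnabled()); // off by default

        props.setProperty("feature.rothRollIn.enabled", "true");
        System.out.println(feature.rothRollInEnabled()); // now visible
    }
}
```

The advantage over branching is that the new code compiles, is tested, and ships with every build; only the entry point is hidden until the toggle flips.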
If you have some bit of functionality that doesn’t go end to end, how do you test it above the unit level? How can you show it to customers? This requires a bit of extra effort, but again, we’ve found it worthwhile. (My team has been chopping stories ever smaller for six years now.)
Here’s an example. Our application manages 401(k) plans (employer-sponsored retirement plans, to which employees and employers may contribute). Right now we are rewriting the functionality that allows a plan participant to “roll in” money from a 401(k) plan at a previous employer. We also need to add functionality to allow users to roll in money that is not tax-deferred; this is called “Roth” money. As part of this effort we are also rewriting the back-end processing.
When this is all finished, users will go to the UI, enter the information about the money they’re rolling in, and when they submit, instructions will be created that will allow us to receive the money and buy appropriate shares in mutual funds on behalf of the participant. One story read something like this: “As a participant, I want the system to be able to create instructions for Roth roll-ins”. Originally the story was to process the instructions, but we realized that was too big a story and cut it down.
There’s no UI yet, and we aren’t even processing the instructions, so how do we test this? The developer created a FitNesse fixture that accepts the information users would enter in the UI and sends those inputs to the production code, which in turn creates the correct instructions. Then we could use a “data fixture” (or DbFit) to verify the data in the database.
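A fixture of that shape might look roughly like the following simplified Java sketch. All class, method, and field names here are my guesses, and a real FitNesse fixture would extend a fixture base class and write instructions to the database through the production service layer rather than to an in-memory list:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, hypothetical sketch of a fixture that stands in for the
// not-yet-built UI: it accepts the inputs a participant would type in
// and hands them to code that creates roll-in instructions.
public class RollInFixture {
    // Stand-in for the database table the real instructions land in.
    private final List<String> instructions = new ArrayList<>();

    // Called from the FitNesse test table with the user's inputs.
    public void createRollInFor(String participant, String moneyType, double amount) {
        // In production this would call the real service layer,
        // which creates instruction records in the database.
        instructions.add(participant + "|" + moneyType + "|" + amount);
    }

    // A data fixture (or DbFit) would verify the resulting records.
    public int instructionCount() {
        return instructions.size();
    }

    public static void main(String[] args) {
        RollInFixture fixture = new RollInFixture();
        fixture.createRollInFor("P12345", "Roth", 5000.00);
        System.out.println(fixture.instructionCount());
    }
}
```

The point is that the fixture exercises the same production code the UI eventually will, so the business logic gets tested above the unit level long before the screens exist.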
I started out by just writing a FitNesse test to execute the fixture with my inputs and verifying the database with my own SQL queries. Then I made an automated regression test by setting up data about the plan and user in the database, running the fixture to create the instructions, verifying the instructions, and then cleaning up the data. Here is the “meat” of the test. The basic data is set up earlier in the test. The “do” fixture specifies the roll-in data just as the user would in the UI, then sends it to the code, which creates instructions in the database. The test then checks the data in the database. We can confirm the code works, even without the UI or the ability to process the resulting instructions. And we can try lots of different test cases.
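The team’s actual test table isn’t reproduced here, but a FitNesse test of the kind described might look something like the following, where every table, column, and value name is invented for illustration:

```
|create roll in for|P12345|money type|Roth|amount|5000.00|

|query: instructions for participant|P12345|
|money type|amount  |status |
|Roth      |5000.00 |pending|
```

The first table plays the role of the “do” fixture, entering the data as the user would; the second plays the role of the data fixture, checking what landed in the database.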