Agile Testing with Lisa Crispin
Providing Practical Agile Testing Guidance

This new book by Gojko Adzic and David Evans, Fifty Quick Ideas to Improve Your User Stories, is deceptively slim. It’s not just 50 ideas to improve your user stories. It’s 50 experiments you can try to improve how you deliver software. For each experiment, David and Gojko provide you with information and resources “to make it work”.

One chapter that caught my eye is “Use Low-Tech for Story Conversations”. Gojko and David advise holding story discussions in rooms with lots of whiteboards and few big tables. When everyone sits at a big conference table, looking at stories on a monitor or projected on a wall, they start tuning out and reading their phones. Standing in front of a whiteboard or flip chart encourages conversation, and the ability to draw makes that conversation clearer. Participants can draw pictures, connect boxes with arrows, write sentences, make lists. It’s a great way to communicate.

I’ve always been fond of the “walking skeleton”: identifying the minimum set of stories that will deliver enough of a slice to get feedback and validate learning. Gojko and David take this idea even further: they put the walking skeleton on crutches. Deliver a user interface with as little as possible below the surface now, get feedback from users, and iterate to continually improve it. As with all the ideas in the book, the authors provide examples from their own experience to help you understand the concept well enough to try it out with your team.

David and Gojko understand you’re working in a real team, with corporate policies and constraints that govern what you can do. Each story idea ends with a practical “How to Make it Work” section so you can get your experiment started.

Again, it’s not just a book of tips for improving your user stories. It’s fifty ways to help your customers identify the business value they need, and deliver a thin slice of that value to get feedback and continue to build it to achieve business goals. It’s a catalog of proven practices that guides you in learning the ones you want to try.


In my previous post I described how my teammate JoEllen Carter (@testingmojo) and I have been pairing for an iOS UI smoke test automation spike. By working together, making use of a book, online resources, and examples sent by a colleague in another office, we were getting some traction. However, we ran into a lot of obstacles and some days were pretty discouraging.

Scaredy tester!

For example, one day our tests failed, and we discovered the programmers had changed many element names. We had known our tests were fragile, but hadn’t realized how hard it would be to figure out the new names and fix the tests. After that, we refactored our tests to remove hard-coded strings and use variables instead, defining all the variables in one place so we can change them easily.
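To give a rough idea of what that refactoring looks like in a UI Automation script (the element names here are hypothetical placeholders, not our app’s actual labels):

    // Test helper: all element names defined in one place, so a rename by the
    // programmers means a one-line fix instead of a test-by-test hunt.
    var ELEMENTS = {
        usernameField: "Username",
        passwordField: "Password",
        signInButton:  "Sign In"
    };

    // A test script then uses the variables instead of literal strings:
    var app = UIATarget.localTarget().frontMostApp();
    var window = app.mainWindow();
    window.textFields()[ELEMENTS.usernameField].setValue("testuser");
    window.secureTextFields()[ELEMENTS.passwordField].setValue("secret");
    window.buttons()[ELEMENTS.signInButton].tap();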

I’m not sure if it was the partial solar eclipse or what, but one day all the scripts we had written quit working on my machine. In fact, I couldn’t even build the product anymore. That was extremely frustrating! On a day when I was working from home, the programmer/PO and JoEllen used my work iMac while I screen-shared, and we worked together via Zoom to get everything working again. If I hadn’t been pairing, and we didn’t have support from programmers on our team, I know I wouldn’t have gotten past obstacles such as that one.

It hasn’t been THAT easy! But, good book.

We looked for more ways to make the tests easier to maintain. In a book we’ve been using, Test iOS Apps with UI Automation by Jonathan Penn, we read about using predicates. When we got some time with one of the programmers (who is also the product owner, and is championing our automation effort), he agreed that searchWithPredicate looked like a better approach and helped us get a rudimentary version of it working. It lets you look for a particular value somewhere on the screen without having to know the exact element where it should appear (previously we often had to resort to numbered cells). That way, element names can change but you can still find your text string. These are intended to be smoke tests, so that is good enough. JoEllen mastered searchWithPredicate on her own the next day, and added a couple of functions for it to our “test helper” file, where our variables and other functions used by multiple test scripts are defined.
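Here’s a rough sketch of that kind of lookup using UI Automation’s firstWithPredicate (the helper name and predicate string are illustrative, not our actual code):

    // Find a text value anywhere in the main window, regardless of which
    // element or numbered cell happens to contain it.
    function assertTextOnScreen(expectedText) {
        var app = UIATarget.localTarget().frontMostApp();
        var match = app.mainWindow().staticTexts().firstWithPredicate(
            "name CONTAINS '" + expectedText + "'");
        if (!match.isValid()) {
            throw new Error("Expected to find text on screen: " + expectedText);
        }
    }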

Another important effort has been figuring out how to get our tests into our CI. We found a ui-automation-runner that lets us run the tests from the command line instead of from Instruments. However, we didn’t know how to bring up the iOS simulator other than via Xcode and Instruments. We set up a Zoom meeting with one of our iOS programmers in Toronto and showed him what we were doing and what we needed. He said he could check in something that would allow us to bring up the simulator from the command line. The product owner put in a chore for this, and we soon had what we needed.

The same day we met with the programmer, my co-author Janet Gregory was in town and visited our office. I showed her our test scripts. She commented that some of the script names were confusing or misleading. JoEllen and I renamed the files so that the purpose of each is more obvious. For example, our test helper file is now called “ios_test_helper.js” instead of “tracker_ios.js”. We added “test” to the name of each test script, for example, login_test.js instead of login.js. Simple changes can help a lot!

Our focus has been on refactoring the tests for maintainability and working towards getting the tests into our CI. We changed the tests to run with fixture data so they can run against localhost. This wasn’t too bad, since we’re using variables instead of hard-coded strings. But we’ve been adding a few test cases as well. For example, we had a happy path login test, but decided we should verify that you can’t log in with invalid credentials. This required us to learn how to test a pop-up alert. I tried to figure this out on my own when JoEllen was in a meeting, but failed. When we paired on it, we struggled some, but (largely thanks to JoEllen’s talent for this stuff!) finally mastered it. Today, we zeroed in on how to mitigate one of the flaky simulator behaviors, too. It feels great to start to understand what the heck we’re doing!
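For anyone facing the same puzzle, the key is UI Automation’s onAlert hook; a minimal sketch (the button name and the assertion are assumptions for illustration):

    // UI Automation calls this handler whenever an alert appears.
    var lastAlertTitle = null;
    UIATarget.onAlert = function (alert) {
        lastAlertTitle = alert.name();  // remember the title for a later assertion
        alert.buttons()["OK"].tap();    // dismiss the alert ourselves
        return true;                    // true = handled; skip the default dismissal
    };

    // ... attempt a login with invalid credentials, then check:
    if (lastAlertTitle === null) {
        throw new Error("Expected an invalid-credentials alert, but none appeared");
    }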

Some days I have been quite discouraged, like when I couldn’t even build the product anymore, but other days I see we really have made a lot of progress. Forcing myself to drive more has built up my confidence. We have a lot of examples now, both in our own scripts and in the ones our colleague in Toronto sent us. I learn best from examples, so that’s helping me. I finally feel like I kind of understand what I’m doing. And I’m excited about our next steps, as we make these smoke tests part of our CI, where they can give us more confidence about future releases.

I appreciate the positive feedback I got on my previous post. These are the kinds of experiences that Janet and I and 40 other contributors share in our new book More Agile Testing. I hope you will share your experiences – please put links to your blog posts in the comments here!

Other news:

Speaking of More Agile Testing, Janet and I did an interview with Tom Cagley, which is available now on his SPaMCAST. We talk about what’s in the book and about our agile testing experiences.

"We're on a mission..."

“We’re on a mission…”

Most of us have more we’d like to do than the time to do it, even if we’re on an experienced agile team working at a sustainable pace. A couple of us who work on our iOS team wanted to try automating some UI regression tests. The programmers write their code using TDD with good test coverage, but UI regression testing was all done manually and tended to be a bit hit-or-miss. However, we were stretched thin on the testing side, and that idea didn’t get to the top of the priority list.

Luckily, a highly experienced tester, JoEllen Carter (@testingmojo), joined our team. Soon after that, when we were regression testing an iOS release, JoEllen found a showstopper regression failure. Automating regression tests through the UI, so we could catch failures like that more quickly, suddenly became a much more compelling idea.

Getting started

The developer who is also the iOS product owner championed the automation effort, and knew some testers in our company’s Toronto office who were successfully automating mobile UI tests. He set up a meeting so we could pick their brains. They recommended using Apple’s built-in test automation tools in Instruments, along with the tuneup.js library, and kindly sent us a book, examples of their own tests, and other helpful documentation.

The PO helped us install and build the iOS code, and spike some rudimentary automation with Instruments. We agreed on some simple scripts to try, and he put stories for those in the backlog. Since the main focus of our team is our web-based SaaS product, and we testers wear other hats including helping with customer support, it was hard to find time to devote to the iOS test automation. JoEllen suggested blocking out an hour every day to pair on it. This proved key to getting traction.
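To give a flavor of those first scripts: with UI Automation plus tuneup.js, a rudimentary test looks something like this (the screen details here are invented for illustration):

    #import "tuneup/tuneup.js"

    // tuneup.js wraps each scenario in a test() block and passes in the
    // target (device or simulator) and the app under test.
    test("smoke: app launches to the sign-in screen", function (target, app) {
        // assertWindow compares the main window against an expected layout
        assertWindow({
            navigationBar: { name: "Sign In" },
            textFields: [ { name: "Username" } ]
        });
    });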

Pairing FTW

You know how it’s good to be the dumbest person in the room? This worked for me as I watched JoEllen fearlessly use the record function in Instruments as well as the logging feature to see how the iOS app pages were laid out and how to refer to the different elements. Every time we paired, we made good progress, starting with the simplest of scripts and incrementally adding to them. Demands on our time for more business-critical work sometimes meant we let the iOS automation effort slide, but we’ve experimented with trying a different time of day when distractions are fewer.
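That logging feature is worth knowing about: UI Automation can dump the element tree of the current screen, which shows how the page is laid out and what each element is named:

    // Print the current screen's element hierarchy to the Instruments log --
    // invaluable for figuring out how to refer to elements in scripts.
    UIATarget.localTarget().frontMostApp().logElementTree();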

Obstacles to overcome

We had a few scripts written when the element names changed unexpectedly due to updates to support iOS 8. We were discouraged because we’d had no warning about it from the programmers. They work in a different office two time zones away, and though we have a daily standup and are on chat all day, we testers are usually focused on other products. We discussed this with the PO, and agreed it was time to get our test scripts into CI, so that if we weren’t warned about big changes, they’d at least be immediately obvious to everyone. We wrote more stories for the steps needed.

Some obstacles to making the UI automation scripts part of CI seemed tough. For example, we needed to figure out how to run the tests from the command line rather than from within Instruments. After one pairing session of Internet searching, reading documentation, and trial-and-error, we nailed it! By “we” I mean JoEllen, with me contributing information from searches and a bit of shell command expertise. We also got help at one point with a JavaScript question from a pair of programmers who work on our web-based app.

Overcoming my own weenieness

I have personally experienced a lot of fear and trepidation about the UI test automation effort. I don’t know JavaScript, I’m a newbie at iOS (even as a user) and at mobile app testing in general, and I always find test automation hard, even though I really enjoy doing it. But I finally forced myself to ‘drive’ and felt much better actually trying things. We have a plan for refactoring to make the scripts easier to run and maintain, and of course more tasks to do to get them into CI.

Lessons learned

I can’t advise you on the best techniques for iOS UI test automation, but I can share some general takeaways from this experience so far:

  • Has anyone else in the company already succeeded with a similar effort? Ask them for help – they’re probably happy to share their experiences.
  • Pairing brings much better chances for success than working alone, for so many reasons.
  • Put time on the calendar every day to work on an effort like this, even if it’s only one hour (that’s also good advice if you want to write a book!). Experiment with different times of day. Right after lunch has been a good time for us, before we get pulled back into the routine and before afternoon meetings.
  • Bring up obstacles during stand-ups, and set up meetings to discuss them with people who can help.
  • Write stories for automation tasks to make them visible.

I still have some misgivings about our automation effort. I’d like for the programmers to be more involved. After all, test automation is coding, and they are the people who code all day every day, rather than one hour or so a day at best. However, we have lots of support from the PO who is also a programmer, and we’re moving towards making this automation a part of the existing test automation already in the product’s CI.

If your team has a tough testing problem to solve, try pairing to spike a solution. Reflect and adapt as you go! Oh, and hiring excellent experienced testers like JoEllen is also a good idea, but don’t be trying to poach her away from our team.


Agile Testing Days is a wonderful experience: a chance to join a community of professionals who are passionate about delivering software with the best possible quality and value for customers.

Janet Gregory and I are co-facilitating a tutorial/workshop where we ask participants to bring their toughest agile testing challenges so that we can work on them together. The theme of Agile Testing Days is the future of testing. We can’t predict the future, but we could all be doing a better job right now! You’ll leave our day-long workshop with new tools in your toolbox and experiments to try with your own team to forge new frontiers in agile testing and software quality.

We’ll have not only unicorns, but donkeys and dragons – watch the videos to learn more about our workshop! You can also see what went on in my workshop last year, though we will have some different group exercises and a slightly different format this year – plus, Janet and me pairing, which is always better!

AKA:

Moar Agile Testing

More (Agile Testing)

(More Agile) Testing

Janet Gregory and I, along with more than 40 contributors and many helpful reviewers (it takes a village), have finished our new book. We delve into many areas that are new or have become more important since our first book, Agile Testing, was published. Visit our book website to see the giant mind map of the entire book, plus bios of all our contributors and one of the chapters, Using Models to Help Plan!

Drawings explaining the business rules for a feature

In my previous post I wrote about a great experience collaborating with teammates on a difficult new feature. The conversations around it continued this week when we were all in the office. Many of our remaining questions were answered when the programmers walked us through a whiteboard drawing of how the new algorithm should work. As we talked, we updated and improved the drawings. We took pictures to attach to the pertinent story.

Not only is a picture worth a thousand words, but talking with a marker in hand and a whiteboard nearby (or their virtual equivalents) greatly enhances communication.

I can’t say it enough: it takes the whole team to build quality into a software product, and testing is an integral part of software development along with coding, design and many other activities. I’d like to illustrate this yet again with my experience a couple days ago.

Some of my teammates at our summer picnic – we have fun working AND playing!

Recently our team has been brainstorming and experimenting with ways to make our production site more reliable and robust. Part of this effort involved controlling the rate at which the front end client “pings” the server when the server is under a heavy load or there is a problem with our hosting service. Friday morning, after our standup, the front end team decided on a new algorithm for “backing off” the ping interval when necessary and then restoring it to normal when appropriate.
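I won’t reproduce our actual algorithm, but the general shape of that kind of back-off is simple. Here’s a sketch with made-up numbers (sendPing is a hypothetical client helper, not our real code):

    // Illustrative back-off sketch only; the real algorithm and constants differ.
    var NORMAL_INTERVAL_MS = 10 * 1000;      // normal ping cadence
    var MAX_INTERVAL_MS    = 5 * 60 * 1000;  // cap the back-off at five minutes
    var pingIntervalMs = NORMAL_INTERVAL_MS;

    function onPingFailed() {
        // double the interval on each failure, up to the cap
        pingIntervalMs = Math.min(pingIntervalMs * 2, MAX_INTERVAL_MS);
    }

    function onPingSucceeded() {
        // restore the normal cadence once the server responds again
        pingIntervalMs = NORMAL_INTERVAL_MS;
    }

    function schedulePing() {
        setTimeout(function () {
            sendPing(function (ok) {  // hypothetical: pings and reports success
                if (ok) { onPingSucceeded(); } else { onPingFailed(); }
                schedulePing();
            });
        }, pingIntervalMs);
    }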

Naturally I had questions about how we would test this, not only to ensure the correct new behavior, but to make sure other scenarios weren’t adversely affected. For example, the client should behave differently when it detects that the user’s connection has gone offline. There are many risks, and the impact of incorrect behavior can be severe.

I was working from home that day. Later in the morning, the programmer pair working on the story asked if we could meet via Skype. They also had asked another tester in the office to join us. The programmers explained three conditions in which the new pinger behavior should kick in, and how we could simulate each one.

The other tester asked if there was a way we could control the pinger interval for testing. For example, sometimes the ping interval should go up to five minutes – a long time to wait during a test. We discussed some ideas for how to do this. The programmers started typing examples of commands we might run in the browser’s developer tools console to control the pinger interval, as well as to get other useful information while testing. We came up with a good strategy.
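Those console commands amounted to debug hooks the client could expose. Building on the sketch above, they might look something like this (all names here are hypothetical):

    // Debug hooks exposed by the client in test builds only:
    window.__pinger = {
        setInterval:  function (ms) { pingIntervalMs = ms; },  // shorten waits under test
        getInterval:  function ()   { return pingIntervalMs; },
        forceBackoff: function ()   { onPingFailed(); }        // simulate a failed ping
    };

    // Then, in the developer tools console while testing:
    //   __pinger.setInterval(2000);  // no need to wait five minutes between pings
    //   __pinger.getInterval();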

Note that this conversation took place before they started writing the actual code. While they started writing the code using TDD, the other tester and I started creating given-when-then style test scenarios on a wiki page. We started with the happy path, then thought of more scenarios and questions to discuss with the programmers. By anticipating edge cases and discussing them, we’ll end up with more robust code much more quickly than if we waited until the code was complete to start thinking about testing it end to end.
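To give a flavor of those scenarios (this one is invented, not from our wiki):

    Given the client is pinging the server at the normal interval
    When several consecutive pings fail
    Then the ping interval backs off toward the maximum
    And when a ping succeeds again, the interval returns to normal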

There are so many stories in this vein in Janet Gregory‘s and my new book, More Agile Testing: Learning Journeys for the Whole Team. We’ll have more information about the book on our book website soon, including introductions to our more than 40 contributors. The book will be available October 1. It takes a village to write a useful book on agile testing, and it takes a whole team to build quality into a software product!


I was so excited to see A Coach’s Guide to Agile Testing from Samantha Laing and Karen Greaves of Growing Agile. I don’t know of any other book to guide coaches in helping agile teams learn good ways to succeed with testing and build quality into their software product. Here are my immediate thoughts after reading it.

The book is useful even for coaches/trainers who don’t themselves have a lot of knowledge of or experience in testing.

Growing Agile: Coach’s Guide to Agile Testing

The authors use the “4 Cs” from Sharon Bowman’s Training from the Back of the Room, and show how to apply them. Newbie coaches and presenters usually make the mistake of wanting to deliver a lot of information via lecture and slides. Bowman’s techniques have helped me make my own training classes and workshops much better learning experiences for participants, so I’m glad to see them incorporated here.

It’s so helpful that the guide includes a training kit, and advice on what supplies are best, even the best markers to use. I liked the idea to send participants photos of the workshop, though I might ask up front if everyone is comfortable with having photos taken.

Sam and Karen give options on how to schedule and organize a class or a shorter workshop. I’m betting that readers will find this incredibly helpful.

The visuals in the book are terrific. We know how well pictures help convey concepts!

One thing that bothered me about Training from the Back of the Room was that I’m not comfortable asking participants to do some of the exercises. I’m an introvert, and I find it hard to ask people to do something I wouldn’t feel good doing. The exercises here are well within my own comfort level, which is great.

Most importantly, the content is spot on. It is SO wonderful to read a book whose authors truly grok the testing mindset. Karen and Sam understand what teams need to know about testing, and how coaches can help them learn those things.


I’m currently at Booster in beautiful Bergen. It’s a terrific conference – about 240 people, which is a perfect number. Everyone is so enthusiastic, the food is great, the sessions are awesome. I’ll blog more about that.

Thanks Martin Burns for tweeting this photo!

I facilitated a workshop today on “changing your testing mindset”. I’ve uploaded the slide deck, and will be blogging more about that too. Many thanks to the workshop participants – we all learned a lot from each other and generated some great ideas!

I’m doing a day-long tutorial on “Changing your testing mindset”, and an “interactive session” at Belgium Testing Days in Bruges, Belgium, March 17 – 20. You can learn more about my tutorial (and see my adorable donkeys) in my introductory video.

What could be more wonderful than being in beautiful Bruges? Why, meeting up with a bunch of amazing testing practitioners for four days to share experiences and break new ground in testing.

Check out the program. You can use my discount code, FeahY8, for a 10% discount on registration! I hope to see you there.