Agile Testing with Lisa Crispin
Providing Practical Agile Testing Guidance

In the past couple of months I blogged about the spike that JoEllen Carter (@TestingMojo) and I have been doing on automating UI smoke tests for our team’s iOS app: Pairing on a Mission and Continuing the Mission… and continually improving. Today we did a tech talk for our team, reporting on what we’ve done so far and asking for help in choosing our next steps.

Tech Talk Mind Map

We used this mind map to guide our talk. We explained the goals for our spike: find an automation solution for consistent, repeatable UI smoke tests that would keep bad regressions out of the app store. We demoed one of our test scripts, showed how we had designed our tests, and explained how discovering ‘search with predicate’ made our scripts much easier to write and maintain. We went over the pluses and minuses of our experience so far, pointing out the frustrating roadblocks we encountered, but also the pleasure of learning and working on something fun and cool.

We shared our thought that now is a good time to assess the value of the automated UI tests and how to move forward. Our iOS app is being completely redesigned, so the scripts we created for our spike will have to be re-done from scratch. We still have to solve the problems that keep us from putting our tests into our CI.

There are several options. We could try an external service provider. We could try other tool sets and frameworks. Should we abandon the UI automation idea and spend our time doing manual exploratory testing? Our iOS app already has about 6,000 unit tests and a number of integration tests that operate through the API. However, we have had bad regressions before that could only be found via the UI, so we know we need something.

We got some good ideas from the rest of the team. One was to ask the developer community within our company if they have any iOS UI automation experiences to share, since we know there are many other iOS projects. We posted a question on the development forum and have already had some good input.

This effort is inconclusive, so why am I blogging about this? Right before I started writing this, Jason Barile posted an interesting question on Twitter:

“…is your team really clear on what problems you’re trying to solve with automation and what success looks like?”

Our team has a long history of incredible success using test-driven development and automated regression tests at all levels from unit to UI. We have our share of “flaky tests”, but we get a good return on our automation investment. That doesn’t mean that automation is always the solution. We’ll have to figure things out.

Personally, I don’t like doing manual regression testing, so I hope we can find a way to run high-level UI automation smoke tests in our CI. Then we’d have more time for manual exploratory testing, and I’m learning that it could be even more critical with mobile apps than with other types. We shall keep experimenting and collaborating, finding ways to shorten our feedback loop and ensure our users have a good experience with each new release of our app.


In the delightful keynote “Insights from Happy Change Agents” from Fanny Pittack and Alex Schwarz, I learned a new way to share information with others. Rather than providing a recipe for success, or even a takeaway, we can offer “giveaways”. I shall offer you some giveaways that I received at Agile Testing Days.

My PotsLightning sketch notes

Sunday I joined the PotsLightning session for the morning. PotsLightning is open to anyone, not only conference participants, and is a self-organizing sort of thing, a combination of open space and lightning talks. Maik Nogens facilitated. My main giveaway was the diversity of conference participants. There were people from as far away as New Zealand and Saudi Arabia. There were several women. Participants had experience testing all kinds of software, from tractors to music. My sketch notes show, rather illegibly, some of the topics we covered, such as embedded systems, guilds, and automation.

My next sketch note reminds me that someone – unfortunately now I don’t recall who it was – showed me how he uses mind maps for test reporting as well as planning. He embeds screenshots and screencasts, and uses the time machine feature of MindMeister to show progress. I love the visibility these practices add, and I’m keen to try it.

There was so much packed into the conference sessions, mealtime conversations, and hallway discussions. I even learned things in the vendor expo. Here are just a few of my favorite giveaways that stick in my mind, in no particular order.

  • The leader of the mobile testing dojo asked if we had an app we’d like to use for the dojo. I suggested my team’s app, and the group agreed to try it. I got a lot of useful insights, not only into mobile testing techniques, but into how new users perceive our app! Lots of room for improvement in both!
  • I’ve followed Bob Marshall (@flowchainsensei) on Twitter for a while. His keynote gave me so much to think about. I need to work on my non-judgmental observation skills. Non-violent communication is critical and helps in so many of the problem areas currently getting in our way in the software business.
  • Providing a “crash pad” to cushion failures, and re-thinking failures as simply “learning”. This came out of several sessions, including Roman Pichler’s, which showed climbers “bouldering” with a crash pad in case they fall.
  • How to nurture testers? This came up in the tutorial Janet Gregory and I did, as well as in Lean Coffee. Janet held an Open Space session on it, so I hope she will share what came out of it. I think one way is to have fun, and you can see in the photo that testers had fun at the Carnival party during the conference!

    A Carnival of Testers, including Bart Knaack, my husband Bob, me, David Evans, someone I don’t know, Alex Schladebeck, Thom Roden and Gareth(?) from RedGate.

  • Lars Sjödahl gave a nice consensus talk on how we don’t notice what we aren’t expecting. It’s a good reminder to me to use my peripheral vision and Spidey sense when exploring our software, and to try to see what I’m not looking for. Dan Ashby’s session similarly reminded me to think laterally as well as critically.
  • Janet and I find David Evans’s Pillars of Testing so important that we asked him to write it up, and we used that to wrap up the last chapter of our new book. I so appreciate his shout-out to the book and our many contributors in his keynote. Plus he always cracks me up while I’m learning something new. Do watch the video of his keynote (I don’t know when or where the videos will be posted).
  • Antony Marcano’s “Don’t put me in a box” keynote was a reminder of how much we can learn from hearing others’ stories. For example, he told how he had to work with programmers on the other side of a big atrium, and simply moved himself to their side to collaborate and build relationships with them. Fanny and Alex emphasized that it’s all about relationships! Alan Richardson showed the power of short, crisp stories in his keynote. We can learn so much by sharing our experiences.
  • Daniël Maslyn’s talk on robotics showed how exciting the future of testing really is. We tend to get a bit blasé, but that’s a whole exciting world we could enjoy learning about!

My previous post has a list of blogs from Agile Testing Days participants; please check those out for more!

In other news, we are honored that LingoSpot listed Agile Testing as one of the top 16 books every software engineer should read!

Agile Testing Days 2014’s theme was the future of agile testing. What challenges are ahead, and how will we address them? Janet Gregory and I facilitated a workshop with experienced agile practitioners designed to identify some of the biggest issues related to testing and quality, and come up with experiments we can try to help overcome those challenges.

For me, it was exciting that we could get a room full of people who truly had lots of experience with testing on agile teams. We had a diverse mix of testers, programmers, managers, coaches, and people who multi-task among multiple roles, willing to share their experiences and collaborate to generate new ideas. In fact, many of the participants would be good coaches and facilitators for agile testing workshops themselves! More teams are succeeding in delivering business value frequently at a sustainable pace (to paraphrase Elisabeth Hendrickson). Testing and testers are a part of this success.

However, we all still face plenty of problems. During our first exercise, each participant wrote down the biggest obstacles to testing and quality that their teams face. We used an affinity diagram to identify the top three:

  • Whole team testing: how do we get all roles on a team to collaborate on testing activities, and how does testing “get respect” across the organization?
  • The “ketchup effect”: like getting ketchup out of a bottle, we try and try to deliver software features a little at a time, only to have them come gushing out at the end and make a big mess!
  • Agile testing mindset: how do we change testers’ mindsets? How do we spread this mindset of building quality in, testing early and often, across the organization?

We used several different brainstorming techniques to come up with experiments to work on these challenges: impact mapping, brain writing, and diagramming on a whiteboard (everyone chose mind mapping for this). You can see the results of some of this in the photos. Then we used a different technique to think about other challenges identified, such as how to build testing skill sets, building the right thing, and the tester’s role in continuous delivery.

Building Skill Sets

This last technique was the “giveaway” (to borrow a term from Alex Schwarz and Fanny Pittack) I was the most happy to take from the workshop. Janet and I gave general instructions, but the participants self-organized. Each table group took a topic to start with and mind mapped ideas about that topic. Some teams supplemented their mind maps by drawing pictures. Then the magic happened – after a time period, the groups rotated so each was working on another group’s mind map and adding their own ideas. They rotated once more so that each group worked on each mind map.

You can see from the pictures how many ideas came out of this. As with brain writing, it’s amazing: you write down all the ideas you think you have, then, seeing someone else’s ideas, you think of even more. I encourage you to take a look at these mind maps and choose some ideas for your own team’s small experiments. Even more importantly, I urge you to try a brainstorming exercise such as this group mind mapping, rotating among topics, and see the power of your collective experience and skill sets!

Cube-shaped tester

As we rotated among the different topics drawing on mind maps, one participant, Marcelo Leite (@marcelo__leite on Twitter), made a note on the skills mind map about “cube-shaped testers”. Janet and I talk a lot about T-shaped testers and square-shaped teams, concepts we learned from Rob Lambert and Adam Knight. We asked Marcelo to explain the cube-shaped idea. As with a Rubik’s Cube, we have different “colors” of skills, and we can twist them around to form different combinations. This way we can continually adapt to new and unique situations. A broad mix of skills lets us take on any future challenge.

I’m out here now working on my cube-shaped skills. How about you? I’d love to hear about your own learning journey towards the future of agile testing.

You can take a look at the slides for our workshop, and email me if you’d like the resources list we handed out. Also do check out the slides from our keynote, which sadly the audience didn’t get to see as the projector malfunctioned.

More blogs about #AgileTD:

I know the Agile Testing Days organizers will post a list of all blog posts about the conference, but here are some I made note of (and I still haven’t read them all!) I’m sure I missed some, so please ping me with additional links if you have ‘em.

  • http://www.bredex.de/blog_article_en/agile-testing-days-2014.html
  • http://oanasagile.blogspot.fr/2014/11/agile-testing-days-2014-go-wilde-in.html
  • https://www.flickr.com/photos/iamroot/sets/72157646956955533/
  • http://tobythetesterblog.wordpress.com/2014/11/14/my-top-5-experiences-from-agile-testing-days-conference/
  • http://www.gilzilberfeld.com/2014/11/agile-testing-days-2014the-agiletd-post.html
  • http://my2centsonagile.blogspot.com/2014/11/agile-testing-days-2014-there-are-no.html
  • http://seasidetesting.com/2014/11/17/the-agile-testing-days-2014-day-1-the-tutorial/
  • http://blog.demo.llp.pl/2014/back-to-the-future/
  • http://www.mostly-testing.co.uk/2014/11/agile-testing-days-2014-part-2.html
  • http://rhythmoftesting.blogspot.com/2014/11/agile-testing-days-conversations-and_23.html (be sure to go back from here and read all of Pete’s blogs, including his live blogs from AgileTD)

This new book by Gojko Adzic and David Evans is deceptively slim. It’s not just 50 ideas to improve your user stories. It’s 50 experiments you can try to improve how you deliver software. For each experiment, David and Gojko provide you with information and resources “to make it work”.

One chapter that has caught my eye is “Use Low-Tech for Story Conversations”. Gojko and David advise holding story discussions in rooms with lots of whiteboards and few big tables. When everyone sits at a big conference table, looking at stories on a monitor or projected on a wall, they start tuning out and reading their phones. Standing in front of a whiteboard or flip chart encourages conversation, and the ability to draw makes that conversation more clear. Participants can draw pictures, connect boxes with arrows, write sentences, make lists. It’s a great way to communicate.

I’ve always been fond of the “walking skeleton”: identifying the minimum stories that will deliver enough of a slice to get feedback and validate learning. Gojko and David take this idea even further: they put the walking skeleton on crutches. Deliver a user interface with as little as possible below the surface now, get feedback from users, and iterate to continually improve it. As with all the ideas in the book, the authors provide examples from their own experience to help you understand the concept well enough to try it out with your team.

David and Gojko understand you’re working in a real team, with corporate policies and constraints that govern what you can do. Each story idea ends with a practical “How to Make it Work” section so you can get your experiment started.

Again, it’s not just a book of tips for improving your user stories. It’s fifty ways to help your customers identify the business value they need, and deliver a thin slice of that value to get feedback and continue to build it to achieve business goals. It’s a catalog of proven practices that guides you in learning the ones you want to try.


In my previous post I described how my teammate JoEllen Carter (@testingmojo) and I have been pairing for an iOS UI smoke test automation spike. By working together, making use of a book, online resources, and examples sent by a colleague in another office, we were getting some traction. However, we ran into a lot of obstacles and some days were pretty discouraging.

Scaredy tester!

For example, one day our tests failed, and we discovered the programmers had changed many element names. We had known our tests were fragile, but didn’t realize how hard it would be to figure out the new names and fix the tests. After that, we refactored our tests to take out hard-coded strings and use variables, defining all the variables in one place so we can change them easily.
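As a rough illustration, the idea looks something like this in a UIAutomation JavaScript helper; this is a minimal sketch, and the element names here are hypothetical, not our app’s real ones:

```javascript
// A minimal sketch of centralizing element names, assuming Apple's
// UIAutomation JavaScript API. All names here are hypothetical.
var Elements = {
    emailField:    "email",
    passwordField: "password",
    loginButton:   "Sign In"
};

// Tests refer only to the Elements table, so a renamed control
// means a one-line fix here instead of edits in every script.
function tapLoginButton() {
    var app = UIATarget.localTarget().frontMostApp();
    app.mainWindow().buttons()[Elements.loginButton].tap();
}
```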

I’m not sure if it was the partial solar eclipse or what, but one day all the scripts we had written quit working on my machine. In fact, I couldn’t even build the product anymore. That was extremely frustrating! One day when I was working from home, the programmer/PO and JoEllen used my work iMac while I screenshared, and we worked together via Zoom to get everything working again. If I hadn’t been pairing, and we didn’t have support from programmers on our team, I know I wouldn’t have gotten past obstacles such as that one.

It hasn’t been THAT easy! But, good book.

We looked for more ways to make the tests easier to maintain. In the book we’ve been using, Test iOS Apps with UI Automation by Jonathan Penn, we read about using predicates. When we got some time with one of the programmers (who is also the product owner, and is championing our automation effort), he agreed that searchWithPredicate looked like a better approach and helped us get a rudimentary version of it working. It lets you look for a particular value somewhere on the screen without having to know the exact element where it should be (we often had to resort to numbered cells). Element names can change, but you can still find your text string. These are intended to be smoke tests, so that is good enough. JoEllen mastered searchWithPredicate on her own the next day, and added a couple of functions for it to our “test helper” file, where our variables and other functions used by multiple test scripts are defined.
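Here is a rough sketch of the kind of predicate-based lookup this enables, using UIAutomation’s firstWithPredicate() on element arrays; the structure and names are illustrative, not our actual helper functions:

```javascript
// A minimal sketch of a predicate-based lookup in UIAutomation.
// Function names and the table-view structure are illustrative.
function firstCellContaining(text) {
    var window = UIATarget.localTarget().frontMostApp().mainWindow();
    // Search the first table view for any cell whose name contains the
    // text, without knowing the cell's index or exact element name.
    return window.tableViews()[0].cells().firstWithPredicate(
        "name CONTAINS '" + text + "'");
}

function assertTextOnScreen(text) {
    // Looking up a missing element returns UIAElementNil; isValid()
    // is false for it, which is how we detect "not found".
    if (firstCellContaining(text).isValid()) {
        UIALogger.logPass("Found: " + text);
    } else {
        UIALogger.logFail("Did not find: " + text);
    }
}
```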

Another important effort has been figuring out how to get our tests into our CI. We found a ui-automation-runner script that lets us run the tests from the command line instead of from Instruments. However, we didn’t know how to bring up the iOS simulator other than via Xcode and Instruments. We set up a Zoom meeting with one of our iOS programmers in Toronto, and showed him what we were doing and what we needed. He said he could check in something that would allow us to bring up the simulator from the command line. The product owner put in a chore for this, and we soon had what we needed.

The same day we met with the programmer, my co-author Janet Gregory was in town and visited our office. I showed her our test scripts. She commented that some of the script names were confusing or misleading. JoEllen and I renamed the files so that the purpose of each is more obvious. For example, our test helper file is now called “ios_test_helper.js” instead of “tracker_ios.js”. We added “test” to the name of each test script, for example, login_test.js instead of login.js. Simple changes can help a lot!

Our focus has been on refactoring the tests for maintainability and working towards getting the tests into our CI. We changed the tests to run with fixture data so they can run against a localhost. This wasn’t too bad, since we’re using variables instead of hard-coded strings. But we’ve been adding a few test cases as well. For example, we had a happy path login test, but decided we should verify that you can’t log in with invalid credentials. This required us to learn how to test a pop-up alert. I tried to figure this out on my own when JoEllen was in a meeting, but failed. When we paired on it, we struggled some, but (largely thanks to JoEllen’s talent for this stuff!) finally mastered it. Today we also zeroed in on how to mitigate one of the flaky simulator behaviors. It feels great to start to understand what the heck we’re doing!
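For the curious, UIAutomation handles pop-up alerts through a global onAlert hook. Here is a sketch of the general approach; the alert title and button name are hypothetical, not our app’s:

```javascript
// A minimal sketch of pop-up alert handling via UIAutomation's
// UIATarget.onAlert hook. Alert title and button name are hypothetical.
var expectedAlertTitle = null;

UIATarget.onAlert = function (alert) {
    UIALogger.logMessage("Alert appeared: " + alert.name());
    if (alert.name() === expectedAlertTitle) {
        expectedAlertTitle = null;      // record that we saw it
        alert.buttons()["OK"].tap();    // dismiss it ourselves
        return true;                    // we handled the alert
    }
    return false;  // unexpected alert: let UIAutomation dismiss it
};

// In the invalid-login test, something like:
//   expectedAlertTitle = "Login Failed";
//   ...attempt the login with bad credentials...
//   UIATarget.localTarget().delay(2);
//   if (expectedAlertTitle !== null) {
//       UIALogger.logFail("Expected a 'Login Failed' alert");
//   }
```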

Some days I have been quite discouraged, like when I couldn’t even build the product anymore, but other days I see we really have made a lot of progress. Forcing myself to drive more has built up my confidence. We have a lot of examples now, both in our own scripts and in the ones our colleague in Toronto sent us. I learn best from examples, so that’s helping me. I finally feel like I kind of understand what I’m doing. And I’m excited about our next steps, as we make these smoke tests part of our CI, where they can give us more confidence about future releases.

I appreciate the positive feedback I got on my previous post. These are the kinds of experiences that Janet and I and 40 other contributors share in our new book More Agile Testing. I hope you will share your experiences – please put links to your blog posts in the comments here!

Other news:

Speaking of More Agile Testing, Janet and I did an interview with Tom Cagley, which is available now on his SPaMCAST podcast. We talk about what’s in the book and about our agile testing experiences.

"We're on a mission..."

“We’re on a mission…”

Most of us have more we’d like to do than time to do it, even on an experienced agile team working at a sustainable pace. A couple of us who work on our iOS team wanted to try automating some UI regression tests. The programmers write their code using TDD with good test coverage, but UI regression testing was all done manually and tended to be a bit hit-or-miss. However, we were stretched thin on the testing side, and that idea didn’t get to the top of the priority list.

Luckily, a highly experienced tester, JoEllen Carter (@testingmojo), joined our team. Soon after that, when we were regression testing an iOS release, JoEllen found a showstopper regression failure. Automating regression tests through the UI to catch failures such as that more quickly became a more compelling idea.

Getting started

The developer who is also the iOS product owner championed the automation effort, and knew some testers in our company’s Toronto office who were successfully automating mobile UI tests. He set up a meeting so we could pick their brains. They recommended using Apple’s built-in test automation tools in Instruments, along with the tuneup.js library, and kindly sent us a book, examples of their own tests, and other helpful documentation.

The PO helped us install and build the iOS code, and spike some rudimentary automation with Instruments. We agreed on some simple scripts to try, and he put stories for those in the backlog. Since the main focus of our team is our web-based SaaS product, and we testers wear other hats including helping with customer support, it was hard to find time to devote to the iOS test automation. JoEllen suggested blocking out an hour every day to pair on it. This proved key to getting traction.
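To give a flavor of what those early scripts looked like with the tuneup.js library mentioned above, here is a minimal sketch of a smoke test; it assumes the tuneup library is checked out next to the scripts, and the screen and button names are hypothetical:

```javascript
// A minimal sketch of a tuneup.js smoke test. Element names invented.
#import "tuneup/tuneup.js"

test("Smoke: login screen appears at launch", function (target, app) {
    var window = app.mainWindow();
    // tuneup's assertTrue fails the test (with the message) if the
    // expression is false; a missing element is UIAElementNil, for
    // which isValid() returns false.
    assertTrue(window.buttons()["Sign In"].isValid(),
               "expected a Sign In button on the first screen");
});
```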

Pairing FTW

You know how it’s good to be the dumbest person in the room? This worked for me as I watched JoEllen fearlessly use the record function in Instruments as well as the logging feature to see how the iOS app pages were laid out and how to refer to the different elements. Every time we paired, we made good progress, starting with the simplest of scripts and incrementally adding to them. Demands on our time for more business-critical work sometimes meant we let the iOS automation effort slide, but we’ve experimented with trying a different time of day when distractions are fewer.
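The logging feature is worth calling out: UIAutomation can dump the whole element hierarchy to the Instruments log, which is how you learn what to call each control. A tiny sketch:

```javascript
// Dump the element tree of the current screen to the Instruments log.
// The output shows each element's type and name, which is exactly
// what you need to write selectors for your test scripts.
var app = UIATarget.localTarget().frontMostApp();
app.mainWindow().logElementTree();
```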

Obstacles to overcome

We had a few scripts written when the element names changed unexpectedly due to updates to support iOS 8. We were discouraged because we’d had no warning about it from the programmers. They work in a different office two time zones away, and though we have a daily standup and are on chat all day, we testers are usually focused on other products. We discussed this with the PO, and agreed it was time to get our test scripts into CI, so that if we weren’t warned about big changes, they’d at least be immediately obvious to everyone. We wrote more stories for the steps needed.

Some obstacles to making the UI automation scripts part of CI seemed tough. For example, we needed to figure out how to run the tests from the command line rather than from within Instruments. After one pairing session of Internet searching, reading docs, and trial and error, we nailed it! By “we” I mean JoEllen, with me contributing information from searches and a bit of shell-command expertise. We also got help at one point with a JavaScript question from a pair of programmers who work on our web-based app.

Overcoming my own weenieness

I have personally experienced a lot of fear and trepidation about the UI test automation effort. I don’t know JavaScript, I’m a newbie at iOS (even as a user) and at mobile app testing in general, and I always find test automation hard, even though I really enjoy doing it. But I finally forced myself to ‘drive’ and felt much better actually trying things. We have a plan for refactoring to make the scripts easier to run and maintain, and of course more tasks to do to get them into CI.

Lessons learned

I can’t advise you on the best techniques for iOS UI test automation, but I can share some general takeaways from this experience so far:

  • Has anyone else in the company already succeeded with a similar effort? Ask them for help – they’re probably happy to share their experiences.
  • Pairing brings much better chances for success than working alone, for so many reasons.
  • Put time on the calendar every day to work on an effort like this, even if it’s only one hour (that is also good advice if you want to write a book!). Experiment with different times of day. Right after lunch has been a good time for us, before getting pulled back into the routine and before afternoon meetings.
  • Bring up obstacles during stand-ups, set up meetings to discuss them with people who can help.
  • Write stories for automation tasks to make them visible.

I still have some misgivings about our automation effort. I’d like for the programmers to be more involved. After all, test automation is coding, and they are the people who code all day every day, rather than one hour or so a day at best. However, we have lots of support from the PO who is also a programmer, and we’re moving towards making this automation a part of the existing test automation already in the product’s CI.

If your team has a tough testing problem to solve, try pairing to spike a solution. Reflect and adapt as you go! Oh, and hiring excellent experienced testers like JoEllen is also a good idea, but don’t be trying to poach her away from our team.


Agile Testing Days is a wonderful experience: a chance to join a community of professionals who are passionate about delivering software with the best possible quality and value for customers.

Janet Gregory and I are co-facilitating a tutorial/workshop where we ask participants to bring their toughest agile testing challenges so that we can work on them together. The theme of Agile Testing Days is the future of testing. We can’t predict the future, but we could all be doing a better job right now! You’ll leave our day-long workshop with new tools in your toolbox and experiments to try with your own team to forge new frontiers in agile testing and software quality.

We’ll have not only unicorns, but donkeys and dragons – watch the videos to learn more about our workshop! You can also see what went on in my workshop last year, though we will have some different group exercises and a slightly different format this year – plus Janet and me pairing, which is always better!

AKA:

Moar Agile Testing

More (Agile Testing)

(More Agile) Testing

Janet Gregory and I, along with more than 40 contributors and many helpful reviewers (it takes a village), have finished our new book. We delve into many areas that are new or more important since our first book Agile Testing was published. Please see our book website to see the giant mind map of the entire book, plus bios of all our contributors, and one of the chapters, Using Models to Help Plan!

Drawings explaining the business rules for a feature

In my previous post I wrote about a great experience collaborating with teammates on a difficult new feature. The conversations around it continued this week when we were all in the office. Many of our remaining questions were answered when the programmers walked us through a whiteboard drawing of how the new algorithm should work. As we talked, we updated and improved the drawings. We took pictures to attach to the pertinent story.

Not only is a picture worth a thousand words, but talking with a marker in hand and a whiteboard nearby (or their virtual equivalents) greatly enhances communication.

I can’t say it enough: it takes the whole team to build quality into a software product, and testing is an integral part of software development along with coding, design and many other activities. I’d like to illustrate this yet again with my experience a couple days ago.

Some of my teammates at our summer picnic – we have fun working AND playing!

Recently our team has been brainstorming and experimenting with ways to make our production site more reliable and robust. Part of this effort involved controlling the rate at which the front end client “pings” the server when the server is under a heavy load or there is a problem with our hosting service. Friday morning, after our standup, the front end team decided on a new algorithm for “backing off” the ping interval when necessary and then restoring it to normal when appropriate.
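As a rough illustration of the general idea, here is a sketch of a generic exponential back-off; this is not our team’s actual algorithm, and the numbers and names are invented:

```javascript
// A minimal sketch of backing off a ping interval under load and
// restoring it when things recover. Numbers and names are invented.
var BASE_INTERVAL = 5000;     // 5 seconds under normal conditions
var MAX_INTERVAL = 300000;    // never back off beyond 5 minutes
var pingInterval = BASE_INTERVAL;

function onPingFailure() {
    // Double the interval on each failure, up to the cap.
    pingInterval = Math.min(pingInterval * 2, MAX_INTERVAL);
}

function onPingSuccess() {
    // Server is healthy again: restore the normal interval.
    pingInterval = BASE_INTERVAL;
}
```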

Naturally I had questions about how we would test this, not only to ensure the correct new behavior, but to make sure other scenarios weren’t adversely affected. For example, the client should behave differently when it detects that the user’s connection has gone offline. There are many risks, and the impact of incorrect behavior can be severe.

I was working from home that day. Later in the morning, the programmer pair working on the story asked if we could meet via Skype. They also had asked another tester in the office to join us. The programmers explained three conditions in which the new pinger behavior should kick in, and how we could simulate each one.

The other tester asked if there were a way we could control the pinger interval for testing. For example, sometimes the ping interval should go up to five minutes. That’s a long time to wait when testing. We discussed some ideas for how to do this. The programmers started typing examples of commands we might be able to run in the browser developer tools console to control the pinger interval, as well as to get other useful information while testing. We came up with a good strategy.
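The kind of console hook we discussed might look roughly like this; everything here is hypothetical, sketched for illustration rather than taken from our app:

```javascript
// A hypothetical dev-tools console hook for testers. Run in the browser
// console; shrinks the ping interval so you don't wait five minutes.
var pinger = {
    interval: 300000,   // current interval in milliseconds
    timer: null,
    ping: function () { console.log("ping, interval=" + this.interval); },
    start: function () {
        var self = this;
        this.timer = setInterval(function () { self.ping(); }, this.interval);
    },
    stop: function () { clearInterval(this.timer); },
    restart: function () { this.stop(); this.start(); }
};

// What a tester might type in the console during a session:
pinger.interval = 2000;  // ping every 2 seconds instead of 5 minutes
pinger.restart();
```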

Note that this conversation took place before they started writing the actual code. While they wrote the code using TDD, the other tester and I created given-when-then style test scenarios on a wiki page. We started with the happy path, then thought of more scenarios and questions to discuss with the programmers. By anticipating edge cases and discussing them, we’ll end up with more robust code much more quickly than if we had waited until the code was complete to start thinking about testing it end to end.
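To give a flavor of those scenarios, here is a sketch with invented details; our real ones lived on the wiki:

```
Given the client is pinging at the normal interval
When the server reports that it is under heavy load
Then the client backs off to a longer ping interval

Given the client has backed off its ping interval
When the server reports that it has recovered
Then the client restores the normal ping interval
```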

There are so many stories in this vein in Janet Gregory‘s and my new book, More Agile Testing: Learning Journeys for the Whole Team. We’ll have more information about the book on our book website soon, including introductions to our more than 40 contributors. The book will be available October 1. It takes a village to write a useful book on agile testing, and it takes a whole team to build quality into a software product!