Agile Testing with Lisa Crispin
Providing Practical Agile Testing Guidance

Bernice Niel Ruhland, a director of quality management programs, contributed so much value to More Agile Testing. In addition to sidebars where she explains ideas she uses for training and managing testers, we refer to several stories and ideas we learned from her. She also read every draft of every chapter at least three times and gave us invaluable feedback to help create the final book.

Janet Gregory and I are so excited that Bernice has shared her experiences as a More Agile Testing reviewer and contributor. I hope her story will inspire you to write about your own experiences, volunteer to review your colleagues’ draft publications, and perhaps read our book!

And while you’re on her blog, keep reading. Bernice blogs regularly, and I’ve learned so much from her stories. For example, I love her tribute to Leonard Nimoy and her stories of how he influenced her career.

Bernice’s creativity extends beyond her testing career to other pursuits, one of which is cuisine. She has a wonderful cooking blog, Realistic Cooking Ideas. This has inspired so many wonderful meals at my house! Don’t read it, though, if you are hungry!

Thanks so much to Bernice!

Pairing FTW!

This post benefits from a bit of context: my day job is as a tester on the Pivotal Tracker team. Tracker is a project tracking tool with an awesome API, and our API documentation is full of examples generated from the automated regression tests that run in our CI, so the examples are always accurate and up to date.

Oh, and part of my day job is to do customer support for Tracker. We often hear from users who would like a report of cycle time for their project’s stories. Cycle time is a great metric; the way I like to look at it is: when did you actively start working on a story, and when did that story finally get accepted? Unfortunately, this information isn’t easily visible in our tool. It’s not hard to obtain via our API, but the best way to get it is not immediately obvious either.

For a couple of years, I have wished for an example script, using our API to compute cycle time for stories, that we could give to customers. Sadly, I’ve had to let my Ruby skills rust away, because at work I don’t get to automate regression tests (the programmers do all that), and I spent most of the past couple of years co-writing a book.

Recently, our team had three “hack days” to do whatever we chose. One of the activities I planned was to work on this example cycle time script. My soon-to-be-erstwhile teammate Glen Ivey quickly wrote up an example script for me, but my own feeble efforts to enhance it were futile. Unfortunately, all my teammates were engaged in their own hack day projects.

I whinged about this on Twitter, and Amitai Schlair, whom I only knew from Twitter, offered to pair with me. This was good and bad. On the one hand, awesome to have someone to pair with me! OTOH, gosh, I’m terrible at Ruby now; how embarrassing to pair with anyone! I started thinking of excuses NOT to do it. However, Amitai was so kind, I faced my fear. We paired.

First we had to figure out how to communicate and screenshare. We ended up with Skype and Screenhero. Amitai introduced me to a great concept: first, write what I want to do in pseudocode. Then turn each line of that into a method. Then start fleshing out those methods with real code. Amitai’s instinct was to do this test-driven, but due to time limitations, our approach was: write a line of code, then run the script to see what it does. We worked step by step. By the end of our session (which was no more than a couple of hours), we had gotten as far as showing the last state change for each of the stories that were pertinent to the report.
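
The technique looked roughly like this, sketched here in JavaScript (the script we actually wrote was Ruby, and every name below is hypothetical; this just shows the shape of the approach):

```javascript
// The pseudocode-first technique. Step 1: write what you want to do as
// comments. Step 2: promote each comment to a function, stubbed with canned
// data so the script always runs. Step 3: flesh out one function at a time,
// re-running the script after every small change.

async function fetchAcceptedStories() {
  // stub: canned data until this step is fleshed out with real API calls
  return [{ id: 1, name: "example story" }];
}

async function fetchTransitions(story) {
  // stub: one fake state change per story for now
  return [{ state: "accepted", occurred_at: "2015-04-01T12:00:00Z" }];
}

function lastStateChange(transitions) {
  return transitions[transitions.length - 1]; // the most recent transition
}

async function main() {
  const stories = await fetchAcceptedStories();            // "get the stories"
  for (const story of stories) {
    const transitions = await fetchTransitions(story);     // "get their history"
    console.log(story.name, lastStateChange(transitions)); // "show last change"
  }
}

main();
```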

After competing priorities led us to stop our session, I identified an issue with our new code. A teammate paired with me to show me a way to debug it and we were able to fix it. The script still wasn’t finished. Luckily, my soon-to-be-erstwhile teammate Glen later paired with me to get the script to a point where it produces a report of cycle times for the most recently accepted 500 stories.
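
For a flavor of what the finished script computes, here is a rough sketch of the cycle-time calculation for a single story. Our real script is Ruby; this JavaScript version assumes Node 18+ for fetch, and the story “transitions” resource and X-TrackerToken header as described in the public Tracker API docs, so treat it as illustrative rather than as the script Glen and I wrote:

```javascript
// Rough sketch: cycle time for one story via the Tracker v5 API.
const API = "https://www.pivotaltracker.com/services/v5";

async function cycleTimeMs(projectId, storyId, token) {
  const resp = await fetch(
    `${API}/projects/${projectId}/stories/${storyId}/transitions`,
    { headers: { "X-TrackerToken": token } }
  );
  const transitions = await resp.json();
  // "When did you actively start?": the first transition into "started".
  const started = transitions.find(t => t.state === "started");
  // "When was it finally accepted?": the last transition into "accepted".
  const accepted = transitions.filter(t => t.state === "accepted").pop();
  if (!started || !accepted) return null; // never started, or not yet accepted
  return new Date(accepted.occurred_at) - new Date(started.occurred_at);
}
```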

The script runs way too slowly, and Glen has explained a better approach to me; I await another pairing opportunity to implement it. So what’s the takeaway for you, the reader?

Pairing is hard. You fear exposing your weaknesses to another human being. You feel pressure to keep up your side. But that’s only before you actually start pairing. The truth is we all want each other to succeed. Your friends and teammates aren’t out to make you feel stupid. And pairing is so powerful. It gets you over that “hump” of fear. Two heads really are better than one.

Is there something you’d like to try, but you feel you don’t know enough? Find a pair, and go for it!


I just received a flyer in my snail mail for yet another conference where four out of the five keynote speakers are white men and only one is a woman. Are you kidding me? And this is a testing conference. Testing is a field that does indeed have lots of women; I would guess a significantly higher percentage than, say, programming.

I know the organizers of this conference and they are good people who aren’t purposely discriminating against women (or minorities, for that matter). But they aren’t trying hard enough, either. I’ve personally sent long lists of women I recommend to speak at their conferences. True, most of these women aren’t “known” keynote speakers – maybe because nobody ever asks them to keynote. These women are highly experienced testing practitioners who have valuable experience to share.

This same company has another upcoming testing conference with no female keynoters at all, so I guess this one is an improvement. But I’m not letting them off the hook, and you shouldn’t either.

What do you value more: a highly entertaining “big name” keynote speech, or an experienced practitioner who competently helps you learn some new ideas to go and try with your own teams, but maybe isn’t as well known or flashy?

You probably don’t get to go to many conferences, so be choosy. Choose the ones with a diverse lineup, not only of keynoters but of presenters of all types of sessions. In fact, choose conferences that have lots of hands-on sessions where you get to learn by practicing with your peers. We have plenty of such conferences to choose from now, and I hope you will leave your favorites in the comments here. I don’t want to make my friends unhappy by naming names, but email me and I’ll give you my own recommendations. (Another disclaimer – I’m personally not looking for keynoting gigs, so these are not sour grapes. I don’t like doing keynotes, and I know my limitations as a presenter.)

The organizations sponsoring and organizing conferences are pandering to what they think you, their paying audience, want to see. If you’re going to conferences to see big names and polished speakers, and you don’t care whether the lineup is diverse, go ahead. If you want a really great learning experience, maybe do some more research about where your time and money will reap the most value for you.

I’m not trying to start a boycott, but I am saying: we are the market. Let’s start demanding what we want, and I know these conference organizers will then have to step up and try harder.

Since publishing More Agile Testing with Janet Gregory, I’ve enjoyed time for writing new articles and participating in interviews. Please see my Articles page for links to these. I’d love to hear your feedback on any of these. Have you tried any of the practices or ideas discussed in the articles or interviews?

In the past couple of months I blogged about the spike that JoEllen Carter (@TestingMojo) and I have been doing on automating UI smoke tests for our team’s iOS app: “Pairing on a Mission” and “Continuing the Mission… and continually improving”. Today we did a tech talk for our team, reporting on what we’ve done so far and asking for help in choosing our next steps.

Tech Talk Mind Map

We used this mind map to guide our talk. We explained the goals for our spike: find an automation solution for consistent, repeatable UI smoke tests that would keep bad regressions out of the app store. We demoed one of our test scripts, showed how we had designed our tests, and explained how discovering ‘search with predicate’ made our scripts much easier to write and maintain. We went over the pluses and minuses of our experience so far, pointing out the frustrating roadblocks we encountered, but also the pleasure of learning and working on something fun and cool.

We shared our thought that now is a good time to assess the value of the automated UI tests and decide how to move forward. Our iOS app is being completely redesigned, so the scripts we created for our spike will have to be redone from scratch. We still have to solve the problems that keep us from putting our tests into our CI.

There are several options. We could try an external service provider. We could try other tool sets and frameworks. Or should we abandon the UI automation idea and spend our time doing manual exploratory testing instead? Our iOS app already has about 6,000 unit tests and a number of integration tests that operate through the API. However, we have had bad regressions before that could only be found via the UI, so we know we need something.

We got some good ideas from the rest of the team. One was to ask the developer community within our company if they have any iOS UI automation experiences to share, since we know there are many other iOS projects. We posted a question on the development forum and have already had some good input.

This effort is still inconclusive, so why am I blogging about it? Right before I started writing this, Jason Barile posted an interesting question on Twitter:

“…is your team really clear on what problems you’re trying to solve with automation and what success looks like?”

Our team has a long history of incredible success using test-driven development and automated regression tests at all levels from unit to UI. We have our share of “flaky tests”, but we get a good return on our automation investment. That doesn’t mean that automation is always the solution. We’ll have to figure things out.

Personally, I don’t like doing manual regression testing, so I hope we can find a way to run high-level UI automation smoke tests in our CI. Then we’d have more time for manual exploratory testing, which I’m learning could be even more critical with mobile apps than with other types. We shall keep experimenting and collaborating, finding ways to shorten our feedback loop and ensure our users have a good experience with each new release of our app.


In the delightful keynote “Insights from Happy Change Agents” by Fanny Pittack and Alex Schwarz, I learned a new way to share information with others. Rather than providing a recipe for success, or even a takeaway, we can offer “giveaways”. I shall offer you some of the giveaways I received at Agile Testing Days.

My PotsLightning sketch notes

Sunday I joined the PotsLightning session for the morning. PotsLightning is open to anyone, not only conference participants, and is a self-organizing sort of thing, a combination of open space and lightning talks. Maik Nogens facilitated. My main giveaway was the diversity of the participants: there were people from as far away as New Zealand and Saudi Arabia, there were several women, and participants had experience testing all kinds of software, from tractors to music. My sketch notes show, rather illegibly, some of the topics we covered, such as embedded systems, guilds, and automation.

My next sketch note reminds me that someone – unfortunately now I don’t recall who it was – showed me how he uses mind maps for test reporting as well as planning. He embeds screenshots and screencasts, and uses the time machine feature of MindMeister to show progress. I love the visibility these practices add, and I’m keen to try it.

There was so much packed into the conference sessions, mealtime conversations, and hallway discussions. I even learned things in the vendor expo. Here are just a few of my favorite giveaways that stick in my mind, in no particular order.

  • The leader of the mobile testing dojo asked if we had an app we’d like to use for the dojo. I suggested my team’s app, and the group agreed to try it. I got a lot of useful insights, not only into mobile testing techniques, but into how new users perceive our app! Lots of room for improvement in both!
  • I’ve followed Bob Marshall (@flowchainsensei) on Twitter for a while. His keynote gave me so much to think about. I need to work on my non-judgmental observation skills. Non-violent communication is critical and helps in so many of the problem areas currently getting in our way in the software business.
  • Providing a “crash pad” to cushion failures, and re-thinking failures as simply “learning”. This came out of several sessions, including Roman Pichler’s, in which he showed climbers “bouldering” with a crash pad in case they fall.
  • How to nurture testers? This came up in the tutorial Janet Gregory and I did, as well as in Lean Coffee. Janet held an Open Space session on it, so I hope she will share what came out of it. I think one way is to have fun, and you can see in the photo that testers had fun at the Carnival party during the conference!

    A Carnival of Testers, including Bart Knaack, my husband Bob, me, David Evans, someone I don’t know, Alex Schladebeck, Thom Roden and Gareth(?) from RedGate.

  • Lars Sjödahl gave a nice talk on how we don’t notice what we aren’t expecting. It’s a good reminder to me to use my peripheral vision and Spidey sense when exploring our software, and to try to see what I’m not looking for. Dan Ashby’s session similarly reminded me to think laterally as well as critically.
  • Janet and I find David Evans’s Pillars of Testing so important that we asked him to write it up, and we used it to wrap up the last chapter of our new book. I so appreciate his shout-out to the book and our many contributors in his keynote. Plus he always cracks me up while I’m learning something new. Do watch the video of his keynote (I don’t know when or where the videos will be posted).
  • Antony Marcano’s “Don’t put me in a box” keynote was a reminder of how much we can learn from hearing others’ stories. For example, he told of having to work with programmers on the other side of a big atrium, and simply moving himself over to their side to collaborate and build relationships with them. Fanny and Alex emphasized that it’s all about relationships! Alan Richardson showed the power of short, crisp stories in his keynote. We can learn so much by sharing our experiences.
  • Daniël Maslyn’s talk on robotics showed how exciting the future of testing really is. We tend to get a bit blasé, but that’s a whole exciting world we could enjoy learning about!

My previous post has a list of blogs from Agile Testing Days participants; please check those out for more!

In other news, we are honored that LingoSpot listed Agile Testing as one of the top 16 books every software engineer should read!

Agile Testing Days 2014’s theme was the future of agile testing. What challenges are ahead, and how will we address them? Janet Gregory and I facilitated a workshop with experienced agile practitioners designed to identify some of the biggest issues related to testing and quality, and come up with experiments we can try to help overcome those challenges.

For me, it was exciting that we could get a room full of people who truly had lots of experience with testing on agile teams. We had a diverse mix of testers, programmers, managers, coaches, and people who multi-task among multiple roles, willing to share their experiences and collaborate to generate new ideas. In fact, many of the participants would be good coaches and facilitators for agile testing workshops themselves! More teams are succeeding in delivering business value frequently at a sustainable pace (to paraphrase Elisabeth Hendrickson). Testing and testers are a part of this success.

However, we all still face plenty of problems. During our first exercise, each participant wrote down the biggest obstacles to testing and quality that their teams face. We used an affinity diagram to identify the top three:

  • Whole-team testing: how do we get all roles on a team to collaborate on testing activities, and how does testing “get respect” across the organization?
  • The “ketchup effect”: like getting ketchup out of a bottle, we try and try to deliver software features a little at a time, only to have them come gushing out at the end, making a big mess!
  • Agile testing mindset: how do we change testers’ mindsets? How do we spread this mindset of building quality in, testing early and often, across the organization?

We used several different brainstorming techniques to come up with experiments to work on these challenges: impact mapping, brain writing, and diagramming on a whiteboard (everyone chose mind mapping for this). You can see the results of some of this in the photos. Then we used a different technique to think about other challenges identified, such as how to build testing skill sets, building the right thing, and the tester’s role in continuous delivery.

Building Skill Sets

This last technique was the “giveaway” (to borrow a term from Alex Schwarz and Fanny Pittack) that I was happiest to take from the workshop. Janet and I gave general instructions, but the participants self-organized. Each table group took a topic to start with and mind mapped ideas about that topic. Some teams supplemented their mind maps by drawing pictures. Then the magic happened: after a time period, the groups rotated so each was working on another group’s mind map and adding their own ideas. They rotated once more so that each group worked on each mind map.

You can see from the pictures how many ideas came out of this. As with brain writing, it is amazing how you can write down all the ideas you think you have and then, on seeing someone else’s ideas, think of even more. I encourage you to take a look at these mind maps and choose some ideas for your own team’s small experiments. Even more importantly, I urge you to try a brainstorming exercise such as the group mind mapping, rotating among topics, and see the power of your collective experience and skill sets!

Cube-shaped tester

As we rotated among the different topics drawing on mind maps, one participant, Marcelo Leite (@marcelo__leite on Twitter), made a note on the skills mind map about “cube-shaped testers”. Janet and I talk a lot about T-shaped testers and square-shaped teams, concepts we learned from Rob Lambert and Adam Knight. We asked Marcelo to explain the cube-shaped idea. As with a Rubik’s Cube, we have different “colors” of skills, and we can twist them around to form different combinations. This way we can continually adapt to new and unique situations. A broad mix of skills lets us take on any future challenge.

I’m out here now working on my cube-shaped skills. How about you? I’d love to hear about your own learning journey towards the future of agile testing.

You can take a look at the slides for our workshop, and email me if you’d like the resources list we handed out. Also do check out the slides from our keynote, which sadly the audience didn’t get to see as the projector malfunctioned.

More blogs about #AgileTD:

I know the Agile Testing Days organizers will post a list of all blog posts about the conference, but here are some I made note of (and I still haven’t read them all!) I’m sure I missed some, so please ping me with additional links if you have ‘em.

  • http://www.bredex.de/blog_article_en/agile-testing-days-2014.html
  • http://oanasagile.blogspot.fr/2014/11/agile-testing-days-2014-go-wilde-in.html
  • https://www.flickr.com/photos/iamroot/sets/72157646956955533/
  • http://tobythetesterblog.wordpress.com/2014/11/14/my-top-5-experiences-from-agile-testing-days-conference/
  • http://www.gilzilberfeld.com/2014/11/agile-testing-days-2014the-agiletd-post.html
  • http://my2centsonagile.blogspot.com/2014/11/agile-testing-days-2014-there-are-no.html
  • http://seasidetesting.com/2014/11/17/the-agile-testing-days-2014-day-1-the-tutorial/
  • http://blog.demo.llp.pl/2014/back-to-the-future/
  • http://www.mostly-testing.co.uk/2014/11/agile-testing-days-2014-part-2.html

  • http://rhythmoftesting.blogspot.com/2014/11/agile-testing-days-conversations-and_23.html (be sure to go back from here and read all of Pete’s blogs including his live blogs from AgileTD)

This new book by Gojko Adzic and David Evans is deceptively slim. It’s not just 50 ideas to improve your user stories; it’s 50 experiments you can try to improve how you deliver software. For each experiment, David and Gojko provide you with information and resources “to make it work”.

One chapter that has caught my eye is “Use Low-Tech for Story Conversations”. Gojko and David advise holding story discussions in rooms with lots of whiteboards and few big tables. When everyone sits at a big conference table, looking at stories on a monitor or projected on a wall, they start tuning out and reading their phones. Standing in front of a whiteboard or flip chart encourages conversation, and the ability to draw makes that conversation more clear. Participants can draw pictures, connect boxes with arrows, write sentences, make lists. It’s a great way to communicate.

I’ve always been fond of the “walking skeleton”, identifying the minimum stories that will deliver enough of a slice to get feedback and validate learning. Gojko and David take this idea even further: they put the walking skeleton on crutches. Deliver a user interface with as little as possible below the surface now, get feedback from users, and iterate to continually improve it. As with all the ideas in the book, the authors provide examples from their own experience to help you understand the concept well enough to try it out with your team.

David and Gojko understand you’re working in a real team, with corporate policies and constraints that govern what you can do. Each story idea ends with a practical “How to Make it Work” section so you can get your experiment started.

Again, it’s not just a book of tips for improving your user stories. It’s fifty ways to help your customers identify the business value they need, and deliver a thin slice of that value to get feedback and continue to build it to achieve business goals. It’s a catalog of proven practices that guides you in learning the ones you want to try.


In my previous post I described how my teammate JoEllen Carter (@testingmojo) and I have been pairing for an iOS UI smoke test automation spike. By working together, making use of a book, online resources, and examples sent by a colleague in another office, we were getting some traction. However, we ran into a lot of obstacles and some days were pretty discouraging.

Scaredy tester!

For example, one day our tests failed, and we discovered the programmers had changed many element names. We had known our tests were fragile, but didn’t realize how hard it would be to figure out the new names and fix the tests. After that, we refactored our tests to take out hard-coded strings and use variables, all defined in one place so we can easily change them.
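
The refactoring looked roughly like this (a sketch with invented element names, not our actual helper file):

```javascript
// Every string the tests depend on lives in one helper file, so when the
// programmers rename an element we change one line instead of every script.
var el = {
  usernameField: "Username",
  passwordField: "Password",
  signInButton:  "Sign In"
};

function logIn(user, password) {
  var win = UIATarget.localTarget().frontMostApp().mainWindow();
  win.textFields()[el.usernameField].setValue(user);
  win.secureTextFields()[el.passwordField].setValue(password);
  win.buttons()[el.signInButton].tap();
}
```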

I’m not sure if it was the partial solar eclipse or what, but one day all the scripts we had written quit working on my machine. In fact, I couldn’t even build the product anymore. That was extremely frustrating! One day when I was working from home, the programmer/PO and JoEllen used my work iMac while I screenshared, and we worked together via Zoom to get everything working again. If I hadn’t been pairing, and we didn’t have support from programmers on our team, I know I wouldn’t have gotten past obstacles such as that one.

It hasn’t been THAT easy! But, good book.

We looked for more ways to make the tests easier to maintain. In a book we’ve been using, Test iOS Apps with UI Automation by Jonathan Penn, we read about using predicates. When we got some time with one of the programmers (who is also the product owner, and is championing our automation effort), he agreed that searchWithPredicate looked like a better approach and helped us get a rudimentary version of it working. It lets you look for a particular value somewhere on the screen without having to know exactly which element should contain it (previously we often had to resort to numbered cells). That way, element names can change but you can still find your text string. These are intended to be smoke tests, so that is good enough. JoEllen mastered searchWithPredicate on her own the next day, and added a couple of functions for it to our “test helper” file, where our variables and other functions used by multiple test scripts are defined.
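
To give you an idea, here is roughly the shape of it (a sketch: withPredicate() is the documented UIAElementArray call in Apple’s UI Automation; our real helper functions differ):

```javascript
// withPredicate() filters an element array with an NSPredicate-style
// string, so a test can find a value anywhere on the screen without
// knowing which cell should contain it.
function isTextOnScreen(text) {
  var win = UIATarget.localTarget().frontMostApp().mainWindow();
  var matches = win.staticTexts()
      .withPredicate("name CONTAINS '" + text + "'");
  return matches.length > 0;
}
```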

Another important effort has been figuring out how to get our tests into our CI. We found a ui-automation-runner that lets us run the tests from the command line instead of from Instruments. However, we didn’t know how to bring up the iOS simulator other than via Xcode and Instruments. We set up a Zoom meeting with one of our iOS programmers in Toronto and showed him what we were doing and what we needed. He said he could check in something that would allow us to bring up the simulator from the command line. The product owner put in a chore for this, and we soon had what we needed.

The same day we met with the programmer, my co-author Janet Gregory was in town and visited our office. I showed her our test scripts. She commented that some of the script names were confusing or misleading. JoEllen and I renamed the files so that the purpose of each is more obvious. For example, our test helper file is now called “ios_test_helper.js” instead of “tracker_ios.js”. We added “test” to the name of each test script, for example, login_test.js instead of login.js. Simple changes can help a lot!

Our focus has been on refactoring the tests for maintainability and working towards getting the tests into our CI. We changed the tests to run with fixture data so they can run against a localhost. This wasn’t too bad, since we’re using variables instead of hard-coded strings. But we’ve been adding a few test cases as well. For example, we had a happy path login test, but decided we should verify that you can’t log in with invalid credentials. This required us to learn how to test a pop-up alert. I tried to figure this out on my own while JoEllen was in a meeting, but failed. When we paired on it, we struggled some, but (largely thanks to JoEllen’s talent for this stuff!) finally mastered it. Today we zeroed in on how to mitigate one of the flaky simulator behaviors, too. It feels great to start to understand what the heck we’re doing!
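
For the record, the alert hook we learned about works roughly like this (a sketch: onAlert is Apple’s documented UI Automation mechanism, and the handler logic here is just an example, not our test code):

```javascript
// UIATarget.onAlert fires for every alert. Returning false lets the
// default handler dismiss the alert; returning true leaves it on screen
// so the test can inspect and dismiss it itself.
var expectedAlertName = null; // a test sets this before triggering an alert

UIATarget.onAlert = function (alert) {
  if (expectedAlertName && alert.name() === expectedAlertName) {
    return true;  // the alert under test: let the script handle it
  }
  return false;   // anything unexpected gets auto-dismissed
};
```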

Some days I have been quite discouraged, like when I couldn’t even build the product anymore, but other days I see we really have made a lot of progress. Forcing myself to drive more has built up my confidence. We have a lot of examples now, too, both in our own scripts and in the ones our colleague in Toronto sent us. I learn best from examples, so that’s helping me. I finally feel like I kind of understand what I’m doing. And I’m excited about our next steps, as we make these smoke tests part of our CI, where they can give us more confidence about future releases.

I appreciate the positive feedback I got on my previous post. These are the kinds of experiences that Janet and I and 40 other contributors share in our new book More Agile Testing. I hope you will share your experiences – please put links to your blog posts in the comments here!

Other news:

Speaking of More Agile Testing, Janet and I did an interview with Tom Cagley, which is available now on his SPaMCast. We talk about what’s in the book and about our agile testing experiences.

"We're on a mission..."

Most of us have more we’d like to do than the time to do it, even if we’re in an experienced agile team working at a sustainable pace. A couple of us who work on our iOS team wanted to try automating some UI regression tests. The programmers write their code using TDD with good test coverage, but UI regression testing was all done manually and tended to be a bit hit-or-miss. However, we were stretched thin on the testing side, and that idea didn’t get to the top of the priority list.

Luckily, a highly experienced tester, JoEllen Carter (@testingmojo), joined our team. Soon after that, when we were regression testing an iOS release, JoEllen found a showstopper regression failure. Automating regression tests through the UI, to catch failures such as that one more quickly, became a much more compelling idea.

Getting started

The developer who is also the iOS product owner championed the automation effort, and knew some testers in our company’s Toronto office who were successfully automating mobile UI tests. He set up a meeting so we could pick their brains. They recommended using Apple’s built-in test automation tools in Instruments, along with the tuneup.js library, and kindly sent us a book, examples of their own tests, and other helpful documentation.

The PO helped us install and build the iOS code, and spike some rudimentary automation with Instruments. We agreed on some simple scripts to try, and he put stories for those in the backlog. Since the main focus of our team is our web-based SaaS product, and we testers wear other hats including helping with customer support, it was hard to find time to devote to the iOS test automation. JoEllen suggested blocking out an hour every day to pair on it. This proved key to getting traction.

Pairing FTW

You know how it’s good to be the dumbest person in the room? This worked for me as I watched JoEllen fearlessly use the record function in Instruments, as well as the logging feature, to see how the iOS app pages were laid out and how to refer to the different elements. Every time we paired, we made good progress, starting with the simplest of scripts and incrementally adding to them. Demands on our time for more business-critical work sometimes meant we let the iOS automation effort slide, but we’ve experimented with trying a different time of day, when distractions are fewer.
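
To give you an idea of what those earliest scripts looked like, here is a sketch (the button name is invented; logElementTree() is the UI Automation logging call we leaned on):

```javascript
// logElementTree() dumps the app's current view hierarchy to the
// Instruments log, which is how we learned how pages are laid out
// and how to refer to the different elements.
var target = UIATarget.localTarget();
var app = target.frontMostApp();

app.logElementTree();                          // see what's on this screen
app.mainWindow().buttons()["Projects"].tap();  // then drive one simple action
```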

Obstacles to overcome

We had a few scripts written when the element names changed unexpectedly due to updates to support iOS 8. We were discouraged because we’d had no warning about it from the programmers. They work in a different office two time zones away, and though we have a daily standup and are on chat all day, we testers are usually focused on other products. We discussed this with the PO, and agreed it was time to get our test scripts into CI, so that if we weren’t warned about big changes, they’d at least be immediately obvious to everyone. We wrote more stories for the steps needed.

Some obstacles to making the UI automation scripts part of CI seemed tough. For example, we needed to figure out how to run the tests from the command line rather than from within Instruments. After one pairing session of Internet searching, reading docs, and trial and error, we nailed it! By “we” I mean JoEllen, with me contributing information from searches and a bit of shell command expertise. We also got help at one point with a JavaScript question from a pair of programmers who work on our web-based app.

Overcoming my own weenieness

I have personally experienced a lot of fear and trepidation about the UI test automation effort. I don’t know JavaScript, I’m a newbie at iOS (even as a user) and at mobile app testing in general, and I always find test automation hard, even though I really enjoy doing it. But I finally forced myself to ‘drive’ and felt much better actually trying things. We have a plan for refactoring to make the scripts easier to run and maintain, and of course more tasks to do to get them into CI.

Lessons learned

I can’t advise you on the best techniques for iOS UI test automation, but I can share some general takeaways from this experience so far:

  • Has anyone else in the company already succeeded with a similar effort? Ask them for help – they’re probably happy to share their experiences.
  • Pairing brings much better chances for success than working alone, for so many reasons.
  • Put time on the calendar every day to work on an effort like this, even if it’s only one hour (that is also good advice if you want to write a book!). Experiment with different times of day: right after lunch has been a good time for us, before we get pulled back into the routine and before afternoon meetings.
  • Bring up obstacles during stand-ups, set up meetings to discuss them with people who can help.
  • Write stories for automation tasks to make them visible.

I still have some misgivings about our automation effort. I’d like for the programmers to be more involved. After all, test automation is coding, and they are the people who code all day every day, rather than one hour or so a day at best. However, we have lots of support from the PO who is also a programmer, and we’re moving towards making this automation a part of the existing test automation already in the product’s CI.

If your team has a tough testing problem to solve, try pairing to spike a solution. Reflect and adapt as you go! Oh, and hiring excellent experienced testers like JoEllen is also a good idea, but don’t be trying to poach her away from our team.