Agile Testing with Lisa Crispin
Providing Practical Agile Testing Guidance

At Agile 2015, I learned about example mapping from Matt Wynne. Linda Rising’s session reinforced my enthusiasm to continue doing small, frugal experiments. I came back to work the following week feeling like it would be pretty easy to try an experiment with example mapping.

Example Example Map from JoEllen Carter (testacious.com)

You can use example mapping in your specification workshops, Three Amigos meetings (more on that below), or whatever format your team uses to discuss upcoming stories with your product owner and/or business stakeholders. Write the story on a yellow index card. Write business rules or acceptance criteria on blue index cards. For each business rule, write examples of desired and undesired behavior on green index cards. Questions are going to come up that nobody in the room can answer right now – write those on red cards. That’s all there is to it!
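Captured as plain text instead of cards, a map for a hypothetical story might look like this (the story and its details are made up for illustration):

    Story (yellow): Reset a forgotten password
      Rule (blue): Reset links expire after 24 hours
        Example (green): link clicked after 23 hours – password reset succeeds
        Example (green): link clicked after 25 hours – "link expired" message
      Rule (blue): Only the most recent link works
        Example (green): request two links, click the older one – "link expired" message
      Question (red): Should repeated link requests be rate-limited?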

The problem

My team was experiencing a high rate of stories being rejected because of missing capabilities, and our cycle time was longer than we’d like. I asked our product owner (PO) if we could experiment with a new approach to our pre-Iteration Planning Meetings (IPM).

Up to this point, our pre-IPM meetings were a bit slapdash and hurried. The PO, the development anchor and I met shortly before the IPM and went quickly through the stories that would be discussed. There wasn’t a lot of time to think of questions to ask.

Our experiment

For our new experiment, we decided to try a “Three Amigos” (coined by George Dinwiddie) or what Janet Gregory and I call “Power of Three” approach. This is also similar to Gojko Adzic’s specification workshops. In our case it was Four Amigos. We time-boxed our pre-IPM meeting to one hour and held it two business days before the IPM. The PO, designer, tester and developer anchor gathered to discuss the stories that would be estimated in the IPM. Our goal was to build a base of shared understanding to draw on in the IPM, so that when testing and coding start on a story, everyone knows what capabilities are needed.

We tried out example mapping as a way to learn more about each story ahead of the IPM. Now, I’ve practiced example-driven development since I learned about it from Brian Marick back around 2003. So, I was surprised at how effective it is to add rules along with the examples. The truth is, you can’t write adequate tests and code just from a few examples – you need the business rules too.

Since we have remote team members, using Matt Wynne’s color-coded index cards wouldn’t work for us. We tried using CardboardIt with its virtual index cards, but it proved a bit slow and awkward for our purpose. Since our product is Pivotal Tracker, a SaaS project tracking tool, we decided to try using it for the examples, rules and questions. For our planning meetings, we share the Tracker project screen in the Zoom video meeting for remote participants, so putting our example maps in text in our Tracker stories is a natural enough fit for us.

We’ve iterated on our Amigos meeting techniques over several months now. We don’t example map every story. For example, design stories may be covered well enough with the Invision design doc. What’s important is that we are chipping away at the problem. Feedback from my teammates is that they have a much better understanding of each story before we even start talking about it in the IPM. There may still be questions and conversations about the story, but the conversation goes deeper into the story’s capabilities because the basics are already there. And, our story rejection rate has gone down, as has our cycle time!

A template for structuring a conversation

During the pre-IPM “amigos” meeting, each story gets a goal/purpose, rules, examples, and maybe a scenario or two. We found that the purpose or goal is crucial – what value will this story deliver? The combination of rules and examples that illustrate them provides the right information to write business-facing tests that guide development. Here’s an example of our example mapping outcomes in a Tracker story:

Example mapping outcomes captured in a Tracker story

 

Developers use this information to help them write tests that guide development. Ideally (at least in my opinion), those would be Cucumber BDD tests. The example map provides personas and scenarios along with the rules. However, sometimes it makes more sense to leverage existing functional RSpec tests or Jasmine unit tests for the JavaScript.
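To make that concrete, here’s a minimal RSpec-style sketch – not our actual tests; the Story class and the rule are invented for illustration – showing how one blue-card rule and its green-card examples translate directly into business-facing tests:

    require "rspec/autorun"

    # Hypothetical domain object, for illustration only.
    class Story
      def initialize(estimate:)
        @estimate = estimate
      end

      # Business rule (blue card): a story must be estimated before it can be started.
      def startable?
        !@estimate.nil?
      end
    end

    # Each green-card example from the map becomes one test.
    RSpec.describe "Rule: a story must be estimated before it can be started" do
      it "allows starting an estimated story" do
        expect(Story.new(estimate: 2).startable?).to be true
      end

      it "blocks starting an unestimated story" do
        expect(Story.new(estimate: nil).startable?).to be false
      end
    end

The nice thing about this shape is the traceability: each failing test points straight back to the card it came from.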

As a developer pair starts working on a story, they have a conversation with one or more testers, so that we’re all on the same page. When questions come up, we reconvene the Amigos to discuss them.

User stories act as a placeholder for a conversation. Techniques like example mapping help structure those conversations and ensure that everyone on the delivery team shares the customer’s understanding of the capabilities that story should provide. Since we’re a distributed team, we want a place to keep detailed-enough results of those conversations. Putting example maps in stories is working really well for that.

Example mapping is straightforward and easy to try. Ask your team to experiment with it in your Amigos meeting a day or two before your next iteration planning meeting. If it doesn’t work well for you, there are lots of other techniques to try! I hope to cover some of those here in the coming weeks.

I pair on all my conference sessions. It’s more fun, participants get a better learning opportunity, and if my pairs are less experienced at presenting, they get to practice their skills. Big bonus: I learn a lot too!

I’ve paired with quite a few awesome people. Janet Gregory and I have, of course, been pairing for many years. In addition, I’ve paired during the past few years with Emma Armstrong, Abby Bangser, and Amitai Schlair, among others. I’ve picked up many good skills, habits, ideas and insights from all of them!

The Ministry of Testing published my article on what I learned pairing with Abby at a TestBash workshop about how distributed teams can build quality into their product. If you’d like to hone your own presenting and facilitating skills, consider pairing with someone to propose and present a conference session. It’s a great way to learn! And if you want to pair with me in 2017, let me know!

Yesterday I ran across Dan Billing’s post about MEWT5. An excellent and thought-provoking post, but at the end, a picture of the 14 participants showed only one woman. Given that testing is one of the areas of software that has a decent percentage of women, and given all the awesome women in testing I know who live in England, that was disheartening.

I know most of the people in that picture. I know they are kind, talented, open-minded, smart testing professionals who would not intentionally exclude women from events. They all spend a lot of time helping all of us in the testing community with their organizing, writing and speaking.

I also know how hard one has to work to get at least a representative number of women included in testing events. I know now that 40% of the invitees of that event were women (why only 40%, well, let’s put that off for now) and that several could not accept, or had to cancel last-minute. That’s too bad, but again, I believe it just takes more effort to avoid the last-minute problems.

Maaret Pyhäjärvi summed up the whole issue brilliantly in her subsequent post. So please just read that. I am seeing a trend among people I know, at least: when women organize events, they think of a disproportionate number of women to include. When men organize events, they think of a disproportionate number of men to include. We are humans, we seem to know more people of our own gender professionally, and our brains have built-in biases that are hard to fight.

I will leave you with some additional reading that helps explain why we don’t have more women in tech, why we don’t have more women in high level positions in tech, and probably, why women don’t necessarily feel comfortable proposing conference sessions.

Note: These are based on science. Yes, legitimate studies.

I have more of these, contact me if you want more, but those should get you started in understanding why it really is harder for women to get recognition, to feel confident, to realize their potential.

What am I doing to work on the problem of too few women? I volunteer for conference program and track committees, I am a Speakeasy mentor, I mentor other aspiring presenters outside of Speakeasy, I work with conference organizers to recommend and find women presenters, and I pair present with newbie presenters (women and men, though my bias is for women because they have fewer opportunities) to give them confidence and experience. I hope you will do all these things too.

Action item: If you’re organizing a test event, please contact me for recommendations of awesome women to invite. And, work with Speakeasy, who can help you find awesome women presenters.

Disclaimer: Yes, we need to fix all the other diversity problems too. We need more people of color, LGBQ, and other minorities sharing their ideas and experiences. Diversity is what helps us improve and progress. But I have chosen to work on what I know best, trying to get more women presenting at software conferences.

Thanks for reading.

I’m a tester on a (to me) relatively large team. Recently I was asked to facilitate an all-hands retro for our entire team. I’d like to share this interesting and rewarding experience. It’s not easy for a large team to enjoy a productive retrospective. I hope you’ll share your experiences, too.

At the time of this retro, our ever-growing team had about 30 people, including programmers, testers, designers, product owners, marketing experts, and content managers. While we all collaborate continually, we’re subdivided into smaller teams to work on different parts of our products.

I work mainly on the “Frontend” team. Our focus is our SaaS web app’s UI. We have weekly retros. The other big sub-team is the “Platform” team, further divided into “pods” that take care of our web server, API, reports and analytics, and other areas. This team has biweekly retros. The “Engineering” team (everyone in development, leaving out designers and marketing) has monthly “process retros”. But all-hands retros had become rare, due to logistics. Several of our team members are based in other cities. We hadn’t had an all-hands retro for more than six months.

Preparation

The team’s director asked me a few days in advance to facilitate the retro. I accepted the challenge gladly but felt a bit panicked. Our team usually follows a standard retro format: we spend some time gathering “happys”, “puzzlers” and “sads”, then discuss them – mainly the puzzlers and sads – and come up with action items, with a person responsible for taking the lead on each one. Over the years this has produced continual improvement. However, I think it is good to “shake up” the retro format to generate some new thinking. Also, retros tend to be dominated by people who aren’t shy about speaking up. I wanted to find a way for everyone to contribute.

Thanks to my frequent conference and agile community participation, I have a great network of expert practitioners who like to help. I contacted the co-authors of two of my favorite books on retrospectives: Tom Roden, co-author with Ben Williams of 50 Quick Ideas to Improve Your Retrospectives, and Luis Gonçalves, co-author with Ben Linders of Getting Value out of Agile Retrospectives. (I’ve also depended for years on Agile Retrospectives: Making Good Teams Great by Esther Derby and Diana Larsen.) I used their great advice and ideas to come up with a plan.

Food is a great idea for any meeting, so my teammates Jo and Nate helped me shop for and assemble a delicious platter of artisanal cheese and fresh berries (paid for by our employer, though I would have been willing to donate it.)

Pick a problem

Around 30 of us squeezed into a conference room. We had started a few minutes early so that our Marketing team could hand out new team hoodie sweatshirt jackets for everyone! That set a festive mood. Also, it is a tradition that people who are so inclined enjoy a beer, wine or other adult beverage during retros. For this reason, retros are typically scheduled at the end of the day. And of course, we had cheese and fruit to enjoy.

We had just over an hour for the retro, so I had to keep things moving. I gave a brief intro and said that we would choose one problem to focus on instead of our usual retro format.

I asked that they divide into their usual “pods”. Their task: choose the biggest problem that can’t be solved within their own pod that they’d like to see solved, or at least made better, in the next three months. In other words, the most compelling problem that needs to be addressed by the whole team. They should come back to the big group with this problem written on a sticky note.

As I expected, designers, testers and customer support specialists joined the pods they work with the most. Contrary to my expectations, the “Platform” team decided not to further subdivide into pods for the activity. Together with the Marketing/content team and the Frontend team, that left only three groups. I made a quick change of plan: I asked each group to pick their top *two* problems, so we’d have more to choose from. I gave them 10 minutes for this activity.

One group stayed in the conference room and the other two found their own place to work. They wrote ideas on sticky notes and dot voted to choose their highest priority problem areas. I walked around to each team to answer questions and let them know how much time they had left.

After 10 minutes, I called everyone back to the conference room. A spokesmodel from each team explained their top two problem areas and why those needed help from the whole team. We dot voted to choose the top topic; everyone got two votes. I would have preferred to use a process such as the Championship Game from Johanna Rothman’s Manage Your Product Portfolio, which would have been more fair, but we didn’t have time. Everyone seemed happy with the results anyway.

The winning problem area was getting more visibility into our company’s core values and technical practices, and how to integrate better with the company’s “ecosystem”. I don’t want to get into too much detail, because what I want to share is our process, rather than the specific problem.

Design experiments

Now that we had a problem for the whole team to solve, I explained that we wanted to design experiments, and we could use hypotheses for this. I explained Jason Little’s template for experiments:

Experiment template

We hypothesize by <implementing this>
We will <improve on this problem>
Which will <benefits>
As measured by <measurements>
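To show the shape, here’s a made-up filled-in version (not one of our real experiments):

    We hypothesize by <pairing a tester with each developer pair for one story per week>
    We will <improve shared understanding of testing across the team>
    Which will <catch problems earlier and reduce rejected stories>
    As measured by <the story rejection rate over the next two iterations>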

I asked the groups to think about options to test the hypothesis. Who is affected? Who can help or hinder the experiment? I emphasized that we should focus on “minimum viable changes” rather than try to solve the whole problem at once. I gave an example using a problem that our Frontend team had identified in a previous retro.

Experiment example

I had three colors of index cards and we handed those out randomly around the large group. Then we divided up by index card color, so that we had three groups, but each had different people than before. Again, each group found a place to work. Each designed an experiment to help address the problem area. I told them they had 10 minutes, but that wasn’t enough time, so I ended up letting them have 15. I walked from group to group to help with questions and keep them informed of the time.

Then, we got back together in the conference room. A spokesmodel for each group explained their hypothesis and experiment. We had three viable experiments to work towards our goal.

Wrap-up

At this point, we had experiments including hypotheses, benefits, and ways to measure progress. We only had about 10 minutes left in the retro, and I wasn’t sure of the best way to proceed. How could we commit to trying these experiments? I asked the team what they’d like to do next. I was concerned about making sure the discussion wasn’t dominated by one or two people.

The directors and managers put forward some ideas, since they knew of resources that could help the team address the problem area. There were videos about the company’s core values and practices. We also discussed having someone from inside or outside our team do presentations about it. There were several good ideas, including coming up with games to help us learn.

Again, the specific experiments we decided to try aren’t the point; the point is that we came up with an action plan that included a way to measure progress. The managers agreed to schedule a series of weekly all-hands meetings to watch videos about company core development values and practices.

Immediately after the meeting, the marketing director and the overall director worked together to create a short survey to gauge how much team members currently know about this topic, and everyone took the survey. After watching and discussing all the videos, we can all take it again and see what we’ve learned.

I had hoped to wrap the retro up by doing appreciations, but there wasn’t time. With such a big group, I think sticking to a time box is important. We can try different techniques in future retros.

Outcomes

I was surprised and pleased to get lots of positive feedback from teammates, both in person and via email. My favorite comment, from one of the marketing specialists, was: “It’s the best retro by far I’ve been to in four years here. I felt productive!”

The initial survey has been done, and we’ve had three meetings so far to watch the videos. We watch right before lunch so that people can talk about it during lunch. Having 30 people in an hour-long meeting every week is expensive, which shows a genuine commitment to making real progress on our chosen problem area.

I think the key to success was that by dividing into groups, everyone had a better chance of participating in discussing and prioritizing problems, and in designing experiments to address them. The giveaway hoodies along with food and drink made the meeting fun. We stuck to our time frame, though we did vote to extend it by a few minutes at the end. Most importantly, we chose ONE problem to work on, and designed experiments that included ways to measure progress in addressing that one problem.

The team’s directors have decided they’d like to do an all hands retro every six weeks, and they’ve asked if I could facilitate these. I think it’s a great idea to do the all hands retro more often. I’m not sure I should facilitate them all, but I’ll do what I can to help our team keep identifying the biggest problem and designing experiments to chip away at it.

Do you work on a large team? How do you nurture continual learning and improvement?

My team currently has more than 20 programmers and two testers. One of those testers is also the testing and support manager, and I also help with support. In terms of the quintessential “tester-developer ratio”, that sounds a bit dire. But we take a whole-team approach. The programmers pair all the time and are excellent TDD practitioners. We’re using practices such as example mapping to build shared understanding of stories. And we’re now using behavior-driven development (BDD) to ensure each story meets acceptance criteria. On top of all that, developers are starting to do more exploring on each story before they mark it ready for acceptance.

If I can help my teammates learn more exploratory testing skills, it helps me focus on exploring on the feature level. We already do group exploratory testing sessions, called “group hugs”, and we’ve experimented a bit with exploratory testing in a format similar to mob programming. I’m keen to find more ways to transfer exploratory testing skills across the team.

Tech Talk Wednesday

Every Wednesday, our company has lunch catered for everyone to eat while learning something from a fellow employee giving a “tech talk”. I added myself to the schedule to do an exploratory testing workshop. On the day, a number of developers participated – mostly from my own team, but some from other teams in our office – while enjoying a delicious catered lunch. Many of them were familiar with exploratory testing, although nobody had read Elisabeth Hendrickson‘s Explore It! book.
I started off with a quick overview of the purpose of exploratory testing and how charters can help. I went over a few example charters based on the template Elisabeth uses in her book.
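That template, from Explore It!, is a simple fill-in-the-blanks sentence:

    Explore <target>
    With <resources>
    To discover <information>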

I asked participants to pair up, group up or mob. Each pair or group chose something to test from the assortment I’d brought: hand-held toys plus other items such as a collapsible vase. I asked them to come up with a charter, explore, and debrief. Each table was supplied with a copy of Elisabeth Hendrickson’s Test Heuristics Cheat Sheet.

Exploring

Creative exploring ensued. I got the biggest laugh watching a pair testing battery-powered “nano bugs”. I had scissors available to open packages. This pair picked up the scissors and started cutting legs off the bugs to explore how they worked with fewer legs. They determined that they worked fine with only two legs instead of eight. No legs wasn’t good, though they made a sort of wheelchair with the product packaging. One of the devs said “Normally I wouldn’t damage something, but being a tester made me willing to try cutting the legs off.” (Mind you, I supplied very inexpensive little toys, so it was fine to “stress test” them). They created various surfaces and enclosures to test the nano bug performance.

A team with a puzzle toy wrote a charter to see if it was fun for an eight-year-old. They found it hard to figure out which puzzle was easiest to do first – the one they picked first turned out to be the trickiest. The small pieces would be easily lost. And there were only six puzzles, so it would get boring. Maybe not so great for the eight-year-old.

Other teams with different toys tried different perspectives. One played Kanoodle as a person with unsteady hands who was color blind. Not a great experience for that persona. One played with a maze game as a very strong but elderly person – it proved durable. The biggest risk for the maze game was losing the stylus, but it could easily be played with a pen instead.

For me the most fun thing to test was the collapsible vase, an idea I got from Lanette Creamer. The team testing it wanted to see how much water was required to make it stable enough that a cat brushing against it couldn’t tip it over. They found it remarkably cat-resistant.

Takeaways

The developers said they thought charters would be helpful for exploring a story before they marked it finished. They particularly liked testing in a group, saying it generated far more ideas than they would have thought of individually. I was pleased with the general feeling that participants got ideas that would help them explore the features they develop.

The Starship Enterprise was usually in control…

Recently I listened to one of Amitai Schlair’s excellent Agile in 3 Minutes podcasts (also available on iTunes), about Control. We had a brief Twitter conversation about it with Karina B., aka @GertieGamer. Amitai tweeted, “We can’t control what happens to us, but we control how we’d like to feel about it next time + what we do about it.” Karina tweeted, “the illusion of control is a fun magic trick that always leaves people wanting more”.

My dressage trainer, Elaine Marion, with Flynn

I enjoy the illusion of being in control. I think that’s one reason that one of my equestrian sports of choice is dressage. If I bombed around a cross-country jumping course on horseback, I’d have to let the horse make many of the decisions. The dressage arena feels so much more genteel. If I’m in perfect communion with my mount, performing the well-defined movements, I feel like I’m in charge. It’s a nice illusion! (In truth, I should be allowing the horse to perform correctly…)

The “What-Ifs”

I’ve been driving my miniature donkeys for many years. We’ve learned so much from my donkey trainer, Tom Mowery, and I trust my boys to take care of me. Still, if I’m driving my wagon with my donkey team down the road, and a huge RV motors towards us, I start getting what Tom calls the “what-ifs”. What if they run into the path of the RV? What if they spook and run into the ditch? I have an even worse case of the “what-ifs” with my newest donkey, a standard jenny, who is still a beginner at pulling a cart. She doesn’t steer reliably yet. My illusion of control is easily dispelled. What if she runs into that fence? This happens to software teams as well. What if we missed some giant bug? What if this isn’t the right feature?

We don’t need control – we need trust in our skills

Enjoying the view and forgetting the “What-ifs” with Marsela

Tom’s words of wisdom about my worry over losing control are: “Lisa, if anything goes wrong, you have the skills to deal with it.” This is true, and I keep practicing those skills so they’ll be ready when I need them. If my donkeys run towards either an RV or a ditch, I can remember that they are actually trained, and cue them to change course and do something safer.

It’s the same with software development. We constantly learn new skills so that we can deal with whatever new obstacles get in our way. We identify problems using retrospectives, and we try small experiments to chip away at those problems. As a team and personally, if we’re confident that we have good skills and tools at our disposal, we don’t need an illusion of control. Whatever happens next, my team and I are in a position to do something about it.

In his podcast, Amitai suggests that if we can accept feeling less in control, we might make better decisions. I think if we focus on continually learning new skills and tools, our confidence in our ability to adapt to the current situation is much more important than feeling in control.

 

Lean Coffee

Janet and I facilitate the Agile Testing Days Lean Coffee each morning – a great way to start the day!

Agile Testing Days 2015 is coming up November 9-12 in Potsdam, Germany. This year’s theme: “Embracing Agile for a Competitive Edge – Establish leadership by delivering early, rapid & iterative application releases”. To paraphrase Elisabeth Hendrickson, the ability to release business value frequently at a sustainable pace is the very definition of “agile”. That sustainable pace part – that’s where the focus on quality comes in.

We’ve always known we can’t “test quality in” (though that doesn’t stop many companies from trying). But it can be hard to learn the practices that help us bake quality into a software product, starting when that product or feature is just a half-baked idea.

At Agile Testing Days, Janet Gregory and I will lead a tutorial on ways that teams can build and share expertise in testing and other areas, so that the whole team achieves a level of quality that lets them deliver frequently and fast. The tutorial is open to everyone who wants their teams to achieve frequent, even continuous, delivery of valuable software: testers, BAs, programmers, architects, POs, PMs, managers, Ops practitioners and more.

We draw and brainstorm a lot!

We hope you will come to Agile Testing Days, and if you do, we hope you will participate in our tutorial! We expect participants will push the boundaries and generate some new ideas, as in our tutorial last year where we identified ways to overcome future challenges in agile testing. (I’ve been using some of those myself, such as the rotating group brainwriting/drawing/mapping, and cube-shaped testers.) This year, we’ll again let participants shape the agenda, with these goals in mind:

  • Lists of skills testers have that they can share with the team – and effective ways to do experiments that you’ll take away to try with your team
  • How you can build your T-shaped skill sets and contribute to the cube-shaped team skills
  • Ways the whole team can collaborate to make the best use of test automation
  • And the goals that you bring!
Sightings of Super Agile Person are expected!

Please watch our video for more information. Hope to see you in Potsdam!

 

I love Weekend Testing and the chance it gives to collaborate with and learn from testing practitioners around the world. However, weekends are my only time to play with my donkeys and catch up with friends and family, so I rarely have time. In past years, I was able to participate in Weeknight Testing Europe, because it happened during my lunchtime, but AFAIK that has kind of died out. However, the first weekend in September was a three day weekend for us in the USA, which gave me the extra time I needed! I followed the instructions from our facilitator, JeanAnn Harrison (whom I’m proud to say contributed to More Agile Testing), which had me frantically draining my iPhone at the last minute by leaving on the LED flashlight plus every app I could open.

Weekend Testing is conducted via Skype chat. The facilitator provides the information about which app to test, what resources are needed (in this case, a mobile device with its battery below 15%), and other information such as a particular charter if applicable. In this case we were to test the Weather Channel phone app, with a focus on usability. The normal format I’ve experienced is one hour of testing followed by one hour of debrief, but this session didn’t follow that format.

Pairing FTW!

I prefer pairing for Weekend Testing: it’s more fun and I learn more. Shanmuga Raja and I agreed to pair, and I suggested we collaborate via a mind map. We didn’t plan that ahead, so we scrambled around and settled on Mind42. I have to say it isn’t my favorite; I prefer MindMeister (paid) and MindMup (free).

Snippet of our mind map

Normally I like to use the mind map to plan testing and record results. In this case, though, JeanAnn had already set out a long list of questions to guide our testing, mostly around the usability of the app, and other considerations such as how it affected battery drain. I admit to getting a bit flustered and not following my normal approach. If you click on the full mind map, you’ll see it’s a bit stream-of-consciousness. We (mostly me) recorded our reactions to the usability of the app, what we discovered as we went along, the things that puzzled us, and other information such as the devices we used.

Shanmuga’s Android phone crashed, apparently due to sudden battery drain, which was interesting. My iPhone 6 just steadily lost battery, but it didn’t seem abnormal. Having different devices made pairing especially interesting in this session, and it was fun to compare notes via the mind map.

Insights

I learned plenty from the Skype discussion. For example, I had never really thought about testing to see what an app does to the battery drain rate, though I’ve heard our own mobile developers discuss it. I also learned from JeanAnn that an app can affect the heat level of a device; I had never considered the potential issues from that.

One thing I’m working on right now is learning better ways to help other people learn testing skills. I find it hard to articulate to someone else how I do investigative or exploratory testing. So it was helpful to read the ongoing Skype discussion with comments from JeanAnn, Michael Bolton and others. This inspired me to try a charter, using the format I learned from Elisabeth Hendrickson’s book Explore It!

Explore the different settings for temperature and other metrics to discover whether they display consistently everywhere in the app.

I found an issue with the “hybrid” setting, which was supposed to give Centigrade temperatures but “Anglo-Saxon” measurements (e.g., miles rather than kilometers). After changing to that setting, the app still showed Fahrenheit temperatures on some views.

Takeaways

After this session, I started using more charters at work, which I’ve found helpful for thinking about where to focus my testing and what risks could affect our new features. I’m continuing my reading to learn how to transfer testing skills to other team members. On my team, we are hoping to experiment with more tester–programmer pairing and having the developer pairs do more exploratory testing on each story before they pronounce it ‘finished’. Mike Talks has been helpful to me, with his blog posts on learning exploratory testing, and reading suggestions such as “Your Deceptive Mind“. All these ideas and techniques have been around for years; I just haven’t taken the time to make better use of them.

I highly recommend joining a Weekend Testing session. It’s a great way to hone your testing and thinking skills. Just like so many other professions, we need to practice our craft, and it’s harder to practice at work. The inspiration you can get from these sessions is priceless!

Warning – really long! I learned so much!

It was a hectic week for me. I co-chaired the Testing & Quality Track with Markus Gärtner, which meant I went to a lot of sessions in our track – no hardship! I also did an Agile Boot Camp session on “Everyone Owns Quality” with Janet Gregory, and a hands-on manual UI testing workshop with Emma Armstrong. (Slides for both those sessions are on the Agile 2015 Program). Here are some highlights of what I learned in sessions and conversations.

Note: Slides for many of these sessions are accessible via the Agile 2015 program online, I urge you to go look!

A Magic Carpet Ride: A business perspective on DevOps, Em Campbell-Pretty

Em Campbell-Pretty shared her experiences with improving quality in a giant enterprise data warehouse. I love her “just do it” attitude in the face of seemingly unsolvable problems (such as 400 pages of requirements!) I suspect her wonderful sense of humor helped her cope! Here’s a sample: “It would be fair to say that panic set in. Ah – opportunity!”

An example of Em’s creative approach: A program-wide retrospective helped stop the finger-pointing and led to improvements, but the Ops and Integration & Build teams were kept separate from the delivery team. When Em couldn’t get this changed, she appointed herself Product Owner of those teams.

Em noted that data warehouse developers often don’t come from the world of software engineering, so they don’t have the same understanding of source code control and the idea of frequent commits and integrations.

Team building with a haka

Some great tips from Em: Shrink the change, don’t try to boil the ocean. Make sure every team has time carved out to get familiar with a new process. Culture matters! Their onshore team didn’t even know each others’ names, plus there was an offshore team. Em started a “Unity Hour” to kick off every sprint by bringing everyone together to play games. She showed a hilarious video of a haka performed by one of the teams.

The results Em shared showed that all these efforts paid off, for example, huge reductions in time and effort to deploy a new release.

At this point, I had to leave to enjoy talking with the inimitable Howard Sublett for an Agile Amped video podcast (Subscribe on iTunes to the series – those videos are full of great people and great ideas!). You can find links to Em’s slides on the program.

Mob exploratory testing! with Maaret Pyhäjärvi

Maaret Pyhäjärvi led an enthusiastic group through a mob “deep testing” session. This is a terrific way to help people learn effective in-depth testing techniques and provide quick feedback.

Use visuals to plan your day. Maaret uses a 2×2 model. Personally I like mind maps. Find what helps your brain the most. Plan to test fewer things, taking your time, going deep.

Our attempt at hands-on mob testing a real app showed this is a skill you have to practice. Mob testing follows mob programming practices. The navigator has to tell the driver – the person at the keyboard – what to do. The rest of the mob contributes their observations and ideas. Use a timer to have everyone switch roles at a set, short interval. Skills such as how to talk to the driver and where to focus your testing take deliberate practice. Take time to learn!

Exploratory Testing in a Nutshell from Maaret

One important lesson was to ask questions, learn the terminology, and focus on the areas where feedback will be most valuable. My own team has tried mob testing once so far, and having the PO in the room helped a lot. Maaret says learning to test is like learning to drive a car. Don’t try to change gears in the middle of a turn – keep it simple, do one thing at a time, practice in a safe place.

Prototyping – Iterating Your Way to Glory, Melissa Perri and Josh Wexler

In this session I learned more about how to design and test paper prototypes. If you use high-fidelity mockups on a computer, even if you created them quickly with a design tool, people fall into the “fear of killing your baby” syndrome. So much energy has gone into the idea, nobody wants to suggest changes. So, use pencil and paper. Color just distracts people – “I don’t like that blue”. Keep it simple.

One aha moment for me was the idea of creating goals for the persona you’re using. What would the user want to accomplish with the UI? We practiced sketching our prototypes for a mobile UI following a detailed narrative tailored to the persona, then role playing the persona and getting feedback on the design. Learning to ask open-ended questions is a skill I hope to improve: “What would you expect to happen when you click this button?” With these fast feedback cycles, we can learn so much before we write a line of code, and save a ton of time. Emma and I made this point in our own workshop.

Linda Rising, Stalwart Session

My sketch notes for this session are already posted. But in case you can’t interpret my drawings, here are a few highlights of the conversation.

Linda listens

Linda talked about how we resist change. For example, academics don’t want to modernize their programming courses: “The Cobol course is already in the can”.

Linda cited examples from Menlo Innovations on how to find joy through learning. She urged us to be our own scientists and do small experiments. There’s no one right answer, but patterns help. You’ve got to see it in action; don’t depend on “experts” who don’t know what feedback you’ll get.

Transparency is important in growing belief from the ground up. If there’s no trust, controlling doesn’t work. Don’t try to “sell” – instead, listen respectfully to what people have to say. You can change the environment. I’ve used the patterns from Linda and Mary Lynn Manns’ More Fearless Change, and I can attest to their helpfulness in trying small experiments and nudging baby steps of change.

Linda urged us to never give up on anybody. Don’t stereotype. Try, experiment, listen and learn, adapt. Fear less! We all come with biases and beliefs, but we have to continually change, people do shift beliefs. Linda recommended a podcast series and book, “You are not so smart”. That’s going on my podcast list!

Does the role of tester still exist? Juan Gabardini

Juan says we no longer identify ourselves by the activities we perform. I’d like that to be true – I do think we are moving in that direction – but it seems hard for humans to dispense with labels. There are people out there who think “Testing isn’t technical, anyone can test”. Juan led us through considering “old school” testing in phased and gated environments versus how we test in agile development.

Skills for testing on agile teams

“Keep your eyes on the stars and your feet on the ground” urged Juan. Domain knowledge from contact to sell, from order to delivery is essential. He talked about “improving the last mile” by applying the scientific method. Use models such as Cynefin to understand the complexity of a feature. In our table groups, we came up with our own ideas for skills and characteristics that help with testing on agile projects.

Elisabeth Hendrickson, Stalwart

I was in the fishbowl for part of this session, which impacted my note-taking. Elisabeth shared how she moved from the title of “Director of Quality Engineering” to “Director of Engineering” by collaborating with four other directors so that they shared responsibility for global quality across all the teams working on the product, and removing obstacles for the teams. If developers have a problem, they ask the first director they see for help.

Elisabeth described how a few “explorers” joined delivery teams without testers and amped up their exploratory testing. They applied good development practices to the cloud. They automated solutions to lots of problems, for example, they use images to make every dev and test environment consistent. A message I got from this session was that we add value by providing something nobody else can provide.

One issue raised in the fishbowl was QA managers who resist a whole-team approach and don’t want to test early and often. Elisabeth said some of these managers identify as “the person at the end”, and it’s hard for them to let go of that role. We have to create safety for testers who are worried about changes.

A great quote from a participant: “Our CEO went to a conference and came back and said we should be more DevOps-y.” I didn’t note down Elisabeth’s answer to that!

Mob Programming – Jason Kerney

I don’t have good notes on this because I couldn’t hear the presenter very well. Luckily, this was recorded on video, and you can read the experience report paper that goes along with it! I appreciated hearing the experiences of Jason’s team. The paper is really helpful, with ideas such as doing study sessions. I wish my team would try mob programming for certain situations, such as when they’re doing something new where maybe only one developer has experience. We’re sure going to keep doing mob testing sessions when appropriate.

Be Brave! Try an Experiment! Linda Rising

I missed the beginning of this awesome session due to another video interview (this one along with Janet, by Craig Smith of InfoQ; I don’t know where to find that yet). But her slides are on the Agile 2015 program.

I learned the idea of continually trying small experiments from Linda years ago – it works! Linda noted that we aren’t doing scientific experiments, rather, we are doing trials, tinkering to see what works better. But, just for ease of conversation, we can still call them experiments.

Experiment? Trial? Tinker?

We’re all born as natural scientists, but the way our schools work changes our focus from exploring and trying things – thinking “how” – to linear thinking, focusing on “what”. Agile lets us be babies again. Failure is OK! We can be more scientific. We need MANY experiments, and we don’t have the resources to do good science. Action is our best hope – not to find the truth or understand the why, but to learn what works for us in our environment.

Linda explained confirmation bias – we only see information that confirms the beliefs we hold already. We also suffer from cognitive dissonance, holding two conflicting beliefs at once. To help overcome the biases all humans have, talk out loud, draw on whiteboards, and get enough sleep. There was more, but I couldn’t sketch note fast enough.

Her message was clear, though. What can we do, to take some small action, to see whether it works for us or not? Not in theory, but really trying it. Do many small, simple, fast and frugal trials. Vary the context, number of participants, degree of enthusiasm, and kind of project. The goal is learning about the thing, not proving it works for everyone. Re-test.

Be prepared to be surprised and learn even from “failure”. Don’t keep doing something because you’ve got so much invested in it. The investment is already gone. If you can’t finish your expensive steak, its cost makes no difference now. Involve everyone. Poke, sense, respond. Don’t look for answers, find ideas for trials.

If we have good outcomes, but “those people” (for example, managers) are in our way, don’t try to convince them. They also have biases supporting their beliefs. Data doesn’t convince people. Instead, show management the strategic value in a way that is easy to understand. Emphasize that learning happens regardless of outcome.

Performance Testing in Agile Contexts, Eric Proegler

Eric started by explaining some performance risks in areas such as capacity and reliability, and talked about the classes of bugs related to these. Please see his slides for this information – and for the cool photos from 1970s IBM sales literature. Most people do simulation testing right before release – when it’s too late to do anything about it.

Whole Team Approach, Circa 1970s

My main takeaway is that we don’t have to do a perfect performance test where we have an environment exactly like production, and a load of “users” who behave exactly as real users behave. Eric called these the “illusion of realism”. Stop trying to test exactly what a user might do in a system that doesn’t exist yet. Instead ask – What’s a test I can do now, a test I can do in a day?

Small, fast, inexpensive tests done frequently as we iterate on our products will help us sniff out all kinds of performance issues. Don’t worry about imitating the prod environment. Isolation is more important than ‘real’. These tests need to be repeatable and reliable. Use simple workflows that are easy to recreate, avoid data caching effects. Be ready to troubleshoot when you find anomalies. We can build trust in these tests, recalibrate as necessary, use yesterday’s build to verify today’s results. Add these tests to your CI.

“Who needs ‘real’? Let’s find problems!” Do burst loads with no ramp, no think time, no pacing. Try 10 threads with 10 iterations each, or 1 thread with 100 iterations. Do soak tests – 1 or 10 threads running for hours or days.
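If you want to picture what such a burst load could look like in code, here’s a rough Ruby sketch – the endpoint is made up, so point it at your own system under test:

    require "net/http"
    require "uri"

    # Hypothetical endpoint, for illustration only.
    TARGET = URI("https://staging.example.com/api/health")

    # A burst load in Eric's sense: no ramp, no think time, no pacing.
    def burst(threads:, iterations:)
      Array.new(threads) do
        Thread.new do
          iterations.times do
            started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
            response = Net::HTTP.get_response(TARGET)
            elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
            puts format("%s %.3fs", response.code, elapsed)
          end
        end
      end.each(&:join)
    end

    burst(threads: 10, iterations: 10)  # wide and shallow
    burst(threads: 1,  iterations: 100) # narrow and deep

Dedicated tools like JMeter or Gatling do this with far more sophistication, but even a throwaway script like this can surface surprises quickly.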

We can also use sapient techniques, something as simple as using a stopwatch, screen captures, videos, tools like Fiddler. One technique which we use on our own team is to put a load on the system in the background (we use Postman scripts driving our API for this) while just one or two people use the UI. You can use browser dev tools to measure performance.

You can also extend your existing automation. Add timers and log response times in your automated regression tests. This is something I’d like our team to try, since our tests are timed in CI. Watch for trends, be ready to drill down. Automation extends your senses, but doesn’t replace them.
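As a sketch of what that might look like in an RSpec-style test – the operation and the two-second budget are invented for illustration:

    require "rspec/autorun"

    # Time a block using the monotonic clock.
    def timed
      started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      yield
      Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
    end

    RSpec.describe "Search response time" do
      it "stays within the agreed budget" do
        elapsed = timed { sleep 0.1 } # stand-in for the real search call
        puts format("search took %.3fs", elapsed) # log it so CI can trend it
        expect(elapsed).to be < 2.0 # hypothetical two-second budget
      end
    end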

You don’t have to test the whole system end to end to get valuable feedback. Login/logoff are expensive, and matter for everyone. Same with search functions. So focus on those. Think about user activities like road systems – are there highways?

Test at the API level. My teammate JoEllen Carter wrote Postman scripts to test API endpoints. She included assertions for the response time, and fails the test if the expected time is exceeded. Eric suggests using mocking stubs, service virtualization, and built-in abstraction points to simulate components that are still in progress.

Eric also suggested testing individual layers of the system, such as load testing web services directly, or using a test harness to measure the response time to push and pull a message through a message bus.

You can still do simulations before release, but in an agile project with short iterations, it pays to get feedback on performance with small, cheap, quick, repeatable, trustworthy tests.

Visual Testing, Mike Lyles

I mostly took pictures of Mike’s slides, so you should just look at his slide deck. Mike had all these cool examples of how our brains get confused and misled. Understanding how our brain works can help us see what’s hard to notice, and get over our biases such as inattentional blindness.

Brain Facts

Our team (again thanks to JoEllen) has started using a visual diffing tool, Depicted, to help us identify subtle changes in UIs. We hope to use SauceLabs to get screenshots from each browser/version, and run those screens through Depicted to compare with gold masters and flag differences for further investigation.
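Depicted does the real work for us, but the core idea of visual diffing is simple enough to sketch in a few lines of Ruby with the chunky_png gem – this is a toy version, not Depicted’s actual algorithm, and the file paths are hypothetical:

    require "chunky_png" # gem install chunky_png

    # Count the fraction of pixels that differ between a gold master
    # screenshot and a freshly captured one.
    def diff_ratio(gold_path, fresh_path)
      gold  = ChunkyPNG::Image.from_file(gold_path)
      fresh = ChunkyPNG::Image.from_file(fresh_path)
      unless gold.width == fresh.width && gold.height == fresh.height
        raise "dimension mismatch"
      end

      changed = 0
      gold.height.times do |y|
        gold.width.times do |x|
          changed += 1 if gold[x, y] != fresh[x, y]
        end
      end
      changed.to_f / (gold.width * gold.height)
    end

    # Flag any screen where more than 1% of pixels changed for human review.
    ratio = diff_ratio("gold/login.png", "latest/login.png")
    puts ratio > 0.01 ? "investigate" : "looks the same"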

Example Mapping, Matt Wynne

I missed this session AND the open jam session Matt did. But Markus Gärtner, who co-chaired the testing track with me, explained what he learned about example mapping from attending the session. I also took a look at Matt’s slide deck from an earlier conference that is on the same theme. It sounds to me like another way to have a story-level conversation among testers, programmers, product owners and other stakeholders to elicit examples, distinguish those from business rules, and write down questions we need to answer before we can finish the story.

I also got to eavesdrop on a lunchtime conversation between Matt and Ellen Gottesdiener. Ellen explained techniques from her book with Mary Gorman, Discover to Deliver, such as structured conversations. I don’t know what they concluded, but to me, it sounded like their techniques achieved similar goals, though Ellen’s book goes much deeper into many ways to help business stakeholders identify the most important business value.

I have a feeling that any conversation you have that involves writing on a whiteboard, on sticky notes, on an online mind mapping tool, or on the back of a napkin will help build shared understanding of how a feature should work. One big takeaway from Agile 2015 is that visuals help our brains learn new things. Next time you start talking to a teammate about a story, walk over to a whiteboard, or get out some paper – and invite others to join you.

Get out there and start doing your small, frugal, repeatable experiments! Please let me know how it goes!

The Agile 2015 Stalwarts sessions are fishbowl-type discussions with leading lights of agile. I learn so much from Linda Rising every time I hear her talk (and when I read her book with Mary Lynn Manns, More Fearless Change). Do go watch videos of Linda’s recent keynotes. Whether it’s about bonobos or the agile mindset, you will have “aha” moments.

These are my sketch notes from Linda’s session. They might only make sense to me, but I wanted to share them. If you don’t sketch note already, give it a try. I will be able to look at these even a year from now and remember what I learned.
