Agile Testing with Lisa Crispin
Providing Practical Agile Testing Guidance

I was honored to be interviewed by Dave Rael for his Developer on Fire podcast. Please give it a listen and let me know if you share any of my experiences with small experiments, collaborating with customer and delivery team members, imposter syndrome, or whatever!

Developer on Fire podcast


I’ve listened to many great interviews on Dave’s podcast; I highly recommend it. Dave is trying to have a more diverse lineup of guests, so if you would like to be on the podcast, or know someone (especially a woman or other minority in the software world) whom you’d love to hear interviewed, let me know and I’ll pass it along.

equine driving pair

In an equine team, both must be equally strong.

Following on from my previous post about strong style pairing for testing, I must say that this takes practice! And I think strong style pairing may not be the best approach for every pairing situation, but I am learning more of its benefits.

I had a golden opportunity to pair with my awesome developer teammate Glen Ivey to write some exploratory testing charters for a critical project we have underway. We agreed to try strong style pairing. After a couple of hours doing this, we weren’t sure it was the best approach. It kind of felt like one of us (ok, mainly me) dictating to the other what text to type. Later on, when I paired with my tester teammate Chad, we had a similar experience. I had the context for what should go in the charter, and I was just telling him what to type. Hmmm.

And when opportunities presented themselves where strong style pairing would really have helped, such as something I knew about and Glen didn’t, I screwed up and took control of the keyboard. I should have let him keep control and guided him on what to do so he could discover the features for himself. That would have made a good learning experience: doing something is a better way to learn than watching someone else do it. Creating a new habit takes a lot of practice and discipline, so I have to keep working at this.

Glen was much better at having me take the keyboard to walk through some of the production and test code, and that activity helped us think of a lot of exploratory testing charters that I’d never have thought of on my own. For me at that point, the strong style pairing delivered a big benefit.

Since we were pairing on charters, we naturally referred to Elisabeth Hendrickson’s Explore It! book. Glen focused on what resources we’d use for each charter. Not only did that get us thinking more deeply about how to scope charters, it prompted us to put in helpful information for whoever happens to pick up any given charter.

Brain in my fingertips

So much of what I know resides only in my fingertips, and that can make strong style pairing a challenge. Today, Chad and I were pairing to explore a GitHub integration with our product. I totally told him the wrong syntax to commit a change. After we got an error, I had to imagine typing it myself to get it right. I think that says something about how brains work. But, thanks to the strong style pairing, he also had the opportunity to try out areas of our app he hadn’t used before, and it was much more useful for him to have control of the keyboard to learn and practice. Overall, we confirmed expected behavior for some features and found a couple of potential issues in others, and I think we did that faster than if we hadn’t been pairing.

Pairing can feel hard

I’m going to keep practicing strong style pairing. These past few days I’ve been exploring areas of our app that I’ve largely forgotten. It feels easier, and less stressful, to poke around on my own. And I might be right about that in some cases. But I want to take advantage of the opportunities to get a fresh set of eyes on these areas, while at the same time helping a teammate learn about those areas. I believe that pairing for testing helps us improve our ability to build quality in.

It’s so exciting and rewarding to pair test with fellow testers and developers. I’d love to hear your own pair testing stories.


Ernest and Chester, strong-style pairing

I’ve been meaning to write about pair testing for ages. It’s something I still don’t do enough of. Today I listened to an Agile Amped podcast about strong style pairing with Maaret Pyhäjärvi & Llewellyn Falco. I’ve learned about strong style pairing from Maaret and Llewellyn before, and even tried mob programming with them at various conferences. The podcast motivated me to try strong style pairing at work.

I’m fortunate that the other tester in our office, Chad Wagner, has amazing exploratory testing skills. We pair test a lot. Chad says that pair testing is like getting to ride shotgun versus having to drive the car. You have so much more chance to look around. He readily agreed to experiment with strong style pairing.

I’m going to oversimplify, I am sure, but in strong style pairing, if you have an idea, you give the keyboard to your pair and explain what you want to do. Chad and I worked from an exploratory testing charter using a template style from Elisabeth Hendrickson’s Explore It! We used a pairing station that has two monitors, two keyboards and two mice. It took a lot of conscious effort to not just take control, start typing and testing with our idea. Rather, if I had an idea, I would explain it to Chad and ask him to try it, and vice versa.

Since Chad is pretty new to our team, when we pair, I have a tendency to just take control and do stuff. But he has the better testing ideas. Strong style pairing was much more engaging than what we had been doing. Chad would tell me his great idea for something to try and I’d do it. An idea would spring to my head and I’d explain it to him.

One interesting outcome is we discovered we had different ways of hard refreshing a page, and neither of us knew the other way. I use shortcut keys, and Chad uses a menu that reveals itself only when you have developer tools open in Chrome. That in itself made the strong style pairing worthwhile to me!

We ended up finding four issues worthy of showing to the developers and product owner, and writing up as stories. Not a bad outcome for a couple of hours of pairing. More fun and more bugs than I would have found on my own.

Now, if only I could get my team to mob program…

The Twitterverse and other social media continue to host many discussions on topics such as “Do testers need to code?” As Pete Walen points out in his recent post, the “whole team” approach to delivering software, popularized with agile development, is often misunderstood as “everyone must write production code”.

The “Whole Team Approach” in practice

Janet Gregory and I, along with our many collaborators, explain a healthy whole team approach to testing and quality in our books, Agile Testing and More Agile Testing. You can find many stories of teams working together to build in quality in our books. Here’s a recent example from my own team.

Improving our exploratory testing

My team works together to build quality into our product. I recently wrote about how we use example mapping to build shared understanding even before we start coding. Exploratory testing is another practice that our team (and indeed our whole company) values highly and which everyone, regardless of role, does at least occasionally. But most people are not experienced in ET techniques.

We have very few testers on our team compared to the numbers in other roles, but everyone does testing. We’ve been trying experiments to help designers, PMs and programmers improve their exploratory testing skills.

Building skills

Last December, I did a short exploratory testing workshop to help team members learn to write and use charters and apply their imagination and critical thinking powers as they test. We have occasional “group hugs” where team members in all roles pair up, assign themselves charters and test while sharing what they learn.

To help new team members learn about ET and give everyone more practice, we did a new hour-long ET workshop (split into two sessions, because we had to accommodate up to 40 people). One session included a Zoom meeting so remote team members could participate.

How the workshop worked

Exploratory testing charter

Exploratory testing charter by a programmer pair.

We started with a 10-minute overview of why we do exploratory testing, how to create and use personas, charters (based on Elisabeth Hendrickson’s template from Explore It!), various ET techniques, and pointers to more resources. (Contact me if you’d like a copy of the slides.) We gave the group a recently delivered feature to explore. Then everyone paired up or formed a group of three, wrote a charter and dove in.

As a tester it was funny to hear team members in other roles say things like “There’s no information in this epic, I don’t know how the feature should work!” Welcome to my world! They jumped right in, though. Some wrote charters in Tracker stories (yes, we use our own product), some mind mapped them or wrote them on paper. They had fun exploring, getting ideas from each other and calling us over to show us bugs or ask questions.

Exploring with a mind map

Some pairs used mind maps


Each workshop session tested a different feature set, and each found several issues, including a serious one. The feedback they gave resulted in some design changes as well. Both features are now much improved.

More importantly, team members have some new testing tools in their toolbox. One of the designers commented that working through more examples would have helped him know what to do in his own testing. I’m working on planning testing dojo sessions so people have an opportunity to practice their exploratory testing skills.

I’d love to hear how your team is growing their whole team testing skills!

The 30 days of testing challenges are energizing me! Day 23’s challenge is to help someone test better. I’m going to combine that one with stepping out of my comfort zone on day 14 by sharing what I learned here. It may help you test better!

Recently, my awesome teammate Chad Wagner and I were trying to reproduce a problem found by another teammate. Chad and I are testers on the Pivotal Tracker team. One of the developers on our team reported that he was hitting the backspace key while editing text in a story when another project member made an update, and his backspace key then acted as a browser back button. He lost his text changes, which was annoying. In trying to reproduce this, we found that whenever focus goes outside the text field, the backspace key indeed acts as a browser back button. But was that what happened in this case? It was hard to be sure which element had focus.

Chad wanted a way to see what element is in focus at any given time to help with trying to repro this issue. He found the :focus pseudo-class in CSS, which seemed helpful. He also found a bookmarklet from Paul Irish that injects new CSS rules. With help from a developer teammate and our Product Owner, Chad made the following bookmarklet:

javascript:(function()%7Bvar newcss%3D":focus { outline:5px dashed red !important} .honeypot:focus { opacity:1 !important; width: 10px !important; height: 10px !important; outline:5px dashed red !important}"%3Bif("%5Cv"%3D%3D"v")%7Bdocument.createStyleSheet().cssText%3Dnewcss%7Delse%7Bvar tag%3Ddocument.createElement("style")%3Btag.type%3D"text/css"%3Bdocument.getElementsByTagName("head")%5B0%5D.appendChild(tag)%3Btag%5B(typeof"string")%3F"innerText":"innerHTML"%5D%3Dnewcss%7D%7D)()%3B

Red highlighting shows focus is currently in the Description field


This bookmarklet puts red highlighting around whatever field, button or link on which your browser session has focus, as shown in the example.
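
De-minified, the URL-encoded bookmarklet above boils down to something like this readable sketch (the original also has a fallback branch for old versions of IE, omitted here; my reading of the .honeypot rule's intent is an assumption):

```javascript
// CSS injected by the bookmarklet: outline whichever element has focus.
// The .honeypot rule appears to target an app-specific element that is
// normally invisible, forcing it to a visible size so focus on it shows too.
var newcss =
  ":focus { outline: 5px dashed red !important } " +
  ".honeypot:focus { opacity: 1 !important; width: 10px !important; " +
  "height: 10px !important; outline: 5px dashed red !important }";

// Append the rules in a new <style> tag so they apply to the whole page.
function injectFocusOutline() {
  var tag = document.createElement("style");
  tag.appendChild(document.createTextNode(newcss));
  document.getElementsByTagName("head")[0].appendChild(tag);
}

// As a bookmarklet, this whole thing is wrapped in
// javascript:(function(){ ... injectFocusOutline(); })();
```

Once the style tag is in place, tabbing or clicking around the page moves the dashed red outline from element to element, which is exactly what we needed to see where focus went.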

What does this have to do with my comfort zone?

Chad is always trying new things and dragging me out of my comfort zone. He told me about the bookmarklet. I didn’t even know what a bookmarklet was, so I had to start searching around. Chad sent me the code, and I tried unsuccessfully to use it. I was working from home that day, so we got on Zoom and Chad showed me how to use it. I read the blog posts (listed above) that he had found.

These fancy tools tend to scare me, because I’m afraid I won’t understand them. And indeed, I do not understand this one very well, so we need to find time for Chad to pair with me and explain more about this bookmarklet. My understanding is that it could work on any web page, but I haven’t been able to get it to work with another one. So this will be getting me out of my comfort zone again soon.

Can you try it?

If being able to see what has focus on your web page would help you test better, maybe you can try this out, and if you can get it to work, maybe you can help me. Day 24’s challenge is to connect with someone new, so let’s connect! And when I learn more, which I’ll try to do tomorrow, I’ll update this post.

Team effort FTW

Story for built-in tool


Our PO, who helped Chad get this bookmarklet working, thinks it’s such a good idea that he added and prioritized a story in our backlog to let users enable a mode that shows what has focus in Tracker. The team thinks this is a cool idea, and it will be done soon. So I won’t have to worry about the bookmarklet for that, but I still want to learn more about how I can use CSS rules and bookmarklets to help with testing.

Raji Bhamidipati and I co-facilitated a day-long workshop at ExpoQA Madrid in June. We shared techniques testers can use with their teams to build shared understanding at the product, release, feature, iteration and story levels. Participants worked in teams to try out the various techniques and contribute their own experiences with ways to enable shared understanding among the delivery team and customer team.

This was a new workshop for me and Raji. We based it on a tutorial that my co-author Janet Gregory did at a recent conference. It includes some valuable agile business analysis techniques and ideas from Ellen Gottesdiener and Mary Gorman in their book Discover to Deliver. We are grateful for all this terrific material we were allowed to share. Our slide deck is available, but this workshop was about the participants doing and practicing more than about Raji and me talking. I got a lot of new ideas to try myself!

Adding value as testers

We encouraged a mindset shift from bug detection to bug prevention. We shared some essentials for stories and requirements. The INVEST criteria from Bill Wake apply to feature sets and slices of features as well as to individual stories. The 7 Product Dimensions from Ellen Gottesdiener and Mary Gorman have been a huge help to me when discussing proposed features with business experts. We can consider these dimensions together with the Agile Testing Quadrants to help explore different aspects of a feature at the appropriate time.

Table groups practicing techniques to explore requirements

A case study from an imagined tour bus company provided participants with features and requirements to explore. We used our tester’s mindset, the 7 Product Dimensions, INVEST and agile testing quadrants to come up with questions about functional requirements and “non-functional” quality attributes. Each group worked on a different dimension, and shared these on a wall chart of all 7 Product Dimensions.

Techniques to explore requirements

Example Story Map

One table group’s story map exercise

As we moved down the levels of precision from release down to stories, participants tried out several different ways to explore requirements with customers: user story mapping and personas (see Jeff Patton’s excellent book), context diagrams, process map / flow diagrams, state diagrams, scenarios, business policies, example mapping (see Matt Wynne’s post), and various approaches to guiding development with business facing tests – acceptance test-driven development, behavior driven development, specification by example. Participants practiced writing these acceptance tests together. Combining a story, examples, rules, acceptance tests, and most importantly, conversations, helps us all get on the same page about each requirement.

Obstacles and experiments to overcome them

Throughout the day, participants used the “speed car – abyss” retrospective / futurespective activity to identify what’s holding their team back, what’s helping them move forward, what dragons may lurk in their future, and how to overcome those. A lot of common themes emerged, as well as unique issues and new ideas.

retro chart

A retro chart from one of the table groups


It wasn’t surprising to see that missing, incorrect, misunderstood and changing requirements drag down many teams. Teams struggle because of poor communication with customers and end users, unavailable product owners (POs), poorly-defined priorities, and dependencies on other teams or pods, either within the company or external. Many teams lack the time, resources and skills to do useful test automation, and are slowed down by a reliance on manual regression testing. Programmers and testers on newer agile teams often aren’t used to working together, and have communication issues; for example, programmers may dismiss bugs reported by testers. Some teams even lack basics like CI, or aren’t allowed to self-organize their own workflow. Slow feedback loops, lack of testers, too much work, and inflexible deadlines were also common themes.

I was more surprised that some teams are held back by confusion over who should test what, and oversimplifying features and stories. I also hadn’t expected that testers and teams often don’t know how to communicate problems to managers. Teams need to be creative for these types of challenges.


Participants had good ideas early in the workshop to put some gas in their teams’ engines. Many want to try the 7 Product Dimensions. Visualization techniques such as story mapping and process flow diagrams looked like good options to many. We had discussed Three (or four or more) Amigos or Power of Three meetings along with example mapping, which some participants are already using, and others are keen to start. Models such as the test automation pyramid and agile testing quadrants help power some teams’ engines. Pairing is also seen as a good way to overcome drag. Quite a few participants are already doing exploratory testing, and more want to try, including techniques such as investigating competing products.

Some ideas I was reminded of by participants included using the MoSCoW (Must/Should/Could/Won’t) prioritization method. Several participants cited smaller, self-organizing teams, pods or squads as a key to being able to effectively explore requirements and build the right thing. Team building activities were mentioned as a way to help with that. Someone mentioned a firefighter role, someone to come in and help in sticky situations, which intrigued me. Another great idea is to add people to help coordinate activities among teams and help manage dependencies for delivering software that meets requirements.


Many participants fear the same pitfalls as they’re already experiencing, such as changing requirements, unclear requirements and priorities and adding stories during the iteration. They’re worried that they’ll build the wrong thing, or fail to deliver on time. Broken or non-existent test environments lurk in the abyss. So do miscommunications with the PO and customers, misunderstanding of business rules, and a lack of documentation.

One insight is that teams get stuck in old habits and can’t get out of their comfort zone. They don’t experiment with new ways to discuss examples and business rules with customers, or spend time learning the business domain so they can better understand the customers’ needs. Some teams may get tripped up by working on more than one big feature at the same time. Others get stalled during planning, or don’t get test data in time, and then can’t deliver on time. Often they’re faced with unachievable deadlines or at the mercy of bad business decisions and micro-management.

Technical debt is a common pitfall. This also limits a team’s ability to deliver on time. Lack of testing skills and insufficient knowledge transfer can take a team down and result in incorrect implementation of features. Some participants worried that their team would lose sight of the business goals and priorities as they get bogged down with problems and technical debt. Some just don’t have enough people, especially testers, or enough time to get their work done. This also means they’re not being allowed to self-organize. Interestingly, some people feared spending too much time on testing.

Automation was again much discussed. Some teams have no test automation, others focus so much on automation they fall short on other testing activities. They worry about missing edge cases or backward compatibility issues.

Assuming that the business experts will be available for conversations and questions can lead to trouble. Another interesting insight was the possibility of a team confusing agile with ad-hoc, and going off in the weeds.


Another group’s retro chart

My favorite suggestion from a participant to help build a bridge over the abyss is “learn mind reading”. I help with customer support on my team, and mind reading would really help there too! Here are many other great ideas to successfully explore requirements and build shared understanding with business experts.

Many participants plan to apply the INVEST criteria to features at planning and grooming meetings. Using visual techniques such as flow diagrams for complicated user stories is a good way to avoid misunderstandings and make sure edge cases are covered. Simplifying stories and making them independent is also part of our participants’ bridge. Structured, visual discussion frameworks such as story mapping and example mapping help clarify details before testing and coding start. Get examples of desired and undesired feature behavior and business rules. Use personas to help elicit requirements. Slice big stories smaller.

Participants feel it’s important to work in small increments and take “baby steps”. Some participants plan to experiment with BDD and SBE. They thought starting on a small scale or doing a spike would help get buy-in to give it a try. One group suggested combining these approaches with quality attributes to help make sure they think of everything. Others plan to experiment with Kanban and other processes. Finding ways to measure progress is important to see whether various techniques are helping the customer and delivery teams understand what to build and avoid unnecessary rework. Simply agreeing on a definition of done helps.

Another important area for the bridge is educating management. This helps teams get the time and resources they need, along with the ability to self-organize and manage their own workloads. It also may help ensure that stakeholders, designers and other key players are available for conversations and collaborating, and allow them to experiment with new practices such as pairing. They can promote more collaboration. More support from management also helps with finding ways to solve dependency issues among teams and improve cross-team communication.

Some participants want more test automation, including at the unit level, to help build their bridge. This is one way to help keep bugs out of production. With more time available, they can learn the necessary skills.

What will you try?

I agree with the group that suggested passion is an important way to bridge safely over the abyss! We need to be dedicated, disciplined and excited about collaborating across the delivery team and with the customer team. Conversations, examples, tests and requirements at all levels of precision combine to help us delight our customers and end users with great features. As testers, we can contribute so much to help our team deliver value frequently, predictably and sustainably.

What will you try to help your customers and your delivery team collaborate to specify and deliver valuable features?

At Agile 2015, I learned about example mapping from Matt Wynne. Linda Rising’s session reinforced my enthusiasm to continue doing small, frugal experiments. I came back to work the following week feeling like it would be pretty easy to try an experiment with example mapping.

Example Example Map from JoEllen Carter

You can use example mapping in your specification workshops, Three Amigos meetings (more on that below), or whatever format your team uses to discuss upcoming stories with your product owner and/or business stakeholders. Write the story on a yellow index card. Write business rules or acceptance criteria on blue index cards. For each business rule, write examples of desired and undesired behavior on green index cards. Questions are going to come up that nobody in the room can answer right now – write those on red cards. That’s all there is to it!

The problem

My team was experiencing a high rate of stories being rejected because of missing capabilities, and our cycle time was longer than we’d like. I asked our product owner (PO) if we could experiment with a new approach to our pre-Iteration Planning Meetings (IPM).

Up to this point, our pre-IPM meetings were a bit slapdash and hurried. The PO, the development anchor and I met shortly before the IPM and went quickly through the stories that would be discussed. There wasn’t a lot of time to think of questions to ask.

Our experiment

For our new experiment, we decided to try a “Three Amigos” approach (the term was coined by George Dinwiddie), or what Janet Gregory and I call “Power of Three”. This is also similar to Gojko Adzic’s specification workshops. In our case it was Four Amigos. We decided to time-box our pre-IPM meeting to one hour, and hold it two business days before the IPM. The PO, designer, tester and developer anchor gathered to discuss the stories that would be discussed and estimated in the IPM. Our goal was to build a base of shared understanding that we could build on in the IPM, so that when testing and coding start on a story, everyone knows what capabilities are needed.

We tried out example mapping as a way to learn more about each story ahead of the IPM. Now, I’ve practiced example-driven development since I learned about it from Brian Marick back around 2003. So, I was surprised how effective it is to add rules along with the examples. The truth is, you can’t write adequate tests and code just from a few examples – you need the business rules too.

Since we have remote team members, using Matt Wynne’s color-coded index cards wouldn’t work for us. We tried using CardboardIt with its virtual index cards, but it proved a bit slow and awkward for our purpose. Since our product is Pivotal Tracker, a SaaS project tracking tool, we decided to try using it for the examples, rules and questions. For our planning meetings, we share the Tracker project screen in the Zoom video meeting for remote participants, so putting our example maps in text in our Tracker stories is a natural enough fit for us.

We’ve iterated on our Amigos meeting techniques over several months now. We don’t example map every story. For example, design stories may be covered well enough by the InVision design doc. What’s important is that we are chipping away at the problem. Feedback from my teammates is that they have a much better understanding of each story before we even start talking about it in the IPM. There may still be questions and conversations about the story, but they go deeper into the story’s capabilities because the basics are already there. And our story rejection rate has gone down, as has our cycle time!

A template for structuring a conversation

During the pre-IPM “amigos” meeting, each story gets a goal/purpose, rules, examples, and maybe a scenario or two. We found that the purpose or goal is crucial – what value will this story deliver? The combination of rules and examples that illustrate them provides the right information to write business-facing tests that guide development. Here’s an example of our example mapping outcomes in a Tracker story:

ExampleMap example

Example mapping outcomes captured in a Tracker story
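
As a rough sketch of the format (the story, rules, examples and questions below are invented for illustration, not taken from an actual Tracker project), an example map captured as text in a story description might look something like this:

```
Story: Archive a project

Why: Users want to hide finished projects without losing their history.

Rules:
- Only a project owner can archive a project.
- Archived projects are read-only.

Examples:
- Owner archives "Q3 Launch"; it disappears from the active project list.
- A member who is not an owner sees no Archive option.
- Opening an archived project shows its content but no edit controls.

Questions:
- Can an archived project be restored? By whom?
```

The sections mirror the card colors: the story (yellow), its rules (blue), examples illustrating each rule (green), and open questions (red).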


Devs use the info to help them write tests to guide development. Ideally (at least in my opinion), those would be Cucumber BDD tests. The example map provides personas and scenarios along with the rules. However, sometimes it makes more sense to leverage existing functional RSpec tests, or Jasmine unit tests for the JavaScript.

As a developer pair starts working on a story, they have a conversation with one or more testers, so that we’re all on the same page. When questions come up, we reconvene the Amigos to discuss them.

User stories act as a placeholder for a conversation. Techniques like example mapping help structure those conversations and ensure that everyone on the delivery team shares the customer’s understanding of the capabilities that story should provide. Since we’re a distributed team, we want a place to keep detailed-enough results of those conversations. Putting example maps in stories is working really well for that.

Example mapping is straightforward and easy to try. Ask your team to experiment with it with your Amigos a day or two before your next iteration planning meeting. If it doesn’t work well for you, there are lots of other techniques to try! I hope to cover some of those here in the coming weeks.

I pair on all my conference sessions. It’s more fun, participants get a better learning opportunity, and if my pairs are less experienced at presenting, they get to practice their skills. Big bonus: I learn a lot too!

I’ve paired with quite a few awesome people. Janet Gregory and I have, of course, been pairing for many years. In addition, I’ve paired during the past few years with Emma Armstrong, Abby Bangser, and Amitai Schlair, among others. I’ve picked up many good skills, habits, ideas and insights from all of them!

The Ministry of Testing published my article on what I learned pairing with Abby at a TestBash workshop about how distributed teams can build quality into their product. If you’d like to hone your own presenting and facilitating skills, consider pairing with someone to propose and present a conference session. It’s a great way to learn! And if you want to pair with me in 2017, let me know!

Yesterday I ran across Dan Billing’s post about MEWT5. An excellent and thought-provoking post, but at the end, a picture of the 14 participants showed only one woman. Given that testing is one of the areas of software with a decent percentage of women, and given all the awesome women in testing I know who live in England, that was disheartening.

I know most of the people in that picture. I know they are kind, talented, open-minded, smart testing professionals who would not intentionally exclude women from events. They all spend a lot of time helping all of us in the testing community with their organizing, writing and speaking.

I also know how hard one has to work to get at least a representative number of women included in testing events. I know now that 40% of the invitees of that event were women (why only 40%, well, let’s put that off for now) and that several could not accept, or had to cancel last-minute. That’s too bad, but again, I believe it just takes more effort to avoid the last-minute problems.

Maaret Pyhäjärvi summed up the whole issue brilliantly in her subsequent post. So please just read that. I am seeing a trend among people I know, at least: when women organize events, they think of a disproportionate number of women to include. When men organize events, they think of a disproportionate number of men to include. We are humans, we seem to know more people of our own gender professionally, and our brains have built-in biases that are hard to fight.

I will leave you with some additional reading that helps explain why we don’t have more women in tech, why we don’t have more women in high level positions in tech, and probably, why women don’t necessarily feel comfortable proposing conference sessions.

Note: These are based on science. Yes, legitimate studies.

I have more of these; contact me if you want them, but those should get you started in understanding why it really is harder for women to get recognition, to feel confident, and to realize their potential.

What am I doing to work on the problem of too few women? I volunteer for conference program and track committees, I am a Speakeasy mentor, I mentor other aspiring presenters outside of Speakeasy, I work with conference organizers to recommend and find women presenters, and I pair present with newbie presenters (women and men, though my bias is for women because they have fewer opportunities) to give them confidence and experience. I hope you will do all these things too.

Action item: If you’re organizing a test event, please contact me for recommendations of awesome women to invite. And, work with Speakeasy, who can help you find awesome women presenters.

Disclaimer: Yes, we need to fix all the other diversity problems too. We need more people of color, LGBTQ people, and other minorities sharing their ideas and experiences. Diversity is what helps us improve and progress. But I have chosen to work on what I know best: trying to get more women presenting at software conferences.

Thanks for reading.

I’m a tester on a (to me) relatively large team. Recently I was asked to facilitate an all-hands retro for our entire team. I’d like to share this interesting and rewarding experience. It’s not easy for a large team to enjoy a productive retrospective. I hope you’ll share your experiences, too.

At the time of this retro, our ever-growing team had about 30 people, including programmers, testers, designers, product owners, marketing experts, and content managers. While we all collaborate continually, we’re subdivided into smaller teams to work on different parts of our products.

I work mainly on the “Frontend” team. Our focus is our SaaS web app’s UI. We have weekly retros. The other big sub-team is the “Platform” team, further divided into “pods” that take care of our web server, API, reports and analytics, and other areas. This team has biweekly retros. The “Engineering” team (everyone in development, leaving out designers and marketing) has monthly “process retros”. But all-hands retros had become rare, due to logistics. Several of our team members are based in other cities. We hadn’t had an all-hands retro for more than six months.


The team’s director asked me a few days in advance to facilitate the retro. I accepted the challenge gladly, but I felt a bit panicked. Our team usually follows the same standard retro format. We spend some time gathering “happys”, “puzzlers” and “sads”. Then we discuss them – mainly the puzzlers and sads – and come up with action items, with a person responsible for taking the lead on each one. Over the years this has produced continual improvement. However, I think it is good to “shake up” the retro format to generate some new thinking. Also, retros tend to be dominated by people who aren’t shy about speaking up. I wanted to find a way for everyone to contribute.

Thanks to my frequent conference and agile community participation, I have a great network of expert practitioners who like to help. I contacted the co-authors of two of my favorite books on retrospectives: Tom Roden, co-author with Ben Williams of 50 Quick Ideas to Improve Your Retrospectives, and Luis Gonçalves, co-author with Ben Linders of Getting Value out of Agile Retrospectives. (I’ve also depended for years on Agile Retrospectives: Making Good Teams Great by Esther Derby and Diana Larsen.) I used their great advice and ideas to come up with a plan.

Food is a great idea for any meeting, so my teammates Jo and Nate helped me shop for and assemble a delicious platter of artisanal cheese and fresh berries (paid for by our employer, though I would have been willing to donate it).

Pick a problem

Around 30 of us squeezed into a conference room. We had started a few minutes early so that our Marketing team could hand out new team hoodie sweatshirt jackets for everyone! That set a festive mood. Also, it is a tradition that people who are so inclined enjoy a beer, wine or other adult beverage during retros. For this reason, retros are typically scheduled at the end of the day. And of course, we had cheese and fruit to enjoy.

We had just over an hour for the retro, so I had to keep things moving. I gave a brief intro and said that we would choose one problem to focus on instead of our usual retro format.

I asked that they divide into their usual “pods”. Their task: choose the biggest problem that can’t be solved within their own pod that they’d like to see solved, or at least made better, in the next three months. In other words, the most compelling problem that needs to be addressed by the whole team. They should come back to the big group with this problem written on a sticky note.

As I expected, designers, testers and customer support specialists joined the pods they work with the most. Contrary to my expectations, the “Platform” team decided not to further subdivide into pods for the activity. Together with the Marketing/content team and Frontend team, we had only three groups. I made a quick change of plan: I asked each group to pick their top *two* problems, so we’d have more to choose from. I gave them 10 minutes for this activity.

One group stayed in the conference room and the other two found their own place to work. They wrote ideas on sticky notes and dot voted to choose their highest priority problem areas. I walked around to each team to answer questions and let them know how much time they had left.

After 10 minutes, I called everyone back to the conference room. A spokesmodel from each team explained their top two problem areas and why those needed help from the whole team. We dot voted to choose the top topic; everyone got two votes. I would have preferred to use a process such as the Championship Game from Johanna Rothman’s Manage Your Product Portfolio, which would have been more fair, but we didn’t have time. Everyone seemed happy with the results anyway.

The winning problem area was getting more visibility into our company’s core values and technical practices, and how to integrate better with the company’s “ecosystem”. I don’t want to get into too much detail, because what I want to share is our process, rather than the specific problem.

Design experiments

Now that we had a problem for the whole team to solve, I explained that we wanted to design experiments, and we could use hypotheses for this. I explained Jason Little’s template for experiments:

Experiment template

We hypothesize by <implementing this>
We will <improve on this problem>
Which will <benefits>
As measured by <measurements>

I asked the groups to think about options to test the hypothesis. Who is affected? Who can help or hinder the experiment? I emphasized that we should focus on “minimum viable changes” rather than try to solve the whole problem at once. I gave an example using a problem that our Frontend team had identified in a previous retro.


Experiment example

I had three colors of index cards and we handed those out randomly around the large group. Then we divided up by index card color, so that we had three groups but each had different people than before. Again, each group found a place to work. Each designed an experiment to help address the problem area. I told them they had 10 minutes, but that wasn’t enough time, so I ended up letting them have 15. I walked from group to group to help with questions and keep them informed of the time.

Then, we got back together in the conference room. A spokesmodel for each group explained their hypothesis and experiment. We had three viable experiments to work towards our goal.


At this point, we had experiments including hypotheses, benefits, and ways to measure progress. We had only about 10 minutes left in the retro, and I wasn’t sure of the best way to proceed. How could we commit to trying these experiments? I asked the team what they’d like to do next. I was concerned about making sure the discussion wasn’t dominated by one or two people.

The directors and managers put forward some ideas, since they knew of resources that could help the team address the problem area. There were videos about the company’s core values and practices. We also discussed the idea of having someone within our team or outside of our team come in and do presentations about it. There were several good ideas, including coming up with games to help us learn.

Again, the specific experiments we decided to try aren’t the point; the point is that we came up with an action plan that included a way to measure progress. The managers agreed to schedule a series of weekly all-hands meetings to watch videos about company core development values and practices.

Immediately after the meeting, the marketing director and the overall director worked together to create a short survey to gauge how much team members currently know about this topic, and everyone took the survey. After watching and discussing all the videos, we can all take it again and see what we’ve learned.

I had hoped to wrap the retro up by doing appreciations, but there wasn’t time. With such a big group, I think sticking to a time box is important. We can try different techniques in future retros.


I was surprised and pleased to get lots of positive feedback from teammates, both in person and via email. My favorite comment, from one of the marketing specialists, was: “It’s the best retro by far I’ve been to in four years here. I felt productive!”

The initial survey has been done, and we’ve had three meetings so far to watch the videos. We watch right before lunch so that people can continue the discussion over lunch. Having 30 people in an hour-long meeting every week is expensive, which shows a genuine commitment to making real progress on our chosen problem area.

I think the key to success was that by dividing into groups, everyone had a better chance of participating in discussing and prioritizing problems, and in designing experiments to address them. The giveaway hoodies along with food and drink made the meeting fun. We stuck to our time frame, though we did vote to extend it by a few minutes at the end. Most importantly, we chose ONE problem to work on, and designed experiments that included ways to measure progress in addressing that one problem.

The team’s directors have decided they’d like to do an all-hands retro every six weeks, and they’ve asked if I could facilitate these. I think it’s a great idea to do all-hands retros more often. I’m not sure I should facilitate them all, but I’ll do what I can to help our team keep identifying the biggest problem and designing experiments to chip away at it.

Do you work on a large team? How do you nurture continual learning and improvement?