Agile Testing with Lisa Crispin
Providing Practical Agile Testing Guidance
Drawings explaining the business rules for a feature

In my previous post I wrote about a great experience collaborating with teammates on a difficult new feature. The conversations continued this week when we were all in the office. Many of our remaining questions were answered when the programmers walked us through a whiteboard drawing of how the new algorithm should work. As we talked, we updated and improved the drawings. We took pictures to attach to the pertinent story.

Not only is a picture worth a thousand words, but talking with a marker in hand and a whiteboard nearby (or their virtual equivalents) greatly enhances communication.

I can’t say it enough: it takes the whole team to build quality into a software product, and testing is an integral part of software development along with coding, design and many other activities. I’d like to illustrate this yet again with my experience a couple days ago.

Some of my teammates at our summer picnic - we have fun working AND playing!

Recently our team has been brainstorming and experimenting with ways to make our production site more reliable and robust. Part of this effort involved controlling the rate at which the front end client “pings” the server when the server is under a heavy load or there is a problem with our hosting service. Friday morning, after our standup, the front end team decided on a new algorithm for “backing off” the ping interval when necessary and then restoring it to normal when appropriate.

Naturally I had questions about how we would test this, not only to ensure the correct new behavior, but to make sure other scenarios weren’t adversely affected. For example, the client should behave differently when it detects that the user’s connection has gone offline. There are many risks, and the impact of incorrect behavior can be severe.

I was working from home that day. Later in the morning, the programmer pair working on the story asked if we could meet via Skype. They also had asked another tester in the office to join us. The programmers explained three conditions in which the new pinger behavior should kick in, and how we could simulate each one.

The other tester asked if there was a way we could control the pinger interval for testing. For example, sometimes the ping interval should go up to five minutes. That’s a long time to wait when testing. We discussed some ideas for how to do this. The programmers started typing examples of commands we might be able to run in the browser developer tools console to control the pinger interval, as well as to get other useful information while testing. We came up with a good strategy.
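
As a rough sketch of what such a console-controllable, backing-off pinger might look like (the class name, the doubling factor, and the five-minute cap here are all my own invented illustrations, not our actual code):

```javascript
// Hypothetical sketch of a pinger that backs off under load and can be
// controlled from the browser dev-tools console while testing.
// Names and numbers are illustrative only.
class Pinger {
  constructor(baseMs = 5000, maxMs = 300000) {
    this.baseMs = baseMs;     // normal ping interval
    this.maxMs = maxMs;       // cap, e.g. five minutes
    this.currentMs = baseMs;
  }
  backOff() {
    // double the interval when trouble is detected, up to the cap
    this.currentMs = Math.min(this.currentMs * 2, this.maxMs);
  }
  restore() {
    // back to normal when the server recovers
    this.currentMs = this.baseMs;
  }
  // Testing hook: shrink the interval from the console so a tester
  // doesn't have to wait five real minutes between pings.
  overrideInterval(ms) {
    this.currentMs = ms;
  }
}

const pinger = new Pinger();
pinger.backOff();
pinger.backOff();
console.log(pinger.currentMs); // 20000 after two back-offs from 5000
pinger.restore();
console.log(pinger.currentMs); // 5000 again
```

The testing hook is the key idea from our conversation: a small, deliberate seam in the code that makes slow, time-based behavior observable and controllable.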

Note that this conversation took place before they started writing the actual code. While they started writing the code using TDD, the other tester and I started creating given-when-then style test scenarios on a wiki page. We started with the happy path, then thought of more scenarios and questions to discuss with the programmers. By anticipating edge cases and discussing them, we’ll end up with more robust code much more quickly than if we waited until the code was complete to start thinking about testing it end to end.

There are so many stories in this vein in Janet Gregory‘s and my new book, More Agile Testing: Learning Journeys for the Whole Team. We’ll have more information about the book on our book website soon, including introductions to our more than 40 contributors. The book will be available October 1. It takes a village to write a useful book on agile testing, and it takes a whole team to build quality into a software product!




I was so excited to see A Coach’s Guide to Agile Testing from Samantha Laing and Karen Greaves of Growing Agile. I don’t know of any other book to guide coaches in helping agile teams learn good ways to succeed with testing and build quality into their software product. Here are my immediate thoughts after reading it.

Growing Agile: Coach’s Guide to Agile Testing

The book is useful even for coaches/trainers who don’t themselves have a lot of knowledge of or experience in testing. The authors use the “4 Cs” from Sharon Bowman’s Training from the Back of the Room, and show how to apply them. Newbie coaches and presenters usually make the mistake of wanting to deliver a lot of information via lecture and slides. Bowman’s techniques have helped me make my own training classes and workshops much better learning experiences for participants, so I’m glad to see them incorporated here.

It’s so helpful that the guide includes a training kit, and advice on what supplies are best, even the best markers to use. I liked the idea to send participants photos of the workshop, though I might ask up front if everyone is comfortable with having photos taken.

Sam and Karen give options on how to schedule and organize a class or a shorter workshop. I’m betting that readers will find this incredibly helpful.

The visuals in the book are terrific. We know how well pictures help convey concepts!

One thing that bothered me about Training from the Back of the Room was that I’m not comfortable asking participants to do some of the exercises. I’m an introvert, and I find it hard to ask people to do something I wouldn’t feel good doing. The exercises here are well within my own comfort level, which is great.

Most importantly, the content is spot on. It is SO wonderful to read a book whose authors truly grok the testing mindset. Karen and Sam understand what teams need to know about testing, and how coaches can help them learn those things.


I’m currently at Booster in beautiful Bergen. It’s a terrific conference, about 240 people, which is a perfect number. Everyone is so enthusiastic, the food is great, the sessions are awesome. I’ll blog more about that.

Thanks Martin Burns for tweeting this photo!

I facilitated a workshop today on “changing your testing mindset”. I’ve uploaded the slide deck, and will be blogging more about that too. Many thanks to the workshop participants; we all learned a lot from each other! Some great ideas were generated.

I’m doing a day-long tutorial on “Changing your testing mindset“, and an “interactive session” at Belgium Testing Days in Bruges, Belgium, March 17 – 20. You can learn more about my tutorial (and see my adorable donkeys) in my introductory video.

What could be more wonderful than being in beautiful Bruges? Why, meeting up with a bunch of amazing testing practitioners for four days to share experiences and break new ground in testing.

Check out the program. You can use my discount code, FeahY8, for a 10% discount on registration! I hope to see you there.

I presented a session called “Developers who grok testing: Why I love them and how they mitigate risk” at CodeMash. We did a mini workshop, dividing into groups to brainstorm what devs need to learn about testing in order to grow towards grokking it. How can testers and developers communicate and collaborate better?

Take a look at the pictures to see the ideas. Participants also shared wonderful stories about how they got testers and developers collaborating for testing. For example, one team has testers and developers pair on a story. They do test automation and coding together. The story doesn’t have to move to a separate “testing” column or state, the pair does all testing activities together and then the story’s done!

Do you have any success stories for getting developers more involved in testing? Please share in the comments directly or via links to your own blog/articles. (Sorry the pictures are displaying in a dorky way but I don’t have time to fix!)



Thanks to the participants in today’s workshop! We tackled quite a few tough testing-related issues that many agile teams encounter.

Big visible outputs emerged!

The slides, pictures of all the outcomes such as topics, impact maps and brainwriting pages, are available here.

After each table group identified their most burning testing issues, they created SMART goals for each one and then brainstormed experiments to try which would address the issues. We tried impact mapping, brainwriting, mind mapping, and “if you had super powers”. In the afternoon, we had an enthusiastic round table discussion and finished up with a LeanCoffee format.

One of the most interesting ideas I heard was using food to build trust. One participant said he brings food to meetings (at his own cost). Sharing snacks creates a social atmosphere, promotes conversation, and helps team members build more trust.

We had good conversations about encouraging whole-team responsibility for testing activities, incorporating testers onto cross-functional teams, letting teams jell, finding ways to make story requirements more clear. Take a look at the ideas in the pictures. I hope some participants will try some of the experiments created, and blog about what they learned. I learned a lot but I couldn’t take notes!


Here’s more on what I observed and learned during Agile Testing Days. I’m just going to brain dump the rest here, because otherwise I’ll never get it posted!

Lean Coffee

Janet and I had a great turnout for Lean Coffee all three mornings, despite the early hour. We averaged 20 – 25 people squeezed into two groups. The Fritze Pub is cramped so it was impossible to break into smaller groups as more people turned up. It worked fine anyway. Participants were passionate about the topics, and shared their experiences.

Agile Testing Days was not without bugs! And pirates!

For a nice overview of our first Lean Coffee, plus notes about the first keynote, see Pete Walen’s excellent live blog (how DOES he do that, especially with marginal wifi?). Search his blog for days 2 and 3.

For me, the keynote was just blah blah blah. I had to agree with Seb Rose, who tweeted: “Feels like a certification course compressed into a keynote”

More sessions

Unfortunately I didn’t take good notes on the sessions I went to Monday morning. Sami Soderblom (read his great post on the conference) had a fabulous preso on exploratory testing. One of his messages was that when programmers and testers work together, we code, test and review together. He’s a mind mapping fan. Also a hockey fan. Doh, he’s from Finland. He referred to Explore It! by Elisabeth Hendrickson, +1 on that!

I couldn’t get into the session on infrastructure testing with Jenkins, Puppet and Vagrant, but here is the slide deck, along with a write-up from Lyndsay Prewer.

Instead, I went to Anand Ramdeo’s session on “Misleading Validations, Beware of Green”. It was a good reminder to keep an open mind. Don’t let “green” builds lull you into a false sense of security.

Mary Gorman – Strength through Interdependence

Mary Gorman’s keynote was terrific. I don’t have a lot of notes on that one either, because the information is all in the excellent book Discover to Deliver, which she wrote with Ellen Gottesdiener and which I have read and used. She talked about interdependence and dependence, and had us rate our teams on various aspects of trust. Do we trust that our teammates will do what they say? Is everyone open to feedback, and do they provide it in a direct and constructive way? Do we trust each other’s competence? Mary said we should “shed light without generating heat”. Bottom line: more trust means a better product.

At my previous job, I kept Mary and Ellen’s “seven dimensions” in front of me during planning and brainstorming meetings. They helped me ask questions that caused the business stakeholders to think about more aspects of their feature, and helped the development team better understand what’s needed for the technical implementation.

At lunch, J.B. Rainsberger tempted a few of us (Oana Juncu, Gil Zilberfeld, Matt Heusser) into escaping off to the Café Heider for cappuccino, hot chocolate, and interesting conversations. These impromptu discussions are always the best part of a conference. Joe (J.B.) and his wife Sarah are so inspiring to me. They’ve purposely designed a lifestyle where they don’t need so much money and thus don’t need to work all the time. And they have changed what they eat so dramatically that they’ve lost something like 200 lbs between them.

Consensus Talks

The consensus talks (I believe they were 15 minutes each) were mini-experience reports, and probably my favorite part of the conference. A lot of the presenters were not experienced conference speakers, but they had the courage to share their stories, and I learned something from each one.

Chris Baumann told us about interesting experiments his team did for testing when they had no testers. One technique they used was to put all the regression tests on cards in a box. Each dev took a card and tested, until the box was empty. That way they got different tests to do each time, which made it less boring. Still, it must have been somewhat boring, as he referred to “the hamster wheel of regression testing” - such a great visual! Regression testing slowed to a stop whenever it became too tedious. The long-term results of his team’s efforts remain unknown, as Chris changed jobs.

Stephan Kaemper‘s talk was on “are we still testing the wrong stuff”. It was good, but I didn’t take notes, and I’m afraid all I remember is his “zebricorn”: more black and white than a unicorn. But do read his blog for his useful thoughts on things like the “two values of software“. He also had an interesting observation: how important is future readiness? Maybe the question is more important than the answer. Stephan and I had interesting discussions later in the week about what he’s doing with testing infrastructure. Something I never thought to do.

Mitch Rauth had a great talk on being an “agile test manager”. He applied the “Three Amigos” pattern to the manager level with good results. Also recommended Management 3.0 by Jurgen Appelo, which I heartily endorse as well. As manager, he manages the environment, the boundaries around the team, the strategy, the guidelines for self-organizing teams.

Chris George (check out his blog posts on Agile Testing Days) described how his team went from being a separate test team to quite blurred lines where who does testing and who does coding is not so hard and fast. Tester-coder collaboration helped each feel the other’s pain points. Memorable quote: “Focus on skills, not roles”. Pairing up a tester and programmer sped up debugging.

Then, I got obsessed with the onstage timer, which looked like a giant light saber: it slowly grew green, then turned red when the time was exceeded. They should have those at award shows.

I spent the last session of the day making a thing in the Test Lab with James Lyndsay and Bart Knaack. I used Scratch, which is designed to teach kids how to code, to create a bat in a castle who did particular things based on my arrow clicks and where the cursor was. I got a button. Cool!

Test Lab: The Doctor is In! Bart Knaack at the Halloween Party

Then I found my husband Bob in the hotel lobby playing testing dice games with Pete Walen, Huib Schoots, Mieke Mertsch and more. He was doing pretty well, too. Evidently you learn a lot about testing just by hanging out at testing conferences for so many years!

The conference dinner was a Halloween party. Halloween is NOT a big thing in Germany, though I was told that kids do trick or treat these days. However, a surprising number of attendees wore costumes. We had a giant bug at our table. He won first prize for costume. I never figured out who he was, though he was about 7 feet tall, so it should have been easy to find him in plain clothes at the conference. He never got out of costume in spite of the heat, even when dancing to the awesome all-woman jazz and rock band. The best group costume went to Pete and Connie Walen, Matt Heusser, and Huib Schoots as – bits. My headless horseman costume purchased from Amazon was a giant fail. Sigh.

Matt Heusser, Pete and Connie Walen, and Huib Schoots pose with José Diaz

But the big fun was that Markus Gaertner won the Most Influential Agile Testing Professional Person award! Yes, it’s a goofy-sounding award, but it’s strictly a vote of your peers, which is really cool. Congratulations to Markus! I’ve learned so much from him and his book ATDD by Example. Really important things, such as: when you need a test automation framework, don’t go looking for the coolest one, first decide how you want your tests to look.

Day Two

Christian Hassa, “Scaling the Enterprise”
Christian introduced me to a new concept: underpant gnomes. So many companies want to collect underpants, and somehow make a profit, but what turns the underpants into a profit? They don’t look at that. You can’t just collect underpants and not change the way you work. A burning question from Christian: What can we learn from Pinky and the Brain about changing the world? Each episode, they fail, but they learn something. We can too.

Christian recommended many of my Favorite Things, including Impact Maps, Story Maps, and examples.

I liked this quote: your job as a tester isn’t to verify the software, it’s to verify that the world is actually changing fast enough. I think that was Christian, but my notes have gotten confused.

Christian talked about scaling TDD to the enterprise. Set a desired goal, figure out the stakeholders, define a desired behavior change, figure out deliverables, do TDD, then write a failing acceptance test, and so on. Measure the impact of the deployed system, and refine the deliverable. You can continually refine your strategy based on what behavior change actually happens. This elevates your testing. Scaling isn’t about how to do more work with more people. Test goals and impacts as early and often as possible. The scale is in what to measure and meter, and how to measure the range: benchmark, constraint, target.

It doesn’t mean doing more stuff with more people; it means understanding agile principles on a higher level and building things that help you with build / measure / learn.

Check out Christian’s slides.
He also recommended the How to Measure Anything book, which is on my list.

A session I wanted to go to, but couldn’t because so many ran at the same time, was Adam P. Knight’s on big data. Here are Dan Ashby’s sketch notes. You can learn more in this big data article by Adam.

Alex Schwarz, Ripening of a Restful API service

Alex delved into the risks for a RESTful API, including “clients misuse the API” and “version hell”. He works for Nokia, and their API powers the feature where, when you go to a hotel website, you can look up all the restaurants, tourist sights and so on that are nearby. It’s huge. He showed an interesting timeline of alpha launch, public beta launch, and ready for commercial deals. His “many product interdependencies” included search, analytics and Splunk. They have a B2B2C business model, so they’re concerned about both their business customers and the end users of those customers. To avoid regression, they test end to end on devices, use Concordion acceptance tests (though it seems he wasn’t too happy with those), Scala acceptance tests, and CDCs – Consumer-Driven Contracts, which capture the expectations of the consumer and how they use the API.

They use “Canary” testing, releasing new features in prod for just a few customers. They avoid end to end tests and device tests. To counteract the “noise generator” of introducing new fields, they use a “chaosMonkeyField”, “We reserve the right to add fields at any time”. I have to admit I was a bit confused on this but it sounded cool.

Mobile app CI session:

This was a disappointing session. I’m not sure I really understood this:

Project size – mobile small, embedded large
Complexity – mobile low, embedded high
Criticality – mobile low, embedded high
Ease of outsourcing testing – mobile easy, embedded hard

The presenter went over how they test Bluetooth communication between, for example, an iPhone and the ComPilot device (which I actually have, since I wear Phonak hearing aids; he was describing testing at Phonak).

The session was confusing to me, and later when I talked to other people who attended, who actually do mobile testing, they thought the approaches he described were a bad idea. For example, he recommended jailbreaking the phone. Other attendees told me that’s a bad idea, because most users won’t have done that, so you’re not testing a real situation. Also, his tester job description included the task “Get developers to write tests.” Huh? So, I’m not going to repeat my notes here.

Since it was clear there were many experienced mobile testers at the conference, I scheduled an Open Space session on mobile testing. It was focused on Android testing, which I know little about, as I have only gotten to test iOS apps. One interesting tip there was, given the multiplicity of Android devices, one team just keeps track of which devices have the most defects reported by customers. For example, if they’re getting lots of defect reports on Samsung devices, they focus their testing there. Basically, they cover what their customers use most. One interesting point I didn’t know was that the cell phone service provider makes a difference. For example, there may be issues with Vodafone that other carriers don’t have.

People in the session recommended logging everything, and using Google Analytics and Crashlytics. The kinds of info they get: if lots of users are leaving a particular view, try to figure out why that is – a usability issue? Trace software issues from the API.

One person mentioned that they package an HTTP server with their app, and open it up in real time. It provides a way to automate and change phone settings. That was a bit over my head, but it seemed an alternative to jailbreaking, which everyone agreed was a bad idea. Also they were changing the phone location some way, using Calabash.

It was interesting to hear how people keep their phones charged, go reset stuff manually every day, and parallelize the CI. People were doing CI on actual devices. One team keeps their phones up in the ceiling, because other people kept “borrowing” cables and cords. They tried using pink cables to deter thieves, but it didn’t work. :-> I need to look up some of the folks who seemed to know their stuff around mobile testing.

More Lean Coffee

One interesting story on Lean Coffee Day 2 from Emma Armstrong (do go read her blog posts from Agile Testing Days). They keep their backlog on a giant whiteboard. Sounds like there are hundreds of story cards on it. Some unestimated ones are in an envelope on the floor by the whiteboard. She pointed out the difficulties of finding a particular card when she wants to make notes, perhaps about a defect she just found. These are the types of challenges that more experienced agile teams encounter.

Dan North’s Keynote

How do agile teams do testing? Blah, blah, blah, test!, done. Dan rocks, and everyone loved his index card slides (though I know that was a TON of work to do!)

“What we do reveals what we value”. Our values and beliefs, capabilities and behavior are all inter-related. Dan had an interesting quadrant to help find what capabilities are missing, with manual and automated on the top and bottom, deterministic and “random” on the left and right. (Yes, the “context-driven” school started complaining right away). The summary slide was that the quadrant with Manual and Deterministic was “Boring”, and the quadrant with “Random” and Automated was “Weird”.

Dan noted that grooming is for horses, not backlogs, and he put in that maybe it is also for donkeys. Yes.

Dan wrapped up by reminding us to explore other testing methods, consider opportunity cost (what did we NOT do while we were finding and fixing all those bugs), and to test deliberately – how, where, what, when.

Dan’s keynote was notable for bringing up “130 kinds of testing”; he called it “Testing Boggle”. Whether there are really that many kinds of testing was a hot topic the rest of the week. Oh, and Dan noted that happy Ops folks should be our goal.

You can look at Dan’s slides for more.

Matt Heusser’s keynote

Matt talked about “fast” and “slow” thinking, which has intrigued me since I saw Peter Varhol‘s session on it at the SQuAD conference. We definitely miss glaring bugs due to “fast” thinking. But what the heck do we do about it? Daniel Kahneman’s book Thinking, Fast and Slow does not say. I want to learn more about how we can apply psychology to achieve better testing.

Agile Games Night

Facilitated by Bart Knaack, the games night was shockingly well attended. I ‘helped’ Janet run an interesting game to compare “NoEstimates” to Scrum. My team was the Scrum team. We had planning meetings and 20-minute iterations, and worked in pairs. The other team did Mob Programming. The task was to work on part of a big jigsaw puzzle. The Product Owner decided which part to do. The Mob Programming team could only have one person moving the jigsaw pieces around, and they rotated that job.

Our team estimated some stories for doing a part of the picture, and paired up. I acted as ScrumMaster. There was some panic at the end of the first sprint, when I pointed out that the other team was “kicking our ass”. No, I’m not a good ScrumMaster, which is why I don’t do it for a living. In the end, we got about as much of the puzzle done as the Mob Programming team, but ours was in two sections. The Mob Programmers did one whole section. Is this good or bad? I have no idea, but it’s worth more experiments.

Bart Knaack and James Lyndsay did a class (sponsored by the conference) for school kids on the Friday after the conference, with the programs they’ve developed to help teach kids to code and get interested in software coding and testing. How cool is that?

Day Three

David Evans, Making Quality Visible

Why do I have so many notes on this talk? Ummm, mostly because I fought off my compulsion to continually tweet, and took sketch notes. (Why I didn’t do that Day 1 and 2… I can’t say).

‘The “product” of testing is confidence.’ I thought this was a great way to put it.

Testing is like an expert witness at a trial. We look at data and use our judgment to come to a decision. As an example, he showed the actual (hand-written!) engineer reports of the testing on the space shuttle O-rings, and the temperature/hardness data. When you look at those reports you can clearly see it was a horrible idea to go ahead with the Challenger launch in ’86. But apparently, Morton Thiokol’s managers did some magical thinking and decided that if the O-rings hardened due to cold and failed to seat, the second set of O-rings would seat. Though there was absolutely no data on how this could be.

David gave some interesting examples (with audience participation) proving that our vision wins over our hearing. We hear what we see. For example, the McGurk effect – if we see someone’s mouth forming a word, we believe their mouth, rather than what they actually say. The way we visualize information is critical. Beware of conflicts.

He also gave great examples of how data is presented with or without context. If you look at how much the U.S. spends on its military, it appears to be far more than any other country. However, as a % of GDP, it’s quite a different story; the U.S. is way down the list. So context is key.

Another interesting story of visualizing was the march of Napoleon’s “Grande Armée” into Moscow. The graphic of the size of the army on the way into Moscow, and coming back out, is striking. And if you graph temperature along with it, you see that indeed, Napoleon was defeated by two generals: January and February. Most of the troops died, most of them from the frigid weather. See the link to the prezi deck later on.

It’s good to have visuals that require little interpretation. For example, a heat map showing where a particular soccer player spends most of his time, due to his position on the team – everyone can look at the map and understand what it means.

Checking Twitter for feedback is a simple way to get data – search on your product name and #fail or :( Another technique: Wordle the text in your bug reports and see what words come up the most.
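
In the same spirit, here's a toy sketch of the bug-report idea: a few lines of code can surface the most frequent words even without a Wordle (the sample report text below is invented for illustration):

```javascript
// Toy word-frequency counter for bug report text -- a poor man's Wordle.
// The sample text is made up; real input would be your exported bug reports.
function topWords(text, n = 3) {
  const counts = {};
  for (const word of text.toLowerCase().match(/[a-z]+/g) || []) {
    if (word.length < 4) continue; // skip short filler words
    counts[word] = (counts[word] || 0) + 1;
  }
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1])   // most frequent first
    .slice(0, n)
    .map(([word]) => word);
}

const reports =
  "Timeout loading dashboard. Dashboard timeout again on login. " +
  "Login page slow, dashboard widgets missing.";
console.log(topWords(reports)); // -> ["dashboard", "timeout", "login"]
```

If "dashboard" and "timeout" dominate your bug reports, that tells you where to point your exploratory testing next.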

Interesting ways to interpret the 2012 presidential election results – we aren’t really such a divided nation (well, I’m not so sure about that!)

Also a great example in the London tube map. It was a revolutionary step forward in design. However, people interpret it as if it represents the actual geography of London. They’ll go through extra tube stops and changes to get to Paddington when they could have gotten off at Lancaster Gate and walked a couple blocks. (Nice to have examples I can really identify with, as I’m quite familiar with that area and those tube stops!) So, don’t let your architecture diagrams be the London tube map! Scale them to reflect actual geography.

Humans use their sensory and motor skills more than anything. Keep that in mind when deciding what’s most important to your customers.

Work with the brain. 3-D is, like, faster than 2-D, right? (Ref to Spinal Tap) But 3-D is way over-used in charts and it doesn’t convey information well. Find the right scale for your audience. Be careful with dashboards. You aren’t flying a jet, you’re developing software.

Put people on your Kanban (or whatever) board. Show who’s working on what, and who has nothing to do.

Show milestones on your board, for example, as is done in story mapping. (Hey! We can do that!)

For graphs – Keep time on your X-axis, people can interpret time much better that way. Put numbers or size on your Y axis. Use line graphs and an easy color scheme for time-related data, such as – number of low, medium, high bugs over time.

Keep pie charts to 6 sections or less – group the lowest numbers together in an “other” slice.
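
A small helper could do that grouping automatically; here's a sketch (the function name and the data are invented for illustration):

```javascript
// Collapse a pie chart's smallest slices into a single "other" slice
// so no more than `maxSlices` remain. Data below is invented.
function groupSmallSlices(slices, maxSlices = 6) {
  const sorted = [...slices].sort((a, b) => b.value - a.value);
  if (sorted.length <= maxSlices) return sorted;
  const kept = sorted.slice(0, maxSlices - 1);
  const otherTotal = sorted
    .slice(maxSlices - 1)
    .reduce((sum, s) => sum + s.value, 0);
  return [...kept, { label: "other", value: otherTotal }];
}

const bugsByComponent = [
  { label: "ui", value: 40 }, { label: "api", value: 25 },
  { label: "db", value: 12 }, { label: "auth", value: 8 },
  { label: "email", value: 6 }, { label: "search", value: 5 },
  { label: "export", value: 3 }, { label: "misc", value: 1 },
];
// 8 categories collapse to 6 slices: the 3 smallest become "other" (9)
console.log(groupSmallSlices(bugsByComponent).map(s => s.label));
// -> ["ui", "api", "db", "auth", "email", "other"]
```

The point is the same as David's: the audience can only compare a handful of slices at a glance, so summarize the long tail rather than drawing it.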

David’s slides are worth a look. Further good tips can be found in this short piece (with some good links). For a deeper analysis, try the book Information Dashboard Design by Stephen Few.

Ajay Balamurugadas – Exploratory testing

Ajay did some interesting feats of mind mapping ideas thrown out by the audience, and demonstrated some cool memory tricks using mnemonics. I had hoped for a more advanced talk on exploratory testing, but do check his blog for his explanation of his slides, plus his wonderful stories about exploratory testing.

Seb Rose on using BDD (or TDD or SbE/ATDD) to build trust on your team

Seb gave some good analogies for team trust. For example, if you buy a used car that turns out to be no good, even if you get your money back, you won’t trust that dealer anymore. You put time and effort into shopping that you can’t get back. Do check out Seb’s beer belly testing anti-pattern.

This was a pretty basic session with stuff that I already try to practice, like involving your non-tech stakeholders, using the Three Amigos approach. He likes Cucumber because it works with just about all programming languages.

Seb advises that you need to understand, document and communicate your architecture. He feels most programmers and testers don’t really understand their system’s architecture. Personally I’ve always found it really helpful to learn more about the architecture of our team’s application.

J.B. Rainsberger: The Next Decade

Joe’s keynote was brilliant. It started with a video of him giving a keynote at another recent conference, showing himself a couple of years back when he was many kilos heavier. One question: why aren’t ‘we’ rich, meaning the people who came up with agile values and practices? I can think of a lot of theories for that. Joe noted that we’ve scripted the critical moves, but we’ve apparently failed to convey the vision.

Joe had an “intermission” with a hilarious Bob Newhart and Mo Collins video.

Joe’s slides are worth a look: there are some excellent cartoons, plus a video of him giving a similar talk a couple years ago when he was twice as large.

And the last keynote was Janet’s and mine. We started it off with the Legend of Super Agile Person, which was even more hilarious than we could have predicted. Thanks again to Stephan, Mary and Pete for their admirable acting skills!

(I have the rest of the talk on video if anyone wants to see it, but it’s not nearly as funny!) Here are Dan Ashby’s sketch notes.

Here are some random notes from Agile Testing Days.

I didn’t get to TDD as if you meant it, but I heard good things about it. I also missed Carlos Blé’s live coding session. It was really tough to choose sessions in many of the timeslots.

Vojtěch Barta wrote a great summary of the conference. Oana Juncu’s summary of Agile Testing Days captures the magic of what this community brings together and generates.

You can find my photos from Agile Testing Days.

After consolidating my notes and photos from Agile Testing Days (so many devices! Plus sketch notes!), I realized it’s way too much for one post. I’m taking the advice of tweeps and serializing this novel. First up: the workshop I facilitated on Advanced Topics in Agile Testing. I must emphasize that I was only a facilitator. I organized brainstorming exercises, and we all shared our experiences. We generated some interesting ideas!

We had 18 participants including myself, and they were truly advanced, with lots of experience in both agile and testing, working in a variety of domains. I was so happy to meet so many people who have improved their process and product. Most work in globally distributed companies.

Goals and Purpose

We started by setting SMART goals around the testing-related problems and obstacles each participant faces on their own teams. For example, one team had a goal “get faster feedback with automated tests in three months using nightly builds”, another had “automate regression tests which cover the work of several teams”. Not everyone managed to follow the whole SMART pattern, but we had plenty of interesting issues to address. Topics ranged from how to get started with ATDD to “dev and QA love”.

Brainstorming techniques

Then we tried different brainstorming techniques to come up with small experiments that participants could try back at work. The first one was impact mapping. Most people weren’t familiar with it, so I gave an introduction.

Each table group took their highest priority topic and started out by explaining the “why”, the purpose of the goal. Next, they brainstormed about who could help attain the goal, or who might get in the way of success. The next step was to consider each “who” and decide how they could help or hinder, the impacts. Finally we discussed deliverables, the “what” for each “why, who, how” combination. After each ‘round’, the groups shared their outcomes. To wrap up, each team chose one or two experiments to try when they got back to work. The general feeling was that these were doable experiments worth trying.

The exercise generated lots of interesting ideas. One group came up with the idea to pair across teams in order to improve communication and get a better understanding of the product. Another team proposed creating a definition of done for each epic to improve test coverage.

Mind Mapping

To address the next highest priority goal for each group, we used a simpler technique, mind mapping. The free-form discussion with everyone adding nodes to the group mind map stimulated helpful thoughts. The groups ended up using sticky notes as the nodes, which was a bit more practical with five or six people working on the same map.


After lunch, everyone self-organized into new table groups, and we tried another brainstorming technique, “brainwriting”, to come up with experiments around the next highest priority goal. This works in five-minute intervals. Each person starts writing ideas on a piece of paper. At the end of five minutes, they pass it to the person on their right, read the ideas on the new piece of paper they received, and see if that sparks more ideas.

The room was quiet for 30 minutes as everyone wrote. I was surprised how many unique ideas each group came up with. And I was surprised how creative people got, using mind maps and graphics to illustrate their ideas. This proved the most popular brainstorming technique. As one participant commented, it’s fair, because everyone gets an equal chance to contribute.

More techniques

The next go-round, teams used the “super powers” technique, where you dream what you’d do if you had super powers such as x-ray vision or the ability to teleport, or SWOT analysis – What are the Strengths, Weaknesses, Opportunities and Threats currently relevant to your goal?

Some groups tried both. The SWOT was confusing; the line between weaknesses and threats, for example, is blurry. These techniques weren’t as popular as brainwriting, but led to some creative experiments.

Sharing experiences

In between brainstorming sessions, we had large group discussions on topics that emerged. I enjoyed hearing the different experiences team members shared. Since I was facilitating, I couldn’t easily take notes, but I gained some insights into tough areas such as working in a highly regulated domain. Pore over the photos for some inspiration.

Try it yourself

See more photos from the workshop; sorry they aren’t more organized, but maybe they’ll inspire you to try some of these techniques yourself, with your own team! Identify the areas where you want to improve, set SMART goals, and brainstorm small experiments to try. Learn from those experiments to chip away at obstacles and achieve your goal of high-quality software that delights your customers!

During a recent stay in San Francisco, I walked down to the Ferry Building before dawn to enjoy the beautiful sunrise over the bay. I stopped in Peet’s Coffee for tea. As I waited for it to steep, I noticed a diverse group of people gathered around one of the large wooden tables in the public area of the Ferry Building. It reminded me a bit of a Lean Coffee. These table-mates were discussing something animatedly. Yet they apparently weren’t there just to talk. One had a laptop open, and a couple of others had a newspaper. They represented a range of genders, ages and abilities. Nothing really seemed to connect them, except enjoying each others’ company at the same table.

Were they some kind of club? A Cheers-style group of regulars at Peet’s Coffee who gathered every day in the pre-dawn hours? Whatever their connection, I envied their obvious sense of community. What would it be like to meet with a group of regulars each morning, to drink coffee and pursue your own interests (the paper, the laptop) while also indulging in conversation?

I’d been spending the week engaged in different sorts of communities myself. I participated in an Agile Open conference for two days, followed by working with some of my teammates in our SF office. And as usual, I was involved in online communities via social media. These communities provide me with learning opportunities, a chance to share experiences and ideas with my peers. But that’s not quite the same as being in a regular gang where everybody knows your name, at Peet’s Coffee or Cheers.

What’s the point of telling this story on my blog? Somehow, the sight of this nice group of friends enjoying their discussion along with different pursuits prods me to spend more time finding in-person learning and sharing opportunities. There are so many meetups and user groups in the town where I work, but I don’t make time for very many. The evening before my visit to the Ferry Building, I met my friend Angeline for dinner. She takes full advantage of all the interesting groups and meetups in San Francisco, and invests a lot of time in learning. Right now she’s learning how to make hardware. That’s so inspiring to me.

So maybe I’d better make time for the next Women Who Code meetup, and converse in-person with people outside of my own (albeit wonderful) team. It might not be a Cheers- or Peet’s-style gang, but I might learn something and make some new friends.