Agile Testing with Lisa Crispin
Providing Practical Agile Testing Guidance

I learned so much at Agile Roots 2015 last week. Check out the artifacts, they’ll inspire you too! Janet Gregory and I did a plenary talk on “Do Testers Have to Code… To Be Useful?” I always love pair presenting with Janet. She did a super job of explaining our views on the subject. To summarize: Your software delivery team already has coders, and they can write test code as well as production code. But we think testers do need technical awareness to help them communicate and collaborate well with other team members.

This blog post is meant to be about our workshop, though, so on to that. We had 90 minutes and a great group of participants to think about what skills a team needs to help them build quality into their software product. Testing isn’t a phase, as our friend Elisabeth Hendrickson so aptly says. We know we can’t test quality into a product (I am not sure who first said that, but I’ve heard it for 20 years – still, people seem to try!). Quality has to be baked in. What skills help us do that? As testers, Janet and I tend to focus on testing skills, but are they the most important?

T-Shaped Skills

Each of us has a wide range of thinking (aka ‘soft’ or ‘people’) and technical skills. Most of us also have some area of special passion where we have deep skills. For example, I have lots of experience in exploratory testing, test automation, eliciting examples from customers, SQL, and so on. But I can bring the most value to my team with my ability to learn domains quickly – that’s my deep skill. I learned about the T-Shaped Skills concept from Rob Lambert. Each workshop participant noted the skills that can help their team build in quality, one per sticky note.

Commitment to quality

Quality is like Mom and apple pie. Ask any software delivery team and they’ll say they want to create a high-quality product. But are they really committed to doing that? What will they do when they encounter an obstacle? We shared stories and discussed the importance of making that commitment mean something. It will take a variety of skills, experience and perspectives to creatively overcome all the things that get in the way of building in quality. Get your team together and talk about what your commitment to quality really means.

Square-Shaped Teams

When all team members put their T-shaped skill sets together, we get square-shaped teams; see Adam Knight's blog post on this topic. Our workshop participants compared their individual skills, grouped similar ones, and discussed which were most important. (Pictures of the results are at the end of this post.) What skills can each specialty bring to the party? If an essential skill is missing, how can your team obtain it?

Transferring knowledge, effecting change

We discussed collaboration techniques teams can use to make the best use of specialized skills they need. Learning new skills or sharing specialized ones can mean change, and change is hard. Patterns from More Fearless Change by Linda Rising and Mary Lynn Manns are helpful as you try to spread new ideas or encourage new experiments.

Each workshop group discussed the skill area they deemed most important and thought of experiments they could try with their own teams to build those skills. Interestingly, communication skills, rather than technical testing skills such as exploratory testing or test automation, came out on top in three of the five table groups. The other two groups chose related skill areas: conflict resolution and gaining empathy with users. The groups tried some interesting experiments. One group decided to try teaching a simple skill to see how hard it might be. One of the group members was left-handed, and set about teaching the others to write left-handed. This proved a simple way to learn how to teach a skill, a prerequisite to helping spread skills across the team! Another group played an icebreaker game to learn more about each other as a first step in improving communication. Again, this is something simple and fun that any team can try.

Giveaways

With only 90 minutes for our workshop, we didn’t have time to try out a lot of techniques to transfer skills. For me, a key giveaway (I learned that term from Alex Schwarz and Fanny Pittack at last year’s Agile Testing Days; I like it better than “takeaways”) was that what so many play down as “soft” skills form the core strength of a team’s ability to build quality into their software. If team members can’t communicate effectively with each other or their customer, it’s hard even to define what quality means to them and to their customer. Another “aha” moment was realizing that extremely simple exercises, such as an icebreaker game or teaching a skill like writing left-handed, provide a lot of insights and help teams work together better.

Below are the skill charts from each of our groups (WP won’t let me format these in a nicer way, for some reason). You can also check out our slides, which have some good resources for further reading. Janet and I will do a similar workshop at Agile Testing Days, but we’ll have a whole day there, so we are looking forward to more in-depth outcomes which we can share.

(Photos: skill charts from each workshop group.)

I work every day as a tester on the Pivotal Tracker team. Some people think that because I’ve written books and speak at conferences I must be a full time consultant, but my passion lies in being a member of a great team and doing hands-on testing.

(Photo: brainstorming with JoEllen at our old office.)

It’s easy for me to go around telling you all the best way to build quality into a software product, but practicing what I preach can be a challenge. For example, I’m a very shy person with self-esteem issues. So though I’m always telling you how great pairing is, I often find it hard to leave my bubble and pair with amazing people like my teammate JoEllen Carter.

Here’s a slice of life from a particularly exciting week. We moved to a fancy new office half a block down the street, along with other teams from Pivotal Labs. Right now our Tracker team is rattling around in our new digs, but we have several new hires, interns and a Colorado School of Mines project team joining us soon.

(Photo: enough said!)

Until all those new folks join us, there are extra monitors lying around. One of my teammates, knowing how much I love major monitor real estate, hooked a third monitor up to my usual workstation. Of course, another teammate caught me using my laptop for a standup meeting with some remote team members, and posted it on our Slack channel. Yeah, I look pretty silly! And there’s a downside: it’d be nice to move locations every day as our dev pairs do, but I can’t tear myself away from those three monitors. Well, it won’t last forever.

(Photo: on the right are the scenarios I wrote; on the left, the designer’s ideas.)

Our new space has acres of glorious whiteboards. We had few whiteboards at the old office, but whenever any of us got in front of a whiteboard and started drawing while talking, magic happened. Today, a few of us started discussing a poor user experience in our customer signup process. After a few minutes of waving hands and explaining, I walked over to the giant wall o’ whiteboards and wrote out three scenarios. The others walked over and we had a good conversation.

Later on, the designer and a couple of developers went back to the whiteboard to talk more about it. The designer sketched out his ideas. Writing and drawing on the whiteboard helped us think things through and share the same understanding about the problem and the potential solution.

(Photo: the new Tracker office.)

Other stuff that went on this week? I usually work from home two days a week because I live 35 miles and much gridlock away from work. The office move translated into problems connecting with the office. The team is changing up how we do builds and deploys, and that’s getting in the way of delivering stories for final acceptance testing. We testers pitch in on customer support, and that’s been a bit busy this week. Like everyone I know, I don’t have enough time to do all the things I want/need to be doing. But we sure have a pretty new office!

Oh, and I did NOT pair with JoEllen at all. This is terrible. OK, she was gone the first two days of the week. We have a three day Hackathon next week. I had better take advantage of the opportunity.


Janet Gregory and I enjoyed participating in the Quality in Agile conference in Vancouver April 20-21. We paired on a keynote: “Do testers need to code… to be useful?” Our opinion in a nutshell: testers need technical awareness to collaborate effectively with all their team members, but our software delivery teams should already have expert coders!

Even if they don’t write code, testers need to participate in automating regression tests and other useful automation, in collaboration with programmers, business stakeholders and others. Janet and I facilitated an all-day workshop on advanced topics in agile testing, with a focus on automation.

Challenges around automation

After some introductory slides, the 15 workshop participants self-organized into three smaller groups, choosing to sit with people that had similar goals for the day, or who had experience related to their goals. Each person listed their team’s impediments to automation, one per sticky note, and we grouped these on a big wall chart.

(Photo: the Automation Challenges wall chart.)

Next, everyone dot voted on the topics they wanted to tackle during the workshop. The top three vote-getters were:

  • Culture and responsibility – whose job is it to automate?
  • Lack of time for automation activities
  • Things that make tests hard to automate, such as complexity

(Note: you can find higher-resolution photos of all the session wall charts, including those not shown in this post.)

Formulating the problem and brainstorming ideas to overcome it

(Photo: mind map and problem statement for cultural challenges.)

Each group was tasked with writing their own problem statement for the culture and responsibility topic. It is challenging to write a good problem statement! You can see an example at the bottom of the mind map pictured above. Once the problem was defined, everyone picked up a Sharpie and each team mind mapped on their big piece of easel pad paper.

One group focused on a lack of shared vision and investment in automation at the company level. Another saw a lack of education on both sides.

(Photo: the third group’s mind map is on the right; my individual shot of it was blurred.)

I thought it was interesting that the third group exploring culture and responsibility mentioned doing social activities together, and honing soft and technical skills, including being respectful of each other.

For topic #2, after each group wrote their problem statement around the lack of time for automation, we tried a different brainstorming technique: brainwriting. Each person wrote, on a plain piece of paper, their ideas for dealing with complexity and other things that make automation difficult. Every three minutes, they passed their paper to the group member on their right, who read what was already written and then added more ideas. This continued until each person within the group had written on each paper. Most people agreed that reading other people’s ideas jogged new ones of their own. This technique lets people who might not be comfortable drawing on a mind map or saying their ideas aloud contribute equally.

(Photos: an example problem statement; ideas for dealing with a brittle app; ideas for good code design and for collaboration.)

For topic #3 (sample problem statement pictured above), we did “brainwriting with a twist”, an idea of Janet’s. Each team started by drawing, mind mapping or brainwriting ideas on a big flip chart page. After 10 minutes, each group moved to the next group’s flip chart, read the problem statement and ideas, and added their own. Some specific ways to design better automation code came out of this, as well as ideas for better tester-coder collaboration and ways to make these problems more visible.

Designing experiments

(Photo: example experiments.)

Of course, there is more to problem solving than brainstorming ideas. Janet presented a model of Esther Derby’s: define the problem and desired outcome, understand the context and requirements around potential solutions, design experiments, try them and evaluate the results. Each team spent time coming up with experiments they will try when back with their own teams. We hope that participants will report back to us on how their experiments went!

Got automation challenges – or any challenges related to quality and testing, for that matter? Get your team together, try out some brainstorming techniques, and make it comfortable and safe for each person to contribute their ideas. Identify the biggest problem, brainstorm a couple of experiments to try to make that problem smaller, and use your retrospectives to evaluate the results. Keep experimenting, inspecting and adapting. Over time, your problems will get smaller and your successes bigger. But remember to celebrate even the small successes!

I learned a lot at Mile High Agile last Friday. Here are some notes from sessions I attended. I never heard an official count of attendees, but it was well over 700 people. The fun part was running into people I worked with 15, 20 or more years ago; I think pretty much the whole tech population of the Front Range was there.

Mike Cohn, keynote, “Let Go of Knowing: How holding on to your views may hold you back”.

I expected magnificence from Mike and I wasn’t disappointed! (I worked for Mike back in 2003-4 and have learned so much from him over the years. Plus, the books I’ve co-written with Janet Gregory are part of Mike’s Signature Series of books with Addison Wesley.)

It’s hard to sum Mike’s talk up briefly. One theme was “intellectual humility”: being able to say, “I think this is the best way to do X, but I could be wrong.” He talked about the Dunning-Kruger effect – the less we know, the less we think there is to know. There are lots of interesting studies around that which are worth looking up.

Mike pointed out that process != a list of rules, and encouraged everyone to avoid “brand loyalty”. Don’t go for only one “brand” of agile – though Scrum is Mike’s bread and butter, he is known for user stories and estimates, which both came from XP. Question assumptions, lose your grip on certainty. Too many of us aren’t as open-minded as we should be. When we’re willing to be wrong, we have a new path to growth.

Mike Clement, “The Quest for Continuous Delivery at PluralSight”

  • One of Mike’s main messages was that the whole project team needs to get involved in CD. Most of this wasn’t new to me: “you should be able to push a button and have confidence, get ideas to market quickly” – amen. “Source code control for ALL THE THINGS”.
  • Mike’s team strongly favors feature toggles over feature branches, so that they’re integrating continually. They can conveniently toggle a feature off in prod if there are problems, instead of rolling back. (There’s a minimal sketch of the idea right after this list.)
  • They use TeamCity for CI and seem to like it.
  • They’ve been using a homegrown deployment tool. Mike strongly advises against this. They’re switching to Octopus (they are a .NET shop).
  • They use alerts in New Relic. They’ve been able to get better monitoring and live testing with New Relic (my team does this too).
  • They embrace DevOps as a culture, not a role, though their Ops team is separate from their Dev teams.
  • He likes SaltStack for server management; they treat server management as code.
  • They automate their Cassandra database schema modifications.
  • Their staging environment is not enough like prod and they’re working on that.
  • The goal is immutable infrastructure and continuous deployment – they aren’t there yet but are still working towards it.
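
Feature toggles are easy to sketch. Here is a minimal example in Ruby – the class, flag and method names are mine, purely for illustration, and this is not PluralSight’s actual implementation. The point is that the new code path is merged to trunk and ships dark, controlled by a flag read from configuration, so it can be switched off in production without a rollback or redeploy.

    class FeatureToggles
      def initialize(flags = {})
        # In a real app these values would come from a config store that can be
        # changed in production without a deploy; here we just read an env var.
        @flags = flags
      end

      def enabled?(name)
        @flags.fetch(name, false)
      end
    end

    toggles = FeatureToggles.new("new_checkout_flow" => ENV["NEW_CHECKOUT_FLOW"] == "on")

    if toggles.enabled?("new_checkout_flow")
      puts "rendering the new checkout flow"    # new path: merged to trunk, dark until the flag is on
    else
      puts "rendering the legacy checkout flow" # old path keeps working; no rollback needed
    end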

Mike wrapped up with a picture of Picasso’s Don Quixote and quoted the lyrics of “The Impossible Dream”, which seemed appropriate for our quest for CD! But hey, we just need donkeys, right?

Paul Rayner, “Lean UX: Want to get better at the Lean discipline of ‘Deliver Faster’?”

Paul highly recommends Jeff Gothelf’s book and video course on Lean UX. Check out Paul’s review of the book.

  • agile + design thinking + Lean startup = Lean UX
  • “requirements don’t exist. What you really have are unvalidated assumptions”. Question what’s in your backlog. Ask why.
  • UX design is a call to action on the part of the user, for example, “next step”.
  • Use an MVP to validate learning. Five properties of an MVP:
    • Clear and concise
    • Prioritize ruthlessly
    • Stay agile – if it’s not what you need, fix it. Inspect and adapt.
    • Measure behavior – a lot of people don’t do this.
    • Use a call to action. Just give the user one thing to click on.

Chris Shinkle, “You Can’t Manage What You Can’t See”

Visuals let you communicate a lot of information quickly. Chris had pictures of how they manage all the jets and traffic on a huge aircraft carrier – on a big table with little models of all the planes and equipment that the person in charge moves around manually. It was pretty impressive. Here are some more points I noted:

  • Make sure everyone’s looking at the same information. Visuals lead to better decision making and shared understanding (he quoted Jeff Patton’s book that we just read in our book club).
  • Chris advocated simple, low overhead visuals, making progress visible to all, identifying impediments to progress.
  • His teams use a combined (physical) Kanban and Scrum board, which was pretty interesting. They don’t just have cards on the boards; there are also lots of pictures and drawings. They use little tricks such as turning a story card sideways to remind them to talk about it in the standup. He feels physical boards and other visuals help people feel connected.
  • Remote people have a “sticky buddy” in the office to move their cards/stickies for them. They try to get things out of computers and up on walls.
  • They do use electronic boards as well, for history and analytics. He said people grumble about keeping both a physical and an online board up to date, but in truth it doesn’t take much time.
  • He recommended something by Arne Roock but I’m not sure if it was his book or what.

My own workshop, “Building Your Agile Testing Skill Sets”

I had a lot of people in my own workshop, “Build your agile testing skill sets”, and a good diversity of roles: about a third testers, a third developers, and the rest BAs, managers, ScrumMasters and the like.

I guess it was around 60 people. The session was mostly group exercises, with each group brainstorming about what skills testers and teams need to succeed with testing, what specific skills their own teams are missing, and ideas for experiments to obtain the missing skills.

Overall, the level of testing knowledge was lower than I expected. There were some highly experienced expert practitioners, but many people seemed to be beginners at both testing and agile. A lot of people were in companies that have siloed “QA teams” and where testers were considered “second-class citizens”.

Everyone seemed to enjoy the discussions and sharing of lots of interesting ideas. I hope they each got a couple of ideas to try for baby steps towards doing a better job of building quality in.

Bernice Niel Ruhland, a director of quality management programs, contributed so much value to More Agile Testing. In addition to sidebars where she explains ideas she uses for training and managing testers, we refer to several stories and ideas we learned from her. She also read every draft of every chapter at least three times and gave us invaluable feedback to help create the final book.

Janet Gregory and I are so excited that Bernice has shared her experiences as a More Agile Testing reviewer and contributor. I hope it will inspire you to write about your own experiences, volunteer to review your colleagues’ draft publications, and perhaps read our book!

And while you are on her blog, keep on reading. Bernice blogs regularly and I’ve learned so much from her stories. For example, I love her tribute to Leonard Nimoy and her stories of how he influenced her career.

Bernice’s creativity extends beyond her testing career to other pursuits, one of which is cuisine. She has a wonderful cooking blog, Realistic Cooking Ideas, which has inspired so many wonderful meals at my house! Don’t read it, though, if you are hungry!

Thanks so much to Bernice!

(Photo: pairing FTW!)

This post benefits from a bit of context: my day job is as a tester on the Pivotal Tracker team. Tracker is a project tracking tool with an awesome API. Our API doc is awesome too. It’s full of examples which are generated from the automated regression tests that run in our CI, so they are always accurate and up to date.

Oh, and part of my day job is to do customer support for Tracker. We often hear from users who would like to get a report of cycle time for their project’s stories. Cycle time is a great metric. The way I like to look at it is: when did you actively start working on a story, and when did that story finally get accepted? Unfortunately, this information isn’t easily visible in our tool. It’s not hard to obtain it via our API, but the best way to get it is not immediately obvious either.

For a couple of years, I have wished I had an example script using our API to compute cycle time for stories that we could give customers. Sadly, I’ve had to let my Ruby skills rust away, because at work I don’t get to automate regression tests (the programmers do all that), and I spent most of the past couple of years co-writing a book.

Recently, our team had three “hack days” to do whatever we chose. One of the activities I planned was to work on this example cycle time script. My soon-to-be-erstwhile teammate Glen Ivey quickly wrote up an example script for me. But my own feeble efforts to enhance it were futile. Unfortunately, all my teammates were engaged in their own special hack days projects.

I whinged about this on Twitter, and Amitai Schlair, whom I only knew from Twitter, offered to pair with me. This was good and bad. On the one hand, awesome to have someone to pair with me! On the other hand, gosh, I’m terrible at Ruby now – how embarrassing to pair with anyone! I started thinking of excuses NOT to do it. However, Amitai was so kind, I faced my fear. We paired.

First we had to figure out how to communicate and screenshare. We ended up with Skype and Screenhero. Amitai introduced me to a great concept: first, write what I want to do in pseudocode. Then turn each line of that into a method. Then start fleshing out those methods with real code. Amitai’s instinct was to do this test-driven, but due to time limitations, our approach was: write a line of code, then run the script to see what it does. We worked step by step. By the end of our session (which was no more than a couple of hours), we had gotten as far as showing the last state change for each of the stories pertinent to the report.
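
To give a flavor of what that looked like, here is a tiny skeleton in the same spirit – the method names are made up for illustration and are not our actual script. Each line of pseudocode became a stub method, so the script ran from the very first minute, and we filled the stubs in one at a time, running it after every small change.

    # Pseudocode as methods: stubs first, real code later.
    def fetch_recently_accepted_stories
      []   # stub: will eventually call the Tracker API
    end

    def last_state_change_for(story)
      nil  # stub: will eventually dig through the story's state transitions
    end

    def report_on(stories)
      stories.each { |story| puts last_state_change_for(story) }
    end

    report_on(fetch_recently_accepted_stories)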

After competing priorities led us to stop our session, I identified an issue with our new code. A teammate paired with me to show me a way to debug it and we were able to fix it. The script still wasn’t finished. Luckily, my soon-to-be-erstwhile teammate Glen later paired with me to get the script to a point where it produces a report of cycle times for the most recently accepted 500 stories.
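
For the curious, here is a rough sketch of what such a script can look like – it is not the script Glen and I wrote. The endpoint paths, query parameters and field names (stories, transitions, state, occurred_at, with_state) are my assumptions from reading the Tracker API doc, so please check the doc before relying on them.

    require "net/http"
    require "json"
    require "time"

    TOKEN      = ENV["TRACKER_TOKEN"]       # Tracker API token
    PROJECT_ID = ENV["TRACKER_PROJECT_ID"]
    BASE       = "https://www.pivotaltracker.com/services/v5"

    def tracker_get(path)
      uri = URI("#{BASE}#{path}")
      request = Net::HTTP::Get.new(uri)
      request["X-TrackerToken"] = TOKEN
      response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }
      JSON.parse(response.body)
    end

    # Cycle time here = first "started" transition to last "accepted" transition.
    def cycle_time_in_days(story_id)
      transitions = tracker_get("/projects/#{PROJECT_ID}/stories/#{story_id}/transitions")
      started  = transitions.select { |t| t["state"] == "started" }.map { |t| Time.parse(t["occurred_at"]) }.min
      accepted = transitions.select { |t| t["state"] == "accepted" }.map { |t| Time.parse(t["occurred_at"]) }.max
      return nil unless started && accepted
      ((accepted - started) / 86_400.0).round(1)
    end

    stories = tracker_get("/projects/#{PROJECT_ID}/stories?with_state=accepted&limit=100")
    stories.each do |story|
      days = cycle_time_in_days(story["id"])
      puts "#{story['id']} #{story['name']}: #{days ? "#{days} days" : 'no started/accepted transitions'}"
    end

Fetching transitions one story at a time is simple to write but means one API call per story, which is part of why a naive script like this runs slowly over hundreds of stories.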

The script runs way too slow, and Glen has explained to me a better approach. I await another pairing opportunity to do this. So what’s the takeaway for you, the reader?

Pairing is hard. You fear exposing your weaknesses to another human being. You feel pressure to keep up your side. But that’s only before you actually start pairing. The truth is we all want each other to succeed. Your friends and teammates aren’t out to make you feel stupid. And pairing is so powerful. It gets you over that “hump” of fear. Two heads really are better than one.

Is there something you’d like to try, but you feel you don’t know enough? Find a pair, and go for it!


I just received a flyer in my snail mail for yet another conference where four out of the five keynote speakers are white men and only one is a woman. Are you kidding me? And this is a testing conference. Testing is a field that does indeed have lots of women – I would guess a significantly higher percentage than, say, programming.

I know the organizers of this conference and they are good people who aren’t purposely discriminating against women (or minorities, for that matter). But they aren’t trying hard enough, either. I’ve personally sent long lists of women I recommend to speak at their conferences. True, most of these women aren’t “known” keynote speakers – maybe because nobody ever asks them to keynote. These women are highly experienced testing practitioners who have valuable experience to share.

This same company has an upcoming testing conference with no female keynoters, so I guess this is an improvement. But I’m not letting them off the hook, and you shouldn’t either.

What do you value more: a highly entertaining, “big name” keynote speech? Or an experienced practitioner who competently helps you learn some new ideas to go and try with your own teams, but maybe isn’t as well known or flashy?

You probably don’t get to go to many conferences, so be choosy. Choose the ones with a diverse lineup of not only keynoters but presenters of all types of sessions. In fact, choose conferences that have lots of hands-on sessions where you get to learn by practicing with your peers. We have the choice of these conferences now. And I hope you will leave your favorites in comments here. I don’t want to make my friends unhappy by naming names here, but email me and I’ll give you my own recommendations. (Another disclaimer – I’m personally not looking for keynoting gigs, so these are not sour grapes. I don’t like doing keynotes, and I know my limitations as a presenter).

The organizations sponsoring and organizing conferences are pandering to what they think you, their paying audience, want to see. If you’re going to conferences to see big names and polished speakers, and you don’t care if the lineup is diverse, go ahead. If you want a really great learning experience, maybe do some more research about where your time and money will reap the most value for you.

I’m not trying to start a boycott, but I am saying: we are the market. Let’s start demanding what we want, and I know these conference organizers will then have to step up and try harder.

Since publishing More Agile Testing with Janet Gregory, I’ve enjoyed time for writing new articles and participating in interviews. Please see my Articles page for links to these. I’d love to hear your feedback on any of these. Have you tried any of the practices or ideas discussed in the articles or interviews?

In the past couple of months I blogged about the spike that JoEllen Carter (@TestingMojo) and I have been doing on automating UI smoke tests for our team’s iOS app: Pairing on a Mission and Continuing the Mission… and continually improving. Today we did a tech talk for our team, reporting on what we’ve done so far and asking for help in choosing our next steps.

(Photo: our tech talk mind map.)

We used this mind map to guide our talk. We explained the goals for our spike: find an automation solution for consistent, repeatable UI smoke tests that would keep bad regressions out of the app store. We demoed one of our test scripts, showed how we had designed our tests, and explained how discovering ‘search with predicate’ made our scripts much easier to write and maintain. We went over the pluses and minuses of our experience so far, pointing out the frustrating roadblocks we encountered, but also the pleasure of learning and working on something fun and cool.

We shared our thought that now is a good time to assess the value of the automated UI tests and how to move forward. Our iOS app is being completely redesigned, so the scripts we created for our spike will have to be re-done from scratch. We still have to solve the problems that keep us from putting our tests into our CI.

There are several options. We could try an external service provider. We could try other tool sets and frameworks. Should we abandon the UI automation idea and spend our time doing manual exploratory testing? Our iOS app already has about 6,000 unit tests and a number of integration tests that operate through the API. However, we have had bad regressions before that could only be found via the UI, so we know we need something.

We got some good ideas from the rest of the team. One was to ask the developer community within our company if they have any iOS UI automation experiences to share, since we know there are many other iOS projects. We posted a question on the development forum and have already had some good input.

This effort is inconclusive, so why am I blogging about this? Right before I started writing this, Jason Barile posted an interesting question on Twitter:

“…is your team really clear on what problems you’re trying to solve with automation and what success looks like?”

Our team has a long history of incredible success using test-driven development and automated regression tests at all levels from unit to UI. We have our share of “flaky tests”, but we get a good return on our automation investment. That doesn’t mean that automation is always the solution. We’ll have to figure things out.

Personally, I don’t like doing manual regression testing, so I hope we can find a way to run high-level UI automation smoke tests in our CI. Then we’d have more time for manual exploratory testing, and I’m learning that could be even more critical for mobile apps than for other types. We shall keep experimenting and collaborating, and finding ways to shorten our feedback loop and ensure our users have a good experience with each new release of our app.


In the delightful keynote “Insights from Happy Change Agents” from Fanny Pittack and Alex Schwarz, I learned a new way to share information with others. Rather than providing a recipe for success, or even a takeaway, we can offer “giveaways”. I shall offer you some giveaways that I received at Agile Testing Days.

(Photo: my PotsLightning sketch notes.)

Sunday I joined the PotsLightning session for the morning. PotsLightning is open to anyone, not only conference participants, and is a self-organizing sort of thing, a combination of open space and lightning talks. Maik Nogens facilitated. My main giveaway was the diversity of participants: there were people from as far away as New Zealand and Saudi Arabia, several women, and people with experience testing all kinds of software, from tractors to music. My sketch notes show, rather illegibly, some of the topics we covered, such as embedded systems, guilds, and automation.

My next sketch note reminds me that someone – unfortunately now I don’t recall who it was – showed me how he uses mind maps for test reporting as well as planning. He embeds screenshots and screencasts, and uses the time machine feature of MindMeister to show progress. I love the visibility these practices add, and I’m keen to try it.

There was so much packed into the conference sessions, mealtime conversations, and hallway discussions. I even learned things in the vendor expo. Here are just a few of my favorite giveaways that stick in my mind, in no particular order.

  • The leader of the mobile testing dojo asked if we had an app we’d like to use for the dojo. I suggested my team’s app, and the group agreed to try it. I got a lot of useful insights, not only into mobile testing techniques, but into how new users perceive our app! Lots of room for improvement in both!
  • I’ve followed Bob Marshall (@flowchainsensei) on Twitter for a while. His keynote gave me so much to think about. I need to work on my non-judgmental observation skills. Non-violent communication is critical and helps in so many of the problem areas currently getting in our way in the software business.
  • Providing a “crash pad” to cushion failures, and re-thinking failures as simply “learning”. This came out of several sessions, including Roman Pilcher’s, where he showed climbers “bouldering” with a crash pad in case they fall.
  • How to nurture testers? This came up in the tutorial Janet Gregory and I did, as well as in Lean Coffee. Janet held an Open Space on it, so I hope she will share what came out there. I think one way is to have fun, and you can see in the photo that testers had fun at the Carnival party during the conference!

    (Photo: a Carnival of Testers, including Bart Knaack, my husband Bob, me, David Evans, someone I don’t know, Alex Schladebeck, Thom Roden and Gareth(?) from RedGate.)

  • Lars Sjödahl did a nice consensus talk on how we don’t notice what we aren’t expecting. It’s a good reminder to me to use my peripheral vision and Spidey sense when exploring our software, and try to see what I’m not looking for. Dan Ashby’s session similarly reminded me to think laterally as well as critically.
  • Janet and I find David Evans’ Pillars of Testing so important that we asked him to write it up, and we used that to wrap up our new book in the last chapter. I so appreciate his shout-out to the book and our many contributors in his keynote. Plus he always cracks me up while I’m learning something new. Do watch the video of his keynote (I don’t know when or where the videos will be posted).
  • Antony Marcano’s “Don’t put me in a box” keynote is a reminder of how much we can learn from hearing others’ stories. For example, his story about how he had to work with programmers who were on the other side of a big atrium, and simply moved himself over to their side in order to collaborate and build relationships with them. Fanny and Alex emphasized that it’s all about relationships! Alan Richardson showed the power of short, crisp stories in his keynote. We can learn so much by sharing our experiences.
  • Daniël Maslyn’s talk on robotics showed how exciting the future of testing really is. We tend to get a bit blasé, but that’s a whole exciting world we could enjoy learning about!

My previous post has a list of blogs from Agile Testing Days participants, please check those out for more!

In other news, we are honored that LingoSpot listed Agile Testing as one of the top 16 books every software engineer should read!