
More Agile Testing Days Joy (part 2 of 2)

Here’s more on what I observed and learned during Agile Testing Days. I’m just going to brain dump the rest here, because otherwise I’ll never get it posted!

Lean Coffee

Janet and I had a great turnout for Lean Coffee all three mornings, despite the early hour. We averaged 20 – 25 people squeezed into two groups. The Fritze Pub is cramped so it was impossible to break into smaller groups as more people turned up. It worked fine anyway. Participants were passionate about the topics, and shared their experiences.

Agile Testing Days was not without bugs! And pirates!

For a nice overview of our first Lean Coffee, plus notes about the first keynote, see Pete Walen's excellent live blog (how DOES he do that, especially with marginal wifi?): http://rhythmoftesting.blogspot.com/2013/10/live-agile-testing-days-2013-day-1.html (search his blog for Days 2 and 3).

For me, the keynote was just blah, blah, blah. I had to agree with Seb Rose, who tweeted: "Feels like a certification course compressed into a keynote."

More sessions

Unfortunately I didn't take good notes on the sessions I went to Monday morning. Sami Soderblom (read his great post on the conference) had a fabulous preso on exploratory testing. One of his messages was that when programmers and testers work together, we code, test and review together. He's a mind mapping fan. Also a hockey fan. Doh, he's from Finland. He referred to Explore It! by Elisabeth Hendrickson. +1 on that!

I couldn't get into the session on infrastructure testing with Jenkins, Puppet and Vagrant, but here is the slide deck, and a write-up from Lyndsay Prewer: http://tdoks.blogspot.co.uk/2013/11/agile-testing-days-take-aways-3.html

Instead, I went to Anand Ramdeo’s session on “Misleading Validations, Beware of Green”. It was a good reminder to keep an open mind. Don’t let “green” builds lull you into a false sense of security.

Mary Gorman – Strength through Interdependence

Mary Gorman's keynote was terrific. I don't have a lot of notes on that one either, because the information is all in the excellent book Discover to Deliver, which she wrote with Ellen Gottesdiener and which I have read and used. She talked about interdependence and dependence, and had us rate our teams on various aspects of trust. Do we trust that our teammates will do what they say? Is everyone open to feedback, and do they provide it in a direct and constructive way? Do we trust each other's competence? Mary said we should "shed light without generating heat". Bottom line: more trust means a better product.

At my previous job, I kept Mary and Ellen's "seven dimensions" in front of me during planning and brainstorming meetings. They helped me ask questions that prompted the business stakeholders to think about more aspects of their feature, and helped the development team better understand what was needed for the technical implementation.

At lunch, J.B. Rainsberger tempted a few of us (Oana Juncu, Gil Zilberfeld, Matt Heusser) into escaping off to the Café Heider for cappuccino, hot chocolate, and interesting conversations. These impromptu discussions are always the best part of a conference. Joe (J.B.) and his wife Sarah are so inspiring to me. They've purposely designed a lifestyle where they don't need so much money and thus don't need to work all the time. And they have changed what they eat so dramatically that they've lost something like 200 lbs between them.

Consensus Talks

The consensus talks (I believe they were 15 minutes each) were mini experience reports, and probably my favorite part of the conference. A lot of the presenters were not experienced conference speakers, but they had the courage to share their stories, and I learned something from each one.

Chris Baumann told us about interesting experiments his team did to keep testing going when they had no testers. One technique they used was to put all the regression tests on cards in a box. Each dev took a card and tested until the box was empty. That way they got different tests to do each time, so it was less boring. It still must have been somewhat boring, as he referred to "the hamster wheel of regression testing" - such a great visual! Regression testing slowed to a stop whenever it became too tedious. The long-term results of his team's efforts remain unknown, as Chris changed jobs.

Stephan Kaemper's talk was on "are we still testing the wrong stuff". It was good, but I didn't take notes, and I'm afraid all I remember is his "zebricorn": more black and white than a unicorn. But do read his blog for his useful thoughts on things like the "two values of software". He did offer an interesting observation: how important is future readiness? Maybe the question is more important than the answer. Stephan and I had interesting discussions later in the week about what he's doing with testing infrastructure. That's something I never thought to do.

Mitch Rauth gave a great talk on being an "agile test manager". He applied the "Three Amigos" pattern at the manager level with good results. He also recommended Management 3.0 by Jurgen Appelo, which I heartily endorse as well. As a manager, he manages the environment, the boundaries around the team, the strategy, and the guidelines for self-organizing teams.

Chris George (check out his blog posts on Agile Testing Days) described how his team went from being a separate test team to having quite blurred lines, where who does the testing and who does the coding is no longer so hard and fast. Tester-coder collaboration helped each feel the other's pain points. Memorable quote: "Focus on skills, not roles". Pairing up a tester and a programmer sped up debugging.

Then I got obsessed with the onstage timer, which looked like a giant light saber: it slowly filled with green, then turned red when the time was exceeded. They should have those at award shows.

I spent the last session of the day making a thing in the Test Lab with James Lyndsay and Bart Knaack. I used Scratch, which is designed to teach kids how to code, to create a bat in a castle that did particular things based on my arrow clicks and where the cursor was. I got a button. Cool!

Test Lab: The Doctor is In! Bart Knaack at the Halloween Party

Then I found my husband Bob in the hotel lobby playing testing dice games with Pete Walen, Huib Schoots, Mieke Mertsch and more. He was doing pretty well, too. Evidently you learn a lot about testing just by hanging out at testing conferences for so many years!

The conference dinner was a Halloween party. Halloween is NOT a big thing in Germany, though I was told that kids do trick or treat these days. However, a surprising number of attendees wore costumes. We had a giant bug at our table, and he won first prize for costume. I never figured out who he was, though he was about 7 feet tall, so it should have been easy to find him in plain clothes at the conference. He never got out of costume in spite of the heat, even when dancing to the awesome all-woman jazz and rock band.

Matt Heusser, Pete and Connie Walen, and Huib Schoots pose with José Diaz

The best group costume went to Pete and Connie Walen, Matt Heusser, and Huib Schoots, who came as bits. My headless horseman costume, purchased from Amazon, was a giant fail. Sigh.

But the big fun was that Markus Gaertner won the Most Influential Agile Testing Professional Person award! Yes, it’s a goofy-sounding award, but it’s strictly a vote of your peers, which is really cool. Congratulations to Markus! I’ve learned so much from him and his book ATDD by Example. Really important things, such as: when you need a test automation framework, don’t go looking for the coolest one, first decide how you want your tests to look.


Day Two

Christian Hassa, “Scaling the Enterprise”
Christian introduced me to a new concept: the underpants gnomes. So many companies want to collect underpants and somehow make a profit, but what turns the underpants into profit? They don't look at that. You can't just collect underpants and not change the way you work. A burning question from Christian: what can we learn from Pinky and the Brain about changing the world? Each episode they fail, but they learn something. We can too.

Christian recommended many of my Favorite Things, including impact maps, story maps, and examples.

I liked this quote: your job as a tester isn’t to verify the software, it’s to verify that the world is actually changing fast enough. I think that was Christian, but my notes have gotten confused.

Christian talked about scaling TDD to the enterprise. Set a desired goal, figure out the stakeholders, define a desired behavior change, figure out deliverables, do TDD, then write a failing acceptance test, and so on. Measure the impact of the deployed system, and refine the deliverable. You can continually refine your strategy based on what behavior change actually happens. This elevates your testing. Scaling isn't about how to do more work with more people. Test goals and impacts as early and often as possible. The scale lives in what you measure and meter, and in how you measure the range: benchmark, constraint, target.

It doesn't mean doing more stuff with more people; it means understanding agile principles at a higher level and building things that help you with the build-measure-learn cycle.

Check out Christian’s slides.
He also recommended the How to Measure Anything book, which is on my list.

A session I wanted to go to, but couldn't because so many ran at the same time, was Adam P. Knight's on big data. Here are Dan Ashby's sketch notes. You can learn more in Adam's article about big data.

Alex Schwarz, Ripening of a RESTful API service

Alex delved into the risks for a RESTful API, including "clients misuse the API" and "version hell". He works for Nokia, and their API is the thing that lets you, on a hotel website, look up all the nearby restaurants, tourist sights and so on. It's huge. He showed an interesting timeline of alpha launch, public beta launch, and readiness for commercial deals. His "many product interdependencies" included search, analytics and Splunk. They have a B2B2C business model, so they're concerned about both their business customers and the end users of those customers. To avoid regression, they test end to end on devices, use Concordion acceptance tests (though it seemed he wasn't too happy with those), Scala acceptance tests, and CDCs, Consumer-Driven Contracts, which capture the expectations of each consumer and how they use the API.
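To make the CDC idea concrete, here's a minimal sketch of what a consumer-driven contract check might look like. Everything in it is hypothetical (the /v1/places endpoint, the field names, the parameters); it just uses Python's requests library to illustrate the principle of asserting only on the fields this particular consumer depends on.

```python
# Hypothetical consumer-driven contract check - NOT Nokia's actual API.
import requests

# The fields THIS consumer actually relies on: its "contract".
REQUIRED_FIELDS = {"name", "category", "latitude", "longitude"}

def test_places_contract():
    resp = requests.get(
        "https://api.example.com/v1/places",            # hypothetical endpoint
        params={"near": "52.39,13.06", "radius": 500},  # hypothetical params
    )
    assert resp.status_code == 200
    places = resp.json()["results"]
    assert places, "expected at least one place in the test area"
    for place in places:
        # The provider must keep supplying every field we depend on...
        missing = REQUIRED_FIELDS - place.keys()
        assert not missing, f"provider broke the contract: {missing}"
        # ...but we deliberately don't forbid extra fields, so the
        # provider stays free to evolve the API.
```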

They use "canary" testing, releasing new features in production for just a few customers; for those releases they avoid end-to-end tests and device tests. To counteract the "noise generator" of introducing new fields, they use a "chaosMonkeyField": "We reserve the right to add fields at any time." I have to admit I was a bit confused by this, but it sounded cool.
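Here's my guess at what that accomplishes, as a tiny sketch with all names made up: if the provider randomly injects an unknown field into responses, any consumer that blows up on unexpected fields fails fast in testing, instead of breaking in production when a real field is added later. The consumer's defense is to be a "tolerant reader" that picks out only what it needs.

```python
import random
import string

def inject_chaos_field(payload: dict) -> dict:
    # Provider side (my reconstruction of the idea): add a random
    # unknown field so brittle consumers break early.
    key = "chaos_" + "".join(random.choices(string.ascii_lowercase, k=6))
    return {**payload, key: "we reserve the right to add fields"}

def parse_place(payload: dict):
    # Consumer side: a tolerant reader takes only the fields it
    # needs and silently ignores everything else.
    return payload["name"], payload["latitude"], payload["longitude"]

place = {"name": "Café Heider", "latitude": 52.401, "longitude": 13.049}
assert parse_place(inject_chaos_field(place)) == parse_place(place)
print("tolerant reader survived the chaos field")
```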

Mobile app CI session:

This was a disappointing session. I'm not sure I really understood this comparison:

Project size – mobile small, embedded large
Complexity – mobile low, embedded high
Criticality – mobile low, embedded high
Ease of outsourcing testing – mobile easy, embedded hard

The presenter went over how they test Bluetooth communication between, for example, an iPhone and the ComPilot device (which I actually have, since I wear Phonak hearing aids; he was describing testing at Phonak).

The session was confusing to me, and when I later talked to other attendees who actually do mobile testing, they thought the approaches he described were a bad idea. For example, he recommended jailbreaking the phone. Other attendees told me that's a bad idea, because most users won't have done that, so you're not testing a real situation. Also, his tester job description included the task "Get developers to write tests." Huh? So, I'm not going to repeat my notes here.

Since it was clear there were many experienced mobile testers at the conference, I scheduled an Open Space session on mobile testing. It focused on Android testing, which I know little about, as I have only gotten to test iOS apps. One interesting tip: given the multiplicity of Android devices, one team simply keeps track of which devices have the most defects reported by customers. For example, if they're getting lots of defect reports on Samsung devices, they focus their testing there, basically covering what their customers use most. Another point I didn't know: the cell phone service provider makes a difference. For example, there may be issues with Vodafone that other carriers don't have.

People in the session recommended logging everything and using Google Analytics and Crashlytics. The kinds of info they get: if lots of users are leaving a particular view, try to figure out why; is it a usability issue? Trace software issues from the API.

One person mentioned that they package an HTTP server with their app and open it up in real time. It provides a way to automate and change phone settings. That was a bit over my head, but it seemed an alternative to jailbreaking, which everyone agreed was a bad idea. They were also changing the phone's location somehow, using Calabash.

It was interesting to hear how people keep their phones charged, reset things manually every day, and parallelize their CI. People were doing CI on actual devices. One team keeps their phones up in the ceiling, because other people kept "borrowing" cables and cords. They tried using pink cables to deter thieves, but it didn't work. :-> I need to look up some of the folks who seemed to know their stuff around mobile testing.

More Lean Coffee

One interesting story on Lean Coffee Day 2 from Emma Armstrong (do go read her blog posts from Agile Testing Days). They keep their backlog on a giant whiteboard. Sounds like there are hundreds of story cards on it. Some unestimated ones are in an envelope on the floor by the whiteboard. She pointed out the difficulties of finding a particular card when she wants to make notes, perhaps about a defect she just found. These are the types of challenges that more experienced agile teams encounter.

Dan North’s Keynote

How do agile teams do testing? Blah, blah, blah, test!, done. Dan rocks, and everyone loved his index card slides (though I know that was a TON of work to do!)

"What we do reveals what we value". Our values and beliefs, capabilities and behavior are all inter-related. Dan had an interesting quadrant to help find what capabilities are missing, with manual and automated on the top and bottom, deterministic and "random" on the left and right. (Yes, the "context-driven" school started complaining right away.) The summary slide was that the Manual/Deterministic quadrant was "Boring", and the "Random"/Automated quadrant was "Weird".

Dan noted that grooming is for horses, not backlogs (and added that maybe it's also for donkeys). Yes.

Dan wrapped up by reminding us to explore other testing methods, consider opportunity cost (what did we NOT do while we were finding and fixing all those bugs), and to test deliberately – how, where, what, when.

Dan's keynote was notable for bringing up "130 kinds of testing"; he called it "Testing Boggle". Whether there are really that many kinds of testing was a hot topic for the rest of the conference. Oh, and Dan noted that happy Ops folks should be our goal.

You can look at Dan’s slides for more.

Matt Heusser’s keynote

Matt talked about "fast" and "slow" thinking, which has intrigued me since I saw Peter Varhol's session on it at the SQuAD conference. We definitely miss glaring bugs due to "fast" thinking. But what the heck do we do about it? Daniel Kahneman's book Thinking, Fast and Slow doesn't say. I want to learn more about how we can apply psychology to achieve better testing.

Agile Games Night

Facilitated by Bart Knaack, the games night was shockingly well attended. I 'helped' Janet run an interesting game to compare "NoEstimates" to Scrum. My team was the Scrum team. We had planning meetings and 20-minute iterations, and worked in pairs. The other team did Mob Programming. The task was to complete part of a big jigsaw puzzle; the Product Owner decided which part to do. The Mob Programming team could only have one person moving the jigsaw pieces around, and they rotated that job.

Our team estimated some stories for doing a part of the picture, and paired up. I acted as ScrumMaster. There was some panic at the end of the first sprint, when I pointed out that the other team was "kicking our ass". No, I'm not a good ScrumMaster, which is why I don't do it for a living. In the end, we got about as much of the puzzle done as the Mob Programming team, but ours was in two sections; the Mob Programmers did one whole section. Is this good or bad? I have no idea, but it's worth more experiments.

Bart Knaack and James Lyndsay did a class (sponsored by the conference) for school kids on the Friday after the conference, using the programs they've developed to teach kids to code and get them interested in software development and testing. How cool is that?

Day Three

David Evans, Making Quality Visible

Why do I have so many notes on this talk? Ummm, mostly because I fought off my compulsion to continually tweet, and took sketch notes instead. (Why I didn't do that on Days 1 and 2, I can't say.)

‘The “product” of testing is confidence.’ I thought this was a great way to put it.

Testing is like being an expert witness at a trial: we look at data and use our judgment to come to a decision. As an example, he showed the actual (hand-written!) engineers' reports of the testing on the space shuttle O-rings, and the temperature/hardness data. When you look at those reports, you can clearly see it was a horrible idea to go ahead with the Challenger launch in '86. But apparently Morton Thiokol's managers did some magical thinking and decided that if the primary O-rings hardened due to cold and failed to seat, the second set of O-rings would seat, though there was absolutely no data to support that.

David gave some interesting examples (with audience participation) showing that our vision wins over our hearing: we hear what we see. For example, the McGurk effect - if we see someone's mouth forming a word, we believe their mouth rather than what they actually say. The way we visualize information is critical. Beware of conflicting signals.

He also gave great examples of how data is presented with or without context. If you look at how much the U.S. spends on the military, it appears to be far more than any other country. However, as a % of GDP, it's quite a different story: the U.S. is way down the list. So context is key.

Another interesting visualization story came from the march of Napoleon's Grande Armée into Moscow in 1812. The graphic of the size of the army on the way into Moscow, and coming back out, is striking. And if you graph temperature along with it, you see that indeed, Napoleon was defeated by two generals: January and February. Most of the troops died, most of them from the frigid weather. See the link to the Prezi deck later on.

It’s good to have visuals that require little interpretation. For example, a heat map showing where a particular soccer player spends most of his time, due to his position on the team – everyone can look at the map and understand what it means.

Checking Twitter for feedback is a simple way to get data – search on your product name and #fail or 🙁 Another technique: Wordle the text in your bug reports and see what words come up the most.
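You don't even need Wordle for the bug report trick; a few lines of Python give you the same raw counts. This is just a rough sketch, assuming your bug summaries sit one per line in a hypothetical bug_reports.txt:

```python
# Count the most common words across bug report text (rough sketch).
import re
from collections import Counter

# Filter out filler words so the signal words float to the top.
STOPWORDS = {"the", "a", "an", "is", "in", "on", "to", "of", "and", "when"}

with open("bug_reports.txt") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(w for w in words if w not in STOPWORDS)
for word, n in counts.most_common(20):
    print(f"{word:15} {n:4}")   # e.g. "timeout    37" hints where it hurts
```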

He showed interesting ways to interpret the 2012 U.S. presidential election results: we aren't really such a divided nation (well, I'm not so sure about that!).

Also a great example in the London tube map. It was a revolutionary step forward in design. However, people interpret it as if it represents the actual geography of London. They’ll go through extra tube stops and changes to get to Paddington when they could have gotten off at Lancaster Gate and walked a couple blocks. (Nice to have examples I can really identify with, as I’m quite familiar with that area and those tube stops!) So, don’t let your architecture diagrams be the London tube map! Scale them to reflect actual geography.

Humans use their sensory and motor skills more than anything. Keep that in mind when deciding what’s most important to your customers.

Work with the brain. 3-D is, like, faster than 2-D, right? (A Spinal Tap reference.) But 3-D is way overused in charts, and it doesn't convey information well. Find the right scale for your audience. Be careful with dashboards: you aren't flying a jet, you're developing software.

Put people on your Kanban (or whatever) board. Show who’s working on what, and who has nothing to do.

Show milestones on your board, for example, as is done in story mapping. (Hey! We can do that!)

For graphs, keep time on your X-axis; people can interpret time much better that way. Put numbers or size on your Y-axis. Use line graphs and an easy color scheme for time-related data, such as the number of low, medium and high severity bugs over time.

Keep pie charts to six slices or fewer; group the smallest numbers together in an "other" slice.
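As a quick illustration of that advice, here's a sketch using matplotlib and made-up numbers: time on the X-axis, counts on the Y-axis, one line per severity, and a simple color scheme.

```python
import matplotlib.pyplot as plt

# Made-up example data: open bug counts per iteration, by severity.
iterations = ["Sprint 1", "Sprint 2", "Sprint 3", "Sprint 4", "Sprint 5"]
severities = {
    "Low":    ([12, 14, 10, 9, 7], "tab:green"),
    "Medium": ([6, 8, 5, 4, 4],    "tab:orange"),
    "High":   ([3, 2, 2, 1, 0],    "tab:red"),
}

# Time on the X-axis, counts on the Y-axis, one line per severity.
for label, (counts, color) in severities.items():
    plt.plot(iterations, counts, marker="o", label=label, color=color)

plt.xlabel("Iteration")
plt.ylabel("Open bugs")
plt.title("Open bugs over time, by severity")
plt.legend()
plt.show()
```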

David's slides are worth a look. Further good tips can be found in this short piece (with some good links). For a deeper analysis, try the book Information Dashboard Design by Stephen Few.

Ajay Balamurugadas – Exploratory testing

Ajay did some interesting feats of mind mapping ideas thrown out by the audience, and demonstrated some cool memory tricks using mnemonics. I had hoped for a more advanced talk on exploratory testing, but do check his blog for his explanation of his slides, plus his wonderful stories about exploratory testing.

Seb Rose on using BDD (or TDD or SbE/ATDD) to build trust on your team

Seb gave some good analogies for team trust. For example, if you buy a used car that turns out to be no good, even if you get your money back, you won’t trust that dealer anymore. You put time and effort into shopping that you can’t get back. Do check out Seb’s beer belly testing anti-pattern.

This was a pretty basic session covering things I already try to practice, like involving your non-technical stakeholders using the Three Amigos approach. He likes Cucumber because it works with just about any programming language.

Seb advises that you need to understand, document and communicate your architecture. He feels most programmers and testers don’t really understand their system’s architecture. Personally I’ve always found it really helpful to learn more about the architecture of our team’s application.

J.B. Rainsberger: The Next Decade

Joe's keynote was brilliant. It started with a video of him giving a keynote at another recent conference, showing himself a couple of years back when he was many kilos heavier. One question: why aren't "we" rich, meaning the people who came up with agile values and practices? I can think of a lot of theories for that. Joe noted that we've scripted the critical moves, but we've apparently failed to convey the vision.

Joe had an “intermission” with a hilarious Bob Newhart and Mo Collins video.

Joe's slides are worth a look: there are some excellent cartoons, plus the video of him giving a similar talk a couple of years ago, when he was twice as large.

And the last keynote was mine and Janet’s. We started it off with the Legend of Super Agile Person, which was even more hilarious than we could have predicted. Thanks again to Stephan, Mary and Pete for their admirable acting skills!

(I have the rest of the talk on video if anyone wants to see it, but it’s not nearly as funny!) Here are Dan Ashby’s sketch notes.

Here are some random files from Agile Testing Days: https://www.dropbox.com/sh/8a2yc2oh6saaqw7/6xUCu9ymtw

I didn't get to "TDD as if you meant it", but I heard good things about it. I also missed Carlos Blé's live coding session. It was really tough to choose between sessions in many of the timeslots.

Vojtěch Barta wrote a great summary of the conference. Oana Juncu’s summary of Agile Testing Days captures the magic of what this community brings together and generates.

You can find my photos from Agile Testing Days.
