When I joined my first eXtreme Programming (XP) team back in 2000, we all watched The Princess Bride together, because we’d heard it was the “XP movie”. Whenever Buttercup asks for something, Westley replies, “As you wish…” – most hilariously as he is rolling down a steep and rocky hill. With XP, we welcomed changes from our customers, so that was a perfect meme.
What do you wish machine learning would do for you?
I think that artificial intelligence (AI), specifically machine learning (ML), has huge potential to give us more confidence that we’re providing a product that makes our customers happy. There are already lots of test automation tool vendors applying ML to their products for things like visual testing and identifying performance anomalies. Full disclosure: I’m a testing advocate for one of those vendors, mabl. Naturally, the vendors have their own ideas about how best to use ML to make a product their customers will buy. But I think the people using the software ought to have a say. There are probably many ways ML can help us build quality into our products.
ML is a double-edged sword: it can do harm as well as good. If you train it on inappropriate data or use flawed models, you can get biased ML that hurts people. The road to successful applications of ML is paved with both buttercups and Rodents of Unusual Size.
That said, I like to start an improvement effort by thinking about what we’d like to achieve. I did a 3-minute lightning talk at SeleniumConf Chicago a couple of weeks ago. My “talk” consisted of me asking the audience, “What do YOU wish AI/ML would help you with?” Since then, I have received more input via social media. I’d like to summarize what I have so far, and then ask you for your suggestions.
Ideas so far
The SeleniumConf audience shouted out lots of challenges that they wished ML could help them address, among them:
- “Flaky” automated tests – tests that fail intermittently because of timing issues or other mysterious reasons
- Captchas (I presume they meant, testing captchas, not actually filling them in, but hmmm)
- Cross-site scripting (XSS) and other format exploit testing
- Help analyze risk and identify where to focus testing: risky areas not adequately covered by automated regression tests, and which tests are required
- Test case management
- Identifying false negatives & false positives in test results
- Learning how people use the product in production, analyzing usage patterns
- Detect anti-patterns in code
- Distinguish between good and bad UI layout changes
- Visual testing
- Test data management
- Recording Selenium tests to rerun as performance tests
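To make the first of these ideas a bit more concrete: one simple heuristic for spotting flaky tests – a sketch I made up for illustration, not any vendor’s actual algorithm – is to flag tests whose outcome flips between runs of the same code revision, since the code didn’t change but the result did. The input format below is hypothetical:

```python
from collections import defaultdict

def flaky_tests(runs):
    """Flag tests whose outcome differs across runs of the same revision.

    `runs` is a list of (test_name, revision, passed) tuples, e.g. pulled
    from a CI results database (hypothetical input format).
    """
    outcomes = defaultdict(set)  # (test, revision) -> set of observed outcomes
    for test, revision, passed in runs:
        outcomes[(test, revision)].add(passed)
    # A test is suspect if any single revision produced both a pass and a fail.
    return sorted({test for (test, _), seen in outcomes.items() if len(seen) > 1})

runs = [
    ("test_login", "abc123", True),
    ("test_login", "abc123", False),   # same code, different result: flaky
    ("test_search", "abc123", True),
    ("test_search", "def456", False),  # code changed, so not flagged
]
print(flaky_tests(runs))  # ['test_login']
```

A real tool would go further – weighting recent runs, accounting for environment differences, and so on – but even this baseline separates “the code broke” from “the test is unreliable”.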
From Twitter and tester Slack workspaces, I got more ideas. For simplicity, I’m not going to give sources, but some of you may recognize yours. Some of these are related or could build on each other.
- Test results analysis: identifying progressive failure insights, finding potential failures before a test actually fails
- Help you design better checks, for example, if a lot of UI tests use the same API endpoint, use ML to suggest that you might be better off testing through the API instead
- Identify multiple flows to accomplish the same thing and see if any are broken
- Enhance the team’s knowledge of the product they are creating and help prevent bugs
- Analyze the effectiveness of the programming language used
- Detect and predict defect clusters
- Help with error reporting: gather information from logs when errors occur and automatically send data to the customer support team
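As a starting point for the defect-cluster idea, you don’t even need ML: a well-known baseline is to rank files by how often they appear in bug-fix commits, since past defect clusters tend to predict future ones. Here’s a hypothetical sketch (the keyword matching and input format are my own assumptions):

```python
from collections import Counter

BUGFIX_WORDS = ("fix", "bug", "defect")  # crude keyword match (assumption)

def defect_hotspots(commits, top=3):
    """Rank files by how often they are touched by bug-fix commits.

    `commits` is a list of (message, files_changed) pairs, e.g. extracted
    from `git log` (hypothetical input format).
    """
    counts = Counter()
    for message, files in commits:
        if any(word in message.lower() for word in BUGFIX_WORDS):
            counts.update(files)
    return counts.most_common(top)

commits = [
    ("Fix null pointer in checkout", ["cart.py", "checkout.py"]),
    ("Add gift cards", ["cart.py", "gifts.py"]),
    ("Bug: cart total wrong after coupon", ["cart.py"]),
    ("Fix flaky date parsing", ["dates.py"]),
]
print(defect_hotspots(commits))
# cart.py tops the list: it appears in two bug-fix commits
```

An ML version could replace the keyword match with a trained commit classifier and fold in features like code churn and complexity, but the hotspot ranking is the core of it.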
Mike Talks had a great suggestion 🙃:
Can you think of any other ways machine learning could be applied to help your team make your customers’ day a bit better? What are your pain points as you roll down the rocky hill of delivering a change to your customers? I would love to hear more suggestions. I’ll pass them on to my own team, of course, but I’ll also share them here. I can’t wait to see who comes up not only with great ideas, but also perhaps some way to actually implement one.