Staying On Top of Test Automation

Tomorrow I’m off to Austin for the Austin Workshop on Test Automation (AWTA). Every year I’ve wanted to attend, but this is the first year I’ve made it. I was especially motivated because this year’s subject is Watir.

We’ve been using Watir for about four years. We have a robust suite of tests, and I think it’s fairly well designed (by a coworker who had Perl experience and a decent OO understanding). We use these tests mainly to help with exploratory testing. They’re quite ‘smart’: they have a lot of logic to cope with whatever they encounter, and each script accepts many run-time parameters that make it flexible in what it tests. This power comes at a cost: maintenance and updates can be time-consuming and difficult. Still, we couldn’t do nearly as much exploratory testing without them (not to mention, we would go crazy from the tedium of manually creating lots of different test scenarios by hand).
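To make the parameter-driven idea concrete, here’s a minimal sketch of that pattern. The option names and URL are hypothetical, not from our actual suite: run-time parameters select the scenario, and defaults keep the script usable with no arguments at all.

```ruby
require 'optparse'

# Defaults keep the script runnable with no arguments.
DEFAULTS = { user_type: 'basic', locale: 'en', retries: 2 }.freeze

# Parse run-time parameters (names here are made up for illustration).
def parse_options(argv)
  options = DEFAULTS.dup
  OptionParser.new do |opts|
    opts.on('--user-type TYPE', 'which kind of account to exercise') { |t| options[:user_type] = t }
    opts.on('--locale LOCALE', 'site language to test') { |l| options[:locale] = l }
    opts.on('--retries N', Integer, 'how persistently to cope with hiccups') { |n| options[:retries] = n }
  end.parse(argv)
  options
end

# One invocation with explicit parameters; unspecified ones fall back to defaults.
options = parse_options(['--user-type', 'admin', '--retries', '3'])

# With Watir, the script would then drive the browser, for example:
#   browser = Watir::Browser.new
#   browser.goto "https://example.test/login?locale=#{options[:locale]}"
# and branch on whatever it finds on the page (the 'smart' coping logic).
puts options.inspect
```

Running the same script with `--user-type basic --locale fr` would exercise a completely different scenario, which is what makes one script cover so many manual-testing permutations.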

We’re a small team and we all have to multi-task to some extent. I spend the bulk of my time working with the customers, turning their examples and requirements into business-facing tests that help drive development, and doing manual exploratory testing around these. We automate 100% of our regression tests at the unit, behind-the-GUI (FitNesse), and GUI (Canoo WebTest and Watir) levels, but automating doesn’t usually take much of our time.

However, because we’re so busy, it’s hard to find big blocks of time to keep up with what’s new in each of the tools we use, upgrade to the latest versions, and take advantage of new features. We’ve started to accumulate technical debt in our Watir scripts: we get sporadic failures when we run the full suite of tests, and we haven’t been able to figure out why (all the tests pass when run individually, and often pass in the suite as well).

I recently spent a lot of time upgrading to the latest Watir version and getting it working on my new Vista machine, and on Windows XP under VMware on my Mac. It was a frustrating and painful experience, and I got a lot of help from my team and from people in my Twitterverse. And I still can’t figure out why tests fail sporadically in the suite.

At the same time, our FitNesse version is way behind, and we aren’t taking advantage of Slim and other new FitNesse features. We’re overdue for an “engineering sprint”, in which we normally get a whole iteration to do things like upgrade our tools and learn new features. It’s hurting us. Recently we’ve had more failures of the build that runs our functional regression tests, and spent more time getting it green again. That time comes out of the whole team’s capacity, since everyone on the team pitches in to address the problems.

So I’m thrilled to be heading off for three days of soaking up new information about how other people use Watir to address their test automation challenges, and hopefully get help with our Watir issues. I expect to have lots of ‘aha’ moments, and perhaps come home with a whole new approach for my team to try.

Even if your team, like ours, has been quite successful with test automation, don’t get complacent. Stay on top of the latest in both the tools you use and the new tools that are out there. Refactor your tests and make sure they’re still meeting your needs. Don’t do what I’ve done lately and just give up running one of your regression suites because you don’t have time to diagnose its problems! (I’m atoning for that now.) Budget time every iteration for the team to make sure your test automation is working for you.
