Tuesday, November 16, 2010

Successful Automation Campaign

Things you must do to be successful in a long term software test automation campaign.
1. Don’t give up manual testing.
2. Get team and management “buy-in.”
3. Test before you build.
4. Identify low-hanging fruit.
5. Make re-usable parts.
6. Generate your tests, don’t write them: automate your automation execution processes.

Don’t give up manual testing:
Automation is great at saving time, but it is terrible at finding bugs. In fact, to get your automation working the first time (and working well enough to trust it), the application usually has to be free of bugs in the area the automated test covers. So for new features and new functionality, you almost always need to test manually before you can create and trust automation to test your app (the exception to this is unit tests).

Automation is also not intelligent. A human can be executing a test to validate a user’s ability to create an account and, while testing, notice a bug that has nothing to do with the test they started: say, dark blue text on a black background, which is hard for humans to read. Automating the same test (to verify the ability to create an account) will not find a usability bug like this unless the creator of the automated script thought to put a validation for color contrast into the test. But the human doesn’t need to be told these things; human testers just “notice” them as “odd,” and humans can digress to chase down an interesting bug and then come back to their original test.

The moral of this section: don’t assume that because you automated your BVTs you will catch the same number of bugs you used to catch when you ran them manually. There are always undocumented tests that people don’t realize they are running in their heads, and these are hard to capture in automation until someone catches a bug and then explicitly creates a script to search for the bad condition. Automation will buy you time so that a human can still do the higher-level thinking and apply intelligent scrutiny to the application under test, without wasting time on mind-numbing repetitive tests.

Get team and management buy-in:
To be successful in any automation campaign you need to take the initiative and start automating, but you will not be successful long term without buy-in from your team and your management.

You don’t need to be a salesperson to do this, but it is selling an idea. And selling is easier with tangible results and an ROI.

Keep a log of how long it takes to do what you do manually and repetitively for a week, a month, or whatever your test cycle is. Then do some preliminary napkin-math projections in a spreadsheet to see how much time (per month or per year) you will spend on these activities if you continue doing them manually. Build into your model the expected growth of your test case library and the expansion of your regression suite as new features become “old features” and newer features are added.

Once you have that model, follow the rest of the guidelines below to start building and using your automation for your most repetitive tasks. Keep a log of the time it takes to execute your automation, plus the time it takes to maintain your automation through several (feature-changing) builds. Add the execution time plus maintenance time per testing cycle into your spreadsheet to project automation side by side against manual testing. The longer your time projection, the more dramatic the savings will be.
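The napkin math above can be sketched in a few lines of Python. Every number here is a hypothetical placeholder; substitute the figures from your own time logs.

```python
# Napkin-math ROI projection for test automation.
# All constants below are hypothetical placeholders -- use your logged figures.

MANUAL_HOURS_PER_CYCLE = 40      # logged time to run the suite by hand
CYCLES_PER_YEAR = 24             # e.g. a two-week test cycle
SUITE_GROWTH_PER_CYCLE = 0.02    # 2% growth as new features become "old features"

AUTOMATION_BUILD_HOURS = 120     # the "I" in ROI: initial scripting investment
AUTOMATED_RUN_HOURS = 2          # machine plus babysitting time per cycle
MAINTENANCE_HOURS_PER_CYCLE = 4  # fixing scripts after feature changes

def cumulative_hours(per_cycle, cycles, growth=0.0, upfront=0.0):
    """Total hours over `cycles`, compounding suite growth each cycle."""
    total = upfront
    cost = per_cycle
    for _ in range(cycles):
        total += cost
        cost *= 1 + growth
    return total

manual = cumulative_hours(MANUAL_HOURS_PER_CYCLE, CYCLES_PER_YEAR,
                          SUITE_GROWTH_PER_CYCLE)
automated = cumulative_hours(AUTOMATED_RUN_HOURS + MAINTENANCE_HOURS_PER_CYCLE,
                             CYCLES_PER_YEAR, SUITE_GROWTH_PER_CYCLE,
                             upfront=AUTOMATION_BUILD_HOURS)

print(f"Manual, one year:    {manual:7.1f} hours")
print(f"Automated, one year: {automated:7.1f} hours")
print(f"Projected savings:   {manual - automated:7.1f} hours")
```

Stretch `CYCLES_PER_YEAR` out to two or three years of cycles and the gap widens further, which is exactly the side-by-side projection to show management.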

Also keep a log of how much time it took to get the initial versions of the tests running for the first time and validated, so that you can trust your results. This is the “I” (investment) part of the “ROI” that you will show in your spreadsheet.

Now depending on your organization you may want to put dollars next to the time savings to show the efficiency you will have created. If you can show efficiency in one area then you can sell the idea of applying the automation model to other areas.

Test before you build … Test-Driven Development:
When a team has taken on an automated testing campaign, the tester’s input in the design phase is crucial to the success and timeliness of the development cycle.

Often in a development organization, testing is an afterthought. The test team is added to the organization after the first beta application is prototyped, when developers no longer have enough time to complete unit testing and still deliver all features on time. But to deliver high-quality software products, a test team is not only necessary; its involvement must be sought during the design phase of a product.

Imagine a flight from Seattle to New York departs and is slightly off course at takeoff. If the pilot examines the position early, a slight correction is needed and the error has negligible fuel impact. Early in the flight the distance between the current path and the correct path is small. If the pilot waits 3 hours into the flight to check position and heading, instead of being just a few miles off course now the flight is hundreds of miles off course. The distance between the correct path and the current path has grown over time and it will cost more fuel to traverse the distance back to the proper course. A correction later in the game is more costly.

In any discipline, once a basic foundation of design is established and features are built upon that foundation, changes to the foundation will ripple throughout the product. If the foundation is tested early on for the possibilities it may need to support, then a correction in design is less costly. An error created during the requirements phase can cost 50 to 200 times more to correct later in a project.

Developers, requirements engineers, business analysts, and program and project managers should seek the advice of testers during the earliest possible phases of any software development project. Here the tester’s role is to test the ideas presented, to see whether they will stand up to the unspoken assumptions of the users and of the team building the product.

Testers should be involved with the developers to help define test code that validates the components before they are combined. The tester should work side by side with the developer to write unit tests, and once the unit tests are functional and verified, they should be run before any build of the product is deployed to a test environment.
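As a minimal sketch of what such a jointly written unit test might look like, here is a Python `unittest` example; the `is_valid_username` rules are invented for illustration, not from any real product.

```python
import unittest

# Hypothetical component under test: a username rule the developer and
# tester agreed on during the design phase (the rules here are invented).
def is_valid_username(name: str) -> bool:
    """3-20 characters, alphanumeric only, must start with a letter."""
    return (3 <= len(name) <= 20
            and name.isalnum()
            and name[0].isalpha())

class UsernameRules(unittest.TestCase):
    # run with: python -m unittest <this_module>
    def test_accepts_typical_name(self):
        self.assertTrue(is_valid_username("alice99"))

    def test_rejects_too_short(self):
        self.assertFalse(is_valid_username("ab"))

    def test_rejects_leading_digit(self):
        self.assertFalse(is_valid_username("9lives"))

    def test_rejects_punctuation(self):
        self.assertFalse(is_valid_username("bob!"))
```

A suite like this runs in milliseconds before every deployment to a test environment, catching foundation-level regressions while they are still cheap.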

Testing early saves time.


Identify low-hanging fruit:
Identify the low-hanging fruit in your application. These are the tests that are repeated most often. Of those, start with the ones with the least complexity of underlying information (no reading from an Excel spreadsheet, no data that goes stale easily) but the most tedious steps (from a human point of view).

Automation is faster than a human when tasks are repetitive. If you are performing regression testing, then you are repeating the actions you took a day, a week, or a month ago (or some measure of your cycle) to validate that existing features, which were not supposed to change, actually did not change. Automating those actions once may mean they can be reused for many future regression-testing cycles. If you have a suite of tests that you run to validate that a build was successful (BVTs), it must be run after every build before in-depth functional testing can begin; automation can increase the speed of execution of these tests and free up the time a tester would have spent conducting them manually. The same can be said for acceptance tests, regression tests, or performance tests, and in some cases functional tests, when the AUT (application under test) is designed with the proper “automation hooks” in place.

Make re-usable parts:
Once you’ve chosen a starting point and a tool, automate several tests from start to finish and then look for common “action groups” which can be componentized. An example of this, common to many web applications, is “log in.” If you are testing an application where the user needs to be logged in for a variety of tests, then creating a re-usable module for “log in” is smart. If your application changes – for example, if the “login” button changes to a “sign-in” button – it’s much easier to modify one “login” function than to do a find-and-replace in every test that logs in to your application. Also, if you know that you will need to log in as several different user types, then perhaps your login function should take a variable like “usertype,” or “user name” and “password,” as an override. Look for other common things to group – navigation to a certain section, or placing the application in a certain known state, are also places where re-usable parts come in handy.
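A sketch of such a re-usable login helper in Python; the locator names, the `TEST_USERS` table, and the driver interface are assumptions for this example, not any particular tool’s API.

```python
# Canned test accounts (placeholders -- substitute your real test accounts).
TEST_USERS = {
    "admin":    ("admin_user",    "admin_pass"),
    "customer": ("customer_user", "customer_pass"),
}

def login(driver, usertype="customer", username=None, password=None):
    """Log in as a canned user type, or override with explicit credentials."""
    default_name, default_pass = TEST_USERS[usertype]
    driver.type_into("username_field", username or default_name)
    driver.type_into("password_field", password or default_pass)
    driver.click("login_button")  # if "login" becomes "sign-in", fix it here once

class FakeDriver:
    """Stand-in driver that just records actions, so this sketch runs anywhere."""
    def __init__(self):
        self.actions = []
    def type_into(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))
```

Every test that needs an authenticated session then calls `login(driver, "admin")` instead of repeating the keystrokes, so a UI change touches one function instead of every script.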

If you are working on a team with others who are automating various features of the same application, then communicate with them. Share your components, and use a source control system. Your whole team can benefit by standing on each other’s shoulders and not re-doing work.

Start small with the most atomic user action groups as functions, then make higher-level functions that are groups of the low-level functions, such as “login_and_goto_MyAccount(usertype)”. Your tests will become easier to read and faster to write.

Generate your tests:
Once you have a substantial library of tests and re-usable parts for the “happy path(s)” of your application under test, you may want to start testing the other permutations of those tests. But maintaining several “very similar” versions of the same test can also produce a maintenance headache, especially if your library of tests is growing large. Use a system to “variablize” the common tests into a simple base test that can run with different configurations to accomplish the various validations you seek.

For example, let’s say you have a website that you are testing, and the site is available in several different languages for different countries. In each of the countries some of the features are the same and some might be different due to laws and regulations. You want to verify that what is supposed to be there in each language and country is actually there and you want to verify that what is not supposed to be there isn’t there either. In this situation it is best to have a simple base test that covers the most common path(s), language and country, and then variablize each validation point within the test to allow you to run the test with different “configurations” and still be able to validate what you expect (when what you expect changes with the configuration).
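One way to sketch this in Python: a single base test whose validation points are driven by a per-country configuration table. The feature matrix below is invented for illustration.

```python
# Per-country configuration: which features should (and should not) appear.
# This matrix is invented; in practice it would live in files or a database.
CONFIGS = {
    "US": {"language": "en", "features": {"gift_cards", "one_click_buy"}},
    "DE": {"language": "de", "features": {"gift_cards"}},  # no one-click: local law
    "FR": {"language": "fr", "features": set()},
}

ALL_FEATURES = {"gift_cards", "one_click_buy"}

def check_homepage(country, visible_features):
    """One base test: expected features appear, forbidden ones do not."""
    expected = CONFIGS[country]["features"]
    assert visible_features == expected, (
        f"{country}: saw {visible_features}, expected {expected}")
    assert visible_features <= ALL_FEATURES  # nothing outside the matrix leaked in

# Run the same base test under every configuration. In a real suite,
# visible_features would be scraped from the live site; here we feed the
# expected set back in so the sketch runs standalone.
for country, cfg in CONFIGS.items():
    check_homepage(country, cfg["features"])
    print(f"{country} ({cfg['language']}): OK")
```

The test logic is written once; only the configuration data grows as countries and languages are added.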

Finally, use a tool (or write code) to generate the test instances you need at execution time from the base tests and configuration files (or configuration data, if you build a database, which is recommended). Make your tool able to create the script file format your test tool consumes, and create some method of bootstrapping the automation tool so that it can pick up the newly generated test, run it, and report when its results are complete.
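A minimal sketch of that generation step, assuming a hypothetical plain-text script format and an invented configuration list; a real system would emit whatever format your test tool consumes.

```python
# Base test template; the {placeholders} get filled per configuration.
BASE_TEST = """\
test: homepage_smoke
country: {country}
language: {language}
validate_features: {features}
"""

# Configuration data (in practice, read from files or a database).
configs = [
    {"country": "US", "language": "en", "features": ["gift_cards"]},
    {"country": "DE", "language": "de", "features": []},
]

def generate_tests(base, configs):
    """Expand one base test into a concrete test instance per configuration."""
    for cfg in configs:
        yield base.format(country=cfg["country"],
                          language=cfg["language"],
                          features=",".join(cfg["features"]))

# At execution time, each generated instance would be written to a file and
# the automation tool bootstrapped to pick it up, run it, and report results.
for i, script in enumerate(generate_tests(BASE_TEST, configs)):
    print(f"--- generated_test_{i} ---")
    print(script)
```

Two base tests and a dozen configurations yield dozens of concrete test instances, with only the base tests and the data to maintain.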

We’ve been successful in multiple projects using this approach and have built our own system to generate advanced permutations of our tests from simple base tests and information about the expectations in varying configurations of the application under test. We call our tool TestCentral.

1 comment:

Ken MIzell said...

Boehm, B.W., and Papaccio, P.N., “Understanding and Controlling Software Costs,” IEEE Transactions on Software Engineering, Vol. 14, No. 10, October 1988, pp. 1462-1477.
McConnell, Steve, “Upstream Decisions, Downstream Costs,” Windows Tech Journal, November 1997.