Published in the proceedings of the 8th IEEE International Symposium on Software Reliability Engineering (ISSRE'97), pp. 264-274, Albuquerque, NM, November 1997.
A Study of Effective Regression Testing in Practice

W. Eric Wong, J. R. Horgan, Saul London, Hira Agrawal
Bell Communications Research
445 South Street
Morristown, NJ 07960
Abstract

The purpose of regression testing is to ensure that changes made to software, such as adding new features or modifying existing features, have not adversely affected features of the software that should not change. Regression testing is usually performed by running some, or all, of the test cases created to test modifications in previous versions of the software. Many techniques have been reported on how to select regression tests so that the number of test cases does not grow too large as the software evolves. Our proposed hybrid technique combines modification-, minimization-, and prioritization-based selection using a list of source code changes and the execution traces from test cases run on previous versions. This technique seeks to identify a representative subset of all test cases that may result in different output behavior on the new software version. We report our experience with a tool called ATAC which implements this technique.

Keywords: Regression Testing, Modification-Based Test Selection, Test Set Minimization, Test Set Prioritization
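One ingredient of the hybrid technique named in the abstract, test set minimization, can be illustrated with a greedy set-cover sketch. The test names and coverage sets below are invented for illustration; in the paper, real coverage data comes from ATAC's execution traces.

```python
# Hypothetical sketch of coverage-based test set minimization via
# greedy set cover. Coverage data is invented for illustration.

def minimize(coverage):
    """Select a small subset of tests preserving total coverage.

    coverage: dict mapping test name -> set of covered code units
    (e.g., blocks or decisions). Greedily pick, at each step, the
    test covering the most not-yet-covered units.
    """
    remaining = set().union(*coverage.values())
    selected = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gain = coverage[best] & remaining
        if not gain:
            break
        selected.append(best)
        remaining -= gain
    return selected

coverage = {
    "t1": {"b1", "b2", "b3"},
    "t2": {"b2", "b4"},
    "t3": {"b1", "b4"},
    "t4": {"b5"},
}
print(minimize(coverage))  # ['t1', 't2', 't4']
```

Greedy set cover is not guaranteed to find the smallest covering subset, but it is a standard and effective heuristic for this NP-hard problem.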
1 Introduction
No matter how well conceived and tested before release, software will eventually have to be modified to fix bugs or to respond to changes in user specifications. Regression testing must be conducted to confirm that recent program changes have not adversely affected existing features, and new tests must be conducted to exercise new features. Testers might rerun all test cases generated at earlier stages to ensure that the program behaves as expected. However, as a program evolves the regression test set grows larger, old tests are rarely discarded, and the expense of regression testing grows. Repeating all previous test cases in regression testing after each minor software revision or patch is often impossible due to time and budget constraints. On the other hand, arbitrarily omitting test cases from regression testing is risky for software revalidation. In this paper, we investigate methods for selecting small subsets of effective, fault-revealing regression test cases to revalidate software.

Many techniques have been reported in the literature on how to select regression tests for program revalidation. The goal of some studies [1, 3, 13, 21] is to select every test case on which the new and the old programs produce different outputs, while ignoring the coverage these tests achieve on the modified program. In general, however, this is a difficult, sometimes undecidable, problem. Other techniques [5, 8, 10, 15, 18, 20] emphasize selecting existing test cases that cover modified program components and components that may be affected by the modifications; that is, they use coverage information to guide test selection. They are not concerned with finding test cases on which the original and the modified programs differ. Consequently, these techniques may fail to select existing tests that expose faults in the modified program, and they may include for reexecution test cases that do not distinguish the new program from the old.

In this paper, a combination of both kinds of techniques is used. We first select tests from the regression suite that execute any of the modifications in the old program; we refer to this as a modification-based test selection technique. This includes tests that have to b
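The modification-based selection step just described can be sketched as follows. The execution traces and the changed-line list here are invented for illustration; in the paper they would come from ATAC instrumentation and a source diff between the two program versions.

```python
# Hypothetical sketch of modification-based test selection:
# keep only tests whose execution trace touches a modified
# code unit. All data below is invented for illustration.

def select_tests(traces, modified):
    """Return the tests whose trace covers any modified unit.

    traces: dict mapping test name -> set of (file, line) pairs
    executed by that test on the old version.
    modified: set of (file, line) pairs changed in the new version.
    """
    return [t for t, covered in traces.items() if covered & modified]

traces = {
    "t1": {("main.c", 10), ("main.c", 12)},
    "t2": {("util.c", 5)},
    "t3": {("main.c", 12), ("util.c", 7)},
}
modified = {("main.c", 12)}  # lines changed in the new version
print(select_tests(traces, modified))  # ['t1', 't3']
```

Tests whose traces avoid every modification (here, t2) cannot observe different behavior caused by those modifications, so they are safe to omit under this criterion.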