Automatic Generation of Test Oracles
Date: Tuesday, January 23rd 2018, 13h30
USI Lugano Campus, room SI-003, Informatics building (Via G. Buffi 13)
You are cordially invited to attend the PhD Dissertation Defense of Alberto GOFFI on Tuesday, January 23rd 2018 at 13h30 in room SI-003 (Informatics building)
Software systems play an increasingly important role in our everyday life, and many relevant human activities nowadays involve the execution of a piece of software. To deliver the expected behavior, software has to be reliable, and assessing the quality of software is of primary importance to reduce the risk of runtime errors. Software testing is the most common quality assessment technique for software. Testing consists of running the system under test (SUT) on a finite set of inputs and checking the correctness of the results. Thoroughly testing a software system is expensive and requires substantial manual work to define test inputs (the stimuli used to trigger different software behaviors) and test oracles (the decision procedures that check the correctness of the results).
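The split between test inputs and test oracles can be illustrated with a minimal sketch. The class names and values below are hypothetical; the SUT here is simply `java.util.ArrayDeque` from the standard library.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OracleExample {
    public static void main(String[] args) {
        // Test input: a stimulus that exercises the SUT
        // (here, the LIFO behavior of java.util.ArrayDeque).
        Deque<Integer> deque = new ArrayDeque<>();
        deque.push(1);
        deque.push(2);

        // Test oracle: a decision procedure that judges whether
        // the observed result is correct.
        int observed = deque.pop();
        if (observed != 2) {
            throw new AssertionError("expected 2 (LIFO order), got " + observed);
        }
        System.out.println("test passed");
    }
}
```

Writing the input (the two `push` calls) is mechanical; writing the oracle requires knowing what the correct result should be, which is why oracles are the expensive part to produce by hand.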
Researchers have addressed the cost of testing by proposing techniques to automatically generate test inputs. While the generation of test inputs is well supported, there is no established way to generate cost-effective test oracles: existing techniques to produce test oracles are either too expensive to be applied in practice, or produce oracles with limited effectiveness that can only identify blatant failures such as system crashes.
Our intuition is that cost-effective test oracles can be generated using information produced as a by-product of the normal development activities. The goal of this thesis is to create test oracles that can detect faults leading to semantic and non-trivial errors, and that are characterized by a reasonable generation cost.
We propose two ways to generate test oracles: one derives oracles from software redundancy, the other from the natural-language comments that document the source code of software systems.
We present a technique that exploits redundant sequences of method calls, which encode the software redundancy, to automatically generate test oracles called cross-checking oracles (CCOracles). We describe how CCOracles are automatically generated, deployed, and executed, and we demonstrate their effectiveness by measuring their fault-finding ability when combined with both automatically generated and hand-written test inputs.
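The idea behind a cross-checking oracle can be sketched as follows: run the original sequence of method calls and a redundant sequence that is intended to be equivalent, then check that the two executions agree. This is an illustrative example only, using the well-known redundancy in `java.util.List` between `add(e)` and `add(size(), e)`; it is not the thesis's actual tooling.

```java
import java.util.ArrayList;
import java.util.List;

public class CrossCheckingOracleSketch {
    public static void main(String[] args) {
        // Original sequence of method calls on the SUT.
        List<String> original = new ArrayList<>();
        original.add("a");
        original.add("b");

        // Redundant, intended-equivalent sequence:
        // add(e) should behave like add(size(), e).
        List<String> redundant = new ArrayList<>();
        redundant.add(redundant.size(), "a");
        redundant.add(redundant.size(), "b");

        // Cross-checking oracle: the two executions must produce
        // the same observable state; a divergence signals a fault.
        if (!original.equals(redundant)) {
            throw new AssertionError(
                "cross-check failed: " + original + " vs " + redundant);
        }
        System.out.println("cross-check passed");
    }
}
```

Because the oracle compares two executions of the SUT against each other, it needs no manually specified expected value, which is what makes such oracles cheap to generate.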
We also present Toradocu, a technique that derives executable specifications from the Javadoc comments of Java constructors and methods. From such specifications, Toradocu generates test oracles that are then deployed into existing test suites to check the outputs produced by the test inputs. We empirically evaluate Toradocu, showing that it accurately translates Javadoc comments into procedure specifications, and that Toradocu oracles effectively identify semantic faults in the SUT.
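The kind of oracle derivable from a Javadoc comment can be illustrated with a small, hypothetical example: a `@throws` clause is turned into a check that the documented exception is actually raised. The method and class names below are invented for illustration and do not come from Toradocu itself.

```java
public class JavadocOracleSketch {
    /**
     * Allocates a buffer of the given capacity.
     *
     * @throws IllegalArgumentException if capacity is negative
     */
    static int[] makeBuffer(int capacity) {
        if (capacity < 0) {
            throw new IllegalArgumentException("negative capacity");
        }
        return new int[capacity];
    }

    public static void main(String[] args) {
        // Oracle derived from the @throws clause: calling the method
        // with a negative argument must raise IllegalArgumentException.
        boolean raised = false;
        try {
            makeBuffer(-1);
        } catch (IllegalArgumentException expected) {
            raised = true;
        }
        if (!raised) {
            throw new AssertionError(
                "expected IllegalArgumentException for negative capacity");
        }
        System.out.println("oracle satisfied");
    }
}
```

An oracle of this form can catch semantic faults, such as a method that silently accepts invalid input, rather than only crashes.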
- Prof. Mauro Pezzè, Università della Svizzera italiana, Switzerland (Research Advisor)
- Prof. Antonio Carzaniga, Università della Svizzera italiana, Switzerland (Internal Member)
- Prof. Cesare Pautasso, Università della Svizzera italiana, Switzerland (Internal Member)
- Prof. Thomas Gross, ETH Zurich, Switzerland (External Member)
- Prof. Arie van Deursen, Delft University of Technology, The Netherlands (External Member)