Enhancing Test Models by Incorporating Monitored Usage Information

Steffen Herbold, Patrick Harms, Jens Grabowski

Abstract

Test models for Model-Based Testing (MBT) of realistic industrial systems are very complex. This raises the question of how to validate the test models themselves in order to guarantee that the generated tests are valid. Moreover, if the generated tests shall be automatically executable, the test models must be precise and contain a lot of detailed information, including either a method for the inference of valid test data or a test data repository. In this presentation, we describe an approach that utilizes monitored usage information to enhance test models and, through this, facilitates easier MBT. Our approach is based on the MIDAS DSL, a domain-specific language based on the Unified Modeling Language (UML) and the UML Testing Profile (UTP) for the testing of Service Oriented Architectures (SOAs). Our collection of usage information goes beyond recording traces during testing: we monitor the System Under Test (SUT) within its actual productive environment in order to gain a realistic view of how the system is utilized.

Our first contribution is the validation of the test model itself. Here, we compare the information about the SUT structure in the test model with the information about the SUT structure we find in the usage data. From the observed usage data, we see which operations of which services were called with which data. We can then check whether we find a representation for all services, operations, and data types within the test model. If any are missing, we can warn the test engineer about possibly missing information in the model. This provides quick feedback about modelling mistakes, compared to painstakingly searching for the cause of faults once tests have been generated and executed.

Our second contribution goes one step further. Instead of only validating the test model, we extend it with additional information. Within the observed usage information, we see both valid test data and valid workflows for the SUT execution. By mapping this information to model instances with the same approach we use for the validation of the test models, we can generate test cases and a repository of valid test data. In the case of the MIDAS DSL, this means the automated generation of UML Interactions to represent test cases and UML Instance Specifications to represent test data. This approach mimics the capture phase of capture/replay testing: if the generated test cases are used “as is” for test execution, we implement a full capture/replay approach. The true power of the approach, however, is that test engineers can use the generated UML Interactions as a foundation for manually modelled tests, and can reuse the generated UML Instance Specifications in other automatically or manually created tests. The whole approach is implemented as part of the MIDAS European project and is part of a fully automated testing approach for SOAs that includes the generation of executable TTCN-3 from the test models and the automated execution of the generated tests.
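As a minimal, illustrative sketch of the validation step (not the MIDAS implementation; the trace representation and the names UsageEvent and validate_model are assumptions made for this example), the structural check against observed usage can be pictured in Python as follows:

    from dataclasses import dataclass

    # One monitored service call, as observed in the productive environment.
    @dataclass(frozen=True)
    class UsageEvent:
        service: str    # name of the called service
        operation: str  # operation invoked on that service
        data_type: str  # type of the observed payload
        value: str      # the observed payload itself

    def validate_model(events, model_services):
        # model_services maps a modelled service name to its set of
        # modelled operations; anything observed in the usage data but
        # not modelled yields a warning for the test engineer.
        warnings = []
        for e in events:
            if e.service not in model_services:
                warnings.append(f"service '{e.service}' is missing from the test model")
            elif e.operation not in model_services[e.service]:
                warnings.append(f"operation '{e.operation}' of service "
                                f"'{e.service}' is missing from the test model")
        return sorted(set(warnings))

An analogous comparison covers the data types of the observed payloads.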
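In the same illustrative style (reusing the hypothetical UsageEvent from the sketch above), the capture step that derives test-case skeletons and a test data repository from monitored sessions could look like the following; in the MIDAS DSL the outputs would be UML Interactions and UML Instance Specifications, here simplified to plain Python structures:

    def derive_test_assets(sessions):
        # Each session is the ordered list of UsageEvents observed for
        # one user of the SUT in its productive environment.
        test_cases = []        # one call sequence (workflow) per session
        data_repository = {}   # data type -> set of observed valid values
        for session in sessions:
            # The ordered call sequence becomes a test-case skeleton.
            test_cases.append([(e.service, e.operation) for e in session])
            # Every observed payload is recorded as valid test data.
            for e in session:
                data_repository.setdefault(e.data_type, set()).add(e.value)
        return test_cases, data_repository

Test cases derived this way can be replayed directly or serve as the starting point for manually modelled tests, as described above.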
Document Type: 
Presentations
Howpublished: 
Presented at the 3rd User Conference on Advanced Automated Testing (UCAAT)
Month: 
10
Year: 
2015