Experiences with Automated Field Usability Testing Using Generated Task Models


Web portals are the key communication channels for most businesses today. They range from a simple company presence, through online shops, to integrated platform-as-a-service offerings. The latter are usually hosted by a company to provide its customers with specific functionality, such as an issue tracking system. As business needs change daily, such web portals must adapt flexibly, from small changes of a single aspect up to a full relaunch of a website. Given this required pace, website managers seldom have sufficient time to apply usability engineering methodologies such as user testing or expert-oriented evaluation. Instead, changes are implemented and rolled out to users as fast as possible. If a change causes usability problems, these may not show up immediately; only in the long run may they manifest as decreased conversion rates, disappointed users, or more work for the help desk. In such situations, it is hard or even impossible to determine which of the previous changes caused the issues.

In our work, we developed a methodology for model-based, automated usability engineering of software, including websites and desktop applications. We record the actions that users perform at the level of mouse clicks, text entries, and even individual key presses. From the recorded data, we generate a model representing the typical tasks users perform with the software. Beyond our analyses, these models may serve as input for usage-based test case generation. We then analyse the recorded data and the model for known usability smells: patterns of user behaviour that indicate a potential usability issue. Each smell directly references the involved parts and elements of a website, describes the potential usability issue, and proposes how to remove it. We validated our approach in three case studies and showed that it is capable of providing helpful results.
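To illustrate the idea of detecting a usability smell in a recorded event log, the following sketch flags a hypothetical smell: a text field whose value is re-entered unusually often, hinting that users struggle with the expected input. The event names, the heuristic, and the threshold are illustrative assumptions, not the actual smell catalogue or tooling of the approach.

```python
# Hypothetical sketch: detect a "text input repetition" smell in a
# recorded event log. Event names and thresholds are illustrative.
from collections import Counter

def find_text_repetition_smells(events, threshold=3):
    """Flag fields whose text was re-entered unusually often.

    events: list of (action, target) tuples, e.g. ("text_input", "#email").
    Returns smell descriptions referencing the involved element, the
    potential issue, and a removal proposal, mirroring the smell format
    described above.
    """
    inputs = Counter(target for action, target in events
                     if action == "text_input")
    smells = []
    for target, count in inputs.items():
        if count >= threshold:
            smells.append({
                "element": target,
                "issue": f"text entered {count} times; users may struggle "
                         "with the expected input format",
                "proposal": "clarify the field's label or add inline "
                            "validation",
            })
    return smells

# Example recording: the user re-types into '#email' three times.
log = [
    ("click", "#login"),
    ("text_input", "#email"),
    ("text_input", "#email"),
    ("click", "#submit"),
    ("text_input", "#email"),
    ("click", "#submit"),
]
print(find_text_repetition_smells(log))
```

In the real approach, such heuristics operate on the generated task model rather than on a raw event list, so repeated behaviour is detected across many users instead of within a single session.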
In the presentation, we plan to briefly describe our approach and show some example results. In addition, we will describe the intended use of our approach for a company’s web portal, enabling continuous measurement and assessment of the portal’s usability. Depending on the average number of users per day, first representative analysis results can become available shortly after each iteration cycle of the website. We will also present our work in progress on a cloud platform-as-a-service solution. This platform allows users of our approach and our tooling to perform analyses on a preinstalled and preconfigured infrastructure with a single click. Furthermore, we will show how easily the recording of a website built with state-of-the-art content management systems can be configured using our tooling. We hope to engage in fruitful discussions with potential users of our approach, resulting in valuable feedback.
Document Type: Presentation, presented at the User Conference on Advanced Automated Testing (UCAAT) 2016, Budapest, Hungary


2011 © Software Engineering For Distributed Systems Group