News

Paper accepted at SAM 2025

02.10.2025

We are happy to announce that our paper Model-Driven Root Cause Analysis for Trustworthy AI: A Data-and-Model-Centric Explanation Framework (by Emmanuel Charleson Dapaah and Jens Grabowski) has been accepted at the 17th System Analysis and Modelling Conference (SAM 2025). This paper presents a model-driven Root Cause Analysis framework that explains ML pipeline performance by attributing outcomes to interpretable factors spanning both data complexity and model configuration.

Latest TDL specifications published

14.08.2025

ETSI’s Methods for Testing and Specification (MTS) committee has published updated versions of the TDL specifications. They are the outcome of our continued work on the Test Description Language (TDL) within TTF T034 of the European Telecommunications Standards Institute (ETSI).

You can find the complete series of TDL specifications here.

Paper accepted at LOD 2025

25.04.2025

We are happy to announce that our paper Empirical Evidence for Data-Centric AI: A Comparative Study of Data Complexity and Hyperparameter Effects (by Emmanuel Charleson Dapaah and Jens Grabowski) has been accepted at the 11th International Conference on Machine Learning, Optimization, and Data Science (LOD 2025). This paper presents a comprehensive empirical study comparing the relative influence of dataset complexity and hyperparameter settings on the performance of five widely used classification algorithms: Random Forest, Support Vector Machine, Decision Tree, Adaptive Boosting, and Multi-Layer Perceptron. The findings reveal that data-centric factors, especially class overlap (N1), consistently exert a far stronger impact on both bias and variance than hyperparameter settings.

New TTF for TDL Maintenance

03.03.2025
Dr. Philip Makedonski will be part of a new Testing Task Force (TTF) of the European Telecommunications Standards Institute (ETSI) for the maintenance and evolution of the Test Description Language (TDL). Details on the project can be found here: Enhancing TDL and TOP to Meet Users’ New Demands (TTF T045).

Poster accepted at HCII 2025

19.02.2025
We are pleased to announce that a poster from our research group has been accepted at the 27th International Conference on Human-Computer Interaction (HCII 2025). This year’s conference will take place from June 22nd to 27th in Gothenburg, Sweden. Our accepted poster is:
  • Evaluation of Data Sharing in Interaction-Based Emotion Recognition Research by Carina Bieber, Patrick Harms, and Jens Grabowski

ETSI TR 103 910 Published

15.02.2025
We are pleased to announce that ETSI TR 103 910 has been published. Driven by Fraunhofer FOKUS with contributions from our group and other partners, it provides a procedural understanding of testing ML-based systems, including principles and challenges, quality criteria and test items, as well as suitable test methods and their integration into the life cycle of typical ML-based applications.

Journal First Presentation at ICST 2025

16.12.2024
We are happy to announce that we will present the findings of our journal article A new perspective on the competent programmer hypothesis through the reproduction of real faults with repeated mutations at the 18th IEEE International Conference on Software Testing, Verification and Validation (ICST) 2025. In the article, we explore the competent programmer hypothesis by examining the connection between mutation testing and real-world faults. Using mutation chains, we assess how direct this connection is by recreating real faults through repeated mutations. The presentation will be part of the Journal-First Papers track of the conference.

Posters and Presentations accepted at UCAAT 2025

12.12.2024
We are happy to announce that the following presentations and posters with contributions from our group were accepted at the 11th User Conference on Advanced Automated Testing (UCAAT) 2025 organised by the European Telecommunications Standards Institute (ETSI) and hosted by CERTH:
  • Importance of Data Sharing for the Validation of Interaction-based Emotion Recognition for User Experience Evaluation
  • Representativeness of Deep Learning Mutants in Simulating Real Faults
  • Exploring the Landscape of Machine Learning Data Bugs: Frequency, Impacts, and Relationships for Enhanced Automation
  • Generating Test Artifacts using Large Language Models
  • Documentation Approaches for AI/ML Systems
  • ETSI TR103910: Test Methodology and Test Specification for AI-enabled Systems
  • TDL Takes on the Edge
  • Towards a Harmonized Documentation Scheme for Trustworthy AI

Workshop accepted at MUC 2024

30.07.2024

We are pleased to announce that a workshop from our research group has been accepted at the Mensch und Computer 2024 conference. This year’s conference will take place from September 1st to 4th at the Karlsruhe Institute of Technology in Karlsruhe, Germany.

The accepted workshop is:

  • Improving UX Evaluation with AI-based Emotion Recognition by Carina Bieber, Patrick Harms, and Carolin Ebermann

Papers accepted at SAM 2024

30.07.2024

We are pleased to announce that two papers from our research group have been accepted at the 16th System Analysis and Modeling Conference (SAM 2024). This year’s conference will take place on September 23rd and 24th, co-located with the ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems (MODELS 2024) in Linz, Austria.

Our accepted papers are:

  • Exploring the Fundamentals of Mutations in Deep Neural Networks by Zaheed Ahmed and Philip Makedonski
  • AI-based User Emotion Recognition from Interaction Data: Challenges and Guidelines for Training Data Creation by Carina Bieber, Patrick Harms, Dominick Leppich, and Katrin Proschek
