Predicting issue types with seBERT

Alexander Trautsch, Steffen Herbold

Abstract

Pre-trained transformer models are the current state of the art for natural language processing. seBERT is such a model: it is based on the BERT architecture, but was trained from scratch on software engineering data. We fine-tuned this model for the task of issue type prediction in the NLBSE challenge. Our model outperforms the fastText baseline for all three issue types in both recall and precision, achieving an overall F1-score of 85.7%, an increase of 4.1% over the baseline.
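
The fine-tuning step described above follows the standard recipe of attaching a classification head to a pre-trained BERT-style encoder. A minimal sketch of this with the Hugging Face transformers library is shown below; the checkpoint path "path/to/seBERT" is a placeholder rather than an official model identifier, and the three-label set (bug, enhancement, question) reflects the issue types used in the NLBSE challenge data.

import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

LABELS = ["bug", "enhancement", "question"]

# Placeholder path: the seBERT checkpoint must be obtained separately.
tokenizer = BertTokenizerFast.from_pretrained("path/to/seBERT")
model = BertForSequenceClassification.from_pretrained(
    "path/to/seBERT", num_labels=len(LABELS))

# Toy training example: a single issue text with its gold label.
texts = ["App crashes on startup when the config file is missing."]
labels = torch.tensor([LABELS.index("bug")])
batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

# One fine-tuning step: the model computes the cross-entropy loss internally
# when labels are passed in.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()

# Inference: the argmax over the logits gives the predicted issue type.
model.eval()
with torch.no_grad():
    logits = model(**batch).logits
print(LABELS[logits.argmax(dim=-1).item()])

In practice this step would run over the full challenge training set for several epochs; the sketch only illustrates the mechanics of one update and one prediction.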
Document Type: Article in Conference Proceedings
Booktitle: Proceedings 1st International Workshop on Natural Language-based Software Engineering (NLBSE)
Publisher: ACM
Year: 2022