On the validity of pre-trained transformers for natural language processing in the software engineering domain

Julian von der Mosel, Alexander Trautsch, Steffen Herbold

Abstract

Transformers are the current state-of-the-art of natural language processing in many domains and are gaining traction within software engineering research as well. Such models are pre-trained on large amounts of data, usually from the general domain. However, we only have a limited understanding regarding the validity of transformers within the software engineering domain, i.e., how well such models understand words and sentences within a software engineering context and how this improves the state-of-the-art. Within this article, we shed light on this complex but crucial issue. We compare BERT transformer models trained with software engineering data with transformers based on general domain data in multiple dimensions: their vocabulary, their ability to understand which words are missing, and their performance in classification tasks. Our results show that for tasks that require understanding of the software engineering context, pre-training with software engineering data is valuable, while general domain models are sufficient for general language understanding, also within the software engineering domain.
Document Type: Journal Article
Publisher: IEEE
Journal: Transactions on Software Engineering
Year: 2022
DOI: 10.1109/TSE.2022.3178469
