Debugging Overfitting (High Variance) & Underfitting (High Bias) In ML Models


In machine learning, overfitting (high variance) occurs when a model fits the training data very well but performs poorly on the dev/test data. In contrast, underfitting (high bias) occurs when a model fails to fit even the training data well. These are general issues that can degrade the performance of any machine learning model. Addressing them requires finding the right trade-off between the bias and variance of the model: a change that reduces bias typically increases variance, and vice versa. Overfitting and underfitting of machine learning models are a well-studied research area, and the literature offers information on the possible causes (data, hyperparameters, learning algorithm) of an overfitted/underfitted model, as well as best practices on how to address them.
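As a minimal illustration of these symptoms (a sketch for this write-up, not part of the project itself), the snippet below fits two polynomial models with NumPy to noisy samples of sin(x). The degree-1 model underfits (high bias): its error is similarly poor on both the training and validation sets. The degree-15 model overfits (high variance): near-zero training error but a much larger error on held-out data. All variable names, degrees, and data sizes here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small noisy training set and a separate validation set, both drawn from sin(x).
x_train = np.sort(rng.uniform(0.0, 3.0, 20))
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, 20)
x_val = np.sort(rng.uniform(0.0, 3.0, 50))
y_val = np.sin(x_val) + rng.normal(0.0, 0.1, 50)

def mse(coeffs, x, y):
    """Mean squared error of a polynomial model on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Degree-1 model: too simple for sin(x) -> underfits (high bias).
lin = np.polyfit(x_train, y_train, 1)
# Degree-15 model: too flexible for 20 points -> overfits (high variance).
poly = np.polyfit(x_train, y_train, 15)

# Diagnosis: compare training error against validation error for each model.
print(f"degree 1:  train MSE={mse(lin, x_train, y_train):.4f}  "
      f"val MSE={mse(lin, x_val, y_val):.4f}")
print(f"degree 15: train MSE={mse(poly, x_train, y_train):.4f}  "
      f"val MSE={mse(poly, x_val, y_val):.4f}")
```

In practice the same diagnostic applies to any model: a large gap between training and validation error signals high variance, while high error on both sets signals high bias.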

However, existing studies still lack concrete guidance on how to identify the exact root cause of an overfitted/underfitted model, since the issue could result from discrepancies in the dataset, incorrectly configured hyperparameters, or an inappropriate learning algorithm. In this project, we investigate how to measure the effect of each possible root cause on an overfitted/underfitted model using causal analysis and, based on our results, map an overfitted/underfitted model to its exact root cause.

Project Details

March 2023 to March 2024
Project Website: 
2020 © Software Engineering For Distributed Systems Group
