Overfitting


What does overfitting mean in deep learning?

If a machine learning model is not performing well, the reason is very likely that it is overfitting or underfitting the data. Overfitting is essentially when the model fits the training data far too well. It occurs when the model learns the detail as well as the noise in the training data to the extent that this negatively impacts the model's performance on new data.

The model picks up even the noise and random fluctuations it finds in the training data and learns them as concepts. Since these concepts do not apply to new data, the model's ability to generalize takes a major hit when overfitting occurs.

Overfitting occurs more frequently in nonparametric and nonlinear models because they have greater flexibility when they are learning a target function. Because of that, several nonparametric machine learning algorithms also include parameters or techniques that are used to limit and constrain the amount of detail that the model learns.

Overfitting essentially makes the model relevant only to the dataset on which it was trained and irrelevant to all other datasets.

[Image: overfitting. Source: Corporate Finance Institute]
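
As a quick illustration, the sketch below (assuming scikit-learn is available) fits an unconstrained decision tree, a flexible nonparametric model of the kind mentioned above, and compares its accuracy on the training data with its accuracy on held-out data. The gap between the two is the signature of overfitting.

```python
# A minimal sketch (assuming scikit-learn) of how overfitting shows up:
# an unconstrained decision tree memorizes the training set almost perfectly
# but does noticeably worse on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No depth limit: the tree is free to fit every detail (and the noise).
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("train accuracy:", tree.score(X_train, y_train))  # typically ~1.00
print("test accuracy: ", tree.score(X_test, y_test))    # typically lower
```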

What is underfitting in machine learning?

Underfitting is the situation in which the model can neither model the training data nor generalize to new data. Underfitting is easy to detect when you have a good performance metric. The usual fix is simply to try alternative, more flexible machine learning algorithms.


What is a good fit in machine learning?

A good fit is that perfect sweet spot between overfitting and underfitting. As a machine learning algorithm learns, the error for the model on the training data and the error on the test dataset both decrease.

However, if the model is trained for too long, the error on the training dataset may keep decreasing because the model is overfitting and learning the irrelevant detail and noise in the training dataset. At the same time, the error on the test set begins to rise as the model's ability to generalize decreases.

The good fit (the sweet spot) is the point just before the error on the test dataset begins to increase, where the model has a good level of skill on both the training dataset and the unseen test dataset.
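
The same sweet-spot idea can be seen by sweeping model complexity instead of training time. The sketch below (a made-up example assuming scikit-learn and NumPy) fits polynomial models of increasing degree to noisy data drawn from a sine curve: a low degree underfits, a very high degree overfits, and the degree with the lowest cross-validated error is the good fit.

```python
# A hedged sketch of the under/over-fitting trade-off: as polynomial degree
# grows, validation error first falls and then rises again.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 60).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 60)  # noisy signal

for degree in (1, 4, 15):
    # degree 1 underfits, degree 4 is near the sweet spot, degree 15 overfits
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    val_mse = -cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_squared_error").mean()
    print(f"degree {degree:2d}: cross-validated MSE = {val_mse:.3f}")
```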

How do you know if you are overfitting?

You can only really detect overfitting once you evaluate the model on data it was not trained on. Look at validation metrics such as accuracy and loss: validation accuracy tends to rise until it reaches a point where it stagnates or starts dropping, while validation loss falls and then starts climbing again. Once the model passes the point of good fit, the validation trend turns. That is when you know your model is overfitting.
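
One common way to watch for this in practice is to evaluate the model on a held-out validation set after every epoch. The sketch below is one possible setup (assuming scikit-learn; the dataset, layer sizes, and epoch count are illustrative) that trains one epoch at a time and logs both losses.

```python
# A minimal sketch of detecting overfitting during training: track the loss on
# a held-out validation set after every epoch and watch for the point where it
# stops improving and starts rising.
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           flip_y=0.1, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=1,
                    warm_start=True, random_state=0)

for epoch in range(50):
    clf.fit(X_train, y_train)  # warm_start=True: one more epoch per call
    train = log_loss(y_train, clf.predict_proba(X_train))
    val = log_loss(y_val, clf.predict_proba(X_val))
    # Overfitting shows up when train loss keeps dropping while val loss rises.
    print(f"epoch {epoch:2d}  train loss {train:.3f}  val loss {val:.3f}")
```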


How do you fix overfitting?

There are different things that you can do to handle and fix overfitting (a combined sketch follows this list). These include:

  • Reducing the network’s capacity by getting rid of layers or by cutting down on the number of elements in the hidden layers.
  • Applying regularization, which involves adding a cost to the loss function for large weights.
  • Making use of Dropout layers. These will remove certain features at random by setting them to zero.
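
The sketch below (assuming TensorFlow/Keras, and assuming a binary classification problem with 20 input features purely for illustration) shows how these three fixes can be combined in a single model definition.

```python
# A hedged sketch combining the three fixes above: a smaller network,
# L2 weight regularization, and Dropout layers.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),              # 20 input features (assumed)
    layers.Dense(32, activation="relu",       # reduced capacity: fewer units
                 kernel_regularizer=regularizers.l2(1e-4)),  # penalize large weights
    layers.Dropout(0.5),                      # randomly zero out features
    layers.Dense(16, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```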

How can overfitting be avoided?

Here are some of the ways through which you can avoid overfitting:

Training with more data

Training the model with more data can help you avoid overfitting because it makes it easier for the algorithm to detect the signal and ignore the noise. When you feed more training data into the model, it is no longer possible for the model to memorize all the samples, and it is forced to generalize to obtain results.

You should keep collecting more data to increase the accuracy of your model. However, this can be expensive, so make sure the data you add is relevant and clean.
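
A learning curve makes the effect of extra data visible. The sketch below (assuming scikit-learn) trains the same model on increasing fractions of a dataset and reports training versus validation accuracy; the shrinking gap is the model being forced to generalize.

```python
# A brief sketch of a learning curve: validation accuracy improves as the
# training set grows, narrowing the gap to training accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    DecisionTreeClassifier(random_state=0), X, y, cv=5,
    train_sizes=[0.2, 0.5, 1.0])

for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"{n:4d} samples: train={tr:.3f}  validation={va:.3f}")
```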

Data augmentation

This is more affordable than training the model with more data. The aim is to make the available datasets appear more diverse if you can’t collect more data.

Using data augmentation, you can make sample data look a little different every time the model processes it. This makes every data set appear unique to the model and stops the model from learning the characteristics of the data sets.

Adding noise to the input and output data can have a similar effect. Introducing noise to the input helps the model become more stable without affecting data quality or privacy, while adding it to the output makes the data more diverse. Do this in moderation, though, so that you don't add so much noise that the data becomes incorrect or too different from the original.
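
As a toy illustration of input noise, the sketch below (NumPy only; the noise scale, number of copies, and data are made-up values for illustration) appends noisy copies of the training inputs to the original data.

```python
# A toy sketch of augmenting tabular training data by adding a small amount of
# Gaussian noise to the inputs; tune the noise scale in moderation.
import numpy as np

def augment_with_noise(X, y, copies=2, noise_scale=0.05, seed=0):
    """Return the original data plus `copies` noisy variants of the inputs."""
    rng = np.random.default_rng(seed)
    X_aug = [X] + [X + rng.normal(0, noise_scale, X.shape) for _ in range(copies)]
    y_aug = [y] * (copies + 1)                 # labels are unchanged
    return np.concatenate(X_aug), np.concatenate(y_aug)

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, 100)
X_big, y_big = augment_with_noise(X, y)
print(X_big.shape, y_big.shape)                # (300, 5) (300,)
```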

Data simplification

A model that is too complex can cause overfitting: even with a very large amount of training data, such a model can still manage to overfit it. Data simplification tries to avoid overfitting by reducing that complexity and making the model simpler.

Simplification could involve pruning decision trees, reducing the number of parameters in neural networks, or using dropout in neural networks. This process can also make the model lighter and faster to run.
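
For tree-based models, one concrete form of simplification is cost-complexity pruning. The sketch below (assuming scikit-learn; the alpha values are illustrative) increases the pruning strength and reports how the tree shrinks while test accuracy holds up.

```python
# A minimal sketch of simplification by pruning a decision tree: increasing
# ccp_alpha prunes more of the tree, trading a little training accuracy for a
# simpler model that tends to generalize better.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for alpha in (0.0, 0.01, 0.02):                # example pruning strengths
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
    tree.fit(X_train, y_train)
    print(f"ccp_alpha={alpha:.2f}  leaves={tree.get_n_leaves():3d}  "
          f"train={tree.score(X_train, y_train):.3f}  "
          f"test={tree.score(X_test, y_test):.3f}")
```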

Ensembling

Ensembling is a machine learning technique that involves combining the predictions made by two or more separate machine learning models. Ensembling uses multiple learning algorithms to get better predictive performance than you could obtain from any of the constituent learning algorithms on their own.

The most widely used ensembling methods are boosting and bagging. 

Boosting involves using simple base models to increase their aggregate complexity. It trains a large number of weak learners arranged in a sequence, so that every learner in the sequence learns from the learner preceding it. It essentially combines all the weak learners in the sequence to create one strong learner.

Bagging is also known as bootstrap aggregating. It combines bootstrapping and aggregation to form a single ensemble model. Bagging can be considered the opposite of boosting: it involves training a large number of strong learners arranged in a parallel pattern and then combining them to optimize their predictions.
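
The sketch below (assuming scikit-learn; the estimator counts are illustrative) compares the two styles: a bagging ensemble of decision trees trained in parallel on bootstrap samples, and a gradient boosting ensemble of weak learners trained sequentially.

```python
# A short sketch of the two ensembling styles described above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Bagging: many strong learners (full trees) trained in parallel on bootstrap samples.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                            random_state=0)
# Boosting: many weak learners (shallow trees) trained one after another.
boosting = GradientBoostingClassifier(n_estimators=100, random_state=0)

for name, model in (("bagging", bagging), ("boosting", boosting)):
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```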
