
6 Regularization Techniques for Deep Learning

One of the most common problems when training a deep neural network is overfitting.

Overfitting occurs when the network learns specific patterns in the training data and is unable to generalize well over new observations.

In this article, we’ll discuss some of the regularization techniques for deep learning that are specifically designed to control overfitting.

These regularization techniques prevent overfitting and help our model to work better on unseen data.

EARLY STOPPING:

The first technique we are going to discuss is early stopping. It is perhaps the simplest regularization strategy.

As the name suggests, in early stopping we stop the training early.

By stopping the training of our model early, we can prevent our model from overfitting.

For instance, our model might keep reducing its loss on the training data while its loss on the validation data keeps increasing. This is a sign of overfitting.

By using the early stopping callback, which is available in Keras, we can monitor specific metrics like validation loss or accuracy. As soon as the chosen metric stops improving for a fixed number of epochs, we are going to stop the training.

If you don’t know what a callback is: callbacks are simply functions that are executed during the training process and return information from the training algorithm. I have written articles explaining some of the commonly used callbacks; to learn more about them, you can read my articles Keras Callbacks and Keras Custom Callbacks.

Below is the signature of the early stopping callback:
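
The callback’s code isn’t reproduced here, but a minimal sketch of how it is typically configured in Keras looks like this (the threshold and patience values are only illustrative, and the commented fit call uses placeholder data):

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop training once the validation loss has not improved
# for 5 consecutive epochs, and roll back to the best weights.
early_stopping = EarlyStopping(
    monitor="val_loss",   # quantity to be monitored
    min_delta=0.001,      # minimum change that counts as an improvement
    patience=5,           # epochs with no improvement before stopping
    restore_best_weights=True,
)

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stopping])
```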

  • monitor: quantity to be monitored
  • min_delta: minimum change in the monitored quantity to qualify as an improvement
  • patience: number of epochs with no improvement after which training will be stopped

INJECT NOISE:

Another common regularization technique is to inject Gaussian noise into the network.

A common approach is to add noise to the input data of the network during the training procedure.

However, in some situations adding noise to the hidden units or to the network weights also leads to improved generalization performance, thus reducing the effect of overfitting.

In Keras, to introduce Gaussian noise into the network, we can use the following layer:
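
As a minimal sketch (the standard deviation of 0.1 is just an example value):

```python
from tensorflow.keras.layers import GaussianNoise

# Adds zero-mean Gaussian noise with the given standard deviation.
# The noise is only applied during training, not at inference time.
noise = GaussianNoise(stddev=0.1)
```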

DROPOUT:

Dropout is an effective way of regularizing a neural network; it can be applied to the output of some of the network layers to avoid overfitting.

The key idea here is to randomly drop some proportion of neurons along with their connections from the neural network during training.

The dropout layer has a parameter called the dropout rate (p), a fraction between 0 and 1.

If n is the number of neurons in the hidden layer and p is the dropout rate, then p * n neurons are dropped and only (1 - p) * n neurons remain active at any given time.

We randomly drop the neurons in the hidden layer based on the dropout rate.

Assume the dropout rate p = 0.5 and that there are 256 neurons in our hidden layer. This means that at any given time only half the neurons will be active, that is, (1 - p) * n = 0.5 * 256 = 128.

For each iteration, a different set of neurons will be dropped out.

To create a Dropout layer in Keras, you can use the following function:
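
As a minimal sketch (the rate of 0.5 is just an example value):

```python
from tensorflow.keras.layers import Dropout

# Randomly zeroes out 50% of the incoming units at each training step;
# the layer does nothing at inference time.
dropout = Dropout(rate=0.5)
```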

DATA AUGMENTATION: 

Neural networks require a lot of data to train, and our model might start to overfit if the training set is too small.

Data augmentation is a regularization technique that aims to combat this by increasing the size of the training set artificially.

Data augmentation depends on the type of data. For some types of data, such as images, it is easy to create artificial examples; for others, such as text, it is much harder.

In the case of image data, we can artificially create new images by slightly rotating, resizing, or skewing the image, flipping it horizontally or vertically, and so on.

But augmenting text data is much harder due to the complexity of language. There is an excellent Medium post by Edward Ma that covers this in detail.

In this article, we’ll see how to augment image data by using Keras. Keras makes it very easy for us to do image augmentation using the ImageDataGenerator class.
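
A minimal sketch of an ImageDataGenerator with a few common augmentation arguments (the specific values are illustrative, not the ones used in the original code):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=15,       # randomly rotate images by up to 15 degrees
    width_shift_range=0.1,   # randomly shift images horizontally
    height_shift_range=0.1,  # randomly shift images vertically
    zoom_range=0.1,          # randomly zoom in on images
    horizontal_flip=True,    # randomly flip images left to right
)
```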

To learn more about these arguments, visit the Keras documentation.

L1 AND L2 REGULARIZATION:

These are, by far, the most common regularization techniques. The basic idea is that during training we impose certain constraints on the model weights, controlling how much the weights can grow or shrink in the network.

We do this by adding another term, called the regularization term, to the cost function.

COST = Loss + Regularization term
The regularization term differs for L1 and L2.

For L1,

COST = LOSS + λ ∑ᵢ |wᵢ|
wᵢ denotes each weight in the network, and the sum runs over all the weights.

λ is a regularization parameter that controls how much weight we give to the regularization term. It is a hyperparameter whose value needs to be tuned for better results.

L1 regularizer minimizes the sum of absolute values of the weights.

The L1 regularizer drives many of the weights to zero, which makes the network depend only on the essential inputs and not on noisy ones.

For L2,

COST = LOSS + λ ∑ᵢ wᵢ²
L2 regularizer minimizes the sum of squared values of the weights.

The L2 regularizer is also known as weight decay, as it forces the weights of the network to decay towards zero, though not exactly to zero as with the L1 regularizer.

The code snippet below demonstrates how we can add L2 regularization to our network.
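
The original snippet isn’t preserved here; a minimal sketch using the kernel_regularizer argument of a Keras layer (the layer size and λ = 0.01 are just example values) looks like this:

```python
from tensorflow.keras import layers, regularizers

# A dense layer whose weights are penalized by an L2 term (lambda = 0.01).
dense = layers.Dense(
    64,
    activation="relu",
    kernel_regularizer=regularizers.l2(0.01),
)
```

Replacing regularizers.l2 with regularizers.l1 gives the L1 penalty instead.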

BATCH NORMALIZATION:

When training deep neural networks, the distribution of each layer’s inputs can change as the parameters of the previous layers change. This phenomenon is known as internal covariate shift.

We can reduce this problem by normalizing the data within each mini-batch, using the batch mean and variance.

Batch normalization normalizes the output of a layer to zero mean and unit variance. In doing so, batch-to-batch shifts in the input distribution have less effect on the network.

Batch normalization also acts as a regularizer that helps prevent the model from overfitting. For this reason, it is sometimes used instead of dropout layers.

To create a batch normalization layer in Keras, you can use the following function:
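
As a minimal sketch:

```python
from tensorflow.keras.layers import BatchNormalization

# Normalizes the previous layer's activations over each mini-batch
# to zero mean and unit variance (with learnable scale and shift).
batch_norm = BatchNormalization()
```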

Now let’s see an example that uses some of these regularization techniques.

Let’s get started.

We’ll start by importing all the necessary modules.
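
The original import block isn’t reproduced here; a plausible set of imports for the sketch that follows, assuming TensorFlow 2.x / Keras and CIFAR-10 as an example dataset (the original repo may use different modules and data), is:

```python
from tensorflow.keras import layers, models, regularizers
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
```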

Next, let’s build our CNN model.
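
The exact architecture from the original tutorial isn’t shown here; below is an illustrative small CNN for 32×32 RGB images that combines Gaussian noise, batch normalization, L2 regularization, and dropout (all layer sizes and hyperparameters are assumptions):

```python
model = models.Sequential([
    # Inject Gaussian noise into the inputs (active only during training).
    layers.GaussianNoise(0.1, input_shape=(32, 32, 3)),

    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),

    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),

    layers.Flatten(),
    # L2 penalty on the dense layer's weights.
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(0.01)),
    # Drop half of the dense units at each training step.
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```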

Now for image augmentation let’s define the image data generator.
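
Again as a sketch (the augmentation settings, dataset, batch size, and patience are example values, not necessarily the original repo’s configuration):

```python
# Illustrative augmentation settings.
datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)

# Load and scale CIFAR-10 as an example dataset.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Early stopping on the validation loss.
early_stopping = EarlyStopping(monitor="val_loss", patience=5,
                               restore_best_weights=True)

# Train on augmented batches, stopping early if the validation loss stalls.
model.fit(datagen.flow(x_train, y_train, batch_size=64),
          validation_data=(x_test, y_test),
          epochs=50,
          callbacks=[early_stopping])
```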

CONCLUSION:

Regularization is a technique that prevents overfitting and helps our model to work better on unseen data.

In this tutorial, we have discussed various regularization techniques for deep learning.

EARLY STOPPING: As the name suggests, in early stopping we stop the training early. By stopping the training of our model early, we can prevent it from overfitting.

INJECT NOISE: In this technique, we’ll add Gaussian noise to the network.

DROPOUT: The key idea here is to randomly drop some proportion of neurons along with their connections from the neural network during training.

DATA AUGMENTATION: In this technique, we increase the size of the training set artificially.

L1 AND L2 REGULARIZER: Imposing certain constraints on the model weights and controlling how much the weights can grow or shrink in the network during training.

BATCH NORMALIZATION: Normalizing the output from a layer with zero mean and unit variance.

The complete code for this tutorial can be found in this GitHub repo.
