Anonymous
Regularization is used to reduce overfitting in machine learning models. Overfitting occurs when a model has effectively memorized the training data: it regresses or classifies the training set very well but fails to generalize to validation, test, or holdout data. This usually happens because gradient descent (or whatever optimizer we use) keeps driving the training error toward zero, fitting noise along with the signal. To counteract this, we add a regularization technique like L1, L2, or dropout. L1 and L2 add a penalty on the weights to the loss, so the gradient updates shrink large weights instead of chasing zero training error; dropout randomly disables units during training so the network cannot rely on any single feature. Penalizing over-training this way makes the model more generalizable across datasets.
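Here's a minimal sketch of the L2 case using NumPy: the penalty term `lam * ||w||^2` adds `2 * lam * w` to the gradient, which shrinks the weights toward zero on every update (the function and variable names are just for illustration).

```python
import numpy as np

def fit(X, y, lam=0.0, lr=0.1, steps=500):
    """Gradient descent on MSE loss plus an L2 penalty of strength lam."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        # Gradient of MSE plus gradient of the L2 penalty lam * ||w||^2
        grad = (2 / n) * X.T @ (X @ w - y) + 2 * lam * w
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=50)

w_plain = fit(X, y, lam=0.0)   # unregularized fit
w_ridge = fit(X, y, lam=1.0)   # L2-regularized (ridge) fit

# The penalty shrinks the weight vector's norm.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_plain))  # True
```

The trade-off is controlled by `lam`: larger values shrink the weights more aggressively, reducing variance at the cost of some bias.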