Most of us know that ML models often overfit to the training data. This can happen when there is too little training data, or when the training data is not representative of the data we expect to apply the model to. Either way, the result is an overly complex model that captures every small intricacy of the training data but fails to generalize to unseen data.
Regularization is a technique to reduce overfitting by building “simpler models” that are more likely to also work well on unseen data.
The most popular forms of regularization for linear regression are Lasso (L1) and Ridge (L2) regularization. Yet another form that is easy to apply and readily available in scikit-learn is Elastic Net regularization, which combines the two penalties and brings the best of both worlds.
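As a quick illustration, here is a minimal sketch of fitting an Elastic Net model with scikit-learn; the synthetic dataset and the `alpha`/`l1_ratio` values below are illustrative choices, not tuned settings:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

# Synthetic regression data: 100 samples, 20 features, only 5 informative.
X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# alpha controls the overall penalty strength; l1_ratio blends the two
# penalties (1.0 = pure Lasso / L1, 0.0 = pure Ridge / L2).
model = ElasticNet(alpha=1.0, l1_ratio=0.5)
model.fit(X_train, y_train)

print("R^2 on held-out data:", model.score(X_test, y_test))
# Like Lasso, the L1 component can drive some coefficients exactly to zero.
print("Zeroed coefficients:", (model.coef_ == 0).sum())
```

Note how `l1_ratio` lets you interpolate between the two penalties: the L1 part encourages sparse coefficients while the L2 part keeps the solution stable when features are correlated.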
The short video above talks about elastic net regularization.