- [[L1 regularization]], [[L2 regularization]], [[elastic net regularization]]

# Idea

Regularization makes the coefficients smaller. It reduces [[training accuracy]], but tends to increase [[testing accuracy]].

$\text{loss}_{\text{regularized}} = \text{loss}_{\text{original}} + \text{penalty}_{\text{large coefficients}}$

Because a penalty term is added to the original loss, [[training accuracy]] is reduced (but not necessarily [[testing accuracy]], which tends to increase).

# References

- https://campus.datacamp.com/courses/linear-classifiers-in-python/logistic-regression-3?ex=1

# Figures

- https://campus.datacamp.com/courses/linear-classifiers-in-python/logistic-regression-3?ex=1

![[Pasted image 53.png]]
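The shrinking effect can be sketched in plain Python: fitting logistic regression by gradient descent with and without an L2 penalty on toy data, then comparing the coefficient norms. The data, learning rate, and penalty strength below are illustrative assumptions, not values from the course.

```python
import math
import random

def fit_logistic(X, y, lam, lr=0.1, epochs=2000):
    """Gradient descent on: cross-entropy + lam * ||w||^2 (L2 penalty).
    The bias term b is left unpenalized, as is conventional."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * d
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            z = max(min(z, 30.0), -30.0)      # clamp to avoid overflow in exp
            p = 1.0 / (1.0 + math.exp(-z))    # sigmoid prediction
            err = p - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        for j in range(d):
            # gradient of the penalty term is 2 * lam * w[j]
            w[j] -= lr * (gw[j] / n + 2 * lam * w[j])
        b -= lr * gb / n
    return w, b

# Toy two-class data (illustrative, roughly linearly separable)
random.seed(0)
X = ([[random.gauss(1, 1), random.gauss(1, 1)] for _ in range(50)]
     + [[random.gauss(-1, 1), random.gauss(-1, 1)] for _ in range(50)])
y = [1] * 50 + [0] * 50

w_plain, _ = fit_logistic(X, y, lam=0.0)   # no penalty
w_reg, _ = fit_logistic(X, y, lam=1.0)     # L2-penalized

norm = lambda w: math.sqrt(sum(v * v for v in w))
print(norm(w_plain), norm(w_reg))  # penalized coefficients are smaller
```

With the penalty, the coefficient norm comes out smaller, which is exactly the "makes the coefficients smaller" effect: the penalized fit gives up some training accuracy in exchange for less extreme weights.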