In a situation where the features in your dataset are at very different scales, which regularization technique would you choose and why?

  • L1 Regularization because of complexity
  • L1 Regularization because of sparsity
  • L2 Regularization because of scalability
  • L2 Regularization because of sensitivity to noise
L2 Regularization (Ridge) is the better choice when features are at very different scales because it shrinks all coefficients toward zero without eliminating any of them, so information from every feature is preserved. It reduces overfitting while still letting the model use all features; in practice you would also standardize the features first so the penalty is applied evenly. A minimal sketch is shown below.
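A minimal sketch, assuming scikit-learn is available: Ridge (L2) regression on two synthetic features with very different scales, with standardization included in the pipeline. The data, the alpha value, and the variable names are illustrative assumptions, not part of the original question.

    # Ridge (L2) regression on features with very different numeric ranges.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n = 200
    # Two features on very different scales (roughly 1e-2 vs 1e3).
    X = np.column_stack([rng.normal(0, 0.01, n), rng.normal(0, 1000.0, n)])
    y = 3.0 * X[:, 0] + 0.002 * X[:, 1] + rng.normal(0, 0.1, n)

    # Standardizing before applying the L2 penalty keeps the penalty from being
    # dominated by whichever feature happens to have the largest numeric range.
    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    model.fit(X, y)
    print(model.named_steps["ridge"].coef_)  # both coefficients are shrunk but remain nonzero

The printed coefficients illustrate the point of the answer: unlike L1, the L2 penalty keeps every feature in the model rather than zeroing some out.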