A data scientist notices that their model performs exceptionally well on the training set but poorly on the validation set. What might be the reason, and what can be a potential solution?
- Data preprocessing is the reason, and fine-tuning hyperparameters can be a potential solution.
- Overfitting is the reason, and regularization techniques can be a potential solution.
- The model is working correctly, and no action is needed.
- Underfitting is the reason, and collecting more data can be a potential solution.
Overfitting occurs when the model learns the training data too closely, including its noise, so it generalizes poorly to unseen data. Regularization techniques such as L1 or L2 regularization mitigate overfitting by adding a penalty on model complexity, helping the model perform better on the validation set.
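The gap between training and validation accuracy can be observed directly. The snippet below is a minimal sketch (assuming scikit-learn is available; the dataset, feature counts, and `C` values are illustrative) that trains a logistic regression with weak and with strong L2 regularization on noisy synthetic data and prints training versus validation scores:

```python
# Minimal sketch: compare weak vs. strong L2 regularization on noisy data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Synthetic data with many uninformative features, which encourages overfitting.
X, y = make_classification(
    n_samples=300, n_features=100, n_informative=10, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# In scikit-learn, smaller C means stronger L2 regularization.
for C in (1000.0, 0.1):
    model = LogisticRegression(penalty="l2", C=C, max_iter=5000)
    model.fit(X_train, y_train)
    print(
        f"C={C}: train={model.score(X_train, y_train):.2f}, "
        f"val={model.score(X_val, y_val):.2f}"
    )
```

With weak regularization the training score is typically much higher than the validation score; increasing the L2 penalty narrows that gap, which is the behavior the correct answer describes.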