While training a deep neural network for a regression task, the model starts to memorize the training data. What's a suitable approach to address this issue?

  • A. Increase the learning rate
  • B. Add more layers to the network
  • C. Apply dropout regularization
  • D. Decrease the batch size
Memorization of the training data is a sign of overfitting. Applying dropout regularization (Option C) is a suitable remedy: during training, dropout randomly deactivates a fraction of units, which discourages the network from relying on specific neurons and improves generalization. Increasing the learning rate (Option A) can cause convergence problems rather than reduce memorization. Adding more layers (Option B) increases model capacity and typically makes overfitting worse. Decreasing the batch size (Option D) changes gradient noise but does not directly address memorization.
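As a minimal sketch of the technique, here is a small regression MLP in PyTorch with `nn.Dropout` applied after each hidden activation. The architecture and hyperparameters (input size, hidden width, dropout probability) are illustrative assumptions, not part of the question:

```python
import torch
import torch.nn as nn

class RegressionMLP(nn.Module):
    """Small regression network with dropout regularization (illustrative)."""

    def __init__(self, in_features=10, hidden=64, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Dropout(p),          # randomly zeroes activations while training
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(hidden, 1),   # single regression output
        )

    def forward(self, x):
        return self.net(x)

model = RegressionMLP()
x = torch.randn(8, 10)

model.train()                # dropout active: units dropped stochastically
y_train_mode = model(x)

model.eval()                 # dropout disabled: deterministic predictions
with torch.no_grad():
    y_eval_mode = model(x)
```

Note the `train()`/`eval()` switch: dropout must be active only during training, and PyTorch handles the activation rescaling automatically so no change is needed at inference time.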