Which optimization algorithm is commonly used to adjust weights in neural networks based on the gradient of the loss function?

  • Gradient Descent
  • K-Means Clustering
  • Principal Component Analysis
  • Random Forest
The correct answer is 'Gradient Descent.' It is the standard optimization algorithm for adjusting weights in neural networks: at each step it computes the gradient of the loss function with respect to the weights and moves the weights a small step in the opposite direction (w ← w − η·∇L(w), where η is the learning rate), iteratively reducing the loss. The other options are not weight-update algorithms: K-Means Clustering and Principal Component Analysis are unsupervised methods, and Random Forest is an ensemble model.
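For illustration, here is a minimal sketch of the gradient descent update rule applied to a simple linear model with mean-squared-error loss. The data, variable names (X, y, w, lr), learning rate, and step count are assumptions chosen for the example, not part of the original question.

```python
# Minimal gradient descent sketch: fit weights w of a linear model
# to toy data by repeatedly stepping against the gradient of the MSE loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # toy inputs (assumed)
true_w = np.array([2.0, -1.0, 0.5])           # ground-truth weights (assumed)
y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy toy targets

w = np.zeros(3)   # weights to learn
lr = 0.1          # learning rate (step size), an assumed value

for step in range(200):
    y_pred = X @ w                              # forward pass
    grad = 2 * X.T @ (y_pred - y) / len(y)      # gradient of MSE w.r.t. w
    w -= lr * grad                              # update: w <- w - lr * grad

print(w)  # should be close to true_w after training
```

The same update rule underlies neural network training, where the gradient of the loss with respect to every weight is obtained via backpropagation instead of the closed-form expression used above.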