In Gradient Boosting, what is adjusted at each step to minimize the residual errors?

  • Learning rate
  • Number of trees
  • Feature importance
  • Maximum depth of trees
In Gradient Boosting, the learning rate (the first option) is the knob adjusted to control how residual errors are minimized. At each boosting step, a new tree is fit to the current residuals, and the learning rate scales how much of that tree's prediction is added to the ensemble. A smaller learning rate makes the model learn more slowly and often leads to better generalization, reducing the risk of overfitting.
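To make the role of the learning rate concrete, here is a minimal sketch of the boosting loop, assuming scikit-learn's DecisionTreeRegressor as the weak learner and synthetic toy data; names like `n_trees` and the specific hyperparameter values are illustrative, not from the original question.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical toy data: a noisy sine wave.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

learning_rate = 0.1                      # scales each tree's contribution
n_trees = 100
prediction = np.full_like(y, y.mean())   # start from a constant baseline

for _ in range(n_trees):
    residuals = y - prediction           # errors the next tree will target
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, residuals)
    # The learning rate shrinks each update, so the ensemble reduces
    # the residuals gradually rather than in one large jump.
    prediction += learning_rate * tree.predict(X)

print("final training MSE:", np.mean((y - prediction) ** 2))
```

Lowering `learning_rate` (and compensating with more trees) typically trades training speed for smoother, better-generalizing fits, which is the trade-off the explanation above describes.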