Which method in transfer learning involves freezing the earlier layers of a pre-trained model and only training the latter layers for the new task?

  • Fine-tuning
  • Knowledge Transfer
  • Feature Extraction
  • Weight Sharing
The method in transfer learning that freezes the earlier layers of a pre-trained model and trains only the latter layers for the new task is fine-tuning. The frozen early layers retain the general features learned on the source task, while the later layers are updated to meet the specific requirements of the target task. This differs from feature extraction, in which all pre-trained layers stay frozen and only a newly added head is trained.
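
As a minimal sketch of this pattern in PyTorch (assuming torchvision's ResNet-18 as the pre-trained backbone; the choice of `layer4` as the unfrozen stage and `num_classes = 10` are illustrative, not prescribed by the question):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pre-trained ResNet-18 (weights learned on ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every parameter first.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze the last residual stage so it can adapt to the new task.
for param in model.layer4.parameters():
    param.requires_grad = True

# Replace the classification head for the new task
# (num_classes = 10 is a hypothetical target-task class count).
num_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optimize only the parameters that still require gradients.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

A low learning rate is typical here so the unfrozen layers do not drift too far from their pre-trained values. If `layer4` were left frozen as well and only the new `fc` head were trained, the same code would instead illustrate feature extraction.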