What is the potential consequence of deploying a non-interpretable machine learning model in a critical sector, such as medical diagnosis?

  • Inability to explain decisions
  • Improved accuracy
  • Faster decision-making
  • Better generalization
Deploying a non-interpretable model leads to a lack of transparency: clinicians cannot see how or why the model reached a specific diagnosis, so errors are hard to detect and decisions are hard to justify. In a critical sector such as medicine, this opacity is a serious risk.
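To make the contrast concrete, here is a minimal sketch of what "interpretable" can mean in practice: a logistic-regression-style scorer whose per-feature weights are visible, so every prediction comes with an itemized explanation. The feature names and weights are purely illustrative assumptions, not clinical values; a black-box model would return only the probability, with no such breakdown.

```python
import math

# Illustrative weights for a hypothetical diagnostic score.
# These values are made up for demonstration, not clinical use.
WEIGHTS = {"age": 0.04, "blood_pressure": 0.02, "biomarker_x": 1.5}
BIAS = -6.0

def predict_with_explanation(patient):
    # Each feature's contribution to the raw score is explicit,
    # which is exactly what a non-interpretable model cannot offer.
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))
    return probability, contributions

patient = {"age": 65, "blood_pressure": 140, "biomarker_x": 2.0}
prob, why = predict_with_explanation(patient)
print(f"risk = {prob:.2f}")
for feature, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contrib:+.2f}")
```

With an interpretable model like this, a clinician can verify that the prediction rests on plausible factors (here, the biomarker dominates); with a non-interpretable one, that audit is impossible.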