Which type of AI poses hypothetical risks that it might uncontrollably self-improve and harm humanity?

  • AGI (Artificial General Intelligence)
  • Machine Learning AI
  • Narrow AI
  • Superintelligent AI
The correct answer is Superintelligent AI. This refers to hypothetical AI systems that surpass human intelligence across virtually all domains. Such a system could pose the risk of uncontrolled recursive self-improvement, potentially harming humanity if it were not properly aligned with human values and goals.