In the context of Big Data, which system is designed to provide high availability and fault tolerance by replicating data blocks across multiple nodes?

  • Hadoop Distributed File System (HDFS)
  • Apache Kafka
  • Apache Spark
  • NoSQL databases
Answer: Hadoop Distributed File System (HDFS). HDFS is designed for high availability and fault tolerance: each data block is replicated across multiple DataNodes in the cluster (three copies by default, controlled by the dfs.replication setting). If a node fails, the data remains readable from the surviving replicas, and the NameNode schedules re-replication to restore the configured copy count. This block-level replication is a fundamental feature of Hadoop's storage layer.
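
As a minimal sketch of how this is exposed to applications, the snippet below uses Hadoop's Java FileSystem API to read and change a file's replication factor. The path /data/example.txt is a hypothetical placeholder, and the code assumes a reachable cluster whose core-site.xml and hdfs-site.xml are on the classpath.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        // Load cluster settings from the configuration files on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical path; replace with a file that exists in your cluster.
        Path file = new Path("/data/example.txt");

        // Read the replication factor currently recorded by the NameNode.
        short current = fs.getFileStatus(file).getReplication();
        System.out.println("Current replication factor: " + current);

        // Request a higher replication factor; HDFS then copies the file's
        // blocks to additional DataNodes asynchronously.
        fs.setReplication(file, (short) 5);

        fs.close();
    }
}
```

The same adjustment is available from the command line with `hdfs dfs -setrep 5 /data/example.txt`; in both cases the NameNode copies blocks to additional DataNodes in the background until the target replication factor is met.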