What is the block size used by HDFS for storing data by default?

  • 128 MB
  • 256 MB
  • 512 MB
  • 64 MB
The default block size used by the Hadoop Distributed File System (HDFS) is 128 MB (earlier Hadoop 1.x releases defaulted to 64 MB). The block size is configurable per cluster or even per file, but 128 MB is the shipped default in modern distributions because it balances NameNode metadata overhead (fewer, larger blocks) against parallelism (enough blocks to spread work across map tasks).
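As a sketch of how the default is overridden, the block size is controlled by the `dfs.blocksize` property in `hdfs-site.xml`; the value below (256 MB, expressed in bytes) is an illustrative choice, not a recommendation:

```xml
<!-- hdfs-site.xml: override the default HDFS block size (value in bytes) -->
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value> <!-- 256 MB = 256 * 1024 * 1024 bytes -->
</property>
```

Larger blocks reduce the number of entries the NameNode must track; smaller blocks increase parallelism for small files at the cost of more metadata.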