Which file format is commonly used in Hadoop for efficient large-scale data processing?

  • Avro
  • CSV
  • JSON
  • XML
Avro is the correct answer. Its compact binary encoding and built-in schema evolution make it well suited to storing and exchanging data between Hadoop components. Each Avro file embeds its schema (defined in JSON) alongside the data, so readers can decode records without external metadata, and files remain splittable for parallel processing. This makes Avro particularly useful where complex, evolving record structures need to be handled efficiently.
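The short sketch below illustrates the idea using the third-party fastavro Python library (an assumption for illustration; the schema, field names, and file name are hypothetical and not part of the question). It shows a JSON-defined schema, a nullable field with a default value (the mechanism behind schema evolution), and writing/reading a binary Avro container file.

```python
# Minimal sketch, assuming the fastavro package (pip install fastavro).
from fastavro import writer, reader, parse_schema

# Avro schemas are defined in JSON; the schema is embedded in the data
# file, which is what enables schema evolution between writers and readers.
schema = parse_schema({
    "type": "record",
    "name": "ClickEvent",           # illustrative record name
    "fields": [
        {"name": "user_id", "type": "long"},
        {"name": "page", "type": "string"},
        # A union with "null" plus a default lets newer readers handle
        # records written before this field existed (schema evolution).
        {"name": "referrer", "type": ["null", "string"], "default": None},
    ],
})

records = [
    {"user_id": 42, "page": "/home", "referrer": None},
    {"user_id": 7, "page": "/docs", "referrer": "google"},
]

# Write a compact, binary, splittable Avro container file.
with open("clicks.avro", "wb") as out:
    writer(out, schema, records)

# Read it back; the schema stored in the file decodes each record.
with open("clicks.avro", "rb") as inp:
    for rec in reader(inp):
        print(rec)
```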