Apache Hudi
Apache Hudi (pronounced Hoodie) stands for Hadoop Upserts Deletes and Incrementals.
Hudi manages the storage of large analytical datasets on DFS (Cloud stores, HDFS or any Hadoop FileSystem compatible storage).
Features
- Upsert support with fast, pluggable indexing
- Atomically publish data with rollback support
- Snapshot isolation between writer & queries
- Savepoints for data recovery
- Manages file sizes and layout using statistics
- Async compaction of row & columnar data
- Timeline metadata to track lineage
- Optimize data lake layout with clustering
Hudi supports three types of queries (a spark-shell sketch follows the list):
- Snapshot Query - Provides snapshot queries on real-time data, using a combination of columnar & row-based storage (e.g. Parquet + Avro).
- Incremental Query - Provides a change stream with records inserted or updated after a point in time.
- Read Optimized Query - Provides excellent snapshot query performance via purely columnar storage (e.g. Parquet).
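As a rough sketch (the base path and commit timestamp below are placeholders, not from this document), the query type is selected with the hoodie.datasource.query.type option when reading through the Spark datasource:

// Snapshot query (the default)
val snapshotDF = spark.read.format("hudi").
  option("hoodie.datasource.query.type", "snapshot").
  load("/path/to/table")

// Incremental query: change stream of records written after a given commit time
val incrementalDF = spark.read.format("hudi").
  option("hoodie.datasource.query.type", "incremental").
  option("hoodie.datasource.read.begin.instanttime", "20220101000000").
  load("/path/to/table")

// Read optimized query: scans only the columnar base files
val readOptimizedDF = spark.read.format("hudi").
  option("hoodie.datasource.query.type", "read_optimized").
  load("/path/to/table")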
Learn more about Hudi at https://hudi.apache.org
Building Apache Hudi from source
Prerequisites for building Apache Hudi:
- Unix-like system (like Linux, Mac OS X)
- Java 8 (Java 9 or 10 may work)
- Git
- Maven (>=3.3.1)
# Checkout code and build
git clone https://github.com/apache/hudi.git && cd hudi
mvn clean package -DskipTests
# Start command
spark-2.4.4-bin-hadoop2.7/bin/spark-shell \
--jars `ls packaging/hudi-spark-bundle/target/hudi-spark-bundle_2.11-*.*.*-SNAPSHOT.jar` \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
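Once the shell is up, a minimal write/read round trip makes a quick smoke test. This is an illustrative sketch, not the official quickstart: the table name, local path, and sample schema are made up here, and older releases may require a partition glob on the read path (e.g. load(path + "/*/*")).

// Paste into spark-shell: write a tiny table and read it back
import org.apache.spark.sql.SaveMode
import spark.implicits._  // already in scope inside spark-shell

val df = Seq((1, "2022-01-01", 100L), (2, "2022-01-02", 200L)).toDF("id", "dt", "ts")

df.write.format("hudi").
  option("hoodie.table.name", "hudi_demo").                    // table name (illustrative)
  option("hoodie.datasource.write.recordkey.field", "id").     // record key field
  option("hoodie.datasource.write.partitionpath.field", "dt"). // partition column
  option("hoodie.datasource.write.precombine.field", "ts").    // picks the latest record on upsert
  mode(SaveMode.Overwrite).
  save("file:///tmp/hudi_demo")

spark.read.format("hudi").load("file:///tmp/hudi_demo").show()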
To build the Javadoc for all Java and Scala classes:
# Javadoc generated under target/site/apidocs
mvn clean javadoc:aggregate -Pjavadocs
Build with different Spark versions
The default Spark version supported is 2.4.4. To build for a different Spark version, or for Scala 2.12, use the corresponding Maven profile from the table below:
| Label | Artifact Name for Spark Bundle | Maven Profile Option | Notes |
|---|---|---|---|
| Spark 2.4, Scala 2.11 | hudi-spark2.4-bundle_2.11 | -Pspark2.4 | For Spark 2.4.4, which is the same as the default |
| Spark 2.4, Scala 2.12 | hudi-spark2.4-bundle_2.12 | -Pspark2.4,scala-2.12 | For Spark 2.4.4 and Scala 2.12 |
| Spark 3.1, Scala 2.12 | hudi-spark3.1-bundle_2.12 | -Pspark3.1 | For Spark 3.1.x |
| Spark 3.2, Scala 2.12 | hudi-spark3.2-bundle_2.12 | -Pspark3.2 | For Spark 3.2.x |
| Spark 3, Scala 2.12 | hudi-spark3-bundle_2.12 | -Pspark3 | Same as Spark 3.2, Scala 2.12 |
| Spark, Scala 2.11 | hudi-spark-bundle_2.11 | Default | The default profile, supporting Spark 2.4.4 |
| Spark, Scala 2.12 | hudi-spark-bundle_2.12 | -Pscala-2.12 | The default profile (for Spark 2.4.4) with Scala 2.12 |
For example,
# Build against Spark 3.2.x (the version the generic Spark 3 bundle is built against)
mvn clean package -DskipTests -Pspark3.2
# Build against Spark 3.1.x
mvn clean package -DskipTests -Pspark3.1
# Build against Spark 2.4.4 and Scala 2.12
mvn clean package -DskipTests -Pspark2.4,scala-2.12
What about "spark-avro" module?
Starting from versions 0.11, Hudi no longer requires spark-avro to be specified using --packages
Running Tests
Unit tests can be run with the Maven profile unit-tests.
mvn -Punit-tests test
Functional tests, which are tagged with @Tag("functional"), can be run with the Maven profile functional-tests.
mvn -Pfunctional-tests test
To run tests with Spark event logging enabled, define the Spark event log directory. This allows visualizing the test DAG and stages in the Spark History Server UI.
mvn -Punit-tests test -DSPARK_EVLOG_DIR=/path/for/spark/event/log
Quickstart
Please visit https://hudi.apache.org/docs/quick-start-guide.html to quickly explore Hudi's capabilities using spark-shell.