Danny Chan 7f8630cc57 [HUDI-4167] Remove the timeline refresh with initializing hoodie table (#5716)
The timeline refresh on table initialization invokes the fs view #sync, which currently performs two actions:

1. reload the timeline of the fs view, so that the next fs view request is based on this timeline metadata
2. if this is a local fs view, clear all the local states; if this is a remote fs view, send a request to sync the remote fs view

But consider the construction path: the meta client is instantiated freshly, so its timeline is already the latest; the table is also constructed freshly, so the fs view has no local states. That means the #sync is entirely unnecessary.

With this patch, the metadata lifecycle and the dataset fs view are kept in sync: when the fs view is refreshed, the underlying metadata is refreshed synchronously. The freshness of the metadata follows the same rules as the data fs view:

1. if the fs view is local, visibility is based on the table metadata client's latest commit
2. if the fs view is remote, the timeline server #syncs the fs view and the metadata together based on its lagging local timeline

From the client's perspective, there is no need to care about the refresh action anymore, whether or not the metadata table is enabled. That makes the client logic clearer and less error-prone.

Removing the timeline refresh has another benefit: it avoids unnecessary #refresh calls on the remote fs view. If all clients sent requests to #sync the remote fs view, the server would encounter conflicts and the clients would receive response errors.
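
To make the construction argument concrete, here is a minimal Scala sketch (assuming a Hudi bundle on the classpath; the table path is hypothetical) using the HoodieTableMetaClient builder: a freshly built meta client already carries the latest timeline, so an extra #sync adds nothing.

import org.apache.hadoop.conf.Configuration
import org.apache.hudi.common.table.HoodieTableMetaClient

// Freshly instantiate the meta client, as table initialization does;
// the active timeline is loaded as part of construction, so it is
// already the latest and needs no separate refresh.
val metaClient = HoodieTableMetaClient.builder()
  .setConf(new Configuration())
  .setBasePath("/tmp/hudi_table") // hypothetical table path
  .build()

// A fs view built on this fresh meta client likewise starts with no
// local states, which is why the initialization #sync could be removed.
val timeline = metaClient.getActiveTimeline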

Apache Hudi

Apache Hudi (pronounced Hoodie) stands for Hadoop Upserts Deletes and Incrementals. Hudi manages the storage of large analytical datasets on DFS (Cloud stores, HDFS or any Hadoop FileSystem compatible storage).

https://hudi.apache.org/

Features

  • Upsert support with fast, pluggable indexing
  • Atomically publish data with rollback support
  • Snapshot isolation between writer & queries
  • Savepoints for data recovery
  • Manages file sizes, layout using statistics
  • Async compaction of row & columnar data
  • Timeline metadata to track lineage
  • Optimize data lake layout with clustering

Hudi supports three types of queries (see the Spark read sketch after this list):

  • Snapshot Query - Provides snapshot queries on real-time data, using a combination of columnar & row-based storage (e.g. Parquet + Avro).
  • Incremental Query - Provides a change stream with records inserted or updated after a point in time.
  • Read Optimized Query - Provides excellent snapshot query performance via purely columnar storage (e.g. Parquet).
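
As a hedged illustration (not part of the original README), the three query types can be issued through the Spark datasource via the hoodie.datasource.query.type option; spark below is an existing SparkSession (e.g. from spark-shell), and the table path and instant time are hypothetical:

val basePath = "/tmp/hudi_trips" // hypothetical table path

// Snapshot query: the default, merges base and log files on read
val snapshotDF = spark.read.format("hudi").load(basePath)

// Incremental query: records inserted or updated after a commit instant
val incrementalDF = spark.read.format("hudi")
  .option("hoodie.datasource.query.type", "incremental")
  .option("hoodie.datasource.read.begin.instanttime", "20220101000000") // hypothetical instant
  .load(basePath)

// Read optimized query: serves only the columnar base files
val readOptimizedDF = spark.read.format("hudi")
  .option("hoodie.datasource.query.type", "read_optimized")
  .load(basePath)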

Learn more about Hudi at https://hudi.apache.org

Building Apache Hudi from source

Prerequisites for building Apache Hudi:

  • Unix-like system (like Linux, Mac OS X)
  • Java 8 (Java 9 or 10 may work)
  • Git
  • Maven (>=3.3.1)
# Checkout code and build
git clone https://github.com/apache/hudi.git && cd hudi
mvn clean package -DskipTests

# Start command
spark-2.4.4-bin-hadoop2.7/bin/spark-shell \
  --jars `ls packaging/hudi-spark-bundle/target/hudi-spark-bundle_2.11-*.*.*-SNAPSHOT.jar` \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'

To build code that includes hudi-integ-test-bundle (for integration tests), use -Dintegration-tests.

To build the Javadoc for all Java and Scala classes:

# Javadoc generated under target/site/apidocs
mvn clean javadoc:aggregate -Pjavadocs

Build with different Spark versions

The default Spark version supported is 2.4.4. Refer to the table below for building with different Spark and Scala versions.

| Maven build options      | Expected Spark bundle jar name               | Notes                                            |
|--------------------------|----------------------------------------------|--------------------------------------------------|
| (empty)                  | hudi-spark-bundle_2.11 (legacy bundle name)  | For Spark 2.4.4 and Scala 2.11 (default options) |
| -Dspark2.4               | hudi-spark2.4-bundle_2.11                    | For Spark 2.4.4 and Scala 2.11 (same as default) |
| -Dspark2.4 -Dscala-2.12  | hudi-spark2.4-bundle_2.12                    | For Spark 2.4.4 and Scala 2.12                   |
| -Dspark3.1 -Dscala-2.12  | hudi-spark3.1-bundle_2.12                    | For Spark 3.1.x and Scala 2.12                   |
| -Dspark3.2 -Dscala-2.12  | hudi-spark3.2-bundle_2.12                    | For Spark 3.2.x and Scala 2.12                   |
| -Dspark3                 | hudi-spark3-bundle_2.12 (legacy bundle name) | For Spark 3.2.x and Scala 2.12                   |
| -Dscala-2.12             | hudi-spark-bundle_2.12 (legacy bundle name)  | For Spark 2.4.4 and Scala 2.12                   |

For example,

# Build against Spark 3.2.x
mvn clean package -DskipTests -Dspark3.2 -Dscala-2.12

# Build against Spark 3.1.x
mvn clean package -DskipTests -Dspark3.1 -Dscala-2.12

# Build against Spark 2.4.4 and Scala 2.12
mvn clean package -DskipTests -Dspark2.4 -Dscala-2.12

What about "spark-avro" module?

Starting from version 0.11, Hudi no longer requires spark-avro to be specified with --packages.

Build with different Flink versions

The default Flink version supported is 1.14. Refer to the table below for building with different Flink and Scala versions.

| Maven build options      | Expected Flink bundle jar name | Notes                                           |
|--------------------------|--------------------------------|-------------------------------------------------|
| (empty)                  | hudi-flink1.14-bundle_2.11     | For Flink 1.14 and Scala 2.11 (default options) |
| -Dflink1.14              | hudi-flink1.14-bundle_2.11     | For Flink 1.14 and Scala 2.11 (same as default) |
| -Dflink1.14 -Dscala-2.12 | hudi-flink1.14-bundle_2.12     | For Flink 1.14 and Scala 2.12                   |
| -Dflink1.13              | hudi-flink1.13-bundle_2.11     | For Flink 1.13 and Scala 2.11                   |
| -Dflink1.13 -Dscala-2.12 | hudi-flink1.13-bundle_2.12     | For Flink 1.13 and Scala 2.12                   |

Running Tests

Unit tests can be run with maven profile unit-tests.

mvn -Punit-tests test

Functional tests, which are tagged with @Tag("functional"), can be run with maven profile functional-tests.

mvn -Pfunctional-tests test

To run tests with Spark event logging enabled, define the Spark event log directory. This allows visualizing the test DAG and stages using the Spark History Server UI.

mvn -Punit-tests test -DSPARK_EVLOG_DIR=/path/for/spark/event/log

Quickstart

Please visit https://hudi.apache.org/docs/quick-start-guide.html to quickly explore Hudi's capabilities using spark-shell.
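
As a small, hedged taste of what the guide covers, the sketch below writes and reads a table from spark-shell; the table name, path, and field names are illustrative only:

import spark.implicits._

val basePath = "/tmp/hudi_quickstart" // hypothetical table path

// Write a tiny dataset as a Hudi table (copy-on-write by default).
val df = Seq((1, "a", 1000L), (2, "b", 1001L)).toDF("uuid", "name", "ts")
df.write.format("hudi")
  .option("hoodie.table.name", "quickstart")
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.partitionpath.field", "name")
  .option("hoodie.datasource.write.precombine.field", "ts")
  .mode("overwrite")
  .save(basePath)

// Read it back with a snapshot query.
spark.read.format("hudi").load(basePath).show()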
