Commit Graph

14 Commits

Author SHA1 Message Date
Danny Chan
e8e6708aea [HUDI-1664] Avro schema inference for Flink SQL table (#2658)
A Flink SQL table has DDL that defines the table schema. We can use that
to infer the Avro schema, so there is no need to declare an Avro schema
explicitly anymore.

But we still keep the config option for an explicit Avro schema in case
there are corner cases where the inferred schema is not correct
(especially for nullability).
2021-03-11 19:45:48 +08:00
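The schema-inference idea above can be sketched as a small type mapping. This is a hypothetical, minimal sketch (the class, the mapping, and the hardcoded nullability are assumptions, not Hudi's actual converter): DDL column types map to Avro primitive types, and nullable columns become a union with `null`, which is exactly the nullability corner case the commit message calls out.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

/** Hypothetical sketch: derive an Avro record schema (as JSON) from DDL column types. */
public class AvroSchemaInference {

    // Minimal SQL -> Avro type mapping; a real converter covers many more types.
    static String avroType(String sqlType) {
        switch (sqlType.toUpperCase()) {
            case "INT":     return "\"int\"";
            case "BIGINT":  return "\"long\"";
            case "DOUBLE":  return "\"double\"";
            case "BOOLEAN": return "\"boolean\"";
            default:        return "\"string\"";
        }
    }

    // A nullable column becomes a union with null -- the nullability
    // corner case where inference can disagree with the intended schema.
    static String fieldSchema(String sqlType, boolean nullable) {
        String t = avroType(sqlType);
        return nullable ? "[\"null\", " + t + "]" : t;
    }

    // Treats every column as nullable for simplicity (an assumption of this sketch).
    static String recordSchema(String name, Map<String, String> columns) {
        String fields = columns.entrySet().stream()
            .map(e -> "{\"name\": \"" + e.getKey() + "\", \"type\": "
                      + fieldSchema(e.getValue(), true) + "}")
            .collect(Collectors.joining(", "));
        return "{\"type\": \"record\", \"name\": \"" + name
               + "\", \"fields\": [" + fields + "]}";
    }

    public static void main(String[] args) {
        Map<String, String> cols = new LinkedHashMap<>();
        cols.put("id", "BIGINT");
        cols.put("name", "STRING");
        System.out.println(recordSchema("t1", cols));
    }
}
```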
Danny Chan
12ff562d2b [HUDI-1678] Row level delete for Flink sink (#2659) 2021-03-11 19:44:06 +08:00
Danny Chan
2fdae6835c [HUDI-1663] Streaming read for Flink MOR table (#2640)
Supports two read modes:
* Read the full data set starting from the latest commit instant, then
  the subsequent incremental data set
* Read the data set starting from a specified commit instant
2021-03-10 22:44:06 +08:00
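The two read modes reduce to one decision: which commit instant to start from. A minimal sketch, assuming a timeline represented as an ordered list of instant strings (the class and method names here are hypothetical, not Hudi's API):

```java
import java.util.List;
import java.util.Optional;

/** Hypothetical sketch of resolving the start instant for a streaming read. */
public class StreamReadStart {

    /**
     * Returns the commit instant to start reading from: the user-specified
     * instant if present (mode 2), otherwise the latest instant on the
     * timeline (mode 1), after which the reader consumes incremental data.
     */
    static String resolveStartInstant(List<String> timeline, Optional<String> specified) {
        return specified.orElseGet(() -> timeline.get(timeline.size() - 1));
    }

    public static void main(String[] args) {
        List<String> timeline = List.of("20210301", "20210305", "20210310");
        // No instant specified: start from the latest commit.
        System.out.println(resolveStartInstant(timeline, Optional.empty()));
        // Instant specified: start from that commit.
        System.out.println(resolveStartInstant(timeline, Optional.of("20210305")));
    }
}
```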
Danny Chan
89003bc780 [HUDI-1647] Supports snapshot read for Flink (#2613) 2021-03-05 08:49:32 +08:00
Danny Chan
7a11de1276 [HUDI-1632] Supports merge on read write mode for Flink writer (#2593)
Also supports async compaction with pluggable strategies.
2021-03-01 12:29:41 +08:00
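"Pluggable strategies" for async compaction suggests a strategy interface that picks which file groups to compact on each run. A minimal sketch under that assumption (the interface and the bounded-I/O example are illustrative, not Hudi's actual strategy API):

```java
import java.util.List;
import java.util.stream.Collectors;

/** Hypothetical sketch of a pluggable compaction strategy. */
public class CompactionStrategies {

    /** A strategy selects which candidate file groups to compact this run. */
    interface CompactionStrategy {
        List<String> select(List<String> candidateFileGroups);
    }

    /** Example strategy: bound work per run by compacting at most n file groups. */
    static CompactionStrategy boundedIo(int n) {
        return candidates -> candidates.stream()
                .limit(n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        CompactionStrategy strategy = boundedIo(2);
        System.out.println(strategy.select(List.of("fg-1", "fg-2", "fg-3")));
    }
}
```

Swapping strategies then only requires passing a different `CompactionStrategy` instance to the compaction scheduler.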
Danny Chan
3ceb1b4c83 [HUDI-1624] The state based index should bootstrap from existing base files (#2581) 2021-02-23 13:37:44 +08:00
lamber-ken
c4bbcb7f0e [HUDI-1621] Gets the parallelism from context when init StreamWriteOperatorCoordinator (#2579) 2021-02-17 20:04:38 +08:00
Danny Chan
5d2491d10c [HUDI-1598] Write as minor batches during one checkpoint interval for the new writer (#2553) 2021-02-17 15:24:50 +08:00
lamber-ken
ff0e3f5669 [HUDI-1612] Fix write test flakiness in StreamWriteITCase (#2567)
2021-02-11 23:37:19 +08:00
Danny Chan
4c5b6923cc [HUDI-1557] Make Flink write pipeline write task scalable (#2506)
This is step 2 of RFC-24:
https://cwiki.apache.org/confluence/display/HUDI/RFC+-+24%3A+Hoodie+Flink+Writer+Proposal

This PR introduces a BucketAssigner that assigns a bucket ID (partition
path & fileID) to each stream record.

There is no need to look up the index and repartition the records in the
downstream pipeline anymore: we decide the write target location before
the write, and each record computes its location when the BucketAssigner
receives it, so the indexing happens in a streaming style.

Computing locations for a whole batch of records at once is
resource-consuming and puts pressure on the engine; a streaming system
should avoid that.
2021-02-06 22:03:52 +08:00
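The per-record assignment described above can be sketched with a tiny state map: a key seen before is routed back to its existing file group, and a new key gets a fresh one. A minimal sketch assuming an in-memory map (the class name, the map-based state, and the fileID format are assumptions of this sketch, not Hudi's implementation, which keeps this state in Flink managed state):

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of per-record bucket assignment (partition path + fileID). */
public class BucketAssignerSketch {

    // Remembers the file group already assigned to a record key, so
    // updates to the same key land in the same file group.
    private final Map<String, String> keyToFileId = new HashMap<>();
    private int nextFileId = 0;

    /** Decide the write target location for one record as it arrives. */
    public String assign(String recordKey, String partitionPath) {
        String fileId = keyToFileId.computeIfAbsent(
                recordKey, k -> "file-" + (nextFileId++));
        return partitionPath + "/" + fileId;
    }
}
```

Because each record is assigned on arrival, no batch-wide index lookup or reshuffle is needed downstream.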
wangxianghu
647e9faf25 [HUDI-1547] CI intermittent failure: TestJsonStringToHoodieRecordMapF… (#2521) 2021-02-04 11:20:01 +08:00
Danny Chan
bc0325f6ea [HUDI-1522] Add a new pipeline for Flink writer (#2430)
2021-01-28 08:53:13 +08:00
vinoth chandar
5e30fc1b2b [MINOR] Disabling problematic tests temporarily to stabilize CI (#2468) 2021-01-20 14:24:34 -08:00
Gary Li
c5e8a024f6 [HUDI-1418] Set up flink client unit test infra (#2281) 2020-12-31 08:57:22 +08:00