Commit Graph

71 Commits

Author SHA1 Message Date
Gary Li
6353fc865f [HUDI-2218] Fix missing HoodieWriteStat in HoodieCreateHandle (#3341) 2021-07-30 02:36:57 -07:00
Danny Chan
c4e45a0010 [HUDI-2254] Builtin sort operator for flink bulk insert (#3372) 2021-07-30 16:58:11 +08:00
rmahindra123
8fef50e237 [HUDI-2044] Integrate consumers with rocksDB and compression within External Spillable Map (#3318) 2021-07-28 01:31:03 -04:00
Danny Chan
9d2a65a6a6 [HUDI-2209] Bulk insert for flink writer (#3334) 2021-07-27 10:58:23 +08:00
Sivabalan Narayanan
61148c1c43 [HUDI-2176, 2178, 2179] Adding virtual key support to COW table (#3306) 2021-07-26 17:21:04 -04:00
Gary Li
a5638b995b [MINOR] Close log scanner after compaction completed (#3294) 2021-07-26 17:39:13 +08:00
Danny Chan
2370a9facb [HUDI-2204] Add marker files for flink writer (#3316) 2021-07-22 13:34:15 +08:00
Danny Chan
858e84b5b2 [HUDI-2198] Clean and reset the bootstrap events for coordinator when task failover (#3304) 2021-07-21 10:13:05 +08:00
yuzhao.cyz
50c2b76d72 Revert "[HUDI-2087] Support Append only in Flink stream (#3252)"
This reverts commit 783c9cb3
2021-07-16 21:36:27 +08:00
yuzhaojing
783c9cb369 [HUDI-2087] Support Append only in Flink stream (#3252)
Co-authored-by: 喻兆靖 <yuzhaojing@bilibili.com>
2021-07-10 14:49:35 +08:00
vinoth chandar
b4562e86e4 Revert "[HUDI-2087] Support Append only in Flink stream (#3174)" (#3251)
This reverts commit 371526789d.
2021-07-09 11:20:09 -07:00
yuzhaojing
371526789d [HUDI-2087] Support Append only in Flink stream (#3174)
Co-authored-by: 喻兆靖 <yuzhaojing@bilibili.com>
2021-07-09 16:06:32 +08:00
wenningd
d412fb2fe6 [HUDI-89] Add configOption & refactor all configs based on that (#2833)
Co-authored-by: Wenning Ding <wenningd@amazon.com>
2021-06-30 14:26:30 -07:00
yuzhaojing
37b7c65d8a [HUDI-2084] Resend the uncommitted write metadata when start up (#3168)
Co-authored-by: 喻兆靖 <yuzhaojing@bilibili.com>
2021-06-29 08:53:52 +08:00
Danny Chan
aa6342c3c9 [HUDI-2036] Move the compaction plan scheduling out of flink writer coordinator (#3101)
Since HUDI-1955 was fixed, we can move the scheduling out of the
coordinator to make the coordinator more lightweight.
2021-06-18 09:35:09 +08:00
Danny Chan
cb642ceb75 [HUDI-1999] Refresh the base file view cache for WriteProfile (#3067)
Refresh the view to discover new small files.
2021-06-15 08:18:38 -07:00
swuferhong
0c4f2fdc15 [HUDI-1984] Support independent flink hudi compaction function (#3046) 2021-06-13 15:04:46 +08:00
Danny Chan
a6f5fc5967 [HUDI-1986] Skip creating marker files for flink merge handle (#3047) 2021-06-09 14:17:28 +08:00
pengzhiwei
f760ec543e [HUDI-1659] Basic Implement Of Spark Sql Support For Hoodie (#2645)
Main functions:
* Support CREATE TABLE for hoodie.
* Support CTAS.
* Support INSERT for hoodie, including dynamic partition and static partition insert.
* Support MERGE INTO for hoodie.
* Support DELETE.
* Support UPDATE.
Both spark2 & spark3 are supported, based on DataSourceV1.

Main changes:
* Add a sql parser for spark2.
* Add HoodieAnalysis for sql resolution and logical plan rewrite.
* Add command implementations for CREATE TABLE, INSERT, MERGE INTO & CTAS.
In order to push the update & insert logic down to HoodieRecordPayload for MERGE INTO, some changes were made to the HoodieWriteHandle and other related classes:
1. Add the inputSchema for parsing the incoming record. This is needed because the inputSchema for MERGE INTO differs from the writeSchema, as there are transforms in the update & insert expressions.
2. Add WRITE_SCHEMA to HoodieWriteConfig to pass the write schema for merge into.
3. Pass properties to HoodieRecordPayload#getInsertValue to pass the insert expression and table schema.

Verify this pull request
* Add TestCreateTable to test creating hoodie tables and CTAS.
* Add TestInsertTable to test inserting into hoodie tables.
* Add TestMergeIntoTable to test merging into hoodie tables.
* Add TestUpdateTable to test updating hoodie tables.
* Add TestDeleteTable to test deleting from hoodie tables.
* Add TestSqlStatement to test the currently supported ddl/dml.
2021-06-07 23:24:32 -07:00
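Change (3) above — threading properties through to the payload's getInsertValue — can be sketched as follows. This is a hypothetical Python analogue for illustration only; Hudi's real HoodieRecordPayload is a Java interface, and the class and property names below are invented.

```python
# Hypothetical sketch of change (3): the payload's getInsertValue now
# receives extra properties (e.g. MERGE INTO's insert expressions and the
# table schema) so the insert logic can be pushed down into the payload.
# All names here are illustrative, not Hudi's actual API.

class SketchPayload:
    def __init__(self, record):
        self.record = record

    def get_insert_value(self, schema, properties=None):
        # Without properties, behave as a plain insert of the record.
        if not properties:
            return dict(self.record)
        # With properties, apply the transforms carried by the MERGE INTO
        # insert expressions before producing the stored row.
        transforms = properties.get("insert.expressions", {})
        row = dict(self.record)
        for field, fn in transforms.items():
            row[field] = fn(row)
        return row


payload = SketchPayload({"id": 1, "price": 10.0})
row = payload.get_insert_value(
    schema=None,
    properties={"insert.expressions": {"price": lambda r: r["price"] * 1.1}},
)
```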
Danny Chan
7c213f9f26 [HUDI-1917] Remove the metadata sync logic in HoodieFlinkWriteClient#preWrite because it is not thread safe (#2971) 2021-05-21 11:29:54 +08:00
Danny Chan
9b01d2f864 [HUDI-1915] Fix the file id for write data buffer before flushing (#2966) 2021-05-20 10:20:08 +08:00
Danny Chan
8869b3b418 [HUDI-1902] Clean the corrupted files generated by FlinkMergeAndReplaceHandle (#2949)
Make the intermediate files of FlinkMergeAndReplaceHandle hidden; when
committing the instant, clean these files in case there were some
corrupted files left (in the normal case, the intermediate files should be
cleaned by the FlinkMergeAndReplaceHandle itself).
2021-05-14 15:43:37 +08:00
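The cleanup scheme described above can be sketched generically: write to a hidden intermediate file, promote it to its visible name on success, and sweep any leftover hidden files at commit time. A minimal sketch with hypothetical names and file conventions, not Hudi's actual handle code:

```python
import os
import tempfile

def write_intermediate(dirpath, file_id, data):
    # Hidden intermediate file: the leading dot keeps readers from seeing it.
    path = os.path.join(dirpath, "." + file_id + ".inflight")
    with open(path, "w") as f:
        f.write(data)
    return path

def promote(path):
    # On success, rename the hidden intermediate to its final, visible name.
    final = os.path.join(os.path.dirname(path),
                         os.path.basename(path)[1:].replace(".inflight", ""))
    os.rename(path, final)
    return final

def clean_leftovers(dirpath):
    # At commit time, sweep any hidden intermediates a crashed handle left behind.
    removed = []
    for name in os.listdir(dirpath):
        if name.startswith(".") and name.endswith(".inflight"):
            os.remove(os.path.join(dirpath, name))
            removed.append(name)
    return removed

d = tempfile.mkdtemp()
promote(write_intermediate(d, "f1.parquet", "ok"))   # normal path: promoted
write_intermediate(d, "f2.parquet", "bad")           # simulated crash: left behind
leftover = clean_leftovers(d)
```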
Danny Chan
ad77cf42ba [HUDI-1900] Always close the file handle for a flink mini-batch write (#2943)
Close the file handle eagerly to avoid corrupted files as much as
possible.
2021-05-14 10:25:18 +08:00
Danny Chan
b98c9ab439 [HUDI-1895] Close the file handles gracefully for flink write function to avoid corrupted files (#2938) 2021-05-12 18:44:10 +08:00
TeRS-K
be9db2c4f5 [HUDI-1055] Remove hardcoded parquet in tests (#2740)
* Remove hardcoded parquet in tests
* Use DataFileUtils.getInstance
* Renaming DataFileUtils to BaseFileUtils

Co-authored-by: Vinoth Chandar <vinoth@apache.org>
2021-05-11 10:01:45 -07:00
Danny Chan
42ec7e30d7 [HUDI-1890] FlinkCreateHandle and FlinkAppendHandle canWrite should always return true (#2933)
The #canWrite method should always return true because these handles can
keep writing regardless of file size, which is already controlled
elsewhere, e.g. by the BucketAssigner.
2021-05-11 09:14:51 +08:00
Danny Chan
c1b331bcff [HUDI-1886] Avoid generating corrupted files for flink sink (#2929) 2021-05-10 10:43:03 +08:00
Danny Chan
bfbf993cbe [HUDI-1878] Add max memory option for flink writer task (#2920)
Also removes the rate limiter because it has similar functionality, and
modifies the create and merge handles to clean the retry files automatically.
2021-05-08 14:27:56 +08:00
Danny Chan
528f4ca988 [HUDI-1880] Support streaming read with compaction and cleaning (#2921) 2021-05-07 20:04:35 +08:00
Danny Chan
dab5114f16 [HUDI-1804] Continue to write when Flink write task restart because of container killing (#2843)
The `FlinkMergeHandle` creates a marker file under the metadata path
each time it initializes; when a write task restarts after being killed,
it tries to create the already-existing file and reports an error.

To solve this problem, skip the creation and use the original data file
as the base file to merge.
2021-04-19 19:43:41 +08:00
Danny Chan
b6d949b48a [HUDI-1801] FlinkMergeHandle rolling over may miss to rename the latest file handle (#2831)
The FlinkMergeHandle may rename the (N-1)-th file handle instead of the
latest one, thus causing data duplication.
2021-04-16 11:40:53 +08:00
Danny Chan
ab4a7b0b4a [HUDI-1788] Insert overwrite (table) for Flink writer (#2808)
Supports `INSERT OVERWRITE` and `INSERT OVERWRITE TABLE` for Flink
writer.
2021-04-14 10:23:37 +08:00
Danny Chan
9c369c607d [HUDI-1757] Assigns the buckets by record key for Flink writer (#2757)
Currently we assign the buckets by record partition path, which could
cause hotspots if the partition field is of datetime type. Change to
assign buckets by grouping the records by their key first; the assignment
is valid only if there is no conflict (two tasks writing to the same bucket).

This patch also changes the coordinator execution to be asynchronous.
2021-04-06 19:06:41 +08:00
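The hotspot problem and fix described above can be illustrated with a small sketch: when a shuffle is keyed by partition path, every record in a hot datetime partition lands on the same task, whereas keying by record key spreads the load. This is illustrative only, not Hudi's code; the 4-way parallelism is an arbitrary assumption.

```python
# Sketch of the hotspot described above. Keying the shuffle by record key
# spreads load across tasks even when most records share one datetime
# partition. Illustrative only, not Hudi's actual assignment logic.
def task_for(value, parallelism=4):
    return hash(value) % parallelism

# One "hot" datetime partition holding 100 records with distinct keys.
records = [("2021-04-06", f"key-{i}") for i in range(100)]

# Partition-path keying: every record lands on the same task (hotspot).
by_partition = {task_for(p) for p, _ in records}

# Record-key keying: records spread across multiple tasks.
by_key = {task_for(k) for _, k in records}
```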
Roc Marshal
94a5e72f16 [HUDI-1737][hudi-client] Code Cleanup: Extract common method in HoodieCreateHandle & FlinkCreateHandle (#2745) 2021-04-02 11:39:05 +08:00
Danny Chan
9804662bc8 [HUDI-1738] Emit deletes for flink MOR table streaming read (#2742)
Currently we do a soft delete for DELETE row data when writing into a
hoodie table. For streaming read of a MOR table, the Flink reader detects
the delete records and still emits them, so the record key semantics are
kept.

This is useful, and actually a must, for incremental computation in a
streaming ETL pipeline.
2021-04-01 15:25:31 +08:00
vinoyang
fe16d0de7c [MINOR] Delete useless UpsertPartitioner for flink integration (#2746) 2021-03-31 16:36:42 +08:00
Sebastian Bernauer
aa0da72c59 Preparation for Avro update (#2650) 2021-03-30 21:50:17 -07:00
Danny Chan
d415d45416 [HUDI-1729] Asynchronous Hive sync and commits cleaning for Flink writer (#2732) 2021-03-29 10:47:29 +08:00
Shen Hong
ecbd389a3f [HUDI-1478] Introduce HoodieBloomIndex to hudi-java-client (#2608) 2021-03-28 20:28:40 +08:00
n3nash
74241947c1 [HUDI-845] Added locking capability to allow multiple writers (#2374)
* [HUDI-845] Added locking capability to allow multiple writers
1. Added LockProvider API for pluggable lock methodologies
2. Added Resolution Strategy API to allow for pluggable conflict resolution
3. Added TableService client API to schedule table services
4. Added Transaction Manager for wrapping actions within transactions
2021-03-16 16:43:53 -07:00
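The pluggable locking described in this commit can be sketched with a minimal analogue: an abstract lock provider, one trivial implementation, and a transaction wrapper. The names below are hypothetical Python stand-ins, not Hudi's actual Java LockProvider or transaction manager APIs.

```python
import threading
from abc import ABC, abstractmethod

class LockProvider(ABC):
    """Pluggable locking API sketch: real implementations might back onto
    ZooKeeper, Hive Metastore, etc. Illustrative only, not Hudi's interface."""

    @abstractmethod
    def try_lock(self, timeout_sec):
        ...

    @abstractmethod
    def unlock(self):
        ...

class InProcessLockProvider(LockProvider):
    """Simplest possible provider: a process-local lock, only meaningful
    for tests or a single driver process."""

    def __init__(self):
        self._lock = threading.Lock()

    def try_lock(self, timeout_sec):
        return self._lock.acquire(timeout=timeout_sec)

    def unlock(self):
        self._lock.release()

def with_transaction(provider, action):
    # Transaction-manager sketch: wrap a writer action in the table lock so
    # two concurrent writers cannot commit at the same time.
    if not provider.try_lock(timeout_sec=5):
        raise RuntimeError("could not acquire table lock")
    try:
        return action()
    finally:
        provider.unlock()

provider = InProcessLockProvider()
result = with_transaction(provider, lambda: "committed")
```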
Danny Chan
20786ab8a2 [HUDI-1681] Support object storage for Flink writer (#2662)
In order to support object storage, we need these changes:

* Use the Hadoop filesystem so that we can find the plugin filesystem
* Do not fetch file size until the file handle is closed
* Do not close the opened filesystem because we want to use the
  filesystem cache
2021-03-12 16:39:24 +08:00
Shen Hong
8b9dea4ad9 [HUDI-1673] Replace scala.Tuple2 with Pair in FlinkHoodieBloomIndex (#2642) 2021-03-08 14:30:34 +08:00
Danny Chan
7a11de1276 [HUDI-1632] Supports merge on read write mode for Flink writer (#2593)
Also supports async compaction with pluggable strategies.
2021-03-01 12:29:41 +08:00
Danny Chan
97864a48c1 [HUDI-1637] Avoid to rename for bucket update when there is only one flush action during a checkpoint (#2599)
Some object storages do not have strong read-after-write consistency;
we should aim to remove the rename operations in the future.
2021-02-25 10:21:27 +08:00
n3nash
ffcfb58bac [HUDI-1486] Remove inline inflight rollback in hoodie writer (#2359)
1. Refactor rollback and move cleaning failed commits logic into cleaner
2. Introduce hoodie heartbeat to ascertain failed commits
3. Fix test cases
2021-02-19 20:12:22 -08:00
Sivabalan Narayanan
c9fcf964b2 [HUDI-1315] Adding builder for HoodieTableMetaClient initialization (#2534) 2021-02-20 09:54:26 +08:00
Danny Chan
5d2491d10c [HUDI-1598] Write as minor batches during one checkpoint interval for the new writer (#2553) 2021-02-17 15:24:50 +08:00
Danny Chan
4c5b6923cc [HUDI-1557] Make Flink write pipeline write task scalable (#2506)
This is step 2 of RFC-24:
https://cwiki.apache.org/confluence/display/HUDI/RFC+-+24%3A+Hoodie+Flink+Writer+Proposal

This PR introduces a BucketAssigner that assigns a bucket ID (partition
path & fileID) to each stream record.

There is no need to look up the index and partition the records in the
downstream pipeline anymore; we decide the write target location before
the write, and each record computes its location when the BucketAssigner
receives it. Thus, the indexing is streaming style.

Computing locations for a whole batch of records at a time is resource
consuming and puts pressure on the engine; we should avoid that in a
streaming system.
2021-02-06 22:03:52 +08:00
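The per-record assignment idea can be sketched roughly as follows: each record is routed to a (partition path, fileID) the moment it arrives, with a new file group started once the current one is "full". The class name is borrowed from the commit, but the logic and the record-count threshold are invented for illustration; the real BucketAssigner sizes file groups by bytes and small-file heuristics.

```python
# Rough sketch of per-record bucket assignment: each record gets a
# (partition path, fileID) as it arrives, instead of index-looking-up a
# whole batch. The record-count threshold below is a stand-in for the real
# file-size-based policy; this is not Hudi's actual code.
import uuid

class BucketAssigner:
    def __init__(self, max_records_per_file=2):
        self.max_records = max_records_per_file
        self.open_buckets = {}   # partition path -> (file_id, record_count)

    def assign(self, partition_path):
        file_id, count = self.open_buckets.get(partition_path, (None, 0))
        if file_id is None or count >= self.max_records:
            # Start a new file group once the current one is "full".
            file_id, count = uuid.uuid4().hex[:8], 0
        self.open_buckets[partition_path] = (file_id, count + 1)
        return partition_path, file_id

assigner = BucketAssigner(max_records_per_file=2)
locs = [assigner.assign("2021/02/06") for _ in range(3)]
```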
wangxianghu
d74d8e2084 [HUDI-1335] Introduce FlinkHoodieSimpleIndex to hudi-flink-client (#2271) 2021-02-03 08:59:49 +08:00
Danny Chan
bc0325f6ea [HUDI-1522] Add a new pipeline for Flink writer (#2430)
* [HUDI-1522] Add a new pipeline for Flink writer
2021-01-28 08:53:13 +08:00