Commit Graph

134 Commits

Author SHA1 Message Date
Ethan Guo
23dca6c237 [HUDI-2268] Add upgrade and downgrade to and from 0.9.0 (#3470)
- Added upgrade and downgrade steps to and from 0.9.0. The upgrade adds a few table properties; the downgrade recreates timeline-server-based marker files if any.
2021-08-14 20:20:23 -04:00
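The upgrade flow above can be sketched as a stepwise migration over the table properties; the property names and steps below are illustrative assumptions, not Hudi's actual upgrade code:

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical sketch of a stepwise table-version upgrade.
// Property names and step contents are illustrative, not Hudi's actual API.
public class VersionMigrator {
    // Each step upgrades the table properties from version N to N+1.
    static final Map<Integer, Consumer<Map<String, String>>> UPGRADE_STEPS = Map.of(
        0, props -> props.put("hoodie.table.version", "1"),
        1, props -> { // e.g. a 0.9.0-style upgrade that adds a few table properties
            props.put("hoodie.table.version", "2");
            props.putIfAbsent("hoodie.table.recordkey.fields", "uuid");
        }
    );

    // Apply every step between the current version and the target, in order.
    static void upgrade(Map<String, String> props, int from, int to) {
        for (int v = from; v < to; v++) {
            UPGRADE_STEPS.get(v).accept(props);
        }
    }
}
```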
Ethan Guo
9056c68744 [HUDI-2305] Add MARKERS.type and fix marker-based rollback (#3472)
- Rollback infers the marker strategy from how the markers were written, and rolls back the directory structure accordingly. The "write markers type" write config determines the marker strategy only for new writes.
2021-08-14 08:18:49 -04:00
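The rollback behaviour above can be sketched as inferring the marker type from what was recorded alongside the markers, falling back to the legacy direct layout when no type file exists; the names below are illustrative, not Hudi's actual classes:

```java
// Hypothetical sketch: pick the rollback strategy from the marker type
// recorded with the markers, not from the current write config.
public class MarkerRollback {
    enum MarkerType { DIRECT, TIMELINE_SERVER_BASED }

    // typeFileContent is the content of a MARKERS.type-style file,
    // or null when no such file exists (markers from older writers).
    static MarkerType inferType(String typeFileContent) {
        if (typeFileContent == null) {
            // Legacy layout: one marker file per data file, scan the directory.
            return MarkerType.DIRECT;
        }
        return MarkerType.valueOf(typeFileContent.trim());
    }
}
```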
Prashant Wason
8eed440694 [HUDI-2119] Ensure the rolled-back instant was previously synced to the Metadata Table when syncing a Rollback Instant. (#3210)
* [HUDI-2119] Ensure the rolled-back instant was previously synced to the Metadata Table when syncing a Rollback Instant.

If the rolled-back instant was synced to the Metadata Table, a corresponding deltacommit with the same timestamp should have been created on the Metadata Table timeline. To ensure we can always perform this check, the Metadata Table instants should not be archived until their corresponding instants are present in the dataset timeline. But ensuring this requires a large number of instants to be kept on the metadata table.

In this change, the metadata table will keep at least the number of instants that the main dataset is keeping. If the instant being rolled back was before the metadata table timeline, the code will throw an exception and the metadata table will have to be re-bootstrapped. This should be a very rare occurrence, and should occur only when the dataset is being repaired by rolling back multiple commits or restoring to a much older time.

* Fixed checkstyle

* Improvements from review comments.

Fixed checkstyle
Replaced explicit null check with Option.ofNullable
Removed redundant function getSynedInstantTime

* Renamed getSyncedInstantTime and getSyncedInstantTimeForReader.

Sync is confusing so renamed to getUpdateTime() and getReaderTime().

* Removed getReaderTime, which is only used for testing, as the same method can be accessed differently during testing without making it part of the public interface.

* Fix compilation error

* Reverting changes to HoodieMetadataFileSystemView

Co-authored-by: Vinoth Chandar <vinoth@apache.org>
2021-08-13 21:23:34 -07:00
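The check described in this commit can be sketched as follows: a rollback can only be synced if the instant being rolled back is not older than everything the metadata table still retains on its timeline; the class and method names here are illustrative assumptions:

```java
import java.util.List;

// Hypothetical sketch of the rollback-sync check described above.
public class RollbackSyncCheck {
    // metadataInstants: deltacommit timestamps on the metadata-table timeline, ascending.
    // rolledBack: timestamp of the instant being rolled back on the dataset.
    static boolean wasSynced(List<String> metadataInstants, String rolledBack) {
        if (metadataInstants.isEmpty() || rolledBack.compareTo(metadataInstants.get(0)) < 0) {
            // Older than the earliest retained instant: the check cannot be
            // performed, and the metadata table would have to be re-bootstrapped.
            throw new IllegalStateException("Instant " + rolledBack + " is before the metadata timeline");
        }
        // Synced iff a deltacommit with the same timestamp exists.
        return metadataInstants.contains(rolledBack);
    }
}
```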
Ethan Guo
4783176554 [HUDI-1138] Add timeline-server-based marker file strategy for improving marker-related latency (#3233)
- Can be enabled for cloud stores like S3. Not supported for HDFS yet, due to partial write failures.
2021-08-11 11:48:13 -04:00
swuferhong
21db6d7a84 [HUDI-1771] Propagate CDC format for hoodie (#3285) 2021-08-10 20:23:23 +08:00
Danny Chan
20feb1a897 [HUDI-2278] Use INT64 timestamp with precision 3 for flink parquet writer (#3414) 2021-08-06 11:06:21 +08:00
Danny Chan
b7586a5632 [HUDI-2274] Allows INSERT duplicates for Flink MOR table (#3403) 2021-08-06 10:30:52 +08:00
yuzhaojing
b8b9d6db83 [HUDI-2087] Support Append only in Flink stream (#3390)
Co-authored-by: 喻兆靖 <yuzhaojing@bilibili.com>
2021-08-04 17:53:20 +08:00
Danny Chan
02331fc223 [HUDI-2258] Metadata table for flink (#3381) 2021-08-04 10:54:55 +08:00
Gary Li
6353fc865f [HUDI-2218] Fix missing HoodieWriteStat in HoodieCreateHandle (#3341) 2021-07-30 02:36:57 -07:00
Danny Chan
c4e45a0010 [HUDI-2254] Builtin sort operator for flink bulk insert (#3372) 2021-07-30 16:58:11 +08:00
rmahindra123
8fef50e237 [HUDI-2044] Integrate consumers with rocksDB and compression within External Spillable Map (#3318) 2021-07-28 01:31:03 -04:00
Danny Chan
9d2a65a6a6 [HUDI-2209] Bulk insert for flink writer (#3334) 2021-07-27 10:58:23 +08:00
Sivabalan Narayanan
61148c1c43 [HUDI-2176, 2178, 2179] Adding virtual key support to COW table (#3306) 2021-07-26 17:21:04 -04:00
Gary Li
a5638b995b [MINOR] Close log scanner after compaction completed (#3294) 2021-07-26 17:39:13 +08:00
Danny Chan
2370a9facb [HUDI-2204] Add marker files for flink writer (#3316) 2021-07-22 13:34:15 +08:00
Danny Chan
858e84b5b2 [HUDI-2198] Clean and reset the bootstrap events for coordinator when task failover (#3304) 2021-07-21 10:13:05 +08:00
yuzhao.cyz
50c2b76d72 Revert "[HUDI-2087] Support Append only in Flink stream (#3252)"
This reverts commit 783c9cb3
2021-07-16 21:36:27 +08:00
yuzhaojing
783c9cb369 [HUDI-2087] Support Append only in Flink stream (#3252)
Co-authored-by: 喻兆靖 <yuzhaojing@bilibili.com>
2021-07-10 14:49:35 +08:00
vinoth chandar
b4562e86e4 Revert "[HUDI-2087] Support Append only in Flink stream (#3174)" (#3251)
This reverts commit 371526789d.
2021-07-09 11:20:09 -07:00
yuzhaojing
371526789d [HUDI-2087] Support Append only in Flink stream (#3174)
Co-authored-by: 喻兆靖 <yuzhaojing@bilibili.com>
2021-07-09 16:06:32 +08:00
wenningd
d412fb2fe6 [HUDI-89] Add configOption & refactor all configs based on that (#2833)
Co-authored-by: Wenning Ding <wenningd@amazon.com>
2021-06-30 14:26:30 -07:00
yuzhaojing
37b7c65d8a [HUDI-2084] Resend the uncommitted write metadata when start up (#3168)
Co-authored-by: 喻兆靖 <yuzhaojing@bilibili.com>
2021-06-29 08:53:52 +08:00
Danny Chan
aa6342c3c9 [HUDI-2036] Move the compaction plan scheduling out of flink writer coordinator (#3101)
Since HUDI-1955 was fixed, we can move the scheduling out of the
coordinator to make the coordinator more lightweight.
2021-06-18 09:35:09 +08:00
Danny Chan
cb642ceb75 [HUDI-1999] Refresh the base file view cache for WriteProfile (#3067)
Refresh the view to discover new small files.
2021-06-15 08:18:38 -07:00
swuferhong
0c4f2fdc15 [HUDI-1984] Support independent flink hudi compaction function (#3046) 2021-06-13 15:04:46 +08:00
Danny Chan
a6f5fc5967 [HUDI-1986] Skip creating marker files for flink merge handle (#3047) 2021-06-09 14:17:28 +08:00
pengzhiwei
f760ec543e [HUDI-1659] Basic Implement Of Spark Sql Support For Hoodie (#2645)
Main functions:
Support CREATE TABLE for hoodie.
Support CTAS.
Support INSERT for hoodie, including dynamic and static partition inserts.
Support MERGE INTO for hoodie.
Support DELETE.
Support UPDATE.
Both spark2 & spark3 are supported, based on DataSourceV1.

Main changes:
Add a sql parser for spark2.
Add HoodieAnalysis for sql resolution and logical plan rewrites.
Add command implementations for CREATE TABLE, INSERT, MERGE INTO & CTAS.
In order to push the update & insert logic down to the HoodieRecordPayload for MergeInto, I made some changes to the HoodieWriteHandler and other related classes:
1. Added the inputSchema for parsing the incoming record. This is because the inputSchema for MergeInto differs from the writeSchema, as there are some transforms in the update & insert expressions.
2. Added WRITE_SCHEMA to HoodieWriteConfig to pass the write schema for merge into.
3. Passed properties to HoodieRecordPayload#getInsertValue to pass the insert expression and table schema.


Verify this pull request
Add TestCreateTable to test creating hoodie tables and CTAS.
Add TestInsertTable to test inserting into hoodie tables.
Add TestMergeIntoTable to test merging into hoodie tables.
Add TestUpdateTable to test updating hoodie tables.
Add TestDeleteTable to test deleting from hoodie tables.
Add TestSqlStatement to test the currently supported DDL/DML.
2021-06-07 23:24:32 -07:00
Danny Chan
7c213f9f26 [HUDI-1917] Remove the metadata sync logic in HoodieFlinkWriteClient#preWrite because it is not thread safe (#2971) 2021-05-21 11:29:54 +08:00
Danny Chan
9b01d2f864 [HUDI-1915] Fix the file id for write data buffer before flushing (#2966) 2021-05-20 10:20:08 +08:00
Danny Chan
8869b3b418 [HUDI-1902] Clean the corrupted files generated by FlinkMergeAndReplaceHandle (#2949)
Make the intermediate files of FlinkMergeAndReplaceHandle hidden; when
committing the instant, clean these files in case some corrupted files
were left behind (in the normal case, the intermediate files are cleaned
up by the FlinkMergeAndReplaceHandle itself).
2021-05-14 15:43:37 +08:00
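The cleanup scheme above can be sketched as writing to a hidden ("." prefixed) name and sweeping any hidden leftovers at commit time; this is a simplified in-memory model with illustrative names, not Hudi's actual file handling:

```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical sketch: intermediate files are hidden while being written,
// and any that survive a crash are swept when the instant is committed.
public class IntermediateFileCleaner {
    static String intermediateName(String fileName) {
        return "." + fileName; // hidden while being written
    }

    // On commit, keep only the finalized (non-hidden) files; the hidden
    // leftovers are treated as potentially corrupted and dropped.
    static List<String> sweepOnCommit(List<String> filesInPartition) {
        return filesInPartition.stream()
            .filter(f -> !f.startsWith("."))
            .collect(Collectors.toList());
    }
}
```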
Danny Chan
ad77cf42ba [HUDI-1900] Always close the file handle for a flink mini-batch write (#2943)
Close the file handle eagerly to avoid corrupted files as much as
possible.
2021-05-14 10:25:18 +08:00
Danny Chan
b98c9ab439 [HUDI-1895] Close the file handles gracefully for flink write function to avoid corrupted files (#2938) 2021-05-12 18:44:10 +08:00
TeRS-K
be9db2c4f5 [HUDI-1055] Remove hardcoded parquet in tests (#2740)
* Remove hardcoded parquet in tests
* Use DataFileUtils.getInstance
* Renaming DataFileUtils to BaseFileUtils

Co-authored-by: Vinoth Chandar <vinoth@apache.org>
2021-05-11 10:01:45 -07:00
Danny Chan
42ec7e30d7 [HUDI-1890] FlinkCreateHandle and FlinkAppendHandle canWrite should always return true (#2933)
The #canWrite method should always return true, because the file size is
already controlled elsewhere, e.g. by the BucketAssigner.
2021-05-11 09:14:51 +08:00
Danny Chan
c1b331bcff [HUDI-1886] Avoid generating corrupted files for flink sink (#2929) 2021-05-10 10:43:03 +08:00
Danny Chan
bfbf993cbe [HUDI-1878] Add max memory option for flink writer task (#2920)
Also removes the rate limiter because it has similar functionality, and
modifies the create and merge handles to clean the retry files automatically.
2021-05-08 14:27:56 +08:00
Danny Chan
528f4ca988 [HUDI-1880] Support streaming read with compaction and cleaning (#2921) 2021-05-07 20:04:35 +08:00
Danny Chan
dab5114f16 [HUDI-1804] Continue to write when Flink write task restart because of container killing (#2843)
The `FlinkMergeHandle` creates a marker file under the metadata path
each time it initializes; when a write task restarts after being killed,
it tries to create the already-existing file and reports an error.

To solve this problem, skip the creation and use the original data file
as the base file to merge.
2021-04-19 19:43:41 +08:00
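The fix above amounts to making marker creation idempotent, so a restarted task detects the existing marker instead of failing; this sketch models markers as an in-memory set with illustrative names, not the actual Flink/Hudi classes:

```java
import java.util.*;

// Hypothetical sketch: idempotent marker creation for restartable write tasks.
public class MarkerCreator {
    private final Set<String> existingMarkers = new HashSet<>();

    // Returns true if the marker was newly created, false if it already
    // existed (i.e. the task is restarting after being killed); the caller
    // then reuses the original data file as the base file to merge.
    boolean createIfAbsent(String markerPath) {
        return existingMarkers.add(markerPath);
    }
}
```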
Danny Chan
b6d949b48a [HUDI-1801] FlinkMergeHandle rolling over may fail to rename the latest file handle (#2831)
The FlinkMergeHandle may rename the (N-1)-th file handle instead of the
latest one, thus causing data duplication.
2021-04-16 11:40:53 +08:00
Danny Chan
ab4a7b0b4a [HUDI-1788] Insert overwrite (table) for Flink writer (#2808)
Supports `INSERT OVERWRITE` and `INSERT OVERWRITE TABLE` for Flink
writer.
2021-04-14 10:23:37 +08:00
Danny Chan
9c369c607d [HUDI-1757] Assigns the buckets by record key for Flink writer (#2757)
Currently we assign the buckets by record partition path, which can
cause hotspots if the partition field is a datetime type. This changes
the assignment to group records by their key first; the assignment is
valid only if there is no conflict (two tasks writing to the same bucket).

This patch also changes the coordinator execution to be asynchronous.
2021-04-06 19:06:41 +08:00
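The key-based assignment described in this commit can be sketched as a deterministic hash of the record key, so records with the same key always land in the same bucket regardless of how hot their partition is; the class name and hashing choice are illustrative assumptions:

```java
// Hypothetical sketch: assign buckets by record key instead of partition
// path, so a hot datetime partition no longer funnels every record to one task.
public class KeyBucketAssigner {
    // Deterministic: the same key always maps to the same bucket, which keeps
    // the assignment conflict-free (no two tasks write one bucket's file group).
    static int bucketFor(String recordKey, int numBuckets) {
        return Math.floorMod(recordKey.hashCode(), numBuckets);
    }
}
```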
Roc Marshal
94a5e72f16 [HUDI-1737][hudi-client] Code Cleanup: Extract common method in HoodieCreateHandle & FlinkCreateHandle (#2745) 2021-04-02 11:39:05 +08:00
Danny Chan
9804662bc8 [HUDI-1738] Emit deletes for flink MOR table streaming read (#2742)
Currently we do a soft delete for DELETE row data when writing into a
hoodie table. For streaming reads of a MOR table, the Flink reader
detects the delete records and still emits them, as long as the record
key semantics are kept.

This is useful, and actually a must, for incremental computation in
streaming ETL pipelines.
2021-04-01 15:25:31 +08:00
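The reader behaviour above can be sketched as follows: when merging log records for a streaming read, a record flagged as deleted is emitted as a delete change (Flink's "-D" row kind) instead of being silently dropped. The class and encoding below are a simplified model, not the actual Flink/Hudi reader:

```java
import java.util.*;

// Hypothetical sketch: emit deletes as "-D" changes during a streaming merge.
public class StreamingMerger {
    // logRecords maps record key -> isDelete flag from the log block;
    // LinkedHashMap preserves log order for the emitted changes.
    static List<String> merge(LinkedHashMap<String, Boolean> logRecords) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : logRecords.entrySet()) {
            // "+I" = insert/upsert, "-D" = delete, echoing Flink's RowKind.
            out.add((e.getValue() ? "-D:" : "+I:") + e.getKey());
        }
        return out;
    }
}
```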
vinoyang
fe16d0de7c [MINOR] Delete useless UpsertPartitioner for flink integration (#2746) 2021-03-31 16:36:42 +08:00
Sebastian Bernauer
aa0da72c59 Preparation for Avro update (#2650) 2021-03-30 21:50:17 -07:00
Danny Chan
d415d45416 [HUDI-1729] Asynchronous Hive sync and commits cleaning for Flink writer (#2732) 2021-03-29 10:47:29 +08:00
Shen Hong
ecbd389a3f [HUDI-1478] Introduce HoodieBloomIndex to hudi-java-client (#2608) 2021-03-28 20:28:40 +08:00
garyli1019
6e803e08b1 Moving to 0.9.0-SNAPSHOT on master branch. 2021-03-24 21:37:14 +08:00
n3nash
74241947c1 [HUDI-845] Added locking capability to allow multiple writers (#2374)
* [HUDI-845] Added locking capability to allow multiple writers
1. Added LockProvider API for pluggable lock methodologies
2. Added Resolution Strategy API to allow for pluggable conflict resolution
3. Added TableService client API to schedule table services
4. Added Transaction Manager for wrapping actions within transactions
2021-03-16 16:43:53 -07:00
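The transaction manager described in point 4 can be sketched as wrapping an action between lock acquisition and release on a pluggable lock provider; the interface below is illustrative, not Hudi's actual LockProvider API:

```java
// Hypothetical sketch: a transaction manager that runs an action inside a
// pluggable lock, in the spirit of the multi-writer support above.
public class TransactionManager {
    interface LockProvider {
        void lock();
        void unlock();
    }

    private final LockProvider lockProvider;

    TransactionManager(LockProvider lockProvider) {
        this.lockProvider = lockProvider;
    }

    // Runs the action inside the lock; the lock is always released,
    // even if the action throws.
    <T> T inTransaction(java.util.function.Supplier<T> action) {
        lockProvider.lock();
        try {
            return action.get();
        } finally {
            lockProvider.unlock();
        }
    }
}
```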