Wei
7865da1e15
[MINOR] Fix wrong Javadoc references ( #3115 )
2021-06-18 21:51:54 -07:00
Danny Chan
aa6342c3c9
[HUDI-2036] Move the compaction plan scheduling out of flink writer coordinator ( #3101 )
...
Since HUDI-1955 was fixed, we can move the scheduling out of the
coordinator to make the coordinator more lightweight.
2021-06-18 09:35:09 +08:00
yuzhaojing
f97dd25d41
[HUDI-2019] Set up the file system view storage config for singleton embedded server write config every time ( #3102 )
...
Co-authored-by: 喻兆靖 <yuzhaojing@bilibili.com>
2021-06-17 20:28:03 +08:00
swuferhong
5ce64a81bd
Fix missing filter condition in the compaction instance judgment ( #3025 )
2021-06-16 14:28:53 -07:00
yuzhaojing
61efc6af79
[HUDI-2022] Release writer for append handle #close ( #3087 )
...
Co-authored-by: 喻兆靖 <yuzhaojing@bilibili.com>
2021-06-16 09:18:38 +08:00
Jintao Guan
b8fe5b91d5
[HUDI-764] [HUDI-765] ORC reader writer Implementation ( #2999 )
...
Co-authored-by: Qingyun (Teresa) Kang <kteresa@uber.com>
2021-06-15 15:21:43 -07:00
Danny Chan
cb642ceb75
[HUDI-1999] Refresh the base file view cache for WriteProfile ( #3067 )
...
Refresh the view to discover new small files.
2021-06-15 08:18:38 -07:00
yuzhaojing
6e78682cea
[HUDI-2000] Release file writer for merge handle #close ( #3068 )
...
Co-authored-by: 喻兆靖 <yuzhaojing@bilibili.com>
2021-06-13 18:09:48 +08:00
swuferhong
0c4f2fdc15
[HUDI-1984] Support independent flink hudi compaction function ( #3046 )
2021-06-13 15:04:46 +08:00
Danny Chan
125415a8b8
[HUDI-1994] Release the new records iterator for append handle #close ( #3058 )
2021-06-10 19:09:23 +08:00
Danny Chan
afbafe7046
[HUDI-1992] Release the new records map for merge handle #close ( #3056 )
2021-06-09 21:12:56 +08:00
Danny Chan
a6f5fc5967
[HUDI-1986] Skip creating marker files for flink merge handle ( #3047 )
2021-06-09 14:17:28 +08:00
wangxianghu
7261f08507
[HUDI-1929] Support configuring KeyGenerator by type ( #2993 )
2021-06-08 09:26:10 -04:00
pengzhiwei
f760ec543e
[HUDI-1659] Basic Implementation Of Spark Sql Support For Hoodie ( #2645 )
...
Main functions:
Support create table for hoodie.
Support CTAS.
Support Insert for hoodie, including dynamic partition and static partition insert.
Support MergeInto for hoodie.
Support DELETE
Support UPDATE
Both spark2 & spark3 are supported, based on DataSourceV1.
Main changes:
Add sql parser for spark2.
Add HoodieAnalysis for sql resolve and logical plan rewrite.
Add command implementations for CREATE TABLE, INSERT, MERGE INTO & CTAS.
In order to push down the update & insert logic to the HoodieRecordPayload for MergeInto, I made some changes to the
HoodieWriteHandler and other related classes.
1. Add the inputSchema for parsing the incoming record. This is because the inputSchema for MergeInto is different from the writeSchema, as there are some transforms in the update & insert expressions.
2. Add WRITE_SCHEMA to HoodieWriteConfig to pass the write schema for merge into.
3. Pass properties to HoodieRecordPayload#getInsertValue to pass the insert expression and table schema.
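The property plumbing in point 3 can be illustrated with a hypothetical, much-simplified sketch (the interface, class names, and property key below are illustrative assumptions, not Hudi's actual API): the payload receives a properties map so that a MERGE INTO insert expression can reach it at write time.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical simplified payload interface: the real HoodieRecordPayload
// is more involved; only the "pass properties into getInsertValue" idea
// is taken from the commit description above.
interface SimplePayload {
  Optional<String> getInsertValue(Map<String, String> props);
}

class ExpressionPayload implements SimplePayload {
  private final String rawValue;

  ExpressionPayload(String rawValue) {
    this.rawValue = rawValue;
  }

  @Override
  public Optional<String> getInsertValue(Map<String, String> props) {
    // The insert expression arrives via the properties map instead of being
    // baked into the payload at construction time.
    String expr = props.getOrDefault("insert.expression", "identity");
    return expr.equals("identity")
        ? Optional.of(rawValue)
        : Optional.of(expr + "(" + rawValue + ")");
  }
}
```

With no expression configured the payload returns the record unchanged; with one configured, the transform is applied during the write.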
Verify this pull request
Add TestCreateTable to test creating hoodie tables and CTAS.
Add TestInsertTable to test inserting into hoodie tables.
Add TestMergeIntoTable to test merging into hoodie tables.
Add TestUpdateTable to test updating hoodie tables.
Add TestDeleteTable to test deleting from hoodie tables.
Add TestSqlStatement to test the currently supported ddl/dml.
2021-06-07 23:24:32 -07:00
Vinay Patil
cf90f17732
[HUDI-1281] Add deltacommit to ActionType ( #3018 )
...
Co-authored-by: veenaypatil <vinay18.patil@gmail.com>
2021-06-04 22:30:48 -07:00
Wei
e6a71ea544
[MINOR] Access the static member getLastHeartbeatTime via the class instead ( #3015 )
2021-05-31 18:54:05 +08:00
Wei
219b92c8ae
[MINOR] The collection can use forEach() directly ( #3016 )
2021-05-31 18:52:30 +08:00
Wei
d965b0550f
[MINOR] 'return' is unnecessary as the last statement in a 'void' method ( #3012 )
2021-05-31 11:43:10 +08:00
rmpifer
0709c62a6b
[HUDI-1800] Exclude file slices in pending compaction when performing small file sizing ( #2902 )
...
Co-authored-by: Ryan Pifer <ryanpife@amazon.com>
2021-05-29 08:06:01 -04:00
Danny Chan
7fed7352bd
[HUDI-1865] Make embedded timeline service singleton ( #2899 )
2021-05-27 13:38:33 +08:00
wangxianghu
e7020748b5
[HUDI-1920] Set archived as the default value of HOODIE_ARCHIVELOG_FOLDER_PROP_NAME ( #2978 )
2021-05-25 16:29:55 +08:00
Susu Dong
685f77b5dd
[HUDI-1740] Fix insert-overwrite API archival ( #2784 )
...
- Fix problem of archiving replace commits
- Fix problem when getting empty replacecommit.requested
- Improved the logic of handling empty and non-empty requested/inflight commit files. Added unit tests to cover both empty and non-empty inflight file cases and cleaned up some unused test util methods
Co-authored-by: yorkzero831 <yorkzero8312@gmail.com>
Co-authored-by: zheren.yu <zheren.yu@paypay-corp.co.jp>
2021-05-21 13:52:13 -07:00
Y Ethan Guo
a96034d38d
[HUDI-1888] Fix NPE when the nested partition path field has null value ( #2957 )
2021-05-21 08:28:11 -04:00
Danny Chan
7c213f9f26
[HUDI-1917] Remove the metadata sync logic in HoodieFlinkWriteClient#preWrite because it is not thread safe ( #2971 )
2021-05-21 11:29:54 +08:00
Danny Chan
9b01d2f864
[HUDI-1915] Fix the file id for write data buffer before flushing ( #2966 )
2021-05-20 10:20:08 +08:00
wangxianghu
ced068e1ee
[MINOR] Remove unused method in BaseSparkCommitActionExecutor ( #2965 )
2021-05-20 10:18:07 +08:00
Roc Marshal
fcedbfcb58
[MINOR][hudi-client] Code cleanup, remove redundant variable declarations ( #2956 )
2021-05-17 13:34:42 +08:00
Danny Chan
8869b3b418
[HUDI-1902] Clean the corrupted files generated by FlinkMergeAndReplaceHandle ( #2949 )
...
Make the intermediate files of FlinkMergeAndReplaceHandle hidden; when
committing the instant, clean these files in case some corrupted files
were left behind (in the normal case, the intermediate files should be
cleaned by the FlinkMergeAndReplaceHandle itself).
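The commit-time sweep can be sketched roughly as follows (hypothetical code, not Hudi's actual implementation; only the idea that hidden intermediate files are deleted at commit is taken from the description above):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Illustrative sketch: intermediate files are written as hidden dot-files,
// so any that survived a crash can be swept away when the instant commits.
class IntermediateFileCleaner {
  // Deletes hidden files in the partition directory; returns how many
  // were removed. Regular data files are left untouched.
  static long cleanHiddenFiles(Path partitionDir) throws IOException {
    long removed = 0;
    try (Stream<Path> files = Files.list(partitionDir)) {
      for (Path p : (Iterable<Path>) files::iterator) {
        if (p.getFileName().toString().startsWith(".")) {
          Files.delete(p); // leftover intermediate file from a failed write
          removed++;
        }
      }
    }
    return removed;
  }
}
```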
2021-05-14 15:43:37 +08:00
xoln ann
12443e4187
[HUDI-1446] Support skip bootstrapIndex's init in abstract fs view init ( #2520 )
...
Co-authored-by: zhongliang <zhongliang@kuaishou.com>
Co-authored-by: Sivabalan Narayanan <sivabala@uber.com>
2021-05-14 00:29:26 -04:00
Danny Chan
ad77cf42ba
[HUDI-1900] Always close the file handle for a flink mini-batch write ( #2943 )
...
Close the file handle eagerly to avoid corrupted files as much as
possible.
2021-05-14 10:25:18 +08:00
Danny Chan
b98c9ab439
[HUDI-1895] Close the file handles gracefully for flink write function to avoid corrupted files ( #2938 )
2021-05-12 18:44:10 +08:00
lw0090
5a8b2a4f86
[HUDI-1768] add spark datasource unit test for schema validation when adding a column ( #2776 )
2021-05-11 16:49:18 -04:00
TeRS-K
be9db2c4f5
[HUDI-1055] Remove hardcoded parquet in tests ( #2740 )
...
* Remove hardcoded parquet in tests
* Use DataFileUtils.getInstance
* Renaming DataFileUtils to BaseFileUtils
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
2021-05-11 10:01:45 -07:00
Danny Chan
42ec7e30d7
[HUDI-1890] FlinkCreateHandle and FlinkAppendHandle canWrite should always return true ( #2933 )
...
The #canWrite method should always return true because the file-size-based
write decision is already made elsewhere, e.g. by the BucketAssigner.
2021-05-11 09:14:51 +08:00
Danny Chan
c1b331bcff
[HUDI-1886] Avoid generating corrupted files for flink sink ( #2929 )
2021-05-10 10:43:03 +08:00
Danny Chan
bfbf993cbe
[HUDI-1878] Add max memory option for flink writer task ( #2920 )
...
Also removes the rate limiter because it has similar functionality,
and modifies the create and merge handles to clean the retry files automatically.
2021-05-08 14:27:56 +08:00
Danny Chan
528f4ca988
[HUDI-1880] Support streaming read with compaction and cleaning ( #2921 )
2021-05-07 20:04:35 +08:00
Sivabalan Narayanan
0284cdecce
[HUDI-1876] wiring in Hadoop Conf with AvroSchemaConverters instantiation ( #2914 )
2021-05-05 21:31:44 -07:00
Raymond Xu
3418a92de8
[HUDI-1620] Fix Metrics UT ( #2894 )
...
Make sure to shut down Metrics between unit test cases to ensure isolation
2021-04-30 11:20:41 -07:00
satishkotha
386767693d
[HUDI-1833] rollback pending clustering even if there is greater commit ( #2863 )
...
* [HUDI-1833] rollback pending clustering even if there are greater commits
2021-04-27 14:21:42 -07:00
satishkotha
2999586509
[HUDI-1690] use jsc union instead of rdd union ( #2872 )
2021-04-26 23:35:01 -07:00
Roc Marshal
9bbb458e88
[MINOR] Remove redundant method-calling. ( #2881 )
2021-04-27 09:34:09 +08:00
Danny Chan
d047e91d86
[HUDI-1837] Add optional instant range to log record scanner for log ( #2870 )
2021-04-26 16:53:18 +08:00
Chanh Le
a1e636dc6b
[HUDI-1551] Add support for BigDecimal and Integer when partitioning based on time. ( #2851 )
...
Co-authored-by: trungchanh.le <trungchanh.le@bybit.com>
2021-04-22 21:56:20 +08:00
jsbali
b31c520c66
[HUDI-1714] Added tests to TestHoodieTimelineArchiveLog for the archival of compl… ( #2677 )
...
* Added tests to TestHoodieTimelineArchiveLog for the archival of completed clean and rollback actions.
* Adding code review changes
* [HUDI-1714] Minor Fixes
2021-04-21 10:27:43 -07:00
Sebastian Bernauer
9a288ccbeb
[MINOR] Added metric reporter Prometheus to HoodieBackedTableMetadataWriter ( #2842 )
2021-04-19 16:04:59 -07:00
li36909
6b4b878d08
[HUDI-1744] rollback fails on mor table when the partition path has no files ( #2749 )
...
Co-authored-by: lrz <lrz@lrzdeMacBook-Pro.local>
2021-04-19 15:44:11 -07:00
Aditya Tiwari
ec2334ceac
[HUDI-1716]: Resolving default values for schema from dataframe ( #2765 )
...
- Adding default values and setting null as the first entry in UNION data types in the avro schema.
Co-authored-by: Aditya Tiwari <aditya.tiwari@flipkart.com>
2021-04-19 10:05:20 -04:00
Danny Chan
dab5114f16
[HUDI-1804] Continue to write when Flink write task restart because of container killing ( #2843 )
...
The `FlinkMergeHandle` creates a marker file under the metadata path
each time it initializes; when a write task restarts after being killed, it
tries to create the already-existing file and reports an error.
To solve this problem, skip the creation and use the original data file
as the base file to merge.
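The skip-if-present behavior can be sketched as below (hypothetical code, not Hudi's actual marker implementation; only the "don't fail when the marker already exists" idea comes from the fix described above):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of idempotent marker creation: a restarted write task
// that finds an existing marker skips creation and resumes merging against
// the original base file instead of failing.
class MarkerCreator {
  // Returns true if the marker was newly created, false if it already existed.
  static boolean createMarkerIfAbsent(Path markerFile) throws IOException {
    if (Files.exists(markerFile)) {
      // A previous attempt (killed mid-write) already created the marker.
      return false;
    }
    Files.createDirectories(markerFile.getParent());
    Files.createFile(markerFile);
    return true;
  }
}
```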
2021-04-19 19:43:41 +08:00
Danny Chan
b6d949b48a
[HUDI-1801] FlinkMergeHandle rolling over may miss to rename the latest file handle ( #2831 )
...
The FlinkMergeHandle may rename the (N-1)-th file handle instead of the
latest one, thus causing data duplication.
2021-04-16 11:40:53 +08:00