Commit Graph

307 Commits

Author SHA1 Message Date
swuferhong
cb5cd35991 [HUDI-2043] HoodieDefaultTimeline$filterPendingCompactionTimeline() method has a wrong filter condition (#3109) 2021-06-21 17:53:54 -07:00
Wei
53396061cc [MINOR] Fix wrong package name (#3114) 2021-06-19 11:50:01 +08:00
Jintao Guan
b8fe5b91d5 [HUDI-764] [HUDI-765] ORC reader/writer implementation (#2999)
Co-authored-by: Qingyun (Teresa) Kang <kteresa@uber.com>
2021-06-15 15:21:43 -07:00
Raymond Xu
f922837064 [HUDI-1950] Fix Azure CI failure in TestParquetUtils (#2984)
* fix azure pipeline configs

* add pentaho.org in maven repositories

* Make sure file paths with scheme in TestParquetUtils

* add azure build status to README
2021-06-15 03:45:17 -07:00
Prashant Wason
515ce8eb36 [MINOR] Fixed the log which should only be printed when the Metadata Table is disabled. (#3080) 2021-06-15 16:18:15 +08:00
Xuedong Luan
673d62f3c3 [MINOR] Add Tencent Cloud HDFS storage support for hudi (#3064) 2021-06-11 09:16:51 +08:00
JunZhang
e0108e972e [MINOR] Add Baidu BOS storage support for hudi (#3061)
Co-authored-by: zhangjun30 <zhangjun30@baidu.com>
2021-06-10 15:51:36 +08:00
Vinay Patil
11360f707e [HUDI-1892] Fix NPE when avro field value is null (#3051) 2021-06-08 18:12:18 -04:00
pengzhiwei
f760ec543e [HUDI-1659] Basic Implementation of Spark SQL Support for Hoodie (#2645)
Main functions:
Support CREATE TABLE for Hoodie.
Support CTAS.
Support INSERT for Hoodie, including dynamic-partition and static-partition inserts.
Support MERGE INTO for Hoodie.
Support DELETE.
Support UPDATE.
Supports both Spark 2 and Spark 3, based on DataSourceV1.

Main changes:
Add a SQL parser for Spark 2.
Add HoodieAnalysis for SQL resolution and logical plan rewrites.
Add command implementations for CREATE TABLE, INSERT, MERGE INTO and CTAS.
In order to push the update and insert logic down to the HoodieRecordPayload for MERGE INTO, I made some changes to the HoodieWriteHandler and other related classes:
1. Add an inputSchema for parsing the incoming records. This is needed because the inputSchema for MERGE INTO differs from the writeSchema, as there are transforms in the update and insert expressions.
2. Add WRITE_SCHEMA to HoodieWriteConfig to pass the write schema for MERGE INTO.
3. Pass properties to HoodieRecordPayload#getInsertValue to pass the insert expression and table schema.

Verifying this pull request:
Add TestCreateTable to test creating Hoodie tables and CTAS.
Add TestInsertTable to test inserting into Hoodie tables.
Add TestMergeIntoTable to test merging into Hoodie tables.
Add TestUpdateTable to test updating Hoodie tables.
Add TestDeleteTable to test deleting from Hoodie tables.
Add TestSqlStatement to test the currently supported DDL/DML.
2021-06-07 23:24:32 -07:00
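The SQL surface described in the commit above can be sketched roughly as follows. This is a hypothetical example, not taken from the PR: the table name, columns, and property keys are illustrative, and the exact syntax and supported options depend on the Hudi and Spark versions in use.

```sql
-- Create a partitioned Hoodie table (hypothetical names and properties)
CREATE TABLE hudi_orders (
  id INT,
  name STRING,
  price DOUBLE,
  ts BIGINT,
  dt STRING
) USING hudi
PARTITIONED BY (dt)
TBLPROPERTIES (primaryKey = 'id', preCombineField = 'ts');

-- Static and dynamic partition inserts
INSERT INTO hudi_orders PARTITION (dt = '2021-06-07') VALUES (1, 'a1', 10.0, 1000);
INSERT INTO hudi_orders VALUES (2, 'a2', 20.0, 1000, '2021-06-08');

-- Upsert via MERGE INTO
MERGE INTO hudi_orders AS t
USING staged_orders AS s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- Row-level UPDATE and DELETE
UPDATE hudi_orders SET price = price * 1.1 WHERE id = 1;
DELETE FROM hudi_orders WHERE id = 2;
```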
Vinay Patil
f3d7b49bfe [HUDI-1148] Remove Hadoop Conf Logs (#3040) 2021-06-07 14:49:55 -07:00
Vinay Patil
cf90f17732 [HUDI-1281] Add deltacommit to ActionType (#3018)
Co-authored-by: veenaypatil <vinay18.patil@gmail.com>
2021-06-04 22:30:48 -07:00
Wei
f6eee77636 [MINOR] Remove the implementation of Serializable from HoodieException (#3020) 2021-06-03 19:46:33 +08:00
hk__lrzy
83b0301c1a [HUDI-1943] Losing properties during HoodieWriteConfig initialization (#3006)
* [hudi-flink] Fix lost properties problem

Co-authored-by: haoke <haoke@bytedance.com>
2021-06-01 16:09:48 +08:00
Yao WANG
7a63175a70 Fix the grammar error in the comment (#3013)
Co-authored-by: ywang46 <ywang46@paypal.com>
2021-05-31 11:44:25 +08:00
rmpifer
0709c62a6b [HUDI-1800] Exclude file slices in pending compaction when performing small file sizing (#2902)
Co-authored-by: Ryan Pifer <ryanpife@amazon.com>
2021-05-29 08:06:01 -04:00
Raymond Xu
afa6bc0b10 [HUDI-1723] Fix path selector listing files with the same mod date (#2845) 2021-05-25 10:19:10 -04:00
wangxianghu
e7020748b5 [HUDI-1920] Set archived as the default value of HOODIE_ARCHIVELOG_FOLDER_PROP_NAME (#2978) 2021-05-25 16:29:55 +08:00
wangxianghu
6539813733 [MINOR] Update the javadoc of EngineType (#2979) 2021-05-22 19:44:08 +08:00
Susu Dong
685f77b5dd [HUDI-1740] Fix insert-overwrite API archival (#2784)
- Fixed archiving of replace commits
- Fixed getting an empty replacecommit.requested
- Improved the logic for handling empty and non-empty requested/inflight commit files. Added unit tests to cover both empty and non-empty inflight file cases, and cleaned up some unused test util methods.

Co-authored-by: yorkzero831 <yorkzero8312@gmail.com>
Co-authored-by: zheren.yu <zheren.yu@paypay-corp.co.jp>
2021-05-21 13:52:13 -07:00
zhangminglei
fe3f5c2d56 [HUDI-1913] Using streams instead of loops for input/output (#2962) 2021-05-19 09:13:38 +08:00
Danny Chan
46a2399a45 [HUDI-1902] Global index for flink writer (#2958)
Supports deduplication for record keys with different partition paths.
2021-05-18 13:55:38 +08:00
xoln ann
12443e4187 [HUDI-1446] Support skip bootstrapIndex's init in abstract fs view init (#2520)
Co-authored-by: zhongliang <zhongliang@kuaishou.com>
Co-authored-by: Sivabalan Narayanan <sivabala@uber.com>
2021-05-14 00:29:26 -04:00
TeRS-K
be9db2c4f5 [HUDI-1055] Remove hardcoded parquet in tests (#2740)
* Remove hardcoded parquet in tests
* Use DataFileUtils.getInstance
* Renaming DataFileUtils to BaseFileUtils

Co-authored-by: Vinoth Chandar <vinoth@apache.org>
2021-05-11 10:01:45 -07:00
Volodymyr Burenin
8a48d16e41 [HUDI-1707] Reduces log level for too verbose messages from info to debug level. (#2714)
* Reduces log level for too verbose messages from info to debug level.
* Sort config output.
* Code Review : Small restructuring + rebasing to master
 - Fixing flaky multi delta streamer test
 - Using isDebugEnabled() checks
 - Some changes to shorten log message without moving to DEBUG

Co-authored-by: volodymyr.burenin <volodymyr.burenin@cloudkitchens.com>
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
2021-05-10 07:16:02 -07:00
Danny Chan
d047e91d86 [HUDI-1837] Add optional instant range to log record scanner for log (#2870) 2021-04-26 16:53:18 +08:00
jsbali
b31c520c66 [HUDI-1714] Added tests to TestHoodieTimelineArchiveLog for the archival of compl… (#2677)
* Added tests to TestHoodieTimelineArchiveLog for the archival of completed clean and rollback actions.

* Adding code review changes

* [HUDI-1714] Minor Fixes
2021-04-21 10:27:43 -07:00
vinoyang
c24d90d25a [MINOR] Expose the detailed exception object (#2861) 2021-04-21 22:41:42 +08:00
Xu Guang Lv
1d53d6e6c2 [HUDI-1803] Support BAIDU AFS storage format in hudi (#2836) 2021-04-16 16:43:14 +08:00
Sivabalan Narayanan
8d29863c86 [HUDI-1615] Fixing usage of NULL schema for delete operation in HoodieSparkSqlWriter (#2777) 2021-04-14 15:35:39 +08:00
Danny Chan
ab4a7b0b4a [HUDI-1788] Insert overwrite (table) for Flink writer (#2808)
Supports `INSERT OVERWRITE` and `INSERT OVERWRITE TABLE` for Flink
writer.
2021-04-14 10:23:37 +08:00
Roc Marshal
b554835053 [MINOR] fix typo. (#2804) 2021-04-11 10:31:07 +08:00
xiarixiaoyao
8d4a7fe33e [HUDI-1783] Support Huawei Cloud Object Storage (#2796) 2021-04-10 13:02:11 +08:00
hongdd
ecdbd2517f [HUDI-699] Fix CompactionCommand and add unit test for CompactionCommand (#2325) 2021-04-08 15:35:33 +08:00
hiscat
3a926aacf6 [HUDI-1773] HoodieFileGroup code optimize (#2781) 2021-04-07 18:16:03 +08:00
hiscat
f4f9dd9d83 [HUDI-1772] HoodieFileGroupId compareTo logic error (fileId compared with itself) (#2780) 2021-04-07 18:10:38 +08:00
hiscat
d035fcbb3c [HUDI-1767] Add setter to HoodieKey and HoodieRecordLocation to have better SE/DE performance for Flink (#2779) 2021-04-07 14:13:31 +08:00
li36909
8527590772 [HUDI-1750] Fail to load user's class if the user moves the hudi-spark-bundle jar into the Spark classpath (#2753) 2021-04-06 22:33:32 -04:00
Danny Chan
9c369c607d [HUDI-1757] Assigns the buckets by record key for Flink writer (#2757)
Currently we assign the buckets by record partition path, which could
cause hotspots if the partition field is a datetime type. This changes the
assignment to group records by their key first; the assignment is valid
only if there is no conflict (two tasks writing to the same bucket).

This patch also changes the coordinator execution to be asynchronous.
2021-04-06 19:06:41 +08:00
pengzhiwei
684622c7c9 [HUDI-1591] Implement Spark's FileIndex for Hudi to support queries via Hudi DataSource using non-globbed table path and partition pruning (#2651) 2021-04-01 11:12:28 -07:00
Danny Chan
9804662bc8 [HUDI-1738] Emit deletes for flink MOR table streaming read (#2742)
Currently we do a soft delete for DELETE row data when writing into a
Hoodie table. For streaming reads of a MOR table, the Flink reader detects
the delete records and still emits them, as long as the record-key
semantics are kept.

This is useful, and in fact a must, for incremental computation in
streaming ETL pipelines.
2021-04-01 15:25:31 +08:00
Sebastian Bernauer
aa0da72c59 Preparation for Avro update (#2650) 2021-03-30 21:50:17 -07:00
n3nash
01a1d7997b [HUDI-1712] Rename & standardize config to match other configs (#2708) 2021-03-24 17:24:02 +08:00
n3nash
d7b18783bd [HUDI-1709] Improving config names and adding hive metastore uri config (#2699) 2021-03-22 01:22:06 -07:00
n3nash
74241947c1 [HUDI-845] Added locking capability to allow multiple writers (#2374)
* [HUDI-845] Added locking capability to allow multiple writers
1. Added LockProvider API for pluggable lock methodologies
2. Added Resolution Strategy API to allow for pluggable conflict resolution
3. Added TableService client API to schedule table services
4. Added Transaction Manager for wrapping actions within transactions
2021-03-16 16:43:53 -07:00
Sivabalan Narayanan
b038623ed3 [HUDI-1615] Fixing null schema in bulk_insert row writer path (#2653)
* [HUDI-1615] Avoid passing in null schema from row writing/deltastreamer
* Fixing null schema in bulk insert row writer path
* Fixing tests

Co-authored-by: vc <vinoth@apache.org>
2021-03-16 09:44:11 -07:00
Prashant Wason
3b36cb805d [HUDI-1552] Improve performance of key lookups from base file in Metadata Table. (#2494)
* [HUDI-1552] Improve performance of key lookups from base file in Metadata Table.

1. Cache the KeyScanner across lookups so that the HFile index does not have to be read for each lookup.
2. Enable block caching in KeyScanner.
3. Move the lock to a limited scope of the code to reduce lock contention.
4. Removed reuse configuration

* Properly close the readers, when metadata table is accessed from executors

 - Passing a reuse boolean into HoodieBackedTableMetadata
 - Preserve the fast return behavior when reusing and opening from multiple threads (no contention)
 - Handle concurrent close() and open readers, for reuse=false, by always synchronizing

Co-authored-by: Vinoth Chandar <vinoth@apache.org>
2021-03-15 13:42:57 -07:00
Sivabalan Narayanan
e93c6a5693 [HUDI-1496] Fixing input stream detection of GCS FileSystem (#2500)
* Adding SchemeAwareFSDataInputStream to abstract out special handling for GCSFileSystem
* Moving wrapping of fsDataInputStream to separate method in HoodieLogFileReader

Co-authored-by: Vinoth Chandar <vinoth@apache.org>
2021-03-14 00:57:57 -08:00
Danny Chan
2fdae6835c [HUDI-1663] Streaming read for Flink MOR table (#2640)
Supports two read modes:
* Read the full data set starting from the latest commit instant and
  subsequent incremental data set
* Read data set that starts from a specified commit instant
2021-03-10 22:44:06 +08:00
satishkotha
c4a66324cd [HUDI-1651] Fix archival of requested replacecommit (#2622) 2021-03-09 15:56:44 -08:00
satishkotha
11ad4ed26b [HUDI-1661] Exclude clustering commits from getExtraMetadataFromLatest API (#2632) 2021-03-05 13:42:19 -08:00