* Fix Azure pipeline configs
* Add pentaho.org to the Maven repositories
* Make sure file paths include a scheme in TestParquetUtils
* Add Azure build status to README
Main functions (see the SQL sketch after this list):
Support CREATE TABLE for hoodie.
Support CTAS.
Support INSERT for hoodie, including dynamic partition and static partition inserts.
Support MERGE INTO for hoodie.
Support DELETE.
Support UPDATE.
All of these work on both spark2 & spark3, based on DataSourceV1.
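For illustration, the supported statements can be exercised from a Spark session roughly as follows. This is a sketch only: `spark` is an assumed SparkSession, and the table, columns, and option spellings (e.g. primaryKey) are examples rather than the exact final syntax.

```scala
// Illustrative only: table/column names and option keys are examples.
spark.sql(
  """CREATE TABLE h0 (id INT, name STRING, price DOUBLE, dt STRING)
    |USING hudi
    |PARTITIONED BY (dt)
    |OPTIONS (primaryKey = 'id')""".stripMargin)

// CTAS
spark.sql("CREATE TABLE h1 USING hudi AS SELECT * FROM h0")

// Static and dynamic partition inserts
spark.sql("INSERT INTO h0 PARTITION (dt = '2021-05-01') SELECT 1, 'a1', 10.0")
spark.sql("INSERT INTO h0 SELECT 2, 'a2', 20.0, '2021-05-02'")

// MERGE INTO, UPDATE and DELETE
spark.sql(
  """MERGE INTO h0
    |USING (SELECT 1 AS id, 'a1' AS name, 11.0 AS price, '2021-05-01' AS dt) s
    |ON h0.id = s.id
    |WHEN MATCHED THEN UPDATE SET *
    |WHEN NOT MATCHED THEN INSERT *""".stripMargin)
spark.sql("UPDATE h0 SET price = 12.0 WHERE id = 1")
spark.sql("DELETE FROM h0 WHERE id = 1")
```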
Main changes:
Add a SQL parser for spark2.
Add HoodieAnalysis for SQL resolution and logical plan rewriting.
Add command implementations for CREATE TABLE, INSERT, MERGE INTO & CTAS.
In order to push down the update & insert logic to the HoodieRecordPayload for MERGE INTO, I made some changes to the
HoodieWriteHandler and other related classes:
1. Add an inputSchema for parsing the incoming record. This is needed because the inputSchema for MERGE INTO differs from the writeSchema: the update & insert expressions apply transforms.
2. Add WRITE_SCHEMA to HoodieWriteConfig to pass the write schema for MERGE INTO.
3. Pass properties to HoodieRecordPayload#getInsertValue so the insert expressions and table schema can be handed to the payload (see the sketch after this list).
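For illustration, a payload can consume those properties roughly as follows. This is a sketch under assumptions: the class name and the property key are made up; the real SQL layer defines its own keys for the serialized insert expressions.

```scala
import java.util.Properties

import org.apache.avro.Schema
import org.apache.avro.generic.{GenericRecord, IndexedRecord}
import org.apache.hudi.common.model.OverwriteWithLatestAvroPayload
import org.apache.hudi.common.util.{Option => HOption}

// Hypothetical payload: overrides the properties-aware getInsertValue
// overload that this change passes properties into.
class SqlExpressionPayload(record: GenericRecord, orderingVal: Comparable[_])
    extends OverwriteWithLatestAvroPayload(record, orderingVal) {

  override def getInsertValue(schema: Schema, properties: Properties): HOption[IndexedRecord] = {
    // The SQL layer hands the insert expressions and table schema down here.
    val insertExpr = properties.getProperty("hoodie.payload.insert.expression") // hypothetical key
    // ...evaluate insertExpr against the incoming record, project to `schema`...
    super.getInsertValue(schema)
  }
}
```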
Verify this pull request
Add TestCreateTable to test creating hoodie tables and CTAS.
Add TestInsertTable to test inserting into hoodie tables.
Add TestMergeIntoTable to test merging into hoodie tables.
Add TestUpdateTable to test updating hoodie tables.
Add TestDeleteTable to test deleting from hoodie tables.
Add TestSqlStatement to test the currently supported DDL/DML statements.
- Fix a problem with archiving replace commits
- Fix a problem when reading an empty replacecommit.requested file
- Improved the logic for handling empty and non-empty requested/inflight commit files; added unit tests covering both the empty and non-empty inflight file cases and cleaned up some unused test util methods (see the sketch below)
Co-authored-by: yorkzero831 <yorkzero8312@gmail.com>
Co-authored-by: zheren.yu <zheren.yu@paypay-corp.co.jp>
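A minimal sketch of the guard this fix implies, treating an absent payload and a zero-byte file the same way; the helper name is made up, while getInstantDetails is the timeline call used.

```scala
import org.apache.hudi.common.table.timeline.{HoodieInstant, HoodieTimeline}

// Sketch: an empty replacecommit.requested has no content to deserialize,
// so archival must handle "absent" and "zero bytes" identically.
def readInstantContent(timeline: HoodieTimeline, instant: HoodieInstant): scala.Option[Array[Byte]] = {
  val details = timeline.getInstantDetails(instant) // org.apache.hudi.common.util.Option[Array[Byte]]
  if (details.isPresent && details.get().length > 0) Some(details.get()) else None
}
```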
* Reduces the log level of overly verbose messages from INFO to DEBUG.
* Sorts config output.
* Code review: small restructuring + rebasing onto master
- Fixing flaky multi delta streamer test
- Using isDebugEnabled() checks (see the sketch below)
- Some changes to shorten log messages without moving them to DEBUG
Co-authored-by: volodymyr.burenin <volodymyr.burenin@cloudkitchens.com>
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
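The isDebugEnabled() guard mentioned above, in a minimal sketch (log4j 1.x shown; the point is to skip building expensive messages when DEBUG is off):

```scala
import org.apache.log4j.LogManager

object LogGuardExample {
  private val log = LogManager.getLogger(getClass)

  def main(args: Array[String]): Unit = {
    val recordKeys = Seq("key-1", "key-2", "key-3")
    // Guard: the message (and any joins it needs) is only built when
    // DEBUG is actually enabled.
    if (log.isDebugEnabled) {
      log.debug(s"Processed record keys: ${recordKeys.mkString(", ")}")
    }
  }
}
```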
* Added tests to TestHoodieTimelineArchiveLog for the archival of completed clean and rollback actions.
* Adding code review changes
* [HUDI-1714] Minor Fixes
Currently we assign buckets by record partition path, which can cause
hotspots if the partition field is of datetime type. Change to assign
buckets by first grouping records by their key; the assignment is valid
only if there is no conflict (two tasks writing to the same bucket).
See the sketch below.
This patch also changes the coordinator execution to be asynchronous.
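An illustrative sketch of the idea, with hypothetical names: deriving the bucket from the record key (instead of the partition path) spreads a hot datetime partition across all buckets.

```scala
// Hypothetical helper: key-based bucket assignment; same key always maps
// to the same bucket, so the "no two tasks write the same bucket"
// invariant can be checked per key group.
def bucketFor(recordKey: String, numBuckets: Int): Int =
  (recordKey.hashCode % numBuckets + numBuckets) % numBuckets // non-negative

val keys = Seq("uuid-1", "uuid-2", "uuid-3")
keys.groupBy(k => bucketFor(k, 4)).foreach { case (bucket, ks) =>
  println(s"bucket $bucket <- ${ks.mkString(", ")}")
}
```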
Currently we do a soft delete for DELETE row data when writing into a hoodie
table. For streaming reads of a MOR table, the Flink reader detects the
delete records and still emits them, so the record key semantics are
kept (see the sketch below).
This is useful, and in fact a must, for incremental computation in
streaming ETL pipelines.
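Conceptually, the reader side looks something like this (Flink's RowKind API; the delete-detection predicate is a placeholder, not the actual reader logic):

```scala
import org.apache.flink.table.data.RowData
import org.apache.flink.types.RowKind

// Sketch: rather than dropping DELETEs, the streaming reader tags them
// with RowKind.DELETE so downstream operators see proper retractions.
def emit(row: RowData, isDeleteRecord: Boolean): RowData = {
  row.setRowKind(if (isDeleteRecord) RowKind.DELETE else RowKind.INSERT)
  row
}
```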
* [HUDI-845] Added locking capability to allow multiple writers (see the config sketch after this list)
1. Added LockProvider API for pluggable lock methodologies
2. Added Resolution Strategy API to allow for pluggable conflict resolution
3. Added TableService client API to schedule table services
4. Added Transaction Manager for wrapping actions within transactions
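For example, concurrent writers can be configured roughly like this; the lock-related config keys are as introduced around this change but their exact spellings may vary by version, `spark` is an assumed SparkSession, and the table/path are placeholders.

```scala
// Sketch: optimistic concurrency control with the ZooKeeper-based lock
// provider. Other required write configs (record key, precombine field,
// etc.) are omitted for brevity.
val df = spark.range(0, 10).toDF("id")
df.write.format("hudi").
  option("hoodie.table.name", "my_table").
  option("hoodie.write.concurrency.mode", "optimistic_concurrency_control").
  option("hoodie.cleaner.policy.failed.writes", "LAZY").
  option("hoodie.write.lock.provider",
    "org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider").
  option("hoodie.write.lock.zookeeper.url", "zk1").
  option("hoodie.write.lock.zookeeper.port", "2181").
  option("hoodie.write.lock.zookeeper.lock_key", "my_table").
  option("hoodie.write.lock.zookeeper.base_path", "/hudi/locks").
  mode("append").
  save("/tmp/hudi/my_table")
```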
* [HUDI-1552] Improve performance of key lookups from the base file in the Metadata Table (see the sketch after this list).
1. Cache the KeyScanner across lookups so that the HFile index does not have to be re-read for each lookup.
2. Enable block caching in the KeyScanner.
3. Move the lock to a limited scope of the code to reduce lock contention.
4. Remove the reuse configuration.
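The reuse pattern, in a generic sketch: the actual KeyScanner type is internal to the metadata table code, so the trait and class names here are illustrative stand-ins.

```scala
import java.util.concurrent.atomic.AtomicReference

// Illustrative stand-in for the HFile reader/scanner.
trait KeyScanner { def lookup(key: String): Option[Array[Byte]] }

class CachedKeyLookup(openScanner: () => KeyScanner) {
  private val cached = new AtomicReference[KeyScanner]()

  def lookup(key: String): Option[Array[Byte]] = {
    var scanner = cached.get()
    if (scanner == null) {
      // Narrow lock scope: only scanner creation is synchronized, so
      // concurrent lookups on a warm cache never contend.
      synchronized {
        scanner = cached.get()
        if (scanner == null) { scanner = openScanner(); cached.set(scanner) }
      }
    }
    scanner.lookup(key) // reuses the open scanner (and its block cache)
  }
}
```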
* Properly close the readers when the metadata table is accessed from executors
- Passing a reuse boolean into HoodieBackedTableMetadata
- Preserve the fast-return behavior when reusing and opening from multiple threads (no contention)
- Handle concurrent close() and reader opens for reuse=false by always synchronizing (see the sketch below)
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
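Conceptually the semantics look like this; the class is an illustrative shape, not the actual HoodieBackedTableMetadata internals.

```scala
// reuse=true keeps one reader open across lookups (fast path, no lock once
// open); reuse=false opens per call and synchronizes so a concurrent
// close() cannot race with an in-flight open.
class ReaderHolder(reuse: Boolean, open: () => AutoCloseable) {
  @volatile private var reader: AutoCloseable = _

  def withReader[T](f: AutoCloseable => T): T =
    if (reuse) {
      if (reader == null) synchronized { if (reader == null) reader = open() }
      f(reader) // fast return: no contention once the shared reader exists
    } else {
      synchronized {
        val r = open()
        try f(r) finally r.close() // always close the per-call reader
      }
    }

  def close(): Unit = synchronized {
    if (reader != null) { reader.close(); reader = null }
  }
}
```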
* Adding SchemeAwareFSDataInputStream to abstract out the special handling for GCSFileSystem (see the sketch below)
* Moving the wrapping of fsDataInputStream into a separate method in HoodieLogFileReader
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
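A rough idea of the wrapper; this is a hypothetical shape of the class, not its actual implementation.

```scala
import org.apache.hadoop.fs.FSDataInputStream

// Sketch: scheme-specific behavior (here, GCS) is hidden behind one
// stream type instead of leaking into every caller.
class SchemeAwareFSDataInputStream(in: FSDataInputStream, isGcs: Boolean)
    extends FSDataInputStream(in) {

  override def seek(desired: Long): Unit = {
    if (isGcs) {
      // GCS-specific adjustment would go here (illustrative only).
    }
    super.seek(desired)
  }
}
```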
Supports two read modes (illustrated after this list):
* Read the full data set as of the latest commit instant, then the
subsequent incremental data sets
* Read the data set starting from a specified commit instant
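With the Flink SQL connector, the two modes map to options roughly like the sketch below; the option names are from this era of the connector and may have been renamed in later releases. Leaving out the start commit gives the first mode (full data set as of the latest instant, then incremental changes), while 'read.streaming.start-commit' pins the starting instant for the second.

```scala
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
val tEnv = StreamTableEnvironment.create(env)

// Sketch: table name, path and start instant are placeholders.
tEnv.executeSql(
  """CREATE TABLE t1 (id INT, name STRING) WITH (
    |  'connector' = 'hudi',
    |  'path' = '/tmp/hudi/t1',
    |  'table.type' = 'MERGE_ON_READ',
    |  'read.streaming.enabled' = 'true',
    |  'read.streaming.start-commit' = '20210427183221'
    |)""".stripMargin)
tEnv.executeSql("SELECT * FROM t1").print()
```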