Currently we assign buckets by record partition path, which can cause
hotspots when the partition field is of datetime type. This change assigns
buckets by first grouping records by their key; the assignment is
valid only if there is no conflict (two tasks writing to the same bucket);
see the sketch below.
This patch also changes the coordinator execution to be asynchronous.
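A minimal sketch of the key-grouped assignment, assuming a fixed set of
hash-partitioned write tasks; `KeyBasedRouter` and `taskFor` are illustrative
names, not the actual Hudi API:

```java
// Hypothetical sketch: route records to tasks by record key, not by
// partition path, so a given key always lands on the same task and no
// two tasks write to the same bucket, even when one datetime partition
// is hot.
public final class KeyBasedRouter {
  private final int numTasks;

  public KeyBasedRouter(int numTasks) {
    this.numTasks = numTasks;
  }

  /** Route by hashing the record key, not the partition path. */
  public int taskFor(String recordKey) {
    // floorMod keeps the index non-negative for negative hash codes
    return Math.floorMod(recordKey.hashCode(), numTasks);
  }
}
```

Because routing depends only on the record key, a hot datetime partition
is spread across all tasks instead of landing on one.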
Currently we do a soft delete for DELETE row data when writing into a Hoodie
table. For streaming reads of MOR tables, the Flink reader detects the
delete records and still emits them, as long as the record key semantics are
kept.
This is useful, and in fact a must, for incremental computation in
streaming ETL pipelines; a detection sketch follows.
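A minimal sketch of the delete detection, assuming the soft delete is carried
in Hudi's conventional `_hoodie_is_deleted` boolean field; `DeleteFlagCheck`
is a hypothetical helper, not the actual Flink reader code:

```java
import org.apache.avro.generic.GenericRecord;

// Minimal sketch: check the "_hoodie_is_deleted" marker so a streaming
// reader can emit the row as a DELETE change event instead of dropping it.
public final class DeleteFlagCheck {
  private static final String DELETE_FIELD = "_hoodie_is_deleted";

  public static boolean isDeleteRecord(GenericRecord record) {
    if (record.getSchema().getField(DELETE_FIELD) == null) {
      return false; // schema has no soft-delete marker
    }
    Object flag = record.get(DELETE_FIELD);
    return flag instanceof Boolean && (Boolean) flag;
  }
}
```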
* [HUDI-1653] Add support for composite keys in NonpartitionedKeyGenerator
* update NonpartitionedKeyGenerator to support composite record keys
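A hedged sketch of what a composite record key can look like, following the
`field1:value1,field2:value2` style of Hudi's complex key generators;
`buildRecordKey` is a hypothetical helper, not the actual
NonpartitionedKeyGenerator method:

```java
import java.util.List;
import java.util.stream.Collectors;
import org.apache.avro.generic.GenericRecord;

// Illustrative composite record key construction: join each key field
// with its value, comma-separating the pairs.
public final class CompositeKeySketch {
  public static String buildRecordKey(GenericRecord record, List<String> keyFields) {
    return keyFields.stream()
        .map(f -> f + ":" + record.get(f))
        .collect(Collectors.joining(","));
  }
}
```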
* [HUDI-845] Added locking capability to allow multiple writers
1. Added LockProvider API for pluggable lock implementations (see the sketch after this list)
2. Added Resolution Strategy API to allow for pluggable conflict resolution
3. Added TableService client API to schedule table services
4. Added Transaction Manager for wrapping actions within transactions
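A minimal sketch of a pluggable lock provider in the spirit of item 1; the
interface shape below is illustrative, not the exact Hudi signature, and a
real provider would back `tryLock`/`unlock` with ZooKeeper, Hive Metastore,
or a similar external service:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical provider interface; real implementations plug in an
// external lock service here.
interface SketchLockProvider extends AutoCloseable {
  boolean tryLock(long time, TimeUnit unit) throws InterruptedException;
  void unlock();
}

// Simplest possible provider: an in-process reentrant lock.
final class InProcessLockProvider implements SketchLockProvider {
  private final ReentrantLock lock = new ReentrantLock();

  @Override
  public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
    return lock.tryLock(time, unit);
  }

  @Override
  public void unlock() {
    lock.unlock();
  }

  @Override
  public void close() {
    if (lock.isHeldByCurrentThread()) {
      lock.unlock(); // release on close so callers cannot leak the lock
    }
  }
}
```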
* [HUDI-1552] Improve performance of key lookups from base file in Metadata Table.
1. Cache the KeyScanner across lookups so that the HFile index does not have to be read for each lookup (see the sketch after this list).
2. Enable block caching in KeyScanner.
3. Move the lock to a limited scope of the code to reduce lock contention.
4. Removed reuse configuration
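A hedged sketch of the scanner caching in item 1, with the lock scope limited
to first initialization as in item 3; `openScanner` is a hypothetical
supplier standing in for the HFile scanner setup:

```java
import java.util.function.Supplier;

// Cache an expensive-to-open scanner across lookups; double-checked
// locking keeps the lock scope to first initialization only.
public final class CachedScannerHolder<S> {
  private volatile S scanner;
  private final Object initLock = new Object();
  private final Supplier<S> openScanner;

  public CachedScannerHolder(Supplier<S> openScanner) {
    this.openScanner = openScanner;
  }

  public S get() {
    S s = scanner;
    if (s == null) {
      synchronized (initLock) {
        if (scanner == null) {
          scanner = openScanner.get(); // opened once, reused thereafter
        }
        s = scanner;
      }
    }
    return s; // lock-free fast path after initialization
  }
}
```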
* Properly close the readers when the metadata table is accessed from executors
- Pass a reuse boolean into HoodieBackedTableMetadata
- Preserve the fast-return behavior when reusing and opening from multiple threads (no contention)
- Handle concurrent close() and open readers, for reuse=false, by always synchronizing (see the sketch below)
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
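A sketch of the reuse semantics described above; `ReaderManager` and
`openReader` are hypothetical names, not the HoodieBackedTableMetadata
internals:

```java
import java.util.function.Supplier;

// With reuse=true the cached reader is returned on a lock-free fast path
// once initialized; with reuse=false open() and close() always
// synchronize so a concurrent close() cannot race an in-flight open.
public final class ReaderManager<R extends AutoCloseable> {
  private final boolean reuse;
  private final Supplier<R> openReader;
  private volatile R cached;

  public ReaderManager(boolean reuse, Supplier<R> openReader) {
    this.reuse = reuse;
    this.openReader = openReader;
  }

  public R open() {
    if (reuse) {
      R r = cached;
      if (r == null) {
        synchronized (this) {
          if (cached == null) {
            cached = openReader.get();
          }
          r = cached;
        }
      }
      return r; // fast return: no contention after first initialization
    }
    synchronized (this) { // reuse=false: always synchronize
      return openReader.get();
    }
  }

  public void close() throws Exception {
    synchronized (this) {
      if (cached != null) {
        cached.close();
        cached = null;
      }
    }
  }
}
```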
In order to support object storage, we need these changes:
* Use the Hadoop filesystem so that we can find the plugin filesystem (see the sketch after this list)
* Do not fetch file size until the file handle is closed
* Do not close the opened filesystem because we want to use the
filesystem cache
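A small example of the pattern these changes describe, using standard Hadoop
FileSystem APIs; the helper class itself is illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Resolve the FileSystem through Hadoop (so s3a://, gs://, oss:// plugins
// are discovered by scheme), defer the size check until after close(),
// and never close the FileSystem itself so Hadoop's FS cache keeps working.
public final class ObjectStoreWriteSketch {
  public static long write(Configuration conf, Path path, byte[] payload) throws IOException {
    FileSystem fs = path.getFileSystem(conf); // plugin FS resolved via URI scheme
    try (FSDataOutputStream out = fs.create(path, true)) {
      out.write(payload);
    } // object stores only guarantee a final size once the handle is closed
    long size = fs.getFileStatus(path).getLen(); // fetch size post-close
    // fs is intentionally not closed; it is shared via the FS cache
    return size;
  }
}
```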
- Introduce configs to control how compaction is triggered
- Compaction can be triggered by elapsed time, number of delta commits, or combinations of both (see the example below)
- Default behaviour remains the same.
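A hedged example of wiring such a trigger; the `hoodie.compact.inline.*` key
names below are my understanding of how these options surface and should be
verified against the Hudi version in use:

```java
import java.util.Properties;

// Fire compaction when EITHER threshold is crossed: 5 delta commits
// have accumulated, or an hour has elapsed since the last compaction.
public final class CompactionTriggerExample {
  public static Properties timeOrCommitsTrigger() {
    Properties props = new Properties();
    props.setProperty("hoodie.compact.inline.trigger.strategy", "NUM_OR_TIME");
    props.setProperty("hoodie.compact.inline.max.delta.commits", "5");
    props.setProperty("hoodie.compact.inline.max.delta.seconds", "3600");
    return props;
  }
}
```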
This is step 2 of RFC-24:
https://cwiki.apache.org/confluence/display/HUDI/RFC+-+24%3A+Hoodie+Flink+Writer+Proposal
This PR introduces a BucketAssigner that assigns a bucket ID (partition
path & fileID) to each stream record.
There is no longer any need to look up the index and re-partition the
records in the downstream pipeline:
we decide the write target location before the write, and each record
computes its location when the BucketAssigner receives it, so the
indexing happens in a streaming style.
Computing locations for a whole batch of records at once is
resource-consuming and puts pressure on the engine,
which we should avoid in a streaming system. A simplified sketch follows.
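A simplified sketch of the streaming-style assignment; small-file sizing,
bucket types, and task-level state are elided, and the names are illustrative
rather than the actual BucketAssigner internals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Each record resolves its target file as it arrives, so no batch-wide
// index lookup or re-partitioning stage is needed downstream.
public final class BucketAssignerSketch {
  /** partitionPath -> fileId of the bucket currently being filled. */
  private final Map<String, String> activeBuckets = new HashMap<>();

  public String assignFileId(String partitionPath) {
    return activeBuckets.computeIfAbsent(partitionPath,
        p -> UUID.randomUUID().toString()); // new file group if none active
  }
}
```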
* Added HoodieConcatHandle to skip merging for "insert" operation when the corresponding config is set
Co-authored-by: Sivabalan Narayanan <sivabala@uber.com>
Addresses leaks and perf degradation observed during testing. These were regressions from the original RFC-15 PoC implementation.
* Pass a single instance of HoodieTableMetadata everywhere
* Fix tests and add config for enabling metrics
- Removed special casing of assumeDatePartitioning inside FSUtils#getAllPartitionPaths()
- Consequently, IOException is never thrown and many files had to be adjusted
- More diligent handling of open file handles in metadata table
- Added config for controlling reuse of connections
- Added config for turning off fallback to listing, so we can see tests fail
- Changed all ipf listing code to cache/amortize the open/close for better performance
- Timelineserver also reuses connections, for better performance
- Without timelineserver, when metadata table is opened from executors, reuse is not allowed
- HoodieMetadataConfig passed into HoodieTableMetadata#create as argument.
- Fix TestHoodieBackedTableMetadata#testSync
- Adds a field to RollbackMetadata that captures the logs written for rollback blocks
- Adds a field to RollbackMetadata that captures new log files written by unsynced delta commits
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
* [HUDI-1479] Use HoodieEngineContext to parallelize fetching of partition paths
* Add a test class for FileSystemBackedTableMetadata
Co-authored-by: Nishith Agarwal <nagarwal@uber.com>
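A sketch of the idea, with `EngineContextLike` as a hypothetical stand-in for
HoodieEngineContext's parallel-map capability:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical stand-in for HoodieEngineContext: a parallel map the
// engine (Spark, Flink, or a local executor) provides.
interface EngineContextLike {
  <I, O> List<O> map(List<I> input, Function<I, O> fn, int parallelism);
}

final class ParallelPartitionListing {
  // Fan per-directory listing out through the engine instead of a
  // sequential driver-side loop.
  static List<String> fetchAllPartitionPaths(EngineContextLike ctx,
                                             List<String> dirsToList,
                                             Function<String, List<String>> listDir) {
    return ctx.map(dirsToList, listDir, dirsToList.size()).stream()
        .flatMap(List::stream)
        .collect(Collectors.toList());
  }
}
```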
* [HUDI-1481] Add structured streaming and DeltaStreamer clustering unit tests
* [HUDI-1399] Support an independent clustering Spark job to run clustering asynchronously
* [HUDI-1498] Read clustering plan from requested file for inflight instant (#2389)
* [HUDI-1399] Support an independent clustering Spark job where scheduling generates the instant time
Co-authored-by: satishkotha <satishkotha@uber.com>
* [HUDI-1276] [HUDI-1459] Make Clustering/ReplaceCommit and the Metadata Table compatible
* Use the file system view and JSON format from metadata; add tests
Co-authored-by: Satish Kotha <satishkotha@uber.com>
- Syncing to the metadata table, setting the operation type, and starting the async cleaner are now done in preWrite() (see the sketch below)
- Fixes an issue where delete() was not starting the async cleaner correctly
- Fixed tests and enabled the metadata table for TestAsyncCompaction
- TestHoodieBackedMetadata#testSync etc. now run for MOR tables
- HUDI-1502 is still pending and has issues for MOR/rollbacks
- Also addressed a bunch of code review comments.
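A minimal sketch of the preWrite() consolidation; the helper names are
hypothetical and only the ordering is the point:

```java
// Everything except the ordering is elided; real code would take the
// instant time and operation type from the write client's context.
abstract class WriteClientSketch {
  final void preWrite(String instantTime, String operationType) {
    syncTableMetadata();            // sync to the metadata table first
    setOperationType(operationType);
    startAsyncCleanerIfEnabled();   // now also runs for delete()
  }

  abstract void syncTableMetadata();
  abstract void setOperationType(String operationType);
  abstract void startAsyncCleanerIfEnabled();
}
```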