* [HUDI-2923] Fixing metadata table reader when metadata compaction is inflight
* Fixing retry of pending compaction in metadata table and enhancing tests
* Rebased `DFSPropertiesConfiguration` to access Hadoop config in lieu of FS to avoid confusion
* Fixed `readConfig` to take Hadoop's `Configuration` instead of FS;
Fixing usages
* Added test for local FS access
* Rebase to use `FSUtils.getFs`
* Combine properties provided as a file along w/ overrides provided from the CLI
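A minimal sketch of the merge precedence, assuming overrides supplied on the CLI should win over the file; the helper name is illustrative, not Hudi's actual utility:

```java
import java.util.Properties;

class PropsMergeSketch {
    // File-provided properties form the base; CLI overrides are applied last,
    // so a key supplied on the CLI replaces the file's value.
    static Properties combine(Properties fromFile, Properties fromCli) {
        Properties combined = new Properties();
        combined.putAll(fromFile);
        combined.putAll(fromCli);
        return combined;
    }
}
```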
* Added helper utilities to `HoodieClusteringConfig`;
Make sure corresponding config methods fall back to defaults;
* Fixed DeltaStreamer usage to respect properly combined configuration;
Abstracted `HoodieClusteringConfig.from` convenience utility to init Clustering config from `Properties`
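A minimal sketch of a `from`-style factory with fallback-to-default behavior; the field and builder surface shown are illustrative, not `HoodieClusteringConfig`'s exact API:

```java
import java.util.Properties;

class ClusteringConfigSketch {
    final boolean inlineClusteringEnabled;

    private ClusteringConfigSketch(boolean inlineClusteringEnabled) {
        this.inlineClusteringEnabled = inlineClusteringEnabled;
    }

    // Initialize from Properties, falling back to a default when the key is
    // absent (mirrors "config methods fall back to defaults" above).
    static ClusteringConfigSketch from(Properties props) {
        boolean inline = Boolean.parseBoolean(
            props.getProperty("hoodie.clustering.inline", "false"));
        return new ClusteringConfigSketch(inline);
    }
}
```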
* Tidying up
* `lint`
* Reverting changes to `HoodieWriteConfig`
* Tidying up
* Fixed incorrect merge of the props
* Converted `HoodieConfig` to wrap `TypedProperties` instead of `Properties`
* Fixed compilation
* Fixed compilation
* Downgraded error to warn logs in the timeline server for inconsistent state seen with concurrent writers
* Fixing bad request response exception for timeline out of sync
* Addressing feedback; removed the write concurrency mode dependency
* [HUDI-2443] Hudi KVComparator for all HFile writer usages
- Hudi relies on custom class shading of HBase's KeyValue.KVComparator to
avoid versioning and class-loading issues. A few places still use
HBase's comparator class directly, and version upgrades would make
them obsolete. Refactored the HoodieKVComparator and made all HFile
writer creation use the same shaded class.
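For context, the shaded-comparator pattern looks roughly like the sketch below (package placement is illustrative; `KeyValue.KVComparator` is the HBase 1.x API):

```java
package org.apache.hudi.common.util; // illustrative placement

import org.apache.hadoop.hbase.KeyValue;

// An empty subclass is enough: HFile records the comparator's class name in its
// file metadata, so owning that name under a Hudi-shaded package keeps the files
// readable across HBase version upgrades and relocated classpaths.
public class HoodieKVComparator extends KeyValue.KVComparator {
}
```

Writer creation can then pass this comparator uniformly, e.g. via the HFile writer factory's `withComparator(...)` hook.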
* [HUDI-2443] Hudi KVComparator for all HFile writer usages
- Moving HoodieKVComparator from common.bootstrap.index to common.util
* [HUDI-2443] Hudi KVComparator for all HFile writer usages
- Retaining the old HoodieKVComparator for the bootstrap case. Adding the
new comparator as HoodieKVComparatorV2 to differentiate it from the old
one.
* [HUDI-2443] Hudi KVComparator for all HFile writer usages
- Renamed HoodieKVComparatorV2 to HoodieMetadataKVComparator and moved it
under the package org.apache.hudi.metadata.
* Make comparator classname configurable
* Revert new config and address other review comments
Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>
- Adds support for generating commit timestamps with millisecond granularity.
- Older commit timestamps (in second granularity) will be suffixed with 999 and parsed with the millisecond format.
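A minimal sketch of that scheme, with illustrative method names (Hudi's actual logic lives in its instant-time generation code):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;

class InstantTimeSketch {
    // Millisecond-granularity instant format; the millis field is appended with
    // appendValue to sidestep Java 8's adjacent-pattern parsing issue with "SSS".
    static final DateTimeFormatter MILLIS_FORMAT = new DateTimeFormatterBuilder()
        .appendPattern("yyyyMMddHHmmss")
        .appendValue(ChronoField.MILLI_OF_SECOND, 3)
        .toFormatter();

    // Legacy instants are 14 characters (second granularity); suffixing "999"
    // lets them parse with the millisecond format, as described above.
    static String upgrade(String instantTime) {
        return instantTime.length() == 14 ? instantTime + "999" : instantTime;
    }

    public static void main(String[] args) {
        // "20211201093000" -> "20211201093000999" -> 2021-12-01T09:30:00.999
        System.out.println(LocalDateTime.parse(upgrade("20211201093000"), MILLIS_FORMAT));
    }
}
```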
* [HUDI-2795] Add mechanism to safely update, delete and recover table properties
- Fail-safe mechanism that lets queries succeed off a backup file
- Readers not yet upgraded to this version of the code will simply fail until recovery is done.
- Added unit tests that exercise all these scenarios.
- Added CLI support for recovery and updates to the table command.
- [Pending] Add some hash-based verification to guard against rare partial writes on HDFS
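A minimal sketch of the backup-based update flow, assuming the standard `hoodie.properties` location under the metadata folder; the exact file names and ordering in Hudi's implementation may differ:

```java
import java.io.IOException;
import java.util.Properties;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

class TablePropertiesUpdateSketch {
    static void safeUpdate(FileSystem fs, Path metaDir, Properties updated) throws IOException {
        Path propsFile = new Path(metaDir, "hoodie.properties");
        Path backupFile = new Path(metaDir, "hoodie.properties.backup");
        // 1. Copy the current file to a backup; a reader that finds the main
        //    file missing or partially written falls back to the backup.
        FileUtil.copy(fs, propsFile, fs, backupFile, false, fs.getConf());
        // 2. Delete and rewrite the main file with the updated properties.
        fs.delete(propsFile, false);
        try (FSDataOutputStream out = fs.create(propsFile, true)) {
            updated.store(out, "updated");
        }
        // 3. Only after a successful rewrite, drop the backup (the recovery point).
        fs.delete(backupFile, false);
    }
}
```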
* Fixing upgrade/downgrade infrastructure to use the new update method
- The metadata table today has virtual keys disabled, thereby populating the metafields
for each record written out and increasing the overall storage space used. Adding
virtual keys support for the metadata table so that metafields are disabled
for metadata table records.
- Adding a custom KeyGenerator for the metadata table so as to not rely on the
default Base/SimpleKeyGenerators, which currently look for the record key
and partition field set in the table config.
- AbstractHoodieLogRecordReader's processing of the next data block and
createHoodieRecord() becomes the generic version, with the derived class
HoodieMetadataMergedLogRecordReader taking care of the special creation of
records from explicitly passed-in partition names.
- ExternalSpillableMap does the payload/value size estimation on the first put to
determine when to spill over to the disk map. The payload size re-estimation also
happens after a minimum threshold of puts. This size re-estimation goes by the
current in-memory map size to calculate the average payload size, and attempts
a divide-by-zero operation when the map is empty. Avoiding the
ArithmeticException during the payload size re-estimate by checking the map size
upfront, as sketched below.
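A sketch of the guard, with illustrative names:

```java
class PayloadSizeEstimatorSketch {
    // Re-estimate the average payload size from the current in-memory footprint,
    // keeping the previous estimate when the map is empty to avoid the
    // ArithmeticException from dividing by zero.
    static long reEstimateAvgSize(long inMemoryBytes, int mapSize, long previousEstimate) {
        if (mapSize == 0) {
            return previousEstimate;
        }
        return inMemoryBytes / mapSize;
    }
}
```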
* [HUDI-1295] Hash ID generator util for Hudi table columns, partition and files
- Adding a new utility class HashID to generate 32-, 64-, and 128-bit hashes for any
given message of string or byte-array type. This class internally uses
MessageDigest and xxHash libraries.
- Adding stateful hash holders for Hudi table columns, partitions, and files to
pass around for the metaindex and to convert to base64-encoded strings whenever
needed
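A minimal sketch of the 128-bit case, assuming an MD5 `MessageDigest` (128-bit output) and base64 encoding; whether `HashID` uses MD5 specifically for 128 bits is an assumption here:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

class HashIdSketch {
    // 128-bit hash of a string message, base64-encoded for embedding in names.
    static String hash128Base64(String message) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5"); // 128-bit digest
            byte[] digest = md.digest(message.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 unavailable", e);
        }
    }
}
```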
* [HUDI-2285] Adding synchronous updates to the metadata table before completion of commits in the data timeline.
- This patch adds synchronous updates to the metadata table. In other words, every write is first committed to the metadata table and then to the data table. While reading the metadata table, we ignore any delta commits that are present only in the metadata table and not in the data table timeline (sketched below, after this commit's notes).
- Compaction of the metadata table is fenced by the condition that compaction is triggered only when there are no inflight requests in the data table. This ensures that all base files in the metadata table are always in sync with the data table (w/o any holes); only some extra, invalid commits may exist among the delta log files in the metadata table.
- Due to this, archival of the data table also fences itself up to the compacted instant in the metadata table.
- All writes to the metadata table happen within the data table lock, so the metadata table works in single-writer mode only. This might be tough to loosen, since all writers write to the same FILES partition and would conflict anyway.
- As part of this, added lock acquisition in the data table for committing operations that did not take one before (rollback, clean, compaction, clustering). To note, we are not doing any conflict resolution; all we do is commit while holding a lock, so that writes to the metadata table always come from a single writer.
- Also added a building block for multiple buckets per partition, which will be leveraged by other indexes such as the record-level index. For now, the FILES partition has only one bucket. In general, any number of buckets per partition is allowed, and each partition has a fixed fileId prefix with an incremental suffix for each bucket within the partition.
- Fixed [HUDI-2476]: retrying a compaction that succeeded in the metadata table on the first attempt but then failed on the data table.
- Enabling metadata table by default.
- Adding more tests for metadata table
Co-authored-by: Prashant Wason <pwason@uber.com>
- Added commit metadata infra to the test table so that we can test the entire metadata flow using the test table itself. These tests don't care about the contents of files as such, and hence we should be able to test all metadata code paths using the test table.
Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>
- Fixing packaging, naming of classes
- Use of log4j over slf4j for uniformity
- More follow-on fixes
- Added a version to control/coordinator events.
- Eliminated the config added to write config
- Fixed fetching of checkpoints based on table type
- Clean up of naming, code placement
Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
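The reader-side fencing from the notes above can be sketched as follows (names are illustrative): a metadata delta commit is honored only if its instant is also complete on the data timeline.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

class MetadataTimelineFilterSketch {
    // Drop delta commits that exist only on the metadata timeline, i.e. writes
    // that were committed to the metadata table but never completed on the
    // data table.
    static List<String> validMetadataInstants(List<String> metadataDeltaCommits,
                                              Set<String> completedDataInstants) {
        return metadataDeltaCommits.stream()
            .filter(completedDataInstants::contains)
            .collect(Collectors.toList());
    }
}
```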
* [HUDI-1989] Refactor clustering tests for MoR table
* refactor assertion helper
* add CheckedFunction
* SparkClientFunctionalTestHarness.java
* put back original test case
* move test cases out of TestHoodieMergeOnReadTable.java
* add TestHoodieSparkMergeOnReadTableRollback.java
* use SparkClientFunctionalTestHarness
* add tag
A failed deltacommit on the metadata table will be automatically rolled back. Assuming the failed commit was "t10", the rollback will happen the next time at "t11". Post rollback, when we try to sync the dataset to the metadata table, we should look for all unsynced instants, including t11. The current code ignores t11, since the latest commit timestamp on the metadata table is t11 (due to the rollback).
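A sketch of the corrected selection (names are illustrative): decide what to sync by membership in the set of already-synced data instants, rather than by comparing against the metadata table's latest commit timestamp, which may be the rollback's t11.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

class UnsyncedInstantsSketch {
    // t11 on the data table is still returned here even when the metadata
    // table already carries its own t11 from the rollback.
    static List<String> unsyncedInstants(List<String> dataTableInstants,
                                         Set<String> syncedDataInstants) {
        return dataTableInstants.stream()
            .filter(instant -> !syncedDataInstants.contains(instant))
            .collect(Collectors.toList());
    }
}
```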
* Add UUID to the folder name for External Spillable File System
* Fix to ensure that disk map folders do not interfere across users
* Fix test
* Fix test
* Rebase with latest master and address comments
* Add Shutdown Hooks for the Disk Map
Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
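A minimal sketch of the unique spill-folder naming and shutdown-hook cleanup described above; the base-path handling and folder prefix are illustrative:

```java
import java.io.File;
import java.util.UUID;

class DiskMapDirSketch {
    // A UUID-suffixed folder per disk map keeps concurrent users and processes
    // from interfering with each other's spill files.
    static File createSpillDir(String basePath) {
        File dir = new File(basePath, "hudi-diskmap-" + UUID.randomUUID());
        dir.mkdirs();
        // Best-effort cleanup of the spill folder when the JVM exits.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> deleteRecursively(dir)));
        return dir;
    }

    static void deleteRecursively(File file) {
        File[] children = file.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        file.delete();
    }
}
```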