- This adds a restore plan and serializes it to the `restore.requested` meta file in the timeline. This also means we are introducing schedule and execution phases for restore, which were not present before.
- This adds support in the Spark datasource to schedule table services inline, so that users can leverage async execution without needing lock providers.
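A minimal sketch of what this enables from the Spark datasource path, assuming a SparkSession and input DataFrame; the option key `hoodie.compact.schedule.inline` is an assumption based on the description above and should be verified against the release docs, and the table/path values are hypothetical:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("hudi-schedule-inline").getOrCreate()
val df = spark.read.parquet("/tmp/source") // hypothetical input path

df.write.format("hudi")
  .option("hoodie.table.name", "trips") // hypothetical table
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.partitionpath.field", "region")
  .option("hoodie.compact.inline", "false")         // do not execute compaction inline
  .option("hoodie.compact.schedule.inline", "true") // only schedule the plan
  .mode(SaveMode.Append)
  .save("/tmp/hudi/trips")
```

A separate async job can then execute the scheduled plan without a lock provider coordinating the two writers.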
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Today, base files carry a bloom filter in their footers, so index lookups
have to open the base file to perform any bloom checks. Even with
interval-tree-based file pruning, we still end up reading a significant
number of base files just to fetch their bloom filters during index
lookups for the keys. This lookup can be made more performant by storing
all the bloom filters in a new metadata partition and doing pointed
lookups by key.
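For orientation, a minimal sketch, assuming the Spark datasource write path, of switching on this bloom filter index and the companion column stats index; the `hoodie.metadata.index.*.enable` keys are assumptions drawn from this change and should be checked against the release configuration reference:

```scala
// Write options enabling the metadata table and the two new index partitions.
val metaIndexOpts = Map(
  "hoodie.metadata.enable" -> "true",                    // metadata table itself
  "hoodie.metadata.index.bloom.filter.enable" -> "true", // bloom filter partition
  "hoodie.metadata.index.column.stats.enable" -> "true"  // column stats partition
)
// Applied like any other write option: df.write.format("hudi").options(metaIndexOpts)...
```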
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Adding indexing support for clean, restore and rollback operations.
Each of these operations is now additionally converted into index records
for the bloom filter and column stats partitions.
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Making the hoodie key consistent for both the column stats and bloom indexes by
including fileId instead of fileName, in both read and write paths.
- Performance optimization for looking up records in the metadata table.
- Avoiding the multi-column sorting needed for HoodieBloomMetaIndexBatchCheckFunction
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- HoodieBloomMetaIndexBatchCheckFunction cleanup to remove unused classes
- Check base file existence before reading the file footer for bloom filters or column stats
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Updating the bloom index and column stats index to include the full file name
in the key instead of just the file id.
- Minor test fixes.
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Fixed the Flink commit method to handle metadata table update records across all partitions
- TestBloomIndex fixes
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- SparkHoodieBloomIndexHelper code simplification for various config modes
- Signature change for getBloomFilters() and getColumnStats(): callers now
pass in only the partitions and file names of interest, and the index key
is constructed internally from those parameters.
- KeyLookupHandle and KeyLookupResults code refactoring
- Metadata schema changes - removed the reserved field
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Removing HoodieColumnStatsMetadata and using HoodieColumnRangeMetadata instead.
Fixed the users of the removed class.
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Extending the meta index test to cover delete, compaction, clean
and restore table operations. Also fixed getBloomFilters()
and getColumnStats() to account for deleted entries.
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Addressing review comments: Javadoc for new classes, key sorting for
lookups, index method renames.
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Consolidated the bloom filter checking of keys into a single
HoodieMetadataBloomIndexCheckFunction instead of separate batch
and lazy modes. Removed all the configs around it.
- Made the metadata table partition file group count configurable.
- Fixed HoodieKeyLookupHandle to use an auto-closable file reader
when checking bloom filters and range keys.
- Config property renames. Test fixes.
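A hedged sketch of the file group count knob this commit introduces; the exact key names are assumptions patterned on Hudi's metadata config naming, and the counts shown are illustrative:

```scala
// Sizing the metadata index partitions; fixed once the partitions are bootstrapped.
val fileGroupOpts = Map(
  "hoodie.metadata.index.bloom.filter.file.group.count" -> "4",
  "hoodie.metadata.index.column.stats.file.group.count" -> "2"
)
```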
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Enabling column stats indexing for all columns by default
- Handling column stat generation errors and test update
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Metadata table partition file group count is taken from the existing file
slices when the table is bootstrapped.
- Refactored commit record preparation into the base class
- HoodieFileReader interface changes for filtering keys
- Multi-column and multi-data-type support for the column stats index
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Rebased onto latest master and merged fixes for the build and test failures
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Extending the metadata column stats type payload schema to include
more statistics about the column ranges to help query integration.
* [HUDI-1295] Metadata Index - Bloom filter and Column stats index to speed up index lookups
- Addressing review comments
This change addresses issues where the Metadata Table ingested duplicate records, leading it to persist incorrect file sizes for the files referenced by those records.
There were multiple issues leading to that:
- [HUDI-3322] Incorrect Rollback Plan generation: the Rollback Plan generated for MOR tables over-inclusively listed all log files with the latest base instant as affected by the rollback, leading to invalid MT records being ingested for those files.
- [HUDI-3343] Metadata Table including Uncommitted Log Files during Bootstrap: since the MT is bootstrapped at the end of the commit operation (after FS activity, but before committing to the timeline), it was incorrectly ingesting some files that were part of the intermediate state of the operation being committed.
This change unblocks the stack of PRs based off #4556
* [HUDI-2763] Metadata table records - support for key deduplication and virtual keys
- The backing log format for the metadata table is HFile, a key-value format.
Since the key field in the metadata record payload duplicates the
key in the HFile Cell, the redundant key field in the record can be emptied
to save on storage cost.
- HoodieHFileWriter and HoodieHFileDataBlock now serialize records
with the key field emptied by default. The HFile writer checks whether
the record has the metadata payload schema field 'key' and, if so, trims
the key from the record payload.
- HoodieHFileReader, when reading the serialized records back from disk,
materializes the missing key fields if any. The HFile reader checks whether
the record has the metadata payload schema field 'key' and, if so,
materializes the key in the record payload (see the sketch after this entry).
- Tests have been added to verify the default virtual keys and key
deduplication support for the metadata table records.
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
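A simplified, illustrative-only sketch of the trim/materialize scheme described above; the field name 'key' comes from the metadata payload schema, while the helper functions are hypothetical and not Hudi's implementation:

```scala
import org.apache.avro.generic.GenericRecord

// On write: if the payload schema carries a 'key' field, empty it; the HFile
// Cell key already stores the same value, so nothing is lost.
def trimKey(record: GenericRecord): GenericRecord = {
  if (record.getSchema.getField("key") != null) record.put("key", "")
  record
}

// On read: materialize the key back into the payload from the HFile Cell key.
def materializeKey(record: GenericRecord, cellKey: String): GenericRecord = {
  if (record.getSchema.getField("key") != null) record.put("key", cellKey)
  record
}
```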
- There is a chance that a write eventually fails on the data table even though its commit succeeded in the Metadata Table; if compaction was then triggered in the MDT, it could have included that uncommitted data, and once compacted, the data can never be ignored when reading from the metadata table. This patch fixes the bug by triggering metadata table compaction before applying the commit to the metadata table.
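A hedged pseudo-flow of that ordering fix; every name below is an illustrative stub rather than Hudi's API:

```scala
// Stubs standing in for the metadata-table writer surface (hypothetical).
trait MetadataWriter {
  def compactIfNecessary(): Unit          // runs MDT compaction if one is due
  def applyCommit(instant: String): Unit  // writes this commit's records to the MDT
}

// Compacting *before* applying the new commit guarantees compaction only ever
// bakes in data whose commit already succeeded on the data table.
def commitToMetadataTable(writer: MetadataWriter, instant: String): Unit = {
  writer.compactIfNecessary() // sees only previously committed instants
  writer.applyCommit(instant) // the possibly-failing commit lands afterwards
}
```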
* [HUDI-2909] Handle logical type in TimestampBasedKeyGenerator
TimestampBasedKeyGenerator was returning different values for the row-writer and non-row-writer paths. This patch fixes that, guarded by a config flag (`hoodie.datasource.write.keygenerator.consistent.logical.timestamp.enabled`).
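A minimal sketch of opting in; the key generator class is Hudi's `TimestampBasedKeyGenerator`, the flag is quoted from the change above, and the surrounding write options are assumed:

```scala
// Options to add to a Hudi Spark datasource write to get consistent logical
// timestamp handling across both writer paths.
val keyGenOpts = Map(
  "hoodie.datasource.write.keygenerator.class" ->
    "org.apache.hudi.keygen.TimestampBasedKeyGenerator",
  "hoodie.datasource.write.keygenerator.consistent.logical.timestamp.enabled" -> "true"
)
```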
* [HUDI-2154] Add index key field to HoodieKey
* [HUDI-2157] Add the bucket index and its read/write implementation for the Spark engine.
* Revert HUDI-2154: add index key field to HoodieKey
* Fix all comments and introduce a new tricky way to get the index key at runtime;
support double insert for bucket index
* Revert Spark read optimizer based on bucket index
* Add the storage layout
* Index tag, hash function and add unit tests
* Fix unit tests
* Address partial comments
* Code review feedback
* Add layout config and docs
* Fix unit tests
* Rename hoodie.layout and rebase master
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
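Taken together, the commits above add a hash-based bucket index. A hedged sketch of enabling it from a Spark write; the key names follow Hudi's config conventions but should be verified for this release, and the bucket count and hash field values are illustrative:

```scala
// Bucket index write options; the bucket count is fixed at table creation.
val bucketIndexOpts = Map(
  "hoodie.index.type" -> "BUCKET",
  "hoodie.storage.layout.type" -> "BUCKET", // the storage layout added above
  "hoodie.bucket.index.num.buckets" -> "256",
  "hoodie.bucket.index.hash.field" -> "uuid" // illustrative; often the record key
)
```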
- Made FileSystemBasedLockProviderTestClass thread-safe and fixed the
tryLock retry logic.
- Made TestHoodieClientMultiWriter.testHoodieClientBasicMultiWriter
deterministic in verifying the HoodieWriteConflictException.
* [HUDI-2923] Fixing metadata table reader when metadata compaction is inflight
* Fixing retry of pending compaction in metadata table and enhancing tests
- Changes the default config of marker type (HoodieWriteConfig.MARKERS_TYPE or hoodie.write.markers.type) from DIRECT to TIMELINE_SERVER_BASED for Spark Engine.
- Adds engine-specific marker type configs: Spark -> TIMELINE_SERVER_BASED, Flink -> DIRECT, Java -> DIRECT.
- Uses DIRECT markers for Spark structured streaming, since the timeline server is only available for the first mini-batch.
- Fixes the marker creation method for non-partitioned tables in TimelineServerBasedWriteMarkers.
- Adds a fallback to direct markers in WriteMarkersFactory even when TIMELINE_SERVER_BASED is configured: when HDFS is used, or the embedded timeline server is disabled, direct markers are used instead.
- Fixes the closing of timeline service.
- Fixes tests that depend on markers, mainly by starting the timeline service for each test.
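A minimal sketch of the resulting marker configuration surface; the keys are quoted from the description above, the values illustrative:

```scala
// After this change, Spark defaults to timeline-server-based markers; Flink
// and Java keep DIRECT. Disabling the embedded server falls back to DIRECT.
val markerOpts = Map(
  "hoodie.write.markers.type" -> "TIMELINE_SERVER_BASED", // new Spark default
  "hoodie.embed.timeline.server" -> "true"
)
```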
* Renamed `ZCurveOptimizeHelper` to `ZOrderingIndexHelper`;
moved the Z-index helper under the `hudi.index.zorder` package
* Tidying up `ZOrderingIndexHelper`
* Fixing compilation
* Fixed the new/original index table merging sequence to always prefer values from the new index;
Cleaned up `HoodieSparkUtils`
* Added test for `mergeIndexSql`
* Abstracted Z-index name composition w/in `ZOrderingIndexHelper`;
* Fixed `DataSkippingUtils` to interrupt pruning in case a data filter contains a non-indexed column reference
* Properly handle exceptions originating during pruning in `HoodieFileIndex`
* Make sure no errors are logged upon encountering `AnalysisException`
* Cleaned up Z-index updating sequence;
Tidying up comments, java-docs;
* Fixed Z-index to properly handle changes of the list of clustered columns
* Tidying up
* `lint`
* Suppressing `JavaDocStyle` first sentence check
* Fixed compilation
* Fixing incorrect `DecimalType` conversion
* Refactored test `TestTableLayoutOptimization`
- Added Z-index table composition test (against fixtures)
- Separated out GC test;
Tidying up
* Fixed tests re-shuffling column order for Z-Index table `DataFrame` to align w/ the one by one loaded from JSON
* Scaffolded `DataTypeUtils` to do basic checks of Spark types;
Added proper compatibility checking b/w old/new index-tables
* Added test for Z-index tables merging
* Fixed import being shaded by creating internal `hudi.util` package
* Fixed packaging for `TestOptimizeTable`
* Revised `updateMetadataIndex` seq to provide Z-index updating process w/ source table schema
* Make sure existing Z-index table schema is sync'd to source table's one
* Fixed shaded refs
* Fixed tests
* Fixed type conversion of Parquet provided metadata values into Spark expected schemas
* Fixed `composeIndexSchema` utility to propose proper schema
* Added more tests for Z-index:
- Checking that Z-index table is built correctly
- Checking that Z-index tables are merged correctly (during update)
* Fixing source table
* Fixing tests to read from Parquet w/ proper schema
* Refactored `ParquetUtils` utility reading stats from Parquet footers
* Fixed incorrect handling of Decimals extracted from Parquet footers
* Worked around issues in javac failing to compile the stream's collection
* Fixed handling of `Date` type
* Fixed handling of `DateType` to be parsed as `LocalDate`
* Updated fixture;
Make sure test loads Z-index fixture using proper schema
* Removed superfluous schema adjusting when reading from Parquet, since Spark is actually able to perfectly restore the schema (given the Parquet was previously written by Spark as well)
* Fixing race condition in Parquet's `DateStringifier` trying to share a `SimpleDateFormat` object, which is inherently not thread-safe
* Tidying up
* Make sure schema is used upon reading to validate input files are in the appropriate format;
Tidying up;
* Worked around javac (1.8) inability to infer expression type properly
* Updated fixtures;
Tidying up
* Fixing compilation after rebase
* Asserted that clustering is applied in Z-order layout optimization testing
* Tidying up exception messages
* Added test validating Z-index lookup filter correctness
* Added more test-cases;
Tidying up
* Added tests for string expressions
* Fixed incorrect Z-index filter lookup translations
* Added more test-cases
* Added proper handling of complex negations of AND/OR expressions by pushing the NOT operator down into the inner expressions
* Added `-target:jvm-1.8` for `hudi-spark` module
* Adding more tests
* Added tests for non-indexed columns
* Properly handle non-indexed columns by falling back to rewriting the containing expression as `TrueLiteral` instead
* Fixed tests
* Removing the parquet test files and disabling corresponding tests
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
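Taken together, this series wires Z-ordering into clustering on the write side and a Z-index-backed data-skipping path on the read side. A hedged usage sketch, assuming config keys patterned on this change set (verify against the release docs); the sort columns are illustrative:

```scala
// Write side: cluster with a Z-order sort over the chosen columns.
val zOrderWriteOpts = Map(
  "hoodie.clustering.inline" -> "true",
  "hoodie.layout.optimize.enable" -> "true",
  "hoodie.layout.optimize.strategy" -> "z-order",
  "hoodie.clustering.plan.strategy.sort.columns" -> "c1,c2"
)
// Read side: let the Z-index prune files; per the fixes above, a filter over a
// non-indexed column degrades to a TrueLiteral rewrite instead of failing.
val dataSkippingReadOpts = Map("hoodie.enable.data.skipping" -> "true")
```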