- Introduced an internal metadata table that stores file listings.
- The metadata table is kept up to date with each action committed on the data timeline.
- Fixed handling of CleanerPlan.
- [HUDI-842] Reduce parallelism to speed up the test.
- [HUDI-842] Implementation of CLI commands for metadata operations and lookups.
- [HUDI-842] Extend rollback metadata to include the files which have been appended to.
- [HUDI-842] Support for rollbacks in MOR Table.
- MarkerBasedRollbackStrategy needs to correctly provide the list of files for which rollback blocks were appended.
- [HUDI-842] Added unit test for rollback of partial commits (inflight but not completed yet).
- [HUDI-842] Handled the error case where metadata update succeeds but dataset commit fails.
- [HUDI-842] Schema evolution strategy for Metadata Table. Each type of metadata saved (FilesystemMetadata, ColumnIndexMetadata, etc.) will be a separate field with default null. The type of the record will identify the valid field. This way, we can grow the schema when a new type of information needs to be saved, while still keeping it backward compatible.
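For illustration, a minimal sketch of that layout using Avro's SchemaBuilder; the record and field names here are assumptions, not the actual Hudi schema:
```java
import java.util.Arrays;
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

public class MetadataSchemaSketch {
  // Illustrative only: one nullable field per metadata type; "type" identifies
  // which field is populated.
  static Schema metadataRecordSchema() {
    Schema filesystemMetadata = SchemaBuilder.record("FilesystemMetadata").fields()
        .requiredLong("size")
        .requiredBoolean("isDeleted")
        .endRecord();
    return SchemaBuilder.record("HoodieMetadataRecord").fields()
        .requiredString("key")
        .requiredInt("type") // discriminator for the payload fields below
        .name("filesystemMetadata")
            .type(Schema.createUnion(Arrays.asList(
                Schema.create(Schema.Type.NULL), filesystemMetadata)))
            .withDefault(null)
        // a future ColumnIndexMetadata would be added as another nullable
        // field, which is a backward-compatible Avro schema change
        .endRecord();
  }
}
```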
- [HUDI-842] Fix non-partitioned case and speed up initial creation of the metadata table. Choose only one partition for the JavaSparkContext as the number of records is low (hundreds to thousands). Creating a large number of partitions for the JavaRDD adds overhead and slows down operations like WorkloadProfile.
For the non-partitioned case, use "." as the name of the partition to prevent empty keys in HFile.
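A minimal sketch of that key normalization (the constant and method names are illustrative):
```java
// HFile keys cannot be empty, so the non-partitioned case maps to ".".
static final String NON_PARTITIONED = ".";

static String partitionForKey(String relativePartitionPath) {
  return (relativePartitionPath == null || relativePartitionPath.isEmpty())
      ? NON_PARTITIONED
      : relativePartitionPath;
}
```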
- [HUDI-842] Reworked metrics publishing.
- The code has been split into reader and writer sides. HoodieMetadata code is accessed via HoodieTable.metadata(), which returns the metadata instance for the table.
Code is serializable to allow executors to use the functionality.
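A hypothetical usage sketch of the reader side; method names follow the description above and may not match the exact API (Hudi imports omitted):
```java
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

// Hypothetical: HoodieTable and HoodieMetadata as described in the text above.
static void listFilesViaMetadata(HoodieTable table, String basePath) throws Exception {
  HoodieMetadata metadata = table.metadata(); // entry point named above
  for (String partition : metadata.getAllPartitionPaths()) {
    FileStatus[] files = metadata.getAllFilesInPartition(new Path(basePath, partition));
    System.out.println(partition + ": " + files.length + " files");
  }
}
```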
- [RFC-15] Add metrics to track the time for each file system call.
- [RFC-15] Added a distributed metrics registry for spark which can be used to collect metrics from executors. This helps create a stats dashboard which shows the metadata table improvements in real-time for production tables.
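As a sketch of the underlying idea, plain Spark accumulators already give this executor-to-driver aggregation; Hudi's registry generalizes the pattern (names below are illustrative):
```java
import java.util.Arrays;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.LongAccumulator;

// Executors add() their measurements; the driver reads the aggregated value
// and reports it to the dashboard.
static void reportFileSystemCallTime(JavaSparkContext jsc) {
  LongAccumulator listFilesMs = jsc.sc().longAccumulator("metadata.listFiles.totalMs");
  jsc.parallelize(Arrays.asList(1, 2, 3)).foreach(i -> {
    long start = System.currentTimeMillis();
    // ... file system call on the executor ...
    listFilesMs.add(System.currentTimeMillis() - start);
  });
  System.out.println("listFiles total: " + listFilesMs.value() + " ms");
}
```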
- [HUDI-1321] Created HoodieMetadataConfig to specify configuration for the metadata table. This is safer than a full-fledged configuration for the metadata table (like HoodieWriteConfig), which would make it burdensome to tune the metadata. With limited configuration, we can control the performance of the metadata table closely.
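A sketch of how the limited config plugs into the write config; the builder method names follow later Hudi releases and should be read as assumptions at this point (Hudi imports omitted):
```java
// Only a handful of metadata knobs are exposed, per the rationale above.
static HoodieWriteConfig writeConfigWithMetadata(String basePath) {
  return HoodieWriteConfig.newBuilder()
      .withPath(basePath)
      .withMetadataConfig(HoodieMetadataConfig.newBuilder()
          .enable(true)   // the main switch
          .build())
      .build();
}
```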
[HUDI-1319][RFC-15] Adding interfaces for HoodieMetadata, HoodieMetadataWriter (apache#2266)
- moved MetadataReader to HoodieBackedTableMetadata, under the HoodieTableMetadata interface
- moved MetadataWriter to HoodieBackedTableMetadataWriter, under the HoodieTableMetadataWriter interface
- Pulled all the metrics into HoodieMetadataMetrics
- Writer now wraps the metadata, instead of extending it
- New enum for MetadataPartitionType
- Streamlined code flow inside HoodieBackedTableMetadataWriter w.r.t initializing metadata state
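Roughly, the split leaves the two interfaces shaped as below; the signatures are illustrative, based on the descriptions above, not the exact definitions:
```java
import java.io.IOException;
import java.io.Serializable;
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hudi.common.model.HoodieCommitMetadata;

// Illustrative shape only; the actual Hudi signatures may differ.
interface HoodieTableMetadata {
  List<String> getAllPartitionPaths() throws IOException;
  FileStatus[] getAllFilesInPartition(Path partitionPath) throws IOException;
}

interface HoodieTableMetadataWriter extends Serializable {
  // Apply a completed instant's commit metadata to the metadata table.
  void update(HoodieCommitMetadata commitMetadata, String instantTime);
}
```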
- [HUDI-1319] Make async operations work with metadata table (apache#2332)
- Changes the syncing model to only move over completed instants on data timeline
- Syncing happens postCommit and on writeClient initialization
- Latest delta commit on the metadata table is sufficient as the watermark for data timeline archival
- Cleaning/Compaction use a suffix to the last instant written to the metadata table, such that we keep the 1-1 mapping between data and metadata timelines (see the sketch below)
- Got rid of a lot of the complexity around checking for valid commits when opening base/log files
- Tests now use local FS, to simulate more failure scenarios
- Some failure scenarios exposed HUDI-1434, which is needed for MOR to work correctly
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
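A small sketch of the suffixing scheme mentioned above; the suffix values are assumptions for illustration:
```java
// Cleaning/compaction do not create their own metadata-timeline instants;
// they are written under the last data-timeline instant plus a fixed suffix,
// keeping metadata instants unique and 1-1 with data instants.
static String metadataInstantTime(String lastDataInstantTime, String suffix) {
  return lastDataInstantTime + suffix; // e.g. "20210101120000" + "001"
}
```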
* [HUDI-1434] fix incorrect log file path in HoodieWriteStat
* HoodieWriteHandle#close() returns a list of WriteStatus objects
* Handle rolled-over log files and return a WriteStatus per log file written
- Combined data and delete block logging into a single call
- Lazily initialize and manage write status based on the returned AppendResult (see the sketch below)
- Use FSUtils.getFileSize() to set final file size, consistent with other handles
- Added tests around returned values in AppendResult
- Added validation of the file sizes returned in write stat
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
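A hedged sketch of the per-log-file accounting these changes describe; the class and method details are illustrative, not the actual handle internals (Hudi imports omitted):
```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative: one WriteStatus per distinct log file written, covering rollover.
class AppendStatusTracker {
  private final Map<String, WriteStatus> statusPerLogFile = new HashMap<>();
  private final FileSystem fs;

  AppendStatusTracker(FileSystem fs) {
    this.fs = fs;
  }

  void onAppend(AppendResult result) {
    // Lazily create a status the first time each log file is seen.
    String logFilePath = result.logFile().getPath().toString();
    statusPerLogFile.computeIfAbsent(logFilePath, p -> new WriteStatus());
  }

  List<WriteStatus> close() throws IOException {
    List<WriteStatus> statuses = new ArrayList<>();
    for (Map.Entry<String, WriteStatus> e : statusPerLogFile.entrySet()) {
      // Final size comes from the file system, consistent with other handles.
      long size = FSUtils.getFileSize(fs, new Path(e.getKey()));
      e.getValue().getStat().setFileSizeInBytes(size);
      statuses.add(e.getValue());
    }
    return statuses;
  }
}
```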
Some field type changes are allowed (e.g. int -> long) while maintaining schema backward compatibility within HUDI. The check was reversed, with the reader schema being passed as the write schema.
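The direction of the check is the crux. A plain-Avro illustration (not Hudi's exact check) of why the argument order matters:
```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;
import org.apache.avro.SchemaCompatibility.SchemaCompatibilityType;

// A reader with a long field can read data written with an int field via type
// promotion, but not the other way around; swapping the arguments rejects
// this legal evolution.
public class CompatDirection {
  public static void main(String[] args) {
    Schema writer = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"R\",\"fields\":[{\"name\":\"f\",\"type\":\"int\"}]}");
    Schema reader = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"R\",\"fields\":[{\"name\":\"f\",\"type\":\"long\"}]}");

    // Correct order: reader first, writer second -> COMPATIBLE
    System.out.println(SchemaCompatibility
        .checkReaderWriterCompatibility(reader, writer).getType()
        == SchemaCompatibilityType.COMPATIBLE); // true

    // Reversed order (the bug): an int reader cannot read long data
    System.out.println(SchemaCompatibility
        .checkReaderWriterCompatibility(writer, reader).getType()
        == SchemaCompatibilityType.COMPATIBLE); // false
  }
}
```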
* [HUDI-1350] Support Partition level delete API in HUDI
* [HUDI-1350] Support Partition level delete API in HUDI, based on InsertOverwriteCommitAction (usage sketched below)
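A hypothetical usage sketch; the client method name and signature follow later Hudi releases and are assumptions here (Hudi imports omitted):
```java
import java.util.Arrays;
import java.util.List;

// Drop whole partitions in one commit, rather than deleting record by record.
static void dropPartitions(SparkRDDWriteClient<?> client) {
  List<String> partitions = Arrays.asList("2020/10/01", "2020/10/02");
  String instantTime = client.startCommit();
  client.deletePartitions(partitions, instantTime); // assumed API
}
```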
* Added ability to pass in `properties` to payload methods, so they can perform table/record specific merges (see the sketch below)
* Added default methods so existing payload classes are backwards compatible.
* Adding DefaultHoodiePayload to honor ordering while merging two records
* Fixing default payload based on feedback
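A sketch of a payload that uses the new `properties` argument to honor ordering; the property key and class name are illustrative assumptions (Hudi payload import omitted):
```java
import java.io.IOException;
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.IndexedRecord;
import org.apache.hudi.common.util.Option;

// Illustrative: keeps whichever record has the higher ordering value.
public class OrderingAwarePayload extends OverwriteWithLatestAvroPayload {

  public OrderingAwarePayload(GenericRecord record, Comparable orderingVal) {
    super(record, orderingVal);
  }

  @Override
  public Option<IndexedRecord> combineAndGetUpdateValue(IndexedRecord currentValue,
      Schema schema, Properties properties) throws IOException {
    // The property key here is an assumption for illustration.
    String orderingField = properties.getProperty("hoodie.payload.ordering.field", "ts");
    GenericRecord incoming = (GenericRecord) getInsertValue(schema).get();
    Comparable current = (Comparable) ((GenericRecord) currentValue).get(orderingField);
    Comparable arrived = (Comparable) incoming.get(orderingField);
    return arrived.compareTo(current) >= 0 ? Option.of(incoming) : Option.of(currentValue);
  }
}
```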
* Fix flaky MOR unit test
* Update Spark APIs to make them compatible with both Spark 2 & Spark 3
* Refactor the bulk insert v2 part so that Hudi can compile with Spark 3
* Add a spark3 profile to handle the fasterxml & Spark versions
* Create hudi-spark-common module & refactor hudi-spark related modules
Co-authored-by: Wenning Ding <wenningd@amazon.com>
Remove APIs in `HoodieTestUtils`
- `createCommitFiles`
- `createDataFile`
- `createNewLogFile`
- `createCompactionRequest`
Migrated usages in `TestCleaner#testPendingCompactions`.
Also improved some API names in `HoodieTestTable`.
Migrate deprecated APIs in HoodieTestUtils to HoodieTestTable for test classes
- TestClientRollback
- TestCopyOnWriteRollbackActionExecutor
Use FileCreateUtils APIs in CompactionTestUtils.
Then remove unused deprecated APIs after migration.
* [HUDI-995] Use HoodieTestTable in more classes
Migrate test data prep logic in
- TestStatsCommand
- TestHoodieROTablePathFilter
Re-implement methods for creating new commit times in HoodieTestUtils and HoodieClientTestHarness
- Move relevant APIs to HoodieTestTable
- Migrate usages
After changing to HoodieTestTable APIs, removed unused deprecated APIs in HoodieTestUtils
Add new payload (OverwriteNonDefaultsWithLatestAvroPayload) for updating specified fields in storage
## Brief change log
The default payload OverwriteWithLatestAvroPayload overwrites the whole record when compared on `orderingVal`. This doesn't meet our needs when we just want to change specified fields; the new payload updates the current value only for the fields you want to change.
For example: (suppose Default value is null)
```
Current value
Field: name age gender
Value: karl 20 male
```
```
Insert value
Field: name age gender
Value: null 30 null
```
```
After insert:
Field: name age gender
Value: karl 30 male
```
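A condensed sketch of that merge rule, assuming (as in the example) that each field's default is null; this is not the payload's exact code:
```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

// Start from the stored record and overwrite only fields where the incoming
// record has a non-default (here: non-null) value.
static GenericRecord mergeNonDefaults(GenericRecord stored, GenericRecord incoming, Schema schema) {
  GenericRecord result = new GenericData.Record(schema);
  for (Schema.Field field : schema.getFields()) {
    Object incomingVal = incoming.get(field.name());
    // null means "not provided": keep the stored value (karl, male above);
    // otherwise take the incoming value (age 30).
    result.put(field.name(), incomingVal == null ? stored.get(field.name()) : incomingVal);
  }
  return result;
}
```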
## Verify this pull request
Added TestOverwriteNonDefaultsWithLatestAvroPayload to verify the change.
* [HUDI-1181] Fix decimal type display issue for record key field
* Remove getNestedFieldVal method from DataSourceUtils
* resolve comments
Co-authored-by: Wenning Ding <wenningd@amazon.com>
When unit tests are run on shared machines (e.g. a Jenkins cluster), they sometimes fail with a BindException while starting the HDFS cluster, because the chosen port may already be bound by another process on the same machine. The fix is to retry the port selection a few times.
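A minimal sketch of the retry loop, assuming the Hadoop MiniDFSCluster test harness; the attempt count is illustrative:
```java
import java.net.BindException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Retry cluster startup when another process grabbed the chosen port.
static MiniDFSCluster startDfsWithRetries(Configuration conf, int maxAttempts) throws Exception {
  for (int attempt = 1; ; attempt++) {
    try {
      return new MiniDFSCluster.Builder(conf).build(); // picks ports per attempt
    } catch (BindException e) {
      if (attempt >= maxAttempts) {
        throw e; // give up after a few tries
      }
      // Port collision on the shared machine; loop and try again.
    }
  }
}
```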
- Introduce HoodieWriteableTestTable for writing records into files
- Migrate writeParquetFiles() in HoodieClientTestUtils to HoodieWriteableTestTable
- Adopt HoodieWriteableTestTable for test cases in
- ITTestRepairsCommand.java
- TestHoodieIndex.java
- TestHoodieKeyLocationFetchHandle.java
- TestHoodieGlobalBloomIndex.java
- TestHoodieBloomIndex.java
- Renamed HoodieTestTable and FileCreateUtils APIs
- dataFile changed to baseFile
* [HUDI-960] Implementation of the HFile base and log file format.
1. Includes HFileWriter and HFileReader
2. Includes HFileInputFormat for both snapshot and realtime input format for Hive
3. Unit test for new code
4. IT for using HFile format and querying using Hive (Presto and SparkSQL are not supported)
Advantage:
HFile file format saves data as binary key-value pairs. This implementation chooses the following values:
1. Key = Hoodie Record Key (as bytes)
2. Value = Avro encoded GenericRecord (as bytes)
HFile allows efficient lookup of a record by key or range of keys. Hence, this base file format is well suited to applications like RFC-15, RFC-08 which will benefit from the ability to lookup records by key or search in a range of keys without having to read the entire data/log format.
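For illustration, the key/value encoding above can be produced with plain Avro APIs (the HFile writer wiring is omitted); `_hoodie_record_key` is Hudi's record key meta field:
```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class HFileEncodingSketch {
  // Key = Hoodie record key as bytes.
  static byte[] key(GenericRecord record) {
    return record.get("_hoodie_record_key").toString().getBytes(StandardCharsets.UTF_8);
  }

  // Value = Avro binary-encoded GenericRecord as bytes.
  static byte[] value(GenericRecord record, Schema schema) throws IOException {
    GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    writer.write(record, encoder);
    encoder.flush();
    return out.toByteArray();
  }
}
```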
Limitations:
HFile storage format has certain limitations when used as a general purpose data storage format.
1. Does not have an implemented reader for Presto or SparkSQL
2. Is not a columnar file format and hence may lead to lower compression levels and greater IO on query side due to lack of column pruning
Other changes:
- Remove databricks/avro from pom
- Fix HoodieClientTestUtils to not use Scala imports / reflection-based conversion, etc.
- Breaking up limitFileSize(), per parquet and hfile base files
- Added three new configs for HoodieHFileConfig - prefetchBlocksOnOpen, cacheDataInL1, dropBehindCacheCompaction
- Throw UnsupportedOperationException in HFileReader.getRecordKeys()
- Updated HoodieCopyOnWriteTable to create the correct merge handle (HoodieSortedMergeHandle for HFile and HoodieMergeHandle otherwise)
* Fixing checkstyle
Co-authored-by: Vinoth Chandar <vinoth@apache.org>