Addresses leaks and performance degradation observed during testing. These were regressions from the original RFC-15 PoC implementation.
* Pass a single instance of HoodieTableMetadata everywhere
* Fix tests and add config for enabling metrics
- Removed special casing of assumeDatePartitioning inside FSUtils#getAllPartitionPaths()
- Consequently, IOException is no longer thrown, and many source files had to be adjusted
- More diligent handling of open file handles in metadata table
- Added config for controlling reuse of connections (see the config sketch below)
- Added config for turning off the fallback to listing, so we can see tests fail
- Changed all InputFormat listing code to cache/amortize the open/close for better performance
- Timelineserver also reuses connections, for better performance
- Without timelineserver, when metadata table is opened from executors, reuse is not allowed
- HoodieMetadataConfig is now passed into HoodieTableMetadata#create as an argument.
- Fix TestHoodieBackedTableMetadata#testSync
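A rough, hedged sketch of the intended wiring. The only facts taken from the notes above are that a single HoodieTableMetadata instance is shared and that HoodieMetadataConfig is passed into HoodieTableMetadata#create; the builder methods, argument order, and package locations are assumptions.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hudi.common.config.HoodieMetadataConfig;
import org.apache.hudi.common.engine.HoodieEngineContext;
import org.apache.hudi.metadata.HoodieTableMetadata;

public class MetadataReaderSketch {
  // Illustrative only: signatures are assumptions, not the exact API.
  static List<String> listPartitions(HoodieEngineContext ctx, String basePath,
                                     String spillableDir) throws IOException {
    HoodieMetadataConfig config = HoodieMetadataConfig.newBuilder()
        .enable(true) // serve listings from the metadata table
        .build();
    // One shared instance, created once and passed everywhere it is needed.
    HoodieTableMetadata metadata =
        HoodieTableMetadata.create(ctx, config, basePath, spillableDir);
    return metadata.getAllPartitionPaths();
  }
}
```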
* [HUDI-1479] Use HoodieEngineContext to parallelize fetching of partition paths (sketched below)
* Adding testClass for FileSystemBackedTableMetadata
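A hedged sketch of the HUDI-1479 idea: fan per-directory listing work out through the engine context instead of a raw jsc.parallelize(). The map(list, function, parallelism) shape is an assumption about the HoodieEngineContext API.

```java
import java.util.List;

import org.apache.hudi.common.engine.HoodieEngineContext;

public class ParallelListingSketch {
  static List<String> fetchPartitionPaths(HoodieEngineContext ctx, List<String> dirsToList) {
    int parallelism = Math.max(1, Math.min(dirsToList.size(), 100)); // illustrative cap
    return ctx.map(dirsToList, dir -> {
      // Each task would list `dir` on the FileSystem here and return the
      // partition path if it contains data files; elided in this sketch.
      return dir;
    }, parallelism);
  }
}
```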
Co-authored-by: Nishith Agarwal <nagarwal@uber.com>
- TestHoodieBackedMetadata#testSync etc now run for MOR tables
- HUDI-1502 is still pending and has issues for MOR/rollbacks
- Also addressed a bunch of code review comments.
- Introduced an internal metadata table that stores file listings.
- The metadata table is kept up to date with changes to the dataset
- Fixed handling of CleanerPlan.
- [HUDI-842] Reduce parallelism to speed up the test.
- [HUDI-842] Implementation of CLI commands for metadata operations and lookups.
- [HUDI-842] Extend rollback metadata to include the files which have been appended to.
- [HUDI-842] Support for rollbacks in MOR Table.
- MarkerBasedRollbackStrategy needs to correctly provide the list of files for which rollback blocks were appended.
- [HUDI-842] Added unit test for rollback of partial commits (inflight but not completed yet).
- [HUDI-842] Handled the error case where metadata update succeeds but dataset commit fails.
- [HUDI-842] Schema evolution strategy for Metadata Table. Each type of metadata saved (FilesystemMetadata, ColumnIndexMetadata, etc.) will be a separate field with default null. The type of the record will identify the valid field. This way, we can grow the schema when new types of information are saved, while still keeping it backward compatible (see the schema sketch below).
- [HUDI-842] Fix non-partitioned case and speed up initial creation of metadata table. Choose only 1 partition for the jsc as the number of records is low (hundreds to thousands). There is more overhead in creating a large number of partitions for the JavaRDD, and it slows down operations like WorkloadProfile.
For the non-partitioned case, use "." as the name of the partition to prevent empty keys in HFile.
- [HUDI-842] Reworked metrics publishing.
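A hedged sketch of the schema evolution strategy described above, built with Avro's SchemaBuilder: one nullable field per payload type (default null) plus a discriminator, so new payload types become new optional fields. The record and field names are illustrative, not the actual schema.

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

public class MetadataSchemaSketch {
  static Schema metadataRecordSchema(Schema filesystemMetadata, Schema columnIndexMetadata) {
    return SchemaBuilder.record("HoodieMetadataRecord").fields()
        .requiredString("key")
        .requiredInt("type") // identifies which optional field below is populated
        .name("filesystemMetadata").type().unionOf().nullType().and()
            .type(filesystemMetadata).endUnion().nullDefault()
        .name("columnIndexMetadata").type().unionOf().nullType().and()
            .type(columnIndexMetadata).endUnion().nullDefault()
        .endRecord();
  }
}
```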
- Code has been split into reader and writer sides. HoodieMetadata code is accessed via HoodieTable.metadata(), which returns the metadata instance for the table.
Code is serializable to allow executors to use the functionality.
- [RFC-15] Add metrics to track the time for each file system call (see the timing sketch below).
- [RFC-15] Added a distributed metrics registry for spark which can be used to collect metrics from executors. This helps create a stats dashboard which shows the metadata table improvements in real-time for production tables.
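A hedged sketch of the per-call timing idea: wrap the filesystem call, measure the duration, and publish it to a named registry. The Registry class and its getRegistry/add method names are assumptions modeled on the distributed registry described above.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hudi.common.metrics.Registry;

public class TimedListingSketch {
  static FileStatus[] timedListStatus(FileSystem fs, Path path) throws IOException {
    long start = System.nanoTime();
    FileStatus[] statuses = fs.listStatus(path); // the timed filesystem call
    long durationMs = (System.nanoTime() - start) / 1_000_000;
    // Publish to a named registry so executors' metrics can be aggregated.
    Registry.getRegistry("HoodieMetadata").add("listStatus.totalDuration", durationMs);
    return statuses;
  }
}
```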
- [HUDI-1321] Created HoodieMetadataConfig to specify configuration for the metadata table. This is safer than exposing full-fledged properties for the metadata table (like HoodieWriteConfig), which would make it burdensome to tune. With limited configuration, we can control the performance of the metadata table closely.
[HUDI-1319][RFC-15] Adding interfaces for HoodieMetadata, HoodieMetadataWriter (apache#2266)
- moved MetadataReader to HoodieBackedTableMetadata, under the HoodieTableMetadata interface
- moved MetadataWriter to HoodieBackedTableMetadataWriter, under the HoodieTableMetadataWriter interface
- Pulled all the metrics into HoodieMetadataMetrics
- Writer now wraps the metadata, instead of extending it (see the sketch below)
- New enum for MetadataPartitionType
- Streamlined code flow inside HoodieBackedTableMetadataWriter w.r.t initializing metadata state
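A trimmed-down sketch of the resulting reader/writer shape: the reader is an interface, and the writer wraps a reader instance instead of extending it. Names are illustrative stand-ins for the classes above.

```java
import java.io.IOException;
import java.util.List;

interface TableMetadataSketch {
  List<String> getAllPartitionPaths() throws IOException;
}

class TableMetadataWriterSketch {
  private final TableMetadataSketch metadata; // composition, not inheritance

  TableMetadataWriterSketch(TableMetadataSketch metadata) {
    this.metadata = metadata;
  }

  void update(/* commit metadata */) {
    // Apply the commit to the metadata table; reads go through `metadata`.
  }
}
```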
- [HUDI-1319] Make async operations work with metadata table (apache#2332)
- Changes the syncing model to only move over completed instants on data timeline
- Syncing happens postCommit and on writeClient initialization
- Latest delta commit on the metadata table is sufficient as the watermark for data timeline archival
- Cleaning/Compaction use a suffix to the last instant written to the metadata table, such that we keep the 1-1 mapping between data and metadata timelines (see the suffix sketch below)
- Got rid of a lot of the complexity around checking for valid commits during open of base/log files
- Tests now use local FS, to simulate more failure scenarios
- Some failure scenarios exposed HUDI-1434, which is needed for MOR to work correctly
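A hedged sketch of the instant-suffixing convention: derive a unique metadata-table instant from the last synced data-table instant by appending a fixed suffix, preserving the 1-1 timeline mapping. The suffix values below are assumptions, not the ones Hudi actually uses.

```java
public class InstantSuffixSketch {
  static final String CLEAN_SUFFIX = "001";      // assumption, not the real value
  static final String COMPACTION_SUFFIX = "002"; // assumption, not the real value

  // e.g. data instant "20210101120000" -> metadata clean instant "20210101120000001"
  static String metadataInstantForClean(String lastDataInstant) {
    return lastDataInstant + CLEAN_SUFFIX;
  }
}
```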
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
- This change breaks `hudi-client` into `hudi-client-common` and `hudi-spark-client` modules
- Simple usages of Spark via jsc.parallelize() have been redone using EngineContext#map, EngineContext#flatMap, etc.
- Code changes in the PR break classes into `BaseXYZ` parent classes, with no Spark dependencies, living in `hudi-client-common`
- Classes in `hudi-spark-client` are named `SparkXYZ`, extending the parent classes with all the Spark dependencies (see the sketch below)
- To simplify/cleanup, HoodieIndex#fetchRecordLocation has been removed and its usages in tests replaced with alternatives
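A minimal sketch of the split, using the same `BaseXYZ`/`SparkXYZ` placeholders the notes use; the method is illustrative.

```java
import org.apache.hudi.common.engine.HoodieEngineContext;

// Engine-neutral logic lives in hudi-client-common...
abstract class BaseXYZ {
  protected final HoodieEngineContext context;

  protected BaseXYZ(HoodieEngineContext context) {
    this.context = context;
  }

  abstract void execute(); // engine-specific work implemented by subclasses
}

// ...and the Spark flavor lives in hudi-spark-client.
class SparkXYZ extends BaseXYZ {
  SparkXYZ(HoodieEngineContext context) {
    super(context);
  }

  @Override
  void execute() {
    // Spark-specific execution (RDDs, JavaSparkContext, etc.) goes here.
  }
}
```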
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
- Adding the ability to use native Spark row writing for bulk_insert
- Controlled by `ENABLE_ROW_WRITER_OPT_KEY` datasource write option
- Introduced KeyGeneratorInterface in hudi-client, moved KeyGenerator back to hudi-spark
- Simplified the new API additions to just two new methods: getRecordKey(row), getPartitionPath(row) (sketched below)
- Fixed all built-in key generators with new APIs
- Made the field position map lazily created upon the first call to the row-based APIs
- Implemented native row based key generators for CustomKeyGenerator
- Fixed all the tests, with these new APIs
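A sketch of the two row-based additions: the method shapes come straight from the notes above, while the interface name and packaging here are assumptions.

```java
import org.apache.spark.sql.Row;

// Implementations lazily build a field-name -> position map on the first
// row-based call (per the note above), so Avro-based paths pay no extra cost.
interface KeyGeneratorRowSketch {
  String getRecordKey(Row row);

  String getPartitionPath(Row row);
}
```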
Co-authored-by: Balaji Varadarajan <varadarb@uber.com>
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
- This pull request adds upgrade/downgrade infra for a smooth transition from list-based rollback to marker-based rollback
- A new property called hoodie.table.version is added to the hoodie.properties file as part of this. Whenever Hudi is launched with a newer table version, i.e. 1 (or moving from pre-0.6.0 to 0.6.0), an upgrade step will be executed automatically to adhere to marker-based rollback
- This automatic upgrade step happens just once per dataset, as hoodie.table.version is updated in the properties file after the upgrade completes
- Similarly, a command line tool for downgrading is added in case a user wants to downgrade from table version 1 to 0, or move from Hudi 0.6.0 to a pre-0.6.0 version
- Added UpgradeDowngrade to assist in upgrading or downgrading a Hudi table (see the version-gate sketch below)
- Added interfaces for upgrade and downgrade, and concrete implementations for upgrading from 0 to 1 and downgrading from 1 to 0
- Made some changes to ListingBasedRollbackHelper to expose just the rollback stats without performing the actual rollback, to be consumed by the upgrade infra
- Reworking failure handling for upgrade/downgrade
- Changed tests accordingly, added one test around leftover cleanup
- New tables now write table version into hoodie.properties
- Cleaned up code naming and abstractions.
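A hedged sketch of the version gate: read hoodie.table.version from hoodie.properties and trigger an upgrade (or downgrade) when it differs from what the current code expects. The property and file names come from the notes above; the helper itself is illustrative.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UpgradeGateSketch {
  static final int CURRENT_TABLE_VERSION = 1; // marker-based rollback era

  static boolean needsUpgrade(FileSystem fs, Path hoodiePropertiesFile) throws IOException {
    Properties props = new Properties();
    try (InputStream in = fs.open(hoodiePropertiesFile)) {
      props.load(in);
    }
    // Tables written before this change carry no version; treat that as 0.
    int tableVersion = Integer.parseInt(props.getProperty("hoodie.table.version", "0"));
    return tableVersion < CURRENT_TABLE_VERSION;
  }
}
```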
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
- [HUDI-418] Bootstrap Index Implementation using HFile with unit-test
- [HUDI-421] FileSystem View Changes to support Bootstrap with unit-tests
- [HUDI-424] Implement Query Side Integration for querying tables containing bootstrap file slices
- [HUDI-423] Implement upsert functionality for handling updates to these bootstrap file slices
- [HUDI-421] Bootstrap Write Client with tests
- [HUDI-425] Added HoodieDeltaStreamer support
- [HUDI-899] Add a knob to change partition-path style while performing metadata bootstrap
- [HUDI-900] Metadata Bootstrap Key Generator needs to handle complex keys correctly
- [HUDI-424] Simplify Record reader implementation
- [HUDI-420] Hoodie demo working with Hive and Spark SQL. Also, Hoodie CLI working with bootstrap tables
Co-authored-by: Mehrotra <uditme@amazon.com>
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
Co-authored-by: Balaji Varadarajan <varadarb@uber.com>
Notable changes:
1. HoodieFileWriter and HoodieFileReader abstractions for writer/reader side of a base file format
2. HoodieDataBlock abstraction for creating format-specific data blocks for base file formats. (e.g. Parquet has HoodieAvroDataBlock)
3. All hardcoded references to Parquet / Parquet-based classes have been abstracted into methods which accept a base file format (see the dispatch sketch below)
4. HiveSyncTool accepts the base file format as a CLI parameter
5. HoodieDeltaStreamer accepts the base file format as a CLI parameter
6. HoodieSparkSqlWriter accepts the base file format as a parameter
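A hedged sketch of the dispatch-by-format shape behind items 1-3: callers ask a factory for a reader given the table's base file format instead of hardcoding Parquet. All names here are illustrative stand-ins for the abstractions listed above.

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;

interface FileReaderSketch<R> {
  Iterator<R> getRecordIterator() throws IOException;
}

final class FileReaderFactorySketch {
  enum BaseFileFormat { PARQUET, HFILE }

  static <R> FileReaderSketch<R> getFileReader(BaseFileFormat format, Path path) {
    // The real factory would construct a Parquet- or HFile-backed reader here;
    // this sketch only shows the dispatch-by-format shape.
    switch (format) {
      case PARQUET:
      case HFILE:
      default:
        throw new UnsupportedOperationException("sketch only: " + format + " @ " + path);
    }
  }
}
```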
- Savepoint and compaction classes moved to table.action.* packages
- HoodieWriteClient#savepoint(...) returns void
- Renamed HoodieCommitArchiveLog -> HoodieTimelineArchiveLog
- Fixed tests to take into account the additional validation done
- Moved helper code into CompactHelpers and SavepointHelpers
Hudi-specific validation of schema evolution should ensure that a newer schema can be used for the dataset, by checking that data written using the old schema can be read using the new schema (see the compatibility-check sketch below).
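A small sketch of that check using Avro's built-in reader/writer compatibility test; whether Hudi's validation uses exactly this Avro utility internally is an assumption.

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;

public class SchemaValidationSketch {
  // Data written with the old (writer) schema must be readable with the
  // new (reader) schema for the evolution to be accepted.
  static boolean isSchemaCompatible(Schema oldWriterSchema, Schema newReaderSchema) {
    SchemaCompatibility.SchemaPairCompatibility result =
        SchemaCompatibility.checkReaderWriterCompatibility(newReaderSchema, oldWriterSchema);
    return result.getType() == SchemaCompatibility.SchemaCompatibilityType.COMPATIBLE;
  }
}
```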
Code changes:
1. Added a new config in HoodieWriteConfig to enable the schema validation check (disabled by default)
2. Moved code that reads schema from base/log files into hudi-common from hudi-hive-sync
3. Added writerSchema to the extraMetadata of compaction commits in MOR table. This is same as that for commits on COW table.
Testing changes:
4. Extended TestHoodieClientBase to add insertBatch API which allows inserting a new batch of unique records into a HUDI table
5. Added a unit test to verify schema evolution for both COW and MOR tables.
6. Added unit tests for schema compatibility checks.
- rollback() and restore() table level APIs introduced
- Restore is implemented by wrapping calls to the rollback executor (see the sketch below)
- Existing tests transparently cover this, since it's just a refactor
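A minimal sketch of restore-as-repeated-rollback: roll back every instant after the savepoint, newest first, reusing the rollback machinery. Types and names are illustrative.

```java
import java.util.List;

public class RestoreSketch {
  interface Rollback {
    void rollback(String instantTime);
  }

  // restore(savepoint) == rollback each later instant, newest first
  static void restore(Rollback rollback, List<String> instantsAfterSavepointNewestFirst) {
    for (String instant : instantsAfterSavepointNewestFirst) {
      rollback.rollback(instant);
    }
  }
}
```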
- Brings more order and cohesion to the classes in hudi-common
- Utility classes related to a particular concept (avro, timeline, ...) are placed in the corresponding package
- common.fs package now contains all the filesystem level classes including wrapper filesystem
- bloom.filter package renamed to just bloom
- config package contains classes that help store properties
- common.fs.inline package contains all the inline filesystem classes/impl
- common.table.timeline now consolidates all timeline related classes
- common.table.view consolidates all the classes related to filesystem view metadata
- common.table.timeline.versioning contains all classes related to versioning of timeline
- Fixed a few unit tests as a result
- Moved the test packages around to match the source file move
- Rename AvroUtils to TimelineMetadataUtils & minor fixes/typos