- Fixing packaging, naming of classes
- Use of log4j over slf4j for uniformity
- More follow-on fixes
- Added a version to control/coordinator events.
- Eliminated the config added to write config
- Fixed fetching of checkpoints based on table type
- Clean up of naming, code placement
Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
* [HUDI-1989] Refactor clustering tests for MoR table
* refactor assertion helper
* add CheckedFunction
* SparkClientFunctionalTestHarness.java
* put back original test case
* move testcases out from TestHoodieMergeOnReadTable.java
* add TestHoodieSparkMergeOnReadTableRollback.java
* use SparkClientFunctionalTestHarness
* add tag
Based on the discussion on stackoverflow:
https://stackoverflow.com/questions/1771679/difference-between-threads-context-class-loader-and-normal-classloader
Thread.currentThread().getContextClassLoader() should never be used,
because the context classloader is not immutable: the user can overwrite
it when threads switch, and it may also be null.
The objection here: https://stackoverflow.com/a/36228195 argues that
Thread.currentThread().getContextClassLoader() is a JDK design error
and that the context classloader should never be used. An API that
needs a classloader should ask the caller to supply the right one.
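The recommended alternative can be sketched roughly as follows (`ReflectionUtils` and its methods are hypothetical names for illustration, not taken from the codebase): APIs that need a classloader accept it as an explicit parameter, falling back to the defining class's own loader rather than the mutable thread context classloader.

```java
// Hypothetical sketch: take the ClassLoader as an explicit argument instead
// of reading the mutable, nullable thread context classloader.
class ReflectionUtils {

  // Caller decides which classloader to use; no hidden thread-local state.
  static Class<?> loadClass(String name, ClassLoader loader)
      throws ClassNotFoundException {
    return Class.forName(name, true, loader);
  }

  // Fallback: the loader that defined this utility class, which is stable,
  // unlike Thread.currentThread().getContextClassLoader().
  static Class<?> loadClass(String name) throws ClassNotFoundException {
    return loadClass(name, ReflectionUtils.class.getClassLoader());
  }
}
```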
- Added upgrade and downgrade steps to and from 0.9.0. Upgrade adds a few table properties. Downgrade recreates timeline-server-based marker files, if any.
- Rollback infers the directory structure and performs the rollback based on the strategy used when the markers were written. The "write markers type" write config is used to determine the marker strategy only for new writes.
* [HUDI-2119] Ensure the rolled-back instant was previously synced to the Metadata Table when syncing a Rollback Instant.
If the rolled-back instant was synced to the Metadata Table, a corresponding deltacommit with the same timestamp should have been created on the Metadata Table timeline. To ensure we can always perform this check, Metadata Table instants should not be archived until their corresponding instants are present in the dataset timeline. But ensuring this requires a large number of instants to be kept on the metadata table.
In this change, the metadata table keeps at least as many instants as the main dataset. If the instant being rolled back predates the metadata table timeline, the code throws an exception and the metadata table has to be re-bootstrapped. This should be a very rare occurrence, happening only when the dataset is being repaired by rolling back multiple commits or restoring to a much older time.
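The guard described above can be sketched as follows (class and method names are hypothetical, not the actual Hudi API): before looking for a matching deltacommit, the sync checks whether the instant being rolled back still falls inside the metadata table timeline.

```java
import java.util.List;

// Illustrative only: validates a rollback against a sorted list of
// deltacommit timestamps on the metadata table timeline.
class RollbackSyncCheck {

  // Returns true iff the instant was previously synced (a deltacommit with
  // the same timestamp exists). Throws if the instant predates the timeline,
  // in which case the metadata table must be re-bootstrapped.
  static boolean wasSynced(String instantToRollback, List<String> mdtTimestamps) {
    if (!mdtTimestamps.isEmpty()
        && instantToRollback.compareTo(mdtTimestamps.get(0)) < 0) {
      throw new IllegalStateException("Instant " + instantToRollback
          + " predates the metadata table timeline; re-bootstrap required");
    }
    return mdtTimestamps.contains(instantToRollback);
  }
}
```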
* Fixed checkstyle
* Improvements from review comments.
Fixed checkstyle
Replaced explicit null check with Option.ofNullable
Removed redundant function getSyncedInstantTime
* Renamed getSyncedInstantTime and getSyncedInstantTimeForReader.
Sync is confusing so renamed to getUpdateTime() and getReaderTime().
* Removed getReaderTime which is only for testing as the same method can be accessed during testing differently without making it part of the public interface.
* Fix compilation error
* Reverting changes to HoodieMetadataFileSystemView
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
* [HUDI-1292] Created a config to enable/disable syncing of metadata table.
- Metadata Table should only be synced from a single pipeline to prevent conflicts.
- Skip syncing metadata table for clustering and compaction
- Renamed useFileListingMetadata
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
The Registry.add() API adds the new value to the existing metric value. For some use cases we need an API to set/replace the existing value.
The Metadata Table is synced in the preWrite() and postWrite() functions of a commit. As part of the sync, the current sizes and base file/log file counts are published as metrics. If we use the Registry.add() API, the counts and sizes are incorrectly published as the sum of the two values. This is corrected by using the Registry.set() API instead.
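The add-versus-set distinction can be illustrated with a minimal registry sketch (a simplified stand-in, not Hudi's actual Registry class):

```java
import java.util.concurrent.ConcurrentHashMap;

// Simplified metric registry showing why gauges need set() rather than add().
class SimpleRegistry {
  private final ConcurrentHashMap<String, Long> metrics = new ConcurrentHashMap<>();

  // add(): accumulates into the existing value -- correct for event counters.
  void add(String name, long value) {
    metrics.merge(name, value, Long::sum);
  }

  // set(): replaces the existing value -- correct for gauges such as current
  // table size, which would be double-counted if published with add() from
  // both preWrite() and postWrite().
  void set(String name, long value) {
    metrics.put(name, value);
  }

  long get(String name) {
    return metrics.getOrDefault(name, 0L);
  }
}
```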
A failed deltacommit on the metadata table will be automatically rolled back. Assuming the failed commit was "t10", the rollback will happen later at "t11". After the rollback, when we try to sync the dataset to the metadata table, we should look for all unsynced instants including t11. The current code ignores t11, since the latest commit timestamp on the metadata table is t11 (due to the rollback).
* Adding support to ingest records with old schema after table's schema is evolved
* Rebasing against latest master
- Trimming test file to be < 800 lines
- Renaming config names
* Addressing feedback
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
* Add UUID to the folder name for External Spillable File System
* Fix to ensure that Disk map folders do not interfere across users
* Fix test
* Fix test
* Rebase with latest master and address comments
* Add Shutdown Hooks for the Disk Map
Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
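The folder-naming and shutdown-hook changes above can be sketched as follows (class and method names are illustrative, not the actual implementation): a per-process UUID keeps concurrent writers and different OS users from colliding on the same spill folder, and a JVM shutdown hook cleans it up.

```java
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

// Per-process spill directory for the external spillable map: the UUID keeps
// concurrent processes and different OS users from sharing one folder.
class SpillDirs {

  static Path uniqueSpillDir(String basePath) {
    return Paths.get(basePath, "hudi-diskmap-" + UUID.randomUUID());
  }

  // Best-effort cleanup on JVM exit, mirroring the shutdown-hook change.
  static void registerCleanupHook(File dir) {
    Runtime.getRuntime().addShutdownHook(new Thread(() -> deleteRecursively(dir)));
  }

  static void deleteRecursively(File f) {
    File[] children = f.listFiles();
    if (children != null) {
      for (File c : children) {
        deleteRecursively(c);
      }
    }
    f.delete();
  }
}
```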
* [HUDI-1848] Adding support for HMS for running DDL queries in hive-sync-tool
* [HUDI-1848] Fixing test cases
* [HUDI-1848] CR changes
* [HUDI-1848] Fix checkstyle violations
* [HUDI-1848] Fixed a bug when metastore api fails for complex schemas with multiple levels.
* [HUDI-1848] Adding the complex schema and resolving merge conflicts
* [HUDI-1848] Adding some more javadocs
* [HUDI-1848] Added javadocs for DDLExecutor impls
* [HUDI-1848] Fixed style issue