* Fix README with current limitations of hive sync
* Fix dep issue
* Fix Copy on Write flow
Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
- Changes the default config of marker type (HoodieWriteConfig.MARKERS_TYPE or hoodie.write.markers.type) from DIRECT to TIMELINE_SERVER_BASED for Spark Engine.
- Adds engine-specific marker type configs: Spark -> TIMELINE_SERVER_BASED, Flink -> DIRECT, Java -> DIRECT.
- Uses DIRECT markers for Spark structured streaming as well, since the timeline server is only available for the first mini-batch (see the config sketch after this list).
- Fixes the marker creation method for non-partitioned table in TimelineServerBasedWriteMarkers.
- Adds a fallback to direct markers in WriteMarkersFactory even when TIMELINE_SERVER_BASED is configured: the fallback kicks in when HDFS is used or the embedded timeline server is disabled.
- Fixes the closing of timeline service.
- Fixes tests that depend on markers, mainly by starting the timeline service for each test.
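A minimal sketch of overriding the new default, e.g. for a writer that should stay on direct markers; the table name and path are placeholders, and only the `hoodie.write.markers.type` key comes from this change:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class MarkerTypeExample {
  // Pins the marker type back to DIRECT on a Spark write; without the option,
  // Spark writes now default to TIMELINE_SERVER_BASED.
  public static void write(Dataset<Row> df) {
    df.write()
        .format("hudi")
        .option("hoodie.table.name", "example_table")   // placeholder
        .option("hoodie.write.markers.type", "DIRECT")  // config key from this change
        .mode(SaveMode.Append)
        .save("/tmp/hudi/example_table");               // placeholder path
  }
}
```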
* Renamed `ZCurveOptimizeHelper` to `ZOrderingIndexHelper`;
Moved Z-index helper under `hudi.index.zorder` package
* Tidying up `ZOrderingIndexHelper`
* Fixing compilation
* Fixed index new/original table merging sequence to always prefer values from new index;
Cleaned up `HoodieSparkUtils`
* Added test for `mergeIndexSql`
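A hedged sketch of that merge semantics expressed as SQL: full-outer join the old and new index tables on file name and prefer the new side via `COALESCE`. Table and column names are illustrative, not Hudi's actual index schema.

```java
// Illustrative only: new_index/old_index and the stat columns are assumed names.
String mergeIndexSql =
    "SELECT COALESCE(n.file, o.file) AS file, "
  + "       COALESCE(n.col_minValue, o.col_minValue) AS col_minValue, "
  + "       COALESCE(n.col_maxValue, o.col_maxValue) AS col_maxValue "
  + "FROM new_index n FULL OUTER JOIN old_index o ON n.file = o.file";
```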
* Abstracted Z-index name composition w/in `ZOrderingIndexHelper`;
* Fixed `DataSkippingUtils` to interrupt pruning in case the data filter contains a non-indexed column reference
* Properly handle exceptions originating during pruning in `HoodieFileIndex`
* Make sure no errors are logged upon encountering `AnalysisException`
* Cleaned up Z-index updating sequence;
Tidying up comments, java-docs;
* Fixed Z-index to properly handle changes of the list of clustered columns
* Tidying up
* `lint`
* Suppressing `JavaDocStyle` first sentence check
* Fixed compilation
* Fixing incorrect `DecimalType` conversion
* Refactored test `TestTableLayoutOptimization`
- Added Z-index table composition test (against fixtures)
- Separated out GC test;
Tidying up
* Fixed tests by re-shuffling column order of the Z-Index table `DataFrame` to align w/ the one loaded from JSON
* Scaffolded `DataTypeUtils` to do basic checks of Spark types;
Added proper compatibility checking b/w old/new index-tables
* Added test for Z-index tables merging
* Fixed import being shaded by creating internal `hudi.util` package
* Fixed packaging for `TestOptimizeTable`
* Revised `updateMetadataIndex` seq to provide Z-index updating process w/ source table schema
* Make sure existing Z-index table schema is sync'd to source table's one
* Fixed shaded refs
* Fixed tests
* Fixed type conversion of Parquet provided metadata values into Spark expected schemas
* Fixed `composeIndexSchema` utility to produce a proper schema
* Added more tests for Z-index:
- Checking that Z-index table is built correctly
- Checking that Z-index tables are merged correctly (during update)
* Fixing source table
* Fixing tests to read from Parquet w/ proper schema
* Refactored `ParquetUtils` utility reading stats from Parquet footers
* Fixed incorrect handling of Decimals extracted from Parquet footers
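For context, a small sketch of the kind of footer reading involved, using the standard parquet-hadoop API (not Hudi's `ParquetUtils` itself); it prints each column chunk's min/max per row group:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;

public class FooterStats {
  public static void main(String[] args) throws Exception {
    // Reads only the footer metadata; no row data is scanned.
    ParquetMetadata footer =
        ParquetFileReader.readFooter(new Configuration(), new Path(args[0]));
    for (BlockMetaData block : footer.getBlocks()) {
      for (ColumnChunkMetaData col : block.getColumns()) {
        System.out.println(col.getPath()
            + " min=" + col.getStatistics().genericGetMin()
            + " max=" + col.getStatistics().genericGetMax());
      }
    }
  }
}
```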
* Worked around issues in javac failing to compile stream's collection
* Fixed handling of `Date` type
* Fixed handling of `DateType` to be parsed as `LocalDate`
* Updated fixture;
Make sure test loads Z-index fixture using proper schema
* Removed superfluous scheme adjusting when reading from Parquet, since Spark is actually able to perfectly restore schema (given Parquet was previously written by Spark as well)
* Fixing race-condition in Parquet's `DateStringifier` trying to share a `SimpleDateFormat` object, which is inherently not thread-safe
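The standard fix for this class of race, sketched below: `SimpleDateFormat` is not thread-safe, so each thread gets its own instance via `ThreadLocal` instead of sharing one (the pattern string is a placeholder):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadSafeDateStringifier {
  // One SimpleDateFormat per thread; sharing a single instance across threads
  // corrupts its internal state under concurrent format() calls.
  private static final ThreadLocal<SimpleDateFormat> FORMAT =
      ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

  public static String stringify(Date date) {
    return FORMAT.get().format(date);
  }
}
```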
* Tidying up
* Make sure schema is used upon reading to validate input files are in the appropriate format;
Tidying up;
* Worked around javac's (1.8) inability to infer the expression type properly
* Updated fixtures;
Tidying up
* Fixing compilation after rebase
* Assert clustering happens in Z-order layout optimization testing
* Tidying up exception messages
* XXX
* Added test validating Z-index lookup filter correctness
* Added more test-cases;
Tidying up
* Added tests for string expressions
* Fixed incorrect Z-index filter lookup translations
* Added more test-cases
* Added proper handling on complex negations of AND/OR expressions by pushing NOT operator down into inner expressions for appropriate handling
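A toy sketch of that rewrite on a minimal expression tree (illustrative classes, not Hudi's): De Morgan's laws push NOT through AND/OR, double negation cancels, so negation ends up only on leaves where it can be translated directly.

```java
abstract class Expr {}
class And extends Expr { final Expr l, r; And(Expr l, Expr r) { this.l = l; this.r = r; } }
class Or  extends Expr { final Expr l, r; Or(Expr l, Expr r)  { this.l = l; this.r = r; } }
class Not extends Expr { final Expr c;    Not(Expr c)         { this.c = c; } }
class Leaf extends Expr { final String pred; Leaf(String pred) { this.pred = pred; } }

class NotPushDown {
  // NOT(a AND b) -> NOT a OR NOT b; NOT(a OR b) -> NOT a AND NOT b; NOT(NOT a) -> a
  static Expr push(Expr e) {
    if (e instanceof Not) {
      Expr c = ((Not) e).c;
      if (c instanceof And) return new Or(push(new Not(((And) c).l)), push(new Not(((And) c).r)));
      if (c instanceof Or)  return new And(push(new Not(((Or) c).l)), push(new Not(((Or) c).r)));
      if (c instanceof Not) return push(((Not) c).c);
      return e; // negation sits directly on a leaf: normal form
    }
    if (e instanceof And) return new And(push(((And) e).l), push(((And) e).r));
    if (e instanceof Or)  return new Or(push(((Or) e).l), push(((Or) e).r));
    return e;
  }
}
```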
* Added `-target:jvm-1.8` for `hudi-spark` module
* Adding more tests
* Added tests for non-indexed columns
* Properly handle non-indexed columns by falling back to a re-write of containing expression as `TrueLiteral` instead
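A standalone sketch of that conservative fallback (illustrative, not `DataSkippingUtils` itself): a filter referencing a column the Z-index does not cover is rewritten to a TRUE literal, the identity under AND, so the containing file is kept rather than wrongly pruned.

```java
import java.util.Set;

class TrueLiteralFallback {
  // Column extraction from a real predicate is elided for brevity.
  static String toLookupFilter(String filterSql, String referencedColumn,
                               Set<String> indexedColumns) {
    return indexedColumns.contains(referencedColumn) ? filterSql : "TRUE";
  }
}
```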
* Fixed tests
* Removing the parquet test files and disabling corresponding tests
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
* Rebased `DFSPropertiesConfiguration` to access Hadoop config in lieu of FS to avoid confusion
* Fixed `readConfig` to take Hadoop's `Configuration` instead of FS;
Fixing usages
* Added test for local FS access
* Rebase to use `FSUtils.getFs`
* Combine properties provided in a file w/ overrides provided from the CLI
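A minimal sketch of the combination order, assuming CLI-supplied overrides win over values loaded from the file (names are placeholders, not the DeltaStreamer code):

```java
import java.io.FileInputStream;
import java.util.Properties;

public class PropsMerge {
  public static Properties merge(String propsFile, Properties cliOverrides) throws Exception {
    Properties combined = new Properties();
    try (FileInputStream in = new FileInputStream(propsFile)) {
      combined.load(in);           // base: file-provided properties
    }
    combined.putAll(cliOverrides); // later puts win: CLI overrides the file
    return combined;
  }
}
```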
* Added helper utilities to `HoodieClusteringConfig`;
Make sure corresponding config methods fallback to defaults;
* Fixed DeltaStreamer usage to respect properly combined configuration;
Abstracted `HoodieClusteringConfig.from` convenience utility to init Clustering config from `Properties`
* Tidying up
* `lint`
* Reverting changes to `HoodieWriteConfig`
* Tidying up
* Fixed incorrect merge of the props
* Converted `HoodieConfig` to wrap around `Properties` into `TypedProperties`
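The idea behind that wrapping, sketched loosely; Hudi's actual `TypedProperties` accessors may differ in names and null/default handling:

```java
import java.util.Properties;

public class TypedProps extends Properties {
  public TypedProps(Properties raw) {
    putAll(raw);
  }

  // Typed getters centralize parsing instead of scattering it across callers.
  public int getInteger(String key, int defaultValue) {
    String v = getProperty(key);
    return v == null ? defaultValue : Integer.parseInt(v.trim());
  }

  public boolean getBoolean(String key, boolean defaultValue) {
    String v = getProperty(key);
    return v == null ? defaultValue : Boolean.parseBoolean(v.trim());
  }
}
```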
* Fixed compilation
* Downgrading error logs to warn in the timeline server for inconsistent state with concurrent writers
* Fixing bad-request response exception when the timeline is out of sync
* Addressing feedback; removed write concurrency mode dependency
* [HUDI-2480] FileSlice after pending compaction-requested instant-time is ignored by MOR snapshot reader
* include file slice after a pending compaction for spark reader
Co-authored-by: garyli1019 <yanjia.gary.li@gmail.com>
* [HUDI-2443] Hudi KVComparator for all HFile writer usages
- Hudi relies on custom class shading for Hbase's KeyValue.KVComparator to
avoid versioning and class loading issues. There are few places which are
still using the Hbase's comparator class directly and version upgrades
would make them obsolete. Refactoring the HoodieKVComparator and making
all HFile writer creation using the same shaded class.
* [HUDI-2443] Hudi KVComparator for all HFile writer usages
- Moving HoodieKVComparator from common.bootstrap.index to common.util
* [HUDI-2443] Hudi KVComparator for all HFile writer usages
- Retaining the old HoodieKVComparator for the bootstrap case. Adding the
new comparator as HoodieKVComparatorV2 to differentiate it from the old
one.
* [HUDI-2443] Hudi KVComparator for all HFile writer usages
- Renamed HoodieKVComparatorV2 to HoodieMetadataKVComparator and moved it
under the package org.apache.hudi.metadata.
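The shape of the pattern these commits describe, as a hedged sketch (the real class may carry more): a behavior-free, Hudi-owned subclass gives every HFile writer a single classname that shading can relocate, insulating callers from HBase version changes.

```java
package org.apache.hudi.metadata;

import org.apache.hadoop.hbase.KeyValue;

// Empty on purpose: a project-owned classname is the point, so
// relocation/shading rewrites one reference instead of many.
public class HoodieMetadataKVComparator extends KeyValue.KVComparator {
}
```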
* Make comparator classname configurable
* Revert new config and address other review comments
Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>