Commit Graph

169 Commits

Author SHA1 Message Date
Y Ethan Guo
9a05940a74 [HUDI-3366] Remove hardcoded logic of disabling metadata table in tests (#4792) 2022-02-15 16:41:47 -05:00
Yann Byron
fe02c64fea fix build & ci (#4822) 2022-02-15 03:40:40 -08:00
Yann Byron
cb6ca7f0d1 [HUDI-3204] fix problem that spark on TimestampKeyGenerator has no re… (#4714) 2022-02-14 23:38:38 -05:00
Yann Byron
3b401d839c [HUDI-3200] deprecate hoodie.file.index.enable and unify to use BaseFileOnlyViewRelation to handle (#4798) 2022-02-14 17:38:01 -08:00
YueZhang
0a97a9893a [HUDI-3398] Fix TableSchemaResolver for all file formats and metadata table (#4782)
Co-authored-by: yuezhang <yuezhang@freewheel.tv>
2022-02-14 16:02:47 -08:00
leesf
0db1e978c6 [HUDI-3254] Introduce HoodieCatalog to manage tables for Spark Datasource V2 (#4611) 2022-02-14 06:26:58 -08:00
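A minimal sketch of wiring up the new catalog (the FQCN `org.apache.spark.sql.hudi.catalog.HoodieCatalog` follows Hudi's Spark 3.2 setup; treat the exact class name and DDL as illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Register HoodieCatalog as the session catalog so Spark Datasource V2
// resolves CREATE/ALTER/DROP TABLE statements against Hudi-managed tables.
val spark = SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.extensions",
    "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
  .config("spark.sql.catalog.spark_catalog",
    "org.apache.spark.sql.hudi.catalog.HoodieCatalog")
  .getOrCreate()

spark.sql("CREATE TABLE hudi_tbl (id INT, ts BIGINT) USING hudi")
```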
董可伦
94806d5cf7 [HUDI-3272] If mode==ignore && tableExists, do not execute write logic or sync hive (#4632) 2022-02-14 09:22:00 +05:30
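A minimal sketch of the behavior (path, table name, and columns are illustrative): with `SaveMode.Ignore`, the write now returns early when the table already exists, instead of writing and re-syncing hive.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((1, "a", 1000L)).toDF("id", "name", "ts")
df.write.format("hudi")
  .option("hoodie.table.name", "hudi_tbl")
  .option("hoodie.datasource.write.recordkey.field", "id")
  .option("hoodie.datasource.write.precombine.field", "ts")
  .mode(SaveMode.Ignore)   // no-op (no write, no hive sync) if the table exists
  .save("/tmp/hudi/hudi_tbl")
```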
Y Ethan Guo
6aba00e84f [MINOR] Fix typos in Spark client related classes (#4781) 2022-02-13 06:41:58 -08:00
Yann Byron
b431246710 [HUDI-3338] Custom relation instead of HadoopFsRelation (#4709)
Currently, HadoopFsRelation uses the value parsed from the actual partition path as the value of the partition field. Unlike a normal table, however, Hudi persists the partition value in the parquet file, and in some cases the path-derived value differs from the persisted one.
So here we implement BaseFileOnlyViewRelation, which lets Hudi manage its own relation.
2022-02-11 10:48:44 -08:00
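An illustrative sketch of the distinction (path and values are hypothetical, assuming an existing table at the path shown):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// With TimestampBasedKeyGenerator, a record's partition path may be a
// formatted string like "2022/01/01", while the parquet files persist the
// original timestamp value for the partition field. The Hudi-managed
// relation must surface the persisted value, not the path-derived one.
val df = spark.read.format("hudi").load("/tmp/hudi/hudi_tbl")
df.select("event_ts").distinct().show()   // values come from the data files
```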
Alexey Kudinkin
464027ec37 [HUDI-3239] Convert BaseHoodieTableFileIndex to Java (#4669)
Converting BaseHoodieTableFileIndex to Java, removing Scala as a dependency from "hudi-common"
2022-02-09 18:42:08 -05:00
Alexey Kudinkin
3f263b82ce [HUDI-3206] Unify Hive's MOR implementations to avoid duplication (#4559)
Unify Hive's MOR implementations to avoid duplication across implementations for different file-formats (Parquet, HFile, etc)

- Extracted HoodieRealtimeFileInputFormatBase (extending COW HoodieFileInputFormatBase base)
- Rebased Parquet, HFile implementations onto HoodieRealtimeFileInputFormatBase
- Tidying up
2022-02-07 14:06:28 -05:00
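A schematic sketch of the resulting hierarchy (Scala stand-ins for the Java `InputFormat` classes; the concrete per-format class names are assumptions):

```scala
// COW base shared by all Hudi input formats.
abstract class HoodieFileInputFormatBase

// Extracted MOR (realtime) base: merge-on-read logic lives here once,
// instead of being duplicated per file format.
abstract class HoodieRealtimeFileInputFormatBase
  extends HoodieFileInputFormatBase

// Per-format classes now only contribute format-specific record reading.
class HoodieParquetRealtimeInputFormat extends HoodieRealtimeFileInputFormatBase
class HoodieHFileRealtimeInputFormat extends HoodieRealtimeFileInputFormatBase
```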
ForwardXu
773b317983 [HUDI-2941] Show _hoodie_operation in spark sql results (#4649) 2022-02-07 06:28:13 -08:00
Y Ethan Guo
b8601a9f58 [HUDI-2656] Generalize HoodieIndex for flexible record data type (#3893)
Co-authored-by: Raymond Xu <2701446+xushiyan@users.noreply.github.com>
2022-02-03 20:24:04 -08:00
Alexey Kudinkin
d681824982 [HUDI-3337] Fixing Parquet Column Range metadata extraction (#4705)
- The Parquet Column Range metadata extraction utility simplistically assumed that Decimal types are only represented by INT32, while their representation actually varies depending on precision.

- More details can be found here:
https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#DECIMAL
2022-02-02 20:58:05 -05:00
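A minimal sketch of the rule from the linked spec that the fix accounts for: the physical type backing a DECIMAL depends on its precision, so range extraction cannot assume INT32.

```scala
sealed trait ParquetPhysicalType
case object Int32 extends ParquetPhysicalType              // precision 1..9
case object Int64 extends ParquetPhysicalType              // precision 10..18
case object FixedLenByteArray extends ParquetPhysicalType  // larger precisions

// Smallest physical type that can hold a decimal of the given precision,
// per the Parquet LogicalTypes spec.
def decimalPhysicalType(precision: Int): ParquetPhysicalType =
  if (precision <= 9) Int32
  else if (precision <= 18) Int64
  else FixedLenByteArray
```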
jsbali
7ce0f4522b [HUDI-2711] Fall back to full table scan for IncrementalRelation if underlying files have been cleared or moved by the cleaner (#3946)
Co-authored-by: sivabalan <n.siva.b@gmail.com>
2022-01-31 23:03:18 -05:00
Yann Byron
26c3f797b0 [HUDI-3237] gracefully fail to change column data type (#4677) 2022-01-24 16:33:36 -08:00
Alexey Kudinkin
bc7882cbe9 [HUDI-2872][HUDI-2646] Refactoring layout optimization (clustering) flow to support linear ordering (#4606)
Refactoring layout optimization (clustering) flow to
- Enable support for linear (lexicographic) ordering as one of the ordering strategies (along w/ Z-order, Hilbert)
- Reconcile Layout Optimization and Clustering configuration to be more congruent
2022-01-24 16:53:54 -05:00
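A toy sketch contrasting the two strategies on two integer columns: linear ordering sorts lexicographically, while Z-ordering interleaves the bits of the values so that records close in the multi-dimensional space stay close in the sort order.

```scala
// Interleave the bits of x and y into a single Z-value (Morton code).
def zValue(x: Int, y: Int): Long =
  (0 until 32).foldLeft(0L) { (acc, i) =>
    acc | (((x >> i) & 1L) << (2 * i)) | (((y >> i) & 1L) << (2 * i + 1))
  }

val points = Seq((3, 7), (1, 2), (3, 1))
val linear = points.sorted                                   // by x, then y
val zOrdered = points.sortBy { case (x, y) => zValue(x, y) } // by Z-value
```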
董可伦
56cd8ffae0 [HUDI-2837] Add support for using database name in incremental query (#4083) 2022-01-22 22:11:27 -08:00
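A minimal sketch of an incremental read using Hudi's standard read options (path and instant time are illustrative); with this change, SQL-based incremental queries can also address the table by a database-qualified name.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

val incremental = spark.read.format("hudi")
  .option("hoodie.datasource.query.type", "incremental")
  .option("hoodie.datasource.read.begin.instanttime", "20220101000000")
  .load("/tmp/hudi/hudi_tbl")
```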
Y Ethan Guo
4b9085057a [HUDI-3268] Fix NPE while reading table with Spark datasource (#4630) 2022-01-21 08:46:07 -05:00
Yann Byron
31b57a256f [HUDI-3236] use fields' comments persisted in catalog to fill in schema (#4587) 2022-01-19 21:44:35 -08:00
Thinking Chen
caeea946fb [HUDI-3245] Convert uppercase letters to lowercase in storage configs (#4602) 2022-01-18 14:51:09 -05:00
Yann Byron
a09c231911 [HUDI-2903] get table schema from the last commit with data written (#4180) 2022-01-18 10:50:30 -05:00
Alexey Kudinkin
75caa7d3d8 [HUDI-3179] Extracted common AbstractHoodieTableFileIndex to be shared across engines (#4520) 2022-01-16 22:46:20 -08:00
Yann Byron
d2dda55794 [HUDI-2968] add UT for update/delete on non-pk condition (#4568) 2022-01-16 12:02:12 -08:00
Yann Byron
5e0171a5ee [HUDI-3198] Improve Spark SQL create table from existing hudi table (#4584)
Modify the SQL statement for creating a hudi table based on an existing hudi path.

From:

```sql
create table hudi_tbl using hudi tblproperties (primaryKey='id', preCombineField='ts', type='cow') partitioned by (pt) location '/path/to/hudi'
```

To:
```sql
create table hudi_tbl using hudi location '/path/to/hudi'
```
2022-01-14 10:15:29 -08:00
leesf
5ce45c440b [HUDI-3172] Refactor hudi existing modules to make more code reuse in V2 Implementation (#4514)
* Introduce hudi-spark3-common and hudi-spark2-common modules to place classes that would be reused in different spark versions, also introduce hudi-spark3.1.x to support spark 3.1.x.
* Introduce hudi format under hudi-spark2, hudi-spark3, hudi-spark3.1.x modules and change the hudi format in original hudi-spark module to hudi_v1 format.
* Manually tested on Spark 3.1.2 and Spark 3.2.0 SQL.
* Added a README.md file under hudi-spark-datasource module.
2022-01-14 13:42:35 +08:00
董可伦
017ddbbfac [MINOR] Fix typos (#4567) 2022-01-11 23:17:10 -08:00
Yann Byron
36790709f7 [HUDI-3125] spark-sql write timestamp directly (#4471) 2022-01-08 23:43:25 -08:00
Sagar Sumit
827549949c [HUDI-2909] Handle logical type in TimestampBasedKeyGenerator (#4203)
* [HUDI-2909] Handle logical type in TimestampBasedKeyGenerator

TimestampBasedKeyGenerator was returning different values for the row-writer and non-row-writer paths. This patch fixes that, guarded by a config flag (`hoodie.datasource.write.keygenerator.consistent.logical.timestamp.enabled`)
2022-01-08 10:22:44 -05:00
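A minimal sketch exercising the flag named above (the timestamp-type options follow Hudi's `hoodie.deltastreamer.keygen.timebased.*` naming; paths and data are illustrative):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((1, 1640995200000L)).toDF("id", "event_ts")
df.write.format("hudi")
  .option("hoodie.table.name", "hudi_tbl")
  .option("hoodie.datasource.write.recordkey.field", "id")
  .option("hoodie.datasource.write.partitionpath.field", "event_ts")
  .option("hoodie.datasource.write.keygenerator.class",
    "org.apache.hudi.keygen.TimestampBasedKeyGenerator")
  .option("hoodie.deltastreamer.keygen.timebased.timestamp.type", "EPOCHMILLISECONDS")
  .option("hoodie.deltastreamer.keygen.timebased.output.dateformat", "yyyy/MM/dd")
  // Makes the row-writer and non-row-writer paths generate identical keys
  // for logical timestamp types.
  .option("hoodie.datasource.write.keygenerator.consistent.logical.timestamp.enabled", "true")
  .mode(SaveMode.Append)
  .save("/tmp/hudi/hudi_tbl")
```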
Sivabalan Narayanan
7329d229d5 Adding tests to validate different key generators (#4473) 2022-01-04 10:48:04 +05:30
harshal
2b2ae34cb9 [HUDI-2558] Fix clustering failure w/ sort columns containing null values (#4404) 2022-01-03 12:19:43 +05:30
Yann Byron
1622b52c9c [HUDI-3136] Fix merge/insert/show partitions error on Spark3.2 (#4490) 2022-01-02 02:42:10 -08:00
Shawy Geng
a4e622ac61 [HUDI-1951] Add bucket hash index, compatible with the hive bucket (#3173)
* [HUDI-2154] Add index key field to HoodieKey

* [HUDI-2157] Add the bucket index and its read/write implementation for the Spark engine.
* revert HUDI-2154 add index key field to HoodieKey
* fix all comments and introduce a new tricky way to get index key at runtime
* support double insert for bucket index
* revert spark read optimizer based on bucket index
* add the storage layout
* index tag, hash function and add ut
* fix ut
* address partial comments
* Code review feedback
* add layout config and docs
* fix ut
* rename hoodie.layout and rebase master

Co-authored-by: Vinoth Chandar <vinoth@apache.org>
2021-12-30 12:38:26 -08:00
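A toy sketch of the core idea: a record's bucket is a deterministic hash of its index key modulo a fixed bucket count, so Hudi and Hive agree on where a key lives (the hash below mirrors Hive's string bucketing in spirit, not Hudi's exact function).

```scala
// Deterministically route an index key to one of numBuckets buckets,
// in the style of Hive's (hashCode & Int.MaxValue) % n bucketing.
def bucketId(indexKey: String, numBuckets: Int): Int =
  (indexKey.hashCode & Int.MaxValue) % numBuckets

bucketId("id:42", 256)   // the same key always lands in the same bucket
```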
Yann Byron
05942e018c [HUDI-2811] Support Spark 3.2 (#4270) 2021-12-28 00:12:44 -08:00
ForwardXu
282aa68552 [HUDI-3099] Purge drop partition for spark sql (#4436) 2021-12-28 09:38:26 +08:00
ForwardXu
5d93edc539 [HUDI-3060] drop table for spark sql (#4364) 2021-12-22 19:17:43 +08:00
harshal patil
7d046f914a [HUDI-3008] Fixing HoodieFileIndex partition column parsing for nested fields 2021-12-21 11:54:52 +05:30
Sivabalan Narayanan
03f71ef1a2 [HUDI-2970] Adding tests for archival of replace commit actions (#4268) 2021-12-18 23:59:39 -08:00
xiarixiaoyao
9246b16492 [HUDI-2958] Automatically set spark.sql.parquet.writeLegacyFormat when using bulk insert to write data containing DecimalType (#4253) 2021-12-17 08:58:02 -05:00
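A minimal sketch of the setting this change automates (the `df` here is illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.DecimalType

val spark = SparkSession.builder().master("local[*]").getOrCreate()
val df = spark.range(10).selectExpr("cast(id as decimal(20, 2)) as amount")

// Bulk insert of DecimalType data needs legacy parquet format so the
// written decimals are read back consistently.
if (df.schema.exists(_.dataType.isInstanceOf[DecimalType])) {
  spark.conf.set("spark.sql.parquet.writeLegacyFormat", "true")
}
```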
xiarixiaoyao
294d712948 [HUDI-3001] Clean up the marker directory when finish bootstrap operation. (#4298) 2021-12-16 12:36:01 -08:00
Alexey Kudinkin
2d864f7524 [HUDI-2814] Make Z-index more generic Column-Stats Index (#4106) 2021-12-10 14:56:09 -08:00
Yann Byron
2f96f4300b Revert "[HUDI-2495] Resolve inconsistent key generation for timestamp types by GenericRecord and Row (#3944)" (#4201) 2021-12-03 11:13:38 -05:00
Alexey Kudinkin
bed7f9897a [HUDI-2911] Removing default value for PARTITIONPATH_FIELD_NAME resulting in incorrect KeyGenerator configuration (#4195) 2021-12-03 07:33:38 -05:00
董可伦
a398aad1fc [HUDI-2642] Add support ignoring case in update sql operation (#3882) 2021-11-29 22:36:36 -08:00
董可伦
3433f00cb5 [MINOR] Fix typo, rename 'getUrlEncodePartitoning' to 'getUrlEncodePartitioning' (#4130) 2021-11-29 18:31:22 -08:00
xiarixiaoyao
780a2ac5b2 [HUDI-2102] Support hilbert curve for hudi (#3952)
Co-authored-by: Y Ethan Guo <ethan.guoyihua@gmail.com>
2021-11-26 23:20:19 -08:00
Raymond Xu
3a8d64e584 [HUDI-2868] Fix skipped HoodieSparkSqlWriterSuite (#4125)
- Co-authored-by: Yann Byron <biyan900116@gmail.com>
2021-11-26 22:59:20 -05:00
Y Ethan Guo
d1e83e4ba0 [HUDI-2767] Enabling timeline-server-based marker as default (#4112)
- Changes the default config of marker type (HoodieWriteConfig.MARKERS_TYPE or hoodie.write.markers.type) from DIRECT to TIMELINE_SERVER_BASED for Spark Engine.
- Adds engine-specific marker type configs: Spark -> TIMELINE_SERVER_BASED, Flink -> DIRECT, Java -> DIRECT.
- Uses DIRECT markers for Spark structured streaming, since the timeline server is only available for the first mini-batch.
- Fixes the marker creation method for non-partitioned table in TimelineServerBasedWriteMarkers.
- Adds a fallback to direct markers in WriteMarkersFactory even when TIMELINE_SERVER_BASED is configured: the fallback happens when HDFS is used or the embedded timeline server is disabled.
- Fixes the closing of timeline service.
- Fixes tests that depend on markers, mainly by starting the timeline service for each test.
2021-11-26 16:41:05 -05:00
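A minimal sketch pinning the marker type explicitly via the config named above (write setup is illustrative):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((1, "a", 1000L)).toDF("id", "name", "ts")
df.write.format("hudi")
  .option("hoodie.table.name", "hudi_tbl")
  .option("hoodie.datasource.write.recordkey.field", "id")
  // Default for Spark is now TIMELINE_SERVER_BASED; DIRECT remains available
  // and is the automatic fallback on HDFS or without the embedded server.
  .option("hoodie.write.markers.type", "DIRECT")
  .mode(SaveMode.Append)
  .save("/tmp/hudi/hudi_tbl")
```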
Alexey Kudinkin
5755ff25a4 [HUDI-2814] Addressing issues w/ Z-order Layout Optimization (#4060)
* `ZCurveOptimizeHelper` > `ZOrderingIndexHelper`;
Moved Z-index helper under `hudi.index.zorder` package

* Tidying up `ZOrderingIndexHelper`

* Fixing compilation

* Fixed index new/original table merging sequence to always prefer values from new index;
Cleaned up `HoodieSparkUtils`

* Added test for `mergeIndexSql`

* Abstracted Z-index name composition w/in `ZOrderingIndexHelper`;

* Fixed `DataSkippingUtils` to interrupt pruning in case data filter contains non-indexed column reference

* Properly handle exceptions originating during pruning in `HoodieFileIndex`

* Make sure no errors are logged upon encountering `AnalysisException`

* Cleaned up Z-index updating sequence;
Tidying up comments, java-docs;

* Fixed Z-index to properly handle changes of the list of clustered columns

* Tidying up

* `lint`

* Suppressing `JavaDocStyle` first sentence check

* Fixed compilation

* Fixing incorrect `DecimalType` conversion

* Refactored test `TestTableLayoutOptimization`
  - Added Z-index table composition test (against fixtures)
  - Separated out GC test;
Tidying up

* Fixed tests re-shuffling column order for Z-Index table `DataFrame` to align w/ the one loaded from JSON

* Scaffolded `DataTypeUtils` to do basic checks of Spark types;
Added proper compatibility checking b/w old/new index-tables

* Added test for Z-index tables merging

* Fixed import being shaded by creating internal `hudi.util` package

* Fixed packaging for `TestOptimizeTable`

* Revised `updateMetadataIndex` seq to provide Z-index updating process w/ source table schema

* Make sure the existing Z-index table schema is kept in sync with the source table's

* Fixed shaded refs

* Fixed tests

* Fixed type conversion of Parquet provided metadata values into Spark expected schemas

* Fixed `composeIndexSchema` utility to produce a proper schema

* Added more tests for Z-index:
  - Checking that Z-index table is built correctly
  - Checking that Z-index tables are merged correctly (during update)

* Fixing source table

* Fixing tests to read from Parquet w/ proper schema

* Refactored `ParquetUtils` utility reading stats from Parquet footers

* Fixed incorrect handling of Decimals extracted from Parquet footers

* Worked around issues with javac failing to compile the stream collection

* Fixed handling of `Date` type

* Fixed handling of `DateType` to be parsed as `LocalDate`

* Updated fixture;
Make sure test loads Z-index fixture using proper schema

* Removed superfluous schema adjusting when reading from Parquet, since Spark is actually able to perfectly restore the schema (given the Parquet was previously written by Spark as well)

* Fixing race-condition in Parquet's `DateStringifier` trying to share a `SimpleDateFormat` object, which is inherently not thread-safe

* Tidying up

* Make sure schema is used upon reading to validate input files are in the appropriate format;
Tidying up;

* Worked around javac (1.8) inability to infer expression type properly

* Updated fixtures;
Tidying up

* Fixing compilation after rebase

* Assert clustering happens in Z-order layout optimization testing

* Tidying up exception messages

* XXX

* Added test validating Z-index lookup filter correctness

* Added more test-cases;
Tidying up

* Added tests for string expressions

* Fixed incorrect Z-index filter lookup translations

* Added more test-cases

* Added proper handling of complex negations of AND/OR expressions by pushing the NOT operator down into inner expressions for appropriate handling

* Added `-target:jvm-1.8` for `hudi-spark` module

* Adding more tests

* Added tests for non-indexed columns

* Properly handle non-indexed columns by falling back to a re-write of the containing expression as `TrueLiteral` instead

* Fixed tests

* Removing the parquet test files and disabling corresponding tests

Co-authored-by: Vinoth Chandar <vinoth@apache.org>
2021-11-26 10:02:15 -08:00
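A hedged sketch of exercising the Z-index path end to end through inline clustering; the config keys reflect the 0.10.x line this work landed in and should be treated as assumptions rather than a stable surface.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((1, 10, 100L), (2, 20, 200L)).toDF("id", "c1", "ts")
df.write.format("hudi")
  .option("hoodie.table.name", "hudi_tbl")
  .option("hoodie.datasource.write.recordkey.field", "id")
  .option("hoodie.clustering.inline", "true")
  .option("hoodie.clustering.inline.max.commits", "2")
  .option("hoodie.layout.optimize.enable", "true")       // assumed key
  .option("hoodie.layout.optimize.strategy", "z-order")  // assumed key
  .option("hoodie.clustering.plan.strategy.sort.columns", "c1,ts")
  .mode(SaveMode.Append)
  .save("/tmp/hudi/hudi_tbl")
```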
Alexey Kudinkin
60b23b9797 [HUDI-2788] Fixing issues w/ Z-order Layout Optimization (#4026)
* Simplifying, tidying up

* Fixed packaging for `TestOptimizeTable`

* Cleaned up `HoodieFileIndex` file filtering seq;
Removed optimization manually reading Parquet table circumventing Spark

* Refactored `DataSkippingUtils`:
  - Fixed checks to validate all statistics cols are present
  - Fixed some predicates being constructed incorrectly
  - Rewrote comments for easier comprehension, added more notes
  - Tidying up

* Tidying up tests

* `lint`

* Fixing compilation

* `TestOptimizeTable` > `TestTableLayoutOptimization`;
Added assertions to test data skipping paths

* Fixed tests to properly hit data-skipping path

* Fixed pruned file candidates lookup seq to conservatively include all non-indexed files

* Added java-doc

* Fixed compilation
2021-11-24 10:10:28 -08:00
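A toy sketch of the predicate translation `DataSkippingUtils` performs and of the conservative fallback this fix restores: an equality filter can only exclude a file whose [min, max] range provably misses the value, and files without statistics must always be kept.

```scala
final case class ColumnRange(min: Option[Long], max: Option[Long])

// True if a file with the given range for column c may contain rows where
// c == value; files lacking statistics are conservatively kept.
def mayContain(range: ColumnRange, value: Long): Boolean =
  (range.min, range.max) match {
    case (Some(lo), Some(hi)) => lo <= value && value <= hi
    case _                    => true
  }

val files = Map("f1" -> ColumnRange(Some(0L), Some(9L)),
                "f2" -> ColumnRange(Some(10L), Some(19L)),
                "f3" -> ColumnRange(None, None))   // not indexed
val candidates = files.filter { case (_, r) => mayContain(r, 12L) }
// candidates == Map("f2" -> ..., "f3" -> ...)
```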