
[HUDI-4176] Fixing TableSchemaResolver to avoid repeated HoodieCommitMetadata parsing (#5733)

As outlined in HUDI-4176, we hit a roadblock while testing Hudi on a large dataset (~1 TB) with pretty fat commits, where Hudi's commit metadata could reach hundreds of MBs.
Given the size of some of our commit metadata instances, Spark's parsing and resolving phase (when spark.sql(...) is invoked, but before the returned Dataset is dereferenced) starts to dominate the execution time of some of our queries.

- Rebased onto new APIs to avoid excessive allocations of Hadoop's Path
- Eliminated hasOperationField completely to avoid repetitive computations
- Cleaned up duplication in HoodieActiveTimeline
- Added caching for common instances of HoodieCommitMetadata
- Made tableStructSchema lazy (see the sketch after this list)
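
To illustrate the caching and laziness changes above, here is a minimal, self-contained Scala sketch of the memoize-on-first-access pattern; LazyMetadataDemo and parseCommitMetadata are hypothetical stand-ins for illustration, not Hudi's actual internals:

    // Minimal sketch of memoize-on-first-access (hypothetical names,
    // not Hudi's actual internals).
    object LazyMetadataDemo {

      // Stand-in for deserializing a commit-metadata payload that can
      // reach hundreds of MBs.
      private def parseCommitMetadata(): String = {
        println("parsing commit metadata (expensive)...")
        "{...parsed metadata...}"
      }

      // `lazy val` memoizes: the initializer runs on first access only,
      // so repeated reads (schema resolution, operation-field checks)
      // never re-parse the payload.
      lazy val commitMetadata: String = parseCommitMetadata()

      def main(args: Array[String]): Unit = {
        val first  = commitMetadata // triggers the one-time parse
        val second = commitMetadata // served from the cached value
        assert(first eq second)     // same instance; no second parse
      }
    }

In the actual change, the same idea keeps a single parsed HoodieCommitMetadata instance around instead of re-deserializing the payload on every schema lookup.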
Author: Alexey Kudinkin
Date: 2022-06-06 10:14:26 -07:00
Committed by: GitHub
Parent: 132c0aa8c7
Commit: 4f7ea8c79a
14 changed files with 318 additions and 326 deletions


@@ -210,7 +210,7 @@ class TestTableSchemaResolverWithSparkSQL {
       .setConf(spark.sessionState.newHadoopConf())
       .build()
-    assertTrue(new TableSchemaResolver(metaClient).isHasOperationField)
+    assertTrue(new TableSchemaResolver(metaClient).hasOperationField)
     schemaValuationBasedOnDataFile(metaClient, schema.toString())
   }


@@ -615,7 +615,7 @@ class TestInsertTable extends HoodieSparkSqlTestBase {
       .setConf(spark.sessionState.newHadoopConf())
       .build()
-    assertResult(true)(new TableSchemaResolver(metaClient).isHasOperationField)
+    assertResult(true)(new TableSchemaResolver(metaClient).hasOperationField)
     spark.sql(
       s"""