[HUDI-1399] Support an independent clustering Spark job to run clustering asynchronously (#2379)
* [HUDI-1481] Add structured streaming and DeltaStreamer clustering unit tests * [HUDI-1399] Support an independent clustering Spark job to run clustering asynchronously * [HUDI-1498] Read clustering plan from requested file for inflight instant (#2389) * [HUDI-1399] Support an independent clustering Spark job with schedule-generated instant time Co-authored-by: satishkotha <satishkotha@uber.com>
@@ -152,7 +152,18 @@ public class TableSchemaResolver {
    * @throws Exception
    */
   public Schema getTableAvroSchema() throws Exception {
-    Option<Schema> schemaFromCommitMetadata = getTableSchemaFromCommitMetadata(true);
+    return getTableAvroSchema(true);
   }
+
+  /**
+   * Gets schema for a hoodie table in Avro format, choosing whether to include metadata fields.
+   *
+   * @param includeMetadataFields whether to include metadata fields
+   * @return Avro schema for this table
+   * @throws Exception
+   */
+  public Schema getTableAvroSchema(boolean includeMetadataFields) throws Exception {
+    Option<Schema> schemaFromCommitMetadata = getTableSchemaFromCommitMetadata(includeMetadataFields);
+    return schemaFromCommitMetadata.isPresent() ? schemaFromCommitMetadata.get() : getTableAvroSchemaFromDataFile();
+  }
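The refactored resolver prefers the schema recorded in commit metadata and falls back to reading it from a data file only when none is present. A minimal sketch of that fallback pattern, using `java.util.Optional` and plain strings as stand-ins for Avro schemas (the class and method names here are hypothetical, not Hudi's actual implementation):

```java
import java.util.Optional;

public class SchemaResolverSketch {

    // Stand-in for getTableSchemaFromCommitMetadata(): assume no schema
    // was recorded in the latest commit metadata.
    static Optional<String> schemaFromCommitMetadata(boolean includeMetadataFields) {
        return Optional.empty();
    }

    // Stand-in for getTableAvroSchemaFromDataFile(): read the schema
    // directly from a base/log file instead.
    static String schemaFromDataFile() {
        return "data-file-schema";
    }

    // Mirrors the shape of the new getTableAvroSchema(boolean) overload:
    // prefer commit metadata, fall back to the data file.
    public static String resolve(boolean includeMetadataFields) {
        Optional<String> fromMeta = schemaFromCommitMetadata(includeMetadataFields);
        return fromMeta.isPresent() ? fromMeta.get() : schemaFromDataFile();
    }

    public static void main(String[] args) {
        System.out.println(resolve(true)); // prints "data-file-schema"
    }
}
```

The parameterless `getTableAvroSchema()` delegating to the new overload keeps the old call sites working while exposing the `includeMetadataFields` choice to new callers.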
@@ -109,7 +109,7 @@ public interface HoodieTimeline extends Serializable {
   /**
    * Filter this timeline to just include the in-flights excluding compaction instants.
    *
-   * @return New instance of HoodieTimeline with just in-flights excluding compaction inflights
+   * @return New instance of HoodieTimeline with just in-flights excluding compaction instants
    */
   HoodieTimeline filterPendingExcludingCompaction();
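Per the corrected Javadoc, `filterPendingExcludingCompaction()` keeps only instants that are inflight and are not compactions. A rough illustration of that predicate with a hypothetical `Instant` record (the real `HoodieInstant`/`HoodieTimeline` APIs differ; this only shows the filtering logic):

```java
import java.util.List;
import java.util.stream.Collectors;

public class TimelineFilterSketch {

    // Simplified stand-in for HoodieInstant: an action name plus an inflight flag.
    record Instant(String action, boolean inflight) {}

    // Keep inflight instants whose action is not a compaction,
    // mirroring the intent of filterPendingExcludingCompaction().
    static List<Instant> filterPendingExcludingCompaction(List<Instant> timeline) {
        return timeline.stream()
                .filter(i -> i.inflight() && !i.action().equals("compaction"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Instant> timeline = List.of(
                new Instant("commit", true),       // kept: inflight, not compaction
                new Instant("compaction", true),   // dropped: compaction
                new Instant("commit", false));     // dropped: already completed
        System.out.println(filterPendingExcludingCompaction(timeline).size()); // prints 1
    }
}
```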