[HUDI-842] Implementation of HUDI RFC-15.
- Introduced an internal metadata table that stores file listings; the metadata table is kept up to date with the data timeline.
- Fixed handling of CleanerPlan.
- [HUDI-842] Reduce parallelism to speed up the test.
- [HUDI-842] Implementation of CLI commands for metadata operations and lookups.
- [HUDI-842] Extend rollback metadata to include the files which have been appended to.
- [HUDI-842] Support for rollbacks in MOR Table. MarkerBasedRollbackStrategy needs to correctly provide the list of files for which rollback blocks were appended.
- [HUDI-842] Added unit test for rollback of partial commits (inflight but not yet completed).
- [HUDI-842] Handled the error case where the metadata update succeeds but the dataset commit fails.
- [HUDI-842] Schema evolution strategy for the Metadata Table. Each type of metadata saved (FilesystemMetadata, ColumnIndexMetadata, etc.) will be a separate field with default null. The type of the record identifies the valid field. This way the schema can grow when new types of information are saved, while still keeping it backward compatible.
- [HUDI-842] Fix the non-partitioned case and speed up initial creation of the metadata table. Choose only 1 partition for jsc as the number of records is low (hundreds to thousands). Creating a large number of partitions for the JavaRDD adds overhead and slows down operations like WorkloadProfile. For the non-partitioned case, use "." as the name of the partition to prevent empty keys in HFile.
- [HUDI-842] Reworked metrics publishing.
- Code has been split into reader and writer sides. HoodieMetadata code is accessed via HoodieTable.metadata() to get an instance of the metadata for the table. The code is serializable so that executors can use the functionality.
- [RFC-15] Add metrics to track the time for each file system call.
- [RFC-15] Added a distributed metrics registry for Spark which can be used to collect metrics from executors. This helps create a stats dashboard which shows the metadata table improvements in real time for production tables.
- [HUDI-1321] Created HoodieMetadataConfig to specify configuration for the metadata table. This is safer than full-fledged properties for the metadata table (like HoodieWriteConfig), which would make the metadata burdensome to tune. With limited configuration, we can control the performance of the metadata table closely (see the sketch after this list).
- [HUDI-1319][RFC-15] Adding interfaces for HoodieMetadata, HoodieMetadataWriter (apache#2266)
  - Moved MetadataReader to HoodieBackedTableMetadata, under the HoodieTableMetadata interface.
  - Moved MetadataWriter to HoodieBackedTableMetadataWriter, under the HoodieTableMetadataWriter interface.
  - Pulled all the metrics into HoodieMetadataMetrics.
  - Writer now wraps the metadata, instead of extending it.
  - New enum for MetadataPartitionType.
  - Streamlined code flow inside HoodieBackedTableMetadataWriter w.r.t. initializing metadata state.
- [HUDI-1319] Make async operations work with the metadata table (apache#2332)
  - Changes the syncing model to only move over completed instants on the data timeline.
  - Syncing happens postCommit and on writeClient initialization.
  - The latest delta commit on the metadata table is sufficient as the watermark for data timeline archival.
  - Cleaning/compaction use a suffix to the last instant written to the metadata table, such that we keep the 1-1 mapping between data and metadata timelines.
- Got rid of a lot of the complexity around checking for valid commits during open of base/log files.
- Tests now use the local FS, to simulate more failure scenarios.
- Some failure scenarios exposed HUDI-1434, which is needed for MOR to work correctly.

Co-authored-by: Vinoth Chandar <vinoth@apache.org>
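For context, a minimal sketch of how a writer could opt into the metadata table, mirroring the getWriteConfig() helper in the CLI code below. Only the builder calls that appear in this patch are used; basePath is a placeholder for the dataset base path.

    import org.apache.hudi.config.HoodieMetadataConfig;
    import org.apache.hudi.config.HoodieWriteConfig;

    // Sketch: enable the internal metadata table on a write config (basePath is a placeholder).
    HoodieWriteConfig writeConfig = HoodieWriteConfig.newBuilder()
        .withPath(basePath)
        .withMetadataConfig(HoodieMetadataConfig.newBuilder().enable(true).build())
        .build();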
commit 298808baaf (parent c3e9243ea1)
committed by vinoth chandar
@@ -0,0 +1,226 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.hudi.cli.commands;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hudi.cli.HoodieCLI;
import org.apache.hudi.cli.utils.SparkUtil;
import org.apache.hudi.client.common.HoodieSparkEngineContext;
import org.apache.hudi.common.util.HoodieTimer;
import org.apache.hudi.common.util.ValidationUtils;
import org.apache.hudi.config.HoodieMetadataConfig;
import org.apache.hudi.config.HoodieWriteConfig;
import org.apache.hudi.metadata.HoodieBackedTableMetadata;
import org.apache.hudi.metadata.HoodieTableMetadata;
import org.apache.hudi.metrics.SparkHoodieBackedTableMetadataWriter;

import org.apache.spark.api.java.JavaSparkContext;
import org.springframework.shell.core.CommandMarker;
import org.springframework.shell.core.annotation.CliCommand;
import org.springframework.shell.core.annotation.CliOption;
import org.springframework.stereotype.Component;

import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

/**
 * CLI commands to operate on the Metadata Table.
 */
@Component
public class MetadataCommand implements CommandMarker {

  private JavaSparkContext jsc;
  private static String metadataBaseDirectory;

  /**
   * Sets the directory to store/read Metadata Table.
   *
   * This can be used to store the metadata table away from the dataset directory.
   *  - Useful for testing as well as for using via the HUDI CLI so that the actual dataset is not written to.
   *  - Useful for testing Metadata Table performance and operations on existing datasets before enabling.
   */
  public static void setMetadataBaseDirectory(String metadataDir) {
    ValidationUtils.checkState(metadataBaseDirectory == null,
        "metadataBaseDirectory is already set to " + metadataBaseDirectory);
    metadataBaseDirectory = metadataDir;
  }

  public static String getMetadataTableBasePath(String tableBasePath) {
    if (metadataBaseDirectory != null) {
      return metadataBaseDirectory;
    }
    return HoodieTableMetadata.getMetadataTableBasePath(tableBasePath);
  }

  @CliCommand(value = "metadata set", help = "Set options for Metadata Table")
  public String set(@CliOption(key = {"metadataDir"},
      help = "Directory to read/write metadata table (can be different from dataset)", unspecifiedDefaultValue = "")
      final String metadataDir) {
    if (!metadataDir.isEmpty()) {
      setMetadataBaseDirectory(metadataDir);
    }

    return "Ok";
  }

  @CliCommand(value = "metadata create", help = "Create the Metadata Table if it does not exist")
  public String create() throws IOException {
    HoodieCLI.getTableMetaClient();
    Path metadataPath = new Path(getMetadataTableBasePath(HoodieCLI.basePath));
    try {
      FileStatus[] statuses = HoodieCLI.fs.listStatus(metadataPath);
      if (statuses.length > 0) {
        throw new RuntimeException("Metadata directory (" + metadataPath.toString() + ") not empty.");
      }
    } catch (FileNotFoundException e) {
      // Metadata directory does not exist yet
      HoodieCLI.fs.mkdirs(metadataPath);
    }

    HoodieTimer timer = new HoodieTimer().startTimer();
    HoodieWriteConfig writeConfig = getWriteConfig();
    initJavaSparkContext();
    SparkHoodieBackedTableMetadataWriter.create(HoodieCLI.conf, writeConfig, new HoodieSparkEngineContext(jsc));
    return String.format("Created Metadata Table in %s (duration=%.2f secs)", metadataPath, timer.endTimer() / 1000.0);
  }

  @CliCommand(value = "metadata delete", help = "Remove the Metadata Table")
  public String delete() throws Exception {
    HoodieCLI.getTableMetaClient();
    Path metadataPath = new Path(getMetadataTableBasePath(HoodieCLI.basePath));
    try {
      FileStatus[] statuses = HoodieCLI.fs.listStatus(metadataPath);
      if (statuses.length > 0) {
        HoodieCLI.fs.delete(metadataPath, true);
      }
    } catch (FileNotFoundException e) {
      // Metadata directory does not exist
    }

    return String.format("Removed Metadata Table from %s", metadataPath);
  }

  @CliCommand(value = "metadata init", help = "Update the metadata table from commits since the creation")
  public String init(@CliOption(key = {"readonly"}, unspecifiedDefaultValue = "false",
      help = "Open in read-only mode") final boolean readOnly) throws Exception {
    HoodieCLI.getTableMetaClient();
    Path metadataPath = new Path(getMetadataTableBasePath(HoodieCLI.basePath));
    try {
      HoodieCLI.fs.listStatus(metadataPath);
    } catch (FileNotFoundException e) {
      // Metadata directory does not exist
      throw new RuntimeException("Metadata directory (" + metadataPath.toString() + ") does not exist.");
    }

    HoodieTimer timer = new HoodieTimer().startTimer();
    if (!readOnly) {
      HoodieWriteConfig writeConfig = getWriteConfig();
      initJavaSparkContext();
      SparkHoodieBackedTableMetadataWriter.create(HoodieCLI.conf, writeConfig, new HoodieSparkEngineContext(jsc));
    }

    String action = readOnly ? "Opened" : "Initialized";
    return String.format(action + " Metadata Table in %s (duration=%.2fsec)", metadataPath, (timer.endTimer()) / 1000.0);
  }

  @CliCommand(value = "metadata stats", help = "Print stats about the metadata")
  public String stats() throws IOException {
    HoodieCLI.getTableMetaClient();
    HoodieBackedTableMetadata metadata = new HoodieBackedTableMetadata(HoodieCLI.conf, HoodieCLI.basePath, "/tmp", true, false, false);
    Map<String, String> stats = metadata.stats();

    StringBuffer out = new StringBuffer("\n");
    out.append(String.format("Base path: %s\n", getMetadataTableBasePath(HoodieCLI.basePath)));
    for (Map.Entry<String, String> entry : stats.entrySet()) {
      out.append(String.format("%s: %s\n", entry.getKey(), entry.getValue()));
    }

    return out.toString();
  }

  @CliCommand(value = "metadata list-partitions", help = "Print a list of all partitions from the metadata")
  public String listPartitions() throws IOException {
    HoodieCLI.getTableMetaClient();
    HoodieBackedTableMetadata metadata = new HoodieBackedTableMetadata(HoodieCLI.conf, HoodieCLI.basePath, "/tmp", true, false, false);

    StringBuffer out = new StringBuffer("\n");
    if (!metadata.enabled()) {
      out.append("=== Metadata Table not initialized. Using file listing to get list of partitions. ===\n\n");
    }

    long t1 = System.currentTimeMillis();
    List<String> partitions = metadata.getAllPartitionPaths();
    long t2 = System.currentTimeMillis();

    int[] count = {0};
    partitions.stream().sorted((p1, p2) -> p2.compareTo(p1)).forEach(p -> {
      out.append(p);
      if (++count[0] % 15 == 0) {
        out.append("\n");
      } else {
        out.append(", ");
      }
    });

    out.append(String.format("\n\n=== List of partitions retrieved in %.2fsec ===", (t2 - t1) / 1000.0));

    return out.toString();
  }

  @CliCommand(value = "metadata list-files", help = "Print a list of all files in a partition from the metadata")
  public String listFiles(
      @CliOption(key = {"partition"}, help = "Name of the partition to list files", mandatory = true)
      final String partition) throws IOException {
    HoodieCLI.getTableMetaClient();
    HoodieBackedTableMetadata metaReader = new HoodieBackedTableMetadata(HoodieCLI.conf, HoodieCLI.basePath, "/tmp", true, false, false);

    StringBuffer out = new StringBuffer("\n");
    if (!metaReader.enabled()) {
      out.append("=== Metadata Table not initialized. Using file listing to get list of files in partition. ===\n\n");
    }

    long t1 = System.currentTimeMillis();
    FileStatus[] statuses = metaReader.getAllFilesInPartition(new Path(HoodieCLI.basePath, partition));
    long t2 = System.currentTimeMillis();

    Arrays.stream(statuses).sorted((p1, p2) -> p2.getPath().getName().compareTo(p1.getPath().getName())).forEach(p -> {
      out.append("\t" + p.getPath().getName());
      out.append("\n");
    });

    out.append(String.format("\n=== Files in partition retrieved in %.2fsec ===", (t2 - t1) / 1000.0));

    return out.toString();
  }

  private HoodieWriteConfig getWriteConfig() {
    return HoodieWriteConfig.newBuilder().withPath(HoodieCLI.basePath)
        .withMetadataConfig(HoodieMetadataConfig.newBuilder().enable(true).build()).build();
  }

  private void initJavaSparkContext() {
    if (jsc == null) {
      jsc = SparkUtil.initJavaSparkConf("HoodieCLI");
    }
  }
}
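A possible CLI session using the commands added above, after connecting to a table in the Hudi CLI. The directory and partition values are illustrative only; the command names and options come from the @CliCommand/@CliOption annotations in this file.

    metadata set --metadataDir /tmp/hudi-metadata
    metadata create
    metadata init
    metadata stats
    metadata list-partitions
    metadata list-files --partition 2020/01/01

Note that "metadata set" must run before the metadata table is created if it should live outside the dataset directory, since setMetadataBaseDirectory() only allows the override to be set once.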