Vinoth Chandar 5847f0c934 Fix HUDI-27 : Support num_cores > 1 for writing through spark
 - Users setting spark.executor.cores > 1 used to fail with "FileSystem closed"
 - This is due to HoodieWrapperFileSystem closing the wrapped filesystem object
 - FileSystem.get's internal caching code races across threads and closes the extra fs instance(s)
 - Bumped num cores in tests to 8, which speeds up tests by 3-4 minutes
2019-03-28 15:56:21 -07:00
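The bullets above describe a classic shared-cache hazard: Hadoop's FileSystem.get() returns one cached instance per URI/config, so any caller that close()s it breaks every other holder, while FileSystem.newInstance() returns a private, safely closeable object. A minimal dependency-free sketch of that pattern (CachedResource and ResourceCache are hypothetical names standing in for the Hadoop classes, not Hudi code):

```java
import java.io.Closeable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for a handle like Hadoop's FileSystem.
class CachedResource implements Closeable {
    private volatile boolean open = true;
    boolean isOpen() { return open; }
    @Override public void close() { open = false; }
}

class ResourceCache {
    private static final Map<String, CachedResource> CACHE = new ConcurrentHashMap<>();

    // Mirrors FileSystem.get(): every caller with the same key
    // receives the SAME shared instance from the cache.
    static CachedResource get(String key) {
        return CACHE.computeIfAbsent(key, k -> new CachedResource());
    }

    // Mirrors FileSystem.newInstance(): a fresh, uncached instance
    // that the caller owns and may close without affecting others.
    static CachedResource newInstance(String key) {
        return new CachedResource();
    }
}
```

With get(), one task's close() invalidates the instance every other task on the executor still holds (hence "FileSystem closed" with multiple cores); with newInstance(), each holder closes only its own copy. Wrapping the filesystem without closing the shared wrapped instance, as the fix does, avoids the same trap.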

Hudi

Hudi (pronounced "Hoodie") stands for Hadoop Upserts anD Incrementals. Hudi manages the storage of large analytical datasets on HDFS and serves them out via two types of tables:

  • Read Optimized Table - Provides excellent query performance via purely columnar storage (e.g. Parquet)
  • Near-Real-time Table (WIP) - Provides queries on real-time data, using a combination of columnar and row-based storage (e.g. Parquet + Avro)

For more, head over here

Description
Internal build
Languages
Java 81.4%
Scala 16.7%
ANTLR 0.9%
Shell 0.8%
Dockerfile 0.2%