Hoodie
Hoodie manages storage of large analytical datasets on HDFS and serves them out via two types of tables:
- Read Optimized Table - Provides excellent query performance via purely columnar storage (e.g. Parquet)
- Near-Real-Time Table (WIP) - Provides queries on real-time data, using a combination of columnar & row-based storage (e.g. Parquet + Avro)
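The near-real-time idea can be sketched as follows. This is a minimal, hypothetical Java illustration (not the Hoodie API): a query against such a table sees a merged view of the columnar base snapshot (e.g. Parquet) overlaid with newer row-based records (e.g. an Avro log), keyed by record ID. All class and field names here are assumptions for illustration only.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a merge-on-read style view: columnar base data
// overlaid with newer row-based updates. Not the actual Hoodie API.
public class MergeOnReadSketch {
    public static Map<String, String> mergedView(Map<String, String> baseFile,
                                                 Map<String, String> logUpdates) {
        // Start from the columnar base snapshot...
        Map<String, String> view = new LinkedHashMap<>(baseFile);
        // ...then overlay newer row-based records, key by key; an update to an
        // existing key wins over the stale base value, new keys are appended.
        view.putAll(logUpdates);
        return view;
    }

    public static void main(String[] args) {
        Map<String, String> base = new LinkedHashMap<>();
        base.put("ride-1", "fare=10");
        base.put("ride-2", "fare=12");

        Map<String, String> log = new LinkedHashMap<>();
        log.put("ride-2", "fare=15"); // update that arrived after the base was written
        log.put("ride-3", "fare=7");  // brand-new record

        System.out.println(mergedView(base, log));
        // → {ride-1=fare=10, ride-2=fare=15, ride-3=fare=7}
    }
}
```

The read-optimized table skips the overlay step entirely and reads only the compacted columnar files, which is what gives it its pure columnar query performance at the cost of data freshness.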
For more, head over here
Languages: Java 81.4%, Scala 16.7%, ANTLR 0.9%, Shell 0.8%, Dockerfile 0.2%