---
title: Hoodie Overview
keywords: homepage
tags: [getting_started]
sidebar: mydoc_sidebar
permalink: index.html
summary: Hoodie lowers data latency across the board, while simultaneously achieving orders of magnitude of efficiency over traditional batch processing.
---

Hoodie manages the storage of large analytical datasets on HDFS and serves them out via two types of tables:

  • Read Optimized Table - Provides excellent query performance via purely columnar storage (e.g. Parquet)
  • Near-Real-Time Table - Provides queries on real-time data, using a combination of columnar & row-based storage (e.g. Parquet + Avro)

{% include image.html file="hoodie_intro_1.png" alt="hoodie_intro_1.png" %}

By carefully managing how data is laid out in storage & how it's exposed to queries, Hoodie is able to power a rich data ecosystem where external sources can be ingested into Hadoop in near real-time. The ingested data is then available to interactive SQL engines like Presto & Spark, while at the same time capable of being consumed incrementally by processing/ETL frameworks like Hive & Spark to build derived (Hoodie) datasets.

Hoodie broadly consists of a self-contained Spark library to build datasets and integrations with existing query engines for data access.
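As a rough illustration of the library side, writes go through a Spark write client that groups changes into atomic commits on the dataset's base path. The sketch below is illustrative only: class and method names (`HoodieWriteClient`, `HoodieWriteConfig`, `startCommit`, `upsert`) follow the Hoodie codebase, but the exact paths, table name, and elided record construction are assumptions, not a definitive recipe.

```scala
// A minimal sketch of writing to a Hoodie dataset with the Spark library.
// Paths and the table name below are placeholders; building the input
// JavaRDD[HoodieRecord] from source data is elided.
import com.uber.hoodie.HoodieWriteClient
import com.uber.hoodie.config.HoodieWriteConfig
import org.apache.spark.api.java.JavaSparkContext

val jsc = new JavaSparkContext("local[2]", "hoodie-example")

val cfg = HoodieWriteConfig.newBuilder()
  .withPath("hdfs:///tmp/hoodie/sample_table")  // base path of the dataset
  .forTable("sample_table")
  .build()

val client = new HoodieWriteClient(jsc, cfg)

// Each batch of changes is applied under a single atomic commit.
val commitTime = client.startCommit()
// val writeStatuses = client.upsert(records, commitTime)  // records: JavaRDD[HoodieRecord]
```

Once committed, the same files are what the query-engine integrations expose as the tables described above.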

{% include callout.html content="Hoodie is a new project. The Near-Real-Time Table implementation is currently underway. Get involved here" type="info" %}