Currently we assign buckets by record partition path, which can cause
hotspots if the partition field is a datetime type. This changes the
assignment to group records by their key first; the assignment is valid
only if there is no conflict (two tasks writing to the same bucket).
This patch also changes the coordinator execution to be asynchronous.
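A minimal sketch of the key-based assignment idea, assuming a simple hash of
the record key modulo the bucket count; the class and method names are
illustrative, not the actual implementation:

```java
import java.util.Objects;

public final class KeyHashBucketAssigner {
  private final int numBuckets; // e.g. the write task parallelism

  public KeyHashBucketAssigner(int numBuckets) {
    this.numBuckets = numBuckets;
  }

  /** Records with the same key always land in the same bucket. */
  public int assignBucket(String recordKey) {
    // Math.floorMod keeps the result non-negative even for negative hash codes.
    return Math.floorMod(Objects.hashCode(recordKey), numBuckets);
  }
}
```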
Currently we do a soft delete for DELETE row data when writing into a hoodie
table. For streaming reads of a MOR table, the Flink reader detects the
delete records and still emits them, as long as the record key semantics
are kept.
This is useful, and actually a must, for incremental computation in
streaming ETL pipelines.
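A minimal sketch of how such a retraction could be emitted on the read path,
assuming the record key is the first field and using Flink's RowKind; the
class and method names are illustrative, not the actual reader code:

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.types.RowKind;
import org.apache.flink.util.Collector;

public final class DeleteAwareEmitter {
  /** Emits a DELETE row carrying the record key; other fields stay null. */
  public void emitDelete(String recordKey, int arity, Collector<RowData> out) {
    GenericRowData row = new GenericRowData(RowKind.DELETE, arity);
    row.setField(0, StringData.fromString(recordKey)); // assumes key is field 0
    out.collect(row);
  }
}
```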
Read optimized query returns the records from:
* COW table: the latest parquet files
* MOR table: parquet file records as of the latest committed compaction
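A hedged example of issuing such a query through the Spark datasource; the
option key and value follow the documented Hudi datasource options, and the
table path is a placeholder:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public final class ReadOptimizedQuery {
  public static Dataset<Row> readOptimized(SparkSession spark, String basePath) {
    return spark.read()
        .format("hudi")
        // Only base (parquet) files are read: the latest files for COW,
        // the files as of the latest committed compaction for MOR.
        .option("hoodie.datasource.query.type", "read_optimized")
        .load(basePath);
  }
}
```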
* [HUDI-1653] Add support for composite keys in NonpartitionedKeyGenerator
* update NonpartitionedKeyGenerator to support composite record keys
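Illustrative writer options (hedged; the option keys follow the Hudi docs,
the field names are made up) showing a composite record key used with
NonpartitionedKeyGenerator after this change:

```java
import java.util.HashMap;
import java.util.Map;

public final class CompositeKeyOptions {
  public static Map<String, String> options() {
    Map<String, String> opts = new HashMap<>();
    // Two fields combined into one record key.
    opts.put("hoodie.datasource.write.recordkey.field", "user_id,event_id");
    // Non-partitioned table: keys are generated without a partition path.
    opts.put("hoodie.datasource.write.keygenerator.class",
        "org.apache.hudi.keygen.NonpartitionedKeyGenerator");
    return opts;
  }
}
```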
The SQL PRIMARY KEY semantics are very close to the Hoodie record key;
using PRIMARY KEY is a more straightforward way to define it than the table
option hoodie.datasource.write.recordkey.field.
After this change, both PRIMARY KEY and the table option can define the
hoodie record key, while PRIMARY KEY has higher priority if both are
defined.
Note: a column with a PRIMARY KEY constraint is forced to be non-nullable.
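A hedged Flink SQL example of defining the record key through PRIMARY KEY;
the table name, columns, and path are illustrative:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public final class PrimaryKeyDdl {
  public static void main(String[] args) {
    TableEnvironment tEnv =
        TableEnvironment.create(EnvironmentSettings.newInstance().build());
    tEnv.executeSql(
        "CREATE TABLE hoodie_orders ("
            + "  order_id STRING,"
            + "  amount DOUBLE,"
            + "  ts TIMESTAMP(3),"
            + "  PRIMARY KEY (order_id) NOT ENFORCED" // becomes the record key
            + ") WITH ("
            + "  'connector' = 'hudi',"
            + "  'path' = 'file:///tmp/hoodie_orders'"
            + ")");
  }
}
```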
We should implement the interface HoodieTableSource.explainSource to
track the table source signature diff for all kinds of pushdown,
such as filter pushdown or limit pushdown.
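A minimal sketch of what such an explainSource summary could look like, with
illustrative field names, so that sources with different pushdowns get
distinct signatures in the plan:

```java
import java.util.List;

public final class ExplainSourceSketch {
  private List<String> appliedFilters; // e.g. rendered filter expressions
  private long limit = -1L;            // -1 means no limit pushed down

  public String explainSource() {
    StringBuilder sb = new StringBuilder("HoodieTableSource");
    if (appliedFilters != null && !appliedFilters.isEmpty()) {
      sb.append(", filters=").append(appliedFilters);
    }
    if (limit >= 0) {
      sb.append(", limit=").append(limit);
    }
    return sb.toString();
  }
}
```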
* [HUDI-845] Added locking capability to allow multiple writers
1. Added LockProvider API for pluggable lock methodologies
2. Added Resolution Strategy API to allow for pluggable conflict resolution
3. Added TableService client API to schedule table services
4. Added Transaction Manager for wrapping actions within transactions
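Hedged writer options enabling the multi-writer path; the option keys follow
the Hudi concurrency control docs, and the ZooKeeper endpoint and paths are
placeholders:

```java
import java.util.HashMap;
import java.util.Map;

public final class MultiWriterOptions {
  public static Map<String, String> options() {
    Map<String, String> opts = new HashMap<>();
    opts.put("hoodie.write.concurrency.mode", "optimistic_concurrency_control");
    opts.put("hoodie.cleaner.policy.failed.writes", "LAZY");
    // Pluggable LockProvider implementation; ZooKeeper-based here.
    opts.put("hoodie.write.lock.provider",
        "org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider");
    opts.put("hoodie.write.lock.zookeeper.url", "zk-host:2181");   // placeholder
    opts.put("hoodie.write.lock.zookeeper.base_path", "/hudi/locks");
    return opts;
  }
}
```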
* [HUDI-1552] Improve performance of key lookups from base file in Metadata Table.
1. Cache the KeyScanner across lookups so that the HFile index does not have to be read for each lookup.
2. Enable block caching in KeyScanner.
3. Move the lock to a limited scope of the code to reduce lock contention.
4. Removed reuse configuration
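A minimal sketch of the caching idea, with illustrative class and method
names: keep one scanner open across lookups instead of re-reading the HFile
index each time, and keep the synchronized section narrow:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Optional;

public final class CachedKeyScanner implements Closeable {
  /** Hypothetical point-lookup reader over the metadata base file. */
  interface Scanner extends Closeable {
    Optional<byte[]> seekTo(String key) throws IOException;
  }

  private final Scanner scanner; // opened once, reused for all lookups

  public CachedKeyScanner(Scanner scanner) {
    this.scanner = scanner;
  }

  public Optional<byte[]> lookup(String key) throws IOException {
    // Lock only the lookup itself, not the (slow) reader creation.
    synchronized (scanner) {
      return scanner.seekTo(key);
    }
  }

  @Override
  public void close() throws IOException {
    scanner.close();
  }
}
```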
* Properly close the readers when the metadata table is accessed from executors
- Passing a reuse boolean into HoodieBackedTableMetadata
- Preserve the fast return behavior when reusing and opening from multiple threads (no contention)
- Handle concurrent close() and reader opens for reuse=false by always synchronizing
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
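A minimal sketch of the reuse semantics described above, with illustrative
names: fast, lock-free return of an already-open reader when reuse is
enabled, and always-synchronized open/close when it is not:

```java
import java.io.Closeable;
import java.io.IOException;

public final class ReusableReaderHolder<R extends Closeable> {
  interface ReaderFactory<R> { R open() throws IOException; }

  private final boolean reuse;
  private final ReaderFactory<R> factory;
  private volatile R cached;

  public ReusableReaderHolder(boolean reuse, ReaderFactory<R> factory) {
    this.reuse = reuse;
    this.factory = factory;
  }

  public R get() throws IOException {
    if (reuse) {
      R r = cached;
      if (r != null) {
        return r;              // fast path, no contention across threads
      }
      synchronized (this) {
        if (cached == null) {
          cached = factory.open();
        }
        return cached;
      }
    }
    synchronized (this) {      // reuse=false: always synchronize the open
      return factory.open();
    }
  }

  public void close() throws IOException {
    synchronized (this) {      // safe against a concurrent get()
      if (cached != null) {
        cached.close();
        cached = null;
      }
    }
  }
}
```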
* Adding SchemeAwareFSDataInputStream to abstract out special handling for GCSFileSystem
* Moving wrapping of fsDataInputStream to separate method in HoodieLogFileReader
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
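A hedged sketch of the wrapping idea: choose special handling based on the
filesystem scheme when opening the log file stream; the helper names are
illustrative, not the actual Hudi API:

```java
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class SchemeAwareStreams {
  public static FSDataInputStream open(FileSystem fs, Path path, int bufferSize)
      throws IOException {
    FSDataInputStream in = fs.open(path, bufferSize);
    if ("gs".equals(fs.getScheme())) {
      // GCS streams need extra care, so wrap them separately.
      return wrapForGcs(in);
    }
    return in;
  }

  private static FSDataInputStream wrapForGcs(FSDataInputStream in) {
    // Placeholder for the scheme-aware wrapper the change introduces.
    return in;
  }
}
```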
In order to support object storage, we need these changes:
* Use the Hadoop filesystem so that we can find the plugin filesystem
* Do not fetch file size until the file handle is closed
* Do not close the opened filesystem because we want to use the
filesystem cache
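A hedged sketch of the pattern these changes follow, using only standard
Hadoop FileSystem calls; the write helper itself is illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class ObjectStoreWriteSketch {
  public static long writeAndGetSize(Configuration conf, Path path, byte[] data)
      throws IOException {
    FileSystem fs = path.getFileSystem(conf); // plugin fs resolved by scheme
    try (FSDataOutputStream out = fs.create(path, true)) {
      out.write(data);
    } // the size is only reliable after the handle is closed on object stores
    long size = fs.getFileStatus(path).getLen();
    // Intentionally do NOT call fs.close(): the instance is shared via the
    // Hadoop filesystem cache.
    return size;
  }
}
```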
A Flink SQL table has a DDL that defines the table schema; we can use that
to infer the Avro schema, so there is no need to declare an Avro schema
explicitly anymore.
But we still keep the config option for an explicit Avro schema in case
there are corner cases where the inferred schema is not correct
(especially for nullability).
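A hedged sketch of the inference, using Flink's AvroSchemaConverter on the
DDL row type; the field names are illustrative:

```java
import org.apache.avro.Schema;
import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public final class SchemaInference {
  public static Schema inferAvroSchema() {
    DataType rowType = DataTypes.ROW(
        DataTypes.FIELD("order_id", DataTypes.STRING().notNull()),
        DataTypes.FIELD("amount", DataTypes.DOUBLE()),
        DataTypes.FIELD("ts", DataTypes.TIMESTAMP(3)));
    // Nullability from the DDL is carried into the Avro schema (union with null).
    return AvroSchemaConverter.convertToSchema(rowType.getLogicalType());
  }
}
```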
Supports two read modes:
* Read the full data set as of the latest commit instant, then the
subsequent incremental data sets
* Read the data set starting from a specified commit instant
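A hedged Flink SQL example of the streaming read options; the option keys
follow the Hudi Flink options of this era ('read.streaming.enabled',
'read.streaming.start-commit'), and the instant time and path are
placeholders:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public final class StreamingReadModes {
  public static void main(String[] args) {
    TableEnvironment tEnv =
        TableEnvironment.create(EnvironmentSettings.newInstance().build());
    tEnv.executeSql(
        "CREATE TABLE hoodie_orders ("
            + "  order_id STRING,"
            + "  amount DOUBLE"
            + ") WITH ("
            + "  'connector' = 'hudi',"
            + "  'path' = 'file:///tmp/hoodie_orders',"
            + "  'table.type' = 'MERGE_ON_READ',"
            + "  'read.streaming.enabled' = 'true',"
            // Omit the start commit to read the full data set as of the latest
            // instant and then its increments; set it to start from a specific
            // commit instant instead.
            + "  'read.streaming.start-commit' = '20210427101530'"
            + ")");
  }
}
```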