[SparkDpp] Support complete types (#4524)

For [Spark Load]:
1. Support decimal and largeint
2. Add validation logic for char/varchar/decimal
3. Check data loaded from Hive under strict mode
4. Support decimal/date/datetime aggregators
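The validation added for char/varchar/decimal (item 2) can be sketched roughly as follows. This is a hypothetical illustration, not the Doris/SparkDpp implementation: the function names `validate_char` and `validate_decimal` are invented here, and the real code validates during the Spark ETL phase with richer type metadata.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical sketch: reject a char/varchar value whose byte length
// exceeds the column's declared length. Under strict mode such rows
// would be counted as filtered instead of silently truncated.
bool validate_char(const std::string& value, size_t column_len) {
    return value.size() <= column_len;
}

// Hypothetical sketch: reject a decimal literal whose integer-part
// digits exceed (precision - scale) or whose fraction digits exceed
// scale, mirroring the precision/scale checks a decimal column needs.
bool validate_decimal(const std::string& value, int precision, int scale) {
    std::string s = value;
    if (!s.empty() && (s[0] == '+' || s[0] == '-')) {
        s = s.substr(1);  // strip the sign before counting digits
    }
    const size_t dot = s.find('.');
    const size_t int_digits = (dot == std::string::npos) ? s.size() : dot;
    const size_t frac_digits =
        (dot == std::string::npos) ? 0 : s.size() - dot - 1;
    return int_digits <= static_cast<size_t>(precision - scale) &&
           frac_digits <= static_cast<size_t>(scale);
}
```

Under strict mode a row failing either check would be rejected and counted toward the load's filtered-row threshold; in non-strict mode it could instead be loaded as NULL.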
This commit is contained in:
wangbo
2020-09-13 11:57:33 +08:00
committed by GitHub
parent 4caa6f9b33
commit 2c24fe80fa
6 changed files with 467 additions and 49 deletions

@ -1026,7 +1026,7 @@ OLAPStatus PushBrokerReader::next(ContiguousRow* row) {
const void* value = _tuple->get_slot(slot->tuple_offset());
// try execute init method defined in aggregateInfo
// by default it only copies data into cell
_schema->column(i)->consume(&cell, (const char*)value, is_null,
_mem_pool.get(), _runtime_state->obj_pool());
// if column(i) is a value column, try execute finalize method defined in aggregateInfo
// to convert data into final format
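The consume/finalize pattern described in the comments above can be illustrated with a minimal standalone sketch. The `Cell` struct and `max_consume`/`max_finalize` functions below are hypothetical stand-ins for the aggregateInfo machinery, shown here only to clarify the two-phase flow: consume folds each incoming value into the cell, and finalize converts the accumulated state into its final storage format.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Hypothetical aggregation cell (the real code works on typed
// ContiguousRow cells backed by a MemPool).
struct Cell {
    int64_t value = 0;
    bool is_null = true;
};

// MAX aggregator sketch: consume keeps the larger of the current
// cell value and the incoming value, ignoring NULL inputs.
void max_consume(Cell* cell, int64_t incoming, bool incoming_null) {
    if (incoming_null) return;
    cell->value = cell->is_null ? incoming : std::max(cell->value, incoming);
    cell->is_null = false;
}

// For MAX the accumulated state is already the final representation,
// so finalize is a no-op; aggregators with intermediate state (e.g.
// HLL) would serialize to their storage format here.
void max_finalize(Cell* /*cell*/) {}
```

This mirrors the diff: `consume` is called per value column for each row read, and `finalize` runs afterward on value columns to produce the final on-disk format.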