Support decoding nested array columns in the parquet reader:
1. The FE should generate the right nested column type. Currently the FE doesn't check nesting depth or type legality, e.g. `map<array<int>, int>`.
2. `ParquetColumnReader` no longer applies page-index filtering when reading nested array types, because skipping values inside nested complex types is too difficult. Page-index filtering and lazy read may be supported in a later PR.
3. Fix a bug in `ExternalFileScanNode` when creating the default value expression.
4. Reading repetition levels in a while loop may be slow; this will be optimized in the next PR (see the sketch after this list).
5. An array column has temporary `SchemaElement`s in its thrift definition; the former implementation removed them and kept their parent. The remaining parent should inherit the repetition and definition levels of its child.
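For readers unfamiliar with Parquet's level encoding, here is a minimal, self-contained sketch of how an array column's offsets can be reconstructed from repetition/definition levels. This is not the actual `ParquetColumnReader` code; names are invented and the layout is simplified to a one-level `array<int>`:

```cpp
#include <cstdint>
#include <vector>

struct DecodedArray {
    std::vector<int32_t> offsets; // offsets[i]..offsets[i+1] bound row i's elements
    std::vector<int64_t> values;  // only materialized (fully defined) leaves
};

// rep == 0 starts a new top-level row; rep > 0 continues the current array.
// def == max_def_level means a real element; a smaller def means the slot is
// a null or empty array and contributes no leaf value. A real reader would
// also maintain a null map for rows whose array itself is null.
DecodedArray decode_levels(const std::vector<int16_t>& rep,
                           const std::vector<int16_t>& def,
                           const std::vector<int64_t>& leaf_values,
                           int16_t max_def_level) {
    DecodedArray out;
    out.offsets.push_back(0);
    size_t value_idx = 0;
    for (size_t i = 0; i < rep.size(); ++i) {
        if (rep[i] == 0 && i > 0) {
            // A new row begins: close the previous array.
            out.offsets.push_back(static_cast<int32_t>(out.values.size()));
        }
        if (def[i] == max_def_level) {
            out.values.push_back(leaf_values[value_idx++]);
        }
    }
    out.offsets.push_back(static_cast<int32_t>(out.values.size()));
    return out;
}
```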
Improve the performance of the parquet reader's filter calculation (see the sketch after this list):
- Merge filters through the raw `filter_data` pointer instead of `(*filter_ptr)`, to improve performance.
- Use the mutable-column filter function instead of the filter function that creates a new column, which was introduced by #16850.
- Pass the column pointer by reference to avoid increasing the column's ref count, which caused unnecessary copying.
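A minimal illustration of the first point, with simplified types (the real change operates on Doris's internal column/filter classes): hoisting the raw buffer pointer out of the loop avoids going through a shared pointer on every element.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using IColumnFilter = std::vector<uint8_t>; // 1 = keep row, 0 = drop row

// Merge `src` into `dst` (same length assumed). Fetching .data() once keeps
// the hot loop a plain pointer walk instead of a per-element dereference of
// a (possibly ref-counted) filter handle.
void merge_filter(IColumnFilter& dst, const IColumnFilter& src) {
    uint8_t* dst_data = dst.data();
    const uint8_t* src_data = src.data();
    const size_t n = dst.size();
    for (size_t i = 0; i < n; ++i) {
        dst_data[i] &= src_data[i]; // row survives only if both filters keep it
    }
}
```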
This PR does two things:
1. Fix: `JdbcExecutor` used `column[0]` to judge the class type, but `column[0]` may be null!
2. Enhancement: in the original logic, all fields of a JDBC catalog table were set to Nullable. This is inefficient for fields that are in fact non-nullable. We can learn through JDBC whether a field in the source table is nullable, so the corresponding field in the Doris JDBC catalog can be set to nullable or not accordingly.
The `LoadScanProvider` doesn't read hidden columns from the stream load parameters, which may cause stream load delete operations to fail. This PR passes the hidden columns to `LoadScanProvider`.
The return type of `sub_bitmap` should be ALWAYS_NULLABLE rather than depending on its children. For example, `sub_bitmap(bitmap_empty(), 1, 2)` returns NULL even though none of its children are null.
- Sense IO errors, and retry the query when an IO error occurs.
- Greylist: when one disk is found to be completely broken, or the difference between the tablet counts in BE and FE meta is too large, reduce the query priority of that BE.
- On Red Hat 4.x, /proc/meminfo has no MemAvailable field; detect this and disable MemAvailable-based memory control (see the sketch after this list).
- Record vm_rss_str and mem_available_str when GC is triggered, so that memory changes during GC do not make the logs inaccurate.
- Catch bad_alloc in the join probe, which may allocate 64 GB of memory at a time, to avoid OOM.
- Rename `doris_be_all_segments_num` and `doris_be_all_rowsets_num` in the documentation.
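A sketch of the MemAvailable detection with a hypothetical helper name (not the actual BE code): when the field is absent, the caller should disable MemAvailable-based control instead of treating availability as zero.

```cpp
#include <cstdint>
#include <fstream>
#include <sstream>
#include <string>

// Returns true and fills *out_kb if /proc/meminfo exposes MemAvailable.
// Old kernels (e.g. the ones shipped with Red Hat 4.x) lack this field.
bool read_mem_available_kb(int64_t* out_kb) {
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    while (std::getline(meminfo, line)) {
        // Lines look like: "MemAvailable:   123456 kB"
        if (line.rfind("MemAvailable:", 0) == 0) {
            std::istringstream iss(line.substr(13));
            iss >> *out_kb;
            return true;
        }
    }
    return false; // field absent: disable MemAvailable-based memory control
}
```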
The body of a CREATE VIEW statement is parsed twice. In the second parse, we get the SQL string from the `CreateViewStmt.viewDefStmt.toSql()` function, which missed the select list.
Consider this SQL:

```sql
select *
from
    (select * from test_1) a
inner join
    (select * from test_2) b
on a.id = b.id
inner join
    (select * from test_3) c
on a.id = c.id
```
Because `a.id` comes from a subquery, we need to use the `getSrcSlotRef()` function to find its source table.
A fulltext index is an inverted index with a specified tokenizer. Before this PR, a fulltext index could only evaluate MATCH predicates; this PR adds support for evaluating equal predicates and list (IN) predicates.
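A hypothetical sketch of the idea (`TermDict`, `RowBitmap`, and `lookup_exact` are illustrative names, not the actual index API): an exact, un-tokenized term lookup lets the same inverted index answer `=` and `IN`.

```cpp
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <vector>

struct RowBitmap {
    std::set<uint32_t> rows;
    void unite(const RowBitmap& o) { rows.insert(o.rows.begin(), o.rows.end()); }
};

struct TermDict {
    std::map<std::string, RowBitmap> postings; // term -> rows containing it
    RowBitmap lookup_exact(const std::string& term) const {
        auto it = postings.find(term);
        return it == postings.end() ? RowBitmap{} : it->second;
    }
};

// Equal predicate: a single exact-term lookup.
RowBitmap eval_eq(const TermDict& dict, const std::string& v) {
    return dict.lookup_exact(v);
}

// List (IN) predicate: the union of the exact-term lookups.
RowBitmap eval_in(const TermDict& dict, const std::vector<std::string>& vs) {
    RowBitmap result;
    for (const auto& v : vs) result.unite(dict.lookup_exact(v));
    return result;
}
```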
In version 1.2.1, users could set `"hadoop.username" = "xxx"` when creating a hive catalog to specify the remote user for accessing HDFS. But in version 1.2.2 we upgraded Hadoop from 2.8 to 3.3; some behavior changed, and the user-specified remote user no longer takes effect. This PR fixes it by delegating through `UserGroupInformation`.
- Changes for Nereids
1. Add a variable-length parameter to the constructor of Count, for better error reporting on Count(a, b).
2. Refactor StringRegexPredicate to inherit from ScalarFunction.
3. Remove the useless class TypeCollection.
4. Use catalog.Type.Collection to check expression argument types.
5. Change type coercion for TimestampArithmetic, divide, integral divide, comparison predicates, CASE WHEN, and IN predicates, to match the legacy planner.
- Changes for the legacy planner
1. Change the common type of floating point and Decimal from Decimal to Double.
MoW marks all duplicate primary keys as deleted, so we can add a DCHECK during compaction: if MoW's delete bitmap works incorrectly, we are able to detect this kind of issue ASAP.
In a debug build the DCHECK will crash the BE; in a release build the compaction will fail, and the load will eventually fail with -235.
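An illustrative sketch of such a check, assuming a glog-style DCHECK (names are invented; this is not the actual compaction code): after the delete bitmap has filtered input rows, any adjacent equal keys that survive the merge indicate a broken bitmap.

```cpp
#include <cstddef>
#include <vector>
#include <glog/logging.h> // DCHECK

// In a debug build DCHECK aborts immediately; in release it compiles away,
// and the caller fails the compaction via *ok instead (surfacing as -235).
template <typename Key>
void check_no_duplicate_keys(const std::vector<Key>& merged_keys, bool* ok) {
    *ok = true;
    for (size_t i = 1; i < merged_keys.size(); ++i) {
        DCHECK(merged_keys[i - 1] != merged_keys[i])
                << "duplicate key survived MoW delete bitmap at row " << i;
        if (merged_keys[i - 1] == merged_keys[i]) *ok = false;
    }
}
```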
Mainly includes:
- The brpc service adds two types of thread pools, "light" and "heavy", with different thread counts. BE interfaces are classified accordingly: those related to data transmission are heavy interfaces, the others are light interfaces (see the sketch after this list).
- Add monitoring to the thread pools, including queue size and the number of active threads, and use these indicators to guide the configuration of thread counts.
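A sketch of the classification idea with invented helper names (the RPC method names below are examples of data-transmission endpoints, used here for illustration):

```cpp
#include <atomic>
#include <cstdint>
#include <string>

enum class PoolKind { HEAVY, LIGHT };

// The monitoring indicators mentioned above: exported per pool.
struct PoolMetrics {
    std::atomic<int64_t> queue_size{0};
    std::atomic<int64_t> active_threads{0};
};

// Data-plane RPCs move tuple/block payloads and go to the heavy pool;
// everything else (metadata, control) goes to the light pool.
PoolKind classify(const std::string& method) {
    if (method == "transmit_data" || method == "transmit_block" ||
        method == "tablet_writer_add_block") {
        return PoolKind::HEAVY;
    }
    return PoolKind::LIGHT;
}
```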
1. Bump the clucene version from 2.4.4 to 2.4.6.
2. Update the clucene build block in build-thirdparty.sh, adding the USE_BTHREAD CMake flag, which is inherited from Doris's USE_BTHREAD_SCANNER.
Issue Number: close #17003
## Problem summary
The linker couldn't find some symbols because the definition of the template member function `doris::vectorized::Decoder::init_decimal_converter` is missing from the header file that contains its declaration (see the sketch below).
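A minimal reproduction of the failure mode, with simplified names:

```cpp
// --- broken: the header declares, a single .cpp defines ---
// decoder.h:
//     struct Decoder {
//         template <typename T>
//         void init_decimal_converter(int precision, int scale); // declaration only
//     };
// Other translation units calling init_decimal_converter<Int128>() get
// "undefined reference", because no instantiation for their T is ever emitted.

// --- fixed: the definition lives in the header, next to the declaration ---
struct Decoder {
    template <typename T>
    void init_decimal_converter(int precision, int scale) {
        // body visible at every call site, so each TU can instantiate it
        (void)precision;
        (void)scale;
    }
};
```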
1. Disable join reorder in Nereids if the session variable disable_join_reorder is true.
2. Add a session variable max_table_count_use_cascades_join_reorder (default value 10) to control the join reorder algorithm in Nereids: DPhyp is used only when enable_dphyp_optimizer is true and the joined table count is more than max_table_count_use_cascades_join_reorder.
fmt::format doesn't support arbitrary objects as arguments unless a formatter is provided, even if they implement `to_string()` or `operator<<`. As a result, the original code may print `false` instead of the real cause of the failure, so `to_string()` needs to be invoked manually.
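A sketch of the pitfall with a made-up `Status` type (whether the implicit bool conversion fails to compile or silently formats as `false` depends on the fmt version):

```cpp
#include <fmt/format.h>
#include <string>

struct Status {
    std::string msg;
    operator bool() const { return msg.empty(); } // ok == true
    std::string to_string() const { return msg.empty() ? "OK" : msg; }
};

int main() {
    Status st{"file not found"};
    // fmt::print("open failed: {}\n", st); // may pick the bool conversion
    //                                      // and print "open failed: false"
    fmt::print("open failed: {}\n", st.to_string()); // prints the real cause
}
```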
Signed-off-by: freemandealer <freeman.zhang1992@gmail.com>
Change the signatures of lead(), lag(), first_value(), and last_value() to match the legacy optimizer; these four functions only support Type.trivialTypes as the return type and the input column type.
In multi-table join queries, many IN or NOT_IN runtime-filter predicates are pushed down to the storage layer. According to our tests, applying those predicates through the inverted index degrades performance, because an IN predicate produced by a runtime filter can carry many conditions.
Based on that, this PR stops applying the inverted index to IN or NOT_IN predicates produced by runtime filters (see the sketch below).
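A sketch with invented field names of how such predicates can be skipped:

```cpp
struct ColumnPredicateInfo {
    enum Kind { EQ, IN, NOT_IN } kind;
    bool from_runtime_filter = false; // set where runtime filters are pushed down
};

// Many-valued IN lists from runtime filters degrade index lookup, so they
// fall back to evaluation on the decoded column instead.
bool should_evaluate_by_inverted_index(const ColumnPredicateInfo& pred) {
    if ((pred.kind == ColumnPredicateInfo::IN ||
         pred.kind == ColumnPredicateInfo::NOT_IN) &&
        pred.from_runtime_filter) {
        return false;
    }
    return true;
}
```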
This code in VCollectIterator::build_heap can cause a double free if cumu_iter->init() fails and returns early, because some LevelIterator* exist in both VCollectIterator::_children and cumu_iter::_children.
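A distilled version of the bug and one way to avoid it (simplified names, not the actual Doris code): with single ownership via `std::unique_ptr`, an early return cannot leave two owners of the same iterator.

```cpp
#include <memory>
#include <utility>
#include <vector>

struct LevelIterator {
    virtual ~LevelIterator() = default;
    virtual bool init() { return true; }
};

struct Owner {
    std::vector<std::unique_ptr<LevelIterator>> children; // sole ownership
};

bool build_heap(Owner& collect_iter, Owner& cumu_iter) {
    // Move, don't copy: after this, only cumu_iter owns the iterators. With
    // raw pointers in both lists, a failed init() below would leave each
    // iterator reachable from two owners -> double free on teardown.
    for (auto& child : collect_iter.children) {
        cumu_iter.children.push_back(std::move(child));
    }
    collect_iter.children.clear();
    // Even if init() fails and we return early, each iterator has one owner.
    return !cumu_iter.children.empty() && cumu_iter.children.front()->init();
}
```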