Consider the following SQL:
```
select *
from
    (select * from test_1) a
inner join
    (select * from test_2) b
on a.id = b.id
inner join
    (select * from test_3) c
on a.id = c.id
```
Because a.id comes from a subquery, we need to use the function getSrcSlotRef() to find its source table.
A fulltext index is an inverted index built with the specified tokenizer. Before this PR, a fulltext index could only evaluate the match predicate; this PR adds support for evaluating equality predicates and list (IN) predicates.
In version 1.2.1, users could set `"hadoop.username" = "xxx"` when creating a Hive catalog
to specify the remote user for accessing HDFS.
But in version 1.2.2 we upgraded Hadoop from 2.8 to 3.3; some behavior changed and the
user-specified remote user no longer takes effect.
This PR fixes the issue by delegating through `UserGroupInformation`.
- Changes for Nereids
1. Add a variable-length parameter to the constructor of Count so that Count(a, b) reports a proper error.
2. Refactor StringRegexPredicate to inherit from ScalarFunction.
3. Remove the unused class TypeCollection.
4. Use catalog.Type.Collection to check expression argument types.
5. Change type coercion for TimestampArithmetic, divide, integral divide, comparison predicates, CASE WHEN, and IN predicates to match the legacy planner.
- Changes for the legacy planner
1. Change the common type of floating-point and Decimal from Decimal to Double.
MoW marks all duplicate primary keys as deleted, so we can add a DCHECK during compaction; if MoW's delete bitmap works incorrectly, we are able to detect this kind of issue ASAP.
In a debug build, the DCHECK will make the BE crash; in a release build, the compaction will fail and the load will eventually fail with -235.
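A minimal sketch of the DCHECK idea, with hypothetical names (`DeleteBitmap::contains`, the check helper); the actual compaction code differs:

```cpp
#include <cstdint>
#include <set>
#include <utility>
#include <glog/logging.h>  // DCHECK compiles away in release (NDEBUG) builds

// Hypothetical delete bitmap: records (segment_id, rowid) pairs marked deleted.
struct DeleteBitmap {
    std::set<std::pair<uint32_t, uint32_t>> deleted;
    bool contains(uint32_t segment_id, uint32_t rowid) const {
        return deleted.count({segment_id, rowid}) > 0;
    }
};

// Inside the merge loop of a merge-on-write table, every older duplicate of a
// primary key must already be covered by the delete bitmap; if not, the
// bitmap is broken and we want to know as early as possible.
void check_duplicate_marked_deleted(const DeleteBitmap& bitmap,
                                    uint32_t segment_id, uint32_t rowid) {
    // Debug build: crashes the BE at the first inconsistency.
    // Release build: DCHECK is a no-op, so the compaction fails later and the
    // load eventually fails with -235, as described above.
    DCHECK(bitmap.contains(segment_id, rowid))
            << "duplicate key not marked deleted, segment=" << segment_id
            << " rowid=" << rowid;
}
```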
Mainly includes:
- The brpc service adds two types of thread pools, "light" and "heavy", with different thread counts. The BE interfaces are classified accordingly: those related to data transmission are heavy interfaces, the others are light interfaces (see the sketch after this list).
- Add some monitoring to the thread pools, including queue size and the number of active threads, and use these indicators to guide the configuration of thread counts.
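A minimal sketch of that classification, with illustrative method names; the actual brpc service code differs:

```cpp
#include <functional>
#include <string>
#include <utility>

// Stand-in for a real thread pool: a real implementation queues the task to
// worker threads; here it just runs inline to keep the sketch short.
struct ThreadPool {
    void submit(std::function<void()> task) { task(); }
};

// Route each brpc method to the pool matching its cost profile: interfaces
// that move data are "heavy", cheap control-plane interfaces are "light".
// The method names below are examples, not an exhaustive classification.
void dispatch_rpc(const std::string& method, std::function<void()> handler,
                  ThreadPool* light_pool, ThreadPool* heavy_pool) {
    bool is_heavy = (method == "transmit_data" || method == "transmit_block");
    (is_heavy ? heavy_pool : light_pool)->submit(std::move(handler));
}
```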
1. Upgrade the clucene version from 2.4.4 to 2.4.6.
2. Update the clucene build block in build-thirdparty.sh, adding the USE_BTHREAD CMake flag; this flag is inherited from Doris's USE_BTHREAD_SCANNER.
Issue Number: close #17003
## Problem summary
The linker couldn't find some symbols because the implementation of the template member function doris::vectorized::Decoder::init_decimal_converter was missing from the header file in which the corresponding declaration is placed.
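A minimal reproduction of this class of linker error, with hypothetical names (`Decoder`, `init`) rather than the actual Doris code:

```cpp
// ---- decoder.h ----
struct Decoder {
    template <typename T>
    void init();  // declared here, but no definition visible to includers
};

// ---- decoder.cpp ----
// The definition sits in a single translation unit, so the compiler never
// emits an instantiation for the T that main.cpp needs.
template <typename T>
void Decoder::init() { /* ... */ }

// ---- main.cpp ----
int main() {
    Decoder d;
    d.init<int>();  // link error: undefined reference to Decoder::init<int>()
}

// Fix: move the template definition into decoder.h, or add an explicit
// instantiation (template void Decoder::init<int>();) in decoder.cpp.
```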
1. Disable join reorder in Nereids if the session variable disable_join_reorder is true.
2. Add a session variable max_table_count_use_cascades_join_reorder (default 10) to control the join reorder algorithm in Nereids: DPhyp is used only when enable_dphyp_optimizer is true and the joined-table count is greater than max_table_count_use_cascades_join_reorder. A sketch of the selection logic follows.
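The sketch, in C++ for consistency with the other examples here (the real implementation is in the FE planner; all names besides the session variables are illustrative):

```cpp
enum class JoinReorderAlgorithm { NONE, CASCADES, DPHYP };

// Session variables described in this PR; only the threshold's default (10)
// is taken from the PR text, the other defaults are illustrative.
struct SessionVariables {
    bool disable_join_reorder = false;
    bool enable_dphyp_optimizer = false;
    int max_table_count_use_cascades_join_reorder = 10;
};

JoinReorderAlgorithm choose_join_reorder(const SessionVariables& vars,
                                         int joined_table_count) {
    if (vars.disable_join_reorder) return JoinReorderAlgorithm::NONE;
    // DPhyp only when enabled and the query joins more tables than the
    // cascades threshold; otherwise fall back to cascades join reorder.
    if (vars.enable_dphyp_optimizer &&
        joined_table_count > vars.max_table_count_use_cascades_join_reorder) {
        return JoinReorderAlgorithm::DPHYP;
    }
    return JoinReorderAlgorithm::CASCADES;
}
```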
fmt::format doesn't support non-formattable objects as args, even if they implement
`to_string()` or `operator<<`. So the original code may cause `false` to be printed
instead of the real cause of the failure, and `to_string()` needs to be invoked manually.
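A minimal sketch of the pitfall, assuming a hypothetical Status-like type that is implicitly convertible to bool (the exact behavior depends on the fmt version):

```cpp
#include <fmt/format.h>
#include <string>

// Hypothetical Status-like type: no fmt formatter, but convertible to bool.
struct Status {
    std::string msg;
    bool ok() const { return msg.empty(); }
    operator bool() const { return ok(); }  // enables the surprising conversion
    std::string to_string() const { return ok() ? "OK" : msg; }
};

int main() {
    Status st{"disk quota exceeded"};
    // fmt::print("failed: {}\n", st);
    // ^ Depending on the fmt version this either prints "failed: false" via
    //   the implicit bool conversion or fails to compile; either way the real
    //   cause is lost. Invoke to_string() manually instead:
    fmt::print("failed: {}\n", st.to_string());
    return 0;
}
```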
Signed-off-by: freemandealer <freeman.zhang1992@gmail.com>
Change the signatures of lead(), lag(), first_value(), last_value() to match the legacy optimizer;
these four functions only support Type.trivialTypes as the return type and the input column type.
In multi-table join queries, many IN or NOT IN runtime-filter predicates get pushed down to the storage layer. According to our tests, applying those predicates through the inverted index degrades performance because an in_predicate can contain many conditions. Therefore, the inverted index should not be applied to IN or NOT IN predicates produced by a runtime filter.
Based on that situation, this PR does the following:
do not apply the inverted index to IN or NOT IN predicates produced by a runtime filter.
This code in VCollectIterator::build_heap can cause a double free if cumu_iter->init() fails and returns early, because some LevelIterator* pointers exist in both VCollectIterator::_children and cumu_iter::_children.
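A stripped-down sketch of one way to avoid the double free, using simplified stand-in types rather than the real VCollectIterator code:

```cpp
#include <vector>

// Simplified stand-in for the real iterator type: each holder deletes the
// raw pointers in its _children when it is destroyed.
struct LevelIterator {
    std::vector<LevelIterator*> _children;
    virtual bool init() { return true; }
    virtual ~LevelIterator() {
        for (LevelIterator* child : _children) delete child;
    }
};

// The hazard: the same LevelIterator* is reachable from both the outer
// _children and cumu_iter->_children. If init() fails and we return early,
// deleting cumu_iter frees the shared children once, and the outer holder's
// destructor frees them again. One fix: detach the shared children from one
// owner before the early return.
bool build_heap(std::vector<LevelIterator*>& outer_children,
                LevelIterator* cumu_iter) {
    if (!cumu_iter->init()) {
        cumu_iter->_children.clear();  // outer_children keeps sole ownership
        delete cumu_iter;
        return false;
    }
    return true;
}
```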
In the previous implementation, when querying a TVF, the FE gets the schema from the BE,
and the BE tries to open the first file to read its schema info. For the ORC or Parquet format,
if the file is empty (contains no rows), this returns an error.
But even for such an empty file, we can still get the schema info from the file's footer.
So we should handle empty files so that the schema info is fetched correctly.
Also modify the catalog docs to add some FAQs.
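As an illustration, the footer of a zero-row Parquet file can still be read for its schema; this sketch uses the parquet-cpp (Apache Arrow) API, which is not necessarily the reader Doris BE uses:

```cpp
#include <iostream>
#include <memory>
#include <parquet/api/reader.h>

int main() {
    // Open a Parquet file that may contain zero rows.
    std::unique_ptr<parquet::ParquetFileReader> reader =
            parquet::ParquetFileReader::OpenFile("/path/to/empty.parquet");

    std::shared_ptr<parquet::FileMetaData> md = reader->metadata();
    // Even when md->num_rows() == 0, the footer still carries the full schema.
    const parquet::SchemaDescriptor* schema = md->schema();
    for (int i = 0; i < schema->num_columns(); ++i) {
        std::cout << schema->Column(i)->path()->ToString() << std::endl;
    }
    return 0;
}
```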
There are two kinds of scanner thread pools: local and remote.
The local pool is for local file reads, especially the olap scanner.
The remote pool is for other external data sources, such as the file scanner and jdbc scanner.
This PR mainly changes:
For the olap scanner, use whether the rowset is cold or hot to decide between the local and remote pool (see the sketch below).
For other scanners, use the remote pool by default.
Add a new BE config doris_max_remote_scanner_thread_pool_thread_num, default 512,
which indicates the max thread number of the remote scanner thread pool.
This alleviates the interference between olap queries, load jobs, and external queries.
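A sketch of the pool-selection rule described above, with hypothetical names (`ScannerKind`, `is_cold_rowset`); the actual scheduler code differs:

```cpp
enum class ScannerKind { OLAP, FILE, JDBC };
enum class PoolKind { LOCAL, REMOTE };

// Olap scans over hot (local) rowsets stay on the local pool; cold rowsets
// and all external scanners go to the remote pool, which is capped by
// doris_max_remote_scanner_thread_pool_thread_num (default 512).
PoolKind choose_scanner_pool(ScannerKind kind, bool is_cold_rowset) {
    if (kind == ScannerKind::OLAP) {
        return is_cold_rowset ? PoolKind::REMOTE : PoolKind::LOCAL;
    }
    return PoolKind::REMOTE;
}
```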
Introduce a tool to start multiple FEs on a single node.
Use case:
```
$ ./multi-fe
./multi-fe start|stop|clean [OPTIONS ...]
start -n <NUM> -l <LIBRARY_PATH> -p <BASE_PORT>
Start the FE cluster.
-n The number of FEs.
-l The FE library path (default: doris/output/fe/lib)
-p The base port to generate all needed ports (default: 9030).
stop Stop the FE cluster.
clean Clean the data (rm -rf "$(pwd)"/fe*).
```
Currently, when filtering a column, a new column is created to store the filter result, which causes some performance loss. With this change, SSB-flat without pushdown expr improves from 19s to 15s.
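A minimal illustration of filtering in place instead of materializing a new column, using a plain vector as a stand-in for a real column (not Doris's IColumn code):

```cpp
#include <cstdint>
#include <vector>

// Compacts `col` in place according to `filter` (1 = keep), avoiding the
// allocation and copy of a brand-new result column.
void filter_in_place(std::vector<int64_t>& col,
                     const std::vector<uint8_t>& filter) {
    size_t out = 0;
    for (size_t i = 0; i < col.size(); ++i) {
        if (filter[i]) col[out++] = col[i];
    }
    col.resize(out);
}
```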
When querying the information_schema database, the BE calls an FE RPC
to get schema info such as the db name list, table name list, etc.
But some external catalogs fail to return this info because of wrong connection info.
We should catch this kind of exception and skip the catalog, so that the query can continue to
get schema info from the other catalogs.
Otherwise, the whole query on information_schema fails, even if the user only wants
info from the internal catalog.
Also set the jdbc connection timeout to 5s, to avoid the thrift RPC timeout from BE to FE (default is 30s).
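The catch-and-skip pattern, sketched in C++ for consistency with the other examples here (the real code lives in the FE, and all names are illustrative):

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical catalog handle that throws when its connection info is wrong.
struct Catalog {
    std::string name;
    bool reachable = true;
    std::vector<std::string> list_db_names() const {
        if (!reachable) throw std::runtime_error("wrong connection info");
        return {name + ".db1", name + ".db2"};
    }
};

// Collect db names across catalogs; a broken external catalog is skipped
// instead of failing the whole information_schema query.
std::vector<std::string> collect_db_names(const std::vector<Catalog>& catalogs) {
    std::vector<std::string> result;
    for (const auto& catalog : catalogs) {
        try {
            auto names = catalog.list_db_names();
            result.insert(result.end(), names.begin(), names.end());
        } catch (const std::exception&) {
            // Wrong connection info etc.: skip this catalog and continue.
        }
    }
    return result;
}
```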