Parallel scanning can result in read amplification. For example, `select * from xx limit 1` actually requires only one row of data, but because multiple tablets are scanned in parallel, far more data is read than needed, leading to performance bottlenecks in high-concurrency scenarios. This PR adds a SessionVariable to enforce serial scanning, which helps mitigate this issue.
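A minimal usage sketch; the session variable name below is a hypothetical placeholder, since the PR text does not name the variable it adds:

```sql
-- hypothetical variable name, for illustration only
set enable_serial_scan = true;
-- with serial scanning enforced, tablets are scanned one at a time,
-- so a query that needs a single row does not fan out to every tablet
select * from xx limit 1;
```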
The Nereids planner includes the indexes of all columns in TFileScanRangeParams, which can make column projection incorrect for
text format tables, because the csv reader uses the column index positions to split a line; extra column indexes lead to a
wrong split result. This PR resets the column indexes after projection and removes the useless ones.
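A hedged example of the kind of query that hits this path; the text-format table name here is made up for illustration:

```sql
-- only c2 is projected: before this fix, leftover column indexes in
-- TFileScanRangeParams could make the csv reader split out the wrong fields
select c2 from hive_text_tbl where c1 > 0;
```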
fix 3 bugs:
1. failed to insert into a table with mv.
```sql
create table t (
id int,
c1 int,
c2 int,
c3 int
) duplicate key(id)
distributed by hash(id) buckets 4;
create materialized view k12s3m as select id, sum(c1), max(c3) from t group by id;
insert into t select -4, -4, -4, 'd';
```
The insert would raise an exception because the mv column was not handled. Now we add a target column and value as the defineExpr.
2. failed to insert into a table when not all columns are specified.
```sql
insert into t(c1, c2) select c1, c2 from t
```
where t is defined as t(id unique key, c1, c2, c3). This would insert too many rows; we fix it by changing the output partitions.
3. failed to insert into a table with a complex select.
The select statement contains a join or aggregation, e.g. the sketch below; we fix the bug in a way similar to the fix for the 2nd bug.
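A hedged sketch of such a statement, reusing the table t above; the join/aggregation shape is invented for illustration:

```sql
-- the source select contains both a join and an aggregation
insert into t(id, c1)
select a.id, sum(b.c2)
from t a join t b on a.id = b.id
group by a.id;
```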
When using Nereids, if a comparison operator is applied to a bitmap type, an analysis exception needs to be thrown, for example:
select id from (select BITMAP_EMPTY() as c0 from expr_test) as ref0 where c0 = 1 order by id
Here c0 in ref0 is of bitmap type, and this scenario is not supported right now.
Update in-filter usage in pipeline mode:
1. if the target is local, we use an in-or-bloom filter and let the BE choose between in and bloom according to the actual number of distinct values
2. set the default runtime_filter_max_in_num to 1024 (see the example below)
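The threshold can also be adjusted per session; a minimal sketch using the default value from this PR:

```sql
-- when the number of distinct values exceeds this limit,
-- the runtime filter falls back from an in filter to a bloom filter
set runtime_filter_max_in_num = 1024;
```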
Support refreshing ldap cache:
refresh ldap all;
refresh ldap;
refresh ldap for user1;
Support for caching non-existent ldap users.
After LDAP is enabled, when logging in with a Doris user that does not exist in the LDAP service, this avoids accessing the LDAP service on every operation in scenarios such as `show databases;` that require frequent authentication.
This PR refactors the old way of writing data to JDBC External Table & JDBC Catalog, mainly including the following tasks:
1. Continuing the work of @BePPPower 's PR #18594, replace the logic of splicing Insert SQL with writing data through off-heap memory and PreparedStatement.set
2. Add write support for the LARGEINT type, mainly adapting to java.math.BigInteger, which uses binary operations
3. Remove the SQL-splicing logic from the write-related code of JDBC External Table & JDBC Catalog
ToDo: binary types, like BIT, BINARY, BLOB...
Finally, special thanks to @BePPPower and @AshinGau for their work
Co-authored-by: Tiewei Fang <43782773+BePPPower@users.noreply.github.com>
required_slots in TFileScanRangeParams for an external hive table may be updated after FileQueryScanNode finalize. For text files, we need to use the original required_slots in params so that the list can be updated later. Otherwise, querying a text file may return the following error:
[INTERNAL_ERROR]Unknown source slot descriptor, slot_id=3
Currently, compaction is executed separately for each backend, and the reconstruction of the index during compaction leads to high CPU usage. To address this, we are introducing single replica compaction, where a specific primary replica is selected to perform compaction, and the remaining replicas fetch the compaction results from the primary replica.
The Backend (BE) requests replica information for all peers corresponding to a tablet from the Frontend (FE). This information includes the host where the replica is located and the replica_id. By calculating hash(replica_id), the replica with the smallest hash value is responsible for executing compaction, while the remaining replicas are responsible for fetching the compaction results from this replica.
The compaction task producer thread, before submitting a compaction task, checks whether the local replica should fetch from its peer. If it should, the task is then submitted to the single replica compaction thread pool.
When performing single replica compaction, the process begins by requesting rowset versions from the target replica. These rowset_versions are then compared with the local rowset versions. The first version that can be fetched is selected.
## Problem summary
When we want to push a filter through a union, we should check whether the union's children are `OneRowRelation` or not. If some children are `OneRowRelation`, we shouldn't push the filter down to those parts.
Before this PR
```
mysql> select * from (select 1 as a, 2 as b union all select 3, 3) t where a = 1;
+------+------+
| a | b |
+------+------+
| 1 | 2 |
| 3 | 3 |
+------+------+
2 rows in set (0.01 sec)
```
After this PR
```
mysql> select * from (select 1 as a, 2 as b union all select 3, 3) t where a = 1;
+------+------+
| a | b |
+------+------+
| 1 | 2 |
+------+------+
1 row in set (0.38 sec)
```
1. Fix create catalog with resource replay bug.
If a user creates a catalog using `create catalog hive with resource xxx`, when replaying the edit log,
there is a bug where the resource may already have been dropped, causing an NPE and making FE fail to start.
In this PR, I add a new FE config `disallow_create_catalog_with_resource`, default true.
So `with resource` will no longer be allowed, and the syntax will be deprecated later.
And also fix the replay bug to avoid NPE.
2. Fix an issue when creating 2 hive catalogs, one with and one without kerberos authentication.
When a user creates 2 hive catalogs, one using simple auth and the other using kerberos auth,
queries may fail with an error like: `Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.`
So I add a default property for hive catalogs: `"ipc.client.fallback-to-simple-auth-allowed" = "true"`.
This property is added automatically when a user creates a hive catalog, to avoid the problem (see the sketch after this list).
3. Fix an issue when calling `hdfsExists()`
When `hdfsExists()` returns a non-zero code, we should check whether it hit an error or the file simply does not exist.
4. Some code refactoring
Avoid importing `org.apache.parquet.Strings`
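A minimal sketch of creating a hive catalog with properties instead of `with resource`, with the fallback property from item 2 set explicitly; the catalog name and metastore URI are placeholders, and in practice this property is now filled in automatically when omitted:

```sql
create catalog hive_hms properties (
    "type" = "hms",
    -- placeholder metastore address
    "hive.metastore.uris" = "thrift://127.0.0.1:9083",
    -- added automatically by this PR when the user does not set it
    "ipc.client.fallback-to-simple-auth-allowed" = "true"
);
```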
Before:
```
mysql [test]>select cast(1 as DECIMALV3(16, 2)) / cast(3 as DECIMALV3(16, 2));
+-----------------------------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / CAST(3 AS DECIMALV3(16, 2)) |
+-----------------------------------------------------------+
| 0.00 |
+-----------------------------------------------------------+
mysql [test]>select * from divtest;
+------+------+
| id | val |
+------+------+
| 3 | 5.00 |
| 2 | 4.00 |
| 1 | 3.00 |
+------+------+
mysql [test]>select cast(1 as decimalv3(16,2)) / val from divtest;
+-------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / `val` |
+-------------------------------------+
| 0 |
| 0 |
| 0 |
+-------------------------------------+
```
After:
```
mysql [test]>select cast(1 as DECIMALV3(16, 2)) / cast(3 as DECIMALV3(16, 2));
+-----------------------------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / CAST(3 AS DECIMALV3(16, 2)) |
+-----------------------------------------------------------+
| 0.33 |
+-----------------------------------------------------------+
mysql [test]>select cast(1 as decimalv3(16,2)) / val from divtest;
+-------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / `val` |
+-------------------------------------+
| 0.250000 |
| 0.200000 |
| 0.333333 |
+-------------------------------------+
```
This is because in the previous code, the constant 1.000 would be transformed into 1.
remove "ReduceType
* [Improve](performance) introduce SchemaCache to cache TabletSchema & Schema
1. When the system is under high-concurrency load with wide-table point queries, the frequent memory allocation and deallocation of Schema becomes an evident system bottleneck. Additionally, the initialization of TabletSchema and Schema also becomes a CPU hotspot. Therefore, a SchemaCache is introduced to cache these resources for reuse.
2. Wrap some variables in std::unique_ptr
Performance:
| State           | QPS | Avg latency | P99 latency |
|-----------------|-----|-------------|-------------|
| SchemaCache on  | 501 | 20ms        | 34ms        |
| SchemaCache off | 321 | 31ms        | 61ms        |
* handle schema change with schema version
* remove useless header
* rebase
1. Delete the stats collection for data size, since it costs too much time and is not useful
2. Make the task timeout configurable, since it's common to analyze a very large table for which the default 10 min is not suitable
This PR fixes a bug introduced by PR #19909.
The bug failed to set the column unique id for iceberg tables, which caused query results for iceberg tables to be all NULL.
```
mysql> select * from iceberg_partition_lower_case_parquet limit 1;
+------+------+------+---------+
| k1 | k2 | k3 | city |
+------+------+------+---------+
| NULL | NULL | NULL | Beijing |
+------+------+------+---------+
1 row in set (0.60 sec)
```
After fix:
```
mysql> select * from iceberg_partition_lower_case_parquet limit 1;
+------+------+------+---------+
| k1 | k2 | k3 | city |
+------+------+------+---------+
| 1 | k2_1 | k3_1 | Beijing |
+------+------+------+---------+
1 row in set (0.35 sec)
```