Commit Graph

10866 Commits

Author SHA1 Message Date
6d75d56e7b [Fix](dynamic-partition) Try to avoid setting a zero-bucket-size partition. (#20177)
A fallback to avoid the BE crash when a partition's bucket size is 0; the root cause is not yet resolved.
2023-05-31 13:09:03 +08:00
1f22aa6961 [fix](nereids) like function's nullable property should be PropagateNullable (#20237) 2023-05-31 12:13:38 +08:00
6a8fdb45c6 [Bug](runtimefilter) Fix waiting for runtime filter (#20155) 2023-05-31 10:25:18 +08:00
ca88425bee [Enhancement](merge-on-write) optimize bloom filter for primary key index (#20182) 2023-05-31 09:49:15 +08:00
54d1b16116 [docs](spark-doris-connector): modify the link of spark-doris-connector (#20159) 2023-05-31 09:42:00 +08:00
f43282e612 [chore](third-party) Bump the version of hadoop_libs (#20250)
Fix the issues with the workflow Build Third Party Libraries. See https://github.com/apache/doris-thirdparty/actions/runs/5109407220/jobs/9184234534
2023-05-31 09:21:43 +08:00
3f91127854 [fix](regression)Update external Brown test case out file. #20232
Update external Brown test case out file to match the new precision.
2023-05-31 09:21:04 +08:00
8a54be3318 [feature-wip](workload-group) Support setting user default workload group (#20180)
Issue Number: close #xxx

SET PROPERTY 'default_workload_group' = 'group_name';
2023-05-31 09:18:25 +08:00
aae04d9680 [Chore](log) Remove some verbose log && Change log level (#20236) 2023-05-31 09:15:01 +08:00
ff05217a1e [regression](p0) fix test for array_enumerate_uniq (#20231) 2023-05-30 22:14:19 +08:00
56fa38de1d [Enhancement](JDBC Catalog) refactor jdbc catalog insert logic (#19950)
This PR refactors the old way of writing data to JDBC External Table & JDBC Catalog, mainly including the following tasks:
1. Continuing the work of @BePPPower 's PR #18594, replacing the INSERT-SQL splicing logic with off-heap memory operations and preparedStatement.set calls to write the data (see the sketch after this list)
2. Adding support for writing the largeint type, mainly adapting to java.math.BigInteger, which uses its binary representation
3. Removing the SQL-splicing logic from the JDBC External Table & JDBC Catalog write code
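
A minimal sketch of what the PreparedStatement-based write path looks like, assuming a MySQL-compatible target and a made-up table and columns; it illustrates the approach, not the Doris implementation:

```java
import java.math.BigInteger;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class JdbcWriteSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL, credentials, table, and columns, for illustration only.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://127.0.0.1:3306/demo", "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO demo_table (id, big_val) VALUES (?, ?)")) {
            BigInteger largeint = new BigInteger("170141183460469231731687303715884105726");
            ps.setLong(1, 1L);
            // A largeint maps to java.math.BigInteger; binding its two's-complement
            // byte form is one way to write it without splicing SQL text.
            ps.setBytes(2, largeint.toByteArray());
            ps.addBatch();
            ps.executeBatch();
        }
    }
}
```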

ToDo: binary types, like bit, binary, blob...

Finally, special thanks to @BePPPower and @AshinGau for their work

Co-authored-by: Tiewei Fang <43782773+BePPPower@users.noreply.github.com>
2023-05-30 22:03:39 +08:00
ccfc4978c1 [feature](nereids) support the rewrite rule for push-down filter through sort (#20161)
Support the rewrite rule for push-down filter through sort.
We can push the filter down through the sort directly, without any condition checks.

Before this PR:
```
mysql> explain select * from (select * from t1 order by a) t2 where t2.b > 2;
+-------------------------------------------------------------+
| Explain String                                              |
+-------------------------------------------------------------+
| PLAN FRAGMENT 0                                             |
|   OUTPUT EXPRS:                                             |
|     a[#2]                                                   |
|     b[#3]                                                   |
|   PARTITION: UNPARTITIONED                                  |
|                                                             |
|   VRESULT SINK                                              |
|                                                             |
|   3:VSELECT                                                 |
|   |  predicates: b[#3] > 2                                  |
|   |                                                         |
|   2:VMERGING-EXCHANGE                                       |
|      offset: 0                                              |
|                                                             |
| PLAN FRAGMENT 1                                             |
|                                                             |
|   PARTITION: HASH_PARTITIONED: a[#0]                        |
|                                                             |
|   STREAM DATA SINK                                          |
|     EXCHANGE ID: 02                                         |
|     UNPARTITIONED                                           |
|                                                             |
|   1:VTOP-N                                                  |
|   |  order by: a[#2] ASC                                    |
|   |  offset: 0                                              |
|   |                                                         |
|   0:VOlapScanNode                                           |
|      TABLE: default_cluster:test.t1(t1), PREAGGREGATION: ON |
|      partitions=0/1, tablets=0/0, tabletList=               |
|      cardinality=1, avgRowSize=0.0, numNodes=1              |
+-------------------------------------------------------------+
30 rows in set (0.06 sec)
```

After this PR:
```
mysql> explain select * from (select * from t1 order by a) t2 where t2.b > 2;
+-------------------------------------------------------------+
| Explain String                                              |
+-------------------------------------------------------------+
| PLAN FRAGMENT 0                                             |
|   OUTPUT EXPRS:                                             |
|     a[#2]                                                   |
|     b[#3]                                                   |
|   PARTITION: UNPARTITIONED                                  |
|                                                             |
|   VRESULT SINK                                              |
|                                                             |
|   2:VMERGING-EXCHANGE                                       |
|      offset: 0                                              |
|                                                             |
| PLAN FRAGMENT 1                                             |
|                                                             |
|   PARTITION: HASH_PARTITIONED: a[#0]                        |
|                                                             |
|   STREAM DATA SINK                                          |
|     EXCHANGE ID: 02                                         |
|     UNPARTITIONED                                           |
|                                                             |
|   1:VTOP-N                                                  |
|   |  order by: a[#2] ASC                                    |
|   |  offset: 0                                              |
|   |                                                         |
|   0:VOlapScanNode                                           |
|      TABLE: default_cluster:test.t1(t1), PREAGGREGATION: ON |
|      PREDICATES: b[#1] > 2                                  |
|      partitions=0/1, tablets=0/0, tabletList=               |
|      cardinality=1, avgRowSize=0.0, numNodes=1              |
+-------------------------------------------------------------+
28 rows in set (0.40 sec)
```
2023-05-30 21:38:16 +08:00
5c8e801761 [Fix](multi catalog, nereids)Fix text file required slot bug (#20214)
The required_slots in TFileScanRangeParams for an external hive table may be updated after FileQueryScanNode finalizes. For text files, we need to use the original required_slots in params so that the list can be updated later. Otherwise, querying a text file may return the following error:
[INTERNAL_ERROR]Unknown source slot descriptor, slot_id=3
2023-05-30 21:29:33 +08:00
b7a69fbf4b [test](regression) add regression test for materialized slot bug (#20207)
The test query includes the conversion of string types to other types and the processing of materialized columns for nested subqueries; it is the regression test for the bug fix #18783.
2023-05-30 21:23:05 +08:00
accaff1026 [Feature](compaction) wip: single replica compaction (#19237)
Currently, compaction is executed separately for each backend, and the reconstruction of the index during compaction leads to high CPU usage. To address this, we are introducing single replica compaction, where a specific primary replica is selected to perform compaction, and the remaining replicas fetch the compaction results from the primary replica.

The Backend (BE) requests replica information for all peers corresponding to a tablet from the Frontend (FE). This information includes the host where the replica is located and the replica_id. By calculating hash(replica_id), the replica with the smallest hash value is responsible for executing compaction, while the remaining replicas are responsible for fetching the compaction results from this replica.
The compaction task producer thread, before submitting a compaction task, checks whether the local replica should fetch from its peer. If it should, the task is then submitted to the single replica compaction thread pool.
When performing single replica compaction, the process begins by requesting rowset versions from the target replica. These rowset_versions are then compared with the local rowset versions. The first version that can be fetched is selected.
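
A hedged sketch of the selection rule described above: among a tablet's replicas, the one with the smallest hash(replica_id) executes compaction and the others fetch its results. The class, record, and hash function below are illustrative, not Doris code:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class CompactionLeaderPicker {
    record Replica(long replicaId, String host) {}

    // Pick the replica that should execute compaction: the smallest hash(replica_id).
    static Optional<Replica> pickCompactionReplica(List<Replica> replicas) {
        return replicas.stream()
                .min(Comparator.comparingLong((Replica r) -> hash(r.replicaId())));
    }

    // Any stable hash works for the illustration; Doris's actual hash may differ.
    static long hash(long replicaId) {
        long h = replicaId * 0x9E3779B97F4A7C15L;
        return h ^ (h >>> 31);
    }

    public static void main(String[] args) {
        List<Replica> replicas = List.of(
                new Replica(10001L, "be-1"),
                new Replica(10002L, "be-2"),
                new Replica(10003L, "be-3"));
        // Every BE runs the same computation, so all replicas agree on the same leader.
        System.out.println("compaction replica: " + pickCompactionReplica(replicas).orElseThrow());
    }
}
```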
2023-05-30 21:12:48 +08:00
5e5f4ae9de [Improve](CI)Check PR approve status (#20172)
After discussion in the doris community @apache/doris-committers , we limit PRs to be merged only after at least two people approve them.

We can try to run it for a while first, and if everyone gives good feedback, we can make this a mandatory check.

Since the merge must be approved by at least one committer, we only need to check whether there are two approvals; we don't need to care about the identity of the approvers.
When there is a request for changes from a committer, a committer dismissal is required before merging, which is enforced by GitHub, so we don't need to handle that case.
2023-05-30 20:45:16 +08:00
Pxl
7415135ad4 [Enhancement](execute) make assert_cast output derived class name (#20212)
before:
F0530 11:02:41.989699 1154607 assert_cast.h:54] Bad cast from type:doris::vectorized::IDataType const* to doris::vectorized::DataTypeAggState const*

after:
F0530 11:24:28.390286 1292475 assert_cast.h:46] Bad cast from type:doris::vectorized::DataTypeNullable* to doris::vectorized::DataTypeAggState const*
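
The change itself is in the C++ assert_cast helper; as a rough Java analogue of the idea (report the runtime, i.e. derived, class of the value when the cast fails), one might write:

```java
public class AssertCastDemo {
    // On failure, include the value's runtime class (the derived type), not just the
    // declared/base type, so the log pinpoints what was actually passed in.
    static <T> T assertCast(Object value, Class<T> target) {
        if (!target.isInstance(value)) {
            String from = (value == null) ? "null" : value.getClass().getName();
            throw new ClassCastException("Bad cast from type:" + from + " to " + target.getName());
        }
        return target.cast(value);
    }

    public static void main(String[] args) {
        Number n = Integer.valueOf(42);
        System.out.println(assertCast(n, Integer.class)); // ok: 42
        assertCast(n, Long.class); // throws, naming java.lang.Integer as the source type
    }
}
```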
2023-05-30 20:23:04 +08:00
6f68ec9de0 support query queue (#20048)
2023-05-30 19:52:27 +08:00
1919355c04 [Feature](Inverted index) add MATCH_PHRASE query (#20156) 2023-05-30 19:28:57 +08:00
f505eed253 [opt](Nereids) refactor the PartitionTopN (#20102)
Do some small refactoring of `PartitionTopN` and also address the remaining comments in #18784
2023-05-30 17:34:47 +08:00
3d8440a1b7 [Feature-WIP](inverted index) support phrase for inverted index writer (#20193) 2023-05-30 17:07:45 +08:00
a855253543 [fix](Nereids) filter should not push through union to OneRowRelation (#20132)
## Problem summary
When we want to push a filter through a union, we should check whether the union's children are `OneRowRelation` or not. If some children are `OneRowRelation`, we shouldn't push the filter down to that part.

Before this PR
```
mysql> select * from (select 1 as a, 2 as b union all select 3, 3) t where a = 1;
+------+------+
| a    | b    |
+------+------+
|    1 |    2 |
|    3 |    3 |
+------+------+
2 rows in set (0.01 sec)
```

After this PR
```
mysql> select * from (select 1 as a, 2 as b union all select 3, 3) t where a = 1;
+------+------+
| a    | b    |
+------+------+
|    1 |    2 |
+------+------+
1 row in set (0.38 sec)
```
2023-05-30 17:06:52 +08:00
0c98355fff [fix](catalog) fix create catalog with resource replay issue and kerberos auth issue (#20137)
1. Fix create catalog with resource replay bug.
	If a user creates a catalog using `create catalog hive with resource xxx`, there is a bug when replaying the edit log:
	the resource may already have been dropped, causing an NPE, and FE will fail to start.

	In this PR, I add a new FE config `disallow_create_catalog_with_resource`, default is true,
	so that `with resource` will not be allowed; it will be deprecated later.

	And also fix the replay bug to avoid the NPE.

2. Fix issue when creating 2 hive catalogs to connect with and without kerberos authentication.

	When a user creates 2 hive catalogs, one using simple auth and the other using kerberos auth,
	the query may fail with an error like: `Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.`

	So I add a default property for hive catalogs: `"ipc.client.fallback-to-simple-auth-allowed" = "true"`.
	This property is added automatically when a user creates a hive catalog, to avoid the problem (see the sketch after this list).

3. Fix calling `hdfsExists()` issue

	When `hdfsExists()` returns a non-zero code, we should check whether it hit an error or the file simply was not found.

4. Some code refactoring

	Avoid importing `org.apache.parquet.Strings`
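
A small sketch of where the added default property presumably takes effect: it is a standard Hadoop client setting, so on the Java side it corresponds to putting the key on the Configuration used for HMS/HDFS access (illustrative, not the actual catalog code):

```java
import org.apache.hadoop.conf.Configuration;

public class FallbackAuthSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The default property this PR adds for hive catalogs, expressed as the
        // equivalent Hadoop client setting.
        conf.set("ipc.client.fallback-to-simple-auth-allowed", "true");
        System.out.println(conf.get("ipc.client.fallback-to-simple-auth-allowed"));
    }
}
```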
2023-05-30 16:57:39 +08:00
3735c21ef9 [fix](session-variable) fix set global var on non-master FE return error (#20179) 2023-05-30 16:26:28 +08:00
49ce4e6fda [fix](test) fix p2 broker load (#20196) 2023-05-30 16:26:00 +08:00
631494e05d [regression](decimalv3) Fix output for P1 regression (#20213) 2023-05-30 15:21:29 +08:00
4475a69c57 [Fix](multi-catalog) Fix q03 in text_external_brown regression test by correctly handling text converter parsing errors. (#20190)
Issue Number: close #20189

Fix `q03` in the `text_external_brown` regression test by correctly handling text converter parsing errors.
2023-05-30 15:08:28 +08:00
de08c4a57b [enhance](match) Support match query without inverted index (#19936) 2023-05-30 15:02:57 +08:00
e623f3fb9e [runtimeFilter](nereids) use runtime filter default size for debug purpose (#20065)
Use the runtime filter default size for debugging.
2023-05-30 14:34:14 +08:00
a220b1c34a [typo](docs) fix oceanbase jdbc catalog error (#20197) 2023-05-30 14:18:16 +08:00
5351153b04 [doc](fix)Modified the description about trino #20174 2023-05-30 13:14:45 +08:00
bb12a1cb49 [Enhance](array function) add support for DecimalV3 for array_enumerate_uniq() (#17724) 2023-05-30 13:09:19 +08:00
c7b8c83a7f [Improvement](runtimefilter) Build bloom filter according to the exact build size for IN_OR_BLOOM_FILTER (#20166)
* [Improvement](runtimefilter) Build bloom filter according to the exact build size for IN_OR_BLOOM_FILTER
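
A hedged sketch of the sizing idea in this change: derive the filter size from the exact build-side row count rather than a fixed default. The formula below is the textbook one for a target false-positive rate; Doris's actual implementation may round to a power of two or clamp the size:

```java
public class BloomFilterSizing {
    // m = -n * ln(p) / (ln 2)^2 bits for n elements and false-positive rate p.
    static long optimalBits(long exactBuildSize, double fpp) {
        double bits = -exactBuildSize * Math.log(fpp) / (Math.log(2) * Math.log(2));
        return (long) Math.ceil(bits);
    }

    // k = (m / n) * ln 2 hash functions.
    static int optimalHashes(long exactBuildSize, long bits) {
        return Math.max(1, (int) Math.round((double) bits / exactBuildSize * Math.log(2)));
    }

    public static void main(String[] args) {
        long n = 1_000_000L;            // exact number of build-side rows
        long m = optimalBits(n, 0.01);  // about 9.6 million bits for a 1% false-positive rate
        System.out.println("bits=" + m + ", hashes=" + optimalHashes(n, m));
    }
}
```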
2023-05-30 12:55:30 +08:00
945cb56fb6 [Bug](segment iterator) remove DCHECK for block row count (#20199)
The row-count CHECK on the block in the segment iterator is not ready when `enable_common_expr_pushdown` is enabled.
2023-05-30 11:34:25 +08:00
94e1072d14 Revert "[fix](DECIMALV3) Fix the error in DECIMALV3 when explicitly casting. (#19926)" (#20204)
This reverts commit 8ca4f9306763b5a18ffda27a07ab03cc77351e35.
2023-05-30 10:35:33 +08:00
72cfe5865a [feat](optimizer) Support CTE reuse (#19934)
Before this PR, the new optimizer would inline CTEs directly. However, in many scenarios a CTE can be referenced many times, such as in the TPC-DS tests; for these cases, materializing the CTE's result set and reusing it significantly improves performance. In our tests on TPC-DS-related SQLs, it improves performance by up to almost **4 times**.

We introduce the following plan nodes in the optimizer:

1. CTEConsumer: holds a reference to a CTEProducer
2. CTEProducer: the plan defined by the CTE statement
3. CTEAnchor: the parent node of a CTEProducer; a CTEProducer can only be referenced from the corresponding CTEAnchor's right child.

A CTEConsumer is converted to an inlined plan if the corresponding CTE is referenced no more than `inline_cte_referenced_threshold` times (a session variable, 1 by default).


For SQL:

```sql
EXPLAIN REWRITTEN PLAN
WITH cte AS (SELECT col2 FROM t1)
SELECT * FROM t1 WHERE (col3 IN (SELECT c1.col2 FROM cte c1))
UNION ALL
SELECT * FROM t1 WHERE (col3 IN (SELECT c1.col2 FROM cte c1));
```

Rewritten plan before this PR:

```
+------------------------------------------------------------------------------------------------------------------------------------------------------+
| Explain String                                                                                                                                       |
+------------------------------------------------------------------------------------------------------------------------------------------------------+
| LogicalUnion ( qualifier=ALL, outputs=[col1#14, col2#15, col3#16], hasPushedFilter=false )                                                           |
| |--LogicalJoin[559] ( type=LEFT_SEMI_JOIN, markJoinSlotReference=Optional.empty, hashJoinConjuncts=[(col3#6 = col2#8)], otherJoinConjuncts=[] )      |
| |  |--LogicalProject[551] ( distinct=false, projects=[col1#4, col2#5, col3#6], excepts=[], canEliminate=true )                                       |
| |  |  +--LogicalFilter[549] ( predicates=(__DORIS_DELETE_SIGN__#7 = 0) )                                                                             |
| |  |     +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                    |
| |  +--LogicalProject[555] ( distinct=false, projects=[col2#20 AS `col2`#8], excepts=[], canEliminate=true )                                          |
| |     +--LogicalFilter[553] ( predicates=(__DORIS_DELETE_SIGN__#22 = 0) )                                                                            |
| |        +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                    |
| +--LogicalProject[575] ( distinct=false, projects=[col1#9, col2#10, col3#11], excepts=[], canEliminate=false )                                       |
|    +--LogicalJoin[573] ( type=LEFT_SEMI_JOIN, markJoinSlotReference=Optional.empty, hashJoinConjuncts=[(col3#11 = col2#13)], otherJoinConjuncts=[] ) |
|       |--LogicalProject[565] ( distinct=false, projects=[col1#9, col2#10, col3#11], excepts=[], canEliminate=true )                                  |
|       |  +--LogicalFilter[563] ( predicates=(__DORIS_DELETE_SIGN__#12 = 0) )                                                                         |
|       |     +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                 |
|       +--LogicalProject[569] ( distinct=false, projects=[col2#24 AS `col2`#13], excepts=[], canEliminate=true )                                      |
|          +--LogicalFilter[567] ( predicates=(__DORIS_DELETE_SIGN__#26 = 0) )                                                                         |
|             +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                 |
+------------------------------------------------------------------------------------------------------------------------------------------------------+

```

After this PR

```
+------------------------------------------------------------------------------------------------------------------------------------------------------+
| Explain String                                                                                                                                       |
+------------------------------------------------------------------------------------------------------------------------------------------------------+
| LogicalUnion ( qualifier=ALL, outputs=[col1#14, col2#15, col3#16], hasPushedFilter=false )                                                           |
| |--LOGICAL_CTE_ANCHOR#-1164890733                                                                                                                    |
| |  |--LOGICAL_CTE_PRODUCER#-1164890733                                                                                                               |
| |  |  +--LogicalProject[427] ( distinct=false, projects=[col2#1], excepts=[], canEliminate=true )                                                    |
| |  |     +--LogicalFilter[425] ( predicates=(__DORIS_DELETE_SIGN__#3 = 0) )                                                                          |
| |  |        +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                 |
| |  +--LogicalJoin[373] ( type=LEFT_SEMI_JOIN, markJoinSlotReference=Optional.empty, hashJoinConjuncts=[(col3#6 = col2#8)], otherJoinConjuncts=[] )   |
| |     |--LogicalProject[370] ( distinct=false, projects=[col1#4, col2#5, col3#6], excepts=[], canEliminate=true )                                    |
| |     |  +--LogicalFilter[368] ( predicates=(__DORIS_DELETE_SIGN__#7 = 0) )                                                                          |
| |     |     +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                 |
| |     +--LOGICAL_CTE_CONSUMER#-1164890733#1038782805                                                                                                 |
| +--LogicalProject[384] ( distinct=false, projects=[col1#9, col2#10, col3#11], excepts=[], canEliminate=false )                                       |
|    +--LogicalJoin[382] ( type=LEFT_SEMI_JOIN, markJoinSlotReference=Optional.empty, hashJoinConjuncts=[(col3#11 = col2#13)], otherJoinConjuncts=[] ) |
|       |--LogicalProject[379] ( distinct=false, projects=[col1#9, col2#10, col3#11], excepts=[], canEliminate=true )                                  |
|       |  +--LogicalFilter[377] ( predicates=(__DORIS_DELETE_SIGN__#12 = 0) )                                                                         |
|       |     +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                 |
|       +--LOGICAL_CTE_CONSUMER#-1164890733#858618008                                                                                                  |
+------------------------------------------------------------------------------------------------------------------------------------------------------+

```
2023-05-30 10:18:59 +08:00
9b32d42ee4 [Fix](multi-catalog) fix all nested types test introduced by #19518 (support insert-only transactional table). (#20194)
Fix `qt_nested_types_orc` in `test_tvf_p2`, which was introduced by #19518 (support insert-only transactional table).

### Test case error
`qt_nested_types_orc` in `test_tvf_p2`
```
select count(array0), count(array1), count(array2), count(array3), count(struct0), count(struct1), count(map0)
            from hdfs(
            "uri" = "hdfs://172.21.16.47:4007/catalog/tvf/orc/all_nested_types.orc",
            "format" = "orc",
            "fs.defaultFS" = "hdfs://172.21.16.47:4007")
```

**Error Message:**
errCode = 2, detailMessage = (172.21.0.101)[INTERNAL_ERROR]Wrong data type for colum 'struct1'
2023-05-30 09:55:40 +08:00
2abbc9f921 [Fix](multi-catalog) Fix parquet bugs of #19758 'replace the single pointer with an array of 'conjuncts' in ExecNode'. (#20191)
Fix some parquet reader bugs introduced by #19758 ('replace the single pointer with an array of conjuncts in ExecNode').
2023-05-30 09:55:12 +08:00
6f31ee9492 [fix](p0 regression)Update hive docker test case result data (#20176)
Doris updated the array type output format to use double quotes for strings.
Before, it used single quotes, so we need to update the case out files to use double quotes.
2023-05-30 00:17:30 +08:00
90b4e127e3 [Feature](inverted index) add parser_mode properties for inverted index parser (#20116)
We add a parser mode for the inverted index; usage is like this:
```
CREATE TABLE `inverted` (
  `FIELD0` text NULL,
  `FIELD1` text NULL,
  `FIELD2` text NULL,
  `FIELD3` text NULL,
  INDEX idx_name1 (`FIELD0`) USING INVERTED PROPERTIES("parser" = "chinese", "parser_mode" = "fine_grained") COMMENT '',
  INDEX idx_name2 (`FIELD1`) USING INVERTED PROPERTIES("parser" = "chinese", "parser_mode" = "coarse_grained") COMMENT ''
) ENGINE=OLAP
-- distribution clause added here to complete the example as a valid statement
DUPLICATE KEY(`FIELD0`)
DISTRIBUTED BY HASH(`FIELD0`) BUCKETS 1
PROPERTIES ("replication_num" = "1");
```
2023-05-29 23:21:52 +08:00
9eccbdbef3 [typo](docs) fix fqdn doc error (#20171) 2023-05-29 21:39:42 +08:00
8ca4f93067 [fix](DECIMALV3) Fix the error in DECIMALV3 when explicitly casting. (#19926)
before

mysql [test]>select cast(1 as DECIMALV3(16, 2)) /  cast(3 as DECIMALV3(16, 2));
+-----------------------------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / CAST(3 AS DECIMALV3(16, 2)) |
+-----------------------------------------------------------+
|                                                      0.00 |
+-----------------------------------------------------------+


mysql [test]>select * from divtest;
+------+------+
| id   | val  |
+------+------+
|    3 | 5.00 |
|    2 | 4.00 |
|    1 | 3.00 |
+------+------+

mysql [test]>select cast(1 as decimalv3(16,2)) / val from divtest;
+-------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / `val` |
+-------------------------------------+
|                                   0 |
|                                   0 |
|                                   0 |
+-------------------------------------+
after

mysql [test]>select cast(1 as DECIMALV3(16, 2)) /  cast(3 as DECIMALV3(16, 2));
+-----------------------------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / CAST(3 AS DECIMALV3(16, 2)) |
+-----------------------------------------------------------+
|                                                      0.33 |
+-----------------------------------------------------------+

mysql [test]>select cast(1 as decimalv3(16,2)) / val from divtest;
+-------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / `val` |
+-------------------------------------+
|                            0.250000 |
|                            0.200000 |
|                            0.333333 |
+-------------------------------------+
This is because in the previous code, the constant 1.000 would be transformed into 1.

remove "ReduceType
2023-05-29 19:51:12 +08:00
d76be1315f [BUG]storage_min_left_capacity_bytes default value has integer overflow #19943 2023-05-29 19:50:31 +08:00
Pxl
d1d0d9e5e8 [Chore](build) adjust some compile diagnostic (#20162) 2023-05-29 19:19:01 +08:00
5f37396514 [Enhancement](Nereids) add switch for developing Nereids DML (#20100) 2023-05-29 19:06:55 +08:00
Pxl
5788214416 [Bug](function) fix equals implements not judge order by elements of function call expr (#20083)
Fix the equals implementation not comparing the order-by elements of a function call expr.
#19296
2023-05-29 19:03:05 +08:00
f9478dbd9a [fix](function) Fix VcompoundPred execute const column #20158
To reproduce:

./run-regression-test.sh  --run -suiteParallel 1 -actionParallel 1 -parallel 1 -d query_p0/sql_functions/window_functions

select      /*+ SET_VAR(query_timeout = 600) */ subq_0.`c1` as c0 from    (select           ref_1.`s_name` as c0,          ref_1.`s_suppkey` as c1,          ref_1.`s_address` as c2,          ref_1.`s_address` as c3       from          regression_test_query_p0_sql_functions_window_functions.tpch_tiny_supplier as ref_1       where (ref_1.`s_name` is NULL)          or (ref_1.`s_acctbal` is not NULL)) as subq_0 where (subq_0.`c3` is NULL)    or (subq_0.`c2` is not NULL)
Reason:
FunctionIsNull and FunctionIsNotNull return a const column from execute, but their VectorizedFnCall::is_constant returns false, which causes problems with const handling in VCompoundPred::execute.

This PR converts the const column to a full column in VCompoundPred::execute. In the future, there will be a more thorough solution to such problems.
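
A rough sketch, in plain Java, of what "convert the const column to a full column" means for a compound predicate: the constant result is expanded to one value per row before being combined with per-row results (illustrative only; the real code operates on vectorized columns in C++):

```java
import java.util.Arrays;

public class ConstToFullColumn {
    // Expand a constant predicate result into a per-row column.
    static boolean[] expandConst(boolean value, int rows) {
        boolean[] full = new boolean[rows];
        Arrays.fill(full, value);
        return full;
    }

    public static void main(String[] args) {
        boolean[] isNotNull = expandConst(true, 4);        // const result, now one value per row
        boolean[] otherPred = {true, false, true, false};  // a normal per-row result
        boolean[] orResult = new boolean[4];
        for (int i = 0; i < 4; i++) {
            orResult[i] = isNotNull[i] || otherPred[i];    // OR combines row by row
        }
        System.out.println(Arrays.toString(orResult));
    }
}
```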
2023-05-29 18:16:58 +08:00
Pxl
e9917612f0 [Chore](gensrc) remove gen_vector_functions.py #20150 2023-05-29 18:16:31 +08:00
ab8125d56f [Improve](performance) introduce SchemaCache to cache TabletSchema & Schema (#20037)
* [Improve](performance) introduce SchemaCache to cache TabletSchema & Schema

1. When the system is under a high-concurrency load of wide-table point queries, the frequent memory allocation and deallocation of Schema becomes an evident system bottleneck. Additionally, the initialization of TabletSchema and Schema also becomes a CPU hotspot. Therefore, a SchemaCache is introduced to cache these objects for reuse (a minimal sketch of the idea appears at the end of this description).

2. Wrap some variables in std::unique_ptr

Performance:
| State                | QPS | Avg response time | P99 response time |
|----------------------|-----|-------------------|-------------------|
| SchemaCache enabled  | 501 | 20ms              | 34ms              |
| SchemaCache disabled | 321 | 31ms              | 61ms              |

* handle schema change with schema version

* remove useless header

* rebase
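
A minimal sketch of the caching idea referenced above (plain Java, not Doris's actual C++ SchemaCache), assuming the cache key combines the tablet id and the schema version so that a schema change naturally yields a new entry:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class SchemaCacheSketch<S> {
    private final ConcurrentHashMap<String, S> cache = new ConcurrentHashMap<>();

    // Reuse an already-built schema for repeated point queries on the same tablet;
    // a different schema version produces a different key, so stale entries are not reused.
    public S getOrLoad(long tabletId, int schemaVersion, Supplier<S> loader) {
        String key = tabletId + "#" + schemaVersion;
        return cache.computeIfAbsent(key, k -> loader.get());
    }

    public static void main(String[] args) {
        SchemaCacheSketch<String> cache = new SchemaCacheSketch<>();
        String s1 = cache.getOrLoad(42L, 1, () -> "schema(v1)"); // built once
        String s2 = cache.getOrLoad(42L, 1, () -> "schema(v1)"); // served from the cache
        System.out.println(s1 == s2); // true: the same cached object is reused
    }
}
```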
2023-05-29 17:34:53 +08:00
198433b131 [typo](config)Remove FE config max_conn_per_user (#20122)
---------

Co-authored-by: Yijia Su <suyijia@selectdb.com>
2023-05-29 17:20:36 +08:00