Add the <optional> header to fix the compilation issue.
Use 3.12.9 as the protoc.artifact version, because there is no 3.12.21.
See: https://repo.maven.apache.org/maven2/com/google/protobuf/protoc/
Remove the --show-progress argument from wget, because it is not supported by older versions of wget.
Sometimes lock contention in DatabaseTransactionMgr is fierce, which may lead to publish timeouts. I think we should add a log to hint at this lock contention.
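A minimal sketch of what such a hint log could look like; the class name, threshold, and logger here are assumptions for illustration, not the actual DatabaseTransactionMgr code:
```java
// Hypothetical sketch: time the lock acquisition and warn when the wait exceeds a threshold.
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.logging.Logger;

public class LockWaitHintSketch {
    private static final Logger LOG = Logger.getLogger(LockWaitHintSketch.class.getName());
    private static final long SLOW_LOCK_MS = 100; // assumed threshold

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    void writeLockWithHint() {
        long start = System.currentTimeMillis();
        lock.writeLock().lock();
        long waitMs = System.currentTimeMillis() - start;
        if (waitMs > SLOW_LOCK_MS) {
            // Surfaces fierce lock contention that may contribute to publish timeouts.
            LOG.warning("acquired write lock slowly, waited " + waitMs + " ms");
        }
    }

    void unlock() {
        lock.writeLock().unlock();
    }
}
```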
When using the JDBC Catalog to query Doris data, Doris does not provide a cursor-based reading method (that is, fetchBatchSize has no effect), so Doris sends all the data to the client at once, which can cause a client OOM.
The MySQL protocol provides a streaming read method that Doris can use to avoid the OOM. The requirements for using the streaming method are setting the fetch size to Integer.MIN_VALUE and creating the statement with ResultSet.TYPE_FORWARD_ONLY and ResultSet.CONCUR_READ_ONLY.
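A self-contained sketch of the streaming setup described above; the JDBC URL, credentials, and table name are placeholders, not taken from the Doris code:
```java
// Streaming read with MySQL Connector/J: TYPE_FORWARD_ONLY + CONCUR_READ_ONLY
// plus fetchSize = Integer.MIN_VALUE makes the driver stream rows one by one
// instead of buffering the whole result set, avoiding the client OOM.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamReadSketch {
    public static void main(String[] args) throws Exception {
        // URL, user, password, and table are placeholders for illustration.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://127.0.0.1:9030/demo", "root", "");
             Statement stmt = conn.createStatement(
                 ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            stmt.setFetchSize(Integer.MIN_VALUE); // enable MySQL streaming mode
            try (ResultSet rs = stmt.executeQuery("select * from big_table")) {
                while (rs.next()) {
                    // Process each row without holding the full result in memory.
                }
            }
        }
    }
}
```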
For the Hive text input format, the column types ARRAY/MAP/STRUCT are not supported yet.
They will be supported in subsequent versions.
Co-authored-by: jinzhe <jinzhe@selectdb.com>
Problem:
1. FE splits a parquet file into splits, so one file can have several splits.
2. BE scans each split and reads the footer of the parquet file.
3. If 2 splits belong to the same parquet file, the footer of this file will be read twice.
This PR mainly changes:
1. Use a KV cache to cache the footer of the parquet file.
2. The KV cache belongs to a scan node, so all parquet readers belonging to this scan node share the same KV cache.
3. In the cache, the key is "meta_file_path" and the value is the parsed thrift footer.
The KV cache is sharded into multiple sub-caches,
so that different files can use different sub-caches and avoid blocking each other (see the sketch below).
In my test, a query with 26 splits reduced the footer parse time from 4s to 1s.
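A hypothetical sketch of the sharding idea; the BE implementation is C++, and the class and shard count here are assumptions for illustration only:
```java
// Sharded KV cache: the file path selects a sub-cache, so lookups for different
// files rarely contend on the same shard, while splits of the same file reuse
// the same cached footer.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ShardedFooterCacheSketch<V> {
    private static final int SHARD_COUNT = 16; // assumed shard count
    private final Map<String, V>[] shards;

    @SuppressWarnings("unchecked")
    public ShardedFooterCacheSketch() {
        shards = new Map[SHARD_COUNT];
        for (int i = 0; i < SHARD_COUNT; i++) {
            shards[i] = new ConcurrentHashMap<>();
        }
    }

    private Map<String, V> shardFor(String metaFilePath) {
        int idx = (metaFilePath.hashCode() & 0x7fffffff) % SHARD_COUNT;
        return shards[idx];
    }

    // Returns the cached footer for the file, parsing it only on a cache miss.
    public V getOrParse(String metaFilePath, Function<String, V> parseFooter) {
        return shardFor(metaFilePath).computeIfAbsent(metaFilePath, parseFooter);
    }
}
```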
If a colocate table's tablet is healthy and a BE reports an extra replica to FE, FE will not delete it.
But it should be deleted; otherwise the BE will report it again and again and no one will handle it.
select * from T where A = 10.0
Suppose A is an int column.
After stats derivation on `cast(A as double) = 10.0`,
we set the column stats for `cast(A as double)` based on `A`.
When upgrading the version of jackson, the k8s client fails:
java.lang.NoClassDefFoundError: org/yaml/snakeyaml/LoaderOptions
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.<init>(YAMLParser.java:191) ~[jackson-dataformat-yaml-2.14.2.jar:2.14.2]
at com.fasterxml.jackson.dataformat.yaml.YAMLFactory._createParser(YAMLFactory.java:509) ~[jackson-dataformat-yaml-2.14.2.jar:2.14.2]
at com.fasterxml.jackson.dataformat.yaml.YAMLFactory.createParser(YAMLFactory.java:413) ~[jackson-dataformat-yaml-2.14.2.jar:2.14.2]
at com.fasterxml.jackson.dataformat.yaml.YAMLFactory.createParser(YAMLFactory.java:386) ~[jackson-dataformat-yaml-2.14.2.jar:2.14.2]
at com.fasterxml.jackson.dataformat.yaml.YAMLFactory.createParser(YAMLFactory.java:15) ~[jackson-dataformat-yaml-2.14.2.jar:2.14.2]
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3677) ~[jackson-databind-2.14.2.jar:2.14.2]
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3645) ~[jackson-databind-2.14.2.jar:2.14.2]
at io.fabric8.kubernetes.client.internal.KubeConfigUtils.parseConfigFromString(KubeConfigUtils.java:47) ~[kubernetes-client-5.12.2.jar:?]
...
1. Close https://github.com/apache/doris/issues/16458 for Nereids.
2. varchar and string types should be treated as the same type in the bucket shuffle join scenario.
```
create table shuffle_join_t1 ( a varchar(10) not null ) distributed by hash(a) buckets 4 properties("replication_num" = "1");
create table shuffle_join_t2 ( a varchar(5) not null, b string not null, c char(3) not null ) distributed by hash(a) buckets 4 properties("replication_num" = "1");
```
The two SQLs below can use bucket shuffle join:
```
select * from shuffle_join_t1 t1 left join shuffle_join_t2 t2 on t1.a = t2.a;
select * from shuffle_join_t1 t1 left join shuffle_join_t2 t2 on t1.a = t2.b;
```
3. PushdownExpressionsInHashCondition should consider both hash and other conjuncts
4. visitPhysicalProject should handle MarkJoinSlotReference
1. Add the GLOBAL keyword:
SHOW [GLOBAL] [FULL] [BUILTIN] FUNCTIONS [IN|FROM db] [LIKE 'function_pattern']
SHOW CREATE GLOBAL FUNCTION function_name(arg_type [, ...]);
2. Show the details of the global UDF.
Fix function analysis repeatedly adding the same child:
select list expression not produced by aggregation output (missing from GROUP BY clause?): if(length(`r_2_3`.`name`) % 32 = 0, aes_decrypt(unhex(`r_2_3`.`name`), '***'), `r_2_3`.`name`)
1. Some bitmap functions like bitmap_or, bitmap_and_count, bitmap_or_count, etc. should not follow the constant fold rule for PropagateNullable functions. So remove the PropagateNullable property, and these functions will use their own constant fold logic correctly.
2. dphyper's PlanReceiver class should not change hyperGraph's complex project info. So PlanReceiver now uses its own copy of the complex project info.
When we use a Hive client to submit an `INSERT INTO TBL SELECT * FROM ...` or `INSERT INTO TBL VALUES ...`
statement and the table is a non-partitioned table, the HMS will generate an insert event. The insert statement may change the
HDFS file distribution of this table, but currently we do not handle this, so the file cache of this table may be inaccurate.
The legacy PartitionPruner only supports some simple cases; some useful cases are not supported:
1. it cannot evaluate some builtin functions, like `cast(part_column as bigint) = 1`
2. it cannot prune multi-level range partitions. For the partition `[('1', 'a'), ('2', 'b'))`, these constraints hold:
- first_part_column between '1' and '2'
- if first_part_column = '1' then second_part_column >= 'a'
- if first_part_column = '2' then second_part_column < 'b'
This PR refactors it and supports:
1. using a visitor to evaluate functions and fold constants
2. expanding and evaluating discrete partition columns, like int and date, e.g. `[1, 5)` will be expanded to `[1, 2, 3, 4]`
3. pruning multi-level range partitions, as previously described
4. evaluation capabilities for a range slot, e.g. for the datetime range partition `[('2023-03-21 00:00:00'), ('2023-03-21 23:59:59'))`, if the filter is `date(col1) = '2023-03-22'`, this partition will be pruned. We can do this prune because we know the date is always `2023-03-21`. You can implement the visit method in FoldConstantRuleOnFE and OneRangePartitionEvaluator to support such functions.
### How can we do it so finely?
Generally, a range partition can be separated into three parts: `const`, `range`, `other`.
For example, in the partition `[('1', 'a', 'D'), ('1', 'c', 'D'))`:
1. the first partition column is `const`: it always equals '1'
2. the second partition column is `range`: `slot >= 'a' and slot <= 'c'`. If there were no later slot, it would be `slot >= 'a' and slot < 'c'`
3. the third partition column is `other`: regardless of whether its upper and lower bounds are the same, multiple values can exist, e.g. `('1', 'a', 'D')`, `('1', 'a', 'F')`, `('1', 'b', 'A')`, `('1', 'c', 'A')`
In a partition, one and only one `range` slot can exist; there may be zero, one, or many `const`/`other` slots.
Normally, a partition looks like [const*, range, other*]. These are the possible shapes:
1. [range], e.g `[('1'), ('10'))`
2. [const, range], e.g. `[('1', 'a'), ('1', 'd'))`
3. [range, other, other], e.g. `[('1', '1', '1'), ('2', '1', '1'))`
4. [const, const, ..., range, other, other, ...], e.g. `[('1', '1', '2', '3', '4'), ('1', '1', '3', '3', '4'))`
The properties of `const`:
1. we can replace the slot with a literal when evaluating the expression tree.
The properties of `range`:
1. if the slot data type is a discrete type, like int or date, we can expand it to literals and evaluate the expression tree
2. if it is not a discrete type, like datetime, or there are too many discrete values, like [1, 1000000), we keep the slot in the expression tree and assign a range to it. When evaluating the expression tree, we also compute the range and check whether the range is an empty set; if so, we can simplify it to BooleanLiteral.FALSE to skip this partition (see the sketch at the end of this section)
3. if the range slot satisfies some conditions, we can fold the slot with some functions too; see the datetime example above
The properties of `other`:
1. only when the previous slots are literals equal to the lower or upper bound of the partition can we shrink the range of the `other` slot
According to these properties, we can do the pruning this finely.
At runtime, the `range` and `other` slots may shrink their ranges of values,
e.g.
1. the partition `[('a'), ('b'))` with predicate `part_col = 'a'` will shrink the range `['a', 'b')` to `['a']`, like a `range` slot downgrading to a `const` slot;
2. the partition `[('a', '1'), ('b', '10'))` with predicate `part_col1 = 'a'` will shrink the range of the `other` slot from unknown (the full range) to `['1', +∞)`, like an `other` slot downgrading to a `range` slot.
But to keep things simple, I have not changed the slot type at runtime; I just shrink the ColumnRange.
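A hypothetical sketch of the range check described in point 2 of the `range` properties; the class and method names are illustrative, not the actual OneRangePartitionEvaluator code:
```java
// Keep a value range for the `range` slot, intersect it with the range implied
// by the predicate, and prune the partition when the intersection is empty.
import com.google.common.collect.Range;

public class RangePruneSketch {
    // Returns true if this partition can be skipped for the given predicate range.
    static boolean canPrune(Range<Integer> partitionRange, Range<Integer> predicateRange) {
        if (!partitionRange.isConnected(predicateRange)) {
            return true; // no overlap at all
        }
        return partitionRange.intersection(predicateRange).isEmpty();
    }

    public static void main(String[] args) {
        // Partition [1, 5) vs predicate part_col >= 7: empty intersection, prune it.
        System.out.println(canPrune(Range.closedOpen(1, 5), Range.atLeast(7))); // true
        // Partition [1, 5) vs predicate part_col = 3: overlap, keep it.
        System.out.println(canPrune(Range.closedOpen(1, 5), Range.singleton(3))); // false
    }
}
```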
This PR refactors column pruning using the visitor pattern. The benefits:
1. It is easy to provide column pruning for a new plan: implement the interface `OutputPrunable` if the plan contains an output field, or do nothing if it does not. There is no need to add a new rule like `PruneXxxChildColumns`; only a few scenarios need to override the visit function to write special logic, like pruning LogicalSetOperation and Aggregate (see the sketch at the end of this entry).
2. It supports shrinking the output field in some plans, which can skip some useless operations and improve performance.
example:
```sql
select id
from (
select id, sum(age)
from student
group by id
)a
```
we should prune the useless `sum(age)` in the aggregate.
before refactor:
```
LogicalProject ( distinct=false, projects=[id#0], excepts=[], canEliminate=true )
+--LogicalSubQueryAlias ( qualifier=[a] )
+--LogicalAggregate ( groupByExpr=[id#0], outputExpr=[id#0, sum(age#2) AS `sum(age)`#4], hasRepeat=false )
+--LogicalProject ( distinct=false, projects=[id#0, age#2], excepts=[], canEliminate=true )
+--LogicalOlapScan ( qualified=default_cluster:test.student, indexName=<index_not_selected>, selectedIndexId=10007, preAgg=ON )
```
after refactor:
```
LogicalProject ( distinct=false, projects=[id#0], excepts=[], canEliminate=true )
+--LogicalSubQueryAlias ( qualifier=[a] )
+--LogicalAggregate ( groupByExpr=[id#0], outputExpr=[id#0], hasRepeat=false )
+--LogicalProject ( distinct=false, projects=[id#0], excepts=[], canEliminate=true )
+--LogicalOlapScan ( qualified=default_cluster:test.student, indexName=<index_not_selected>, selectedIndexId=10007, preAgg=ON )
```
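A minimal hypothetical sketch of point 1; only the name `OutputPrunable` comes from this description, the methods and helper class are assumptions, not the real Nereids API:
```java
// Any plan that owns an output list implements one interface, and a single
// pruning pass keeps only the outputs the parent actually uses; no per-plan
// PruneXxxChildColumns rule is needed.
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

interface OutputPrunable {
    List<String> getOutputs();                // e.g. ["id", "sum(age)"]
    void pruneOutputs(List<String> retained); // keep only the used outputs
}

class ColumnPruneSketch {
    static void prune(OutputPrunable plan, Set<String> requiredByParent) {
        List<String> retained = new ArrayList<>();
        for (String output : plan.getOutputs()) {
            if (requiredByParent.contains(output)) {
                retained.add(output);
            }
        }
        // In the example above this drops `sum(age)` from the aggregate's outputs.
        plan.pruneOutputs(retained);
    }
}
```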
In PR #17813, we wanted to forbid binding a slot on a sibling's column.
However, the fix was not done in the correct way.
The correct way is to forbid the subquery from registering itself in the parent's analyzer.
This reverts commit b91a3b5a72520105638dad1079b71a05f02c10a0.
The metadata storage format of the materialized view has changed, and the new optimizer adapts to the new storage format.
The columns of the materialized view are now stored in the metadata with names starting with mv_ or mva_.
This PR allows the new optimizer to recognize the new materialized view columns and select the correct materialized view.
TODO: support advanced mv
Currently, getSplits for a TVF creates one split for each file. In this case, scan performance for large files may be bad.
This PR implements the getSplits function in TVFSplitter to support splitting a file into multiple blocks, which
may improve performance for large files.
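A hypothetical sketch of the block splitting; the real TVFSplitter signature may differ, and the 128 MB block size is an assumption for illustration:
```java
// Split one large file into fixed-size byte ranges so several scanners can read
// it in parallel instead of one scanner reading the whole file.
import java.util.ArrayList;
import java.util.List;

public class FileSplitSketch {
    record Split(String path, long offset, long length) {}

    static List<Split> getSplits(String path, long fileLength, long blockSize) {
        List<Split> splits = new ArrayList<>();
        for (long offset = 0; offset < fileLength; offset += blockSize) {
            splits.add(new Split(path, offset, Math.min(blockSize, fileLength - offset)));
        }
        return splits;
    }

    public static void main(String[] args) {
        // A 300 MB file with 128 MB blocks becomes 3 splits: 128 MB + 128 MB + 44 MB.
        long mb = 1024L * 1024L;
        getSplits("s3://bucket/large.csv", 300 * mb, 128 * mb).forEach(System.out::println);
    }
}
```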
1. Adjust the cost model:
the cost of broadcast should be lower than the cost of shuffle when the data size is small.
For broadcast, we do not know the number of receiver BEs, so we use the number of BEs in the system (see the toy sketch below).
2. Adjust debug messages:
a. in explain, print the row count after the filter
b. if the join is not a mark join, do not print the mark join info
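A toy illustration of the principle in point 1; these formulas are assumptions for illustration, not the actual Doris cost model:
```java
// Broadcast replicates the build side to every BE (the receiver count is unknown
// at plan time, so the number of BEs in the system is used), while shuffle moves
// both sides once; for small build sides the broadcast cost comes out lower.
public class JoinCostSketch {
    static double broadcastCost(double buildRows, int beNumber) {
        return buildRows * beNumber;
    }

    static double shuffleCost(double probeRows, double buildRows) {
        return probeRows + buildRows;
    }

    public static void main(String[] args) {
        int beNumber = 3;
        // Small build side: broadcast (300) is cheaper than shuffle (1,000,100).
        System.out.println(broadcastCost(100, beNumber) < shuffleCost(1_000_000, 100));
        // Large build side: shuffle (3,000,000) is cheaper than broadcast (6,000,000).
        System.out.println(broadcastCost(2_000_000, beNumber) > shuffleCost(1_000_000, 2_000_000));
    }
}
```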