Support binding an external relation outside of the Doris FE environment, for example, to analyze SQL in another Java application.
See BindRelationTest.bindExternalRelation.
This PR adds a function for collecting Hive statistics. When the CBO fetches Hive table statistics, the statistics cache
first loads from the internal stats OLAP table. If nothing is found there, it uses this PR's function to fetch the statistics from the remote Hive metastore.
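A minimal sketch of that lookup order, assuming hypothetical names (HiveStatisticsLoader, ColumnStatistic); this is not the actual Doris FE API:

```java
import java.util.Optional;

public class HiveStatisticsLoader {

    /** Minimal stand-in for a column statistics holder. */
    public static class ColumnStatistic {
        public final double rowCount;
        public ColumnStatistic(double rowCount) { this.rowCount = rowCount; }
        public static ColumnStatistic unknown() { return new ColumnStatistic(-1); }
    }

    // CBO entry point: internal stats OLAP table first, remote HMS as the fallback.
    public ColumnStatistic load(String db, String table, String column) {
        return loadFromInternalStatsTable(db, table, column)
                .orElseGet(() -> fetchFromHiveMetastore(db, table, column));
    }

    private Optional<ColumnStatistic> loadFromInternalStatsTable(String db, String table, String column) {
        // e.g. query the internal statistics table maintained by ANALYZE jobs
        return Optional.empty(); // placeholder
    }

    private ColumnStatistic fetchFromHiveMetastore(String db, String table, String column) {
        // e.g. read numRows from HMS table parameters or column statistics
        return ColumnStatistic.unknown(); // placeholder
    }
}
```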
Keep hadoop-aliyun version consistent with hadoop main version (3.3.5)
upgrade jackson to 2.14.3
upgrade netty version to 4.1.94.Final
bind checkerframework version to 3.32.0
upgrade snappy-java to 1.1.10.1
upgrade hudi version to 0.13.1
upgrade spring version to 2.7.13
upgrade orc version to 1.8.4
revert nonsensical changes
In PR #21168 we refactored physical properties and the translator
to avoid generating useless exchanges. However, an olap scan node
could be gather in Nereids yet be translated to hash partitioned.
Since the coordinator cannot process a gather olap scan node,
we remove the gather candidate distribution spec from the olap scan.
When creating a new Hive catalog or refreshing an existing one, the HiveMetaStore cache is refreshed,
which calls "FileInputFormat.setInputPaths()".
This method creates a new FileSystem instance and stores it in FileSystem's cache.
So if the catalog is refreshed frequently, FileSystem instances pile up in the cache, eventually causing OOM.
This PR disables the FileSystem cache.
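A minimal sketch of opting out of Hadoop's FileSystem cache, assuming HDFS paths; the per-scheme property "fs.<scheme>.impl.disable.cache" is standard Hadoop configuration, the class here is illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NoCacheFsExample {
    public static FileSystem openUncached(String uri) throws Exception {
        Configuration conf = new Configuration();
        // With the cache disabled, FileSystem.get() returns a fresh instance that the
        // caller owns and must close, instead of parking one in the global cache forever.
        // (FileSystem.newInstance(...) is another cache-bypassing alternative.)
        conf.setBoolean("fs.hdfs.impl.disable.cache", true);
        return FileSystem.get(new Path(uri).toUri(), conf);
    }
}
```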
Try to reuse an existing UGI in DFSFileSystem; otherwise, querying an HMS table with more than ten thousand partitions performs more than ten thousand login operations, and each login costs hundreds of milliseconds in my tests.
Co-authored-by: 王翔宇 <wangxiangyu@360shuke.com>
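A sketch of the UGI-reuse idea, assuming Kerberos keytab login; the cache and helper names are illustrative, not the actual DFSFileSystem code:

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiCache {
    private static final Map<String, UserGroupInformation> CACHE = new ConcurrentHashMap<>();

    // Log in once per principal/keytab pair and reuse the UGI afterwards, so scanning
    // a table with tens of thousands of partitions does not trigger one login each.
    public static UserGroupInformation getOrLogin(String principal, String keytab) {
        return CACHE.computeIfAbsent(principal + "@" + keytab, key -> {
            try {
                return UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab);
            } catch (IOException e) {
                throw new RuntimeException("kerberos login failed", e);
            }
        });
    }
}
```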
testEliminatingSortNode needs to check whether a SortNode exists in the plan tree, so it should check plan1.contains("order by:") rather than plan1.contains("SORT INFO:") or plan1.contains("SORT LIMIT:").
Introduced by #19031.
FE could no longer recover because the code unconditionally casts tables to OlapTable. But many table types are not OLAP tables, such as views, JDBC tables, and so on. The cast fails and FE does not start correctly.

Co-authored-by: yiguolei <yiguolei@gmail.com>
1. Prune the hash join's output slot id list based on the slot ids referenced by the required project and remaining conjuncts, to reduce BE-side effort (see the sketch after this list).
2. Support this pruning for semi/anti joins as well.
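An illustrative sketch of the pruning rule, assuming integer slot ids; the names are hypothetical, not the actual plan-node code:

```java
import java.util.HashSet;
import java.util.Set;

public class JoinOutputPruner {
    // Keep only the join output slots referenced by the required project list or by
    // remaining conjuncts; everything else need not be materialized on the BE side.
    public static Set<Integer> pruneOutputSlots(Set<Integer> joinOutputSlots,
                                                Set<Integer> requiredByProject,
                                                Set<Integer> requiredByConjuncts) {
        Set<Integer> required = new HashSet<>(requiredByProject);
        required.addAll(requiredByConjuncts);
        Set<Integer> pruned = new HashSet<>(joinOutputSlots);
        pruned.retainAll(required);
        return pruned;
    }
}
```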
this PR:
1. refactors physical properties, the property deriver, and the property regulator
to ensure Nereids can generate plans with sufficient PhysicalDistribute nodes.
2. refactors PhysicalPlanTranslator to ensure all ExchangeNodes are generated
from PhysicalDistribute, except for CTEConsumer. We will refactor all
CTE-related nodes later.
The detailed changes of this PR:
1. update DistributionSpec of physical properties (summarized in the sketch after this list):
- Any: random distribution, used in output and require
- StorageAny: random distribution but constrained by where the data is stored, used in output
- ExecutionAny: random distribution representing a random shuffle, used in output
- Gather: gather distribution, used in output and require
- StorageGather: gather distribution but constrained by where the data is stored, used in output
- Replicated: broadcast distribution
- Hash: bucket distribution
2. update shuffle type of DistributionSpecHash
- REQUIRE: used in require
- NATURAL: distribution as storage engine hash algorithm, constrained by where the data is stored
- STORAGE_BUCKETED: distribution as storage engine hash algorithm
- EXECUTION_BUCKETED: distribution as execution engine hash algorithm
3. rename HideOneRowRelationUnderSetOperation to MergeOneRowRelationIntoSetOperation
4. update the property deriver of SetOperation to ensure a suitable PhysicalDistribute is added
above and below the SetOperation
5. refactor PhysicalPlanTranslator to ensure no unplanned exchange node is added
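A compact sketch of the spec taxonomy listed above, assuming a plain class/enum layout for illustration; the real Nereids classes carry more state and methods:

```java
public abstract class DistributionSpec {
    public static final class Any extends DistributionSpec {}            // random; output + require
    public static final class StorageAny extends DistributionSpec {}     // random, tied to storage location; output
    public static final class ExecutionAny extends DistributionSpec {}   // random shuffle at execution; output
    public static final class Gather extends DistributionSpec {}         // gather; output + require
    public static final class StorageGather extends DistributionSpec {}  // gather, tied to storage location; output
    public static final class Replicated extends DistributionSpec {}     // broadcast
    public static final class Hash extends DistributionSpec {            // bucketed
        public enum ShuffleType { REQUIRE, NATURAL, STORAGE_BUCKETED, EXECUTION_BUCKETED }
        public final ShuffleType shuffleType;
        public Hash(ShuffleType shuffleType) { this.shuffleType = shuffleType; }
    }
}
```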
1. Log the SQL when analyze fails.
2. Return directly in the analyze_test suite when there is more than one frontend.
3. Set query_timeout for the TPC-DS suites to avoid unnecessary failures caused by analyze sync.
1. Check that the group exists when setting a group in a user property;
e.g., if g1 does not exist, the set operation should fail.
mysql [test]>SET PROPERTY FOR 'root' 'default_workload_group' = 'g1';
ERROR 1105 (HY000): errCode = 2, detailMessage = workload group g1 not exists
2. Check whether the group is used by any user when dropping a group;
e.g., if a group is set for root, the drop should fail.
mysql [test]>drop workload group test_g1;
ERROR 1105 (HY000): errCode = 2, detailMessage = workload group test_g1 is set for user root
A complex predicate in a delete statement, like:
```sql
delete from t1 where t1.id in (select id from t2);
```
will be rewritten into an insert statement:
```sql
insert into t1(id, __DORIS_DELETE_SIGN__) select id, 1 from t1 where id in (select id from t2);
```
* [Improve](dynamic schema) support filtering invalid data
1. Support dynamic schema filtering out illegal data.
2. Expand the regular expression for ColumnName to support more column names.
3. Be compatible with PropertyAnalyzer and support legacy tables.
4. Disable parsing of multi-dimensional arrays by default, since some bugs remain unresolved.
1. Fix Max Compute JNI scanner OOM.
2. Add the second datetime type for the MC SDK timestamp.
3. Make the S3 URI case-insensitive along the way.
4. Optimize the Max Compute scanner parallelism model.
The default value of RefreshCatalogStmt.invalidCache is now false, but RefreshManager.RefreshTask does not invoke RefreshCatalogStmt.analyze(), so the cache is never invalidated. This PR mainly fixes this problem; a hypothetical sketch of the fix follows.
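A hypothetical sketch only, assuming the fix makes the refresh task invalidate the cache explicitly instead of relying on RefreshCatalogStmt.analyze(); the names here are illustrative, not the actual RefreshManager code:

```java
public class RefreshTaskSketch {
    public void refreshCatalog(String catalogName) {
        // The task never calls stmt.analyze(), so invalidCache keeps its default
        // (false). Invalidate explicitly here so stale catalog metadata is dropped.
        boolean invalidCache = true;
        doRefresh(catalogName, invalidCache);
    }

    private void doRefresh(String catalogName, boolean invalidCache) {
        // placeholder for the actual catalog refresh call
    }
}
```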
In the previous implementation, the check on group-by expressions was skipped. Add this necessary check to make sure the rule works correctly (see the sketch after the SQL). You can reproduce the issue by running the SQL below:
```sql
CREATE TABLE t_push_filter_through_agg (col1 varchar(11451) not null, col2 int not null, col3 int not null)
UNIQUE KEY(col1)
DISTRIBUTED BY HASH(col1)
BUCKETS 3
PROPERTIES(
"replication_num"="1"
);

CREATE VIEW `view_i` AS
SELECT
`b`.`col1` AS `col1`,
`b`.`col2` AS `col2`
FROM
(
SELECT
`col1` AS `col1`,
sum(`cost`) AS `col2`
FROM
(
SELECT
`col1` AS `col1`,
sum(CAST(`col3` AS INT)) AS `cost`
FROM
`t_push_filter_through_agg`
GROUP BY
`col1`
) a
GROUP BY
`col1`
) b;

SELECT SUM(`col2`) FROM view_i WHERE `col1` BETWEEN '2023-06-12' AND '2023-06-18' LIMIT 1;
```
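An illustrative check in the spirit of the fix, assuming a rule that pushes a filter through an aggregate: the filter may only be pushed if every slot it references is produced by a group-by expression. The names here are hypothetical:

```java
import java.util.Set;

public class PushFilterThroughAggCheck {
    // Only push the predicate below the aggregate when all of its input slots are
    // group-by outputs; pushing a predicate over an aggregated column (e.g. sum(...))
    // would change the query result.
    public static boolean canPush(Set<Integer> predicateSlots, Set<Integer> groupBySlots) {
        return groupBySlots.containsAll(predicateSlots);
    }
}
```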