Previously, the lambda function Expr did not implement toSqlImpl(), so the parent class's implementation was called instead. That implementation is not suitable for lambda functions and caused an error when creating a view.
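For illustration, here is a minimal sketch of the fix idea; `Expr` and `LambdaFunctionExpr` below are simplified stand-ins, not the actual FE classes. The point is that the lambda expression node overrides `toSqlImpl()` itself instead of inheriting the parent implementation, so `CREATE VIEW` can serialize it back to valid SQL.

```java
// Illustrative sketch only: Expr and LambdaFunctionExpr are simplified
// stand-ins for the real FE classes, not the actual Doris implementation.
abstract class Expr {
    // Parent implementation: fine for ordinary expressions, but it does not
    // know how to print a lambda, so views created from it become invalid.
    protected String toSqlImpl() {
        return "...";
    }
}

class LambdaFunctionExpr extends Expr {
    private final String argName;
    private final String body;

    LambdaFunctionExpr(String argName, String body) {
        this.argName = argName;
        this.body = body;
    }

    // The fix idea: the lambda node serializes itself (e.g. "x -> x + 1")
    // instead of falling back to the parent's toSqlImpl().
    @Override
    protected String toSqlImpl() {
        return argName + " -> " + body;
    }
}
```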
**Support SQL cache for the hms catalog. Both the legacy planner and the Nereids planner are supported.
Partition cache and federated queries are not supported yet.**
A similar bug to #22140.
When executing a query against an hms catalog, the query may fail because some HDFS files no longer exist. We should distinguish this kind of error and skip the missing files.
```
errCode = 2, detailMessage = (xxx.xxx.xxx.xxx)[CANCELLED][INTERNAL_ERROR]failed to init reader for file hdfs://xxx/dwd_tmp.db/check_dam_table_relation_record_day_data/part-00000-c4ee3118-ae94-4bf7-8c40-1f12da07a292-c000.snappy.orc, err: [INTERNAL_ERROR]Init OrcReader failed. reason = Failed to read hdfs://xxx/dwd_tmp.db/check_dam_table_relation_record_day_data/part-00000-c4ee3118-ae94-4bf7-8c40-1f12da07a292-c000.snappy.orc: [INTERNAL_ERROR]Read hdfs file failed. (BE: xxx.xxx.xxx.xxx) namenode:hdfs://xxx/dwd_tmp.db/check_dam_table_relation_record_day_data/part-00000-c4ee3118-ae94-4bf7-8c40-1f12da07a292-c000.snappy.orc, err: (2), No such file or directory), reason: RemoteException: File does not exist: /xxx/dwd_tmp.db/check_dam_table_relation_record_day_data/part-00000-c4ee3118-ae94-4bf7-8c40-1f12da07a292-c000.snappy.orc at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:86)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:76)
at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:158) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1927)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:738)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:426)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
```
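A hedged sketch of how such errors could be distinguished; the helper class below is hypothetical and only illustrates the classification idea, it is not the actual Doris code. It recognizes "file does not exist" failures so the affected file can be skipped instead of failing the whole query.

```java
import java.io.FileNotFoundException;

// Hypothetical helper, not the actual Doris code: decide whether a scan error
// means the file has been removed from HDFS and can be skipped instead of
// failing the whole query.
final class MissingFileErrorClassifier {
    static boolean isFileMissingError(Throwable cause, String message) {
        if (cause instanceof FileNotFoundException) {
            return true;
        }
        if (message == null) {
            return false;
        }
        // Patterns visible in the error above: the NameNode's
        // "File does not exist" and the POSIX-style "No such file or directory".
        return message.contains("File does not exist")
                || message.contains("No such file or directory");
    }
}
```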
Add 2 metrics to the jdbc scan node profile:
- `CallJniNextTime`: time spent calling get next on the jdbc result set
- `ConvertBatchTime`: time spent converting the jobject to a column block

Also fix a potential concurrency issue when initializing the jdbc connection cache pool.
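One common way to fix such an initialization race, shown here as an illustrative sketch (the class and pool type below are placeholders, not the actual jdbc connector code): use `ConcurrentHashMap.computeIfAbsent` so that only one pool is ever created and cached per data source, even when many scan threads initialize concurrently.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch; ConnectionPool is a placeholder for the real pool type.
final class JdbcConnectionPoolCache {
    private static final Map<String, ConnectionPool> POOLS = new ConcurrentHashMap<>();

    // computeIfAbsent runs the factory at most once per key, avoiding the race
    // where two scan threads both create a pool and one of them leaks.
    static ConnectionPool getOrCreate(String jdbcUrl) {
        return POOLS.computeIfAbsent(jdbcUrl, ConnectionPool::new);
    }

    static final class ConnectionPool {
        final String jdbcUrl;

        ConnectionPool(String jdbcUrl) {
            this.jdbcUrl = jdbcUrl;
        }
    }
}
```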
The original virtual node number was `Math.max(Math.min(512 / backends.size(), 32), 2);`, which is too small and causes uneven cache distribution when file cache is enabled.
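For reference, a small sketch of the original formula quoted above; the example backend counts in the comments are illustrative.

```java
// Sketch of the original virtual-node formula quoted above (assumes
// backendCount > 0). With 100 backends it yields min(512 / 100, 32) = 5,
// and with 300 or more backends it bottoms out at the floor of 2, which
// makes the consistent-hash ring too coarse for even file cache placement.
final class VirtualNodeNum {
    static int original(int backendCount) {
        return Math.max(Math.min(512 / backendCount, 32), 2);
    }
}
```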
This pr mainly has two changes:
1. Add some merge logic for partition events.
2. Add a UT for `MetastoreEventFactory`. First add some mock classes (`MockCatalog`/`MockDatabase` ...) to simulate real hms catalogs/databases/tables/partitions, then create an event producer that can randomly produce every kind of `MetastoreEvent`. The test uses two catalogs, one named `testCatalog` and the other `validateCatalog`. The event producer generates many events; `validateCatalog` handles all of them, while `testCatalog` only handles the events after they have been merged by `MetastoreEventFactory`. Finally, check that `validateCatalog` equals `testCatalog` (sketched below).
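A condensed sketch of that test structure; the nested interfaces below stand in for the real mock classes and are simplified for illustration.

```java
import java.util.List;
import java.util.Random;

// Condensed outline of the UT described above; the nested interfaces stand in
// for the real mock classes and are simplified for illustration.
class MetastoreEventFactoryTestSketch {
    void run(MockCatalog testCatalog, MockCatalog validateCatalog,
             MetastoreEventFactory factory, EventProducer producer) {
        // Produce a large batch of random db/table/partition events.
        List<MetastoreEvent> events = producer.produce(new Random(), 10_000);

        // The validating catalog replays every raw event.
        for (MetastoreEvent event : events) {
            validateCatalog.handle(event);
        }

        // The catalog under test only replays the merged events.
        for (MetastoreEvent event : factory.mergeEvents(events)) {
            testCatalog.handle(event);
        }

        // Merging must not change the final catalog state.
        assert testCatalog.equals(validateCatalog);
    }

    interface MockCatalog { void handle(MetastoreEvent event); }
    interface MetastoreEvent { }
    interface MetastoreEventFactory { List<MetastoreEvent> mergeEvents(List<MetastoreEvent> events); }
    interface EventProducer { List<MetastoreEvent> produce(Random random, int count); }
}
```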
If there are 3 or more FE nodes, the following operations will bring all FE nodes down:
DROP USER revoke_test_user
DROP ROLE revoke_test_role
DROP DATABASE IF EXISTS revoke_test_db
CREATE DATABASE revoke_test_db
CREATE ROLE revoke_test_role
CREATE USER revoke_test_user IDENTIFIED BY 'revoke_test_pwd'
GRANT SELECT_PRIV ON revoke_test_db.* TO ROLE 'revoke_test_role'
GRANT 'revoke_test_role' TO revoke_test_user
SHOW GRANTS FOR revoke_test_user
REVOKE 'revoke_test_role' from revoke_test_user
SHOW GRANTS FOR revoke_test_user
DROP USER revoke_test_user
DROP ROLE revoke_test_role
DROP DATABASE revoke_test_db
The mv in select materialized_view should disable show table, because the Nereids planner can output strings such as slot#[0] in toSql() of SlotRef. Note this is only a temporary solution; an expression translator will be used later.
select rank() over (partition by A, B) as r, sum(x) over (partition by A, C) as s from T;
A is a common partition key for all window expressions, that is, A is the intersection of {A, B} and {A, C}.
We can push the filter A=1 through this window, since A is a common partition key:
select * from (select a, row_number() over (partition by a) from win) T where a=1;
original plan:
----filter((T.a = 1))
------PhysicalWindow
--------PhysicalQuickSort
----------PhysicalProject
------------PhysicalOlapScan[win]
transformed to
----PhysicalWindow
------PhysicalQuickSort
--------PhysicalProject
----------filter((T.a = 1))
------------PhysicalOlapScan[win]
But C=1 cannot be pushed through the window.
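A hedged sketch of this push-down rule; the class and method names are illustrative and do not reflect the actual Nereids rule implementation. It intersects the partition keys of all window expressions and only lets a conjunct through when every slot it references is in that intersection.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of the push-down condition; names do not reflect the
// actual Nereids rule implementation.
final class WindowFilterPushDownSketch {
    // Intersect the partition-by key sets of all window expressions
    // (assumes at least one window expression).
    static Set<String> commonPartitionKeys(List<Set<String>> partitionKeySets) {
        Set<String> common = new HashSet<>(partitionKeySets.get(0));
        for (Set<String> keys : partitionKeySets) {
            common.retainAll(keys);
        }
        return common;
    }

    // A conjunct like "A = 1" may be pushed below the window only if every
    // slot it references is a common partition key; "C = 1" may not.
    static boolean canPushThroughWindow(Set<String> conjunctSlots, Set<String> commonKeys) {
        return commonKeys.containsAll(conjunctSlots);
    }
}
```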
Issue Number: close #xxx
ShowTableStmtTest.testNoDb and DropDbStmtTest.testNoPriv are unstable cases. The error msg is:
java.lang.Exception: Unexpected exception, expected<org.apache.doris.common.AnalysisException> but was<mockit.internal.expectations.invocation.MissingInvocation>
We cannot tell what invocation is missing, and this issue cannot be reproduced locally, so add some logging.
When the BE load rebalancer chooses candidate tablets, it tries to move tablets from high load backends to low load backends. If a HIGH load BE has no available slots, it should try the next HIGH load BE.
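A minimal sketch of the adjusted selection loop; `BackendLoad` is a simplified placeholder, not the real tablet scheduler code. Instead of giving up when the first HIGH load BE has no free slot, the loop keeps trying the remaining HIGH load BEs.

```java
import java.util.List;

// Simplified sketch; BackendLoad is a placeholder for the real load statistic
// class used by the tablet scheduler.
final class HighLoadBackendPicker {
    // Walk the HIGH load backends in order; skip those with no free balance
    // slot instead of giving up after the first one.
    static BackendLoad pickSource(List<BackendLoad> highLoadBackends) {
        for (BackendLoad be : highLoadBackends) {
            if (be.availableBalanceSlots() > 0) {
                return be;
            }
            // No free slot on this HIGH BE: try the next one.
        }
        return null; // no movable source in this round
    }

    interface BackendLoad {
        int availableBalanceSlots();
    }
}
```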
Sometimes the FE replica's version is unreliable: the FE replica version may be bigger than the BE's real version. We also need to check whether the BE replica is missing versions (last failed version > 0).
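A hedged sketch of the stricter check; the class below is illustrative, not the actual FE replica code. The point is that the FE-side version comparison alone is not trustworthy when the last failed version is greater than 0.

```java
// Illustrative sketch of the stricter health check, not the actual FE code.
final class ReplicaVersionCheck {
    // The FE-recorded version may be ahead of the BE's real version, so the
    // version comparison alone is not enough: a replica whose
    // lastFailedVersion is greater than 0 is missing versions.
    static boolean isMissingVersions(long version, long expectedVersion, long lastFailedVersion) {
        return version < expectedVersion || lastFailedVersion > 0;
    }
}
```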
There is 1 security vulnerability found in gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b:
CVE-2022-28948
What did I do?
Upgrade gopkg.in/yaml.v3 from v3.0.0-20210107192922-496545a6307b to 3.0.0 to fix the vulnerability.
What did you expect to happen?
Ideally, no insecure libs should be used.
How can we automate the detection of these types of issues?
By using the GitHub Actions configurations provided by murphysec, we can conduct automatic code security checks in our CI pipeline.
The specification of the pull request: PR Specification from OSCS.