Referring to `org.apache.doris.planner.external.HiveSplitter`, the file cache of `HiveMetaStoreCache`
may be created even if the table is a non-partitioned table,
so `RefreshTableStmt` should take this case into account and handle it.
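Below is a hypothetical sketch of the intended handling; `HiveCacheSketch`, `invalidatePartitionCache`, and `invalidateFileCache` are made-up names, not the real `HiveMetaStoreCache` API. It only illustrates that a refresh must drop the file cache for non-partitioned tables as well:
```java
// Hypothetical sketch only -- the method names below are placeholders, not the real
// HiveMetaStoreCache API. It illustrates the point of the fix: on refresh, the file
// cache must be invalidated even for non-partitioned tables, because HiveSplitter
// populates the file cache regardless of whether the table has partitions.
public class HiveCacheRefreshSketch {

    interface HiveCacheSketch {
        void invalidatePartitionCache(String db, String tbl); // placeholder name
        void invalidateFileCache(String db, String tbl);      // placeholder name
    }

    static void refreshTable(HiveCacheSketch cache, String db, String tbl, boolean partitioned) {
        if (partitioned) {
            cache.invalidatePartitionCache(db, tbl);
        }
        // Easy to miss for non-partitioned tables: the file cache exists either way.
        cache.invalidateFileCache(db, tbl);
    }

    public static void main(String[] args) {
        HiveCacheSketch cache = new HiveCacheSketch() {
            public void invalidatePartitionCache(String db, String tbl) {
                System.out.println("drop partition cache: " + db + "." + tbl);
            }
            public void invalidateFileCache(String db, String tbl) {
                System.out.println("drop file cache: " + db + "." + tbl);
            }
        };
        refreshTable(cache, "hive_db", "non_partitioned_tbl", false);
    }
}
```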
When adding an inverted index to a UNIQUE_KEYS table without merge_on_write enabled, the match query may fail before the segments are compacted.
So we add the restriction here.
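For illustration only (table, index, and column names are made up), this is the kind of setup the restriction now rejects: a UNIQUE KEY table with merge-on-write disabled plus an inverted index used by a match query:
```sql
-- Illustrative only: UNIQUE KEY table with merge-on-write disabled.
CREATE TABLE t_uniq (
    id INT,
    msg VARCHAR(100),
    INDEX idx_msg (msg) USING INVERTED
)
UNIQUE KEY(id)
DISTRIBUTED BY HASH(id) BUCKETS 1
PROPERTIES (
    "replication_num" = "1",
    "enable_unique_key_merge_on_write" = "false"
);

-- Before the segments are compacted, this match query may fail or return wrong
-- results, hence the restriction on creating the inverted index in this mode.
SELECT * FROM t_uniq WHERE msg MATCH_ANY 'doris';
```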
Add regression tests of decimalv3 for Nereids and refactor some suites.
Too many suites would need to change at once, so this PR only adds the arithmetic tests.
1. Some tests are disabled because of unfixed results and precision issues; in detail, multiplying or dividing a big integer by a float causes the precision issue, and bit operations cause the unfixed results.
2. The tests disabled with the original-planner tag are due to unfixed results.
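A rough illustration of the arithmetic cases covered (table and column names are made up, and the disabled shapes are shown as comments):
```sql
-- Illustrative decimalv3 arithmetic cases (names are made up).
CREATE TABLE dec_test (
    id INT,
    k1 DECIMALV3(38, 6),
    k2 DECIMALV3(20, 4)
)
DUPLICATE KEY(id)
DISTRIBUTED BY HASH(id) BUCKETS 1
PROPERTIES ("replication_num" = "1");

-- Enabled: plain decimalv3 arithmetic.
SELECT k1 + k2, k1 - k2, k1 * k2, k1 / k2 FROM dec_test;

-- Disabled for now (kept here as comments): a big integer multiplied/divided by a
-- float hits the precision issue, and bit operations hit the unfixed results.
-- SELECT cast(99999999999999999999 as largeint) * cast(1.5 as float);
-- SELECT k1 & 3 FROM dec_test;
```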
Consider a query like this:
```sql
SELECT
    k3, k4
FROM
    test
WHERE
    EXISTS (SELECT
            d.*
        FROM
            (SELECT
                k1 AS _1234, SUM(k2)
            FROM
                `test` d
            GROUP BY _1234) d
        LEFT JOIN
            (SELECT
                k1 AS _1234,
                SUM(k2)
            FROM
                `test`
            GROUP BY _1234) temp ON d._1234 = temp._1234)
ORDER BY k3, k4
```
When we analyze the GROUP BY exprs in the `temp` inline view, we mistakenly bind `_1234` to `d._1234`.
That is because, when we analyze a **SUB-QUERY**, we resolve a SlotRef against both its own tuples **AND** the parent's tuples; meanwhile, we register each child's tuple in the parent's analyzer. So, in a **SUB-QUERY**, a sibling's tuple can affect the resolution of the current inline view's slots.
This PR:
1. Add a flag to the `resolveColumnRef` functions in `Analyzer`:
```java
private TupleDescriptor resolveColumnRef(String colName, boolean requestByChild);
private TupleDescriptor resolveColumnRef(TableName tblName, String colName, boolean requestByChild);
```
2. Add a flag to specify whether the tuple is from a child:
```java
// alias name -> <from child, tupleDesc>
private final Multimap<String, Pair<Boolean, TupleDescriptor>> tupleByAlias;
```
When `requestByChild == true`, we **SKIP** the tuples registered by other children to avoid the resolve error (a minimal sketch of this rule follows below).
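A minimal, self-contained sketch of the skip rule, not the actual `Analyzer` code: `TupleDescriptor` is reduced to a plain name and the real bookkeeping is omitted; it only shows that a resolution request coming from a child ignores tuples registered by other children:
```java
// Sketch only: candidate tuples visible to the resolver are scanned in order, and
// when the request comes from a child inline view, tuples registered by other
// children are skipped so a sibling alias cannot shadow the current view's slot.
import java.util.ArrayList;
import java.util.List;

public class ResolveColumnRefSketch {

    /** Stand-in for Pair<Boolean, TupleDescriptor>: (fromChild, tuple name). */
    private record Candidate(boolean fromChild, String tupleName) {}

    // Stand-in for tupleByAlias, already narrowed to tuples exposing the column.
    private final List<Candidate> candidates = new ArrayList<>();

    public void registerTuple(boolean fromChild, String tupleName) {
        candidates.add(new Candidate(fromChild, tupleName));
    }

    /** Mirrors resolveColumnRef(colName, requestByChild) from the description above. */
    public String resolveColumnRef(boolean requestByChild) {
        for (Candidate c : candidates) {
            if (requestByChild && c.fromChild()) {
                continue; // SKIP tuples from other children to avoid mis-binding
            }
            return c.tupleName();
        }
        return null; // unresolved
    }

    public static void main(String[] args) {
        ResolveColumnRefSketch analyzer = new ResolveColumnRefSketch();
        analyzer.registerTuple(true, "d");     // registered by the sibling child `d`
        analyzer.registerTuple(false, "test"); // tuple visible at the current level
        // Resolving `_1234` from inside the sibling view must not bind to `d`'s tuple.
        System.out.println(analyzer.resolveColumnRef(true)); // prints "test"
    }
}
```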
1. The projection plan should always be executed, whatever kind of statement it is.
2. The projection plan should always be executed, since we only have the vectorized engine now.
1. When querying data, it is no longer necessary to verify the permissions of the entire table; instead, only the permissions of the queried columns are verified. Ranger already supports column permissions, and the internal catalog provides a dummy implementation of column permissions (the permissions actually verified are still table permissions). See the sketch after this list.
2. Delete roles in userIdentity.
3. Change the trigger logic of initAccessController.
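A hypothetical sketch of the idea; `ColumnAccessController` and `checkColumnsPriv` are placeholder names, not the exact Doris interface. It shows a column-level check whose default (dummy) implementation falls back to the table-level permission, which is what the internal catalog does:
```java
// Hypothetical sketch -- not the exact Doris access-controller interface.
// Ranger can answer per-column checks directly; the internal catalog's dummy
// implementation degrades a column check to the existing table-level check.
import java.util.Set;

public class ColumnPrivSketch {

    interface ColumnAccessController {
        boolean checkTablePriv(String user, String db, String tbl, String priv);

        // Default "dummy" column check: verifying the queried columns falls back
        // to verifying the whole table.
        default boolean checkColumnsPriv(String user, String db, String tbl,
                                         Set<String> cols, String priv) {
            return checkTablePriv(user, db, tbl, priv);
        }
    }

    public static void main(String[] args) {
        ColumnAccessController internal = (user, db, tbl, priv) -> true; // grant-all stub
        System.out.println(internal.checkColumnsPriv(
                "alice", "db1", "t1", Set.of("k1", "k2"), "SELECT")); // true
    }
}
```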
See #17764 for details
I have tested:
- Unit test for local/s3/hdfs/broker file system: be/test/io/fs/file_system_test.cpp
- Outfile to local/s3/hdfs/broker.
- Load from local/s3/hdfs/broker.
- Query file on local/s3/hdfs/broker file system, with table value function and catalog.
- Backup/Restore with local/s3/hdfs/broker file system
Not tested:
- cold & hot data separation case.
Complete the type coercion of subqueries in the binding (Binder) process.
Expressions generated when subqueries are nested are uniformly given implicit type conversions in the analyze stage.
Method: add a typeCoercionExpr field to the subquery expression to store the generated cast information.
Fix the scenario where a scalar subquery handles arithmetic expressions when types are implicitly converted; see the example below.
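An illustrative case (table and column names are made up): the scalar subquery below produces a DOUBLE while `k1` is an INT, so an implicit cast has to be generated, including when the subquery result is used in an arithmetic expression:
```sql
-- Illustrative only. Suppose t.k1 and t.k2 are INT columns.
-- AVG(k2) is DOUBLE, so the comparison conceptually becomes
-- CAST(k1 AS DOUBLE) > (SELECT AVG(k2) FROM t) + 1, with the generated
-- cast stored in the subquery expression's typeCoercionExpr field.
SELECT k1
FROM t
WHERE k1 > (SELECT AVG(k2) FROM t) + 1;
```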
When the FE config default_storage_medium is set to SSD, and all BE storage paths are set as SSD,
tables will be stored with storage medium SSD.
But there is an FE config storage_cooldown_second whose default value is 30 days.
So after 30 days, the storage medium of the table is changed to HDD, which is unexpected.
This PR removes storage_cooldown_second and uses a max value as the cooldown time of the SSD
storage medium when default_storage_medium is SSD.
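For illustration (table name and schema are made up), a table created while `default_storage_medium` is SSD; before this change, the implicit 30-day `storage_cooldown_second` would silently migrate it to HDD:
```sql
-- Illustrative table created while FE default_storage_medium = SSD.
-- With this change, the SSD medium effectively never cools down unless a
-- cooldown time is set explicitly in the table properties.
CREATE TABLE t_ssd (
    k1 INT,
    v1 VARCHAR(20)
)
DUPLICATE KEY(k1)
DISTRIBUTED BY HASH(k1) BUCKETS 1
PROPERTIES (
    "replication_num" = "1",
    "storage_medium" = "SSD"
);
```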
Missing user info in the load job causes a NullPointerException when executing SHOW LOAD:
```
java.lang.NullPointerException: null
at org.apache.doris.load.loadv2.LoadJob.getShowInfo(LoadJob.java:816) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.load.loadv2.LoadManager.getLoadJobInfosByDb(LoadManager.java:557) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.ShowExecutor.handleShowLoad(ShowExecutor.java:1094) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.ShowExecutor.execute(ShowExecutor.java:280) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.handleShow(StmtExecutor.java:1862) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:619) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:435) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:414) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.ConnectProcessor.dispatch(ConnectProcessor.java:558) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:799) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.mysql.ReadListener.lambda$handleEvent$0(ReadListener.java:52) ~[doris-fe.jar:1.2-SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]
```