Project child is not always a NamedExpression
Failure message:
```
org.apache.doris.common.AnalysisException: errCode = 2, detailMessage = class org.apache.doris.nereids.trees.expressions.literal.VarcharLiteral cannot be cast to class org.apache.doris.nereids.trees.expressions.NamedExpression (org.apache.doris.nereids.trees.expressions.literal.VarcharLiteral and org.apache.doris.nereids.trees.expressions.NamedExpression are in unnamed module of loader 'app')
at org.apache.doris.qe.StmtExecutor.executeByNereids(StmtExecutor.java:623) ~[classes/:?]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:478) ~[classes/:?]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:457) ~[classes/:?]
at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:245) ~[classes/:?]
at org.apache.doris.qe.MysqlConnectProcessor.handleQuery(MysqlConnectProcessor.java:166) ~[classes/:?]
at org.apache.doris.qe.MysqlConnectProcessor.dispatch(MysqlConnectProcessor.java:193) ~[classes/:?]
at org.apache.doris.qe.MysqlConnectProcessor.processOnce(MysqlConnectProcessor.java:246) ~[classes/:?]
at org.apache.doris.mysql.ReadListener.lambda$handleEvent$0(ReadListener.java:52) ~[classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:829) ~[?:?]
Caused by: java.lang.ClassCastException: class org.apache.doris.nereids.trees.expressions.literal.VarcharLiteral cannot be cast to class org.apache.doris.nereids.trees.expressions.NamedExpression (org.apache.doris.nereids.trees.expressions.literal.VarcharLiteral and org.apache.doris.nereids.trees.expressions.NamedExpression are in unnamed module of loader 'app')
at org.apache.doris.nereids.trees.plans.physical.PhysicalSetOperation.pushDownRuntimeFilter(PhysicalSetOperation.java:178) ~[classes/:?]
at org.apache.doris.nereids.trees.plans.physical.PhysicalHashJoin.pushDownRuntimeFilter(PhysicalHashJoin.java:229) ~[classes/:?]
at org.apache.doris.nereids.processor.post.RuntimeFilterGenerator.pushDownRuntimeFilterCommon(RuntimeFilterGenerator.java:386) ~[classes/:?]
```
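The cast fails because `PhysicalSetOperation.pushDownRuntimeFilter` assumes every project child is a `NamedExpression`, while a constant such as a `VarcharLiteral` is not. Below is a minimal sketch of the defensive pattern, using stand-in types rather than the real nereids classes; it is illustrative only, not the actual patch:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-ins for the nereids expression types; not the real classes.
interface Expression {}
interface NamedExpression extends Expression {}

final class RuntimeFilterTargetExample {
    /**
     * Collect only the project children that really are NamedExpression.
     * A literal child (e.g. a VarcharLiteral from a constant select item in a UNION)
     * is a plain Expression, so an unconditional cast throws ClassCastException,
     * as seen in PhysicalSetOperation.pushDownRuntimeFilter above.
     */
    static List<NamedExpression> namedChildrenOnly(List<? extends Expression> projects) {
        List<NamedExpression> targets = new ArrayList<>();
        for (Expression expr : projects) {
            if (expr instanceof NamedExpression) { // guard instead of blind cast
                targets.add((NamedExpression) expr);
            }
            // Non-named children (literals, constant expressions) are skipped:
            // no runtime filter target can be derived from them.
        }
        return targets;
    }
}
```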
In the old caching logic, only the JDBC URL, user, and password were used as the cache key. This could leave the old connection pool in use after the driver JAR was replaced, so we should concatenate all of the parameters required by the connection pool into the key.
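A minimal sketch of the idea, with hypothetical parameter names (not the actual Doris code); the real key should cover every parameter the pool depends on, including the driver JAR location and checksum:

```java
import java.util.StringJoiner;

/** Hypothetical sketch of a composite connection-pool cache key. */
final class JdbcPoolCacheKeys {
    /**
     * Build the cache key from all connection parameters instead of only
     * URL/user/password, so that replacing the driver JAR (different driver
     * URL or checksum) no longer reuses the stale pool.
     */
    static String buildCacheKey(String jdbcUrl, String user, String password,
                                String driverUrl, String driverClass, String driverChecksum,
                                int minPoolSize, int maxPoolSize, int maxIdleTimeSec) {
        return new StringJoiner("|")
                .add(jdbcUrl).add(user).add(password)
                .add(driverUrl).add(driverClass).add(driverChecksum)
                .add(String.valueOf(minPoolSize))
                .add(String.valueOf(maxPoolSize))
                .add(String.valueOf(maxIdleTimeSec))
                .toString();
    }
}
```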
We will integrate the new load job manager into the new job scheduling framework so that the insert-into task can be scheduled after the broker load SQL is converted into an insert-into-TVF (table-valued function) SQL statement (a sketch of the conversion follows the TODO list below).
issue: https://github.com/apache/doris/issues/24221
Now supported:
1. loading data via TVF insert-into SQL, but only for simple loads (columns must already be defined in the table)
2. SHOW LOAD statement
- job id, label name, job state, time info
- simple progress
3. CANCEL LOAD from a database
4. enabling the new load path through Config.enable_nereids_load
5. replaying jobs after restarting Doris
TODO:
- support partition insert job
- support showing statistics from BE
- support multiple tasks and collect task statistics
- support transactional tasks
- add unit test cases
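As referenced above, here is a rough illustration of the broker-load-to-TVF rewrite. The table, path, and TVF property names are invented for the example and may differ by Doris version; this is a sketch of the conversion, not the planner's actual output:

```java
/** Hypothetical illustration of the broker load -> insert-into-TVF conversion. */
final class BrokerLoadToTvfExample {
    // A simple broker load such as:
    //   LOAD LABEL db1.label1
    //   (DATA INFILE("s3://bucket/path/file.csv") INTO TABLE t1 COLUMNS TERMINATED BY ",")
    //   WITH S3 (...);
    // is scheduled as an insert job that runs an equivalent TVF query:
    static final String INSERT_TVF_SQL =
            "INSERT INTO db1.t1 "
            + "SELECT * FROM S3("
            + "\"uri\" = \"s3://bucket/path/file.csv\", "
            + "\"format\" = \"csv\", "
            + "\"column_separator\" = \",\", "
            + "\"s3.endpoint\" = \"<endpoint>\", "
            + "\"s3.access_key\" = \"<ak>\", "
            + "\"s3.secret_key\" = \"<sk>\")";
}
```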
1. Remove `doris_max_remote_scanner_thread_pool_thread_num`, use `doris_scanner_thread_pool_thread_num` only.
2. Set the default value of `doris_scanner_thread_pool_thread_num` to `std::max(48, CpuInfo::num_cores() * 4)`.
* [improvement] (nereids) When getting the partition-related table, disable nullable fields; update the regression tests and complete the agg mv rules.
* make the field not null when creating a partition mv
Introduced by #28851: after evaluating build-side expressions, some columns in the resulting block may be referenced more than once within the same block.
For example, with `coalesce(col_a, 'string')`, if `col_a` is nullable but actually contains no null values, the `coalesce` function inserts a new nullable column that references the original `col_a`.
Calling hdfsGetLastExceptionRootCause without initializing ThreadLocalState
will crash. This PR changes how we determine whether an HDFS file exists:
since hdfsExists sets errno to ENOENT when the file does not exist, we can
use that condition to check for file existence rather than checking the
existence of the root cause.