Previously, the auto analyze job's start time was recorded as the job creation time rather than the time execution actually began, which is inaccurate. This PR changes the start time to the time the job's first task starts executing.
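The gist of the change, as a minimal sketch with hypothetical names (not the actual Doris classes): stamp the start time when the first task begins, not at job creation.

```java
// A minimal sketch (hypothetical names): the job start time is set by the
// first task that begins executing, instead of at job construction time.
public class AnalysisJobSketch {
    private long startTimeMs = -1;

    public synchronized void onTaskStart() {
        // Only the first task to start stamps the job's start time.
        if (startTimeMs < 0) {
            startTimeMs = System.currentTimeMillis();
        }
    }

    public synchronized long getStartTimeMs() {
        return startTimeMs;
    }
}
```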
On huge datasets with many databases, many tables, and many columns, the auto collector might submit too many jobs at once, which would occupy too much FE memory.
This PR limits the number of jobs submitted in each round to 5, as sketched below.
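A minimal sketch of the per-round cap, with hypothetical names and a `Runnable` standing in for the real job type:

```java
import java.util.List;

// A minimal sketch (hypothetical names): cap how many analyze jobs the auto
// collector submits in one round so pending jobs do not pile up in FE memory.
public class AutoCollectorSketch {
    private static final int MAX_JOBS_PER_ROUND = 5;

    public void collectOnce(List<Runnable> candidateJobs) {
        int submitted = 0;
        for (Runnable job : candidateJobs) {
            if (submitted >= MAX_JOBS_PER_ROUND) {
                break; // leave the remaining candidates for the next round
            }
            job.run(); // stand-in for the real submission path
            submitted++;
        }
    }
}
```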
In the previous PR #27124, we used `objectMapper.readValue` for deserialization. However, this method does not handle missing fields, which can cause problems when upgrading from older versions. Specifically, if a required field is absent from the persisted data, `String realColumnNamesJson = serializeMap.get(REAL_COLUMNS);` returns null, resulting in deserialization errors and frontend startup failure. This issue is likely to occur when upgrading from an older version that uses the Jdbc Catalog to a new version that includes PR #27124. Because this is a specific upgrade scenario involving compatibility with old-version data structures, it was not covered by the regular PR test cases, and given how specific and hard to reproduce the scenario is, no special test cases were added for this PR.
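A minimal sketch of a null-safe read, assuming an empty-map fallback (the `REAL_COLUMNS` constant name follows the PR text; its value and the fallback behavior here are assumptions):

```java
import java.util.HashMap;
import java.util.Map;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

// A minimal sketch: guard against persisted data from old versions that lacks
// the REAL_COLUMNS entry, instead of passing null into objectMapper.readValue.
public class RealColumnsReader {
    private static final String REAL_COLUMNS = "realColumns"; // placeholder value
    private static final ObjectMapper objectMapper = new ObjectMapper();

    public static Map<String, String> readRealColumns(Map<String, String> serializeMap)
            throws Exception {
        String realColumnNamesJson = serializeMap.get(REAL_COLUMNS);
        if (realColumnNamesJson == null) {
            // Old-version metadata: fall back to an empty map instead of
            // failing frontend startup with a deserialization error.
            return new HashMap<>();
        }
        return objectMapper.readValue(realColumnNamesJson,
                new TypeReference<Map<String, String>>() {});
    }
}
```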
Separately, fix the following NullPointerException thrown while executing `SHOW FRONTENDS`:
```
java.lang.NullPointerException: null
at com.sleepycat.je.rep.util.ReplicationGroupAdmin.getMasterSocket(ReplicationGroupAdmin.java:191)
at com.sleepycat.je.rep.util.ReplicationGroupAdmin.doMessageExchange(ReplicationGroupAdmin.java:607)
at com.sleepycat.je.rep.util.ReplicationGroupAdmin.getGroup(ReplicationGroupAdmin.java:406)
at org.apache.doris.ha.BDBHA.getElectableNodes(BDBHA.java:132)
at org.apache.doris.common.proc.FrontendsProcNode.getFrontendsInfo(FrontendsProcNode.java:84)
at org.apache.doris.qe.ShowExecutor.handleShowFrontends(ShowExecutor.java:1923)
at org.apache.doris.qe.ShowExecutor.execute(ShowExecutor.java:355)
at org.apache.doris.qe.StmtExecutor.handleShow(StmtExecutor.java:2113)
...
```
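One possible shape of the fix, as a minimal sketch (an assumption, not the actual patch): treat the transient NPE from the BDB JE call as "no information yet" instead of letting it fail the whole statement.

```java
import java.util.Collections;
import java.util.List;
import java.util.function.Supplier;

// A minimal sketch (assumption): guard the BDB JE replication-admin call so a
// transient NullPointerException during an election surfaces as an empty node
// list instead of failing SHOW FRONTENDS.
public final class SafeBdbCall {
    private SafeBdbCall() {}

    public static List<String> electableNodesOrEmpty(Supplier<List<String>> bdbCall) {
        try {
            return bdbCall.get();
        } catch (NullPointerException e) {
            // ReplicationGroupAdmin.getMasterSocket can hit a null master
            // socket while the group has no elected master yet.
            return Collections.emptyList();
        }
    }
}
```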
All of the following cases are tested and pass with decimalv3:
- Calculation (+, -, *, /)
- All kinds of predicates (<, >, =, <>, in, not in, is null, is not null)
- Load tests (from CSV and via select into)
- Runtime filter
- Delete conditions
- Key columns (agg/duplicate/unique models, distributed/partitioned, bitmap index, ...)
1. Optimize runtime filter (RF) pruning when column stats are not available (see the sketch after this list).
2. Add a regression case to check the plan and RFs for tpcds_sf100 with stats.
3. Add a regression case to check the plan and RFs for tpcds_sf100 without stats.
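A minimal sketch of the pruning decision, with hypothetical names and a hypothetical threshold (not the actual Doris planner code):

```java
// A minimal sketch: prune a runtime filter only when column stats are
// available; without stats, keep the filter rather than guess its selectivity.
public class RuntimeFilterPrunerSketch {
    // Hypothetical stand-in for the planner's column statistics.
    public interface ColumnStats {
        boolean isUnknown();
        double getNdv(); // number of distinct values
    }

    public boolean shouldPrune(ColumnStats buildSide, ColumnStats probeSide) {
        if (buildSide == null || probeSide == null
                || buildSide.isUnknown() || probeSide.isUnknown()) {
            return false; // stats unavailable: keep the runtime filter
        }
        // If the build side covers (nearly) all distinct probe-side values,
        // the filter would pass almost every row, so it can be pruned.
        return buildSide.getNdv() >= 0.95 * probeSide.getNdv();
    }
}
```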