Fixed a bug where querying information_schema.rowsets returned incomplete results when there are multiple BEs.
The root cause: the schema scanner sent the scan fragment to only one of the BEs, and that BE fetched the information from the FE through RPC. Since the rowsets information has to be collected from all BEs, the scan fragment needs to be sent to every BE, as sketched below.
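A minimal sketch of the idea behind the fix; SchemaScanPlanner, ScanRange, and Backend are hypothetical stand-ins, not the actual Doris planner classes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: plan one scan range per backend for BE-local schema
// tables such as information_schema.rowsets, instead of picking a single BE.
class SchemaScanPlanner {
    record Backend(String host, int port) {}
    record ScanRange(Backend be) {}

    List<ScanRange> planScanRanges(List<Backend> backends, boolean backendLocalTable) {
        List<ScanRange> ranges = new ArrayList<>();
        if (backendLocalTable) {
            // rowset metadata lives on every BE, so every BE must run the scan
            for (Backend be : backends) {
                ranges.add(new ScanRange(be));
            }
        } else {
            // FE-resident schema tables: one BE proxying an RPC to FE suffices
            ranges.add(new ScanRange(backends.get(0)));
        }
        return ranges;
    }
}
```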
When backup runs prepareAndSendSnapshotTask(), if some table hits an error, the status is set to not-OK but the method does not return; the remaining tables still put their snapshot jobs into batchTask and the jobs are submitted to BE, even though those jobs then need to be cancelled. The fix: when the status is not OK, return immediately and do not submit the jobs.
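A minimal sketch of the control-flow fix; prepareAndSendSnapshotTask() is the method named above, but all other types and helpers here are stand-ins:

```java
import java.util.List;

class BackupJobSketch {
    enum Status { OK, ERROR; boolean ok() { return this == OK; } }

    Status prepareAndSendSnapshotTask(List<Table> tables) {
        AgentBatchTask batchTask = new AgentBatchTask();
        for (Table tbl : tables) {
            Status st = makeSnapshotTask(tbl, batchTask);
            if (!st.ok()) {
                // Before the fix the loop kept going and the batch was still
                // submitted; now we fail fast and submit nothing.
                return st;
            }
        }
        submitBatchTask(batchTask); // only reached when every table succeeded
        return Status.OK;
    }

    // stubs so the sketch stands alone
    record Table(String name) {}
    static class AgentBatchTask {}
    Status makeSnapshotTask(Table t, AgentBatchTask b) { return Status.OK; }
    void submitBatchTask(AgentBatchTask b) {}
}
```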
1. Fix a race condition when getting the tablet load index.
2. Change the tablet selection algorithm from random to round-robin for random-distribution tables when load_to_single_tablet is set to false (see the sketch below).
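A minimal sketch of the combined fix, assuming hypothetical names: an atomic round-robin counter both spreads loads evenly across the tablets and removes the unsynchronized read-modify-write on the shared index.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

class TabletSelectorSketch {
    private final AtomicLong nextIdx = new AtomicLong(0);

    long pickTabletId(List<Long> tabletIds) {
        // getAndIncrement is atomic, so concurrent loads never race on the
        // index the way a plain long field did; floorMod walks the tablets
        // round-robin instead of sampling them randomly
        int idx = (int) Math.floorMod(nextIdx.getAndIncrement(), (long) tabletIds.size());
        return tabletIds.get(idx);
    }
}
```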
Reproduce:
The DBA performs the following operations:
1. create user user1@['domain']; // the domain resolves to 2 IPs: ip1 and ip2
2. create user user1@'ip1';
3. wait at least 10 seconds
4. grant all on *.*.* to user1@'ip1'; // returns an error: user1@'ip1' does not exist
This happens because the daemon thread DomainResolver resolves the "domain" and overwrites the `user1@'ip1'` entry
that was created explicitly by the DBA.
This PR fixes it.
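A minimal sketch, with hypothetical names, of the idea behind the fix: when DomainResolver materializes a domain into concrete user@ip entries, it must skip identities the DBA created explicitly instead of overwriting them.

```java
import java.util.Set;

class DomainResolverSketch {
    interface UserManager {
        boolean hasExplicitEntry(String user, String ip);
        void setEntryFromDomain(String user, String ip);
    }

    void applyResolvedIps(String user, Set<String> resolvedIps, UserManager mgr) {
        for (String ip : resolvedIps) {
            if (mgr.hasExplicitEntry(user, ip)) {
                continue; // created by the DBA directly; leave it untouched
            }
            mgr.setEntryFromDomain(user, ip); // resolved entries are safe to rewrite
        }
    }
}
```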
New session variable: runtime_filter_wait_infinitely. If runtime_filter_wait_infinitely is set to true, the runtime filter consumer will keep waiting for the filter until the query times out.
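A minimal sketch, all names hypothetical, of the intended semantics: when the variable is true, the consumer's wait budget becomes the remaining query time rather than the fixed runtime filter wait time.

```java
class RuntimeFilterWaitSketch {
    long consumerWaitMs(boolean waitInfinitely, long queryTimeoutMs,
                        long elapsedMs, long rfWaitTimeMs) {
        if (waitInfinitely) {
            // block until the filter arrives or the query itself times out
            return Math.max(0, queryTimeoutMs - elapsedMs);
        }
        return rfWaitTimeMs; // previous behavior: a fixed, bounded wait
    }
}
```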
## Motivation:
In the past, our JOB framework only supported timer JOBs, which could only schedule fixed-time tasks. The JOB was also solely responsible for execution, and during execution states could become inconsistent: a task executed successfully, but the JOB's recorded task status was not updated.
Because the task states recorded by the JOB could be inconsistent, the correctness of the JOB's status could not be guaranteed. As more businesses were gradually integrated into JOB, such as the export job and the mtmv job, we found that scaling became difficult and the JOB became particularly bulky. Hence, we decided to refactor JOB.
## Refactoring Goals:
- Provide a unified external registration interface so that all JOBs can be registered through this interface and be scheduled by the JobScheduler.
- The JobScheduler can schedule instant JOBs, timer JOBs, and manual JOBs.
- JOB should provide a unified external extension class. All JOBs can be extended through this extension class, which can provide special functionalities like JOB status restoration, Task execution, etc.
- Extended JOBs should manage task states on their own to avoid inconsistent state maintenance issues.
- Different JOBs should use their own thread pools for processing to prevent inter-JOB interference.
### Design:
- The JOBManager provides a unified registration interface through which all JOBs can register and then be scheduled by the JobScheduler.
- The TimerJob periodically fetches JOBs that need to be scheduled within a time window and hands them over to the Time Wheel for triggering. To prevent excessive tasks in the Time Wheel, it distributes the tasks to the dispatch thread pool, which then assigns them to corresponding thread pools for execution.
- ManualJob or Instant Job directly assigns tasks to the corresponding thread pool for execution.
- The JOB provides a unified extension class that all JOBs can utilize for extension, providing special functionalities like JOB status restoration, Task execution, etc.
- To implement a new JOB, one only needs to implement AbstractJob.class and AbstractTask.class (a sketch follows the diagram below).
<img width="926" alt="image" src="https://github.com/apache/doris/assets/16631152/3032e05d-133e-425b-b31e-4bb492f06ddc">
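A minimal sketch, under assumed signatures, of what extending the two classes could look like; the real abstract classes define more hooks (JOB status restoration, scheduling metadata, etc.), so treat the method names below as illustrative only.

```java
import java.util.List;

// Assumed shape of the extension points; not the actual Doris definitions.
abstract class AbstractTask {
    abstract void run() throws Exception; // do the actual work
    void onSuccess() {}                   // the task owns its own state updates
    void onFail(Exception e) {}
}

abstract class AbstractJob<T extends AbstractTask> {
    abstract List<T> createTasks();       // split the JOB into tasks
    abstract String threadPoolName();     // each JOB type runs in its own pool
}

// Example extension: an export-style job that emits one task.
class ExportTask extends AbstractTask {
    @Override
    void run() { /* write the exported data */ }
}

class ExportJob extends AbstractJob<ExportTask> {
    @Override
    List<ExportTask> createTasks() { return List.of(new ExportTask()); }

    @Override
    String threadPoolName() { return "export-job-pool"; }
}
```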
## NOTICE
This will make the master's metadata incompatible with previous versions.
The `show column stats` result for external tables shows N/A in the method, type, trigger, and query_times columns. This PR fixes the bug so that the correct values are shown.
Invoking ConnectContext.get() in the replayer thread of slave FE nodes may return null, so an NPE is thrown and the slave nodes crash.
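A minimal sketch of the defensive pattern behind the fix. ConnectContext.get() really is a thread-local lookup in the FE, but every other name here is a hypothetical stand-in:

```java
class ReplaySafeAccessSketch {
    static SessionSettings sessionSettings() {
        ConnectContext ctx = ConnectContext.get();
        if (ctx == null) {
            // the replayer thread on a slave FE has no user session:
            // fall back to defaults instead of dereferencing null
            return SessionSettings.defaults();
        }
        return SessionSettings.from(ctx);
    }

    // stubs so the sketch stands alone
    static class ConnectContext {
        private static final ThreadLocal<ConnectContext> CTX = new ThreadLocal<>();
        static ConnectContext get() { return CTX.get(); }
    }

    static class SessionSettings {
        static SessionSettings defaults() { return new SessionSettings(); }
        static SessionSettings from(ConnectContext c) { return new SessionSettings(); }
    }
}
```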
Co-authored-by: wangxiangyu <wangxiangyu@360shuke.com>
Problem:
When we create a table with the data type varchar(), we treat it as the maximum length by default. But when we `desc` the table, it does not show
the real length; it shows varchar().
Reason:
When we upgraded from version 2.0.1 to 2.0.2, we added support for creating columns as varchar(), and `desc` displays them the same way as the
DDL schema. So users would be confused about the actual length of the varchar column.
Solved:
Change the display of varchar() to varchar(65533), which is compatible with Hive.
close #26882
We should not use the singleNodePlan to generate the rootPlanFragment if the query is inside an insert statement, or the distributedPlanner will be null.
introduced in #15491
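A minimal sketch, all names hypothetical, of the guard this implies in the planner: take the single-node shortcut only for plain queries, because only the distributed path initializes the distributed planner that an insert needs.

```java
class PlannerGuardSketch {
    Fragment createRootFragment(Ctx ctx) {
        if (ctx.singleNodePlan && !ctx.insideInsertStmt) {
            // plain query confined to one node: the shortcut is safe
            return fromSingleNodePlan(ctx);
        }
        // an INSERT must go through the distributed planner, which is the
        // only place ctx.distributedPlanner gets initialized
        return ctx.distributedPlanner.createPlanFragments();
    }

    // stubs so the sketch stands alone
    Fragment fromSingleNodePlan(Ctx ctx) { return new Fragment(); }
    static class Fragment {}
    static class DistributedPlanner {
        Fragment createPlanFragments() { return new Fragment(); }
    }
    static class Ctx {
        boolean singleNodePlan;
        boolean insideInsertStmt;
        DistributedPlanner distributedPlanner = new DistributedPlanner();
    }
}
```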
Concurrent schema change and txn may cause a deadlock. An example:
1. Txn T commits but is not yet published;
2. A schema change or rollup runs on T's related partition and adds alter replica R;
3. sc/rollup adds a sched txn watermark M;
4. FE restarts;
5. After the FE restart, T's loadedTblIndexes is cleared because it is not saved to disk;
6. T publishes its version to all tablets, including sc/rollup's new alter replica R;
7. Since R does not contain the txn data, T fails on R and then waits forever for R's data;
8. sc/rollup waits for all txns before M to finish; only after that will it let R copy history data;
9. Since T never finishes, sc/rollup waits forever, so R never copies history data;
10. Txn T and sc/rollup wait for each other forever, causing a deadlock.
Fix: because sc/rollup guarantees double write after the sched watermark M, when finishing a transaction and checking an alter replica:
- if the txn id is bigger than M, check it just like a normal replica;
- otherwise skip checking this replica; the BE will fill in the history data later.
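A minimal sketch of that check, with hypothetical names (the real check lives in the FE's finish-transaction path):

```java
class PublishCheckSketch {
    interface Replica {
        boolean isAlterReplica();
        boolean containsVersionOfTxn(long txnId);
    }

    boolean replicaSatisfiesTxn(Replica replica, long txnId, long watermarkM) {
        if (replica.isAlterReplica() && txnId <= watermarkM) {
            // data at or before the watermark will be back-filled when the BE
            // copies history data, so do not block publish on this replica
            return true;
        }
        return replica.containsVersionOfTxn(txnId); // normal replica check
    }
}
```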