1. Stopped and cancelled jobs are cleaned up after the clean interval (in seconds) has passed.
2. A job is eligible for cleanup once current timestamp - end timestamp exceeds clean interval seconds * 1000 (see the sketch after this list).
3. If a job cannot fetch topic metadata when it needs to be scheduled (need_schedule), the job is cancelled.
4. Fix the deadlock between job and txn: the txn lock must be acquired before the job lock.
5. The job is paused or cancelled depending on the txn abort reason.
6. The job is cancelled immediately if the abort reason is "offsets out of range".
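A minimal sketch of the cleanup rule and the lock ordering above, assuming illustrative names (RoutineLoadJobSketch, cleanIntervalSecond, txnLock, jobLock) rather than the actual Doris classes:

```java
import java.util.concurrent.locks.ReentrantLock;

public class RoutineLoadJobSketch {
    private final ReentrantLock txnLock = new ReentrantLock();
    private final ReentrantLock jobLock = new ReentrantLock();
    private long endTimestampMs;

    // A STOPPED/CANCELLED job is eligible for cleanup once
    // (current timestamp - end timestamp) > cleanIntervalSecond * 1000.
    boolean isExpired(long cleanIntervalSecond) {
        return System.currentTimeMillis() - endTimestampMs > cleanIntervalSecond * 1000L;
    }

    // To avoid the deadlock, always take the txn lock before the job lock.
    void onTxnStateChange(Runnable jobUpdate) {
        txnLock.lock();
        try {
            jobLock.lock();
            try {
                jobUpdate.run();
            } finally {
                jobLock.unlock();
            }
        } finally {
            txnLock.unlock();
        }
    }
}
```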
1. The stream load executor aborts the txn when the task contains no valid data.
2. Change the txn label to DebugUtil.print(UUID), which is the same as the task id printed by BE.
3. Change UUID printing to the hi-lo format (see the sketch after this list).
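A rough sketch of hi-lo printing, assuming the label is the UUID's high bits followed by its low bits in hex; the exact formatting used by DebugUtil may differ:

```java
import java.util.UUID;

public class UuidPrintSketch {
    // Print a UUID as "<high bits>-<low bits>" so the FE txn label can match
    // the task id string logged by BE (illustrative formatting only).
    static String printId(UUID id) {
        return Long.toHexString(id.getMostSignificantBits())
                + "-" + Long.toHexString(id.getLeastSignificantBits());
    }

    public static void main(String[] args) {
        UUID taskId = UUID.randomUUID();
        System.out.println(printId(taskId));
    }
}
```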
1. Initialize the committed (cmt) offset in the stream load context.
2. Initialize the default max error number to 5000 rows per 10000 rows (see the sketch after this list).
3. Add a log builder for routine load jobs and tasks.
4. Clone the plan fragment params for every task.
5. BE does not report "too many filtered rows" when the initial max error ratio is 1.
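A small sketch of the error tolerance described in items 2 and 5: the default of 5000 error rows per 10000 source rows amounts to a ratio of 0.5, and a ratio of 1 never triggers the "too many filtered rows" error. Names are illustrative, not the actual BE code:

```java
public class MaxErrorSketch {
    static final long DEFAULT_MAX_ERROR_NUM = 5000;
    static final long ERROR_SAMPLE_WINDOW = 10000;

    static boolean tooManyFilteredRows(long filteredRows, long totalRows, double maxErrorRatio) {
        if (maxErrorRatio >= 1.0) {
            return false; // a ratio of 1 tolerates any amount of filtered data
        }
        return totalRows > 0 && (double) filteredRows / totalRows > maxErrorRatio;
    }

    public static void main(String[] args) {
        double defaultRatio = (double) DEFAULT_MAX_ERROR_NUM / ERROR_SAMPLE_WINDOW; // 0.5
        System.out.println(tooManyFilteredRows(6000, 10000, defaultRatio)); // true
        System.out.println(tooManyFilteredRows(6000, 10000, 1.0));          // false
    }
}
```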
1. Check whether properties is null before checking the routine load properties.
2. Change the transactionStateChange reason to a string.
3. Calculate the current task number by beId.
4. Add Kafka offset properties.
5. Prefer to reuse the previous BE id.
6. Add a before-commit listener on the txn: if the txn is committed after its task has been aborted, the commit is aborted (see the sketch after this list).
7. Set the queryId of the stream load plan to the taskId.
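A hypothetical sketch of the before-commit check in item 6; the listener interface, exception, and field names are illustrative, not the actual Doris transaction API:

```java
public class BeforeCommitSketch {
    interface BeforeCommitListener {
        void beforeCommit(long txnId) throws IllegalStateException;
    }

    static class AbortedTaskGuard implements BeforeCommitListener {
        private volatile boolean taskAborted = false;

        void markAborted() {
            taskAborted = true;
        }

        @Override
        public void beforeCommit(long txnId) {
            if (taskAborted) {
                // Reject the commit of a txn whose task has already been aborted.
                throw new IllegalStateException("task of txn " + txnId + " was aborted");
            }
        }
    }

    public static void main(String[] args) {
        AbortedTaskGuard guard = new AbortedTaskGuard();
        guard.markAborted();
        try {
            guard.beforeCommit(42L);
        } catch (IllegalStateException e) {
            System.out.println("commit rejected: " + e.getMessage());
        }
    }
}
```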
Add a session variable enable_insert_strict, whose default value is false. When
it is set to true, an insert fails if any data is filtered. When it is false,
the insert ignores filtered data and succeeds (see the sketch below).
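A small sketch of the described behavior; the method name and checks are illustrative, not the actual insert code path:

```java
public class InsertStrictSketch {
    // Strict mode fails the insert on any filtered row; otherwise filtered rows are ignored.
    static boolean insertSucceeds(boolean enableInsertStrict, long filteredRows) {
        if (enableInsertStrict && filteredRows > 0) {
            return false; // strict mode: fail on any filtered data
        }
        return true; // non-strict (default): filtered data is ignored
    }

    public static void main(String[] args) {
        System.out.println(insertSucceeds(false, 10)); // true  (default behavior)
        System.out.println(insertSucceeds(true, 10));  // false (strict mode)
    }
}
```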
1. Add a parallel number property to the fragment.
2. The parallel number is initialized in the constructor of the fragment.
3. An async plan uses the default value, while a sync plan uses the parallelExecInstanceNum from SessionVariable (see the sketch after this list).
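A sketch of how the parallel number could be chosen per the notes above; the class names, the assumed default of 1, and the field names are illustrative, not the actual planner code:

```java
public class FragmentParallelSketch {
    static final int DEFAULT_PARALLEL_EXEC_INSTANCE_NUM = 1; // assumed default

    static class PlanFragment {
        final int parallelExecNum;

        PlanFragment(int parallelExecNum) {
            this.parallelExecNum = parallelExecNum; // initialized in the constructor
        }
    }

    static PlanFragment newFragment(boolean asyncPlan, int sessionParallelExecInstanceNum) {
        int parallel = asyncPlan
                ? DEFAULT_PARALLEL_EXEC_INSTANCE_NUM  // async plan: use the default
                : sessionParallelExecInstanceNum;     // sync plan: use the session variable
        return new PlanFragment(parallel);
    }

    public static void main(String[] args) {
        System.out.println(newFragment(true, 8).parallelExecNum);  // 1 (default)
        System.out.println(newFragment(false, 8).parallelExecNum); // 8 (session variable)
    }
}
```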