Previously, when creating a repository, the FE would not perform a connectivity check, which could result in confusing errors when using backup and restore.
pick #38350
Co-authored-by: AlexYue <yj976240184@gmail.com>
pick (#39311)
Updating the progress may throw an exception, leaving the offset persisted in the edit log or meta service while the in-memory data is not updated. This causes repeated consumption.
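The failure mode can be sketched as follows (a simplified model with invented names, not the actual Doris code): the offset is made durable first, the in-memory update throws, and the two copies of the progress diverge.

```python
# Hypothetical sketch of the failure mode (not actual Doris code):
# the offset is persisted first, then the in-memory progress is
# updated; if the update throws, the consumer keeps reading from the
# stale in-memory offset and re-consumes already-persisted data.

class Progress:
    def __init__(self):
        self.persisted_offset = 0   # stands in for the edit log / meta service
        self.memory_offset = 0      # what the consumer actually reads from

    def advance(self, new_offset, update_raises=False):
        self.persisted_offset = new_offset     # durable write succeeds
        if update_raises:                      # e.g. an unexpected exception
            raise RuntimeError("update progress failed")
        self.memory_offset = new_offset        # never reached on failure

p = Progress()
p.advance(10)
try:
    p.advance(20, update_raises=True)
except RuntimeError:
    pass
# Persisted and in-memory state now disagree, so the range 10..20
# would be consumed a second time.
assert p.persisted_offset == 20 and p.memory_offset == 10
```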
pick (#39360)
When fetching stream load records from a BE node, if the database cannot be found, StreamLoadRecordMgr throws an exception and the remaining records are not recorded in memory.
For example: ten stream load records were pulled, and the database associated with the stream load of the first record had been deleted by the user. The pull therefore ends, leaving the remaining nine records unrecorded in memory.
This PR solves the problem by skipping a record, instead of throwing an exception, when its database cannot be found.
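The change boils down to a throw-versus-skip decision inside the record loop. A minimal sketch of the two behaviors (generic Python, not the actual StreamLoadRecordMgr code):

```python
# Hypothetical sketch: skipping a record whose database is missing
# lets the remaining records still be recorded, instead of aborting
# the whole batch on the first miss.

databases = {"db1", "db2"}          # "db0" was dropped by the user
records = [("db0", "load_a"), ("db1", "load_b"), ("db2", "load_c")]

def record_all_or_throw(records):
    kept = []
    for db, label in records:
        if db not in databases:
            raise KeyError(db)      # old behavior: abort on first miss
        kept.append(label)
    return kept

def record_skipping_missing(records):
    kept = []
    for db, label in records:
        if db not in databases:
            continue                # new behavior: skip and keep going
        kept.append(label)
    return kept

try:
    record_all_or_throw(records)    # old path records nothing
except KeyError:
    pass
assert record_skipping_missing(records) == ["load_b", "load_c"]
```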
## Proposed changes
Expose the error message from the BE as the real failure reason recorded for schema change tasks. To avoid excessive memory usage, we pick just one message among all of them to record.
## Proposed changes
bp: #39205
When the catalog attributes have not changed, refreshing the catalog
only requires processing the cache, without rebuilding the entire
catalog.
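The cheap-versus-expensive refresh path can be sketched like this (a simplified model with invented names, not the Doris catalog code): compare the incoming attributes with the current ones, and only rebuild when they differ.

```python
# Hypothetical sketch: when the refreshed properties equal the
# current ones, only the caches are invalidated; otherwise the
# catalog is rebuilt from scratch.

class Catalog:
    def __init__(self, properties):
        self.properties = dict(properties)
        self.cache = {}
        self.rebuilds = 0

    def refresh(self, new_properties):
        if dict(new_properties) == self.properties:
            self.cache.clear()          # cheap path: drop caches only
        else:
            self.properties = dict(new_properties)
            self.cache.clear()
            self.rebuilds += 1          # expensive path: full rebuild

c = Catalog({"uri": "thrift://host:9083"})
c.cache["tables"] = ["t1"]
c.refresh({"uri": "thrift://host:9083"})   # unchanged -> cache-only
assert c.rebuilds == 0 and c.cache == {}
c.refresh({"uri": "thrift://other:9083"})  # changed -> rebuild
assert c.rebuilds == 1
```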
pick (#39341)
In previous versions, we used a JDBC 4.2-based method to read data, which effectively dropped support for ojdbc6. However, we recently found that a large number of users still use Oracle 11g, which runs into unexpected compatibility issues when connected through ojdbc8. Therefore, I use driver version verification to stay compatible with both ojdbc6 and ojdbc8, so that good compatibility is obtained through ojdbc6 and better reading efficiency through ojdbc8.
pick (#39180)
In #37565, due to the change in the calling order of finalize, the final generated plan was missing the predicates that had been pushed down to JDBC. Although this behavior is correct, until the push-down of the various predicates is handled perfectly, we need to keep all conjuncts to ensure that we can still filter data correctly when the data returned by JDBC is a superset.
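The keep-all-conjuncts rule can be illustrated with a small sketch (generic Python, not the Doris planner): even when a predicate is pushed down to the external source, the local plan re-applies it, so a source that returns a superset is still filtered correctly.

```python
# Hypothetical sketch: the JDBC source may only partially apply a
# pushed-down predicate, so the kept conjunct re-filters locally.

def jdbc_scan(pushed_predicate):
    rows = [1, 5, 10, 15]
    # Suppose the external source ignores (or weakens) the predicate
    # and returns a superset of the matching rows.
    return list(rows)

def scan_with_kept_conjuncts(predicate):
    returned = jdbc_scan(predicate)               # may be a superset
    return [r for r in returned if predicate(r)]  # local re-filter

pred = lambda r: r >= 10
assert scan_with_kept_conjuncts(pred) == [10, 15]
```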
cherry-pick from master #39020
Problem:
When using a DELETE FROM ... USING clause with partition information specified, data in other partitions could also be deleted.
Solved:
Add the partition information when converting the delete clause into an insert-into-select clause.
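A simplified model of the bug (plain Python, not Doris internals): if the rewrite drops the partition restriction, matching rows are removed from every partition instead of only the named one.

```python
# Sketch: DELETE FROM t PARTITION (p1) USING s must carry the
# partition into the rewritten insert-select, otherwise rows matched
# by the USING clause are deleted from other partitions too.

table = {"p1": [1, 2, 3], "p2": [2, 4]}
source = {2}   # keys matched by the USING clause

def delete_using(table, source, partition=None):
    result = {}
    for part, rows in table.items():
        if partition is None or part == partition:
            result[part] = [r for r in rows if r not in source]
        else:
            result[part] = list(rows)   # untouched partition
    return result

buggy = delete_using(table, source)                  # partition lost
fixed = delete_using(table, source, partition="p1")  # partition kept

assert buggy == {"p1": [1, 3], "p2": [4]}       # p2 lost a row too
assert fixed == {"p1": [1, 3], "p2": [2, 4]}
```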
Introduced by #38950: explaining a plan with SQL cache enabled throws an exception
```
errCode = 2, detailMessage = Cannot invoke "org.apache.doris.nereids.trees.plans.Plan.treeString()" because "this.optimizedPlan" is null
```
pick from master #39352
use double to match string
- corr
- covar
- covar_samp
- stddev
- stddev_samp
use largeint to match string
- group_bit_and
- group_bit_or
- group_bit_xor
use double to match decimalv3
- topn_weighted
optimize error message
- multi_distinct_sum
- multi_distinct_sum0
pick from master #39095
This is a behavior change PR.
The set operation INTERSECT should be evaluated before the others. Historically, all set operators in Doris had the same priority.
This PR changes Nereids to match MySQL.
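The precedence change can be illustrated with Python sets: under the new rule INTERSECT binds tighter than UNION, so `A UNION B INTERSECT C` means `A UNION (B INTERSECT C)` rather than `(A UNION B) INTERSECT C`.

```python
# Illustration of the precedence change using Python set operators
# (| for UNION, & for INTERSECT).

A, B, C = {1, 2}, {2, 3}, {3, 4}

old_result = (A | B) & C        # equal priority, left to right
new_result = A | (B & C)        # INTERSECT evaluated first (MySQL-style)

assert old_result == {3}
assert new_result == {1, 2, 3}
```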
pick from master #38497 and #39342
use array<double> for array<string>
- array_avg
- array_cum_sum
- array_difference
- array_product
use array<bigint> for array<string>
- bitmap_from_array
use double first
- fmod
- pmod
let high order function throw friendly exception
- array_filter
- array_first
- array_last
- array_reverse_split
- array_sort_by
- array_split
let return type same as parameter's type
- array_push_back
- array_push_front
- array_with_constant
- if
let greatest / least work the same as MySQL's greatest / least
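For reference, MySQL's GREATEST and LEAST return NULL when any argument is NULL; a minimal Python sketch of that semantics (None standing in for NULL):

```python
# Sketch of MySQL GREATEST/LEAST semantics: NULL-in, NULL-out.

def greatest(*args):
    return None if any(a is None for a in args) else max(args)

def least(*args):
    return None if any(a is None for a in args) else min(args)

assert greatest(1, 5, 3) == 5
assert least(1, 5, 3) == 1
assert greatest(1, None, 3) is None
assert least(1, None, 3) is None
```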