Previously, when creating a repository, the FE did not perform a connectivity check. This could result in confusing errors later when using backup and restore.
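With this fix, an unreachable or misconfigured repository is reported at creation time rather than later during BACKUP/RESTORE. A minimal sketch of a statement that now triggers the check (repository name, location, and credentials below are placeholders):
```
CREATE REPOSITORY `example_repo`
WITH S3
ON LOCATION "s3://bucket/backup_prefix"
PROPERTIES (
    -- placeholder endpoint and credentials; the FE now verifies it can
    -- actually reach this storage before the repository is created
    "AWS_ENDPOINT" = "http://s3.example.com",
    "AWS_REGION" = "us-east-1",
    "AWS_ACCESS_KEY" = "...",
    "AWS_SECRET_KEY" = "..."
);
```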
pick #38350
Co-authored-by: AlexYue <yj976240184@gmail.com>
pick (#39311)
Updating the progress may throw an exception, leaving the offset persisted in the edit log or meta service while the in-memory data has not been updated. This causes repeated consumption.
pick (#39360)
When fetching stream load records from a BE node, if the database cannot be found, StreamLoadRecordMgr throws an exception and the remaining records are not recorded in memory.
For example: ten stream load records were pulled, and the database associated with the stream load in the first record had been deleted by the user. The pull therefore ends early, and the remaining nine records are never recorded in memory.
This PR skips the record instead of throwing an exception when the database cannot be found, which solves the problem.
## Proposed changes
Expose the error message from the BE as the real failure reason recorded for schema change tasks. To avoid excessive memory usage, we record only one message among all of them.
## Proposed changes
bp: #39205
When the catalog attributes have not changed, refreshing the catalog
only requires processing the cache, without rebuilding the entire
catalog.
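As a hedged illustration, the statement whose behavior changes is the catalog refresh; the catalog name below is hypothetical:
```
-- When the catalog's attributes are unchanged, this now only invalidates
-- cached metadata instead of rebuilding the entire catalog.
REFRESH CATALOG hive_catalog;
```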
pick (#39341)
In previous versions, we read data with a method based on JDBC 4.2, which effectively dropped support for ojdbc6. However, we recently found that a large number of users still use Oracle 11g, which runs into unexpected compatibility issues when connecting through ojdbc8. Therefore, I use driver version verification to stay compatible with both ojdbc6 and ojdbc8, so that good compatibility is obtained through ojdbc6 and better read efficiency through ojdbc8.
pick (#39180)
In #37565, due to the change in the calling order of finalize, the final generated plan was missing the predicates that had been pushed down to JDBC. Although that behavior is correct, until we perfectly handle pushing down all kinds of predicates, we need to keep all conjuncts so that data can still be filtered correctly when the data returned by JDBC is a superset.
cherry-pick from master #39020
Problem:
When a DELETE FROM ... USING clause also specified partition information, it could delete additional data from other partitions.
Solved:
Carry the partition information when converting the delete clause into an insert-into-select clause.
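A minimal sketch of the affected statement shape, with hypothetical tables `t1` (partitioned, with partition `p1`) and `t2`:
```
-- Before this fix, the rewritten INSERT INTO ... SELECT dropped the partition
-- restriction, so matching rows outside p1 could also be deleted.
DELETE FROM t1 PARTITION p1
USING t2
WHERE t1.id = t2.id;
```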
Introduced by #38950: explaining a plan with SQL cache enabled throws an exception:
```
errCode = 2, detailMessage = Cannot invoke "org.apache.doris.nereids.trees.plans.Plan.treeString()" because "this.optimizedPlan" is null
```
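An assumed reproduction shape (the session variable, table, and explain level are illustrative and may differ from the actual trigger):
```
SET enable_sql_cache = true;
-- first run populates the SQL cache
SELECT k1, count(*) FROM t GROUP BY k1;
-- explaining the cached query could then hit the null optimizedPlan before this fix
EXPLAIN OPTIMIZED PLAN SELECT k1, count(*) FROM t GROUP BY k1;
```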
pick from master #39352
use double to match string
- corr
- covar
- covar_samp
- stddev
- stddev_samp
use largeint to match string
- group_bit_and
- group_bit_or
- group_bit_xor
use double to match decimalv3
- topn_weighted
optimize error message
- multi_distinct_sum
- multi_distinct_sum0
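A hedged illustration of the relaxed matching (tables and columns are hypothetical): string arguments to these aggregates now resolve by casting to the listed numeric type instead of failing signature matching.
```
-- corr / covar / stddev variants: string inputs are matched as DOUBLE
SELECT corr(cast(x AS string), cast(y AS string)) FROM t;
-- group_bit_* functions: string inputs are matched as LARGEINT
SELECT group_bit_and(cast(v AS string)) FROM t;
```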
pick from master #39095
This is a behaviour change PR.
The set operation INTERSECT should be evaluated before the other set operations. Historically in Doris, all set operators had the same precedence.
This PR changes Nereids so that it behaves the same as MySQL.
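For example, with MySQL-compatible precedence, INTERSECT binds tighter than UNION and EXCEPT (table names are hypothetical):
```
-- now evaluated as  a UNION (b INTERSECT c), matching MySQL,
-- instead of the old left-to-right  (a UNION b) INTERSECT c
SELECT k FROM a
UNION
SELECT k FROM b
INTERSECT
SELECT k FROM c;
```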
pick from master #38497 and #39342
use array<double> for array<string>
- array_avg
- array_cum_sum
- array_difference
- array_product
use array<bigint> for array<string>
- bitmap_from_array
use double first
- fmod
- pmod
let high-order functions throw friendly exceptions
- array_filter
- array_first
- array_last
- array_reverse_split
- array_sort_by
- array_split
let the return type be the same as the parameter's type
- array_push_back
- array_push_front
- array_with_constant
- if
let greatest / least work the same as MySQL's greatest / least
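As a hedged illustration of one facet of MySQL's behavior (only an example; the alignment may cover more cases):
```
-- In MySQL, GREATEST/LEAST return NULL if any argument is NULL
SELECT greatest(1, 2, NULL);  -- NULL
SELECT least(3, NULL, 1);     -- NULL
```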
## Proposed changes
This was introduced by https://github.com/apache/doris/pull/38008.
If `cast(FLOOR(MINUTE(time) / 15) as decimal(9, 0))` is used in the group by clause when creating a sync materialized view, then downgrading from 2.1.6 to 2.1.5 or upgrading from 2.1.6 to 3.0.0 may leave the FE unable to run. So revert the function.
## Proposed changes
Before this change, `corr(nullable_x, nullable_y)` would core dump; this had not been fixed.
There is no need to patch master because the refactor https://github.com/apache/doris/pull/37330 already changed the implementation context.
## Proposed changes
Pick #38895
Before this PR, this API required a backend's IP and port as a parameter, which was hard to use. This PR modifies it: if no parameter is given, Doris prints the WAL info of all backends.
Acceptable usage is as follows:
```
curl -u root: "127.0.0.1:8038/api/get_wal_size?host_ports=127.0.0.1:9058"
{"msg":"success","code":0,"data":["127.0.0.1:9058:0"],"count":0}%
curl -u root: "127.0.0.1:8038/api/get_wal_size?host_ports="
{"msg":"success","code":0,"data":["127.0.0.1:9058:0"],"count":0}%
curl -u root: "127.0.0.1:8038/api/get_wal_size"
{"msg":"success","code":0,"data":["127.0.0.1:9058:0"],"count":0}%
```
## Proposed changes
…#39322)
https://github.com/apache/doris/pull/39322
## Proposed changes
```
mysql [(none)]>select round(timediff(now(),'2024-08-15')/60/60,2);
ERROR 1105 (HY000): errCode = 2, detailMessage = argument 1 requires datetimev2 type, however 'now()' is of datetime type
```
The reason is that the function parameter types were modified in
expectedInputTypes, which led to no match being found. The code here is
from a long time ago. Because the precision of datetimev2 could not be
deduced in the past, a separate implementation was made here. This code
can be safely deleted.
## Proposed changes
bp: #38400
When the `Export` statement specifies the `delete_existing_files`
property, each `Outfile` statement generated by the `Export` will carry
this property. This causes each `Outfile` statement to delete existing
files, so only the result of the last Outfile statement will be
retained.
So we add an RPC method that deletes existing files for the `Export` statement, and the `Outfile` statements generated by the `Export` no longer carry the `delete_existing_files` property.
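A hedged sketch of an Export using this property (table, path, and storage details are placeholders):
```
-- The Export job now deletes existing files once via the new RPC method;
-- the Outfile statements it generates no longer carry delete_existing_files.
EXPORT TABLE example_tbl
TO "s3://bucket/export/"
PROPERTIES ("delete_existing_files" = "true")
WITH S3 (
    "s3.endpoint" = "http://s3.example.com",
    "s3.region" = "us-east-1",
    "s3.access_key" = "...",
    "s3.secret_key" = "..."
);
```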