Commit Graph

18429 Commits

Author SHA1 Message Date
d60d804d9c [fix](memory) Fix task repeat attach task DCHECK failed #32784 (#33343)
[branch-2.1](memory) Fix CCR task repeat attach task DCHECK failed #33366
2024-04-08 16:15:04 +08:00
1f3ab4fd24 [fix](jdbc catalog) fix db2 test connection sql (#33335) 2024-04-08 09:05:44 +08:00
c318c48a38 [fix](compile) fix implicit float-to-int conversion in mem_info calculation (#33311) 2024-04-08 07:34:22 +08:00
ebbfb06162 [Bug](array) fix array column core dump in get_shrinked_column due to missing type check (#33295)
* [Bug](array) fix array column core dump in get_shrinked_column due to missing type check

* add function could_shrinked_column
2024-04-08 07:27:40 +08:00
1b3e4322e8 [improvement](serde) Handle NaN values in number for MySQL result write (#33227) 2024-04-07 23:24:23 +08:00
fae55e0e46 [Feature](information_schema) add processlist table for information_schema db (#32511) 2024-04-07 23:24:22 +08:00
29556f758e [fix](parquet) fix time zone error in parquet reader (#33217)
The meaning of `isAdjustedToUTC` was interpreted as exactly the opposite in the parquet reader (https://github.com/apache/parquet-format/blob/master/LogicalTypes.md), so timestamps with `isAdjustedToUTC=true` were shifted forward by eight hours (UTC+8).

A parquet file with `isAdjustedToUTC=true` can be produced by spark-sql with the following configuration:
```
--conf spark.sql.session.timeZone=UTC
--conf spark.sql.parquet.outputTimestampType=TIMESTAMP_MICROS
```

However, with the following configuration there is no logical or converted type in the parquet metadata, so the time read by Doris will still be shifted forward by eight hours (UTC+8). Users need to set the UTC time zone in Doris themselves (https://doris.apache.org/docs/dev/advanced/time-zone/); a sketch of that workaround follows the configuration below.
```
--conf spark.sql.session.timeZone=UTC
--conf spark.sql.parquet.outputTimestampType=INT96
```
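
A minimal sketch of that session-level workaround, assuming the standard Doris `time_zone` session variable (the table name is a placeholder):
```
-- Workaround sketch: align the Doris session time zone with the writer's UTC
-- timestamps so the values are not shifted by the local +08:00 offset.
set time_zone = '+00:00';
-- `parquet_tbl` stands in for any table or tvf reading such parquet files.
select ts_col from parquet_tbl;
```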
2024-04-07 23:24:22 +08:00
b882704eaf [fix](Export) Set the default value of the data_consistence property of export to partition (#32830) 2024-04-07 23:24:22 +08:00
69bf3b9da4 [fix](hdfs-writer) Catch error information after hdfsCloseFile() (#33195) 2024-04-07 23:24:17 +08:00
586df24b9d [fix](tvf) Support fs.defaultFS with postfix '/' (#33202)
For HDFS tvf like:
```
select count(*) from hdfs(
"uri" = "hdfs://HDFS8000871/path/to/1.parquet",
"fs.defaultFS" = "hdfs://HDFS8000871/",
"format" = "parquet"
);
```

Previously, if `fs.defaultFS` ended with `/`, the query would fail with an error like:
```
reason: RemoteException: File does not exist: /user/doris/path/to/1.parquet
```
You can see the path is wrong, with an unexpected `/user/doris` prefix.
Users had to set `fs.defaultFS` to `hdfs://HDFS8000871` (without the trailing `/`) to avoid the error, as in the sketch below.
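
The pre-fix workaround looked like this (same placeholder cluster name as above):
```
select count(*) from hdfs(
"uri" = "hdfs://HDFS8000871/path/to/1.parquet",
"fs.defaultFS" = "hdfs://HDFS8000871",  -- no trailing '/'
"format" = "parquet"
);
```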

This PR fixes the issue, so the trailing `/` form in the first example now works as well.
2024-04-07 22:21:14 +08:00
466972926e [fix](dns-cache) do not detach the refresh thread (#33182) 2024-04-07 22:18:56 +08:00
feb2f4fae8 [feature](local-tvf) support local tvf on shared storage (#33050)
Previously, the local tvf could only query data on one BE node.
But if the storage is shared (e.g. NAS), the query can be executed on multiple nodes.

This PR mainly changes:
1. Add a new property `"shared_stoage" = "false/true"`

    Default is false. If set to true, "backend_id" becomes optional: if "backend_id" is set, the query is
    still executed on that BE; if it is not set, "shared_stoage" must be "true"
    and the query is executed on multiple nodes. A usage sketch follows the doc link below.

Doc: https://github.com/apache/doris-website/pull/494
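
A minimal usage sketch, assuming the standard `local()` tvf parameters; the file path is a placeholder and the property name is spelled as in this commit:
```
select count(*) from local(
"file_path" = "shared/nas/path/data.csv",
"format" = "csv",
"shared_stoage" = "true"  -- without "backend_id", the scan may run on multiple BE nodes
);
```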
2024-04-07 22:17:28 +08:00
95da52b9d8 [fix](avro) avoid BE crash if avro scanner's dependency jars are missing (#33031)
1. Check the return value of avro reader's init_fetch_table_schema_reader()
2. Also fix a bug where the parse exception of Nereids may suppress the real exception from the old planner,
    making the real error message invisible.
2024-04-07 22:17:16 +08:00
ed93d6132f [fix](jni) avoid coredump if failed to get jni env (#32950)
PR #32217 found a problem where getting the jni env may fail,
and did a workaround to avoid a BE crash.

This PR follows up on that issue to avoid a BE crash in `close()` of JniConnector
when getting the jni env fails.

The `close()` method returns an error when:
1. it fails to get the jni env;
2. it fails to release jni resources.

This PR ignores the first error, and still logs fatal for the second.
2024-04-07 22:16:53 +08:00
c758a25dd8 [opt](fqdn) Add DNS Cache for FE and BE (#32869)
Previously, when FQDN was enabled, Doris called the dns resolver to get the IP from the hostname
each time 1) FE got a BE's grpc client, or 2) a BE got another BE's brpc client.
Under high concurrency, the dns resolver could be overloaded and fail to resolve the hostname.

This PR mainly changes:

1. Add DNSCache for both FE and BE.
    The DNSCache runs on every FE and BE node. It keeps a cache whose key is the hostname and whose value is the IP.
    Callers can get an IP by hostname from this cache; if the hostname is not present, the cache will try to resolve it
    and update itself.
    In addition, DNSCache has a daemon thread that refreshes the cache every minute, in case the IP
    changes at any time.

There are other implementations of this dns cache:

1.  36fed13997
    This is for the BE side, but it does not handle the IP change case.

2. https://github.com/apache/doris/pull/28479
    This is for the FE side, but it only works with the Master FE; other FE nodes will not be aware of the IP change.
    And there are a bunch of BackendServiceProxy instances; that PR only handles the cache in one of them.
2024-04-07 22:16:04 +08:00
8bb2ef1668 [opt](iceberg) no need to check the name format of iceberg's database (#32977)
No need to check the name format of iceberg's database.
We should accept all databases.
2024-04-07 22:14:51 +08:00
e9b67bc82d [bugfix](paimon)merge meta-inf/services for paimon FileIOLoader (#33166)
We introduced paimon's oss and s3 packages, but did not register them in META-INF/services. As a result, when BE used the s3 or oss interface, an error was reported because the class could not be found (`Could not find a file io implementation for scheme 's3' in the classpath.`).

FYI:
https://stackoverflow.com/questions/47310215/merging-meta-inf-services-files-with-maven-assembly-plugin
https://stackoverflow.com/questions/1607220/how-can-i-merge-resource-files-in-a-maven-assembly
2024-04-07 22:13:00 +08:00
d9d950d98e [fix](iceberg) fix iceberg predicate conversion bug (#33283)
Follow-up to #32923

Some cases were not covered in #32923
2024-04-07 22:12:38 +08:00
190763e301 [bugfix](iceberg)Convert the datetime type in the predicate according to the target column (#32923)
Convert the datetime type in the predicate according to the target column.
Also add a testcase for #32194.
Related: #30478, #30162. An illustrative predicate is sketched below.
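
An illustrative query of the kind this conversion affects (catalog, table, and column names are hypothetical):
```
-- Per this commit, the datetime literal in the predicate is converted
-- according to the type of the target Iceberg column.
select count(*) from iceberg_ctl.db1.events
where event_time < '2024-01-01 00:00:00';
```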
2024-04-07 22:12:33 +08:00
ecb4372479 [Fix](pipelinex) Fix MaxScannerThreadNum calculation error in file scan operator when turn on pipelinex. (#33037)
MaxScannerThreadNum in the file scan operator is calculated incorrectly when pipelinex is turned on; it consumes a lot of memory and causes performance degradation. This PR fixes it.
2024-04-07 22:11:27 +08:00
32d6a4fdd5 [opt](rowcount) refresh external table's rowcount async (#32997)
In the previous implementation, the row count cache expired after 10 min (by default),
and after expiration the next row count request would miss the cache, causing an unstable query plan.

In this PR, the cache is refreshed after Config.external_cache_expire_time_minutes_after_access,
so that the cache entry remains fresh.
2024-04-07 22:11:14 +08:00
ebf45bff20 [fix](variables) change column type of @@autocommit to BIGINT (#33282)
Some mysql connectors (e.g. dotnet MySQL.Data) rely on a variable's column type when making a connection.
For example, `select @@autocommit` should return column type `BIGINT`, not `BIT`; otherwise the connector throws an error like:

```
System.FormatException: The input string 'True' was not in a correct format.
   at System.Number.ThrowFormatException[TChar](ReadOnlySpan`1 value)
   at System.Convert.ToInt32(String value)
   at MySql.Data.MySqlClient.Driver.LoadCharacterSetsAsync(MySqlConnection connection, Boolean execAsync, CancellationToken cancellationToken)
   at MySql.Data.MySqlClient.Driver.ConfigureAsync(MySqlConnection connection, Boolean execAsync, CancellationToken cancellationToken)
   at MySql.Data.MySqlClient.MySqlConnection.OpenAsync(Boolean execAsync, CancellationToken cancellationToken)
   at MySql.Data.MySqlClient.MySqlConnection.Open()
```

In this PR, I add a new field to `VarAttr`: `convertBoolToLongMethod`; if set, it converts the boolean value to long.
It is set for `autocommit`. A minimal check is sketched below.
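
A minimal sketch of the check this fix enables (the reported column type follows this commit's description):
```
-- After this change, the result column of @@autocommit is reported as BIGINT
-- (value 0 or 1) instead of BIT, which dotnet MySQL.Data can parse.
select @@autocommit;
```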
2024-04-07 22:02:28 +08:00
59b8bf24b1 [chore](license) fix incomplete license header (#33306)
Co-authored-by: yiguolei <yiguolei@gmail.com>
2024-04-07 15:00:14 +08:00
92d7333810 [Fix](point query) avoid nullptr in _block_pool (#33120)
`resize` will leave nullptrs in _block_pool if _block_pool.size() < s_preallocted_blocks_num
2024-04-07 13:02:37 +08:00
132dbeda7f [BugFix](Iceberg Catalog) Fix iceberg catalogs of hms and hadoop type not supporting iceberg properties (#33113)
* fix iceberg catalogs of hms and hadoop type not supporting iceberg properties

* remove unused import
2024-04-07 13:01:24 +08:00
62699c8eea [improve](function) the offset params in lead/lag function could use 0 (#33174) 2024-04-07 12:58:03 +08:00
77349ca71a [pipelineX](fix) Fix coredump by incorrect cancel order (#33294) 2024-04-07 12:06:12 +08:00
0d0cb6d8a4 [fix](nereids)SimplifyRange didn't process NULL value correctly (#33296) 2024-04-07 11:02:32 +08:00
950ca68fac [fix](move-memtable) fix timeout to get tablet schema (#33256) (#33260) 2024-04-04 21:45:55 +08:00
df8e397dd8 [Fix](executor)Fix normal group can not be appended when image exists #33197 2024-04-03 20:37:12 +08:00
df197c6a14 [fix](move-memtable) fix initial use count of streams for auto partition (#33165) (#33236)
Co-authored-by: Kaijie Chen <ckj@apache.org>
2024-04-03 20:31:29 +08:00
Pxl
05a84bd485 [Bug](runtime-filter) set need_local_merge to false when rf is broadcast (#33211)
set need_local_merge to false when rf is broadcast
2024-04-03 19:14:09 +08:00
Pxl
113bada7ed [Chore](runtime-filter) add check is broadcast on nlj (#33088)
add a check for whether the runtime filter is broadcast on nlj
2024-04-03 19:14:05 +08:00
797b8fa456 [FIX](agg) fix vertical_compaction_reader for agg table with array/map type (#33130) 2024-04-03 18:09:45 +08:00
fff5c85a71 [bugfix](stop) should skip the loop when graceful stop (#33212)
Co-authored-by: yiguolei <yiguolei@gmail.com>
2024-04-03 17:10:32 +08:00
7675383c40 [bugfix](deadlock) fix dead lock in cancel fragment (#33181)
Co-authored-by: yiguolei <yiguolei@gmail.com>
2024-04-03 13:41:24 +08:00
4d18fc1e4c [profile](name) add table rollup name in profile (#33137) 2024-04-02 22:36:30 +08:00
Pxl
34f5521643 [Bug](min-max) store string data in MinMaxNumFunc to avoid use-after-free on cancel (#33152)
* store string data in MinMaxNumFunc to avoid use-after-free on cancel

* update
2024-04-02 22:35:58 +08:00
6150c54df5 [bugfix](asyncwriter) async writer's lock should not include finish or close method (#33077)
The close or finish method can take a long time, so the lock would be held for a long time. If there is a bug in the close or finish method, it would block the pipeline execution thread.
The writer's close method needs this lock, so it would hang when the close method is called.
2024-04-02 14:23:00 +08:00
1be38e798d push topn-filter to both sides of inner join (#33112) 2024-04-01 22:46:28 +08:00
7ace3ff6de [branch-2.1](pick) pick 2 prs about nereids create variant table (#33125)
* [fix](Nereids) support variant column with index when create table (#32948)

* [opt](Nereids) support create table with variant type (#32953)

---------

Co-authored-by: morrySnow <101034200+morrySnow@users.noreply.github.com>
2024-04-01 19:00:39 +08:00
bec153b369 [fix](timeout) query timeout was not correctly set #33045 2024-03-30 22:33:03 +08:00
9f2520537f 2.1.1-rc05 2024-03-30 21:26:27 +08:00
ed48d321d0 [fix](pipelineX) fix error open in scan (#33068) 2024-03-30 20:05:49 +08:00
2f699e27a6 [log](pipeline)add more log in scan localstate #33062 #33063 2024-03-30 12:22:50 +08:00
425c00a0d1 [fix](agg) incorrect result with having conjuncts and limit (#33040) 2024-03-30 10:14:44 +08:00
db3179edaf [pipelineX](local exchange) Fix potential timeout problem (#33022) 2024-03-29 22:24:23 +08:00
f72befe05e [fix](path-gc) Fix pending rowset guard check failure when ordered data compaction failed (#33029) 2024-03-29 17:47:51 +08:00
0c13977ee5 [Fix](segment compaction) _check_and_set_is_doing_segcompaction should be the last condition (#33043)
introduced by #33001
2024-03-29 17:35:30 +08:00
9d6fb39573 [regression-test](Variant) add order by to make test stable (#33014) (#33039) 2024-03-29 17:25:26 +08:00