Commit Graph

13721 Commits

Author SHA1 Message Date
8ccd8b4337 [fix](Nereids) fix ends calculation when there are constant project (#22265) 2023-07-31 14:10:44 +08:00
147a148364 [refactor](segcompaction) simplify submit_seg_compaction_task interface (#22387) 2023-07-31 13:53:38 +08:00
f2919567df [feature](datetime) Support timezone when insert datetime value (#21898) 2023-07-31 13:08:28 +08:00
acc24df10a [fix](datax)doris writer url decoder fix #22401
When the data a user imports contains certain special characters, the import fails with the following error:

2023-07-28 15:15:28.960  INFO 21756 --- [-interval-flush] c.a.d.p.w.d.DorisWriterManager           : Doris interval Sinking triggered: label[datax_doris_writer_7aa415e6-5a9c-4070-a699-70b4a627ae64].
2023-07-28 15:15:29.015  INFO 21756 --- [       Thread-3] c.a.d.p.w.d.DorisStreamLoadObserver      : Start to join batch data: rows[95968] bytes[3815834] label[datax_doris_writer_7aa415e6-5a9c-4070-a699-70b4a627ae64].
2023-07-28 15:15:29.038  INFO 21756 --- [       Thread-3] c.a.d.p.w.d.DorisStreamLoadObserver      : Executing stream load to: 'http://10.38.60.218:8030/api/ods_prod/ods_pexweb_online_product/_stream_load', size: '3911802'
2023-07-28 15:15:31.559  WARN 21756 --- [       Thread-3] c.a.d.p.w.d.DorisStreamLoadObserver      : Request failed with code:500
2023-07-28 15:15:31.561  INFO 21756 --- [       Thread-3] c.a.d.p.w.d.DorisStreamLoadObserver      : StreamLoad response :null
2023-07-28 15:15:31.564  WARN 21756 --- [       Thread-3] c.a.d.p.w.d.DorisWriterManager           : Failed to flush batch data to Doris, retry times = 0

java.io.IOException: Unable to flush data to Doris: unknown result status.
	at com.alibaba.datax.plugin.writer.doriswriter.DorisStreamLoadObserver.streamLoad(DorisStreamLoadObserver.java:66) ~[doriswriter-0.0.1-SNAPSHOT.jar:na]
	at com.alibaba.datax.plugin.writer.doriswriter.DorisWriterManager.asyncFlush(DorisWriterManager.java:163) [doriswriter-0.0.1-SNAPSHOT.jar:na]
	at com.alibaba.datax.plugin.writer.doriswriter.DorisWriterManager.access$000(DorisWriterManager.java:19) [doriswriter-0.0.1-SNAPSHOT.jar:na]
	at com.alibaba.datax.plugin.writer.doriswriter.DorisWriterManager$1.run(DorisWriterManager.java:134) [doriswriter-0.0.1-SNAPSHOT.jar:na]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_221]

The following error appears in fe.log:

java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - For input string: " l"
        at java.net.URLDecoder.decode(URLDecoder.java:194) ~[?:1.8.0_221]
        at org.springframework.http.converter.FormHttpMessageConverter.read(FormHttpMessageConverter.java:352) ~[spring-web-5.3.22.jar:5.3.22]
        at org.springframework.web.filter.FormContentFilter.parseIfNecessary(FormContentFilter.java:109) ~[spring-web-5.3.22.jar:5.3.22]
        at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:88) ~[spring-web-5.3.22.jar:5.3.22]
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.22.jar:5.3.22]
        at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
        at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
        at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.3.22.jar:5.3.22]
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.22.jar:5.3.22]
        at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
        at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600) ~[jetty-security-9.4.48.v20220622.jar:9.4.48.v20220622]
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandle
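The decode failure above can be reproduced with plain `java.net.URLDecoder`, and avoided by percent-encoding the payload before it is sent. A minimal illustrative sketch, not the actual datax writer patch:

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UrlDecodeRepro {
    // Percent-encode a raw value before it goes into a form body, so the
    // server-side URLDecoder cannot hit "Illegal hex characters in escape
    // (%) pattern". Illustrative helper, not the actual datax writer fix.
    static String safeEncode(String raw) {
        return URLEncoder.encode(raw, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String raw = "100% l"; // a bare '%' followed by non-hex characters
        try {
            URLDecoder.decode(raw, StandardCharsets.UTF_8); // reproduces the FE error
        } catch (IllegalArgumentException e) {
            System.out.println("decode failed: " + e.getMessage());
        }
        String encoded = safeEncode(raw); // "100%25+l"
        // Once encoded, the value round-trips cleanly through decode.
        System.out.println(URLDecoder.decode(encoded, StandardCharsets.UTF_8).equals(raw)); // true
    }
}
```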
2023-07-31 12:57:10 +08:00
b64f62647b [runtime filter](profile) add merge time on non-pipeline engine (#22363) 2023-07-31 12:52:42 +08:00
7bcf024757 [typo](doc) Modify some word typos. #22361
Modify some word typos: an incorrect word was corrected in from_unixtime.md and a superfluous word was removed in BROKER_LOAD.md.
2023-07-31 12:09:50 +08:00
b58f125211 [pipeline](p0) exclude test_profile, re-add test_cast_string_to_array #22389 2023-07-31 10:41:12 +08:00
93a9cec406 [Improvement] Add iceberg metadata cache and support manifest file content cache (#22336)
Cache the Iceberg table: when the same table is accessed again, its metadata is loaded only once.
Cache the table snapshot to optimize the performance of the Iceberg table function.
Add cache support for Iceberg's manifest file content.
In a simple test, query time dropped from 2.0s to 0.8s.

before
mysql> refresh table tb3;
Query OK, 0 rows affected (0.03 sec)

mysql> select * from tb3;
+------+------+------+
| id   | par  | data |
+------+------+------+
|    1 | a    | a    |
|    2 | a    | b    |
|    3 | a    | c    |
....
|   68 | a    | a    |
|   69 | a    | b    |
|   70 | a    | c    |
+------+------+------+
70 rows in set (2.10 sec)

mysql> select * from tb3;
+------+------+------+
| id   | par  | data |
+------+------+------+
|    1 | a    | a    |
|    2 | a    | b    |
|    3 | a    | c    |
...
|   68 | a    | a    |
|   69 | a    | b    |
|   70 | a    | c    |
+------+------+------+
70 rows in set (2.00 sec)

after
mysql> refresh table tb3;
Query OK, 0 rows affected (0.03 sec)

mysql> select * from tb3;
+------+------+------+
| id   | par  | data |
+------+------+------+
|    1 | a    | a    |
|    2 | a    | b    |
...
|   68 | a    | a    |
|   69 | a    | b    |
|   70 | a    | c    |
+------+------+------+
70 rows in set (2.05 sec)

mysql> select * from tb3;
+------+------+------+
| id   | par  | data |
+------+------+------+
|    1 | a    | a    |
|    2 | a    | b    |
|    3 | a    | c    |
...
|   68 | a    | a    |
|   69 | a    | b    |
|   70 | a    | c    |
+------+------+------+
70 rows in set (0.80 sec)
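The load-once behavior above can be sketched as memoizing metadata loads per table key, with `REFRESH TABLE` invalidating the entry. An illustrative sketch; the class and method names are assumptions, not Doris's actual cache:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class MetadataCache<K, V> {
    // Minimal load-once cache sketch: each key's loader runs at most once
    // per refresh, mirroring "metadata will only be loaded once" above.
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;
    int loads = 0; // counts how many times the loader actually ran

    MetadataCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        return cache.computeIfAbsent(key, k -> {
            loads++;
            return loader.apply(k);
        });
    }

    // REFRESH TABLE path: drop the entry so the next access reloads it.
    void invalidate(K key) {
        cache.remove(key);
    }

    public static void main(String[] args) {
        MetadataCache<String, String> c = new MetadataCache<>(t -> "snapshot-of-" + t);
        c.get("tb3");
        c.get("tb3");                 // served from cache; loader not rerun
        System.out.println(c.loads);  // prints 1
        c.invalidate("tb3");
        c.get("tb3");                 // reloaded after invalidation
        System.out.println(c.loads);  // prints 2
    }
}
```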
2023-07-31 10:12:09 +08:00
ec0be8a037 [bug](decimal) change result type for decimalv2 computation (#22366) 2023-07-31 10:00:34 +08:00
2b9a95f74f [typo](docs) add show collation doc description and example (#22370) 2023-07-30 23:08:52 +08:00
0e7f63f5f6 [fix](ipv6)Remove restrictions from IPv4 when add backend (#22323)
When adding a BE, the host:port string was required to contain exactly one colon, otherwise an error was reported. However, an IPv6 address contains multiple colons:

```
String[] pair = hostPort.split(":");
if (pair.length != 2) {
    throw new AnalysisException("Invalid host port: " + hostPort);
}
```
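One way to lift the restriction is to split on the last colon instead, so IPv6 literals (with or without brackets) are accepted. A hypothetical helper for illustration, not the actual Doris fix:

```java
public class HostPort {
    // Split "host:port" on the LAST colon so IPv6 literals such as
    // "fe80::1:9050" or "[2001:db8::1]:9050" are not rejected for
    // containing more than one colon.
    static String[] split(String hostPort) {
        int idx = hostPort.lastIndexOf(':');
        if (idx <= 0 || idx == hostPort.length() - 1) {
            throw new IllegalArgumentException("Invalid host port: " + hostPort);
        }
        String host = hostPort.substring(0, idx);
        // Strip optional IPv6 brackets: "[::1]" -> "::1"
        if (host.startsWith("[") && host.endsWith("]")) {
            host = host.substring(1, host.length() - 1);
        }
        return new String[] { host, hostPort.substring(idx + 1) };
    }

    public static void main(String[] args) {
        String[] p = split("[2001:db8::1]:9050");
        System.out.println(p[0] + " / " + p[1]); // prints "2001:db8::1 / 9050"
    }
}
```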
2023-07-30 22:47:24 +08:00
f87f29e1ab [fix](multi-catalog)compatible with hdfs HA empty prefix (#22342)
Compatible with an empty HDFS HA prefix.
For example, `hdfs:///` will be replaced with `hdfs://ha-nameservice/`.
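The replacement described above amounts to filling an empty URI authority with the configured nameservice. A simple prefix-rewrite sketch; the names here are assumptions, not the actual Doris code:

```java
public class HdfsPath {
    // Rewrite an empty-authority HDFS URI to use the configured HA
    // nameservice, e.g. "hdfs:///path" -> "hdfs://ha-nameservice/path".
    // URIs that already carry an authority are left untouched.
    static String fillNameservice(String path, String nameservice) {
        String emptyPrefix = "hdfs:///";
        if (path.startsWith(emptyPrefix)) {
            return "hdfs://" + nameservice + "/" + path.substring(emptyPrefix.length());
        }
        return path;
    }

    public static void main(String[] args) {
        System.out.println(fillNameservice("hdfs:///user/hive/warehouse", "ha-nameservice"));
        // prints "hdfs://ha-nameservice/user/hive/warehouse"
    }
}
```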
2023-07-30 22:21:14 +08:00
ee754307bb [refactor](load) refactor memtable flush actively (#21634) 2023-07-30 21:31:54 +08:00
79289e32dc [fix](cast) fix wrong result of casting empty string to array date (#22281) 2023-07-30 21:15:03 +08:00
63a9a886f5 [enhance](S3) add s3 bvar metrics for all s3 operation (#22105) 2023-07-30 21:09:17 +08:00
06e4061b94 [enhance](ColdHeatSeparation) carry use path style info along with cold heat separation to support using minio (#22249) 2023-07-30 21:03:33 +08:00
4077338284 [Opt](parquet) opt the performance of date convertion (#22360)
before:
```
mysql>  select count(l_commitdate) from lineitem;
+---------------------+
| count(l_commitdate) |
+---------------------+
|           600037902 |
+---------------------+
1 row in set (1.61 sec)
```

after:
```
mysql>  select count(l_commitdate) from lineitem;
+---------------------+
| count(l_commitdate) |
+---------------------+
|           600037902 |
+---------------------+
1 row in set (0.86 sec)
```
2023-07-30 15:54:13 +08:00
47c09f518b [typo](docs) fix show frontends doc typo (#22368)
Co-authored-by: zy-kkk <zhongyk10@gmail.com>
2023-07-30 10:19:56 +08:00
17b4c94ef2 [fix-typo](merge-on-write) fix wrong stream load header option name in regression test (#22354) 2023-07-30 09:58:48 +08:00
e47d1fccf5 [bugfix](be core) fragment executor's destruct method should be called before query context (#22362)
The fragment executor's destructor calls close, which depends on the query context's object pool, because many objects (such as runtime filters) are placed in that pool.
The executor must therefore be destroyed before the query context; otherwise a heap-use-after-free error occurs.
This was fixed in #17675, but it is unclear why that fix is not in master; as a result, 1.2-lts does not have this problem.
---------

Co-authored-by: yiguolei <yiguolei@gmail.com>
2023-07-29 22:41:46 +08:00
03761c37cd [Improvement](multi catalog) Support Iceberg, Paimon and MaxCompute table in nereids. (#22338) 2023-07-29 21:43:35 +08:00
23fd996ea0 [typo](doc) Modify the version supported by the function #22146 2023-07-29 13:58:57 +08:00
765f1b6efe [Refactor](load) Extract load public code (#22304) 2023-07-29 12:56:31 +08:00
47c2cc5c74 [vectorized](udf) java udf support with return map type (#22300) 2023-07-29 12:52:27 +08:00
Pxl
210f6661b4 [Bug](profile) add lock on add_filter_info #22355
Multiple scanners may update the profile at the same time.
2023-07-29 12:45:50 +08:00
bc88d34b16 [bug](distinct-agg) fix distinct-agg outblock columns size not equal key size (#22357)
* [improve](flex) support scientific-notation (aEb) parsing

* update

* [bug](distinct-agg) fix distinct-agg outblock columns size not equal key size
2023-07-29 12:44:44 +08:00
302de27985 [Refactor] Refactor some code with three-way comparison (#22170)
Refactor some code with three-way comparison
2023-07-29 11:30:15 +08:00
ebd114b384 [enhancement](binlog) CreateTable inherit db binlog && Add some checks (#22293) 2023-07-29 08:27:27 +08:00
ae8a26335c [opt](hive)opt select count(*) stmt push down agg on parquet in hive . (#22115)
Optimize `select count(*) from table` statements by pushing the "count" aggregation down to the BE.
Supported file types: Parquet and ORC in Hive.

1. 4k files, 600 million (60kw) rows
    before:  1 min 37.70 sec
    after:   50.18 sec

2. 50 files, 600 million (60kw) rows
    before: 1.12 sec
    after: 0.82 sec
2023-07-29 00:31:01 +08:00
53d255f482 [fix](partial update) remove CHECK on illegal number of partial columns (#22319) 2023-07-28 23:11:58 +08:00
5b14d9fcdc [fix](compaction) fix time series compaction policy corner case (#22238) 2023-07-28 23:07:36 +08:00
f7c106c709 [opt](nereids) enhance broadcast join cost calculation (#22092)
Enhance the broadcast join cost calculation by considering both the build-side effort of building a bigger hash table and the extra probe-side effort (the bigger cost of ProbeWhenBuildSideOutput and ProbeWhenSearchHashTable) when parallel_fragment_exec_instance_num is greater than 1.

The current solution applies a penalty factor to rightRowCount; the factor is the total instance number raised to the power of 2.
No penalty is applied to outputRows yet; this will be refined in the next-generation cost model.

This PR also updates shape checking:

update the control variable in shape files from parallel_fragment_exec_instance_num to parallel_pipeline_task_num when pipeline is enabled;
fix an issue where the be_number variable was inactive.
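The penalty described above amounts to multiplying the build-side row count by the square of the total instance number. A minimal worked sketch of that formula; the names are assumptions, not the actual Nereids cost-model code:

```java
public class BroadcastJoinCost {
    // Penalized build-side rows for a broadcast join:
    // rightRowCount * totalInstanceNumber^2, as described above.
    static double penalizedBuildRows(double rightRowCount, int totalInstances) {
        return rightRowCount * Math.pow(totalInstances, 2);
    }

    public static void main(String[] args) {
        // With 4 instances, a 1000-row build side is costed as 16000 rows,
        // making broadcast less attractive at higher parallelism.
        System.out.println(penalizedBuildRows(1000, 4)); // prints 16000.0
    }
}
```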
2023-07-28 23:06:02 +08:00
ebc8988f70 [typo](docs) array_zip function is supported in 2.0 (#22306)
Co-authored-by: zhuwei <zhuwei8421@gmail.com>
2023-07-28 21:14:33 +08:00
b6e22340cb [fix](doc) fix command misspelling (#22344)
show backend ----> show backends
2023-07-28 19:16:43 +08:00
0cc3232d6f [Improve](topn opt) modify fetch rpc timeout from 20s to 30s, since fetch is quite heavy sometimes (#22163) 2023-07-28 17:56:18 +08:00
2f43e59535 [test](regression) add partial update seq_col delete cases (#22340) 2023-07-28 17:36:55 +08:00
Pxl
f7e0479605 [Chore](refactor) remove some unused code (#22152)
remove some unused code
2023-07-28 17:30:46 +08:00
05abfbc5ef [improvement](regression-test) add compression algorithm regression test (#22303) 2023-07-28 17:28:52 +08:00
3eeca7ee55 [enhance](regresstion case)add external group mark 0727 (#22287)
* add external group mark 0727

* add external pipeline regression conf
 0727

* update pipeline regression config 0727

* open es config from docker 0727
2023-07-28 17:11:19 +08:00
ef218d79da [fix](case) add sync after stream load (#22232)
add sync after stream load
2023-07-28 17:05:20 +08:00
ec1a4d172b (vertical compaction) fix vertical compaction core (#22275)
* (vertical compaction) fix vertical compaction core
co-author:@zhannngchen
2023-07-28 16:41:00 +08:00
25f26198f4 [fix](executor) only mysql connect to set GlobalPipelineTask (#22205) 2023-07-28 16:19:34 +08:00
5a0ad09856 [fix](nereids) SubqueryToApply may lost conjunct (#22262)
Consider this SQL:
```
SELECT *
        FROM sub_query_correlated_subquery1 t1
        WHERE coalesce(bitand( 
        cast(
            (SELECT sum(k1)
            FROM sub_query_correlated_subquery3 ) AS int), 
            cast(t1.k1 AS int)), 
            coalesce(t1.k1, t1.k2)) is NULL
        ORDER BY  t1.k1, t1.k2;
```
The `is NULL` conjunct was lost by the SubqueryToApply rule; this PR fixes it.
2023-07-28 15:08:56 +08:00
80673406b1 [fix](Nereids) project hidden columns when show_hidden_columns is true (#22285) 2023-07-28 15:08:18 +08:00
0c734a861e [Enhancement](delete) eliminate reading the old values of non-key columns for delete stmt (#22270) 2023-07-28 14:37:33 +08:00
9f565cf835 [fix](ut) fix ut of stats test #22325
After auto-retry was merged, it is hard to determine at compile time how many times the doExecute method will execute; if the expected execution count in the expectation block is missed, an unexpected-invocation exception is thrown, so the expected execution count is simply removed.
2023-07-28 14:23:35 +08:00
c2155678ca [fix](functions) fix now(null) crash (#22321)
before: BE crash
now:

mysql [test]>select now(null);
+-----------+
| now(NULL) |
+-----------+
| NULL      |
+-----------+
1 row in set (0.06 sec)
2023-07-28 14:07:56 +08:00
1c6246f7ee [improve](agg) support distinct agg node (#22169)
select c_name from customer union select c_name from customer
This SQL uses an agg node to get the distinct rows of c_name, so there is no need to wait until all data has been inserted into the hash map:
a row can be output as soon as it is successfully inserted into the hash map.
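The streaming behavior described above can be sketched as emitting each row on its first successful hash-set insertion, instead of materializing the whole hash table first. Illustrative only, not the actual agg-node implementation:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StreamingDistinct {
    // Streaming distinct: Set.add returns true only on the first
    // occurrence of a row, so that row can be emitted immediately.
    static List<String> distinctStream(List<String> rows) {
        Set<String> seen = new HashSet<>();
        List<String> out = new ArrayList<>();
        for (String r : rows) {
            if (seen.add(r)) { // first successful insertion -> emit now
                out.add(r);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(distinctStream(List.of("a", "b", "a", "c", "b")));
        // prints [a, b, c]
    }
}
```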
2023-07-28 13:54:10 +08:00
ad080c691f [chore](log)Move non-user-friendly error message to be.WARNING (#22315)
Move non-user-friendly error message to be.WARNING
2023-07-28 13:15:25 +08:00
2d538d5959 [regression](fix) include case test_round (#22326)
include case test_round
2023-07-28 13:10:54 +08:00