9cee0ecccc
[fix](show-table-status) fix priv error on show table status stmt ( #22918 )
2023-08-18 18:30:09 +08:00
419e922a69
[fix](json)Fix the bug that does not stop when reading json files ( #23062 )
...
* [fix](json)Fix the bug that does not stop when reading json files
2023-08-18 18:23:19 +08:00
477961dc21
[Chore](agg) refactor of hash map ( #22958 )
...
refactor of hash map
2023-08-18 17:59:30 +08:00
f0ad3ef244
[fix](merge-on-write) should use write lock of tablet's header lock in #23047 ( #23161 )
2023-08-18 17:50:44 +08:00
f71b78c415
[enhancement](Nereids): remove override child(int index) ( #23124 )
...
An overridden `child(int index)` that only calls `super.child(index)` makes the pointer jump twice.
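A minimal Java sketch of the pattern (hypothetical class names, not the actual Nereids code): an override that merely delegates to `super.child(index)` adds a redundant hop, so removing it lets callers reach the base implementation directly.

```java
// Hypothetical simplification of the redundant-override pattern described above.
abstract class AbstractPlan {
    private final java.util.List<AbstractPlan> children;

    AbstractPlan(java.util.List<AbstractPlan> children) {
        this.children = children;
    }

    // Base implementation: a single lookup.
    public AbstractPlan child(int index) {
        return children.get(index);
    }
}

class LogicalJoinLike extends AbstractPlan {
    LogicalJoinLike(java.util.List<AbstractPlan> children) {
        super(children);
    }

    // Redundant override: callers jump here first, then to the base method.
    // Dropping overrides like this is what the commit does.
    // @Override
    // public AbstractPlan child(int index) {
    //     return super.child(index);
    // }
}
```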
2023-08-18 17:34:49 +08:00
3d4ec1ac88
[pipeline](exec) support async writer in jdbc sink in pipeline query engine ( #23144 )
...
support async writer in jdbc sink in pipeline query engine
2023-08-18 17:07:57 +08:00
4d3113c6e5
Update test_segcompaction_dup_keys_index.groovy ( #23046 )
2023-08-18 16:52:26 +08:00
609d20de8c
[refactor](nereids)remove ColumnStatistics.selectivity ( #23039 )
2023-08-18 16:45:54 +08:00
18f47f3e6e
Update regression-conf.groovy ( #23057 )
2023-08-18 15:51:17 +08:00
1c3cc77a54
[fix](function) to_bitmap parameter parsing failure returns null instead of bitmap_empty ( #21236 )
...
* [fix](function) to_bitmap parameter parsing failure returns null instead of bitmap_empty
* add ut
* fix nereids
* fix regression-test
2023-08-18 14:37:49 +08:00
aa5e56c73d
[fix](broker) fix export job failed for that currentStreamOffset may be different with request offset ( #23133 )
...
Co-authored-by: caiconghui1 <caiconghui1@jd.com>
When an export job is under heavy pressure, the failed export job may see the following message:
current outputstream offset is 423597 not equal to request 421590, cause by: null,
because the broker pwrite operation may retry on timeout. We just skip the retried write instead of throwing a broker exception.
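A hedged sketch of the skip-instead-of-throw idea (hypothetical names, not the actual broker code): if a timed-out pwrite actually landed and is then retried, the current stream offset is ahead of the requested offset, so the retried write can be treated as already done.

```java
// Hypothetical illustration of tolerating a retried pwrite after a timeout.
class BrokerWriterSketch {
    private long currentStreamOffset = 0;

    synchronized void pwrite(byte[] data, long requestOffset) {
        if (requestOffset < currentStreamOffset) {
            // The previous attempt already landed (the response timed out but the
            // write succeeded); skip instead of throwing a broker exception.
            if (requestOffset + data.length <= currentStreamOffset) {
                return;
            }
            throw new IllegalStateException("offset overlap cannot be resolved: "
                    + requestOffset + " vs " + currentStreamOffset);
        }
        if (requestOffset > currentStreamOffset) {
            throw new IllegalStateException("current outputstream offset is "
                    + currentStreamOffset + " not equal to request " + requestOffset);
        }
        // ... append data to the underlying stream ...
        currentStreamOffset += data.length;
    }
}
```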
2023-08-18 14:32:36 +08:00
a7771ea507
[fix](planner) fix current_timestamp param type mismatch when doing stream load ( #23092 )
...
FileLoadScanNode did not analyze the default value expr, resulting in the target param type int32 becoming int8, the type of the original IntLiteral.
2023-08-18 14:28:45 +08:00
a8d63ef93b
[fix](case) Update test_dup_tab_auto_inc_10000.groovy, add sync after streamload #23082
2023-08-18 14:20:31 +08:00
cf368728be
[fix](merge-on-write) Fix a typo and remove useless member rowset in CommitTabletTxnInfo ( #23151 )
...
Fix a typo in #23078
2023-08-18 14:14:34 +08:00
635349a015
[fix](log4j) fix audit_log_roll_num not work for fe audit log file ( #23157 )
...
Co-authored-by: caiconghui1 <caiconghui1@jd.com>
2023-08-18 14:13:45 +08:00
441032c3d8
[fix](Nereids): LogicalSink equals() shouldn't invoke super.equals() ( #23145 )
2023-08-18 14:05:48 +08:00
795006ea3d
[fix](multi-catalog) conversion of compatible numerical types ( #23113 )
...
Hive supports schema change but doesn't rewrite the parquet files, so the physical type of a parquet file may not equal the logical type of the table schema.
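A hedged illustration of the compatibility idea (hypothetical helper, not the actual multi-catalog reader): when the parquet physical type is narrower than the table's logical type, the value is widened losslessly instead of being rejected.

```java
// Hypothetical sketch of widening a narrower parquet physical type to the wider
// logical type declared in the table schema after a Hive schema change.
public final class NumericWidening {
    // e.g. physical INT32 value from an old parquet file, logical BIGINT column.
    static long widenInt32ToInt64(int physicalValue) {
        return (long) physicalValue; // lossless widening, no data rewrite needed
    }

    static boolean isCompatible(Class<?> physical, Class<?> logical) {
        // Only allow conversions that cannot lose information.
        return (physical == Integer.class && logical == Long.class)
                || (physical == Float.class && logical == Double.class)
                || physical == logical;
    }

    public static void main(String[] args) {
        System.out.println(widenInt32ToInt64(42));                   // 42
        System.out.println(isCompatible(Integer.class, Long.class)); // true
        System.out.println(isCompatible(Long.class, Integer.class)); // false: would truncate
    }
}
```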
2023-08-18 14:05:33 +08:00
4f7760a5f4
[bugfix](segment cache) Recycle the fds when drop table ( #23081 )
2023-08-18 13:31:34 +08:00
2d96d19030
[FIX](array-func) fix array() with decimal type ( #23117 )
...
If we write SQL such as: select array(1.0, 2.0, null, null, 2.0),
the null arguments are passed to the BE with type uint8, which does not match the array() function signature with decimal and makes the BE core dump. So the BE should cast the null columns to the decimal type.
2023-08-18 12:12:50 +08:00
59c6139aa5
[Chore](parser) fix create view failed when view contained cast as varchar ( #23043 )
...
fix create view failed when view contained cast as varchar
2023-08-18 11:50:18 +08:00
df8e7f7f09
[enhancement](msg) add disk root path in message ( #23000 )
2023-08-18 11:21:59 +08:00
e6fe8c05d1
[fix](inverted index change) fix update delete bitmap incompletely when build inverted index on mow table ( #23047 )
2023-08-18 11:15:39 +08:00
16df7a7ec0
[chore](macOS) Fix SSL errors while building documents ( #23127 )
...
Issue Number: #23126
Add NODE_OPTIONS to fix this issue.
2023-08-18 10:57:05 +08:00
d018ac8fb7
fix show grants throw NullPointerException ( #22943 )
2023-08-18 10:48:56 +08:00
5b8a76a22e
[doc](catalog)faq for lzo.jar not found ( #23070 )
2023-08-18 10:16:32 +08:00
a5ca6cadd6
[Improvement] Optimize count operation for iceberg ( #22923 )
...
Iceberg has its own metadata, which includes count statistics for table data. If the table does not contain equality deletes, we can get the row count of the current table directly from the count statistics.
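A hedged sketch using the Iceberg Java API's snapshot summary (the summary keys `total-records` and `total-equality-deletes` are standard Iceberg properties; the surrounding wiring is hypothetical and only covers the commit's stated case of no equality deletes):

```java
import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;

// Hypothetical sketch: answer count(*) from Iceberg snapshot metadata when the
// table has no equality deletes, instead of scanning the data files.
final class IcebergCountSketch {
    static long countFromMetadata(Table table) {
        Snapshot snapshot = table.currentSnapshot();
        if (snapshot == null) {
            return 0L; // empty table
        }
        java.util.Map<String, String> summary = snapshot.summary();
        long eqDeletes = Long.parseLong(summary.getOrDefault("total-equality-deletes", "0"));
        if (eqDeletes > 0) {
            // Equality deletes present: the summary count is not trustworthy here.
            throw new UnsupportedOperationException("fall back to a normal scan");
        }
        return Long.parseLong(summary.getOrDefault("total-records", "0"));
    }
}
```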
2023-08-18 09:57:51 +08:00
03d59ba81e
[Fix](Nereids) fix sql-cache for nereids. ( #22808 )
...
1. We should not use ((LogicalPlanAdapter) parsedStmt).getStatementContext().getOriginStatement().originStmt.toLowerCase() as the cache key (do not invoke toLowerCase()). For example, select * from tbl1 where k1 = 'a' is different from select * from tbl1 where k1 = 'A', so the cache should be missed.
2. According to issue 6735, the cache key should contain the DDL SQL of all views (including nested views); see the sketch below.
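A hedged sketch of both points (hypothetical helper names, not the real Nereids cache code): keep the original statement's case in the key and append the DDL of every referenced view, nested ones included.

```java
import java.util.List;

// Hypothetical sketch of a case-preserving SQL cache key that also covers view definitions.
final class SqlCacheKeySketch {
    // Point 1: use the original statement as-is; 'a' and 'A' literals must not collide.
    // Point 2: include the DDL of all referenced views (including nested views),
    // so the cache misses when any view definition changes.
    static String buildCacheKey(String originStmt, List<String> allViewDdls) {
        StringBuilder key = new StringBuilder(originStmt); // no toLowerCase()
        for (String ddl : allViewDdls) {
            key.append('|').append(ddl);
        }
        return key.toString();
    }

    public static void main(String[] args) {
        String k1 = buildCacheKey("select * from tbl1 where k1 = 'a'", List.of());
        String k2 = buildCacheKey("select * from tbl1 where k1 = 'A'", List.of());
        System.out.println(k1.equals(k2)); // false: different literals, different cache entries
    }
}
```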
2023-08-18 09:36:07 +08:00
38c182100a
[refactor](mysql compatibility) An abstract class for all databases created for mysql compatibility ( #23087 )
...
Better code structure for mysql compatibility databases.
2023-08-18 09:16:23 +08:00
de98324ea7
[fix](inverted index change) make mutex for ALTER_INVERTED_INDEX task and STORAGE_MEDIUM_MIGRATE task ( #22995 )
2023-08-18 08:35:30 +08:00
314f5a5143
[Fix](orc-reader) Fix filling partition or missing column used incorrect row count. ( #23096 )
...
[Fix](orc-reader) Fix filling partition or missing column used incorrect row count.
`_row_reader->nextBatch` returns the number of read rows. When ORC lazy materialization is turned on, the number of read rows includes filtered rows, so the caller must look at `numElements` in the row batch to determine how many rows survived the filter and will be filled into the block.
Previously, filling partition or missing columns used the incorrect row count, which crashed the BE on `filter.size() != offsets.size()` in the filter-column step.
When ORC lazy materialization is turned off, call `_convert_dict_cols_to_string_cols(block, nullptr)` if `block->rows() == 0`.
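A hedged Java restatement of the row-count pitfall (the real code is the BE's C++ ORC reader; the types here are hypothetical stand-ins): columns appended to the output block must be sized by the batch's surviving-row count, not by the reader's read-row count.

```java
// Hypothetical sketch: with lazy materialization, rowsRead counts filtered rows too,
// so partition/missing columns must be filled using the batch's surviving row count.
final class OrcRowCountSketch {
    static final class Batch {
        long numElements; // rows that survived the pushed-down filter
    }

    static long rowsToFill(long rowsRead, Batch batch) {
        // Wrong: rowsRead (includes filtered rows) -> filter.size() != offsets.size() crash.
        // Right: batch.numElements, the rows actually present in the batch.
        return batch.numElements;
    }

    public static void main(String[] args) {
        Batch batch = new Batch();
        batch.numElements = 700; // 1000 rows read, 300 filtered out lazily
        System.out.println(rowsToFill(1000, batch)); // 700 rows go into the block
    }
}
```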
2023-08-17 23:26:11 +08:00
1f19d0db3e
[improvement](tablet clone) improve tablet balance, scaling speed etc ( #22317 )
2023-08-17 22:30:49 +08:00
57568ba472
[fix](be)shouldn't use arena to alloc memory for SingleValueDataString ( #23075 )
...
* [fix](be)shouldn't use arena to alloc memory for SingleValueDataString
* format code
2023-08-17 22:18:09 +08:00
29ff7b7964
[fix](merge-on-write) add sentinel mark when do compaction ( #23078 )
2023-08-17 20:08:01 +08:00
c5c984b79b
[refactor](bitmap) using template to reduce duplicate code ( #23060 )
...
* [refactor](bitmap) support for batch value insertion
* fix values not being filled for int8 and int16
2023-08-17 18:14:29 +08:00
b91bb9f503
[fix](alter table property) fix alter property if rpc failed ( #22845 )
...
* fix alter property
* add regression case
* do not repeat
2023-08-17 18:02:34 +08:00
11d76d0ebe
[fix](Nereids) non-inner join should not merge dist info ( #22979 )
...
1. left join should use left dist info.
2. right join should use right dist info.
3. full outer join should return ANY dist info.
2023-08-17 17:48:50 +08:00
330f369764
[enhancement](file-cache) limit the file cache handle num and init the file cache concurrently ( #22919 )
...
1. The real value of BE config `file_cache_max_file_reader_cache_size` will be 1/3 of the process's max open file number.
2. Use a thread pool to create or init the file cache concurrently (see the sketch below). This solves the issue that when there are lots of files in the file cache dirs, BE startup is very slow because it traverses all file cache dirs sequentially.
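A hedged Java sketch of the concurrent-init idea (the real code is the BE's C++ file cache; the names here are hypothetical): submit each cache directory to a fixed-size thread pool instead of traversing the directories one by one.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: initialize many file cache directories in parallel at startup.
final class FileCacheInitSketch {
    static void initAll(List<String> cacheDirs, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (String dir : cacheDirs) {
            pool.submit(() -> initOneDir(dir)); // each dir is scanned independently
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    static void initOneDir(String dir) {
        // ... traverse the directory and rebuild the in-memory cache index ...
        System.out.println("initialized " + dir);
    }
}
```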
2023-08-17 16:52:08 +08:00
d7a6b64a65
[Fix](Planner) fix case function with null cast to array null ( #22947 )
2023-08-17 16:37:07 +08:00
b252c49071
[fix](hash join) fix heap-use-after-free of HashJoinNode ( #23094 )
2023-08-17 16:29:47 +08:00
47aac84549
Revert "[pipeline](branch-2.0) pr to branch-2.0 also run checks ( #23004 )" ( #23101 )
...
This reverts commit 41a52d45d33be6c1770531cef230aafe676bcce7.
2023-08-17 15:53:22 +08:00
a248cb720c
[fix](jdbc catalog) fix DefaultValueExpr in Jdbc table column when CTAS ( #22978 )
2023-08-17 15:52:20 +08:00
f092afc946
[Regression](pipeline) update p1 pipeline to required 0817 ( #23100 )
...
update p1 pipeline to required 0817
2023-08-17 15:47:40 +08:00
e289e03a1a
[fix](executor)fix no return with old type in time_round
2023-08-17 15:34:26 +08:00
cf1865a1c8
[Bug](scan) fix core dump due to store_path_map ( #23084 )
...
fix core dump due to store_path_map
2023-08-17 15:24:43 +08:00
3fe419eafa
[Fix](statistics)Fix update cached column stats bug ( #23049 )
...
`show column cached stats` sometimes shows wrong min/max values:
```
mysql> show column cached stats hive.tpch100.region;
+-------------+-------+------+----------+-----------+---------------+------+------+--------------+
| column_name | count | ndv | num_null | data_size | avg_size_byte | min | max | updated_time |
+-------------+-------+------+----------+-----------+---------------+------+------+--------------+
| r_regionkey | 5.0 | 5.0 | 0.0 | 24.0 | 4.0 | N/A | N/A | null |
| r_comment | 5.0 | 5.0 | 0.0 | 396.0 | 66.0 | N/A | N/A | null |
| r_name | 5.0 | 5.0 | 0.0 | 40.8 | 6.8 | N/A | N/A | null |
+-------------+-------+------+----------+-----------+---------------+------+------+--------------+
```
This PR fixes the bug. It happens because, when the ColumnStatistic object is converted to JSON, the JSON doesn't contain the minExpr and maxExpr attributes.
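A hedged Gson illustration of the failure mode (the fields shown are simplified stand-ins for ColumnStatistic): attributes skipped during JSON serialization come back as null after a round trip, which is how min/max ended up rendered as N/A.

```java
import com.google.gson.Gson;

// Hypothetical simplification: a field left out of serialization is lost on round trip.
final class ColumnStatJsonSketch {
    static final class ColumnStat {
        double count;
        double ndv;
        transient String minExpr; // transient => Gson skips it, mimicking the missing attribute
        transient String maxExpr;
    }

    public static void main(String[] args) {
        ColumnStat stat = new ColumnStat();
        stat.count = 5.0;
        stat.ndv = 5.0;
        stat.minExpr = "0";
        stat.maxExpr = "4";

        Gson gson = new Gson();
        ColumnStat roundTripped = gson.fromJson(gson.toJson(stat), ColumnStat.class);
        System.out.println(roundTripped.minExpr); // null -> rendered as N/A in the output
    }
}
```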
2023-08-17 15:20:02 +08:00
d59c2f763f
[fix](test) add sync for test_pk_uk_case ( #23067 )
2023-08-17 15:18:07 +08:00
bf2b92f5e8
[fix](Nereids): PushdownDistinctThroughJoin don't push distinct for relation ( #23066 )
...
* [fix](Nereids): PushdownDistinctThroughJoin don't push distinct for relation.
* fix test
2023-08-17 14:50:34 +08:00
f5da9f4ccc
[fix](muti-catalog)convert to s3 path when use aws endpoint ( #22784 )
...
Convert to an s3 path when using an aws endpoint.
For compatibility, we can also use the s3 client to access other clouds by setting s3 endpoint properties.
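A hedged sketch of the path rewrite (hypothetical helper, not the actual multi-catalog code): when the configured endpoint points at AWS, other S3-style scheme prefixes are normalized to `s3://` so a single S3 client can handle them.

```java
// Hypothetical sketch: normalize object-store URIs to the s3:// scheme
// when the endpoint is an AWS endpoint, so one S3 client can be reused.
final class S3PathSketch {
    static String toS3Path(String location, String endpoint) {
        if (!endpoint.contains("amazonaws.com")) {
            return location; // non-AWS endpoints keep their original scheme
        }
        for (String scheme : new String[] {"s3a://", "s3n://"}) {
            if (location.startsWith(scheme)) {
                return "s3://" + location.substring(scheme.length());
            }
        }
        return location;
    }

    public static void main(String[] args) {
        System.out.println(toS3Path("s3a://bucket/path/file.parquet",
                "s3.us-east-1.amazonaws.com")); // s3://bucket/path/file.parquet
    }
}
```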
2023-08-17 14:28:00 +08:00
6e51632ca9
[docs](kerberos)add FAQ cases and enable krb5 debug ( #22821 )
2023-08-17 14:25:09 +08:00
8b51da0523
[Fix](load) fix partition NullPointerException ( #22965 )
2023-08-17 14:09:47 +08:00