For mow tables, the delete bitmap of stale rowsets is not persisted. After a BE restart, duplicate keys may appear if stale rowsets are read.
Therefore, for mow tables we do not allow reading stale rowsets. Although this may cause a VERSION_ALREADY_MERGED error for queries issued right after a BE restart, the probability of hitting it is relatively low.
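A minimal sketch of the intended behavior, with hypothetical names rather than the actual Doris version-capture code: if serving the requested version would require stale rowsets and the tablet is a merge-on-write table, the read fails with VERSION_ALREADY_MERGED instead of touching rowsets whose delete bitmaps were lost on restart.

```cpp
// Hypothetical status codes and arguments, for illustration only.
enum class Status { OK, VERSION_ALREADY_MERGED };

// Non-mow tables may still fall back to stale rowsets; mow tables must not,
// because their delete bitmaps are not persisted across a BE restart.
Status check_capture_stale_rowsets(bool is_merge_on_write, bool needs_stale_rowsets) {
    if (needs_stale_rowsets && is_merge_on_write) {
        return Status::VERSION_ALREADY_MERGED;
    }
    return Status::OK;
}
```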
Check whether there are complex types in the parquet/orc reader used by broker/stream load. Broker/stream load casts every type to string, so complex types would be cast incorrectly. This is a temporary workaround and will be replaced by tvf.
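A rough sketch of such a check under an assumed, simplified schema representation (not the real reader's schema API): scan the file schema and fail the load when any column is a complex type, since casting it to string would produce wrong data.

```cpp
#include <string>
#include <vector>

// Assumed, simplified schema types for illustration.
enum class TypeKind { PRIMITIVE, ARRAY, MAP, STRUCT };

struct Field {
    std::string name;
    TypeKind kind;
};

// Returns the name of the first complex-typed column, or an empty string if
// every column is primitive and the cast-to-string path is safe.
std::string find_complex_type(const std::vector<Field>& schema) {
    for (const Field& f : schema) {
        if (f.kind != TypeKind::PRIMITIVE) {
            return f.name;  // caller should reject the broker/stream load
        }
    }
    return "";
}
```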
By default, if the machine has 32 cores or fewer, the following configs default to 128, 128, 10240, and 10240 in turn;
if it has more than 32 cores, they default to core num * 4, core num * 4, core num * 320, and core num * 320 in turn:
- brpc_heavy_work_pool_threads
- brpc_light_work_pool_threads
- brpc_heavy_work_pool_max_queue_size
- brpc_light_work_pool_max_queue_size
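The sizing rule above, written out as a small sketch (it mirrors the documented defaults, not the literal Doris config code):

```cpp
#include <thread>

struct BrpcWorkPoolDefaults {
    int heavy_work_pool_threads;
    int light_work_pool_threads;
    int heavy_work_pool_max_queue_size;
    int light_work_pool_max_queue_size;
};

// <= 32 cores: fixed defaults 128 / 128 / 10240 / 10240.
// >  32 cores: scale with the core count (x4 threads, x320 queue slots).
BrpcWorkPoolDefaults compute_brpc_pool_defaults() {
    const int cores = static_cast<int>(std::thread::hardware_concurrency());
    if (cores <= 32) {
        return {128, 128, 10240, 10240};
    }
    return {cores * 4, cores * 4, cores * 320, cores * 320};
}
```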
The fragment executor's destructor calls close(), which depends on the query context's object pool, because many objects (such as runtime filters) are put into that pool.
So the fragment executor should be destroyed before the query context; otherwise there will be a heap-use-after-free error.
This was fixed in #17675, but for unknown reasons the fix is not in master. So 1.2-lts does not have this problem.
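A hypothetical sketch of the lifetime issue with simplified stand-in types (not the real executor and query-context classes): close() touches objects owned by the query context's object pool, so the executor has to be torn down first.

```cpp
#include <memory>
#include <vector>

// Simplified stand-ins for illustration only.
struct ObjectPool {
    std::vector<std::unique_ptr<int>> objects;  // e.g. runtime filters live here
};

struct QueryContext {
    ObjectPool obj_pool;
};

struct FragmentExecutor {
    QueryContext* query_ctx = nullptr;  // not owned

    ~FragmentExecutor() { close(); }

    void close() {
        // Reads objects owned by query_ctx->obj_pool; if the query context was
        // destroyed first, this is a heap-use-after-free.
        (void)query_ctx->obj_pool.objects.size();
    }
};

// Safe order: destroy the executor before the query context.
void teardown() {
    auto query_ctx = std::make_unique<QueryContext>();
    auto executor = std::make_unique<FragmentExecutor>();
    executor->query_ctx = query_ctx.get();

    executor.reset();   // close() runs while the pool is still alive
    query_ctx.reset();
}
```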
Optimization "select count(*) from table" stmtement , push down "count" type to BE.
support file type : parquet ,orc in hive .
1. 4kfiles , 60kwline num
before: 1 min 37.70 sec
after: 50.18 sec
2. 50files , 60kwline num
before: 1.12 sec
after: 0.82 sec
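A sketch of the idea under assumptions (the real scan-node code is more involved): when only count(*) is requested and no rows have to be filtered, the BE can sum the row counts recorded in parquet/orc file metadata instead of decoding any column data.

```cpp
#include <cstdint>
#include <vector>

// Assumed, simplified per-file metadata (e.g. parquet footer / orc stripe stats).
struct FileMeta {
    int64_t num_rows;
};

// count(*) answered from metadata only: no rows are materialized.
int64_t pushed_down_count(const std::vector<FileMeta>& files) {
    int64_t total = 0;
    for (const FileMeta& f : files) {
        total += f.num_rows;
    }
    return total;
}
```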
select c_name from customer union select c_name from customer
This SQL uses an agg node to get the distinct rows of c_name,
so there is no need to wait until all data has been inserted into the hash map;
the node can output a row as soon as it is successfully inserted into the hash map.
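A minimal sketch of that streaming behavior, using plain STL containers rather than the actual agg node's hash table: a key can be emitted the first time it is inserted, without buffering the whole input.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Emits each distinct key as soon as it is first inserted into the hash set,
// instead of waiting until every input row has been consumed.
std::vector<std::string> streaming_distinct(const std::vector<std::string>& input) {
    std::unordered_set<std::string> seen;
    std::vector<std::string> output;
    for (const std::string& key : input) {
        if (seen.insert(key).second) {  // true only for a newly seen key
            output.push_back(key);      // can be passed to the parent operator right away
        }
    }
    return output;
}
```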
Fix two bugs:
1. Unexpected null values in an array column. If 65535 consecutive values in a nullable array column are not null, this error is triggered. The reason is that the array parser did not handle boundary conditions.
2. The number of rows of the key field and that of the value field in a map column are not equal. Similarly, the numbers of rows among fields in a struct column are not the same. This is triggered when the number of rows differs among parquet pages of different columns within a row group.
### Issue
When a table has null partitions, queries throw the error
`Failed to fill partition column: t_int=null`
### Resolution
- Fix the null partition error in iceberg tables by replacing the null partition value with '\N'.
- Add a regression test for hive null partitions.
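A tiny sketch of the fix's idea, with a hypothetical helper rather than the actual scanner code: a missing partition value is filled with the text null marker "\N" instead of raising the error above.

```cpp
#include <optional>
#include <string>

// Fill a partition column cell: use "\N" (text-format null marker) when the
// iceberg partition value is null.
std::string partition_cell(const std::optional<std::string>& value) {
    return value.has_value() ? *value : "\\N";
}
```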
Change the default values of some configs:
1. Because vertical compaction is enabled by default and consumes less memory, we can enlarge the default values of compaction-related configs.
2. Enlarge the default value of the shard size related to locks.
When a brpc client makes a request to a server and the server does not respond (and may never respond, e.g. after a BE restart), the query can be cancelled at once, but the ExchangeSinkBuffer cannot be cancelled until the rpc times out.
So we want the ExchangeSinkBuffer to be closed immediately when the query is cancelled.
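A hypothetical sketch of that behavior (simplified, not the real ExchangeSinkBuffer): on cancellation the buffer marks itself closed and drops queued requests, and late rpc callbacks become no-ops instead of keeping the buffer alive until the rpc timeout.

```cpp
#include <atomic>
#include <mutex>
#include <queue>

class SinkBufferSketch {
public:
    // Called from the query cancellation path.
    void close() {
        std::lock_guard<std::mutex> lock(_mutex);
        _closed.store(true);
        std::queue<int> empty;
        _pending.swap(empty);  // drop requests that were never sent
    }

    // rpc completion callback.
    void on_rpc_done() {
        std::lock_guard<std::mutex> lock(_mutex);
        if (_closed.load()) {
            return;  // ignore late (or never-arriving) responses after close()
        }
        // ... normal path: pop the next pending request and send it ...
    }

private:
    std::atomic<bool> _closed{false};
    std::mutex _mutex;
    std::queue<int> _pending;
};
```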