The FE log of a busy Doris cluster grows large, so preserving historical logs costs a lot of disk space.
Enabling compression is a good way to save space,
and a gzip-compressed text file can still be viewed without decompressing it first.
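A minimal fe.conf sketch of how this could be switched on; the option names below are assumptions and should be checked against the FE configuration reference for your version:

```
# fe.conf -- option names assumed, verify against the FE config reference
sys_log_enable_compress = true      # gzip rolled-over fe.log files
audit_log_enable_compress = true    # gzip rolled-over fe.audit.log files
```

Compressed logs can then be inspected directly with standard tools such as zcat or zless.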
Truncate char or varchar columns if their size is smaller than that of the corresponding file columns, or is not found in the file column schema, controlled by the session variable `truncate_char_or_varchar_columns`.
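For example, assuming the variable is a boolean switch and using placeholder catalog/table names:

```
-- enable truncation for the current session
set truncate_char_or_varchar_columns = true;
-- char/varchar values longer than the Doris column definition are truncated on read
select * from hive_catalog.db1.tbl1;
```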
This function can be used to replace bitmap_union(to_bitmap(expr)), because bitmap_union(to_bitmap(expr)) first has to create many small bitmaps and then merge them into a single bitmap.
bitmap_agg converts the column values into a bitmap directly, so its performance is better than bitmap_union(to_bitmap(expr)); in our tests we saw about a 30% improvement.
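A sketch of the rewrite on a hypothetical table `t` with a bigint column `user_id`:

```
-- old form: builds one tiny bitmap per row, then merges them all
select bitmap_count(bitmap_union(to_bitmap(user_id))) from t;
-- new form: builds the bitmap directly from the column values
select bitmap_count(bitmap_agg(user_id)) from t;
```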
Currently, Doris only has an interface for querying the session information of a specific FE, but we often need to see all sessions in the cluster, so this API is added.
```
GET /rest/v1/session/all
{
    "msg": "success",
    "code": 0,
    "data": {
        "column_names": ["FE", "Id", "User", "Host", "Cluster", "Db", "Command", "Time", "State", "Info"],
        "rows": [{
            "FE": "10.14.170.23",
            "User": "root",
            "Command": "Sleep",
            "State": "",
            "Cluster": "default_cluster",
            "Host": "10.81.85.89:31465",
            "Time": "230",
            "Id": "0",
            "Info": "",
            "Db": "db1"
        },
        {
            "FE": "10.14.170.24",
            "User": "root",
            "Command": "Sleep",
            "State": "",
            "Cluster": "default_cluster",
            "Host": "10.81.85.88:61465",
            "Time": "460",
            "Id": "1",
            "Info": "",
            "Db": "db1"
        }]
    },
    "count": 2
}
```
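For example, with placeholder FE host, HTTP port, and credentials, and assuming the endpoint goes through the FE's usual HTTP basic authentication:

```
curl -u root: http://fe_host:8030/rest/v1/session/all
```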
This PR was originally #16940, but it has not been updated for a long time by its original author @Cai-Yao, so for now we merge part of the code into master first.
Thanks to @Cai-Yao and @yiguolei.
count_by_enum(expr1, expr2, ..., exprN);
Treats the data in a column as an enumeration and counts the number of occurrences of each enumerated value. For each column it returns the counts of the enumerated values, together with the number of non-null values and the number of null values.
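A minimal sketch; the table `t` and its columns are hypothetical:

```
-- count the distinct values (plus null/non-null totals) of two low-cardinality columns
select count_by_enum(gender, city) from t;
```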
In some cases, high load on HDFS may cause reads from HDFS to take a long time,
thereby slowing down the overall query. The HDFS client provides Hedged Read for this:
when a read request exceeds a certain threshold without returning, another read thread
is started to read the same data, and whichever returns first is used as the result.
e.g.:
create catalog regression properties (
'type'='hms',
'hive.metastore.uris' = 'thrift://172.21.16.47:7004',
'dfs.client.hedged.read.threadpool.size' = '128',
'dfs.client.hedged.read.threshold.millis' = '500'
);
Doris trash includes the FE catalog recycle bin and the BE tablet trash. Users are sometimes confused about the two, so this puts them together to help users understand them better.
* Revert "[fix](executor) only mysql connect to set GlobalPipelineTask (#22205)"
* Revert "[feature](executor) using fe version to set instance_num (#22047)"