1. Optimize the error message when using batch delete.
2. Rename session variable `is_report_success` to `enable_profile`.
3. Add the table name to the `OlapScanner` profile.
When a BE has an exception, the FE does not log the `BackendHbResponse` info, so we cannot tell which BE had the exception.
The exception log is:
`WARN (heartbeat mgr|31) [HeartbeatMgr.runAfterCatalogReady():141] get bad heartbeat response: type: BACKEND, status: BAD, msg: java.net.ConnectException: Connection refused (Connection refused)
`
So we need to add a `toString()` method; then the FE can log the `BackendHbResponse` info.
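A minimal sketch of the idea, assuming illustrative field names (the real `BackendHbResponse` class carries more state than this):
```
public class BackendHbResponse {
    private final long beId;
    private final String host;
    private final String status;
    private final String msg;

    public BackendHbResponse(long beId, String host, String status, String msg) {
        this.beId = beId;
        this.host = host;
        this.status = status;
        this.msg = msg;
    }

    // Including the BE identity in toString() lets the heartbeat warning
    // show exactly which BE produced the bad response.
    @Override
    public String toString() {
        return "BackendHbResponse [beId=" + beId + ", host=" + host
                + ", status=" + status + ", msg=" + msg + "]";
    }
}
```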
This CL mainly changes:
1. Add Star Schema Benchmark (SSB) tools in `tools/ssb-tools`, so that users can easily load and test with the SSB data set.
2. Disable the segment cache for some read scenarios such as compaction and alter operations. (Fix #6924)
3. Fix a bug where `max_segment_num_per_rowset` does not take effect. (Fix #6926)
4. Enable `enable_batch_delete_by_default` by default.
1. Simplify the use of the Flink connector, like other streaming sinks, via `GenericDorisSinkFunction`.
2. Add use cases for the Flink connector.
## Use case
```
env.fromElements("{\"longitude\": \"116.405419\", \"city\": \"北京\", \"latitude\": \"39.916927\"}")
        .addSink(
                DorisSink.sink(
                        DorisOptions.builder()
                                .setFenodes("FE_IP:8030")
                                .setTableIdentifier("db.table")
                                .setUsername("root")
                                .setPassword("")
                                .build()));
```
When loading meta with `meta_tool` fails, we only get an error code from `json2pb`,
which makes it inconvenient to locate the problem.
This change adds an error message when loading meta fails.
The log change is shown below.
```
# before
./meta_tool --root_path=/home/disk1/qjl/mydoris/be/storage --operation=load_meta --json_meta_path=/home/disk1/qjl/data/meta-json.json
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1020 11:41:56.564241 74937 data_dir.cpp:837] path: /home/disk1/qjl/mydoris/be/storage total capacity: 7750843404288, available capacity: 7583325925376
I1020 11:41:56.564415 74937 data_dir.cpp:275] path: /home/disk1/qjl/mydoris/be/storage, hash: 7528840506668047470
load meta failed, status:-1410
# after
./meta_tool --root_path=/home/disk1/qjl/mydoris/be/storage --operation=load_meta --json_meta_path=/home/disk1/qjl/data/meta-json.json
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1020 14:41:40.084342 50727 data_dir.cpp:837] path: /home/disk1/qjl/mydoris/be/storage total capacity: 7750843404288, available capacity: 7584601022464
I1020 14:41:40.084496 50727 data_dir.cpp:275] path: /home/disk1/qjl/mydoris/be/storage, hash: 7528840506668047470
E1020 14:41:40.163007 50727 tablet_meta_manager.cpp:161] JSON to protobuf message failed: Fail to decode base64 string=0
load meta failed, status:-1410
```
`String.valueOf()` returns the string "null" for a null input, in which case requests with no SQL
would be accepted by `TableQueryPlanAction` unexpectedly, with potential risk.
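A small demonstration of the pitfall, with an illustrative guard (variable names here are hypothetical, not the actual `TableQueryPlanAction` code):
```
public class NullSqlCheck {
    public static void main(String[] args) {
        // String.valueOf(Object) converts a null input into the 4-character
        // string "null", so a naive emptiness check is fooled.
        Object sql = null;                       // e.g. missing "sql" field in the request
        String converted = String.valueOf(sql);  // -> "null", not null
        System.out.println(converted.isEmpty()); // false: the request slips through

        // Safer pattern: test for null before converting.
        String safe = (sql == null) ? null : sql.toString();
        if (safe == null || safe.isEmpty()) {
            System.out.println("reject request: missing SQL");
        }
    }
}
```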
Upgrade thirdparty from 1.4.0 to 1.4.1:
1. Add a patch for aws-c-cal-0.4.5.
2. Add some solutions for `undefined reference libpsl`.
3. Move libgsasl to fix a link problem of libcurl.
4. Downgrade openssl to 1.0.2k to fix a problem with low-version glibc.
Some users do not know that Doris supports the BOOLEAN type, so they use TINYINT instead.
This adds the BOOLEAN type to the output of 'help create table' in the MySQL client.
Currently the BOOLEAN type occupies 1 byte, but the value of a boolean column is only in {0, 1},
which wastes some memory; I want to change its implementation to 1 bit in the future.
When the memory usage of BE reaches mem_limit, printing ReservationTrackerCounters through MemTracker
may cause BE to crash under high concurrency.
ReservationTrackerCounters is not actually used in current Doris, and the memory tracker in Doris
will be redesigned in the future.
1. Refactor the create method of the hdfs reader & writer.
   libhdfs3 does not support arm64, so we should not support the hdfs reader & writer on arm64.
2. Add a macro for `LowerUpperImpl`.
Support specifying a `label` property in the EXPORT stmt:
```
EXPORT TABLE xxx
...
PROPERTIES
(
"label" = "mylabel",
...
);
```
Then the user can use the label to get the job info via the SHOW EXPORT stmt:
```
show export from db where label="mylabel";
```
For compatibility, if not specified, a random label will be used; for history jobs, the label will be "export_job_id".
Unlike the LOAD stmt, we specify the label in `properties` here because this does not cause grammatical conflicts,
and there is no need to modify the meta version of the metadata.
1. Avoid sending tasks if there is no data to be consumed, by fetching the latest offset of each partition before sending tasks; see the sketch after this list. (Fix [Optimize] Avoid too many abort task in routine load job #6803)
2. Add a preCheckNeedSchedule phase in update() of routine load, to avoid holding the job's write lock for a long time when getting all kafka partitions from the kafka server.
3. Upgrade librdkafka to version 1.7.0 to fix a bug of "Local: Unknown partition". See offsetsForTimes fails with 'Local: Unknown partition' edenhill/librdkafka#3295
4. Avoid unnecessary storage migration tasks if that storage medium does not exist on the BE. (Fix [Bug] Too many unnecessary storage migration tasks #6804)
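A minimal sketch of the offset pre-check idea in item 1, written against the plain Kafka Java client for illustration only (Doris itself consumes Kafka via librdkafka on the BE, so the class and method names below are assumptions of this sketch, not the actual routine load code):
```
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OffsetPreCheck {
    // Returns true only if the partition has messages beyond what the job
    // has already consumed, so no empty task needs to be scheduled.
    public static boolean hasNewData(String brokers, TopicPartition tp, long consumedOffset) {
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // endOffsets() returns the offset of the next message to be written,
            // so there is new data only if it is ahead of the consumed offset.
            Map<TopicPartition, Long> end = consumer.endOffsets(Collections.singleton(tp));
            return end.getOrDefault(tp, 0L) > consumedOffset;
        }
    }
}
```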
* [Feature] Support lateral view
The syntax:
```
select k1, e1 from test lateral view explode_split(k1, ",") tmp as e1;
```
`explode_split` is a special function of Doris,
which is used to split a string column by the specified separator string,
and then expand the split result into multiple rows.
It is a combination of string splitting and a table function,
and its behavior is equivalent to Hive's `explode(split(string, string))`.
The implementation:
A table function operator is added to the implementation to handle the lateral view syntax separately.
The query plan looks like this:
```
MySQL [test]> explain select k1, e1 from test_explode lateral view explode_split(k2, ",") tmp as e1;
+---------------------------------------------------------------------------+
| Explain String                                                            |
+---------------------------------------------------------------------------+
| PLAN FRAGMENT 0                                                           |
|  OUTPUT EXPRS:`k1` | `e1`                                                 |
|                                                                           |
|   RESULT SINK                                                             |
|                                                                           |
|   1:TABLE FUNCTION NODE                                                   |
|   |  table function: explode_split(`k2`, ',')                             |
|   |                                                                       |
|   0:OlapScanNode                                                          |
|      TABLE: test_explode                                                  |
+---------------------------------------------------------------------------+
```
* Add unit tests
* Add support for multiple table function nodes
* Add session variable `enable_lateral_view`
* Fix unit tests