Doris only supports the TThreadPoolServer model for its thrift server, but this
model is not efficient in some high-concurrency scenarios, so this
PR introduces a new config that allows users to choose a different server model
for their scenario.
Add new FE config: `thrift_server_type`
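A sketch of how the new option might be set in fe.conf (the value names below are assumptions, not taken from this PR; check the config documentation for the exact supported values):

```
# fe.conf sketch: select the thrift server model (value names assumed)
# THREAD_POOL       - one worker thread per connection (previous behavior)
# THREADED_SELECTOR - non-blocking selector threads, better for high concurrency
thrift_server_type = THREAD_POOL
```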
TPlanExecParams::volume_id is never used, so delete the print_volume_ids() function.
Fix logging, and log an error if PlanFragmentExecutor::open() returns an error.
Fix some comments
We can build unit tests by specifying BUILD_TYPE as DEBUG/RELEASE/LSAN/ASAN.
The outputs of each mode are placed in different directories, which
saves time when rebuilding in the same mode.
fix: https://github.com/apache/incubator-doris/issues/3984
1. Add `conjunct.size` checking and `slot_desc` nullptr checking logic.
2. For historical reasons, the function predicates were added one by one; this PR refactors the processing to make the logic for function predicates clearer.
Implements #3803
Support disabling some meaningless ORDER BY clauses.
The default limit of 65535 will not be removed, because it is added at the plan node;
after we support spilling to disk, we can move this limit to the analyze phase.
When we get different columns' row ranges from column_delete_conditions, we should use a union operation instead of an intersection operation to get the final row ranges.
The root cause is that we lost the relationship between the two delete conditions in the same delete statement.
Base data:
```
k1, k2
1, 2
1, 3
case 1:
delete from tbl where k1=1 and k2=2;
case 2:
delete from tbl where k1=1;
delete from tbl where k2=2;
```
We treat the above two cases as the same, which is incorrect.
So we need to process each set of delete conditions separately.
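The corrected grouping can be sketched like this (simplified, hypothetical types; the real code operates on per-rowset row ranges rather than individual rows). Conditions inside one delete statement are ANDed (intersection), while separate delete statements are ORed (union):

```cpp
#include <cassert>
#include <functional>
#include <set>
#include <utility>
#include <vector>

// A row holds (k1, k2); a Pred tests one column condition.
using Row = std::pair<int, int>;
using Pred = std::function<bool(const Row&)>;

// One delete statement = AND of its conditions (intersection of row sets).
// All delete statements together = OR (union of the per-statement sets).
std::set<size_t> deleted_rows(const std::vector<Row>& rows,
                              const std::vector<std::vector<Pred>>& stmts) {
    std::set<size_t> result;
    for (const auto& stmt : stmts) {          // union across statements
        for (size_t i = 0; i < rows.size(); ++i) {
            bool all = true;                  // intersection within a statement
            for (const auto& p : stmt) all = all && p(rows[i]);
            if (all) result.insert(i);
        }
    }
    return result;
}
```

With the base data above, case 1 deletes only the row (1, 2), while case 2 deletes both rows; collapsing case 2 into case 1's shape was exactly the bug.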
Fixes #3893
In a cluster with frequent load activities, FE will ignore most tablet reports from BEs,
because currently it only handles reports whose version >= BE's latest report version
(which is increased each time a transaction is published). This can be observed in FE's log,
which contains many entries like `out of date report version 15919277405765 from backend[177969252].
current report version[15919277405766]`.
However, many system functionalities rely on TabletReport processing to work properly. For example:
1. bad or version-missing replicas are detected and repaired during TabletReport
2. storage medium migration decisions and actions are made based on TabletReport
3. BE's old transactions are cleared/republished during TabletReport
In fact, it is not necessary to update the report version after the publish task;
this is actually a problem left over from history. In the reporting logic of the current version,
we no longer decrease the version information of a replica in the FE metadata according to the report,
so even if we receive a stale report, it does not matter.
This CL mainly contains two changes:
1. do not increase the report version for publish tasks
2. populate `tabletWithoutPartitionId` outside the read lock of TabletInvertedIndex
* [Enhance] Add MetaUrl and CompactionUrl for "show tablet" stmt
Add MetaUrl and CompactionUrl to the result of the following stmt:
`show tablet 10010;`
* fix ut
* add doc
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
Fix: #3946
CL:
1. Add a prepare phase for the `from_unixtime()`, `date_format()` and `convert_tz()` functions, to handle the format string once for all rows.
2. Find the cctz time zone when initializing the `runtime state`, so that we don't need to look up the time zone for each row.
3. Add a constant rewrite rule for `utc_timestamp()`.
4. Add a doc for `to_date()`.
5. Comment out the `push_handler_test`; it cannot run in DEBUG mode and will be fixed later.
6. Remove `timezone_db.h/cpp` and add `timezone_utils.h/cpp`
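The "prepare once" idea behind change 1 can be sketched like this (all names here are hypothetical; the real code translates MySQL-style formats and uses the session time zone via cctz, while this sketch uses fixed UTC):

```cpp
#include <cassert>
#include <ctime>
#include <string>
#include <vector>

// Hypothetical prepared state: when the format argument is constant, it is
// validated/translated a single time instead of once per row.
struct PreparedFormat {
    std::string strftime_fmt;  // assumed already strftime-compatible here
};

PreparedFormat prepare_format(const std::string& fmt) { return {fmt}; }

std::string from_unixtime(long ts, const PreparedFormat& pf) {
    std::time_t t = static_cast<std::time_t>(ts);
    std::tm tm_buf{};
    gmtime_r(&t, &tm_buf);  // fixed UTC; the real code uses the session tz
    char buf[64];
    std::strftime(buf, sizeof(buf), pf.strftime_fmt.c_str(), &tm_buf);
    return buf;
}

std::vector<std::string> format_column(const std::vector<long>& col,
                                       const std::string& fmt) {
    PreparedFormat pf = prepare_format(fmt);  // hoisted out of the row loop
    std::vector<std::string> out;
    out.reserve(col.size());
    for (long ts : col) out.push_back(from_unixtime(ts, pf));
    return out;
}
```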
The performance results are shown below:
11,000,000 rows
SQL1: `select count(from_unixtime(k1)) from tbl1;`
Before: 8.85s
After: 2.85s
SQL2: `select count(from_unixtime(k1, '%Y-%m-%d %H:%i:%s')) from tbl1 limit 1;`
Before: 10.73s
After: 4.85s
Date string formatting still seems slow; we may need a further enhancement for it.
Replace some boost usages with std in OlapScanNode.
This refactor seems to solve the problem described in #3929,
because I found that BE crashed when calling `boost::condition_variable::notify_all()`.
After this upgrade, BE no longer crashes.
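A minimal sketch of the boost-to-std swap, using a small blocking queue in place of the real OlapScanNode synchronization; the std types are drop-in replacements for their boost counterparts here:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

class BlockingQueue {
public:
    void push(int v) {
        {
            std::lock_guard<std::mutex> lock(_mu);
            _q.push(v);
        }
        _cv.notify_all();  // previously boost::condition_variable::notify_all()
    }
    int pop() {
        std::unique_lock<std::mutex> lock(_mu);
        _cv.wait(lock, [this] { return !_q.empty(); });
        int v = _q.front();
        _q.pop();
        return v;
    }
private:
    std::mutex _mu;                // previously boost::mutex
    std::condition_variable _cv;   // previously boost::condition_variable
    std::queue<int> _q;
};
```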
ISSUE:#3960
PR #3454 introduced caching for EsClient, but the client was only initialized during editlog replay; this work should also be done during image replay.
This happens when restarting or upgrading FE.
BTW: also fixes a UT failure for metrics.
Currently we choose BEs randomly without checking whether a disk is available;
create table will only fail after the create tablet task is sent to the BE
and the BE checks whether there is available capacity to create the tablet.
So checking backend disk availability by storage medium will reduce unnecessary RPC calls.
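The filtering idea can be sketched as follows (C++ used for illustration only; the actual FE code is Java, and all names here are hypothetical): candidate backends are filtered by whether they have an available disk of the requested storage medium before any create-tablet RPC is sent.

```cpp
#include <cassert>
#include <vector>

enum class Medium { HDD, SSD };

struct Disk {
    Medium medium;
    long available_bytes;
};

struct Backend {
    int id;
    std::vector<Disk> disks;
};

// True if the backend has at least one disk of the requested medium with
// enough free capacity.
bool has_available_disk(const Backend& be, Medium m, long need_bytes) {
    for (const auto& d : be.disks) {
        if (d.medium == m && d.available_bytes >= need_bytes) return true;
    }
    return false;
}

// Filter candidates up front, instead of discovering failures via RPC.
std::vector<int> candidate_backends(const std::vector<Backend>& bes,
                                    Medium m, long need_bytes) {
    std::vector<int> ids;
    for (const auto& be : bes) {
        if (has_available_disk(be, m, need_bytes)) ids.push_back(be.id);
    }
    return ids;
}
```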
https://github.com/apache/incubator-doris/issues/3936
Doris On ES can obtain field values from `_source` or `docvalues`:
1. From `_source`, you get the original value as you put it; during indexing, ES converts a `date` field to milliseconds for docvalues.
2. From `docvalues`, before 6.4 you get a `millisecond timestamp` value; from 6.4 (inclusive) you get a formatted `date` value such as 2020-06-18T12:10:30.000Z. ES (>=6.4) provides a `format` parameter for `docvalue` field requests; support for this is coming soon in Doris On ES.
After this PR is merged into Doris, Doris On ES will only correctly support processing `millisecond` timestamps and string-format dates. If you provide a `seconds` timestamp, Doris On ES will process it wrongly (it is divided by 1000 internally).
ES mapping:
```
{
"timestamp_test": {
"mappings": {
"doc": {
"properties": {
"k1": {
"type": "date",
"format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
}
}
}
}
}
}
```
ES documents:
```
{
"_index": "timestamp_test",
"_type": "doc",
"_id": "AXLbzdJY516Vuc7SL51m",
"_score": 1,
"_source": {
"k1": "2020-6-25"
}
},
{
"_index": "timestamp_test",
"_type": "doc",
"_id": "AXLbzddn516Vuc7SL51n",
"_score": 1,
"_source": {
"k1": 1592816393000 -> 2020/6/22 16:59:53
}
}
```
Doris Table:
```
CREATE EXTERNAL TABLE `timestamp_source` (
`k1` date NULL COMMENT ""
) ENGINE=ELASTICSEARCH
```
### enable_docvalue_scan = false
**For ES 5.5**:
```
mysql> select k1 from timestamp_source;
+------------+
| k1 |
+------------+
| 2020-06-25 |
| 2020-06-22 |
+------------+
```
**For ES 6.5 or above**:
```
mysql> select * from timestamp_source;
+------------+
| k1 |
+------------+
| 2020-06-25 |
| 2020-06-22 |
+------------+
```
### enable_docvalue_scan = true
**For ES 5.5**:
```
mysql> select k1 from timestamp_dv;
+------------+
| k1 |
+------------+
| 2020-06-25 |
| 2020-06-22 |
+------------+
```
**For ES 6.5 or above**:
```
mysql> select * from timestamp_dv;
+------------+
| k1 |
+------------+
| 2020-06-25 |
| 2020-06-22 |
+------------+
```
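The seconds-vs-milliseconds pitfall described above can be sketched as follows (the helper name is hypothetical; it mimics the internal divide-by-1000 assumption):

```cpp
#include <cassert>
#include <ctime>
#include <string>

// Values read from ES `date` fields are assumed to be epoch *milliseconds*,
// so they are divided by 1000 before conversion. Feeding this function epoch
// *seconds* therefore yields a date near 1970.
std::string millis_to_date_utc(long long epoch_millis) {
    std::time_t secs = static_cast<std::time_t>(epoch_millis / 1000);
    std::tm tm_buf{};
    gmtime_r(&secs, &tm_buf);
    char buf[16];
    std::strftime(buf, sizeof(buf), "%Y-%m-%d", &tm_buf);
    return buf;
}
```

For the document above, 1592816393000 (milliseconds) converts correctly, but the same value written as 1592816393 (seconds) would land in January 1970.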
1. Split /_cluster/state into /_mapping and /_search_shards requests, to reduce the required permissions and make the logic clearer.
2. Rename some ES-related objects to make their representation more accurate.
3. Simply support docValue and Fields in alias mode, taking the first one by default.
#3311
Prior to this PR, Doris On ES merged another PR (https://github.com/apache/incubator-doris/pull/3513) which misused the `total` node. After Doris On ES introduced `terminate_after` (https://github.com/apache/incubator-doris/issues/2576), the `total` document count is no longer computed, so relying on the `total` field would be dangerous. Instead, we rely on the actual document count obtained by counting the `inner hits` node, which is what it really means. So we remove all `total` parsing and related logic from Doris On ES; this may also improve performance slightly, since the `total` JSON node is ignored and skipped.
Before, we used a map in DataStreamRecvr to save the StopWatch corresponding to each pending closure.
But we needed to maintain consistency between the map and the pending closures queue, which was very error-prone.
If they became inconsistent, BE would crash.
So we remove the map in DataStreamRecvr and replace it with a vector<pair<Closure*, MonotonicStopWatch>>.
There are too many logs in be.WARNING that look like:
```
W0622 17:47:52.513341 26554 run_length_byte_reader.cpp:102] fail to ReadOnlyFileStream seek.[res = -1705]
W0622 17:47:52.513417 26554 run_length_byte_reader.cpp:102] fail to ReadOnlyFileStream seek.[res = -1705]
W0622 17:47:52.513466 26554 run_length_byte_reader.cpp:102] fail to ReadOnlyFileStream seek.[res = -1705]
```
It's a normal case when a run length reader reaches EOF, so we can downgrade it from
WARNING to INFO to reduce useless logs in be.WARNING.
Fix #3920
CL:
1. Parse the TCP metrics header in `/proc/net/snmp` to get the right positions of the metrics.
2. Add 2 new metrics: `tcp_in_segs` and `tcp_out_segs`
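Header-driven parsing of `/proc/net/snmp` can be sketched like this (simplified; the real code reads the file and exports the values as metrics). Each protocol appears as a header line of metric names followed by a value line, so column positions must be resolved from the header instead of being hard-coded:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Pair up each "Tcp:" header line with its value line and map metric name
// to value, so the positions of InSegs/OutSegs need not be hard-coded.
std::map<std::string, long long> parse_snmp_tcp(const std::string& text) {
    std::istringstream in(text);
    std::string header, values;
    std::map<std::string, long long> tcp;
    while (std::getline(in, header) && std::getline(in, values)) {
        if (header.rfind("Tcp:", 0) != 0) continue;
        std::istringstream hs(header), vs(values);
        std::string name, value;
        hs >> name;                     // skip the leading "Tcp:" token
        vs >> value;
        while (hs >> name && vs >> value) {
            tcp[name] = std::stoll(value);
        }
        break;
    }
    return tcp;
}
```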
When we get the default system time zone, it may return `PRC`, which was not supported by us and thus
caused dynamic partition creation to fail. Fix #3919.
This CL mainly changes:
1. Use a unified method to get the system default time zone.
2. The default variables `system_time_zone` and `time_zone` are now set to the default system
time zone, which is `Asia/Shanghai`.
3. Modify related unit tests.
4. Support time zone `PRC`.
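The alias handling can be sketched like this (a hypothetical helper; the real implementation may resolve aliases differently): unknown aliases such as `PRC` are mapped to a canonical IANA name before the normal time zone lookup.

```cpp
#include <cassert>
#include <map>
#include <string>

// Map time zone aliases our parser does not know to canonical IANA names
// before lookup; unknown names pass through unchanged.
std::string canonicalize_tz(const std::string& tz) {
    static const std::map<std::string, std::string> aliases = {
        {"PRC", "Asia/Shanghai"},  // the alias handled by this CL
        // further aliases could be added here
    };
    auto it = aliases.find(tz);
    return it == aliases.end() ? tz : it->second;
}
```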