Build the materialized view function for schema change here based on defineExpr. This is a workaround because the current storage layer does not support expression evaluation. A count distinct materialized view sets mv_expr to to_bitmap or hll_hash; a count materialized view sets mv_expr to count.
* Support regenerating historical data in BE when a new materialized view is created
* Support the to_bitmap function
* Support the hll_hash function
* Support the count(field) function
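For illustration, minimal sketches of the three cases (table and column names are assumptions, not from this change):
```
-- count distinct: mv_expr is set to to_bitmap
CREATE MATERIALIZED VIEW mv_bitmap_uv AS
SELECT k1, BITMAP_UNION(TO_BITMAP(user_id)) FROM t GROUP BY k1;

-- approximate count distinct: mv_expr is set to hll_hash
CREATE MATERIALIZED VIEW mv_hll_uv AS
SELECT k1, HLL_UNION(HLL_HASH(user_id)) FROM t GROUP BY k1;

-- count: mv_expr is set to count
CREATE MATERIALIZED VIEW mv_cnt AS
SELECT k1, COUNT(user_id) FROM t GROUP BY k1;
```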
For #3344
The output columns of a query should be collected from all tupleIds
in BaseTableRef rather than from the top-level tupleIds of the query.
The top-level tuple of count(*) is the Agg tuple, which does not expand the star.
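For example, the simplest affected query shape (table name is illustrative):
```
-- The top-level tuple here belongs to the aggregation, so the star must be
-- expanded from the base table's tupleIds instead:
SELECT COUNT(*) FROM base_table;
```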
Fixed #4065
[Bug] Fix some schema changes not working correctly
This CL mainly fixes schema changes to the varchar type that did not work
correctly because a logic check was forgotten, and adds a ConvertTypeResolver
that registers the supported conversion types so that such checks cannot be forgotten again.
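For illustration, a hypothetical schema change to varchar of the kind the new check guards (table and column names are assumptions):
```
-- Conversion to varchar is now validated through ConvertTypeResolver:
ALTER TABLE tbl MODIFY COLUMN k2 VARCHAR(64);
```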
This PR mainly adds the `thrift_client_retry_interval_ms` config in BE for the thrift client,
to avoid an avalanche in the FE thrift server. It also fixes some typos and some RPC
setting problems at the same time.
Fixes #4047. #3886 is related to this case.
The SQL: `bigtable t1 join mysqltable t2 join mysqltable t3 on t1.k1 = t3.k1`
1. after reorder:
t1, t2, t3
2. choose join t1 with t2:
t1 joins t2 with no condition, so Doris chooses a cross join
3. choose join (t1 join t2) with t3:
In the old code, t2 is a MySQL table, so its cardinality is zero,
and the cardinality of "the cross join of t1 with t2" is t1.cardinality multiplied by t2.cardinality;
since t2 is a MySQL table, t2.cardinality is zero, so the cross join's cardinality is also zero.
t3 is a MySQL table, so t3's cardinality is zero.
**If both tables to be joined have cardinality zero, we choose a shuffle join.**
So I changed the MySQL table's cardinality from 0 to 1, so that the cross join's cardinality is not zero.
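For reference, a concrete form of the query above might look like this (names are illustrative; t1 is an OLAP table, t2 and t3 are MySQL external tables):
```
-- The ON clause binds only the third table, so t1 and t2 are cross-joined
-- first, and the planner then joins the result with t3:
SELECT t1.k1
FROM t1 JOIN t2 JOIN t3 ON t1.k1 = t3.k1;
```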
It's time to enable some features by default.
1. Enable FE plugins by setting `plugin_enable=true`
2. Enable dynamic partition by setting `dynamic_partition_enable=true`
3. Enable nio mysql server by setting `mysql_service_nio_enabled=true`
Also modify installation doc, add download link of MySQL client.
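One way to verify the new defaults from a MySQL client (a sketch; config names as listed above):
```
ADMIN SHOW FRONTEND CONFIG LIKE "plugin_enable";
ADMIN SHOW FRONTEND CONFIG LIKE "dynamic_partition_enable";
ADMIN SHOW FRONTEND CONFIG LIKE "mysql_service_nio_enabled";
```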
Fixes #3995
## Why does it happen
When a set operation finds that the previous node needs aggregation, the AggregationNode is added at the wrong time. The AggregationNode should be added first, before the other children are added.
## Why intersect and union don't have this problem
Intersect and union are commutative, so it does not matter if the order is wrong.
## Why this problem was not caught before
The previous test cases did not cover the case where the previous node was not an AggregationNode.
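For illustration, a hypothetical query shape that could hit this path (table and column names are assumptions, not taken from the issue):
```
-- EXCEPT is not commutative, so the AggregationNode for the previous
-- operand must be added before the remaining children are attached:
SELECT k1 FROM t1
UNION
SELECT k1 FROM t2
EXCEPT
SELECT k1 FROM t3;
```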
After PR #3454 was merged, we should refactor and reorganize some logic for long-term sustainable iteration of Doris on ES.
To facilitate code review, I will divide this work into multiple PRs (there is some other WIP work I also need to think about carefully).
This PR includes:
1. Introduce SearchContext for all the state we need
2. Divide the meta-sync logic into three phases
3. Modify some logic processing
4. Introduce version-detection logic for future use
This CL mainly supports setting the replication_num property for dynamic
partition tables. If dynamic_partition.replication_num is not set, the value is the
same as the table's default replication_num.
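For illustration, a minimal sketch of a dynamic partition table that overrides the property (table, columns, and property values are assumptions):
```
CREATE TABLE t_dyn (k1 DATE, k2 INT)
DUPLICATE KEY(k1)
PARTITION BY RANGE(k1) ()
DISTRIBUTED BY HASH(k2) BUCKETS 4
PROPERTIES (
    "replication_num" = "3",                   -- table default
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p",
    "dynamic_partition.buckets" = "4",
    "dynamic_partition.replication_num" = "1"  -- overrides the table default
);
```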
Currently, FE uses ThreadPoolManager to manage and monitor all threads,
but some threads are still not managed. FE also uses the `Timer` class
to run some scheduled tasks, but `Timer` has some problems and is out of date;
it should be replaced by a ScheduledThreadPool.
Doris only supports the TThreadPoolServer model in its thrift server, but this
server model is not effective in some high-concurrency scenarios, so this
PR introduces a new config that allows users to choose a different server model
for their scenario.
Add new FE config: `thrift_server_type`
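A quick way to check the configured model from a MySQL client (a sketch):
```
ADMIN SHOW FRONTEND CONFIG LIKE "thrift_server_type";
```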
TPlanExecParams::volume_id is never used, so delete the print_volume_ids() function.
Fix logging, and log when PlanFragmentExecutor::open() returns an error.
Fix some comments
Implements #3803
Support disabling some meaningless ORDER BY clauses.
The default limit of 65535 is not removed, because it is added at the plan-node level;
after we support spilling to disk, we can move this limit to the analyze phase.
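For illustration, a sketch of a meaningless ORDER BY (names are illustrative):
```
-- The inner ORDER BY has no LIMIT and the outer query does not preserve
-- order, so the sort has no effect and can be disabled:
SELECT k1 FROM (SELECT k1 FROM t ORDER BY k1) tmp;
```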
Fixes #3893
In a cluster with frequent load activities, FE will ignore most tablet report from BE
because currently it only handles reports whose version >= BE's latest report version
(which is increased each time a transaction is published). This can be observed from FE's log,
with many logs like `out of date report version 15919277405765 from backend[177969252].
current report version[15919277405766]` in it.
However, many system functionalities rely on TabletReport processing to work properly. For example:
1. bad or version-missing replicas are detected and repaired during TabletReport
2. storage medium migration decisions and actions are made based on TabletReport
3. BE's old transactions are cleared/republished during TabletReport
In fact, it is not necessary to update the report version after a publish task;
this is actually a problem left over from history. In the current version's reporting logic,
we no longer decrease the replica's version information in the FE metadata according to the report,
so even if we receive a stale report, it does not matter.
This CL mainly contains two changes:
1. Do not increase the report version for publish tasks
2. Populate `tabletWithoutPartitionId` outside the read lock of TabletInvertedIndex
* [Enhance] Add MetaUrl and CompactionUrl for "show tablet" stmt
Add MetaUrl and CompactionUrl to the result of the following stmt:
`show tablet 10010;`
* fix ut
* add doc
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
Fix: #3946
CL:
1. Add prepare phase for `from_unixtime()`, `date_format()` and `convert_tz()` functions, to handle the format string once for all.
2. Find the cctz timezone when initializing the `runtime state`, so that we don't need to look up the timezone for each row.
3. Add constant rewrite rule for `utc_timestamp()`
4. Add doc for `to_date()`
5. Comment out `push_handler_test`; it cannot run in DEBUG mode and will be fixed later.
6. Remove `timezone_db.h/cpp` and add `timezone_utils.h/cpp`
The performance is shown below:
11,000,000 rows
SQL1: `select count(from_unixtime(k1)) from tbl1;`
Before: 8.85s
After: 2.85s
SQL2: `select count(from_unixtime(k1, '%Y-%m-%d %H:%i:%s')) from tbl1 limit 1;`
Before: 10.73s
After: 4.85s
Formatting the date string still seems slow; we may need a further enhancement for it.
Replace some boost usages with std in OlapScanNode.
This refactor seems to solve the problem described in #3929:
I found that BE would crash when calling `boost::condition_variable.notify_all()`,
but after this upgrade, BE does not crash any more.
ISSUE: #3960
PR #3454 introduced caching for EsClient, but the client was only initialized during editlog replay; this work should also be done during image replay,
which happens when FE is restarted or upgraded.
BTW: fix a UT failure for metrics.
Currently we choose a BE at random without checking whether its disk is available;
the create-table statement will not fail until the create-tablet task is sent to the BE
and the BE checks whether there is available capacity to create the tablet.
So checking backend disk availability by storage medium up front will reduce unnecessary RPC calls.
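For illustration, a sketch of a statement that benefits from the up-front check (names are illustrative): FE can now reject it immediately if no backend has available SSD capacity:
```
CREATE TABLE t_ssd (k1 INT, k2 INT)
DUPLICATE KEY(k1)
DISTRIBUTED BY HASH(k1) BUCKETS 8
PROPERTIES (
    "storage_medium" = "SSD",
    "replication_num" = "3"
);
```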
1. Split /_cluster/state into /_mapping and /_search_shards requests to reduce permissions and make the logic clearer
2. Rename some ES-related objects to make their representation more accurate
3. Simply support docValue and Fields in alias mode, taking the first one by default
#3311
Fix #3920
CL:
1. Parse the TCP metrics header in `/proc/net/snmp` to get the right position of the metrics.
2. Add 2 new metrics: `tcp_in_segs` and `tcp_out_segs`
When we get the default system time zone, it may return `PRC`, which we do not support,
and this causes dynamic partition creation to fail. Fixes #3919
This CL mainly changes:
1. Use a unified method to get the system default time zone
2. Now the default variables `system_time_zone` and `time_zone` are set to the default system
time zone, which is `Asia/Shanghai`.
3. Modify related unit test.
4. Support time zone `PRC`.
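A quick check of the new behavior (a sketch; the actual value depends on the system):
```
SHOW VARIABLES LIKE '%time_zone%';
SET time_zone = 'PRC';  -- now accepted
```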
The following statement will throw an error:
```
create table test.tbl2
(k1 int, k2 int, k3 float)
duplicate key(k1)
partition by range(k2)
(partition p1 values less than("10"))
distributed by hash(k3) buckets 1
properties('replication_num' = '1');
```
Error: `Only key column can be partition column`
But in a duplicate key table, columns can be partition or distribution columns
even if they are not duplicate keys.
This bug was introduced by #3812