If a user tries to create a materialized view without grouping on an aggregation table, Doris will throw an exception.
The correct approach is to explicitly declare the grouping columns.
For example:
Agg table: k1, k2, sum(k3)
Create materialized view stmt: select k1, k2 from agg_table group by k1, k2.
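A minimal sketch of the full statement, using the agg table above (the mv name is illustrative):
```
CREATE MATERIALIZED VIEW mv_k1_k2 AS
SELECT k1, k2 FROM agg_table GROUP BY k1, k2;
```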
Fixes #4316
A new feature has been added to acquire the tablet id and schema hash of all the tablets on a particular BE node
via a web page, so that more detailed information about each tablet can be obtained from these
tablet ids and schema hashes. Depending on the web request, there are two formats
(table and JSON) for showing the acquired tablet ids and schema hashes on the web page.
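For illustration, the two formats might be requested as follows (the exact paths and parameters are assumptions, not confirmed here):
```
http://be_host:webserver_port/tablets_page?limit=100   (table output; path assumed)
http://be_host:webserver_port/tablets_json?limit=100   (JSON output; path assumed)
```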
**Describe the bug**
Predicate pushdown may throw an NPE when the subquery contains DISTINCT.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a table like:
```
+--------------+--------------+------+-------+---------+---------+
| Field        | Type         | Null | Key   | Default | Extra   |
+--------------+--------------+------+-------+---------+---------+
| event_day    | DATETIME     | No   | true  | NULL    |         |
| title        | VARCHAR(600) | No   | true  | NULL    |         |
| report_value | VARCHAR(50)  | No   | false | NULL    | REPLACE |
+--------------+--------------+------+-------+---------+---------+
```
2. Execute the query:
```
SELECT
*
FROM
(
SELECT
DISTINCT event_day,
title
FROM
click_show_window
) a
WHERE
a.title IS NOT NULL
```
3. See the error:
```
ERROR 1064 (HY000): errCode = 2, detailMessage = Unexpected exception: null
```
This is because DISTINCT generates grouping exprs in the AggregateInfo, even though the query has no explicit GROUP BY clause.
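For reference, the DISTINCT subquery is semantically equivalent to the following grouped query, which is why grouping exprs exist even though the statement contains no GROUP BY:
```
SELECT event_day, title
FROM click_show_window
GROUP BY event_day, title
```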
1. Rename run-ut.sh to run-be-ut.sh.
2. Find all test files from the build dir instead of declaring them separately in the script.
3. Add gtest output to collect the results of the unit tests.
* [Feature][Cache] Cache proxy and coordinator #2581
1. Cache's abstract proxy class and BE's Cache implementation
2. Cache coordinator implemented by consistent hashing
* Adjusted the code formatting, naming, and variables according to the review comments
Fix a BE crash caused by casting decimal to date. The crash was caused by a symbol lookup failure: "Unable to find _ZN5doris18DecimalV2Operators16cast_to_date_val".
Also see #4281.
The column types of the materialized view and the base table can differ.
When the mv is selected in the query plan, the slot type should be changed to the mv column type.
For example:
base table: k1 int, k2 int
mv table: k1 int, k2 bigint sum
The k2 type of slot ref should be changed from int to bigint.
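A minimal sketch of such an mv, using the example tables above (the mv name is illustrative):
```
CREATE MATERIALIZED VIEW mv_sum_k2 AS
SELECT k1, SUM(k2) FROM base_table GROUP BY k1;
-- the SUM(k2) column is stored as bigint in the mv, so when the planner
-- selects this mv, the slot type for k2 must change from int to bigint
```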
Closes #4271.
Redesign metrics to 3 layers:
MetricRegistry - MetricEntity - Metrics
MetricRegistry : the registration center.
MetricEntity : the entity registered on the MetricRegistry. Generally, several MetricEntities can be registered on one
MetricRegistry; each MetricEntity is an independent entity, such as server, disk_devices, data_directories, thrift
clients and servers, and so on.
Metric : a metric of an entity, such as fragment_requests_total on the server entity, disk_bytes_read on a disk_device entity,
or thrift_opened_clients on a thrift_client entity.
MetricPrototype : the type of a metric. A MetricPrototype is a global variable and can be shared by the same metric across
different MetricEntities.
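A sketch of the resulting hierarchy, using the example entities and metrics above:
```
MetricRegistry
├── MetricEntity (server)
│   └── Metric: fragment_requests_total
├── MetricEntity (disk_device)
│   └── Metric: disk_bytes_read
└── MetricEntity (thrift_client)
    └── Metric: thrift_opened_clients
```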
This PR adds InPredicate support to the DELETE statement,
and adds a max_allowed_in_element_num_of_delete variable to
limit the number of InPredicate elements in a DELETE statement.
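A minimal sketch (the table, column, and values are illustrative):
```
-- the number of elements in the IN list is capped by max_allowed_in_element_num_of_delete
DELETE FROM tbl WHERE k1 IN (1, 2, 3);
```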
The new function approx_count_distinct is an alias of the function ndv,
so Doris also needs to rewrite approx_count_distinct to the hll function when it is possible to match the hll materialized view.
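For illustration, the two queries below are equivalent, and both should be rewritten to use an HLL materialized view on user_id when one exists (the table and column names are illustrative):
```
SELECT ndv(user_id) FROM t;
SELECT approx_count_distinct(user_id) FROM t;  -- alias of ndv, rewritten the same way
```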
Fix the calculation of the cumulative point. The calculation of the cumulative point
is wrong when the BE restarts and there is a delete rowset. Also see #4258.
Support ALTER ROUTINE LOAD JOB stmt, for example:
```
alter routine load db1.label1
properties
(
"desired_concurrent_number"="3",
"max_batch_interval" = "5",
"max_batch_rows" = "300000",
"max_batch_size" = "209715200",
"strict_mode" = "false",
"timezone" = "+08:00"
)
```
Details can be found in `alter-routine-load.md`
Revert "Change type of sum, min, max function column in mv".
This PR reverts PR #4199.
The daily test cored when the type of the mv column was changed,
so I reverted the PR.
The daily core will be fixed in the future; after that, PR #4199 will be re-enabled.
Use the attachment strategy of brpc to send packets with a big size.
brpc must serialize a packet before sending it.
If we send one batch with a big size, it will encounter a connection failure.
So we use the attachment strategy to bypass the problem and eliminate
the serialization cost.
Stream load should read all the data completely before parsing the JSON.
Also add a new BE config streaming_load_max_batch_read_mb
to limit the data size when loading JSON data.
Fix the bug of loading an empty JSON array [].
Add a doc to explain certain cases of loading JSON-format data.
Fix: #4124
Queries like DELETE FROM tbl WHERE decimal_key <= "123.456";
may fail randomly when the type of decimal_key is DECIMALV2,
because the precision and scale are not initialized.
Currently, if the length of the URL is longer than 4096 bytes, netty will refuse it.
The case can be reproduced by constructing a very long URL (longer than 4096 bytes).
Add 2 http server params (see the sketch below):
1. http_max_line_length
2. http_max_header_size
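A sketch of how the params might be set, assuming they belong in the FE config file since netty serves the FE http server (values are illustrative, not confirmed defaults):
```
http_max_line_length = 8192
http_max_header_size = 16384
```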
If the agg function is sum, the type of the mv column will be bigint.
The only exception is that if the base column is largeint, the type of the mv column will be largeint.
If the agg function is min or max, the type of the mv column will be the same as the type of the base column.
For example, the type of the mv column is smallint when the agg function is min and the base column is smallint.
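A sketch illustrating the rules, assuming a base table with v1 SMALLINT (all names are illustrative):
```
CREATE MATERIALIZED VIEW mv_sum AS
SELECT k1, SUM(v1) FROM base GROUP BY k1;  -- the SUM(v1) mv column becomes bigint
CREATE MATERIALIZED VIEW mv_min AS
SELECT k1, MIN(v1) FROM base GROUP BY k1;  -- the MIN(v1) mv column stays smallint
```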
We make all MemTrackers shared in order to show real-time MemTracker consumption on the web.
As follows:
1. Nearly all MemTracker raw pointers -> shared_ptr.
2. Use CreateTracker() to create a new MemTracker (so that it adds itself to its parent).
3. RowBatch & MemPool still use raw pointers to MemTracker; it is easy to ensure that the RowBatch & MemPool
destructors execute before the MemTracker's destructor, so we don't change that code.
4. MemTracker can use RuntimeProfile's counters to calculate consumption, so RuntimeProfile's counters need to be
shared too. We add a shared counter pool to store the shared counters and leave the other counters of RuntimeProfile unchanged.
Note that this PR doesn't change the MemTracker tree structure, so there are still some orphan trackers, e.g. RowBlockV2's MemTracker. If you find that some shared MemTrackers consume little memory but are too time-consuming, you can make them orphans, and then it's fine to use the raw pointer.
If table1 and table2 are colocated using columns k1 and k2,
a query must contain all of k1 and k2 in its join condition to apply the colocation algorithm.
A query like select * from table1 inner join table2 where t1.k1 = t2.k1 cannot use a colocate join.
We add a rule to avoid the problem, as illustrated below.
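For illustration (table aliases assumed):
```
-- colocated on (k1, k2): both columns must appear in the join condition
SELECT * FROM table1 t1 INNER JOIN table2 t2
ON t1.k1 = t2.k1 AND t1.k2 = t2.k2;  -- colocate join can apply

SELECT * FROM table1 t1 INNER JOIN table2 t2
ON t1.k1 = t2.k1;                    -- only k1: colocate join cannot apply
```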