The partition column of the table must also be a key column in the materialized view.
Otherwise, when a user adds a partition to the table, the BE will core dump.
The materialized view cannot create partitions correctly when the partition column has been aggregated.
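A minimal sketch of the constraint, assuming a hypothetical table `sales` partitioned by `event_day`:
```
-- OK: the partition column event_day stays a key (grouping) column in the mv.
CREATE MATERIALIZED VIEW mv_ok AS
SELECT event_day, city, SUM(cost)
FROM sales
GROUP BY event_day, city;

-- Problematic: the partition column is aggregated, so the mv cannot
-- create partitions correctly when a partition is added to sales.
-- CREATE MATERIALIZED VIEW mv_bad AS
-- SELECT city, MAX(event_day), SUM(cost) FROM sales GROUP BY city;
```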
(1) Add LargeInt cast to date and datetime, see #3864
LargeInt can now be cast to date and datetime. This fixes the error:
Unable to find _ZN5doris13CastFunctions16cast_to_date_valEPN9doris_udf15FunctionContextERKNS1_11LargeIntValE
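A minimal usage sketch, assuming `big_id` is a LARGEINT column in a hypothetical table `t`:
```
-- Before the fix, these casts failed with the "Unable to find ... cast_to_date_val" error.
SELECT CAST(big_id AS DATE), CAST(big_id AS DATETIME) FROM t;
```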
(2) Add local timezone info to the stale_version_path_json_doc REST API
Add timezone to the "last create time" field, for example:
```
{
    "path id": "1",
    "last create time": "1970-01-01 10:46:40 +0800",
    "path list": "1 -> [2-3] -> [4-5]"
},
```
and add timezone to the unit test, see #4121.
If a user wants to create a materialized view without grouping on an aggregation table, Doris will throw an exception.
The correct approach is to explicitly declare the grouping columns.
For example:
Agg table: k1, k2, sum(k3)
Create materialized view stmt: select k1, k2 from agg_table group by k1, k2.
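A minimal sketch of the correct statement (the mv name is hypothetical):
```
-- The grouping columns k1, k2 are declared explicitly.
CREATE MATERIALIZED VIEW mv_k1_k2 AS
SELECT k1, k2
FROM agg_table
GROUP BY k1, k2;
```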
Fixed #4316
A new feature has been added to acquire the tablet id and schema hash of all the tablets on a particular BE node
via a web page, so that more detailed information about each tablet can be obtained using these
tablet ids and schema hashes. Depending on the web request, there are two formats
(table and JSON) for showing the acquired tablet ids and schema hashes on the web page.
**Describe the bug**
Predicate push down where sub query has distinct may throw NPE
**To Reproduce**
Steps to reproduce the behavior:
1. Create a table like:
```
+--------------+--------------+------+-------+---------+---------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-------+---------+---------+
| event_day | DATETIME | No | true | NULL | |
| title | VARCHAR(600) | No | true | NULL | |
| report_value | VARCHAR(50) | No | false | NULL | REPLACE |
+--------------+--------------+------+-------+---------+---------+
```
2. Execute the query:
```
SELECT
*
FROM
(
SELECT
DISTINCT event_day,
title
FROM
click_show_window
) a
WHERE
a.title IS NOT NULL
```
3. See the error:
```
ERROR 1064 (HY000): errCode = 2, detailMessage = Unexpected exception: null
```
This is because DISTINCT generates grouping exprs in the agg info, but the query does not have a GROUP BY clause.
1. Rename run-ut.sh to run-be-ut.sh
2. Find all test files from the build dir instead of declaring them separately in the script
3. Add gtest output to collect the results of the unit tests.
* [Feature][Cache] Cache proxy and coordinator #2581
1. Cache's abstract proxy class and BE's Cache implementation
2. Cache coordinator implemented by consistent hashing
* Adjusted code formatting, naming, and variables according to the review comments
Fix a BE crash caused by casting decimal to date. The crash was caused by `Unable to find _ZN5doris18DecimalV2Operators16cast_to_date_val`.
Also see #4281
The column types of the materialized view and the base table may be different.
When the mv is selected in the query plan, the type of the slot should be changed to the mv column type.
For example:
base table: k1 int, k2 int
mv table: k1 int, k2 bigint sum
The k2 type of slot ref should be changed from int to bigint.
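A hedged sketch of the mv in the example above (the mv name is hypothetical):
```
-- base table: k1 int, k2 int
-- In the mv, sum(k2) is stored as bigint, so when the mv is selected
-- the slot ref of k2 must change from int to bigint.
CREATE MATERIALIZED VIEW mv_sum_k2 AS
SELECT k1, SUM(k2)
FROM base_table
GROUP BY k1;
```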
Closed #4271
Redesign metrics to 3 layers:
MetricRegistry - MetricEntity - Metrics
MetricRegistry : the register center
MetricEntity : an entity registered on the MetricRegistry. Generally, several MetricEntities can be registered on one
MetricRegistry; each MetricEntity is an independent entity, such as the server, disk_devices, data_directories, thrift
clients and servers, and so on.
Metric : a metric of an entity, such as fragment_requests_total on the server entity, disk_bytes_read on a disk_device entity,
or thrift_opened_clients on a thrift_client entity.
MetricPrototype : the type of a metric. A MetricPrototype is a global variable and can be shared by the same metric across
different MetricEntities.
This PR adds InPredicate support to the DELETE statement,
and adds the max_allowed_in_element_num_of_delete variable to
limit the number of elements of an InPredicate in a DELETE statement.
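A minimal usage sketch (the table, partition, and values are hypothetical):
```
-- Delete rows whose k1 matches any element of the IN list; the number of
-- elements is limited by max_allowed_in_element_num_of_delete.
DELETE FROM example_tbl PARTITION p1 WHERE k1 IN (1, 2, 3);
```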
The new function approx_count_distinct is an alias of the function ndv.
So Doris also needs to rewrite approx_count_distinct to the hll function when it is possible to match an HLL materialized view.
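A minimal sketch (the table and column are hypothetical):
```
-- approx_count_distinct is an alias of ndv, so both queries below can be
-- rewritten to use an HLL materialized view when one matches.
SELECT approx_count_distinct(user_id) FROM visits;
SELECT ndv(user_id) FROM visits;
```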
Fix the calculation of the cumulative point. The problem is that the cumulative point
is calculated incorrectly when the BE restarts and there is a delete rowset. Also see #4258
Support ALTER ROUTINE LOAD JOB stmt, for example:
```
alter routine load db1.label1
properties
(
"desired_concurrent_number"="3",
"max_batch_interval" = "5",
"max_batch_rows" = "300000",
"max_batch_size" = "209715200",
"strict_mode" = "false",
"timezone" = "+08:00"
)
```
Details can be found in `alter-routine-load.md`
Revert “Change type of sum, min, max function column in mv”
This PR reverts PR #4199.
The daily test cored when the type of the mv column was changed,
so this PR reverts it.
The daily core dump will be fixed in the future. After that, PR #4199 will be enabled again.
Change-Id: Ie04fcfacfcd38480121addc5e454093d4ae75181
Use the attachment strategy of brpc to send packets with a big size.
brpc must serialize a packet before sending it.
If we send one batch with a big size, it will encounter a connection failure.
So we can use the attachment strategy to bypass the problem and eliminate
the serialization cost.
Stream load should read all the data completely before parsing the JSON.
Also add a new BE config streaming_load_max_batch_read_mb
to limit the data size when loading JSON data.
Fix the bug of loading an empty JSON array []
Add docs to explain certain cases of loading JSON-format data.
Fixes: #4124
Queries like DELETE FROM tbl WHERE decimal_key <= "123.456";
may fail randomly when the type of decimal_key is DECIMALV2,
because the precision and scale are not initialized.