Commit Graph

1602 Commits

Author SHA1 Message Date
acc5fd2f21 [BUG] Fix string type cast bug and a runtime filter crash when AVX2 is not supported (#6495)
* Fix string type cast bug and a runtime filter crash when the required instructions (AVX2) are not supported

* Add ARM support
2021-08-26 09:14:31 +08:00
92e50504e5 [Feature] Supports case-insensitive table names. (#6403)
Implement MySQL's lower_case_table_names variable. The values mean the following:
0: table names are case-sensitive.
1: table names are stored in lowercase and compared case-insensitively.
2: table names are stored as given but compared case-insensitively.
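A quick, hedged illustration of checking the current setting; this assumes the variable is exposed like other MySQL-compatible variables, which the commit message does not show:
```
-- Assumption: lower_case_table_names is visible via SHOW VARIABLES.
SHOW VARIABLES LIKE 'lower_case_table_names';
```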
2021-08-25 22:34:45 +08:00
96013decd3 [BUG] Fixed inconsistency between the materialized number of resultExprs/constExprs and the output slots of Union Node (#6380) 2021-08-25 22:33:49 +08:00
fa290383dc [Doc] Modify README to add some statistical indicators (#6486)
1. Add license, total lines, and release badges.
2. Add monthly active contributor and contributor growth graphs.
3. Fix a pom.xml bug.
4. Modify some routine load logs on the BE side.
2021-08-25 09:36:26 +08:00
7e30b28f3a [Optimize] Speed up converting data of other types to string in mysql_result_writer (#6384)
Co-authored-by: caiconghui <caiconghui@xiaomi.com>
2021-08-24 22:30:58 +08:00
6c23f8d413 [Bug] Fix a bug where the checkpoint thread failed to load the image in some circumstances (#6465)
Fix a bug where the checkpoint thread failed to load the image in some circumstances
2021-08-19 14:17:57 +08:00
52f39e3fde [Bug][SparkLoad]: BitmapValue in the OR operator in Spark Load should be deep copied (#6453)
Fix multiple rollups holding the same reference to a BitmapValue, which may be updated repeatedly.
Fixes #6452
2021-08-19 14:17:31 +08:00
fa382f8602 [Bug][MemLimit] Modify the memory limit of storage page cache (#6451)
This CL mainly changes:

1. the `storage_page_cache_limit` is based on config `mem_limit`

    the default is 20% of `mem_limit`. 

2. the `buffer_pool_limit` is based on config `mem_limit`

    the default is 20% of `mem_limit`. 

3. the `buffer_pool_clean_pages_limit` is based on config `buffer_pool_limit`

    the default is 50% of `buffer_pool_limit`

4. Fix some display bugs of the LRU cache hit ratio and usage ratio
5. Fix a create-view bug: `notEvalNondeterministicFunction` should be reset after analyze.
2021-08-19 14:16:53 +08:00
66a7a4b294 [Feature] Support exact percentile aggregate function (#6410)
Support calculating the exact percentile value array of a numeric column `col` at the given percentage(s).
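A minimal usage sketch; the table and column names are hypothetical, and the function name `percentile` is assumed from the commit title:
```
-- Hypothetical table and column; exact 95th and 99th percentiles.
SELECT percentile(latency_ms, 0.95), percentile(latency_ms, 0.99)
FROM request_log;
```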
2021-08-18 15:56:06 +08:00
8738ce380b Add long text type STRING, with a maximum length of 2GB. Usage is similar to VARCHAR, and performance is not guaranteed when storing extremely long data (#6391) 2021-08-18 09:05:40 +08:00
63a0d9d23a Add statistics struct and Support manually inject statistics (#6420)
* Add statistics struct and Support manually inject statistics

This PR mainly adds the data structures used for statistics
and the ability to manually modify statistics.
Statistics live in a dedicated statistics package,
and the 'statistics manager' serves as the unified entry point for statistics.
For the detailed data structures and explanations, please refer to the class comments.

Manual statistics modification covers both table statistics and column statistics.
The syntax is explained in issue #6370.

* Show table and column statistics

'SHOW TABLE STATS' shows the statistics of a table.
'SHOW COLUMN STATS' shows the statistics of columns.

Currently, only tables and columns whose statistics have been set
will be displayed in the results.
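Illustrative invocations of the two new statements; the table name `tbl1` is hypothetical:
```
SHOW TABLE STATS tbl1;
SHOW COLUMN STATS tbl1;
```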
2021-08-16 17:20:05 +08:00
2030c44dba [Log] Modify some log level on BE side (#6381) 2021-08-14 10:25:45 +08:00
5e6f1b89da [Feature] Support sql block rule (#6192)
Support grammar:
- SHOW SQL_BLOCK_RULE [FOR NAME]
- CREATE SQL_BLOCK_RULE test_rule PROPERTIES ("user"="default", "sql"="select .* from .* join .*", "enable"="true");
- ALTER SQL_BLOCK_RULE test_rule PROPERTIES ("user"="test_user", "enable": "false");
- DROP SQL_BLOCK_RULE test_rule1,test_rule2;
2021-08-13 21:56:34 +08:00
671d8f6af8 [Bug] Return error if user failed to pause/resume a certain routine load. (#6426)
When operating a single job, the behavior remains the same as before.
This problem was introduced by #6394.
2021-08-12 11:50:57 +08:00
a7d620c359 Fix issue #6390: wrong result when querying a view with an added null column (#6395) 2021-08-12 10:08:58 +08:00
Pxl
8a267f1ac5 [Feature] Support for cleaning the trash actively (#6323) 2021-08-12 10:07:51 +08:00
047e31d977 [Repository] Normalize the path of file on repository (#6402) 2021-08-12 09:41:34 +08:00
8ca86824df Fix compile error (#6425)
Change-Id: If022dd00f00772166096483ee1d82f2cd34e0dec

Co-authored-by: qijianliang01 <qijianliang01@baidu.com>
2021-08-12 09:34:28 +08:00
Pxl
9a6e53a7f8 [Bug] Fix wrong title in 'SHOW TRASH' (#6407) 2021-08-11 16:39:21 +08:00
3fa3dfbeda [Bug][Fold constant] remove reanalyze in get constant expr (#6400)
fix #6399
2021-08-11 16:38:30 +08:00
708b6c529e [RoutineLoad] Support pause or resume all routine load jobs (#6394)
1. PAUSE ALL ROUTINE LOAD;
2. RESUME ALL ROUTINE LOAD;
2021-08-11 16:38:06 +08:00
7e93405df3 [Alter] Support alter table and column's comment (#6387)
1. alter table tbl1 modify comment "new comment";
2. alter table tbl1 modify column k1 comment "k1", modify column v1 comment "v1";
2021-08-11 16:37:42 +08:00
9216735cfa [New Feature] Support Vectorized Execution Engine Interface for Doris (#6329)
1. FE vectorized plan code
2. Register vectorized functions in the function registry
3. Handle different nullable types for functions
4. New third-party code and new Thrift structs
2021-08-11 14:54:06 +08:00
10f410f1c3 [Improvement] Improve metrics text format for FE (#6382) (#6383)
Fix #6382
2021-08-11 10:26:19 +08:00
0930e89452 [http][manager] Add manager-related HTTP interfaces. (#6396)
Encapsulate some HTTP interfaces for better management and maintenance of Doris clusters.

The HTTP interfaces include getting cluster connection information, node information, and node configuration information, batch-modifying node configuration, and getting query profiles.

For details, please refer to the document:  
`docs/zh-CN/administrator-guide/http-actions/fe/manager/`
2021-08-10 10:58:31 +08:00
Qi
5f7c7ce743 [Bug][Cache] Map.get with cache key real value. (#6377) 2021-08-10 10:14:46 +08:00
bf616dcb8f [Config] Add default configuration of load_parallelism (#6290)
- Make load_parallelism configurable.
- Different clusters should be configured with different load_parallelism values.
- Some users don't know how to set load_parallelism or what the best value is; see the illustrative usage below.
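A hedged sketch of how `load_parallelism` might be specified on a broker load job; only the property name comes from this commit, and the rest of the statement (label, path, broker name) is illustrative:
```
-- Hypothetical load job; only "load_parallelism" is named in the commit.
LOAD LABEL example_db.label1
(
    DATA INFILE("hdfs://host:port/path/file.txt")
    INTO TABLE tbl1
)
WITH BROKER "broker1"
PROPERTIES
(
    "load_parallelism" = "2"
);
```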
2021-08-10 10:11:46 +08:00
Pxl
236e0f1eda [Feature] Support querying the used capacity of trash (#6247)
Support querying the used capacity of trash.

```
SHOW TRASH [ON ...]
```

Now users can proactively scan the trash directory.
2021-08-10 10:10:47 +08:00
c8c571af37 [New Feature] Support synchronizing MySQL binlog in real time [stage 1] (#6289)
This commit is the first stage of #6287 

In this commit, we support:
1. Sync job
   1) Creating a sync job and data channel in FE.
   2) Pausing a sync job.
   3) Resuming a sync job.
   4) Stopping a sync job.
   5) Showing sync jobs.

2. Canal
   1) Subscribing to and receiving canal binlog data when a sync job is created; see the hedged sketch below.
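A hedged sketch of the job-control statements this stage introduces; the statement forms and the job name are assumptions, not confirmed by the commit message:
```
-- Hypothetical job name; statement forms are assumed.
PAUSE SYNC JOB job1;
RESUME SYNC JOB job1;
STOP SYNC JOB job1;
SHOW SYNC JOB;
```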
2021-08-08 21:39:34 +08:00
882242ed15 [Enhance][Fold constant] Support folding constants in InlineView by BE (#6393)
Add support for folding constants in InlineView by BE.
2021-08-07 21:34:02 +08:00
36fe112eb7 [BUG] Fix query failure caused by using case-insensitive system view names in information_schema. (#6374)
The system view names in information_schema are case-insensitive,
but we should not refer to one of these using different cases within the same statement.  

The following SQL is correct:
```
select * from information_schema.TAbles where TAbles.ENGINE = 'Doris';
```

The following SQL is wrong because both `TAbles` and `tables` are used:
```
select * from information_schema.TAbles order by tables.CREATE_TIME;
```
2021-08-07 21:33:29 +08:00
c6aa37f5ef [Alter] Support doing compaction for tablets under alter operation (#6365)
The problem I want to solve is described in #6355.
This CL mainly changes:

1. Support compacting tablets under alter operations

   On the BE side, the compaction logic will select tablets whose state is "TABLET_NOTREADY" to do cumulative compaction.

2. Remove the "alter_task" field from the tablet's meta on the BE side.

   The "alter_task" field has not been used for a long time.

3. Support doing delete operation when table is doing alter operation.

   Previously, when a table was undergoing an alter operation, executing a delete would return the error: Table's state is not NORMAL.
   Now, a delete can be executed successfully provided that the condition column is not under schema change,
   and the delete condition will be applied to all materialized indexes.
2021-08-07 21:32:26 +08:00
70825ce846 [Feature] Support alias function (#6261)
Implement #6260.

Add alias function type.
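A hedged sketch of what defining and using an alias function might look like; the syntax, function name, and expression are assumptions, since the commit message does not show them:
```
-- Hypothetical alias function wrapping a common expression.
CREATE ALIAS FUNCTION id_masking(INT) WITH PARAMETER(id)
    AS CONCAT(LEFT(id, 3), '****', RIGHT(id, 4));

SELECT id_masking(13866668888);
```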
2021-08-07 21:29:13 +08:00
21f94c5d6c [BUG][ARRAY] Fix array bound bug (#6347)
fix #6346
2021-08-05 14:34:12 +08:00
2823e4daba [Feature] Support SHOW DATA SKEW stmt (#6219)
SHOW DATA SKEW FROM tbl PARTITION(p1)

to view the data distribution of a specified partition

```
mysql> admin show data skew from tbl1 partition(tbl1);
+-----------+-------------+-------+---------+
| BucketIdx | AvgDataSize | Graph | Percent |
+-----------+-------------+-------+---------+
| 0         | 0           |       | 100.00% |
+-----------+-------------+-------+---------+
1 row in set (0.01 sec)
```

Also modify the result of `admin show replica distribution` to add replica size distribution:

```
mysql> admin show replica distribution from tbl1 partition(tbl1);
+-----------+------------+-------------+----------+------------+-----------+-------------+
| BackendId | ReplicaNum | ReplicaSize | NumGraph | NumPercent | SizeGraph | SizePercent |
+-----------+------------+-------------+----------+------------+-----------+-------------+
| 10002     | 1          | 0           | >        | 100.00%    |           | 100.00%     |
+-----------+------------+-------------+----------+------------+-----------+-------------+
```
2021-08-05 14:05:41 +08:00
a16ad3fccd [BUG] Fix a bug in the FE metric HTTP RESTful API (#6364)
Fix a bug in the FE metric HTTP RESTful API. See #6363.
2021-08-05 00:16:55 +08:00
866814dc47 [Improve] Add FE function timestamp (#6339)
* Add FE function timestamp
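A minimal illustration of the kind of call that can now be evaluated on the FE; the literal is arbitrary:
```
SELECT TIMESTAMP('2021-08-04 11:52:46');
```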
2021-08-04 11:52:46 +08:00
2c208e932b [Bug][RoutineLoad] Avoid TOO_MANY_TASKS error (#6342)
Use `commitAsync` to commit offsets to Kafka instead of `commitSync`, which may block for a long time.
Also assign a group.id to the routine load job if the user did not specify the "property.group.id" property, so that all consumers of
this job use the same group.id instead of a random id for each consume task; an illustrative clause is sketched below.
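A hedged sketch of where "property.group.id" would be set; only the property name comes from the commit message, and the job definition around it is hypothetical:
```
-- Hypothetical routine load job; only "property.group.id" is named in the commit.
CREATE ROUTINE LOAD example_db.job1 ON tbl1
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092",
    "kafka_topic" = "my_topic",
    "property.group.id" = "doris_job1_group"
);
```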
2021-08-03 11:59:06 +08:00
748604ff4f [RoutineLoad] Support alter broker list and topic for kafka routine load (#6335)
```
alter routine load for cmy2 from kafka("kafka_broker_list" = "ip2:9094", "kafka_topic" = "my_topic");
```

This is useful when the Kafka broker list or topic has been changed.

Also modify `show create routine load` to support showing "kafka_partitions" and "kafka_offsets".
2021-08-03 11:58:38 +08:00
a2eb1b9d5b fix version (#6334) 2021-07-30 09:25:22 +08:00
9ca369aa58 [Feature][LDAP] Add LDAP authentication login and LDAP group authorization support. (#6333)
* [Feature][LDAP] Add LDAP authentication login and LDAP group authorization support.

* Update docs/.vuepress/sidebar/en.js

Co-authored-by: Mingyu Chen <morningman.cmy@gmail.com>
2021-07-30 09:24:50 +08:00
14db74fac6 [Bug] Fix show load like match (#6314)
* Fix SHOW LOAD LIKE matching

* Keep compatibility with historical behavior; an example of the affected statement is below.
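An illustrative statement of the kind affected by this fix; the label pattern is hypothetical:
```
SHOW LOAD WHERE LABEL LIKE "broker_load_2021%";
```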
2021-07-30 09:24:06 +08:00
60ac4a9660 [Bug][SparkLoad] Fix bucket_hash_value for bool value (#6284)
Co-authored-by: weixiang <weixiang06@meituan.com>
2021-07-27 13:38:42 +08:00
b3a52a05d5 [Update] Support update syntax (#6230)
[Update] Support update syntax

    The current update syntax only supports updating the filtered data of a single table.

    Syntax:

     UPDATE table_reference
         SET assignment_list
         [WHERE where_condition]

     value:
         {expr}

     assignment:
         col_name = value

     assignment_list:
         assignment [, assignment] ...

    Example
    Update unique_table
         set v1=1
         where k1=1

    New Frontend Config: enable_concurrent_update
    This configuration controls whether multiple update statements can be executed concurrently on one table.
    The default value is false, which means a table can have only one update task executing at a time.
    If users want to update the same table concurrently,
      they need to change the value to true and restart the master frontend.
    Concurrent updates may cause write conflicts and the result is uncertain, so please be careful.

    The main realization principle:
    1. Read the rows that meet the conditions according to the conditions set by where clause.
    2. Modify the result of the row according to the set clause.
    3. Write the modified row back to the table.

    Some restrictions on the use of the update syntax:
    1. Only the unique table can be updated
    2. Only the value columns of the unique table can be updated
    3. The where clause currently only supports a single table

    Possible risks:
    1. Since the current implementation updates row by row,
         concurrent updates to the same table may conflict and produce incorrect results.
    2. If the conditions of the where clause cannot use an index, a full table scan is likely, which affects query performance.
       Please check whether the columns in the where clause can match an index.

    [Docs][Update] Add update document and sql-reference

    Fixed #6229
2021-07-27 13:38:15 +08:00
f26e3408b2 [Profile] Support show load profile for broker load job (#6214)
1. Add a new statement:
   `SHOW LOAD PROFILE "xxx";`

2. Improve the read performance of the ORC scanner
2021-07-27 13:37:34 +08:00
b1f5979103 [New Feature][Meta][Image] Add file header and footer for image (#6207)
#6206 

At present, our image file does not have a file header/footer. When we need to change the image format (such as adding different journal versions to the image), there is no way to distinguish between different image formats.

Therefore, we suggest adding a file header and footer to the image. With the new image format, we can freely distinguish and define different ways of reading the image.

The format of the image is as follows:

```
/**
 * Image Format:
 * |- Image --------------------------------------|
 * | - Magic String (4 bytes)                     |
 * | - Header Length (4 bytes)                    |
 * | |- Header -----------------------------|     |
 * | | |- Json Header ---------------|      |     |
 * | | | - version                   |      |     |
 * | | | - other key/value(undecided)|      |     |
 * | | |-----------------------------|      |     |
 * | |--------------------------------------|     |
 * |                                              |
 * | |- Image Body -------------------------|     |
 * | | Object a                             |     |
 * | | Object b                             |     |
 * | | ...                                  |     |
 * | |--------------------------------------|     |
 * |                                              |
 * | |- Footer -----------------------------|     |
 * | | | - Checksum (8 bytes)               |     |
 * | | |- object index --------------|      |     |
 * | | | - index a                   |      |     |
 * | | | - index b                   |      |     |
 * | | | ...                         |      |     |
 * | | |-----------------------------|      |     |
 * | | - other value(undecided)             |     |
 * | |--------------------------------------|     |
 * | - Footer Length (8 bytes)                    |
 * | - Magic String (4 bytes)                     |
 * |----------------------------------------------|
 */
```
1. Magic Number
One image format is identified by one magic string and one version field. The magic string is saved in the first 4 bytes and the last 4 bytes of the image.

2. Image Header:
The version is now saved in the header in JSON format.

3. Image Body:
Equal to the original image.

4. Image Footer:
The image footer stores the file offsets (indexes) of the image objects. If necessary, we can read individual objects in the image via the footer.
2021-07-27 13:36:53 +08:00
e3db773149 [UT] Fix MaterializedViewFunctionTest run failed (#6294)
Co-authored-by: caiconghui <caiconghui@xiaomi.com>
2021-07-26 09:42:39 +08:00
75d954ced5 [Feature] Modify the cost evaluation algorithm of Broadcast and Shuffle Join (#6274)
issue #6272

After modifying the hash table memory cost estimation algorithm, TPC-DS query 72 and similar queries can pass, avoiding OOM.
2021-07-26 09:38:41 +08:00
7771233f30 [BUG] Using the GROUPING function together with ORDER BY reports a NullPointerException (#6266)
fix #6265

The reason for the error is that the `Grouping Func Exprs` are substituted twice. In the first substitution, `VirtualSlotRef` replaces the original `SlotRef`; in the second substitution, `VirtualSlotRef` hits a null pointer in `getTable()`:
```
} else if (((SlotRef) child).getDesc().getParent().getTable().getType()
```
In the first substitution, the list of exprs in the select clause has already been substituted.
```
groupingInfo = new GroupingInfo(analyzer, groupByClause.getGroupingType());
            groupingInfo.substituteGroupingFn(resultExprs, analyzer);
```
In the second substitution, we actually only need to substitute the ordering exprs that do not appear in the select list.
```
createSortInfo(analyzer);
        if (sortInfo != null && CollectionUtils.isNotEmpty(sortInfo.getOrderingExprs())) {
            if (groupingInfo != null) {
                groupingInfo.substituteGroupingFn(sortInfo.getOrderingExprs(), analyzer);
            }
        }
```
Change it to:
```
createSortInfo(analyzer);
        if (sortInfo != null && CollectionUtils.isNotEmpty(sortInfo.getOrderingExprs())) {
            if (groupingInfo != null) {
                List<Expr> orderingExprNotInSelect = sortInfo.getOrderingExprs().stream()
                        .filter(item -> !resultExprs.contains(item)).collect(Collectors.toList());
                groupingInfo.substituteGroupingFn(orderingExprNotInSelect, analyzer);
            }
        }
```
2021-07-21 12:31:20 +08:00
7592f52d2e [Feature][Insert] Add transaction for the operation of insert #6244 (#6245)
## Proposed changes
Add a transaction for the insert operation. It costs much less time than a non-transactional insert (roughly 1/1000 of the time) when you want to insert a large number of rows.
### Syntax

```
BEGIN [ WITH LABEL label];
INSERT INTO table_name ...
[COMMIT | ROLLBACK];
```

### Example
commit a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
commit;
```
rollback a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
rollback;
```
commit a transaction with label:
```
begin with label test_label;
insert into Tbl values(11, 22, 33);
commit;
```

### Description
```
begin:  begin a transaction; the following inserts will execute in the transaction until commit/rollback;
commit:  commit the transaction; the data in the transaction will be inserted into the table;
rollback:  abort the transaction; nothing will be inserted into the table;
```
### The main realization principle:
```
1. begin a transaction in the session; the following sql statements are executed in the transaction;
2. the insert sql is parsed to get the database name and table name, which are used to select a BE and create a pipe to accept data;
3. all inserted values are sent to the BE and written into the pipe;
4. a thread gets the data from the pipe and writes it to disk;
5. commit completes the transaction and makes the data visible;
6. rollback aborts the transaction
```

### Some restrictions on the use of the transaction syntax.
1. Only ```insert``` can be called in a transaction.
2. If an error occurs, ```commit``` will not succeed; the transaction will be rolled back directly;
3. By default, if some inserts in the transaction are invalid, ```commit``` will only insert the remaining correct data into the table.
4. If you need ```commit``` to fail when any insert in the transaction is invalid, execute ```set enable_insert_strict = true``` before ```begin```.
2021-07-21 10:54:11 +08:00