Commit Graph

65 Commits

afce993ca7 [feature](load)(csv) CSV import and export support header (#8765)
- Add two new types to stream load and broker load: **csv_with_names** and **csv_with_names_and_types**
- Add two new types to export: **csv_with_names** and **csv_with_names_and_types**
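
A hedged sketch of using one of the new formats on the export side (table, path, and broker name are illustrative; the `format` property key is an assumption):

```sql
EXPORT TABLE example_tbl
TO "hdfs://host:port/export_dir/"
PROPERTIES ("format" = "csv_with_names")
WITH BROKER "my_broker";
```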
2022-04-18 15:29:18 +08:00
6cbc5014b9 [doc] update export.md (#8650)
"where" should be in front of "to".
2022-03-28 10:23:53 +08:00
bea9a7ba4f [feature] Support pre-aggregation for quantile type (#8234)
Add a new column-type to speed up the approximation of quantiles.
1. The new column type is named `quantile_state` with the fixed aggregation function `quantile_union`, which stores the intermediate results of pre-aggregated approximate quantile calculations.
2. Support pre-aggregation of the new column type and quantile_state-related functions.
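
A minimal sketch of the new type in a table definition (column names are illustrative, and the `QUANTILE_PERCENT`/`QUANTILE_UNION` query functions are assumptions based on the fixed aggregation function named above):

```sql
CREATE TABLE quantile_tbl
(
    dt DATE,
    id INT,
    qs QUANTILE_STATE QUANTILE_UNION NOT NULL
)
AGGREGATE KEY(dt, id)
DISTRIBUTED BY HASH(id) BUCKETS 1
PROPERTIES ("replication_num" = "1");

-- approximate 95th percentile from the pre-aggregated state
SELECT dt, QUANTILE_PERCENT(QUANTILE_UNION(qs), 0.95) FROM quantile_tbl GROUP BY dt;
```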
2022-03-24 09:11:34 +08:00
011985e7e3 fix en broker load (#8566)
Fix the English broker load documentation.
2022-03-21 22:53:51 +08:00
12bd967846 [doc] Fix some typo about spark load and broker load (#8520)
1. add hive-bitmap-udf link
2. modify preceding-filter
2022-03-19 15:45:17 +08:00
571f0b688d [improvement] show export support label like (#8202)
Use `show export where label like 'xxx%'` to filter the results.
2022-03-15 11:41:59 +08:00
5ab3a8a137 [typo]broker load docs (#8434)
broker load docs
2022-03-13 13:45:26 +08:00
1e70f992e7 [improvement][fix](insert)(replay) support SHOW LAST INSERT stmt and fix json replay bug (#8355)
1. support SHOW LAST INSERT
    In the current implementation, the insert operation returns a JSON string describing the result
    of the insert. But this information is in the session track field of the MySQL protocol
    and is difficult to obtain programmatically.

    Therefore, I provide a new syntax `show last insert` to explicitly obtain the result of the latest insert operation,
    and return a normal query result set to make it easy for the user to obtain the insert result (see the sketch after this list).

2. The `ReturnRows` field in fe.audit.log for an insert operation will be set to the number of rows loaded by the insert.

3.  Fix a bug described in #8354
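
A minimal sketch of the new statement (table and values are illustrative):

```sql
INSERT INTO example_tbl VALUES (1, "a");
SHOW LAST INSERT;
```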
2022-03-08 18:53:11 +08:00
f57f02bbf2 [improvement] Support show tablets stmt (#7970)
change `show tablet from tbl` to `show tablets from tbl`
2022-03-05 15:25:57 +08:00
236105daa0 [feature][show-transaction] Support view transactions info for specified status by SHOW TRANSACTION stmt (#8156)
SHOW TRANSACTION WHERE STATUS = 'prepare/precommitted/committed/visible/aborted';
2022-03-02 10:14:42 +08:00
83521a826a [Feature](create_table) Support create table with random distribution to avoid data skew (#8041)
In some scenarios, users cannot find a suitable hash key to avoid data skew, so we need to provide an additional data distribution for OLAP tables to avoid data skew.

example:

```sql
CREATE TABLE random_table
(
    siteid INT DEFAULT '10',
    citycode SMALLINT,
    username VARCHAR(32) DEFAULT '',
    pv BIGINT SUM DEFAULT '0'
)
AGGREGATE KEY(siteid, citycode, username)
DISTRIBUTED BY random BUCKETS 10
PROPERTIES("replication_num" = "1");
```

Co-authored-by: caiconghui1 <caiconghui1@jd.com>
2022-02-26 10:38:55 +08:00
a630e037b9 [Enhancement](routine_load) Support show routine load statement with like predicate (#8188)
* [Enhancement](routine_load) Support show routine load with like predicate
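
A hedged sketch of the new predicate (the job-name pattern is illustrative):

```sql
SHOW ROUTINE LOAD LIKE "kafka_job%";
```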

Co-authored-by: caiconghui1 <caiconghui1@jd.com>
2022-02-26 10:35:38 +08:00
4c7525cf2c [improvement](show) Support that user can use show data skew statement instead of admin (#7914)
* [improvement](show) Support that user can use show data skew statement instead of admin
This PR mainly does two things:
1. Allow users to use the show data skew statement without requiring admin privileges (see the sketch after this list)
2. Fix FE UT failures caused by PR "[improvement](rewrite) Make RewriteDateLiteralRule to be compatible with mysql" (#7876) and PR "[feature-wip](iceberg) Step1: Support create Iceberg external table" (#7391)
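
A minimal sketch of the non-admin form (table and partition names are illustrative):

```sql
SHOW DATA SKEW FROM example_tbl PARTITION (p1);
```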

Co-authored-by: caiconghui1 <caiconghui1@jd.com>
2022-01-29 10:45:03 +08:00
3b8d48f08b [feature-wip](iceberg) Step1: Support create Iceberg external table (#7391)
Close related #7389

Support creating Iceberg external tables in Doris.

This is the first step to support Iceberg external table.

### Create Iceberg external table
This PR provides two ways to create Iceberg external tables. Neither requires explicitly specifying column definitions; Doris converts them automatically based on Iceberg's column definitions.

1. Create an Iceberg external table directly

```sql
    CREATE [EXTERNAL] TABLE table_name 
    ENGINE = ICEBERG
    [COMMENT "comment"]
    PROPERTIES (
    "iceberg.database" = "iceberg_db_name",
    "iceberg.table" = "icberg_table_name",
    "iceberg.hive.metastore.uris"  =  "thrift://192.168.0.1:9083",
    "iceberg.catalog.type"  =  "HIVE_CATALOG"
    );
```

2. Create an Iceberg database and automatically create all the tables under that db.

```sql
    CREATE DATABASE db_name 
    [COMMENT "comment"]
    PROPERTIES (
    "iceberg.database" = "iceberg_db_name",
    "iceberg.hive.metastore.uris" = "thrift://192.168.0.1:9083",
    "iceberg.catalog.type" = "HIVE_CATALOG"
    );
```

### Show table creation

1. For an individual table, you can view its definition with `show create table` (see `help show create table`).

```sql 
mysql> show create table iceberg_db.logs_1;
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table  | Create Table                                                                                                                                                                                                                                                                                                                                                 |
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| logs_1 | CREATE TABLE `logs_1` (
  `level` varchar(-1) NOT NULL COMMENT "null",
  `event_time` datetime NOT NULL COMMENT "null",
  `message` varchar(-1) NOT NULL COMMENT "null"
) ENGINE=ICEBERG
COMMENT "ICEBERG"
PROPERTIES (
"iceberg.database" = "doris",
"iceberg.table" = "logs_1",
"iceberg.hive.metastore.uris"  =  "thrift://10.10.10.10:9087",
"iceberg.catalog.type"  =  "HIVE_CATALOG"
) |
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```

2. For an Iceberg database, you can view the table creation records with `show table creation` (see `help show table creation`).

```sql
mysql> show table creation from iceberg_db;
+--------+---------+---------------------+---------------------------------------------------------+
| Table  | Status  | Create Time         | Error Msg                                               |
+--------+---------+---------------------+---------------------------------------------------------+
| logs   | fail    | 2021-12-14 13:50:10 | Cannot convert unknown type to Doris type: list<string> |
| logs_1 | success | 2021-12-14 13:50:10 |                                                         |
+--------+---------+---------------------+---------------------------------------------------------+
2 rows in set (0.00 sec)
```

  This is a new syntax for showing table creation records in an Iceberg database.
  
  Syntax:
  ```sql
      SHOW TABLE CREATION [FROM db] [LIKE mask]
  ```
2022-01-27 10:22:47 +08:00
e80c34b6fe [docs][typo] fix some typos in documents (#7769) 2022-01-16 10:43:42 +08:00
a60d86c1e1 [improvement](broker) add disable cache config for broker (#7506) 2021-12-31 16:48:55 +08:00
dc9cd34047 [docs] Add user manual for hdfs load and transaction. (#7497) 2021-12-30 10:22:48 +08:00
0499b2211b [feat](lateral-view) Support execution of lateral view stmt (#7255)
1. Add table function node
2. Add 3 table functions: explode_split, explode_bitmap and explode_json_array
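
A minimal sketch of one of the new table functions (table and column names are illustrative):

```sql
-- expand the comma-separated string column v1 into one row per element
SELECT k1, e1
FROM example_tbl
LATERAL VIEW explode_split(v1, ",") tmp1 AS e1;
```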
2021-12-16 10:46:15 +08:00
be89f0f77e [feat-opt](routine-load) Support show offset lag in show routine load stmt (#7114)
Add a new field `Lag` in result of `show routine load` stmt.

`Lag: {"0":10, "1":0}` means kafka partition 0 has 10 msg behind and partition 1 is update-to-date.
2021-11-18 14:31:16 +08:00
bd25d1a828 [Doc] Add documents for MySQL Binlog Load (#6859)
* add zh-CN docs

* add en docs and image

* fix

* fix
2021-10-19 10:25:42 +08:00
fcd15edbf9 [Export] Support export job with label (#6835)
```
EXPORT TABLE xxx
...
PROPERTIES
(
    "label" = "mylabel",
    ...
);
```

Then the user can use the label to get the job info via the SHOW EXPORT stmt:
```
show export from db where label="mylabel";
```

For compatibility, if no label is specified, a random label will be used. For historical jobs, the label will be "export_job_id".

Unlike the LOAD stmt, we specify the label in `properties` here because this does not cause grammatical conflicts
and there is no need to bump the meta version of the metadata.
2021-10-15 10:18:11 +08:00
0393c9b3b9 [Optimize] Support send batch parallelism for olap table sink (#6397)
* Support send batch parallelism for olap table sink
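
Presumably controlled per session; a hedged sketch assuming the variable is named after the feature:

```sql
SET send_batch_parallelism = 4;
```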

Co-authored-by: caiconghui <caiconghui@xiaomi.com>
2021-08-30 11:03:09 +08:00
708b6c529e [RoutineLoad] Support pause or resume all routine load jobs (#6394)
1. PAUSE ALL ROUTINE LOAD;
2. RESUME ALL ROUTINE LOAD;
2021-08-11 16:38:06 +08:00
748604ff4f [RoutineLoad] Support alter broker list and topic for kafka routine load (#6335)
```
alter routine load for cmy2 from kafka("kafka_broker_list" = "ip2:9094", "kafka_topic" = "my_topic");
```

This is useful when the kafka broker list or topic has been changed.

Also modify `show create routine load` to support showing "kafka_partitions" and "kafka_offsets".
2021-08-03 11:58:38 +08:00
b3a52a05d5 [Update] Support update syntax (#6230)
[Update] Support update syntax

    The current update syntax only supports updating the filtered data of a single table.

    Syntax:

    ```sql
    UPDATE table_reference
        SET assignment_list
        [WHERE where_condition]

    value:
        {expr}

    assignment:
        col_name = value

    assignment_list:
        assignment [, assignment] ...
    ```

    Example:

    ```sql
    UPDATE unique_table
        SET v1 = 1
        WHERE k1 = 1;
    ```

    New Frontend Config: enable_concurrent_update
    This configuration controls whether multiple update stmts can be executed concurrently on one table.
    The default value is false, which means a table can only have one update task executing at a time.
    If users want to update the same table concurrently,
      they need to set the value to true and restart the master frontend.
    Concurrent updates may cause write conflicts with uncertain results, so please be careful.

    The main realization principle:
    1. Read the rows that meet the conditions according to the conditions set by where clause.
    2. Modify the result of the row according to the set clause.
    3. Write the modified row back to the table.

    Some restrictions on the use of update syntax.
    1. Only the unique table can be updated
    2. Only the value column of the unique table can be updated
    3. The where clause currently only supports single tables

    Possible risks:
    1. Since the current implementation is a row-level update,
         concurrent updates to the same table may conflict and produce incorrect results.
    2. If the conditions of the where clause are not selective, a full table scan is likely, affecting query performance.
       Please pay attention to whether the columns in the where clause can match an index.

    [Docs][Update] Add update document and sql-reference

    Fixed #6229
2021-07-27 13:38:15 +08:00
7592f52d2e [Feature][Insert] Add transaction for the operation of insert #6244 (#6245)
## Proposed changes
Add transaction support for the insert operation. It costs far less time than non-transactional inserts (roughly 1/1000 of the time) when you want to insert a large number of rows.
### Syntax

```
BEGIN [ WITH LABEL label];
INSERT INTO table_name ...
[COMMIT | ROLLBACK];
```

### Example
commit a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
commit;
```
rollback a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
rollback;
```
commit a transaction with label:
```
begin with label test_label;
insert into Tbl values(11, 22, 33);
commit;
```

### Description
```
begin:  begin a transaction, the next insert will execute in the transaction until commit/rollback;
commit:  commit the transaction, the data in the transaction will be inserted into the table;
rollback:  abort the transaction, nothing will be inserted into the table;
```
### The main realization principle:
```
1. begin a transaction in the session; the following SQL is executed in the transaction;
2. the insert SQL is parsed to get the database and table names, which are used to select a BE and create a pipe to accept data;
3. all inserted values are sent to the BE and written into the pipe;
4. a thread gets the data from the pipe, then writes it to disk;
5. commit completes the transaction and makes the data visible;
6. rollback aborts the transaction
```

### Some restrictions on the use of the transaction syntax
1. Only ```insert``` can be called in a transaction.
2. If an error occurs, ```commit``` will not succeed; it will ```rollback``` directly.
3. By default, if part of the inserts in the transaction are invalid, ```commit``` will only insert the other, correct data into the table.
4. If you need ```commit``` to fail when any insert in the transaction is invalid, execute ```set enable_insert_strict = true``` before ```begin```.
2021-07-21 10:54:11 +08:00
7c34dbbc5b [Bug-Fix] Fix bug that show view report "Unresolved table reference" error (#6184) 2021-07-15 10:55:15 +08:00
01bef4b40d [Load] Add "LOAD WITH HDFS" model, and make hdfs_reader support hdfs ha (#6161)
Support loading data from HDFS via the `LOAD WITH HDFS` syntax, reading data directly through libhdfs3.
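
A hedged sketch of the new syntax (label, path, and the connection property names are illustrative):

```sql
LOAD LABEL example_db.label_hdfs
(
    DATA INFILE("hdfs://namenode_host:8020/path/to/file.csv")
    INTO TABLE example_tbl
)
WITH HDFS
(
    "username" = "hdfs_user",  -- assumed property name
    "password" = ""
);
```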
2021-07-10 10:11:52 +08:00
d6e6c7815b [Feature] ADD: show create routine load (#6110)
Add show create routine load
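
A minimal sketch (job name is illustrative):

```sql
SHOW CREATE ROUTINE LOAD FOR example_job;
```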
2021-07-04 21:43:25 +08:00
d9c128b744 [BrokerLoad] Support read properties for broker load when reading data (#5845)
* [BrokerLoad] Support read properties for broker load when reading data

Co-authored-by: caiconghui <caiconghui@xiaomi.com>
2021-06-09 14:59:55 +08:00
a29dd42b47 [BUG][Document] Fix the bug that failed to build the help module (#5917)
There are multiple entries with the same key in the help documents, which caused the help module build to fail.
2021-05-27 22:07:15 +08:00
ba69f7a7c8 [Command] [SQL] Add show database/table/partition id command (#5807)
In BE, when a problem happens, we can find the database id, table id, and partition id in the log,
but no database name, table name, or partition name.

In FE, there is also no way to find the database/table/partition name according to the
database/table/partition id. Therefore, this patch adds 3 new commands:
1. show database id;
mysql> show database 10002;
+----------------------+
| DbName               |
+----------------------+
| default_cluster:test |
+----------------------+

2. show table id;
mysql> show table 11100;
+----------------------+-----------+-------+
| DbName               | TableName | DbId  |
+----------------------+-----------+-------+
| default_cluster:test | table2    | 10002 |
+----------------------+-----------+-------+

3. show partition id;
mysql> show partition 11099;
+----------------------+-----------+---------------+-------+---------+
| DbName               | TableName | PartitionName | DbId  | TableId |
+----------------------+-----------+---------------+-------+---------+
| default_cluster:test | table2    | p201708       | 10002 | 11100   |
+----------------------+-----------+---------------+-------+---------+
2021-05-26 09:58:02 +08:00
07ad038870 [Feature][RoutineLoad] Support for consuming kafka from the point of time (#5832)
When creating a Kafka routine load, support starting consumption from a specified point in time instead of a specific offset.
eg:
```
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092",
    "kafka_topic" = "my_topic",
    "property.kafka_default_offsets" = "2021-10-10 11:00:00"
);

or

FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092",
    "kafka_topic" = "my_topic",
    "kafka_partitions" = "0,1,2",
    "kafka_offsets" = "2021-10-10 11:00:00, 2021-10-10 11:00:00, 2021-10-10 12:00:00"
);
```

This PR also reconstructed the analysis method of properties when creating or altering
routine load jobs, and unified the analysis process in the `RoutineLoadDataSourceProperties` class.
2021-05-22 23:37:53 +08:00
12e4ff2689 [Doc] Fix doc for 'SHOW EXPORT' (#5840) 2021-05-19 09:31:57 +08:00
9eacd0a89c [Doc] remove storage_type from docs (#5814) 2021-05-19 09:29:15 +08:00
65ff464e3d [Feature] Support show data order by (#5770)
Currently, `show data` does not support sorting, which becomes inconvenient to manage as the number of tables grows, so sorting support is needed.

like:
```
mysql>  show data order by ReplicaCount desc,Size asc;
+-----------+-------------+--------------+
| TableName | Size        | ReplicaCount |
+-----------+-------------+--------------+
| table_c   | 3.102 KB    | 40           |
| table_d   | .000        | 20           |
| table_b   | 324.000 B   | 20           |
| table_a   | 1.266 KB    | 10           |
| Total     | 4.684 KB    | 90           |
| Quota     | 1024.000 GB | 1073741824   |
| Left      | 1024.000 GB | 1073741734   |
+-----------+-------------+--------------+
```
2021-05-19 09:27:27 +08:00
add8c4bb74 [Load] Support reading multi-line json objects for JsonScanner (#5774)
Co-authored-by: caiconghui <caiconghui@xiaomi.com>
2021-05-18 15:44:45 +08:00
3fdfe0ba6f [Bug-fix] Export specified column (#5759)
A logic error in the code could cause user-specified export columns to not take effect.
This PR fixes the problem.
2021-05-08 10:56:45 +08:00
9001fd28f4 support show stream load sql (#5488)
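A minimal sketch (database name is illustrative; the FROM clause is an assumption by analogy with SHOW LOAD):

```sql
SHOW STREAM LOAD FROM example_db;
```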
Co-authored-by: weizuo <weizuo@xiaomi.com>
2021-04-29 09:20:35 +08:00
de87f4ae84 [Feature] Add list partition support (#5529)
Add list partition support
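
A minimal sketch of a list-partitioned table (names and values are illustrative):

```sql
CREATE TABLE list_part_tbl
(
    city VARCHAR(20) NOT NULL,
    pv BIGINT SUM DEFAULT '0'
)
AGGREGATE KEY(city)
PARTITION BY LIST(city)
(
    PARTITION p_bj VALUES IN ("beijing"),
    PARTITION p_sh VALUES IN ("shanghai")
)
DISTRIBUTED BY HASH(city) BUCKETS 1
PROPERTIES ("replication_num" = "1");
```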
2021-04-24 17:42:27 +08:00
86af8c76a3 [DOC] Add docs of load and export using S3 protocol (#5551)
Add docs of load and export using S3 protocol
2021-03-27 18:58:29 +08:00
64fa305c06 [Doc] correct format errors in English doc (#5487)
Some format errors in the English docs.
The fixes are very straightforward and should not break any existing build.
2021-03-11 22:34:54 +08:00
6cbbc36ea1 [Export] Expand function of export stmt (#5445)
1. Support where clause in export stmt to export only selected rows.

The syntax is as follows:

```
EXPORT TABLE [table name]
    WHERE [expr]
TO xxx
xxxx
```
It will filter table rows.
Only rows that meet the where condition can be exported.

2. Support utf8 separator

3. Support export to local

The syntax is as follows:

```
EXPORT TABLE [table name]
TO (file:///xxx/xx/xx)
```

If the user exports rows to a local path, the broker properties are not required.
The user only needs to create a local folder to store the data and fill in the folder path starting with `file://`.

Change-Id: Ib7e7ece5accb3e359a67310b0bf006d42cd3f6f5
2021-03-11 20:43:32 +08:00
e93a6da0e5 [Doc] correct format errors in English doc (#5321)
Fix some English doc format errors
2021-02-26 11:32:14 +08:00
780900ac9c [Feature] Support preceding filter original data when loading (#5338)
Support conditional filtering of original data in broker load and routine load
eg:

```
LOAD LABEL `label1`
(
DATA INFILE ('bos://cmy-repo/1.csv')
INTO TABLE tbl2
COLUMNS TERMINATED BY '\t'
(event_day, product_id, ocpc_stage, user_id)
SET (
	ocpc_stage = ocpc_stage + 100
)
PRECEDING FILTER user_id = 1381035
WHERE ocpc_stage > 30
)
...
```
2021-02-07 22:37:48 +08:00
de57667d6d [Delete] Support delete with multi partitions (#5252)
Support delete statement like:
1. delete from table partitions(p1, p2) where xxx;  // apply to p1, p2
2. delete from table where xxx;     // apply to all partitions

Also remove code about the deprecated sync/async delete job.

This CL changes FE meta version to 94
2021-01-30 20:33:34 +08:00
ca10205137 [Function] Support show create function statement (#5197)
* [Function]Support show create function stmt
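
A minimal sketch (function signature and database are illustrative):

```sql
SHOW CREATE FUNCTION my_udf(INT) FROM example_db;
```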

Co-authored-by: caiconghui [蔡聪辉] <caiconghui@xiaomi.com>
2021-01-28 10:52:37 +08:00
83b7a23d5c fix alter routine load not work (#5257) 2021-01-20 10:52:02 +08:00
279ae1cb75 Add fuzzy_parse option to speed up json import (#5114)
Add a fuzzy_parse flag: if all JSON objects in the file have the same keys in the same order, we only need to parse the first row, and can then use the index instead of the key to parse values.
2020-12-25 09:19:42 +08:00
d6497fedc4 [Config] Change config name 'streaming_load_max_batch_size_mb' to 'streaming_load_json_max_mb' (#4791)
This name and another config name were too similar to distinguish,
so this PR renames it.
The documentation has also been updated accordingly.
2020-10-28 23:27:33 +08:00