Commit Graph

24 Commits

Author SHA1 Message Date
3fdfe0ba6f [Bug-fix] Export specified column (#5759)
A logic error could cause the columns that the user specifies for export to not take effect.
This PR fixes the problem.
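A minimal sketch of specifying export columns; the `columns` property name, table, and broker settings here are assumptions for illustration, not taken from the PR:

```
-- Hypothetical example: export only selected columns of a table.
-- The "columns" property name is an assumption.
EXPORT TABLE db1.tbl1
TO "hdfs://host:port/path/to/export/"
PROPERTIES
(
    "column_separator" = ",",
    "columns" = "k1,k2,v1"
)
WITH BROKER "broker_name" ("username" = "user", "password" = "passwd");
```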
2021-05-08 10:56:45 +08:00
9001fd28f4 support show stream load sql (#5488)
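A sketch of how the new statement might be used; the optional FROM, WHERE, and LIMIT clauses are assumptions modeled on other SHOW statements:

```
-- Hypothetical usage of the statement added by this PR.
SHOW STREAM LOAD FROM db1 WHERE LABEL = "label1" LIMIT 10;
```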
Co-authored-by: weizuo <weizuo@xiaomi.com>
2021-04-29 09:20:35 +08:00
de87f4ae84 [Feature] Add list partition support (#5529)
Add list partition support
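A minimal sketch of a list-partitioned table, assuming the `PARTITION BY LIST ... VALUES IN` syntax; table, column, and partition names are made up:

```
-- Hypothetical list-partitioned aggregate table.
CREATE TABLE example_db.user_visits
(
    city VARCHAR(20),
    user_id BIGINT,
    visits BIGINT SUM DEFAULT "0"
)
AGGREGATE KEY(city, user_id)
PARTITION BY LIST (city)
(
    PARTITION p_cn VALUES IN ("Beijing", "Shanghai"),
    PARTITION p_us VALUES IN ("New York")
)
DISTRIBUTED BY HASH(user_id) BUCKETS 8
PROPERTIES ("replication_num" = "1");
```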
2021-04-24 17:42:27 +08:00
86af8c76a3 [DOC] Add docs of load and export using S3 protocol (#5551)
Add docs of load and export using S3 protocol
2021-03-27 18:58:29 +08:00
6cbbc36ea1 [Export] Expand function of export stmt (#5445)
1. Support a WHERE clause in the export statement so that only selected rows are exported.

The syntax is as follows:

Export table [table name]
    where [expr]
To xxx
xxxx

The table rows are filtered so that only rows matching the WHERE condition are exported.

2. Support UTF-8 separators.

3. Support exporting to the local file system.

The syntax is as follows:

Export table [table name]
To (file:///xxx/xx/xx)

When exporting rows to the local file system, broker properties are not required.
The user only needs to create a local folder to store the data and specify the folder path starting with `file://`.
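Putting the pieces together, a hypothetical export with a WHERE filter to a local directory might look like this (table, column, and path are made up):

```
-- Hypothetical example combining the WHERE clause and local export.
EXPORT TABLE db1.tbl1
WHERE event_day >= "2021-03-01"
TO "file:///home/disk1/export/"
PROPERTIES ("column_separator" = ",");
```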

Change-Id: Ib7e7ece5accb3e359a67310b0bf006d42cd3f6f5
2021-03-11 20:43:32 +08:00
780900ac9c [Feature] Support preceding filter original data when loading (#5338)
Support conditional filtering of original data in broker load and routine load
For example:

```
LOAD LABEL `label1`
(
DATA INFILE ('bos://cmy-repo/1.csv')
INTO TABLE tbl2
COLUMNS TERMINATED BY '\t'
(event_day, product_id, ocpc_stage, user_id)
SET (
	ocpc_stage = ocpc_stage + 100
)
PRECEDING FILTER user_id = 1381035
WHERE ocpc_stage > 30
)
...
```
2021-02-07 22:37:48 +08:00
de57667d6d [Delete] Support delete with multi partitions (#5252)
Support delete statements such as:
1. delete from table partitions(p1, p2) where xxx;  // apply to p1, p2
2. delete from table where xxx;     // apply to all partitions

Also remove code about the deprecated sync/async delete job.

This CL changes FE meta version to 94
2021-01-30 20:33:34 +08:00
ca10205137 [Function] Support show create function statement (#5197)
* [Function] Support show create function stmt
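A sketch of the statement this PR adds; the function name, argument types, and FROM clause are assumptions for illustration:

```
-- Hypothetical usage of SHOW CREATE FUNCTION.
SHOW CREATE FUNCTION my_add(INT, INT) FROM example_db;
```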

Co-authored-by: caiconghui [蔡聪辉] <caiconghui@xiaomi.com>
2021-01-28 10:52:37 +08:00
83b7a23d5c fix alter routine load not work (#5257) 2021-01-20 10:52:02 +08:00
279ae1cb75 Add fuzzy_parse option to speed up json import (#5114)
Add a fuzzy_parse flag: if all JSON objects in the file have the same keys in the same order, we only need to parse the keys of the first row, and then parse the values of later rows by index instead of by key.
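A sketch of how the flag might be enabled for a JSON routine load job; treating `fuzzy_parse` as a job property, and the Kafka settings, are assumptions:

```
-- Hypothetical routine load job enabling fuzzy_parse for uniform JSON rows.
CREATE ROUTINE LOAD example_db.job_json ON tbl1
PROPERTIES
(
    "format" = "json",
    "strip_outer_array" = "true",
    "fuzzy_parse" = "true"
)
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092",
    "kafka_topic" = "topic_json"
);
```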
2020-12-25 09:19:42 +08:00
6673306fda [DOC] fix toSql of ShowPartitionsStmt (#5070) 2020-12-19 11:18:00 +08:00
d6497fedc4 [Config] Change config name 'streaming_load_max_batch_size_mb' to 'streaming_load_json_max_mb' (#4791)
The old name was too similar to another config name and easy to confuse, so this PR renames it.
The documentation has been updated accordingly.
2020-10-28 23:27:33 +08:00
0475aa9b93 [Bug]Fix delete on clause may not work in routineLoad (#4683)
Fix the DELETE ON clause not working in some cases, as described in #4682.
2020-09-30 09:56:19 +08:00
068707484d Support sequence column for UNIQUE_KEYS Table (#4256)
* Add sequence column
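A sketch of a UNIQUE_KEYS table with a sequence column; the `function_column.sequence_type` property name and the schema are assumptions for illustration:

```
-- Hypothetical UNIQUE_KEYS table with a sequence column.
CREATE TABLE example_db.user_last_visit
(
    user_id BIGINT,
    visit_date DATE,
    city VARCHAR(20)
)
UNIQUE KEY(user_id)
DISTRIBUTED BY HASH(user_id) BUCKETS 8
PROPERTIES
(
    "replication_num" = "1",
    "function_column.sequence_type" = "Date"
);
```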

Co-authored-by: yangwenbo6 <yangwenbo3@jd.com>
2020-09-04 10:10:17 +08:00
ffe696d17c [Doc] Add spark load sql statement doc and update manual (#4463)
1. Add the SQL statement in the DML docs.
2. Update the Spark Load manual.
2020-08-30 21:09:17 +08:00
174c9f89ea [DOCS] Add batch delete docs (#4435)
update documents for batch delete #4051
2020-08-28 09:24:07 +08:00
eefad13107 [Feature] Support InPredicate in delete statement (#4006)
This PR adds InPredicate support to the delete statement,
and adds the max_allowed_in_element_num_of_delete variable to
limit the number of elements of an InPredicate in a delete statement.
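A sketch of the resulting capability; the table, partition, values, and limit below are made up:

```
-- Hypothetical delete using an IN predicate.
DELETE FROM example_db.tbl1 PARTITION p1
WHERE k1 IN (1, 2, 3) AND k2 = "a";

-- Raise the element limit (assuming it is a session variable; value is arbitrary).
SET max_allowed_in_element_num_of_delete = 1024;
```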
2020-08-06 23:19:40 +08:00
237c0807a4 [RoutineLoad] Support modify routine load job (#4158)
Support ALTER ROUTINE LOAD JOB stmt, for example:

```
alter routine load db1.label1
properties
(
"desired_concurrent_number"="3",
"max_batch_interval" = "5",
"max_batch_rows" = "300000",
"max_batch_size" = "209715200",
"strict_mode" = "false",
"timezone" = "+08:00"
)
```

Details can be found in `alter-routine-load.md`
2020-08-06 23:11:02 +08:00
fdcc223ad2 [Bug][Json] Refactor the json load logic to fix some bug
1. Add `json_root` for nested JSON data.
2. Remove `_jmap` to make the logic more reasonable.
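A sketch of how `json_root` might be used to point at nested data; the JSONPath value and Kafka settings are made up:

```
-- Hypothetical routine load reading rows from a nested JSON field.
CREATE ROUTINE LOAD example_db.job_nested ON tbl1
PROPERTIES
(
    "format" = "json",
    "json_root" = "$.data",
    "strip_outer_array" = "true"
)
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092",
    "kafka_topic" = "topic_nested"
);
```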
2020-07-30 10:36:34 +08:00
c3d9feed75 [Load][Json] Refactor json load logic to make it more reasonable (#4020)
This CL mainly changes:

1. Reorganized the code logic to limit the supported JSON formats to two, making the import behavior more consistent.
2. Modified how error rows are counted when loading data in JSON format, so that error rows are counted correctly.
3. See `load-json-format.md` for details on loading data in JSON format.
2020-07-07 23:07:28 +08:00
77b9acc242 [Stmt] Add rowCount column to SHOW DATA stmt (#3676)
Users can see the row count of all materialized indexes of a table.

```
mysql> show data from test;
+-----------+-----------+-----------+--------------+----------+
| TableName | IndexName | Size      | ReplicaCount | RowCount |
+-----------+-----------+-----------+--------------+----------+
| test2     | r1        | 10.000MB  | 30           | 10000    |
|           | r2        | 20.000MB  | 30           | 20000    |
|           | test2     | 50.000MB  | 30           | 50000    |
|           | Total     | 80.000    | 90           |          |
+-----------+-----------+-----------+--------------+----------+
```

Fix #3675
2020-05-26 15:53:38 +08:00
ef8fd1fcbe [Load] Support load json-data into Doris by RoutineLoad or StreamLoad (#3553)
Doris supports loading JSON data via RoutineLoad or StreamLoad.
2020-05-21 13:00:49 +08:00
f591976976 [Doc] Fix the incorrect docs (#3501) 2020-05-08 12:47:00 +08:00
432965e360 [Enhancement] documents rebuild with Vuepress (#3408) (#3414) 2020-04-29 09:14:31 +08:00