Commit Graph

117 Commits

SHA1 Message Date
2fd2b714c1 Add aggregate function doc (#1434) 2019-07-11 16:45:45 +08:00
941dec215b Add utc_timestamp function (#1456) 2019-07-11 11:09:08 +08:00
98bd4b4565 Add string function split_part (#1451) 2019-07-10 09:47:33 +08:00
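A hedged usage sketch of the two functions added above (output values illustrative, not from the source):
```sql
SELECT utc_timestamp();                    -- current UTC datetime, e.g. 2019-07-11 03:09:08
SELECT split_part("hello world", " ", 1);  -- first field split by " ", returns "hello"
```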
4989f7bfe3 Fix spelling mistake in docs (#1435) 2019-07-07 11:55:51 +08:00
25e092f92a Fix broken image link in docs (#1436) 2019-07-07 10:51:31 +08:00
7eab12a40e Support reading Parquet file when loading data (#1173) 2019-07-01 18:39:27 +08:00
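A sketch of what loading a Parquet file might look like after #1173, assuming the broker load FORMAT AS clause (paths, database, table, and broker names are hypothetical):
```sql
LOAD LABEL example_db.parquet_load_label
(
    DATA INFILE("hdfs://host:port/path/to/file.parquet")
    INTO TABLE example_tbl
    FORMAT AS "parquet"   -- tell the loader the source file is Parquet
)
WITH BROKER "broker_name";
```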
8db97998ba Collect all documents to Doris code base (#1414) 2019-07-01 09:23:13 +08:00
756a680143 Add a website builder of Doris documentations (#1396)
The build script is located in docs/website.
Built with Sphinx using a theme provided by Read the Docs.
2019-06-26 19:10:39 +08:00
566e122c0d Optimize Export feature (#1378)
1. Add a 'timeout' property to the Export stmt.
2. Add more info to the 'show export' stmt.
3. Add more logs for debugging.
2019-06-26 00:20:53 +08:00
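A minimal sketch of the new 'timeout' property in an Export stmt (object names and the seconds unit are assumptions):
```sql
EXPORT TABLE example_db.example_tbl
TO "hdfs://host:port/export/"
PROPERTIES ("timeout" = "3600")   -- assumed to be in seconds
WITH BROKER "broker_name";
```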
e807064a88 Modify colocation creation logic (#1289) 2019-06-25 21:20:18 +08:00
51b2c1d5b2 Add some function doc (#1377) 2019-06-25 21:02:42 +08:00
322de9cd8e Add sql-function doc of cast_to_bigint (#1370) 2019-06-24 19:40:57 +08:00
7550b2f09b Convert mini load to streaming mini load (#1323)
* This commit introduces streaming mini load
The operation of streaming mini load is the same as before, and users can still check the load via the frontend.
The difference is that streaming mini load finishes the task before the REST API replies, while the non-streaming version only registers a load.

* When upgrading Doris
Upgrading either fe or be first is supported. After both fe and be are upgraded, streaming mini load takes effect.

* For multi mini load
Multi mini load still uses the non-streaming mini load; its behavior is unchanged.

* Add an interface named isSupportedFunction
This function is used to protect the correctness of new features which span be and fe during upgrading.
2019-06-21 19:34:50 +08:00
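Since the commit notes that users can still check the load via the frontend, a hedged sketch of such a check (the label is hypothetical):
```sql
SHOW LOAD WHERE LABEL = "my_mini_load_label";
```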
bad6478d4f Allow chars i,h,s in time_format (#1328) 2019-06-18 19:48:19 +08:00
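Assuming the i, h, s characters map to the usual minute/hour/second format specifiers, a hedged sketch using date_format (which Doris provides):
```sql
SELECT date_format("2019-06-18 19:48:19", "%h:%i:%s");  -- 12-hour clock: returns "07:48:19"
```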
5c2cf9f2ce Handle the situation when there is no enough backends for tablet repair (#1299)
If there are only 3 backends and the replication num is 3, and one replica of a
tablet is bad, there is no 4th backend for tablet repair. So we need to delete
the bad replica first to make room for a new one.
2019-06-14 20:28:29 +08:00
a04cf3a695 Fix bug that get_json_function may not be able to get result (#1295)
A std::string object was unexpectedly destructed, causing an invalid result.
Also change the log level.
2019-06-13 12:58:47 +08:00
5dea4fb414 Add description of strict mode in decimal type (#1288) 2019-06-12 16:03:57 +08:00
9d7f99a669 Add new file format design markdown (#1267) 2019-06-11 09:34:06 +08:00
53062122ea Change strategy of incorrect data (#1255)
This change adds a load property named strict_mode which is used to reject incorrect data.
When it is set to false, incorrect data is loaded as NULL, just as before.
When it is set to true, incorrect data in a column without an expr is filtered out.
strict_mode is supported in broker load v2 now. It will be supported in stream load later.
2019-06-10 20:39:45 +08:00
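A minimal broker load v2 sketch with strict_mode enabled (all object names hypothetical):
```sql
LOAD LABEL example_db.strict_load_label
(
    DATA INFILE("hdfs://host:port/path/to/file.csv")
    INTO TABLE example_tbl
)
WITH BROKER "broker_name"
PROPERTIES ("strict_mode" = "true");  -- filter incorrect data instead of loading NULL
```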
ff0dd0d2da Support SSL authentication with Kafka in routine load job (#1235) 2019-06-07 16:29:01 +08:00
cb91e15f1e Modify UDF docs (#1260) 2019-06-06 15:47:10 +08:00
7cdaba66dc Add spatial func (#1213)
Support some spatial functions, such as ST_Contains.
2019-05-31 14:23:09 +08:00
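A hedged sketch of ST_Contains, checking whether a point lies inside a polygon (coordinates illustrative):
```sql
SELECT ST_Contains(
    ST_Polygon("POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))"),
    ST_Point(5, 5)
);  -- returns 1 when the point is inside the shape
```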
9d19c6c315 Support arbitrary kafka properties (#1204) 2019-05-28 10:03:50 +08:00
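Combining this commit with the SSL support in #1235 above, a hedged sketch of passing arbitrary Kafka client properties (prefixed with "property.") in a routine load job; broker, topic, and file names are hypothetical:
```sql
CREATE ROUTINE LOAD example_db.kafka_ssl_job ON example_tbl
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9093",
    "kafka_topic" = "my_topic",
    "property.security.protocol" = "ssl",        -- arbitrary property passed to the Kafka client
    "property.ssl.ca.location" = "FILE:ca.pem"   -- CA certificate uploaded to Doris beforehand
);
```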
5ca2805701 Add some date time function doc (#1206) 2019-05-27 17:36:09 +08:00
85b4619d54 Change insert into to streaming (#1191)
The non-streaming insert into now uses the streaming plan, which is the same as the plan of streaming insert.
It also records the load info and returns the label of the insert stmt.
Partitions are supported in the insert into stmt; rows that match the target partitions will be loaded.
The introduction of the examples has been updated, especially for non-streaming insert.
Also, a partition_names param is added to the SQL syntax to declare the target partitions in the target table.

Change META_VERSION to 50
2019-05-23 20:53:30 +08:00
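A hedged sketch of the new partition_names param in the insert stmt (names hypothetical); rows not matching the listed partitions would not be loaded:
```sql
INSERT INTO example_tbl PARTITION (p20190501, p20190502)
SELECT * FROM source_tbl;
```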
cde315c9e9 Add date-function doc (#1190) 2019-05-23 15:29:08 +08:00
722a9e71c7 Optimize json functions (#1177)
1. get_json_xxx() now supports using quotes to escape dots
2. Implement a json_path_prepare() function to preprocess the json_path

Performance of get_json_string() on 1,000,000 rows: time drops from 2.27s to 0.27s
2019-05-21 09:13:12 +08:00
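A hedged sketch of the quote-escaped dot in a json path, assuming the $."key" syntax (JSON content illustrative):
```sql
-- without quotes, the dot in "k1.key" would be read as a path separator
SELECT get_json_string('{"k1.key": {"k2": "v"}}', '$."k1.key".k2');  -- returns "v"
```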
398055ef3e Add logic of cancel job (#1154) 2019-05-14 17:26:45 +08:00
76a8093c70 Add documentation for doris on es (#1151) 2019-05-13 21:58:05 +08:00
debb58c278 Add SHOW FUNCTION and update docs for UDF (#1140) 2019-05-11 21:46:37 +08:00
4039985729 Fix some bugs about decommission (#1138)
1. Print the last few tablets of a decommissioning backend in fe.log for debugging.
2. OlapTableSink should get replicas on alive Backends, not only available Backends.
3. When decommissioning multiple Backends, we should drop the redundant replicas before creating new ones.
4. Replicas on decommissioning Backends should not be added to the catalog again.
5. Decommissioning Backends should not be chosen as the destination of tablet repair.
2019-05-10 17:41:48 +08:00
79ab7f4413 Change label of broker load txn (#1134)
* Change label of broker load txn

1. put broker load label into txn label
2. fix the bug of `label is already used`
3. fix partition error of new broker load

* Fix count error in mini load and broker load

There are three params (num_rows_load_total, num_rows_load_filtered, num_rows_load_unselected) which are used to count dpp.norm.ALL and dpp.abnorm.ALL.
num_rows_load_total is the number of rows in the source file.
num_rows_load_unselected is the number of rows in num_rows_load_total that do not satisfy the where conjuncts.
num_rows_load_filtered is the number of rows in (num_rows_load_total - num_rows_load_unselected) whose quality is not good enough.
2019-05-10 16:53:46 +08:00
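A small worked example of the three counters (all numbers hypothetical):
```sql
-- source file of 1000 rows; 100 fail the where conjuncts; 50 of the rest are bad quality
-- num_rows_load_total      = 1000
-- num_rows_load_unselected = 100
-- num_rows_load_filtered   = 50
-- rows actually loaded     = 1000 - 100 - 50 = 850
```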
e5a5201626 Update routine-load-manual.md (#1133)
Edit some descriptions about “max_error_number”
2019-05-10 14:38:28 +08:00
4aa41a4e3b Update admin_stmt.md (#1131) 2019-05-10 11:49:29 +08:00
ba78adae94 Fix bugs when using function in both stream load request and routine load job (#1091) 2019-05-05 20:51:30 +08:00
b2a022b348 Add money_format function (#1064) 2019-04-29 18:31:24 +08:00
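A hedged sketch of money_format (return values illustrative):
```sql
SELECT money_format(17014116);  -- returns "17,014,116.00"
SELECT money_format(1123.4);    -- returns "1,123.40"
```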
9a570af9a3 Add insert statement document (#1069) 2019-04-29 14:22:20 +08:00
310a375aec Fix bug that null value is not correctly handled when loading data (#1070)
When a partition column's value is NULL, it should be loaded into the partition which includes MIN VALUE
2019-04-29 13:55:28 +08:00
1662d91877 Change the logic of RoutineLoadTaskScheduler (#1061)
1. TaskScheduler processes one task per round
2. TaskScheduler blocks until the queue receives a new task
3. TaskScheduler submits tasks when the queue is empty
4. Add an example of creating a broker table via BOS
5. Change the syntax of show routine load job
2019-04-28 20:05:48 +08:00
5e36a769a0 Change the way to calculate task num (#1049) 2019-04-28 10:33:50 +08:00
9cd090c96a Modify routine load doc (#1016)
Add config specification
2019-04-28 10:33:50 +08:00
a79bd0c771 Add doc of auto creation of kafka topic (#985)
* Add annotation for show routine load
2019-04-28 10:33:50 +08:00
1b5643c6fb Fix some bugs (#979)
1. Add Config.max_routine_load_concurrent_task_num instead of the old one
2. Fix a bug where SHOW ALTER TABLE COLUMN may throw a NullPointerException
3. Fix some misspellings in docs
2019-04-28 10:33:50 +08:00
56bec6f22a Add routine load manual (#967) 2019-04-28 10:33:50 +08:00
b7b66527ce Fix some load bugs (#961)
1. Use load job's timeout as its txn timeout
2. Add a new session variable 'forward_to_master' for SHOW PROC and ADMIN stmt
2019-04-28 10:33:50 +08:00
400d8a906f Optimize the consumer assignment of Kafka routine load job (#870)
1. Use a data consumer group to share a single stream load pipe among multiple data consumers. This increases the consuming speed of Kafka messages and reduces the number of tasks of a routine load job.

Test results:

* 1 consumer, 1 partition:
    consume time: 4.469s, rows: 990140, bytes: 128737139.  221557 rows/s, 28M/s
* 1 consumer, 3 partitions:
    consume time: 12.765s, rows: 2000143, bytes: 258631271. 156689 rows/s, 20M/s
    blocking get time(us): 12268241, blocking put time(us): 1886431
* 3 consumers, 3 partitions:
    consume time(all 3): 6.095s, rows: 2000503, bytes: 258631576. 328220 rows/s, 42M/s
    blocking get time(us): 1041639, blocking put time(us): 10356581

The last 2 cases show that we can achieve higher speed by adding more consumers. But the bottleneck shifts from the Kafka consumer to Doris ingestion, so 3 consumers in a group is enough.

I also add a Backend config `max_consumer_num_per_group` to change the number of consumers in a data consumer group; the default value is 3.

In my test (1 Backend, 2 tablets, 1 replica), 1 routine load task can achieve 10M/s, which is the same as raw stream load.

2. Add OFFSET_BEGINNING and OFFSET_END support for Kafka routine load
2019-04-28 10:33:50 +08:00
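A hedged sketch of the new default-offset support in a routine load job (names hypothetical); `max_consumer_num_per_group` itself is a Backend config set in be.conf, not in SQL:
```sql
CREATE ROUTINE LOAD example_db.from_beginning_job ON example_tbl
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092",
    "kafka_topic" = "my_topic",
    "property.kafka_default_offsets" = "OFFSET_BEGINNING"  -- consume from the earliest offset
);
```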
c577b9397e Add help doc of routine load (#811) 2019-04-28 10:33:50 +08:00
a5494372b8 Fix some error in doc (#998) 2019-04-23 13:45:04 +08:00
22f93b5d7a Fix doc in alter bloom filter (#984) 2019-04-22 14:07:12 +08:00
22dc6119b9 Add some string functions doc (#965) 2019-04-19 09:50:06 +08:00