Commit Graph

155 Commits

SHA1 Message Date
62acf5d098 Limit the memory usage of Loading process (#1954) 2019-10-15 09:26:20 +08:00
f130bd3e7b Use Env function to operate directory (#1980)
Env now unifies all environment operations, such as file operations.
However, some of our old functions don't leverage it. This change converts
FileUtils::scan_dir to use Env's functions (a sketch of the shape of this change follows this entry).
2019-10-15 09:25:12 +08:00
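
A minimal sketch of what such a unified directory scan could look like; Env, iterate_dir and PosixEnv here are illustrative assumptions, not the actual Doris API:

    #include <dirent.h>
    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical Env abstraction that owns all environment access.
    struct Env {
        virtual ~Env() = default;
        // Invokes cb once per directory entry; stops early if cb returns false.
        virtual bool iterate_dir(const std::string& dir,
                                 const std::function<bool(const char*)>& cb) = 0;
        static Env* Default(); // process-wide default implementation
    };

    struct PosixEnv : Env {
        bool iterate_dir(const std::string& dir,
                         const std::function<bool(const char*)>& cb) override {
            DIR* d = opendir(dir.c_str());
            if (d == nullptr) return false;
            while (dirent* e = readdir(d)) {
                if (!cb(e->d_name)) break;
            }
            closedir(d);
            return true;
        }
    };

    Env* Env::Default() { static PosixEnv env; return &env; }

    // FileUtils::scan_dir rewritten to delegate to Env instead of calling
    // opendir()/readdir() itself.
    bool scan_dir(const std::string& dir, std::vector<std::string>* files) {
        return Env::Default()->iterate_dir(dir, [files](const char* name) {
            files->push_back(name);
            return true; // keep iterating
        });
    }
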
f852f50acb Improve unique id performance (#1911)
Remove the default constructor for UniqueId.
Add a gen_uid method to UniqueId; to generate a new uid, users must call this API explicitly.
Reuse the boost random generator instead of constructing a new one every time (see the sketch after this entry).
2019-09-29 18:20:02 +08:00
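
A minimal sketch of the described API shape; the hi/lo fields and the use of boost::uuids are illustrative assumptions:

    #include <cstdint>
    #include <cstring>
    #include <boost/uuid/uuid.hpp>
    #include <boost/uuid/uuid_generators.hpp>

    struct UniqueId {
        int64_t hi;
        int64_t lo;

        UniqueId() = delete; // default constructor removed by this change

        // Explicit factory: callers must ask for a fresh uid.
        static UniqueId gen_uid() {
            // One generator per thread, reused across calls; constructing
            // (and seeding) a new boost generator each time was the cost.
            static thread_local boost::uuids::random_generator gen;
            boost::uuids::uuid u = gen();
            UniqueId id(0, 0);
            std::memcpy(&id.hi, u.data, sizeof(id.hi));
            std::memcpy(&id.lo, u.data + sizeof(id.hi), sizeof(id.lo));
            return id;
        }

    private:
        UniqueId(int64_t h, int64_t l) : hi(h), lo(l) {}
    };
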
d3a445ee09 Fix memory_scratch_sink_test in debug mode (#1906) 2019-09-28 10:33:24 +08:00
cafb9f1e62 Replace Arena with MemPool first step (#1899) 2019-09-28 01:12:22 +08:00
e67b398916 Fix bug that backup may create an empty file on remote storage. (#1869)
Sometimes the broker writer fails to close, and we did not handle this failure.
This could create an empty file on remote storage that was treated as normal.

Also enhance usability:
1. Get the latest 2000 transactions instead of the earliest.
2. Show which download and upload tasks each backend is executing.
2019-09-28 00:11:43 +08:00
2f0808137a Refactor FrontendHelper (#1888) 2019-09-27 13:21:14 +08:00
b246d93128 Avoid SerDe for aggregation query with object pool (#1854) 2019-09-26 13:51:13 +08:00
c643cbd30c Optimize the load performance for large file (#1798)
The current load process is:

Tablet Sink -> Tablet Channel Mgr -> Tablets Channel -> Delta Writer -> MemTable -> Flush to disk

In the path of Tablets Channel -> DeltaWriter -> MemTable -> Flush to disk, the following operations are performed:

Insert tuples into different memtables according to tablet ID.
When a memtable reaches the size threshold, it is written to disk.

For a single load task, the above operations are effectively executed in a single thread.
In fact, memtable insertion and memtable flushing can run concurrently,
which prevents memtable insertion from being delayed by slow disk writes.

In the new implementation, I added a MemTableFlushExecutor class with a set of flush queues and corresponding worker threads.
By default, each data directory uses two flush worker threads; this can be changed via the BE parameter flush_thread_num_per_store.
DeltaWriter pushes a full memtable to the MemTableFlushExecutor for flushing and creates a new memtable to receive new data.

This design improves the performance of loading large files (a condensed sketch of the executor follows this entry).
In single-host testing, the time to load a 1GB text file dropped from 48 seconds to 29 seconds.
2019-09-25 13:49:32 +08:00
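
A condensed sketch of the producer/consumer structure this describes; names follow the commit, but the real class also keeps separate queues per data directory and reports flush status:

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Full memtables are pushed into a queue and flushed by background
    // workers, so the ingest thread can immediately switch to a fresh
    // memtable instead of blocking on the disk write.
    class MemTableFlushExecutor {
    public:
        explicit MemTableFlushExecutor(int flush_thread_num_per_store) {
            for (int i = 0; i < flush_thread_num_per_store; ++i) {
                _workers.emplace_back([this] { _work_loop(); });
            }
        }

        ~MemTableFlushExecutor() {
            {
                std::lock_guard<std::mutex> l(_lock);
                _stopped = true;
            }
            _cv.notify_all();
            for (auto& t : _workers) t.join();
        }

        // DeltaWriter calls this with a task that writes one memtable to disk.
        void submit(std::function<void()> flush_task) {
            {
                std::lock_guard<std::mutex> l(_lock);
                _queue.push(std::move(flush_task));
            }
            _cv.notify_one();
        }

    private:
        void _work_loop() {
            for (;;) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> l(_lock);
                    _cv.wait(l, [this] { return _stopped || !_queue.empty(); });
                    if (_stopped && _queue.empty()) return;
                    task = std::move(_queue.front());
                    _queue.pop();
                }
                task(); // the actual disk write happens outside the lock
            }
        }

        std::mutex _lock;
        std::condition_variable _cv;
        std::queue<std::function<void()>> _queue;
        std::vector<std::thread> _workers;
        bool _stopped = false;
    };
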
abd27dfcca Remove unused debug (#1836) 2019-09-20 09:31:56 +08:00
aaabf97471 Split channel close operation into two phase (#1830)
In this change, channel close is split into two phases, so we can
close channels in parallel, which makes queries faster (see the sketch after this entry).
2019-09-19 18:14:30 +08:00
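
A minimal sketch of the two-phase pattern, with an illustrative Channel type standing in for the real RPC channel:

    #include <chrono>
    #include <future>
    #include <thread>
    #include <vector>

    struct Channel {
        std::future<void> close_async() {
            return std::async(std::launch::async, [] {
                // stand-in for the real close RPC
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            });
        }
    };

    // Phase one issues every close request without waiting; phase two waits
    // for all of them, so N channels close in parallel instead of serially.
    void close_all(std::vector<Channel>& channels) {
        std::vector<std::future<void>> pending;
        for (auto& ch : channels) pending.push_back(ch.close_async()); // phase 1: send
        for (auto& f : pending) f.get();                               // phase 2: wait
    }
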
17e52a4bac Improve LRUCache to get better performance (#1826)
In this CL, I move the entry's deleter out of LRUCache's mutex block,
which lets others access the cache without waiting for cache entries to be freed (see the sketch after this entry).
2019-09-19 17:37:02 +08:00
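
A minimal sketch of the locking change; the class shape is illustrative, not the real LRUCache:

    #include <functional>
    #include <list>
    #include <mutex>
    #include <string>
    #include <unordered_map>

    // Entries evicted while holding the mutex are only collected; their
    // deleters run after the mutex is released, so other threads are not
    // blocked behind potentially slow value destruction.
    class LRUCache {
    public:
        using Deleter = std::function<void()>;

        void release_slot(const std::string& key) {
            std::list<Deleter> to_delete;
            {
                std::lock_guard<std::mutex> l(_mutex);
                auto it = _entries.find(key);
                if (it != _entries.end()) {
                    to_delete.push_back(std::move(it->second));
                    _entries.erase(it); // unlink under the lock...
                }
            }
            for (auto& d : to_delete) d(); // ...but delete outside it
        }

    private:
        std::mutex _mutex;
        std::unordered_map<std::string, Deleter> _entries;
    };
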
054a3f48bc Add where expr in broker load (#1812)
The WHERE predicate in broker load filters the transformed data.
The help and operator docs have been updated accordingly.
2019-09-17 11:32:40 +08:00
11eafe524f Add ChunkAllocator to accelerate chunk allocation (#1792)
I add a ChunkAllocator in this CL to put unused memory chunks into a chunk
pool instead of returning them to the system allocator. For now, only
MemPool's chunk allocation and free go through it (a simplified sketch follows this entry).

Two configurations are introduced as well. 'chunk_reserved_bytes_limit'
limits how many bytes this chunk pool can reserve in total; its default
value is 2147483648 (2GB). 'use_mmap_allocate_chunk' controls whether
chunks are allocated via mmap; its default value is false.

In my test case with the default configuration and a simple query like
"select * from table limit 10", this improves throughput from 280 QPS
to 650 QPS. When I set 'chunk_reserved_bytes_limit' to 0, which disables
the pool, the throughput is the same as before.
2019-09-13 08:27:24 +08:00
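
A much-simplified sketch of the pool idea; the real allocator is more elaborate and also supports mmap via 'use_mmap_allocate_chunk':

    #include <cstddef>
    #include <cstdlib>
    #include <mutex>
    #include <vector>

    class ChunkAllocator {
    public:
        explicit ChunkAllocator(size_t reserved_bytes_limit)
            : _limit(reserved_bytes_limit) {}

        void* allocate(size_t size) {
            {
                std::lock_guard<std::mutex> l(_lock);
                if (!_pool.empty() && _pool.back().size == size) {
                    void* chunk = _pool.back().data;
                    _reserved -= size;
                    _pool.pop_back();
                    return chunk; // reuse instead of hitting the system allocator
                }
            }
            return std::malloc(size);
        }

        void free(void* data, size_t size) {
            std::lock_guard<std::mutex> l(_lock);
            if (_reserved + size <= _limit) { // respect chunk_reserved_bytes_limit
                _pool.push_back({data, size});
                _reserved += size;
                return;
            }
            std::free(data); // pool is full: return to the system allocator
        }

    private:
        struct Chunk { void* data; size_t size; };
        std::mutex _lock;
        std::vector<Chunk> _pool;
        size_t _reserved = 0;
        size_t _limit;
    };
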
9aa2045987 Refactor alter job (#1695) 2019-09-12 16:31:29 +08:00
5a12a1d7df Fix compile error (#1780) 2019-09-10 23:48:42 +08:00
235cdb0ecd Commit kafka offset (#1734)
Commit the Kafka offset in routine load (see the sketch after this entry).

Kafka decides whether to delete data based on whether all consumer groups have committed their offsets. If offsets are never committed, the Kafka server's disk may fill up.
2019-09-10 14:27:06 +08:00
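
A sketch of the commit call, assuming the librdkafka C++ client (error handling trimmed):

    #include <iostream>
    #include <librdkafka/rdkafkacpp.h>

    // After a routine-load task finishes a batch, commit the consumed
    // offsets so the broker sees the consumer group's progress and can
    // apply its retention policy.
    void commit_progress(RdKafka::KafkaConsumer* consumer) {
        RdKafka::ErrorCode err = consumer->commitSync(); // commit current offsets
        if (err != RdKafka::ERR_NO_ERROR) {
            std::cerr << "offset commit failed: " << RdKafka::err2str(err) << "\n";
        }
    }
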
044489b92f Optimize some kinds of load jobs (#1762)
1. Support specifying label to Insert Into stmt.

    INSERT INTO tbl1 WITH LABEL label1 ...;

2. Return the state of the job corresponding to an existing label in the stream load result.

    ...
    "Status": "Label Already Exists",
    "ExistingJobStatus": "FINISHED"
    ...

3. Return the recent 2000 transactions in SHOW PROC '/transactions'
2019-09-09 22:11:12 +08:00
9f5e5717d4 Unify the msg of 'Memory exceed limit' (#1737)
The new limit-exceeded message is: "Memory exceed limit. %msg, Backend:%ip, fragment:%id Used:% , Limit:%. xxx".
This commit unifies the 'Memory exceed limit' message across check_query_state, RETURN_IF_LIMIT_EXCEEDED and LIMIT_EXCEEDED.
2019-09-03 10:42:16 +08:00
b4f6f755f1 Add exchange in MemPool to reduce alloc/free operation (#1732)
Reuse allocated chunks during storage read operations.
2019-09-02 19:29:30 +08:00
76987275b9 Fix result of unix_timestamp() (#1727) 2019-08-30 21:39:16 +08:00
378ce8ca04 Use double when converting TIME type value (#1722)
TIME type values are stored as DOUBLE, so converting with double instead of int64 preserves the full time range.
2019-08-29 21:19:19 +08:00
ecbdfc2cee Avoid consistency problem when has no more data (#1716) 2019-08-29 18:57:49 +08:00
7e981b2b14 Limit the disk usage to avoid running out of disk capacity (#1702)
Set a high watermark and a flood stage for disk usage,
and forbid some operations when disk usage is too high (see the sketch after this entry).
2019-08-27 22:18:17 +08:00
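
A minimal sketch of the two-level check; the threshold values here are illustrative defaults, not the actual config:

    #include <cstdint>

    // Above the high watermark some operations are refused; above the
    // flood stage writes are rejected outright.
    enum class DiskState { OK, HIGH_WATERMARK, FLOOD_STAGE };

    DiskState check_disk(int64_t used_bytes, int64_t total_bytes,
                         double high_watermark = 0.85, double flood_stage = 0.95) {
        double used_ratio = static_cast<double>(used_bytes) / total_bytes;
        if (used_ratio >= flood_stage) return DiskState::FLOOD_STAGE;
        if (used_ratio >= high_watermark) return DiskState::HIGH_WATERMARK;
        return DiskState::OK;
    }
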
dc2d49fe07 Make StringValue's memory layout same with Slice (#1712)
In our storage engine's code, we cast StringValue to Slice. Because
their memory layouts differ, this can crash the BE process.

This patch makes their memory layouts identical to resolve the problem
temporarily; it should be improved properly some day (see the sketch after this entry).
2019-08-27 22:15:46 +08:00
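
A sketch of the fix: give both structs the same member order and sizes, and let the compiler verify it (the member names and the static_asserts are illustrative):

    #include <cstddef>
    #include <cstdint>

    struct Slice {
        char* data;
        size_t size;
    };

    struct StringValue {
        char* ptr;
        size_t len; // matched to Slice's second member by this patch
    };

    // Safety net for the reinterpret_cast between the two types.
    static_assert(sizeof(StringValue) == sizeof(Slice),
                  "StringValue and Slice must have identical size");
    static_assert(offsetof(StringValue, ptr) == offsetof(Slice, data) &&
                  offsetof(StringValue, len) == offsetof(Slice, size),
                  "member offsets must match for the cast to be safe");
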
a1b92768dd Add loaded rows to SHOW LOAD result (#1686)
Loaded rows are updated periodically via the query report, so
users can see whether a load job is still running or blocked.
2019-08-27 14:13:47 +08:00
58801c6ab0 Support converting RowBatch and RowBlockV2 to/from Arrow (#1699) 2019-08-27 11:30:00 +08:00
4449316d85 Add error msg when memory limit exceeded (#1685) 2019-08-23 11:13:01 +08:00
00f8040bf3 Fix bug that 2 same stream load jobs may both be able to executed successfully (#1690)
This caused two jobs to try to write the same file, damaging it.
2019-08-22 19:38:16 +08:00
82d0afc1ba FROM_UNIXTIME should only convert timestamp from 0 to 253402271999 (#1658)
which is between 1970-01-01 00:00:00 and 9999-12-31 23:59:59; otherwise it returns NULL (see the sketch after this entry).
2019-08-16 18:29:57 +08:00
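
A minimal sketch of the range guard:

    #include <cstdint>

    // 253402271999 is the Unix timestamp of 9999-12-31 23:59:59, the
    // largest value the DATETIME range can represent.
    const int64_t MAX_VALID_TIMESTAMP = 253402271999LL;

    bool from_unixtime_valid(int64_t ts) {
        // out-of-range input makes FROM_UNIXTIME return NULL
        return ts >= 0 && ts <= MAX_VALID_TIMESTAMP;
    }
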
85e89b79d5 Print src tuple in error_sample file (#1641)
The src tuple could not be printed to the error_sample file when the value was filtered by strict mode.
This commit fixes that issue.
2019-08-14 19:58:09 +08:00
199ff968dc Fix time zone compatibility (#1631) 2019-08-13 18:44:35 +08:00
032d0b41bb Fix compile error (#1630) 2019-08-13 10:00:18 +08:00
69af50aa8c Time zone related BE function (#1598)
Details can be found in time-zone.md document
2019-08-12 20:57:59 +08:00
2bd01b23c7 Add page cache for column page in BetaRowset (#1607) 2019-08-12 10:42:00 +08:00
e3348c46a9 Expose data pruned-filter-scan ability (#1527) 2019-08-11 12:59:24 +08:00
0694b6a6fa Fix bugs of Broker load (#1546)
Use the same UUID as the query ID and load ID of a load execution plan.
Each load execution plan has a load ID and, being a plan, also a query ID.
Using the same UUID for both makes tracing the load process easier (a sketch follows this entry).

Change the load ID when retrying a load execution plan.
When a load execution plan is retried, the load ID should be changed; otherwise
BE cannot distinguish the old and new load requests.

Cancel the running loading task when cancelling the broker load.
When a user cancels a broker load, the running loading task should also be cancelled,
or it may occupy a worker thread for a long time.

Remove the unnecessary query reports generated while running a load execution plan.
Only the last query report is needed.

Add a new BE config tablet_writer_rpc_timeout_sec.
It is used for the tablet sink RPC. The default is 600 seconds, which is long enough to flush
about 6GB of data. The long timeout reduces the chance of hitting a 'fail to send batch' error when loading.

Use streaming_load_max_mb instead of mini_load_max_mb in BE config.

Add more logs for tracing a broker load process easily.
2019-07-27 20:17:05 +08:00
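
A minimal sketch of the first item, with TUniqueId standing in for the Thrift-generated id struct:

    #include <cstdint>

    struct TUniqueId { int64_t hi; int64_t lo; };

    struct LoadPlan {
        TUniqueId query_id;
        TUniqueId load_id;
    };

    // With query_id == load_id, every log line of the execution plan can
    // be traced with a single id.
    void assign_ids(LoadPlan* plan, const TUniqueId& uuid) {
        plan->query_id = uuid;
        plan->load_id = uuid; // same UUID, so the two ids always correlate
    }
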
a88b55e649 Add more logs and metrics to trace the broker load process (#1530)
Operators want to know when the job is scheduled into the PENDING
and LOADING states, and how long it takes to finish these sub-states.

Also add 2 metrics on BE to monitor memtable flushes:
`memtable_flush_total` and `memtable_flush_duration_us` (see the sketch after this entry).
2019-07-23 21:42:44 +08:00
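
A minimal sketch of how the two metrics wrap a flush; bare atomics stand in for the real metrics registry:

    #include <atomic>
    #include <chrono>
    #include <cstdint>

    std::atomic<int64_t> memtable_flush_total{0};
    std::atomic<int64_t> memtable_flush_duration_us{0};

    template <typename FlushFn>
    void timed_flush(FlushFn flush) {
        auto start = std::chrono::steady_clock::now();
        flush(); // write the memtable to disk
        auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(
                std::chrono::steady_clock::now() - start);
        memtable_flush_total.fetch_add(1);
        memtable_flush_duration_us.fetch_add(elapsed.count());
    }
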
4aedaea84e Support TIME type and timediff function (#1505) 2019-07-23 13:42:39 +08:00
556299aae9 Remove query status report from BE when query is cancelled normally (#1489)
When the query result reaches its limit, the Coordinator on the FE sends a cancel
request to the BE. When cancelled, the BE reports the
query status to the FE for debugging purposes. But this is actually unnecessary
and generates too many logs.

So I add a CancelReason to distinguish a 'normal' cancellation
from an 'internal error' cancellation; if cancelled 'normally',
no status is reported from the BE (a sketch follows this entry).

When the query reaches its limit, or the user cancels it actively, the cancellation is 'normal'.
Otherwise, the query was cancelled due to an internal error, which requires
a report from the BE.
2019-07-19 09:36:01 +08:00
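
A minimal sketch of the distinction; the enum values are illustrative:

    // Only internal errors trigger a status report back to FE.
    enum class CancelReason {
        LIMIT_REACH,    // query hit its LIMIT: expected, no report
        USER_CANCEL,    // user cancelled actively: expected, no report
        INTERNAL_ERROR, // something went wrong: report status to FE
    };

    bool should_report_to_fe(CancelReason reason) {
        return reason == CancelReason::INTERNAL_ERROR;
    }
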
4e043e66e2 Modify the result json format of mini load (#1487)
Mini load now uses the stream load framework, but we should keep the
mini load return behavior and result JSON format the same as before.
So a PUBLISH_TIMEOUT error should be treated as OK in mini load.

Also add 2 counters to the OlapTableSink profile:
SerializeBatchTime: time spent serializing all row batches.
WaitInFlightPacketTime: time spent waiting for the last in-flight packet.
2019-07-16 19:15:41 +08:00
6c246418fb Add timeout in stream load planner (#1480)
The mini load timeout needs to be added to the plan options,
and the timeout property has been added to the process-put request;
otherwise the mini load timeout has no effect.

Also log the label, txn and query id in mini load.
2019-07-15 22:14:59 +08:00
0d48a3961c Refactor Storage Engine (#1478)
NOTE: This patch modifies all Backend data,
which makes restarting BE take a very long time.
So if you do not want to disturb your production environment,
you should upgrade the backends one by one.

1. Refactor BE to clarify the structure of the code.
2. Use a unique id to identify a rowset.
   Naming a rowset with tablet_id and version leads to
   many conflicts among compaction, clone and restore.
3. Extract a rowset interface to encapsulate rowsets
   with different formats.
2019-07-15 21:18:22 +08:00
67b370a1ed Add ColumnBlock (#1450)
Use ColumnBlock to read data from Page.
2019-07-09 21:52:27 +08:00
ded60e59f9 Add a configuration to modify the reserve time of the load error log (#1433)
Currently, the load error log on BE is cleaned up along with the
intermediate load data, controlled by 'load_data_reserve_hours'.
Sometimes users want to retain the error log for a longer time.
2019-07-09 10:36:13 +08:00
7eab12a40e Support reading Parquet file when loading data (#1173) 2019-07-01 18:39:27 +08:00
1ff1722d93 Fix the core in dpp sink by sum of int128 (#1412) 2019-06-28 23:30:33 +08:00
566e122c0d Optimize Export feature (#1378)
1. Add 'timeout' properties in Export stmt.
2. Add more infos in 'show export' stmt.
3. Add more logs for debug.
2019-06-26 00:20:53 +08:00
e30844a321 Add column reader writer for segment V2 (#1346) 2019-06-25 16:59:26 +08:00
7550b2f09b Convert mini load to streaming mini load (#1323)
* This commit introduces streaming mini load.
The operation of streaming mini load is the same as before, and users can still check the load via the frontend.
The difference is that streaming mini load finishes the task before the REST API replies, while non-streaming mini load only registers a load.

* When updating Doris
Updating either FE or BE first is supported. Once both FE and BE are updated, streaming mini load takes effect.

* For multi mini load
Multi mini load still uses the non-streaming mini load; its behavior has not changed.

* Add an interface named isSupportedFunction
This function protects the correctness of the new feature, which spans both BE and FE, during the upgrade.
2019-06-21 19:34:50 +08:00