TPlanExecParams::volume_id is never used, so delete the print_volume_ids() function.
Fix logging, and log an error if PlanFragmentExecutor::open() returns an error.
Fix some comments
Implements #3803
Support disabling some meaningless order by clauses.
The default limit of 65535 is not removed, because it is added at the plan node;
after we support spill to disk, we can move this limit to the analyze phase.
Fixes #3893
In a cluster with frequent load activities, FE will ignore most tablet reports from BEs,
because currently it only handles reports whose version >= BE's latest report version
(which is increased each time a transaction is published). This can be observed in FE's log,
which contains many entries like `out of date report version 15919277405765 from backend[177969252].
current report version[15919277405766]`.
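To illustrate the gate described above, here is a minimal Java sketch (the class and field names are hypothetical stand-ins, not the actual FE code):

```java
// Hypothetical sketch of the report-version gate on the FE side.
class ReportVersionGate {
    private long latestReportVersion; // bumped on every publish in the old behavior

    boolean shouldHandle(long reportVersion, long backendId) {
        if (reportVersion < latestReportVersion) {
            System.out.printf("out of date report version %d from backend[%d]. "
                    + "current report version[%d]%n",
                    reportVersion, backendId, latestReportVersion);
            return false; // report ignored: no repair, no migration, no txn cleanup
        }
        return true;
    }
}
```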
However, many system functionalities rely on tablet report processing to work properly. For example:
1. bad or version-missing replicas are detected and repaired during tablet report processing
2. storage medium migration decisions and actions are made based on tablet reports
3. BE's old transactions are cleared/republished during tablet report processing
In fact, it is not necessary to update the report version after a publish task;
this is a problem left over from history. In the current version's reporting logic,
we no longer decrease the replica's version information in the FE metadata according to the report,
so even if we receive a stale report, it does not matter.
This CL mainly contains two changes:
1. do not increase the report version for publish tasks
2. populate `tabletWithoutPartitionId` outside the read lock of TabletInvertedIndex
* [Enhance] Add MetaUrl and CompactionUrl for "show tablet" stmt
Add MetaUrl and CompactionUrl to the result of the following stmt:
`show tablet 10010`;
* fix ut
* add doc
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
Fix: #3946
CL:
1. Add a prepare phase for the `from_unixtime()`, `date_format()` and `convert_tz()` functions, to handle the format string once for all rows.
2. Find the cctz timezone when initializing the `runtime state`, so that we don't need to look up the timezone for each row.
3. Add a constant rewrite rule for `utc_timestamp()`
4. Add doc for `to_date()`
5. Comment out the `push_handler_test`; it cannot run in DEBUG mode and will be fixed later.
6. Remove `timezone_db.h/cpp` and add `timezone_utils.h/cpp`
The performance results are shown below:
11,000,000 rows
SQL1: `select count(from_unixtime(k1)) from tbl1;`
Before: 8.85s
After: 2.85s
SQL2: `select count(from_unixtime(k1, '%Y-%m-%d %H:%i:%s')) from tbl1 limit 1;`
Before: 10.73s
After: 4.85s
Date string formatting still seems slow; we may need a further enhancement for it.
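The prepare-once idea behind change 1, sketched in Java for brevity (the actual change is in BE's C++ functions; all names here are illustrative, and the pattern syntax is java.time's, not MySQL's `%Y-%m-%d`):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Illustrative only: compile the constant format string and resolve the
// timezone once in a prepare phase instead of re-doing both for every row.
class FromUnixtimeSketch {
    private final DateTimeFormatter formatter; // prepared once for all rows
    private final ZoneId zone;                 // resolved once at init

    FromUnixtimeSketch(String javaTimePattern, String timezone) {
        this.formatter = DateTimeFormatter.ofPattern(javaTimePattern);
        this.zone = ZoneId.of(timezone);
    }

    String eval(long unixSeconds) { // per-row work is formatting only
        return ZonedDateTime.ofInstant(Instant.ofEpochSecond(unixSeconds), zone)
                .format(formatter);
    }
}
```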
Replace some boost usages with std in OlapScanNode.
This refactor seems to solve the problem described in #3929:
I found that BE would crash when calling `boost::condition_variable.notify_all()`,
but after this upgrade, BE does not crash any more.
ISSUE: #3960
PR #3454 introduced caching for EsClient, but the client was only initialized during editlog replay; all this work should also be done during image replay,
which happens when FE restarts or upgrades.
BTW: fix a UT failure for metrics
Currently we choose BEs randomly without checking whether a disk is available;
the create table statement does not fail until the create tablet task is sent to the BE,
where the BE checks whether it has available capacity to create the tablet.
Checking backend disk availability by storage medium on FE will avoid these unnecessary RPC calls.
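A hypothetical Java sketch of this pre-filtering (class and method names are stand-ins, not the actual FE code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch: filter candidate backends by storage-medium-specific
// disk availability before picking one, instead of choosing randomly and
// letting the create tablet RPC fail on BE.
class BackendChooser {
    static class Backend {
        long id;
        boolean hasAvailableDisk(String storageMedium) {
            return true; // stand-in for a real per-medium capacity check
        }
    }

    static Backend choose(List<Backend> backends, String storageMedium) {
        List<Backend> candidates = new ArrayList<>();
        for (Backend be : backends) {
            if (be.hasAvailableDisk(storageMedium)) {
                candidates.add(be);
            }
        }
        if (candidates.isEmpty()) {
            throw new IllegalStateException(
                    "no backend with available " + storageMedium + " disk");
        }
        return candidates.get(new Random().nextInt(candidates.size()));
    }
}
```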
1. Split /_cluster/state into /_mapping and /_search_shards requests to reduce the required permissions and make the logic clearer
2. Rename some ES-related objects to make their representation more accurate
3. Simply support docValue and Fields in alias mode, taking the first one by default
#3311
Fix #3920
CL:
1. Parse the TCP metrics header in `/proc/net/snmp` to get the right position of the metrics.
2. Add 2 new metrics: `tcp_in_segs` and `tcp_out_segs`
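An illustrative Java sketch of change 1 (the actual change is in BE's C++ code): `/proc/net/snmp` stores a `Tcp:` header row followed by a `Tcp:` value row, so we locate metrics by the header instead of hard-coding column positions.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Parse the "Tcp:" header line to find each metric's position, then read
// the matching value line.
class SnmpTcpMetrics {
    static Map<String, Long> read() throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("/proc/net/snmp"));
        String[] names = null;
        Map<String, Long> metrics = new HashMap<>();
        for (String line : lines) {
            if (!line.startsWith("Tcp:")) continue;
            String[] fields = line.split("\\s+");
            if (names == null) {
                names = fields; // first "Tcp:" line is the header
            } else {
                for (int i = 1; i < names.length; i++) {
                    metrics.put(names[i], Long.parseLong(fields[i]));
                }
                break;
            }
        }
        return metrics; // e.g. metrics.get("InSegs"), metrics.get("OutSegs")
    }
}
```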
When we get the default system time zone, it may return `PRC`, which is not supported by us, and thus
causes dynamic partition creation to fail. Fix #3919
This CL mainly changes:
1. Use a unified method to get the system default time zone
2. Now the default variables `system_time_zone` and `time_zone` are set to the default system
time zone, which is `Asia/Shanghai`.
3. Modify related unit tests.
4. Support time zone `PRC`.
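A minimal Java sketch of a unified default-time-zone lookup that tolerates the `PRC` alias (the class and method names are hypothetical, not the actual FE code):

```java
import java.util.TimeZone;

// Hypothetical sketch: one place to resolve the system default time zone,
// mapping the tzdb alias PRC to its canonical name.
class TimeZoneUtilsSketch {
    static String getSystemDefaultTimeZone() {
        String id = TimeZone.getDefault().getID();
        if ("PRC".equals(id)) {   // PRC is a tzdb alias for Asia/Shanghai
            return "Asia/Shanghai";
        }
        return id;
    }
}
```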
The following statement throws an error:
```
create table test.tbl2
(k1 int, k2 int, k3 float)
duplicate key(k1)
partition by range(k2)
(partition p1 values less than("10"))
distributed by hash(k3) buckets 1
properties('replication_num' = '1');
```
Error: `Only key column can be partition column`
But in a duplicate key table, columns can be partition or distribution columns
even if they are not duplicate key columns.
This bug was introduced by #3812
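A hypothetical Java sketch of the corrected check (not the actual FE analyzer code): only aggregate/unique tables require partition columns to be keys.

```java
// Hypothetical sketch: duplicate key tables may partition or distribute on
// value columns, so the key check applies to the other keys types only.
class PartitionColumnCheck {
    enum KeysType { DUP_KEYS, AGG_KEYS, UNIQUE_KEYS }

    static void checkPartitionColumn(KeysType keysType, boolean isKeyColumn,
                                     String columnName) {
        if (keysType != KeysType.DUP_KEYS && !isKeyColumn) {
            throw new IllegalArgumentException(
                    "Only key column can be partition column: " + columnName);
        }
    }
}
```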
When starting FE with the `start_fe.sh --helper xxx` command, do not allow the
helper to point to the FE itself, because this is meaningless and may cause some
confusing problems.
1. Fe checks the status of the etl job regularly (sketched below)
1.1 If status is RUNNING, update etl job progress
1.2 If status is CANCELLED, cancel load job
1.3 If status is FINISHED, get the etl output file paths, update job state to LOADING and log job update info
2. Fe sends PushTask to Be and commits transaction after all push tasks execute successfully
#3433
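A minimal Java sketch of the polling loop in step 1 (all names are hypothetical stand-ins, not the actual FE classes):

```java
// Hypothetical sketch of the periodic ETL status check for a spark load job.
class SparkEtlJobChecker {
    enum EtlStatus { RUNNING, CANCELLED, FINISHED }

    void check(SparkLoadJob job) {
        EtlStatus status = job.fetchEtlStatus();
        switch (status) {
            case RUNNING:
                job.updateEtlProgress();
                break;
            case CANCELLED:
                job.cancel("etl job cancelled");
                break;
            case FINISHED:
                job.setEtlOutputPaths(job.fetchEtlOutputFilePaths());
                job.updateState("LOADING"); // and log the job update info
                break;
        }
    }

    interface SparkLoadJob {
        EtlStatus fetchEtlStatus();
        void updateEtlProgress();
        void cancel(String reason);
        java.util.List<String> fetchEtlOutputFilePaths();
        void setEtlOutputPaths(java.util.List<String> paths);
        void updateState(String state);
    }
}
```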
Mainly used in the Spark Load process to calculate the approximate deduplication value and then serialize it to a parquet file.
Try to keep the same calculation semantics as be's C++ version.
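To show what "approximate deduplication value" means, here is an illustrative HyperLogLog-style sketch in Java; this is NOT the actual implementation (the real one must keep the same hash and register layout as BE's C++ version to stay compatible):

```java
import java.nio.charset.StandardCharsets;

// Illustrative HyperLogLog sketch with a stand-in hash function.
class HllSketch {
    private static final int P = 14;          // 2^14 = 16384 registers
    private static final int M = 1 << P;
    private final byte[] registers = new byte[M];

    void add(String value) {
        long h = fnv1a64(value.getBytes(StandardCharsets.UTF_8));
        int idx = (int) (h >>> (64 - P));     // top P bits choose a register
        // rank = leading-zero count of the remaining bits, with a sentinel
        // bit so the result never exceeds 64 - P + 1
        byte rank = (byte) (Long.numberOfLeadingZeros(
                (h << P) | (1L << (P - 1))) + 1);
        if (rank > registers[idx]) {
            registers[idx] = rank;
        }
    }

    long estimate() {
        double alpha = 0.7213 / (1 + 1.079 / M);
        double sum = 0;
        int zeros = 0;
        for (byte r : registers) {
            sum += 1.0 / (1L << r);
            if (r == 0) zeros++;
        }
        double e = alpha * M * (double) M / sum;
        if (e <= 2.5 * M && zeros > 0) {      // small-range correction
            e = M * Math.log((double) M / zeros);
        }
        return Math.round(e);
    }

    private static long fnv1a64(byte[] data) { // stand-in 64-bit hash
        long hash = 0xcbf29ce484222325L;
        for (byte b : data) {
            hash ^= (b & 0xff);
            hash *= 0x100000001b3L;
        }
        return hash;
    }
}
```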
This bug was introduced by PR #3784.
In #3784, I removed `Catalog.getInstance()` and used `Catalog.getCurrentCatalog()` instead.
But actually, there are some places that should use the serving catalog explicitly.
Mainly changed:
1. Add a new method `getServingCatalog()` to explicitly return the real catalog instance.
2. Fix a compile bug of broker introduced by #3881
After a user creates a spark load job whose status is PENDING, Fe will schedule and submit the spark etl job.
1. Begin transaction
2. Create a SparkLoadPendingTask for submitting etl job
2.1 Create etl job configuration according to https://github.com/apache/incubator-doris/issues/3010#issuecomment-635174675
2.2 Upload the configuration file and job jar to HDFS with broker
2.3 Submit etl job to spark cluster
2.4 Wait for etl job submission result
3. Update job state to ETL and log job update info if etl job is submitted successfully
#3433
This configuration is specifically used to limit the timeout setting for stream load.
It prevents failed stream load transactions from remaining uncancelable for a long
time due to a user's overly large timeout setting.
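A hedged sketch of the intended effect (the config constant and method name are hypothetical, not the actual FE config key):

```java
// Hypothetical sketch: reject a user-supplied stream load timeout that
// exceeds an FE-configured maximum, so failed transactions can still be
// canceled within a reasonable time.
class StreamLoadTimeoutCheck {
    static final int MAX_STREAM_LOAD_TIMEOUT_SECOND = 3 * 24 * 3600; // example value

    static int checkTimeout(int userTimeoutSecond) {
        if (userTimeoutSecond > MAX_STREAM_LOAD_TIMEOUT_SECOND) {
            throw new IllegalArgumentException("stream load timeout exceeds limit: "
                    + MAX_STREAM_LOAD_TIMEOUT_SECOND);
        }
        return userTimeoutSecond;
    }
}
```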
Main changes:
1. Fix the bug in `update_status(status)` of `PlanFragmentExecutor`.
2. When the FE Coordinator executes `execRemoteFragmentAsync()` and finds an RPC error, return a Future with an error code instead of throwing an exception.
3. Protect the `_status` in RuntimeState with a lock.
4. Move the `_runtime_profile` of RuntimeState before the `_obj_pool`, so that the profile will be
destructed after the object pool.
5. Remove the unused `ObjectPool` param from the RuntimeProfile constructor. If it is not removed,
RuntimeProfile will depend on the `_obj_pool` in RuntimeState.
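The pattern behind change 3, sketched in Java for illustration (the actual code is BE C++): guard the shared status with a lock and keep only the first error.

```java
// Illustrative only: a status holder where OK never overwrites an error and
// the first error wins, with all access serialized by a lock.
class GuardedStatus {
    private final Object lock = new Object();
    private String error; // null means OK

    void updateStatus(String newError) {
        if (newError == null) return;         // OK never overwrites anything
        synchronized (lock) {
            if (error == null) {              // keep the first error only
                error = newError;
            }
        }
    }

    String get() {
        synchronized (lock) {
            return error;
        }
    }
}
```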
* Forbid float columns in the short key
When the user does not specify the short key columns, doris will automatically supplement them.
However, doris does not support float or double as short key columns, so when supplementing the short key columns, doris should avoid choosing such columns as key columns.
The short key can contain at most 3 columns and at most 36 bytes.
CreateMaterializedView, AddRollup and CreateDuplicateTable all need to forbid float columns in the short key.
If a float column is encountered during the supplement process, it and all subsequent columns become value columns.
Since float and double cannot be short key columns, and Doris requires at least one short key column,
the type of the first column cannot be float or double.
If a varchar column is in the short key, it can only be the last short key column.
Fixed #3811
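A hypothetical Java sketch of the supplement rules above (not the actual FE code; types and limits follow the description in this CL):

```java
import java.util.List;

// Hypothetical sketch: walk the columns in order and stop the short key at
// the first rule violation; remaining columns become value columns.
class ShortKeyChooser {
    static class Column {
        final String type;   // e.g. "INT", "FLOAT", "DOUBLE", "VARCHAR"
        final int byteSize;
        Column(String type, int byteSize) {
            this.type = type;
            this.byteSize = byteSize;
        }
    }

    static int chooseShortKeyCount(List<Column> columns) {
        final int maxCols = 3, maxBytes = 36;
        int count = 0, bytes = 0;
        for (Column c : columns) {
            if (count >= maxCols) break;
            if (c.type.equals("FLOAT") || c.type.equals("DOUBLE")) {
                break;                       // float/double stop the short key
            }
            if (bytes + c.byteSize > maxBytes) break;
            count++;
            bytes += c.byteSize;
            if (c.type.equals("VARCHAR")) {
                break;                       // varchar can only be the last one
            }
        }
        if (count == 0) {
            throw new IllegalArgumentException(
                    "The first column cannot be float or double");
        }
        return count;
    }
}
```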
For a duplicate table without order by columns, the order by columns are the same as the short key columns.
If the order by columns have been designated, the number of short key columns must be <= the number of order by columns.
This CL mainly supports timezone in dynamic partition:
1. Use the new Java Time API to replace Calendar.
2. Support setting the time zone in dynamic partition parameters.
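An illustrative sketch of both changes (the method and parameter names are hypothetical, not the actual FE code):

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Hypothetical sketch: compute a dynamic partition boundary with java.time
// in a configurable time zone, instead of using Calendar.
class DynamicPartitionTime {
    static String partitionUpperBound(String timeZone, int daysAhead) {
        ZonedDateTime bound = ZonedDateTime.now(ZoneId.of(timeZone))
                .plusDays(daysAhead);
        return bound.format(DateTimeFormatter.ofPattern("yyyy-MM-dd"));
    }
}
```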
* 1. Add enable spilling to the query options and support spilling to disk in Analytic_Eval_Node; FE can turn it on with
set enable_spilling = true;
Now both Sort Node and Analytic_Eval_Node can spill to disk.
2. Delete the merge_sorter code we no longer use.
3. Replace buffered_tuple_stream with buffered_tuple_stream2 in Analytic_Eval_Node and support spilling to disk. Delete the useless code of buffered_block_mgr and buffered_tuple_stream.
4. Add a DataStreamRecvr profile. Move the counters belonging to DataStreamRecvr from the fragment to the DataStreamRecvr profile to make the running profile clearer.
* change some hints in code
* replace disable_spill with enable_spill, which is more compatible with FE