The following statement throws an error:
```
create table test.tbl2
(k1 int, k2 int, k3 float)
duplicate key(k1)
partition by range(k2)
(partition p1 values less than("10"))
distributed by hash(k3) buckets 1
properties('replication_num' = '1');
```
Error: `Only key column can be partition column`
But in a duplicate key table, columns can be partition or distribution columns
even if they are not among the duplicate keys.
This bug was introduced by #3812.
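A minimal sketch of the corrected check, assuming hypothetical FE-side names (`checkPartitionColumn` and the types here are illustrative, not the actual Doris code):
```
// Hypothetical sketch of the corrected FE check; names are illustrative.
void checkPartitionColumn(Column column, KeysType keysType) throws AnalysisException {
    if (keysType == KeysType.DUP_KEYS) {
        return; // duplicate key table: non-key columns may be partition/distribution columns
    }
    if (!column.isKey()) {
        throw new AnalysisException("Only key column can be partition column");
    }
}
```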
When starting FE with the `start_fe.sh --helper xxx` command, do not allow the
helper to point to the FE itself, because this is meaningless and may cause
confusing problems.
1. FE checks the status of the ETL job regularly
1.1 If the status is RUNNING, update the ETL job progress
1.2 If the status is CANCELLED, cancel the load job
1.3 If the status is FINISHED, get the ETL output file paths, update the job state to LOADING, and log the job update info
2. FE sends PushTasks to BE and commits the transaction after all push tasks execute successfully
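A minimal sketch of this polling loop; the type and method names (`LoadJob`, `EtlStatus`, etc.) are illustrative stand-ins for the real FE classes:
```
// Hypothetical polling sketch; types and methods stand in for the real FE code.
void updateEtlStatus(LoadJob job) throws Exception {
    EtlStatus status = job.fetchEtlStatus(); // query the spark cluster for the etl state
    switch (status.getState()) {
        case RUNNING:
            job.updateEtlProgress(status);   // 1.1 refresh progress
            break;
        case CANCELLED:
            job.cancel("etl job cancelled"); // 1.2 cancel the load job
            break;
        case FINISHED:
            job.setEtlOutputPaths(status.getOutputFilePaths()); // 1.3 record output files
            job.updateState(JobState.LOADING);                  //     and log the update
            break;
        default:
            break; // still pending; check again on the next schedule
    }
}
```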
#3433
This bug was introduced by PR #3872.
In that PR, I removed the obj_pool param of the RuntimeProfile constructor,
so the first param is now a std::string.
But DataStreamRecvr accidentally passes a nullptr to that std::string param; it compiles
OK but causes a runtime error.
Fix #3917
Mainly used in the Spark Load process to calculate an approximate deduplication value and then serialize it to a Parquet file.
Tries to keep the same calculation semantics as BE's C++ version.
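A simplified, self-contained sketch of the idea: plain HyperLogLog without Doris's exact register layout or bias corrections (constants and hashing choices are illustrative):
```
// A simplified HyperLogLog: 1024 registers, each tracking the max leading-zero
// rank seen for hashes routed to it. Layout and constants are illustrative and
// not identical to Doris's HLL encoding.
public final class SimpleHll {
    private static final int BUCKET_BITS = 10;
    private static final int NUM_BUCKETS = 1 << BUCKET_BITS; // 1024
    private final byte[] registers = new byte[NUM_BUCKETS];

    // 'hash' must already be a well-mixed 64-bit hash of the input value.
    public void update(long hash) {
        int idx = (int) (hash >>> (64 - BUCKET_BITS));  // high bits choose the register
        long rest = hash << BUCKET_BITS;                // remaining bits determine the rank
        int rank = (rest == 0) ? (64 - BUCKET_BITS + 1)
                               : Long.numberOfLeadingZeros(rest) + 1;
        if (rank > registers[idx]) {
            registers[idx] = (byte) rank;
        }
    }

    // Harmonic-mean estimate; small/large-range bias corrections are omitted.
    public long estimate() {
        double sum = 0.0;
        for (byte r : registers) {
            sum += 1.0 / (1L << r);
        }
        double alpha = 0.7213 / (1.0 + 1.079 / NUM_BUCKETS);
        return Math.round(alpha * NUM_BUCKETS * NUM_BUCKETS / sum);
    }
}
```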
This bug was introduced by PR #3784.
In #3784, I removed `Catalog.getInstance()` and used `Catalog.getCurrentCatalog()` instead.
But actually, there are some places that should use the serving catalog explicitly.
Main changes:
1. Add a new method `getServingCatalog()` to explicitly return the real catalog instance.
2. Fix a compile bug in the broker introduced by #3881
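A rough sketch of the distinction; the field and method names here are assumptions based on the description above, not the exact FE code:
```
// Hypothetical sketch; field names are assumptions based on the description.
public static Catalog getServingCatalog() {
    // Always the real serving catalog, even inside the checkpoint thread.
    return SERVING_CATALOG;
}

public static Catalog getCurrentCatalog() {
    // May return the checkpoint copy when called from the checkpoint thread.
    return isCheckpointThread() ? CHECKPOINT_CATALOG : SERVING_CATALOG;
}
```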
After the user creates a Spark load job whose status is PENDING, FE schedules and submits the Spark ETL job.
1. Begin a transaction
2. Create a SparkLoadPendingTask to submit the ETL job
2.1 Create the ETL job configuration according to https://github.com/apache/incubator-doris/issues/3010#issuecomment-635174675
2.2 Upload the configuration file and job jar to HDFS with the broker
2.3 Submit the ETL job to the Spark cluster
2.4 Wait for the ETL job submission result
3. Update the job state to ETL and log the job update info if the ETL job is submitted successfully
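A condensed sketch of this flow; the helper names are assumptions, not the real task code:
```
// Hypothetical sketch of the pending-task flow described above.
void executePendingTask(SparkLoadJob job) throws Exception {
    job.setTransactionId(beginTransaction(job));      // 1. begin transaction
    EtlJobConfig config = createEtlJobConfig(job);    // 2.1 build the etl job configuration
    uploadWithBroker(config, job.getJobJarPath());    // 2.2 upload config file and job jar to HDFS
    String appId = submitEtlJob(config);              // 2.3 submit the etl job to the spark cluster
    waitForSubmissionResult(appId);                   // 2.4 wait for the submission result
    job.updateState(JobState.ETL);                    // 3. update job state and log the change
}
```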
#3433
This configuration is specifically used to limit the timeout setting for stream load.
It prevents failed stream load transactions from remaining uncancelable for a
long time because of a user's overly large timeout setting.
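A hypothetical validation sketch of the idea; the constants stand in for the actual config values:
```
// Hypothetical sketch; the default and cap values below are assumptions, not
// the real FE config defaults.
public final class StreamLoadTimeout {
    static final int DEFAULT_TIMEOUT_SECOND = 600;        // assumed default
    static final int MAX_TIMEOUT_SECOND = 3 * 24 * 3600;  // assumed stream-load cap

    // Rejects user-specified timeouts above the stream-load-specific cap so a
    // failed transaction cannot linger for, say, a multi-day generic timeout.
    static int checkedTimeout(int userTimeoutSecond) {
        int timeout = userTimeoutSecond > 0 ? userTimeoutSecond : DEFAULT_TIMEOUT_SECOND;
        if (timeout > MAX_TIMEOUT_SECOND) {
            throw new IllegalArgumentException("stream load timeout " + timeout
                    + "s exceeds the configured limit " + MAX_TIMEOUT_SECOND + "s");
        }
        return timeout;
    }
}
```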
Main changes:
1. Fix the bug in `update_status(status)` of `PlanFragmentExecutor`.
2. When the FE Coordinator executes `execRemoteFragmentAsync()`, if it finds an RPC error, return a Future with an error code instead of throwing an exception.
3. Protect the `_status` in RuntimeState with a lock.
4. Move the `_runtime_profile` of RuntimeState before the `_obj_pool`, so that the profile will be
destructed after the object pool.
5. Remove the unused `ObjectPool` param in the RuntimeProfile constructor. If it is not removed,
RuntimeProfile will depend on the `_obj_pool` in RuntimeState.
This commit fixes a bug where the broker cannot read the full buffer length when the buffer size is set larger than 128K.
This bug causes the data size returned by a pread request to always be less than 128K.
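The underlying pitfall is that a single `read()` call may return fewer bytes than requested, so the reader must loop until the buffer is filled or EOF is reached. A minimal self-contained illustration:
```
import java.io.IOException;
import java.io.InputStream;

public final class ReadFully {
    // Reads up to 'length' bytes, looping because a single read() may return
    // fewer bytes than requested; returns the number of bytes actually read.
    static int readFully(InputStream in, byte[] buf, int length) throws IOException {
        int off = 0;
        while (off < length) {
            int n = in.read(buf, off, length - off);
            if (n < 0) {
                break; // EOF before the buffer was filled
            }
            off += n;
        }
        return off;
    }
}
```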
* Forbid float columns in short keys
When the user does not specify the short key columns, Doris automatically selects them.
However, Doris does not support float or double as short key columns, so when selecting the short key columns, Doris should avoid choosing those columns as key columns.
There can be no more than 3 short key columns, occupying no more than 36 bytes.
CreateMaterializedView, AddRollup, and CreateDuplicateTable all need to forbid float columns in short keys.
If a float column is encountered during the selection process, all subsequent columns become value columns.
Also, float and double cannot be short key columns, while Doris requires at least one short key column.
So the type of the first column cannot be float or double.
If a varchar column is a short key column, it must be the last short key column.
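A sketch of these selection rules, assuming hypothetical column helpers rather than the real FE types:
```
// Hypothetical sketch of automatic short key selection under the stated limits;
// Column and its helpers stand in for the real FE types.
int selectShortKeyColumnCount(List<Column> sortColumns) {
    final int maxShortKeyColumnCount = 3; // at most 3 short key columns
    final int maxShortKeyBytes = 36;      // occupying at most 36 bytes in total
    int count = 0;
    int bytes = 0;
    for (Column col : sortColumns) {
        if (count >= maxShortKeyColumnCount
                || col.isFloatingPointType()                   // float/double cannot be a short key
                || bytes + col.byteSize() > maxShortKeyBytes) { // would exceed the 36-byte cap
            break;
        }
        count++;
        bytes += col.byteSize();
        if (col.isVarchar()) {
            break; // a varchar column must be the last short key column
        }
    }
    return count; // callers still require count >= 1, so the first column cannot be float/double
}
```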
Fixed #3811
For a duplicate table without order-by columns, the order-by columns are the same as the short key columns.
If the order-by columns have been designated, the count of short key columns must be <= the count of order-by columns.
This CL mainly supports time zones in dynamic partition:
1. Use the new Java Time API to replace Calendar.
2. Support setting the time zone in the dynamic partition parameters.
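For example, a partition bound can be computed with the Java Time API in a configured time zone; this is a sketch, and the exact property name and date pattern are assumptions:
```
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public final class DynamicPartitionBound {
    // Computes a daily partition's bound relative to "today" in the given time
    // zone, using java.time instead of Calendar. 'offset' is days from today.
    static String partitionBound(String timeZone, int offset) {
        ZonedDateTime now = ZonedDateTime.now(ZoneId.of(timeZone));
        return now.plusDays(offset).format(DateTimeFormatter.ofPattern("yyyyMMdd"));
    }

    public static void main(String[] args) {
        // e.g. the bound of tomorrow's partition in UTC+8 (zone value assumed
        // to come from a property like dynamic_partition.time_zone)
        System.out.println(partitionBound("Asia/Shanghai", 1));
    }
}
```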
This fix is based on a UBSAN unit test. So if we create and use class objects in a different way, we may see `runtime error: load of value XX, which is not a valid value for type 'YYY'` warnings again.
Unit tests should be built in DEBUG or a *SAN mode (at least DEBUG). RELEASE mode adds -DNDEBUG, which turns off DCHECKs/asserts/debug checks.
* 1. Add enable spilling to the query options and support spilling to disk in Analytic_Eval_Node. FE can turn on spilling via
set enable_spilling = true;
Now both the Sort Node and Analytic_Eval_Node can spill to disk.
2. Delete the merge_sorter code we no longer use.
3. Replace buffered_tuple_stream with buffered_tuple_stream2 in Analytic_Eval_Node and support spilling to disk. Delete the unused code of buffered_block_mgr and buffered_tuple_stream.
4. Add a DataStreamRecvr profile. Move the counters belonging to DataStreamRecvr from the fragment profile to the DataStreamRecvr profile to make the running profile clearer.
* Change some hints in the code
* Replace disable_spill with enable_spill, which is more compatible with FE
This plugin is used to output data from Logstash to Doris.
It uses the HTTP protocol to interact with the Doris FE HTTP interface
and loads data through Doris's stream load.
1. The table includes key columns of double/float type.
2. When the checksum task runs, all key columns are used for comparison.
3. schema.column(idx) of double/float type is NULL.
#3735