This CL includes:
* Change the column metadata to a tree structure (see the sketch below).
* Refactor segment_v2.ColumnReader and segment_v2.ColumnWriter to support complex types.
* Implement reading and writing of the array type.
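A minimal illustrative sketch of what a tree-structured column meta might look like for an ARRAY column. Java is used here only for readability; the actual segment V2 metadata lives in the BE (C++/protobuf), and all names below are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: illustrates a tree-shaped column meta, not the real BE structures.
class ColumnMetaNode {
    final String name;
    final String type;                                 // e.g. "ARRAY", "INT", "BIGINT"
    final List<ColumnMetaNode> children = new ArrayList<>();

    ColumnMetaNode(String name, String type) {
        this.name = name;
        this.type = type;
    }

    // An ARRAY<INT> column becomes a parent node with child nodes, e.g. an
    // item column plus an offsets column, instead of a single flat entry.
    static ColumnMetaNode arrayOfIntExample() {
        ColumnMetaNode array = new ColumnMetaNode("c_array", "ARRAY");
        array.children.add(new ColumnMetaNode("item", "INT"));
        array.children.add(new ColumnMetaNode("offsets", "BIGINT"));
        return array;
    }
}
```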
In a colocate join, the memory limit of each instance is usually lower than exec_mem_limit,
which can lead to query failures (Memory exceed limit).
Since the purpose of resetting the colocate-join memory limit
(/fe/fe-core/src/main/java/org/apache/doris/qe/Coordinator.java) is unclear to me,
I just change the default value of query_colocate_join_memory_limit_penalty_factor from 8 to 1 as a hotfix.
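A simplified sketch of the effect, assuming the penalty factor divides the per-instance memory limit; the numbers are for illustration only, and this is not the actual Coordinator code:

```java
public class ColocateMemLimitSketch {
    public static void main(String[] args) {
        long execMemLimit = 2L * 1024 * 1024 * 1024;        // exec_mem_limit, e.g. 2 GB
        int penaltyFactor = 8;                              // old default of query_colocate_join_memory_limit_penalty_factor
        System.out.println(execMemLimit / penaltyFactor);   // 256 MB per instance: easy to exceed on a large colocate join

        penaltyFactor = 1;                                  // new default proposed by this hotfix
        System.out.println(execMemLimit / penaltyFactor);   // each instance keeps the full exec_mem_limit
    }
}
```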
I deploy BE nodes in containers, all using the same distributed disk.
With the current logic, data migration leads to errors.
For example, my distributed disk has 10T of capacity and 9T of it has already been used by other services;
at this point, the current logic assumes that all of the 9T used space belongs to the BE nodes.
A bug introduced in PR #4825 causes `schema_change` to report an error:
```
schema_change.cpp:1271] fail to check row num! source_rows=1, merged_rows=0, filtered_rows=0, new_index_rows=0
schema_change.cpp:1921] failed to process the version. version=2-2
schema_change.cpp:1615] failed to alter tablet. base_tablet=44643.1383650721.b140317f6662c1e0-65bcbc87db8d22bc, drop new_tablet=45680.1530531459.474e41f3dd538fb6-9284085daac24f83
```
This reverts commit c8df76a807b4856f71bcb6a3a023849f3bf294d7.
The reverted commit has problems handling predicates like:
`k1 = "2020-10-10 10:00:00.000"`
This is a valid predicate, but the FE Datetime type does not support milli- or microseconds, so the commit treated the value as an invalid datetime.
So we revert it, and may look for a better solution later.
This CL refactors the storage medium migration task process in BE.
I did not modify the execution logic; I only extracted part of the logic
from the migration task and put it in task_work_pool.
In this way, the migration task is only responsible for migrating
a specified tablet to a specified data dir.
Later, we can use this task to migrate tablets between different disks. #4476
We use 'LastStartTime' in the backends list to check whether a BE has restarted
unexpectedly, but after an FE restart it is reset to the BE's first heartbeat time.
It would be better to set it to the BE's actual start time.
It would be helpful to monitor the count of fragments canceled due to timeout
when an issue causes fragments to fail or to be queued for too long.
This mainly includes:
- `OLAP_SCAN_NODE` profile layering: `OLAP_SCAN_NODE`, `OlapScanner`, and `SegmentIterator`.
- Delete meaningless statistics, mainly in scan_node.cpp.
- Add a `RowsConditionsFiltered` statistic, split out from `RowsDelFiltered`; it counts the rows filtered by the various column indexes and exists only in segment V2.
- Update the documentation accordingly and improve readability.
* [Broker Load] Ignore empty files when the file format is parquet or orc.
We cannot read empty parquet or orc files, so we should skip them
when doing broker load.
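A minimal sketch of the skip logic, assuming a hypothetical file-status object with a size field; this is not the real broker load scanner code:

```java
import java.util.ArrayList;
import java.util.List;

class SkipEmptyFileSketch {
    // Hypothetical stand-in for the broker file status returned when listing files.
    static class BrokerFile {
        final String path;
        final long size;
        BrokerFile(String path, long size) { this.path = path; this.size = size; }
    }

    static List<BrokerFile> selectReadableFiles(List<BrokerFile> files, String format) {
        boolean columnar = "parquet".equalsIgnoreCase(format) || "orc".equalsIgnoreCase(format);
        List<BrokerFile> selected = new ArrayList<>();
        for (BrokerFile f : files) {
            if (columnar && f.size == 0) {
                continue;   // a zero-length parquet/orc file has no readable footer; skip it
            }
            selected.add(f);
        }
        return selected;
    }
}
```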
* [Bug] Fix some bugs of load job scheduler
1. The "fix load meta bug" logic should have been removed since 0.12.
2. The load task thread pool's waiting queue should be as long as the desired number of pending jobs.
3. Submit the load task outside the database lock to avoid holding the lock for a long time (see the sketch below).
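A hedged sketch of item 3, using a plain ReentrantLock and thread pool as stand-ins for the database lock and the load task executor; this is not the actual FE scheduler code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.ReentrantLock;

class SubmitOutsideLockSketch {
    private final ReentrantLock dbLock = new ReentrantLock();                  // stand-in for the database lock
    private final ExecutorService loadTaskExecutor = Executors.newFixedThreadPool(4);

    void scheduleJob(String jobLabel) {
        Runnable task;
        dbLock.lock();
        try {
            // only build the task while holding the lock; do not run or submit it here
            task = () -> System.out.println("executing load task for " + jobLabel);
        } finally {
            dbLock.unlock();
        }
        // submit after the lock is released, so a slow or blocking submit cannot hold it
        loadTaskExecutor.submit(task);
    }
}
```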
* Update fe-idea-dev.md
Use `brew install thrift@0.9` to install thrift 0.9.3.1.
`brew edit thrift090 | head` shows that thrift@0.9 uses thrift 0.9.3.1.
* [Refactor] Remove the unnecessary if statement
From the `ExecutorService` Javadoc for `Future<?> submit(Runnable task)`:
Submits a Runnable task for execution and returns a Future representing that task. The Future's get method will return null upon successful completion.
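A hypothetical sketch of the kind of check this Javadoc makes redundant; the exact statement removed by this refactor is not reproduced here:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class RedundantCheckSketch {
    private static final ExecutorService executor = Executors.newSingleThreadExecutor();

    static void runTask(Runnable task) throws Exception {
        Future<?> future = executor.submit(task);
        // Redundant: for a Runnable, get() returns null on successful completion,
        // so this branch can never be taken on success; a failure surfaces as an
        // ExecutionException thrown by get() rather than a non-null return value.
        if (future.get() != null) {
            throw new IllegalStateException("unreachable on success");
        }
    }
}
```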
When the `Load Job Task Queue` is full, continuing to submit jobs to the queue causes a
`RejectedExecutionException`.
But the `callback.onTaskFailed` function does not catch this exception, so
re-submitting the job fails and its status is never updated to failed.
issue: #4795
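The general pattern the fix points at, sketched below with a bounded ThreadPoolExecutor; the callback interface is a hypothetical stand-in for the real `callback.onTaskFailed`:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class SubmitWithRejectionHandling {
    interface LoadTaskCallback {
        void onTaskFailed(Exception reason);   // hypothetical: should mark the job as failed
    }

    // Bounded waiting queue: once it is full, submit() throws RejectedExecutionException.
    private final ExecutorService pool = new ThreadPoolExecutor(
            2, 2, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(16));

    void submitLoadTask(Runnable task, LoadTaskCallback callback) {
        try {
            pool.submit(task);
        } catch (RejectedExecutionException e) {
            // without this catch the exception escapes, the re-submit silently fails,
            // and the job's status is never updated to failed
            callback.onTaskFailed(e);
        }
    }
}
```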
When LRUCache inserts and evicts a large number of entries, there are
frequent calls to HandleTable::remove(e->key, e->hash), which looks up
the entry in the hash table. Since we already know the entry 'e' to
remove, we can unlink it directly from the hash table's collision list
if that list is doubly linked.
This patch refactors the collision list into a doubly linked list; the simple
benchmark CacheTest.SimpleBenchmark shows that the time cost is reduced by about
18% in my test environment.
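An illustrative sketch of the data-structure change (written in Java only for readability; the actual HandleTable is C++ in the BE). With prev/next pointers in the collision chain, a known entry can be unlinked in O(1) without re-hashing its key:

```java
class DoublyLinkedHandleTableSketch {
    static class Entry {
        int hash;
        String key;
        Entry prev;   // new back pointer in the collision chain
        Entry next;
    }

    private final Entry[] buckets = new Entry[1024];

    // O(1) removal: we already hold 'e', so no lookup by (key, hash) is needed.
    void remove(Entry e) {
        if (e.prev != null) {
            e.prev.next = e.next;
        } else {
            buckets[e.hash & (buckets.length - 1)] = e.next;   // 'e' was the bucket head
        }
        if (e.next != null) {
            e.next.prev = e.prev;
        }
        e.prev = null;
        e.next = null;
    }
}
```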
For SQL like
`select a join b union select c join d`,
if a join b is a colocate join and c join d is also a colocate join, the query may fail
with an error like:
`failed to get tablet. tablet_id=26846, with schema_hash=398972982, reason=tablet does not exist`
1. Add a search box in the left tree view of Playground.
2. Fix some visual bugs in the UI.
3. Fix the bug that links fail in the QueryProfile view.
4. Fix the bug that the cookie is always invalid.
5. Set the cookie to HTTP_ONLY to make it safer.
`select day from test where day='2020-10-32'`
Table 'test' is partitioned by day. In this case, '2020-10-32' is taken as a CastExpr rather than a LiteralExpr,
so the condition "day='2020-10-32'" is not recognized as a partition filter
and the query scans all partitions. To avoid scanning all partitions, it is better to reject invalid date values.
issue: #4755
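One possible direction, sketched below with java.time purely as an illustration of the idea (reject the invalid date string during analysis instead of silently wrapping it in a cast); this is not the actual FE analyzer code:

```java
import java.time.LocalDate;
import java.time.format.DateTimeParseException;

class DateLiteralCheckSketch {
    // Returns false for strings like "2020-10-32", so the predicate can be
    // reported as an error instead of falling back to a CastExpr that defeats
    // partition pruning.
    static boolean isValidDate(String s) {
        try {
            LocalDate.parse(s);   // ISO yyyy-MM-dd
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidDate("2020-10-10"));   // true
        System.out.println(isValidDate("2020-10-32"));   // false
    }
}
```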
1.
When we decommission some BEs with SSD disks,
if there are no SSD disks on the remaining BEs, it will be impossible to select a suitable destination path.
In this case, we need to ignore the storage medium property and try to select the destination path again.
Setting `isSupplement` to true ignores the storage medium property.
2.
When all BE nodes hosting a tablet's replicas are being decommissioned
and the task is a VERSION_INCOMPLETE task,
no suitable dest replica can be selected.
In this case, we should try to convert the task to a REPLICA_MISSING task and schedule it again.
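A hedged sketch of the two fallbacks above; every name here (chooseDestPath, TaskType, DestPath, ...) is hypothetical and only illustrates the decision flow, not the actual TabletScheduler API:

```java
class DecommissionFallbackSketch {
    enum TaskType { VERSION_INCOMPLETE, REPLICA_MISSING }
    static class DestPath { }
    static class Task { TaskType type = TaskType.VERSION_INCOMPLETE; }

    DestPath schedule(Task task, String requiredMedium) {
        DestPath path = chooseDestPath(requiredMedium, false /* isSupplement */);
        if (path == null) {
            // fallback 1: no remaining BE has the required storage medium,
            // so ignore the medium (isSupplement = true) and try again
            path = chooseDestPath(requiredMedium, true);
        }
        if (path == null && task.type == TaskType.VERSION_INCOMPLETE) {
            // fallback 2: all replicas sit on decommissioned BEs, so no suitable
            // dest replica exists; convert the task to REPLICA_MISSING and let
            // it be scheduled again
            task.type = TaskType.REPLICA_MISSING;
        }
        return path;
    }

    private DestPath chooseDestPath(String medium, boolean isSupplement) {
        return null;   // stub for the sketch
    }
}
```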
This config name is very close to another config name and the two are hard to distinguish,
so this PR renames it.
The documentation has also been updated accordingly.