At present, when some RPC errors occur, the client cannot obtain useful error information.
This CL changes the RPC error returned to the client to look like this:
```
ERROR 1064 (HY000): errCode = 2, detailMessage = there is no scanNode Backend. [10002: in black list(A error
occurred: errorCode=2001 errorMessage:Channel inactive error!)]
ERROR 1064 (HY000): failed to send brpc batch, error=The server is overcrowded, error_text=[E1011]The server is
overcrowded @xx.xx.xx.xx:8060 [R1][E1011]The server is overcrowded @xx.xx.xx.xx:8060 [R2][E1011]The server is
overcrowded @xx.xx.xx.xx:8060 [R3][E1011]The server is overcrowded @xx.xx.xx.xx:8060, client: yy.yy.yy.yy
```
This is the last PR of proposal #4308.
1. Add a new FE config `enable_http_server_v2` to enable the new HTTP server implementation. The default value is false.
2. Add a new FE config `http_api_extra_base_path` so that a base path can be set for the Frontend UI.
3. Refactor the new HTTP API response body. The returned HTTP status code is always 200, and an internal code in the response body indicates the specific error, as illustrated below.
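For illustration, a response body under this convention might look like the following; the exact field names here are an assumption for the sketch, not a guaranteed format:
```
{
    "msg": "Bad Request",
    "code": 1,
    "data": "Missing required parameter",
    "count": 0
}
```
The HTTP status code stays 200 even for this failure; clients check `code` instead.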
1. Support limit clause push-down for both ODBC tables and MySQL tables.
2. Refactor ODBC Scan Node: move `build_connect_string` and `query_string` from BE to FE to make them easier to modify.
Currently, there are M threads for base compaction and N threads for cumulative compaction on each disk.
Too many compaction tasks may run out of memory, so the maximum concurrency of running compaction tasks
is limited by a semaphore.
But if the running tasks themselves consume too much memory, the semaphore cannot defend against it. In addition,
reducing concurrency to avoid OOM means some compaction tasks cannot be executed in time, and we may then
face heavier compactions later.
Therefore, limiting concurrency alone is not enough.
The strategy proposed in #3624 may be effective in solving the OOM.
A CompactionPermitLimiter is used to limit compaction, following a single-producer/multi-consumer model.
The producer generates compaction tasks and acquires `permits` for each task.
A compaction task that holds its `permits` is executed in the thread pool, and each finished task
releases its `permits`.
`permits` must be acquired before a compaction task can execute. When the sum of `permits`
held by executing compaction tasks reaches a threshold, subsequent compaction tasks are no longer allowed
until some `permits` are released. The tablet compaction score is used as a task's `permits` here.
To some extent, memory consumption can be limited by setting an appropriate `permits` threshold; a sketch of such a limiter follows.
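A minimal sketch of how such a permit limiter could work, with hypothetical `request`/`release` method names (the actual implementation may differ):
```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Hypothetical sketch: blocks task submission while the sum of permits
// held by running compaction tasks is at or above the threshold.
class CompactionPermitLimiter {
public:
    explicit CompactionPermitLimiter(int64_t max_permits)
            : _max_permits(max_permits) {}

    // The producer calls this before submitting a task to the thread pool.
    // `permits` is the tablet's compaction score for this task.
    void request(int64_t permits) {
        std::unique_lock<std::mutex> lock(_mutex);
        // Let an oversized task run when nothing else holds permits,
        // so a single huge compaction cannot be starved forever.
        _cv.wait(lock, [&] {
            return _used_permits == 0 || _used_permits + permits <= _max_permits;
        });
        _used_permits += permits;
    }

    // Each finished compaction task releases its permits.
    void release(int64_t permits) {
        {
            std::lock_guard<std::mutex> lock(_mutex);
            _used_permits -= permits;
        }
        _cv.notify_all();
    }

private:
    std::mutex _mutex;
    std::condition_variable _cv;
    const int64_t _max_permits;  // threshold, e.g. from a BE config
    int64_t _used_permits = 0;   // permits held by running tasks
};
```
Using the compaction score as `permits` ties admission roughly to the amount of work a task implies, so the threshold bounds memory consumption only approximately.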
Support the new syntax `CREATE TABLE [IF NOT EXISTS] [db_name].table_name AS [db_name2].table_name2;`
to create a new table from an existing table with the same table schema.
ISSUE: #4355
Fix #4692
The cause of this bug is that a case-when statement may produce an implicit `CastExpr<SlotRef>`
as a SelectListItem in the analyze step.
These `CastExpr<SlotRef>` in the SelectList are not re-analyzed after the rewrite step,
which results in incorrect self-incrementing SlotDescriptor IDs in `resultExprs`.
So we need to reset the analysis state of `CastExpr<SlotRef>` in the rewrite step.
#4619
Add time_round functions that provide `time_floor` & `time_ceil` at each time unit.
Fix two related bugs:
- #4618
- Fix `struct TimeInterval` to use `int64_t` instead of `int32_t`, since a second-level diff can overflow `int32_t` (see the sketch below).
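To see why, note that `INT32_MAX` (about 2.1e9) covers only roughly 68 years' worth of seconds. A minimal demonstration, with `TimeInterval` simplified here to just the field relevant to the overflow:
```cpp
#include <cstdint>
#include <iostream>

// Simplified sketch: only the overflowing field is shown.
struct TimeInterval {
    int64_t second;  // was int32_t, which overflows for diffs beyond ~68 years
};

int main() {
    // A second-level diff spanning 100 years already exceeds INT32_MAX.
    TimeInterval diff{100LL * 365 * 24 * 3600};  // 3,153,600,000 seconds
    std::cout << diff.second << " > " << INT32_MAX << "\n";
    return 0;
}
```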
Use a static local variable instead of creating the object on every call.
The time cost of the newly added unit benchmark test drops
from about 60 seconds to about 10 seconds.
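The pattern, as a hedged sketch (the regex and function name are illustrative, not the actual code changed in this PR):
```cpp
#include <regex>
#include <string>

bool matches_date(const std::string& s) {
    // `static` compiles the regex once, on first use; previously an
    // equivalent object was constructed on every call, dominating the cost.
    static const std::regex kDateRe(R"(\d{4}-\d{2}-\d{2})");
    return std::regex_match(s, kDateRe);
}
```
Function-local statics are initialized thread-safely since C++11, so the pattern is safe under concurrent calls.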
Running `python show_segment_status.py` crashes when there are external engine tables in Doris.
The main reason is that external engine tables do not have any indexes or partitions, so they should be ignored.
When a dynamically partitioned table is created, it takes some time to create the partitions.
An exception needs to be thrown when users try to load data into such a table before the partitions exist;
otherwise the load will be stuck in the loading phase forever.
* When different partitions of a table are updated frequently, the partition key list in the cache becomes
discontinuous, and a partition key in the request can miss the key list in the cache, causing an out-of-bounds
access that crashes the BE.
* Add some unit test cases, including cases that miss the boundary values of the cache.
Describe the bug
DATA_TYPE in information_schema.columns is not compatible with MySQL metadata.
To Reproduce
Steps to reproduce the behavior:
select * from information_schema.columns
Expected behavior
The result of data_type should be (int, decimal, char, varchar, ...), but Doris returns (int(11), varchar(20), ...).
The extra length suffix affects some BI systems, and upper-layer systems cannot get the correct type.
After a BE goes down, the transactions related to it should be cleared.
The hostname is used to retrieve those transactions,
but the hostname is recorded as hostname + port, so the port
should be removed first.
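A hedged sketch of the fix, written in C++ for illustration (the helper name is hypothetical):
```cpp
#include <string>

// Strip the port suffix so a recorded "host:port" can match a hostname,
// e.g. "10.0.0.1:9050" -> "10.0.0.1".
std::string strip_port(const std::string& host_port) {
    size_t pos = host_port.rfind(':');
    return pos == std::string::npos ? host_port : host_port.substr(0, pos);
}
```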