* [Fix](inverted index) fix memory leak when the inverted index writer does not finish correctly
* [Update](inverted index) use smart pointers to avoid memory leaks (see the sketch below)
* [Chore](format) code format
---------
Co-authored-by: airborne12 <airborne12@gmail.com>
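The pattern behind the smart-pointer change is plain RAII. A minimal sketch, using a hypothetical `InvertedIndexWriter` type (not the real Doris class):

```cpp
#include <memory>

// Hypothetical writer type for illustration only.
struct InvertedIndexWriter {
    void add_document() { /* may fail or throw */ }
    void finish() { /* flush and seal the index */ }
    ~InvertedIndexWriter() { /* release buffers and file handles */ }
};

// Raw-pointer version: any early return or exception between `new` and
// `delete` leaks the writer.
void build_index_raw() {
    auto* writer = new InvertedIndexWriter();
    writer->add_document();
    writer->finish();
    delete writer;
}

// Smart-pointer version: the destructor runs on every exit path, so the
// writer is released even when it does not finish correctly.
void build_index_safe() {
    auto writer = std::make_unique<InvertedIndexWriter>();
    writer->add_document();
    writer->finish();
}
```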
Before:
mysql [(none)]>select cast("10:10:10" as time);
+-------------------------------+
| CAST('10:10:10' AS TIMEV2(0)) |
+-------------------------------+
| 00:00:00                      |
+-------------------------------+
After:
mysql [(none)]>select cast("10:10:10" as time);
+-------------------------------+
| CAST('10:10:10' AS TIMEV2(0)) |
+-------------------------------+
| 10:10:10                      |
+-------------------------------+
In the past, we supported this syntax.
mysql [(none)]>select cast("2023:05:01 13:14:15" as time);
+------------------------------------------+
| CAST('2023:05:01 13:14:15' AS TIMEV2(0)) |
+------------------------------------------+
| 13:14:15                                 |
+------------------------------------------+
However, "10:10:10" is also a valid datetime.
mysql [(none)]>select cast("10:10:10" as datetime);
+-----------------------------------+
| CAST('10:10:10' AS DATETIMEV2(0)) |
+-----------------------------------+
| 2010-10-10 00:00:00               |
+-----------------------------------+
So this PR adjusts the parsing order: when casting a string to TIME, the time format is tried before the datetime format.
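A minimal sketch of the adjusted order, using hypothetical parse helpers (not Doris's real cast code):

```cpp
#include <cstdint>
#include <cstdio>
#include <optional>
#include <string>

// Hypothetical helper: returns seconds-of-day if the whole string is "HH:MM:SS".
std::optional<int64_t> try_parse_time(const std::string& s) {
    int h, m, sec;
    char tail;
    if (std::sscanf(s.c_str(), "%d:%d:%d%c", &h, &m, &sec, &tail) == 3 &&
        h >= 0 && h < 24 && m >= 0 && m < 60 && sec >= 0 && sec < 60) {
        return h * 3600 + m * 60 + sec;
    }
    return std::nullopt;
}

// Stub for the datetime path: the real parser accepts strings like
// "2023:05:01 13:14:15", and the cast would take the time-of-day part.
std::optional<int64_t> try_parse_datetime(const std::string&) {
    return std::nullopt;  // stub for illustration
}

std::optional<int64_t> cast_string_to_time(const std::string& s) {
    // Try the plain time format first. Trying datetime first is what turned
    // "10:10:10" into the datetime 2010-10-10 00:00:00, whose time-of-day
    // part is 00:00:00.
    if (auto t = try_parse_time(s)) return t;
    // Fall back to datetime parsing so "2023:05:01 13:14:15" still yields
    // 13:14:15 (the time-of-day part of the parsed datetime).
    return try_parse_datetime(s);
}
```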
Refactoring the filtering conditions in the current ExecNode from an expression tree into an array simplifies adding runtime filters: it eliminates complex merge operations and removes the requirement for the frontend to combine expressions into a single entity.
With the conditions represented as an array, each condition can be treated individually. The array stores the individual conditions, and the runtime-filter logic appends new ones and iterates through the array to apply them, with no merging logic needed.
This refactoring simplifies the codebase, improves readability, and separates the conditions into discrete entities, making them easier to manipulate and manage within the execution node.
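A minimal sketch of the difference, with hypothetical expression types (not the actual Doris classes):

```cpp
#include <memory>
#include <vector>

// Hypothetical expression type; evaluates to a boolean per row.
struct VExpr { /* ... */ };
using VExprPtr = std::shared_ptr<VExpr>;

struct ExecNode {
    // Before: a single conjunct tree such as AND(AND(c1, c2), c3); adding a
    // runtime filter required merging it into the tree with a new AND node,
    // and the frontend had to combine the conditions into one expression.
    // After: a flat list; a row passes when every entry evaluates to true.
    std::vector<VExprPtr> conjuncts;

    // Adding a runtime filter is now a simple append.
    void add_runtime_filter(VExprPtr filter) {
        conjuncts.push_back(std::move(filter));
    }
};
```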
We encountered a confusing situation where the buffered reader was trapped in an endless loop when calling read_at. It turned out the cause was that the returned data size was less than requested.
As the following picture shows, the actual data size is about 2MB, but when we called read_at it only retrieved about 1MB.
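A minimal sketch of handling short reads, with a hypothetical reader interface (not the actual Doris buffered reader):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical reader interface for illustration.
struct FileReader {
    // Returns bytes actually read, 0 on EOF, negative on error; may legally
    // return fewer bytes than requested (a "short read").
    virtual int64_t read_at(int64_t offset, char* buf, size_t nbytes) = 0;
    virtual ~FileReader() = default;
};

// Loop until the full range is read, advancing by the bytes actually
// returned. Assuming the full `nbytes` always comes back is what caused
// the endless loop: the caller kept re-requesting the same range.
bool read_fully(FileReader& r, int64_t offset, char* buf, size_t nbytes) {
    size_t done = 0;
    while (done < nbytes) {
        int64_t n = r.read_at(offset + done, buf + done, nbytes - done);
        if (n <= 0) return false;  // EOF or error before the range was filled
        done += static_cast<size_t>(n);
    }
    return true;
}
```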
Before: the node waited until it had retrieved all data from its child, then sent the data to its parent.
Now: data from the child that does not require sorting can be sent to the parent immediately.
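A minimal sketch of the streaming behavior, with hypothetical types (the real node's logic differs):

```cpp
#include <vector>

// Hypothetical block/batch type for illustration.
struct Block { /* a batch of rows */ };

struct StreamingSortNode {
    std::vector<Block> buffered;

    // Called for each block pulled from the child. `needs_sort` would be
    // determined by the plan (e.g. the data is already ordered on the key).
    // Returns the block to forward to the parent immediately, or nullptr
    // when it must be buffered until the sort completes.
    Block* on_child_block(Block* block, bool needs_sort) {
        if (!needs_sort) return block;  // stream to the parent right away
        buffered.push_back(*block);     // old behavior: hold everything back
        return nullptr;
    }
};
```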
* [Bug](point query) call checkAndSetPointQuery before checkEnableTwoPhaseRead
1. checkEnableTwoPhaseRead relies on the short-circuit flag
2. add more metrics to display the lookup profile
* fix rebase
#18976 introduced a merge-small-IO facility to optimize performance, which is used by the parquet reader.
This PR supports this facility in the orc reader. The current ORC reader implementation needs to reposition the parent present stream when reading lazy columns in the lazy-materialization facility, so we make it work by removing `DCHECK_GE(offset, cached_data.end_offset)`.
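A minimal sketch of the merge-small-IO idea, with hypothetical tuning knobs `max_gap` and `max_merged` (not the actual Doris implementation):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct IORange {
    int64_t offset;
    int64_t length;
};

// Merge adjacent/nearby small reads into larger ones so that many tiny
// stream reads (typical for ORC/Parquet column chunks) become a few
// sequential IOs against the underlying file.
std::vector<IORange> merge_small_ranges(std::vector<IORange> ranges,
                                        int64_t max_gap, int64_t max_merged) {
    std::sort(ranges.begin(), ranges.end(),
              [](const IORange& a, const IORange& b) { return a.offset < b.offset; });
    std::vector<IORange> merged;
    for (const IORange& r : ranges) {
        if (!merged.empty()) {
            IORange& last = merged.back();
            int64_t end = last.offset + last.length;
            // Merge if the gap is small and the result stays under the cap.
            if (r.offset - end <= max_gap &&
                r.offset + r.length - last.offset <= max_merged) {
                last.length = std::max(end, r.offset + r.length) - last.offset;
                continue;
            }
        }
        merged.push_back(r);
    }
    return merged;
}
```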
Fix errors when inserting string/date/datetime values into SQLServer:
ERROR 1105 (HY000): errCode = 2, detailMessage = (172.21.0.101)[INTERNAL_ERROR]UdfRuntimeException: JDBC executor sql has error:
CAUSED BY: SQLServerException: Invalid column name '2021-10-30'.
When double quotes enclose a string value, SQL Server parses it as a column name (a quoted identifier), so we should enclose string values in single quotes.
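A minimal sketch of the quoting rule, as a hypothetical helper (not the actual JDBC-connector code):

```cpp
#include <string>

// Wrap a string literal for SQL Server: use single quotes and escape any
// embedded single quote by doubling it. Double quotes would be parsed as a
// quoted identifier (column name), which is what produced the
// "Invalid column name '2021-10-30'" error above.
std::string quote_sqlserver_literal(const std::string& v) {
    std::string out = "'";
    for (char c : v) {
        out += c;
        if (c == '\'') out += '\'';  // '' is the escape for ' inside a literal
    }
    out += "'";
    return out;
}
// quote_sqlserver_literal("2021-10-30") yields '2021-10-30'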
1. Before this PR, if a rowset did not contain a column that should be read, the column for the related SlotDescriptor was filled via `insert_default`, but that is not the real default value. The real default-value information should be provided by the frontend side (see the sketch after this list).
2. Support fetch when light schema change is not enabled, but disable it for the AGG and UNIQUE MOR models.
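A minimal sketch of preferring the FE-provided default, with hypothetical column and slot types (not the actual Doris classes):

```cpp
#include <optional>
#include <string>

// Hypothetical types for illustration.
struct IColumn {
    void insert_default() { /* append the type's zero/empty value */ }
    void insert_from_literal(const std::string&) { /* parse and append */ }
};
struct SlotDescriptor {
    // Default-value text shipped from the FE, if the column declares one.
    std::optional<std::string> default_value;
};

// When the rowset lacks the column, use the FE-provided default instead of
// the type's zero value, which is what a bare insert_default() produced.
void fill_missing_column(IColumn& col, const SlotDescriptor& slot) {
    if (slot.default_value) {
        col.insert_from_literal(*slot.default_value);
    } else {
        col.insert_default();
    }
}
```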
When FE is deployed on a virtual machine and CN is deployed on Kubernetes, FE needs to use a proxy IP to communicate with
the CN nodes, and BE cannot resolve the proxy IP from the local network card.
We changed the previous verification rules to obey the IP assigned by the master.
Suppose three queries are executed in a resource group with a memory_limit of 8G, consuming query_a = 3G, query_b = 3G, and query_c = 3G. When the resource group GC runs, the total memory used is counted as 9G, which exceeds the resource group limit, so query_a is cancelled.
When the resource group next GCs, query_a's memory may not have been freed yet, so it is counted again in the total memory consumed by that resource group, which again exceeds the limit and cancels query_b.
From the user's perspective, executing query_a and query_b at the same time is fine, but executing query_a, query_b, and query_c gets two queries cancelled, which is not as expected.
This PR skips queries that are already cancelled when counting the memory used by a resource group. If this causes process memory to grow, the process GC will handle it.
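A minimal sketch of the accounting change, with a hypothetical query context (not the actual Doris type):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical query context for illustration.
struct QueryContext {
    int64_t memory_used = 0;
    bool is_cancelled = false;
};

// Sum a resource group's memory, skipping queries that are already
// cancelled: their memory is still being released and would otherwise be
// double-counted against the group's limit on the next GC pass, cancelling
// yet another query.
int64_t group_memory_used(const std::vector<QueryContext*>& queries) {
    int64_t total = 0;
    for (const QueryContext* q : queries) {
        if (q->is_cancelled) continue;
        total += q->memory_used;
    }
    return total;
}
```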
/home/zcp/repo_center/doris_master/doris/be/src/olap/rowset/segment_v2/column_reader.cpp:895:21: runtime error: load of value 423208544, which is not a valid value for type 'doris::ReaderType'
/home/zcp/repo_center/doris_master/doris/be/src/vec/columns/column_decimal.cpp:260:33: runtime error: load of misaligned address 0x7fa3348b301c for type 'int64_t' (aka 'long'), which requires 8 byte alignment
/home/zcp/repo_center/doris_master/doris/be/src/olap/block_column_predicate.cpp:82:24: runtime error: variable length array bound evaluates to non-positive value 0
/home/zcp/repo_center/doris_master/doris/be/src/vec/columns/column_string.h:225:26: runtime error: null pointer passed as argument 2, which is declared to never be null
1. Reduce the S3 buffer pool's ctor cost.
2. Before this PR, if an S3 file writer returned an error when calling the append or close function, the caller would not call the abort function, which resulted in a confusing DCHECK failure like the one in the following picture (see the sketch after this list).
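A minimal sketch of the intended error handling, with hypothetical status and writer types (not the actual Doris classes):

```cpp
#include <cstddef>

// Hypothetical status/writer types for illustration.
struct Status {
    bool ok_ = true;
    bool ok() const { return ok_; }
};

struct S3FileWriter {
    Status append(const char* /*data*/, size_t /*len*/) { return {}; }
    Status close() { return {}; }
    void abort() { /* cancel the multipart upload, release buffers */ }
};

// If append() or close() fails, call abort() so the writer's internal
// state is cleaned up; skipping abort() left the writer half-open and
// tripped a confusing DCHECK later.
Status write_all(S3FileWriter& w, const char* data, size_t len) {
    Status st = w.append(data, len);
    if (st.ok()) st = w.close();
    if (!st.ok()) w.abort();
    return st;
}
```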
* Revert "[fix](sink) fix END_OF_FILE error for pipeline caused by VDataStreamSender eof (#20007)"
This reverts commit 2ec1d282c5e27b25d37baf91cacde082cca4ec31.
* [fix](revert) data stream sender stop sending data to receiver if it returns eos early (#19847)
This reverts commit c73003359567067ea7d44e4a06c1670c9ec37902.
For a broadcast join, only one build fragment instance builds the hash table; the other fragment instances just receive and throw away the build-side data, which wastes memory and CPU.
This PR improves this: the data stream receiver tells the sender that it does not need data from it, and the sender stops sending any data to it.
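A minimal sketch of the signaling, with a hypothetical channel type (not the actual sender/receiver RPC code):

```cpp
#include <atomic>

// Hypothetical per-receiver channel state for illustration.
struct Channel {
    // Set when the receiver replies that it no longer needs data
    // (e.g. it is not the instance building the hash table).
    std::atomic<bool> receiver_done{false};

    // The sender checks this before transmitting each build-side block;
    // once the receiver has opted out, blocks are dropped locally instead
    // of being serialized and sent over the network.
    bool should_send() const { return !receiver_done.load(); }

    // Called when the receiver's RPC response carries an "I'm done" flag.
    void on_receiver_response(bool done) {
        if (done) receiver_done.store(true);
    }
};
```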
After a query detects in the Allocator that process memory exceeds the limit, it waits up to 5s for memory to be freed.
Before, the Allocator did not check whether the query had been cancelled while waiting for memory, so a cancelled query could not end quickly.
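A minimal sketch of the wait loop with the cancellation check added, using hypothetical hooks (not the actual Doris Allocator code):

```cpp
#include <chrono>
#include <thread>

// Hypothetical hooks for illustration.
bool process_memory_exceeded() { return false; }  // stub
bool query_is_cancelled() { return false; }       // stub

// Wait up to 5s for process memory to drop below the limit, but bail out
// as soon as the query is cancelled so a cancelled query ends quickly
// instead of sleeping through the whole wait.
bool wait_for_memory() {
    auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(5);
    while (std::chrono::steady_clock::now() < deadline) {
        if (!process_memory_exceeded()) return true;  // memory was freed
        if (query_is_cancelled()) return false;       // new: stop waiting
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return !process_memory_exceeded();
}
```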