`_child_block` in the nested loop join, table value function, and repeat node is shared between the ExecNode and the related operator, but it should not be a `unique_ptr` in the operator, because it belongs to the exec node.
If the operator's `close` method is not called correctly, the block is freed twice.
It should be a `shared_ptr`; then the BE will not crash even if the operator's `close` method is not called.
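A minimal sketch of the ownership change, assuming illustrative names (`Block`, `ExecNode`, and `Operator` here are stand-ins, not the exact Doris classes):
```cpp
#include <memory>
#include <utility>

struct Block {};

struct ExecNode {
    // The exec node owns the child block and shares it with the operator.
    std::shared_ptr<Block> _child_block = std::make_shared<Block>();
};

struct Operator {
    // Previously a std::unique_ptr<Block>: a second claim of exclusive
    // ownership, which double-freed the block when close() was skipped.
    std::shared_ptr<Block> _child_block;

    explicit Operator(std::shared_ptr<Block> block)
            : _child_block(std::move(block)) {}
};

int main() {
    ExecNode node;
    Operator op(node._child_block);
    // Both destructors run; the Block is released exactly once.
    return 0;
}
```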
PR #27515 changed the logic of Table's `isPartitioned()` method.
But this method has two usages:
1. To check whether a table is range or list partitioned, for DML operations such as ALTER and EXPORT.
For this case, it should return true if the table is range or list partitioned, even if it has only
one partition and one bucket.
2. To check whether the data is distributed (either by partitions or by buckets), for the query planner.
For this case, it should return true if the table has more than one bucket: even if the table is not
range or list partitioned, more than one bucket means the data is distributed.
So we should separate this method into two, one for each usage; see the sketch below.
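A minimal sketch of the proposed split (the real method lives on the FE's `Table` class in Java; this C++ sketch with hypothetical names only illustrates the two semantics):
```cpp
// Hypothetical names; only the two predicates' semantics matter here.
struct TableMeta {
    bool is_range_or_list_partitioned = false;
    int partition_num = 1;
    int buckets_per_partition = 1;

    // Usage 1: DML such as ALTER/EXPORT cares about the partition type,
    // even with a single partition and a single bucket.
    bool is_partitioned() const { return is_range_or_list_partitioned; }

    // Usage 2: the planner cares whether the data is distributed at all,
    // by partitions or by buckets.
    bool is_distributed() const {
        return partition_num > 1 || buckets_per_partition > 1;
    }
};
```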
Without this separation, an incorrect result may cause an unreasonable plan shape in some cases, and even a BE core dump:
```
10:18:19 *** SIGFPE integer divide by zero (@0x564853c204c8) received by PID 2132555

int max_scanners =
    config::doris_scanner_thread_pool_thread_num / state->query_parallel_instance_num();
```
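A hedged sketch of a BE-side guard against the division by zero above (the clamp and the helper names are assumptions, not necessarily the actual fix, which is to make `isPartitioned()` return the right answer):
```cpp
#include <algorithm>

// Hypothetical stand-ins for config::doris_scanner_thread_pool_thread_num
// and RuntimeState::query_parallel_instance_num().
int scanner_pool_threads() { return 48; }
int query_parallel_instance_num() { return 0; } // the bad input

int compute_max_scanners() {
    // Clamp the divisor so a zero instance count cannot raise SIGFPE.
    return scanner_pool_threads() / std::max(1, query_parallel_instance_num());
}
```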
Refactor the write path code with an abstract base class. Whether to use `StorageEngine` or `CloudStorageEngine` will be determined at compile time instead of at runtime via `config::cloud_mode`, to avoid unexpected null pointer or undefined behavior issues caused by merging code.
Classes that depend on `StorageEngine` but are shared with cloud mode need an abstract base class. Common code should be extracted into the base class, while the code that depends on `StorageEngine` should be implemented in a `StorageEngine` mix-in class of the base class.
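A minimal sketch of this pattern, assuming illustrative class names (`BaseRowsetWriter` and `LocalRowsetWriter` are not the actual Doris classes):
```cpp
// Common write-path code lives in the base class; the part that depends
// on StorageEngine lives in a derived mix-in, so the choice is made at
// compile time and no runtime config::cloud_mode branch can dereference
// a null engine pointer.
struct BaseRowsetWriter {
    virtual ~BaseRowsetWriter() = default;
    void flush() { /* common code shared by local and cloud mode */ }
    virtual void persist() = 0; // engine-specific step
};

struct StorageEngine; // only the local-mode build sees the definition

struct LocalRowsetWriter : BaseRowsetWriter {
    explicit LocalRowsetWriter(StorageEngine& engine) : _engine(engine) {}
    void persist() override { /* uses _engine; compiled for local mode */ }
    StorageEngine& _engine;
};
```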
BE will core dump if the result block is invalid during result serialization.
An existing bug case is described in #28030, so we add a check branch to avoid a BE core dump due to out-of-range problems.
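A hedged sketch of such a check branch (stand-in types and names; the actual check sits in the result serialization path):
```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

// Stand-in for the real block type; only rows() matters here.
struct Block {
    std::size_t num_rows = 0;
    std::size_t rows() const { return num_rows; }
};

// Validate the row index before serialization and report an error
// instead of letting an out-of-range access crash the BE.
void serialize_row(const Block& block, std::size_t row_index) {
    if (row_index >= block.rows()) {
        throw std::out_of_range("row " + std::to_string(row_index) +
                                " out of range, block has " +
                                std::to_string(block.rows()) + " rows");
    }
    // ... safe to serialize the row ...
}
```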
* [regression-test](Variant) Add more cases related to schema changes
Also fix bugs about schema change for the variant type:
a crash when performing a schema change on a tablet schema that contains extracted columns.
If `const_value` is specified, the function returns a column filled with that constant; otherwise it returns an incremental series, as it always has.
mysql> select * from numbers("number" = "5", "const_value" = "-123");
+--------+
| number |
+--------+
|   -123 |
|   -123 |
|   -123 |
|   -123 |
|   -123 |
+--------+
5 rows in set (0.11 sec)
Previously, Arrow was temporarily upgraded to the dev version 15.0.0-SNAPSHOT, because the latest release, Arrow 14.0.1, has a jdbc:arrow-flight-sql bug that makes jdbc:arrow-flight-sql unusable, see: apache/arrow#38785.
But Arrow 15.0.0-SNAPSHOT was not published to the Maven central repository, and the network could not always be connected, so we roll back to Arrow 14.0.1. jdbc:arrow-flight-sql will be supported after upgrading to the Arrow 15.0.0 release version.
Otherwise, accessing rows at `n` will lead to a heap buffer overflow:
```
5# SipHash::update(char const*, unsigned long) at /home/zcp/repo_center/doris_master/doris/be/src/vec/common/sip_hash.h:132
6# doris::vectorized::ColumnString::update_hash_with_value(unsigned long, SipHash&) const at /home/zcp/repo_center/doris_master/doris/be/src/vec/columns/column_string.h:452
7# doris::vectorized::ColumnObject::update_hash_with_value(unsigned long, SipHash&) const at /home/zcp/repo_center/doris_master/doris/be/src/vec/columns/column_object.cpp:1433
8# doris::vectorized::Block::update_hash(SipHash&) const at /home/zcp/repo_center/doris_master/doris/be/src/vec/core/block.cpp:721
9# doris::EngineChecksumTask::_compute_checksum() at
```
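A minimal sketch of the bounds check implied by the trace (stand-in column type; the real one is `doris::vectorized::ColumnObject`):
```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Column {
    std::vector<int> data;
    std::size_t size() const { return data.size(); }

    // Hashing a row past size() reads memory the column does not own,
    // which is the heap-buffer-overflow in the trace above.
    void update_hash_with_value(std::size_t n /*, SipHash& hash */) const {
        assert(n < size() && "row index out of range");
        // ... feed data[n] into the hash ...
    }
};
```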
```
Caused by: java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437) ~[?:1.8.0_131]
at java.util.HashMap$EntryIterator.next(HashMap.java:1471) ~[?:1.8.0_131]
at java.util.HashMap$EntryIterator.next(HashMap.java:1469) ~[?:1.8.0_131]
at org.apache.doris.catalog.CatalogRecycleBin.write(CatalogRecycleBin.java:1047) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.catalog.Env.saveRecycleBin(Env.java:2298) ~[doris-fe.jar:1.2-SNAPSHOT]
```
When calling the `/dump` API to dump an image, a ConcurrentModificationException may be thrown,
because there is no lock protecting `CatalogRecycleBin` during serialization.
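The fix is on the FE (Java) side; the C++ sketch below, with hypothetical names, only shows the pattern: hold a lock for the whole dump so concurrent mutation cannot invalidate the iterator mid-write.
```cpp
#include <map>
#include <mutex>
#include <ostream>

class RecycleBin {
public:
    void add(long id, long drop_time) {
        std::lock_guard<std::mutex> guard(_lock);
        _id_to_recycle_time[id] = drop_time;
    }

    void write(std::ostream& out) const {
        std::lock_guard<std::mutex> guard(_lock); // guard the whole dump
        for (const auto& [id, time] : _id_to_recycle_time) {
            out << id << ' ' << time << '\n';
        }
    }

private:
    mutable std::mutex _lock;
    std::map<long, long> _id_to_recycle_time;
};
```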
If the original data is:
```sql
+-----------------------------------------------------+
| s_info |
+-----------------------------------------------------+
| {"s_id": 2, "s_name": "nereids", "s_address": "20"} |
| {"s_id": 1, "s_name": "doris", "s_address": "18"} |
+-----------------------------------------------------+
```
In the original logic, struct type data exported to the csv file format did not contain the column names, like:
```
{2, "nereids", "20"}
{1, "doris", "18"}
```
This PR does not need to be merged into branch-2.0.