pick (#38439)
1. Move the execution of `testJdbcConnection()` into `checkWhenCreating()` instead of running it in the constructor (a sketch of the pattern follows this list).
2. Move the logic that renames `lower_case_table_names` to `lower_case_meta_names` into `setDefaultPropsIfMissing()`.
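A minimal sketch of the first change, with hypothetical class and field names (only `testJdbcConnection()` and `checkWhenCreating()` come from the PR text): connectivity validation moves out of the constructor into the creation-time check, so constructing the catalog object never blocks on a remote call.
```java
// Hypothetical sketch -- not actual Doris source.
public class JdbcCatalogSketch {
    private final String jdbcUrl;

    public JdbcCatalogSketch(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl; // constructor only stores config, no remote call
    }

    public void checkWhenCreating() throws Exception {
        testJdbcConnection();   // connectivity check deferred to creation time
    }

    private void testJdbcConnection() throws Exception {
        try (java.sql.Connection c = java.sql.DriverManager.getConnection(jdbcUrl)) {
            // reaching here means the JDBC endpoint is reachable
        }
    }
}
```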
bp #38432
## Proposed changes
Add `hive_parquet_use_column_names` and `hive_orc_use_column_names` session variables to support reading Hive tables after a column has been renamed. These two session variables are modeled on `parquet_use_column_names` and `orc_use_column_names` in the Trino Hive connector.
Both session variables default to true. When they are set to false, reading ORC/Parquet resolves columns by their ordinal position in the Hive table definition instead of by name.
For example:
```mysql
In Hive:
hive> create table tmp (a int , b string) stored as parquet;
hive> insert into table tmp values(1,"2");
hive> alter table tmp change column a new_a int;
hive> insert into table tmp values(2,"4");
In Doris:
mysql> set hive_parquet_use_column_names=true;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from tmp;
+-------+------+
| new_a | b    |
+-------+------+
|  NULL | 2    |
|     2 | 4    |
+-------+------+
2 rows in set (0.02 sec)
mysql> set hive_parquet_use_column_names=false;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from tmp;
+-------+------+
| new_a | b    |
+-------+------+
|     1 | 2    |
|     2 | 4    |
+-------+------+
2 rows in set (0.02 sec)
```
In Hive 3, you can use `set parquet.column.index.access=true/false` and `set orc.force.positional.evolution=true/false` to control table reads in the same way as these two session variables. However, for Parquet tables with a renamed field inside a struct column, Hive and Doris behave differently.
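To make the semantics concrete, here is a small self-contained Java sketch (all names hypothetical; it mirrors the behavior described above, not Doris internals) of name-based versus position-based column resolution:
```java
import java.util.List;

public class ColumnResolutionDemo {
    // Returns the index of the file column to read for a table column,
    // or -1 when no match is found (the column is then read as NULL).
    static int resolve(List<String> fileColumns, String tableColumn,
                       int ordinal, boolean useColumnNames) {
        if (useColumnNames) {
            return fileColumns.indexOf(tableColumn); // match by name
        }
        return ordinal;                              // match by ordinal position
    }

    public static void main(String[] args) {
        // A file written before: alter table tmp change column a new_a int
        List<String> oldFile = List.of("a", "b");
        System.out.println(resolve(oldFile, "new_a", 0, true));  // -1 -> NULL
        System.out.println(resolve(oldFile, "new_a", 0, false)); // 0  -> old data
    }
}
```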
bp: #37565
Currently, Doris first obtains splits and then performs projection. After column pruning, it calls `updateRequiredSlots` to update the scan-range information. However, the Trino connector's column-pruning pushdown must be completed before obtaining splits.
Therefore, we move the finalize phase of `ScanNode` to after the end of the `Physical Translate` phase, so that `createScanRangeLocations` can use the final columns, which have already been pruned.
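A self-contained toy sketch of why the ordering matters (all types hypothetical, not Doris source): pruning happens during physical translate, and split creation must see the pruned column set.
```java
import java.util.*;

public class FinalizeOrderDemo {
    static class ScanNodeSketch {
        List<String> requiredColumns = new ArrayList<>(List.of("a", "b", "c"));

        void pruneColumns(Set<String> used) {  // done during physical translate
            requiredColumns.retainAll(used);
        }

        void createScanRangeLocations() {      // must run after pruning
            System.out.println("building splits for columns " + requiredColumns);
        }
    }

    public static void main(String[] args) {
        ScanNodeSketch node = new ScanNodeSketch();
        // Old order: createScanRangeLocations() first -> splits for [a, b, c].
        // New order: prune during translate, then build splits:
        node.pruneColumns(Set.of("a"));
        node.createScanRangeLocations();       // splits for [a] only
    }
}
```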
## Proposed changes
Issue Number: close #xxx
pick (#38523)
Create a job:
```
CREATE ROUTINE LOAD testShow ON test_show_routine_load
COLUMNS TERMINATED BY ","
PROPERTIES
(
    "max_batch_interval" = "5",
    "max_batch_rows" = "300000",
    "max_batch_size" = "209715200"
)
FROM KAFKA
(
    "kafka_broker_list" = "127.0.0.1:19092",
    "kafka_topic" = "test_show_routine_load",
    "property.kafka_default_offsets" = "OFFSET_BEGINNING"
);
```
Show the routine load task:
```
SHOW ROUTINE LOAD TASK WHERE JobName = "testShow";
```
Result:
```
ERROR 1105 (HY000): errCode = 2, detailMessage = The job named testshowdoes not exists or job state is stopped or cancelled
```
The job name was lower-cased before the lookup, so `testShow` was searched as `testshow` and never matched. The fix is to not call the `toLowerCase` method on the job name (a sketch of the failure mode follows).
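A minimal sketch of the bug, with a hypothetical map standing in for the job registry (not the actual Doris code):
```java
import java.util.Map;

public class JobLookupDemo {
    public static void main(String[] args) {
        Map<String, String> jobs = Map.of("testShow", "RUNNING"); // stored as created
        String whereName = "testShow";                            // from the WHERE clause
        System.out.println(jobs.get(whereName.toLowerCase()));    // buggy: null -> "does not exist"
        System.out.println(jobs.get(whereName));                  // fixed: RUNNING
    }
}
```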
## Proposed changes
- upgrade spring-boot to 2.7.18
- upgrade zookeeper to 3.9.2
- upgrade jetty to 9.4.55.v20240627
- upgrade ivy to 2.5.2
- upgrade icu4j to 75.1
- upgrade ini4j to 0.5.4
(cherry picked from commit 3f633c2018e86c6c842647262853d88ad63672bf)
pick #38509
## Proposed changes
Issue Number: close #xxx
pick from master #38660
An INSERT holds the read lock of the target table before planning. If Nereids then needs the database read lock, this can lead to a deadlock, because other statements must acquire the database lock before the table lock.
For example (a runnable sketch follows):
- insert: target table read lock -> database read lock
- drop table: database write lock -> target table write lock
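A self-contained sketch of the inverted lock order (hypothetical locks, not Doris code); run it and both threads block forever:
```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOrderDeadlockDemo {
    static final ReentrantReadWriteLock dbLock = new ReentrantReadWriteLock();
    static final ReentrantReadWriteLock tableLock = new ReentrantReadWriteLock();

    public static void main(String[] args) {
        Thread insert = new Thread(() -> {
            tableLock.readLock().lock();   // insert: table read lock first
            pause();
            dbLock.readLock().lock();      // then db read lock -> blocks on the writer
        });
        Thread drop = new Thread(() -> {
            dbLock.writeLock().lock();     // drop: db write lock first
            pause();
            tableLock.writeLock().lock();  // then table write lock -> blocks on the reader
        });
        insert.start();
        drop.start();                      // classic lock-order deadlock
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) {}
    }
}
```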
## Proposed changes
Issue Number: close #38590
When an SSL connection is closed, a dedicated packet is sent to signal the close. The SSL engine is then shut down and produces an empty unwrap result.
We handle this case explicitly to avoid a buffer overflow: break out of the reading flow and proactively perform the cleanup.
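A hedged sketch using the JDK `SSLEngine` API (the surrounding loop is assumed, not the actual Doris code): when `unwrap()` reports `CLOSED`, stop reading and close outbound instead of spinning on empty results.
```java
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import javax.net.ssl.SSLException;
import java.nio.ByteBuffer;

class SslReadLoopSketch {
    static void readLoop(SSLEngine engine, ByteBuffer netIn, ByteBuffer appIn)
            throws SSLException {
        while (netIn.hasRemaining()) {
            SSLEngineResult result = engine.unwrap(netIn, appIn);
            if (result.getStatus() == SSLEngineResult.Status.CLOSED) {
                engine.closeOutbound(); // peer sent close_notify: do the cleanup
                break;                  // break the reading flow, as the fix does
            }
            // handle OK / BUFFER_UNDERFLOW / BUFFER_OVERFLOW as usual ...
        }
    }
}
```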
bp: #38203
1. Previously, if the root path of the HDFS URI started with two slashes, the outfile was exported successfully without errors, but the exported path was not the expected one. Now, FE removes the repeated '/' specified by users (a sketch follows this list).
2. Move the test case for outfile to HDFS from p2 to p0.
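Illustrative only (the helper name and placement are assumptions, not the actual FE code): collapse repeated '/' in the path part of the URI while leaving the scheme separator intact.
```java
public class HdfsPathNormalizer {
    static String normalize(String uri) {
        int schemeEnd = uri.indexOf("://");
        if (schemeEnd < 0) {
            return uri.replaceAll("/{2,}", "/"); // plain path: just collapse
        }
        String scheme = uri.substring(0, schemeEnd + 3); // keep "hdfs://" intact
        String rest = uri.substring(schemeEnd + 3);
        return scheme + rest.replaceAll("/{2,}", "/");
    }

    public static void main(String[] args) {
        // prints hdfs://nn:8020/user/out/ instead of a silently wrong location
        System.out.println(normalize("hdfs://nn:8020//user//out/"));
    }
}
```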