Reverts apache/doris#42255
We found that after the connection pool is closed, some data sources run
into class-loading and connection-release problems. We are removing this
feature for now and will re-add it once it has been completely fixed and
tested.
pick (#41992)
We initially introduced a JDBC connection pool to improve the connection
performance of the JDBC catalog, but we kept finding that the pool
caused unexpected errors, so we added a catalog property,
`enable_connection_pool`, to choose whether the JDBC catalog uses a
connection pool; it defaults to false. Note that catalogs created before
an upgrade will still use the connection pool after upgrading; only
newly created catalogs default to false.
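For illustration, a minimal sketch of creating a JDBC catalog with the
pool explicitly enabled, and of disabling it for a catalog created
before the upgrade; the host, credentials, and driver values are
placeholders:

```sql
-- Newly created catalogs default to "enable_connection_pool" = "false";
-- set it explicitly to opt back in.
CREATE CATALOG mysql_pool PROPERTIES (
    "type" = "jdbc",
    "user" = "root",
    "password" = "example_password",
    "jdbc_url" = "jdbc:mysql://127.0.0.1:3306/test",
    "driver_url" = "mysql-connector-java-8.0.25.jar",
    "driver_class" = "com.mysql.cj.jdbc.Driver",
    "enable_connection_pool" = "true"
);

-- A catalog created before the upgrade still uses the pool; turn it off
-- explicitly if needed:
ALTER CATALOG mysql_pool SET PROPERTIES ("enable_connection_pool" = "false");
```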
We also ran performance tests on this change; the performance loss from
disabling the pool is within the expected range.
- Enable connection pool:

```
mysqlslap -uroot -h127.0.0.1 -P9030 --concurrency=1 --iterations=100 \
  --query='SELECT * FROM mysql.test.test limit 1;' \
  --create-schema=mysql --delimiter=";" --verbose

Benchmark
    Average number of seconds to run all queries: 0.008 seconds
    Minimum number of seconds to run all queries: 0.004 seconds
    Maximum number of seconds to run all queries: 0.133 seconds
    Number of clients running queries: 1
    Average number of queries per client: 1
```

- Disable connection pool:

```
mysqlslap -uroot -h127.0.0.1 -P9030 --concurrency=1 --iterations=100 \
  --query='SELECT * FROM mysql_no_pool.test.test limit 1;' \
  --create-schema=mysql --delimiter=";" --verbose

Benchmark
    Average number of seconds to run all queries: 0.054 seconds
    Minimum number of seconds to run all queries: 0.047 seconds
    Maximum number of seconds to run all queries: 0.184 seconds
    Number of clients running queries: 1
    Average number of queries per client: 1
```
pick (#42102)
Add a session variable, `enable_jdbc_cast_predicate_push_down`, with a
default value of false, which prohibits pushing down non-constant
predicates that involve a type cast, as well as all predicates that
involve an implicit cast. This prevents incorrect predicates from being
pushed down to the JDBC data source and producing wrong query results,
since predicates containing a cast were not pushed down correctly before
this change. If your data is read correctly and performance was better
before this change, you can manually set this variable back to true (see
the example after the table below).
| Expression                                   | Can Push Down |
|----------------------------------------------|---------------|
| column type equals const type                | Yes           |
| column type equals cast const type           | Yes           |
| cast column type equals const type           | No            |
| cast column type equals cast const type      | No            |
| column type not equals const type            | No            |
| column type not equals cast const type       | No            |
| cast column type not equals const type       | No            |
| cast column type not equals cast const type  | No            |
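To restore the old behavior for a source whose types are known to line
up, you can re-enable the pushdown per session. A minimal sketch,
assuming a hypothetical JDBC catalog `mysql_catalog` with a table
`test.t` and an INT column `k1`:

```sql
-- Re-enable pushdown of cast-bearing predicates for the current session
-- (the variable defaults to false after this change).
SET enable_jdbc_cast_predicate_push_down = true;

-- With the default (false), a predicate that casts the column is
-- evaluated by Doris instead of being pushed to the JDBC source:
SELECT * FROM mysql_catalog.test.t WHERE CAST(k1 AS VARCHAR) = '1';

-- A predicate whose constant already matches the column type can still
-- be pushed down regardless of this variable:
SELECT * FROM mysql_catalog.test.t WHERE k1 = 1;
```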
Because Oracle versions below Oracle 21 do not support null as an
operator, and most users run Oracle versions below 21, we disable
Oracle's null operator pushdown by default.
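A hedged illustration, assuming the null operator refers to null-test
predicates such as `IS NULL` (the catalog, table, and column names are
hypothetical):

```sql
-- With the default setting, the null test is evaluated by Doris rather
-- than pushed down to Oracle:
SELECT * FROM oracle_catalog.scott.emp WHERE comm IS NULL;
```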
pick (#41563)
bp #40225, #40888, #41386
## Proposed changes
Among them, #40225 adds the new MaxCompute (mc) API, #40888 fixes a bug
when reading null values across the new and old APIs, and #41386
provides compatibility between the new and old versions.
bp #41497
before #21790
## Proposed changes
This PR unifies the duplicated logic between the `catalog` TVF and `SHOW
CATALOGS`, adds a permission check when querying the `catalog` TVF, and
hides sensitive information.
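A hedged usage sketch, assuming the TVF is invoked as `catalogs()`; both
statements now share the same underlying logic, require the
corresponding catalog privileges, and hide sensitive properties such as
passwords in their output:

```sql
SELECT * FROM catalogs();
SHOW CATALOGS;
```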
pick from master #40801
This PR simplifies the handling of `ScanNode` projection logic.
Previously, the code included multiple conditional checks to determine
whether a `projectionTuple` should be generated. These conditions have
been removed, and now `projectionTuple` is always generated for
`ScanNode`, ensuring a consistent projection setup. Additionally,
redundant handling of `SlotId` and `SlotRef` has been eliminated, making
the code cleaner and easier to maintain. The behavior for `OlapScanNode`
remains unchanged.
## Proposed changes
Hadoop SnappyCodec source:
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/codec/SnappyCodec.cc
Example:
OriginData (the original data is divided into several large data blocks):

```
| large data block1 | large data block2 | large data block3 | ...
```

Each large data block is further divided into several small data blocks.
Suppose large data block1 is divided into three small blocks:

```
large data block1: | small block1 | small block2 | small block3 |
```

CompressData:

```
<A [B1 compress(small block1)] [B2 compress(small block2)] [B3 compress(small block3)]>
```

A: the original (uncompressed) length of the current large data block;
sizeof(A) = 4 bytes.
A = length(small block1) + length(small block2) + length(small block3)
Bx: the length of compressed small data block x; sizeof(Bx) = 4 bytes.
Bx = length(compress(small blockx))
For example, if a 96 KB large data block is split into three 32 KB small
blocks, then A = 98304, and each Bx records the compressed length of its
own 32 KB block.