For the ECB algorithm, block_encryption_mode did not take effect; it only took effect when an init vector was provided.
Solved: the 192/256 key lengths now support calculation without an init vector.
For the other algorithms, an error is now reported when no init vector is provided.
Initialization Vector. The default value for the block_encryption_mode system variable is aes-128-ecb, or ECB mode, which does not require an initialization vector. The alternative permitted block encryption modes CBC, CFB1, CFB8, CFB128, and OFB all require an initialization vector.
Reference: https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_aes-decrypt
Note: This fix does not support smooth upgrades. During the upgrade process, queries may report the error: function not found.
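A minimal sketch of the intended behavior after this fix, assuming the MySQL-style AES_ENCRYPT signature (the key and IV values are illustrative):

```sql
-- ECB modes need no init vector, and block_encryption_mode now takes effect:
SET block_encryption_mode = 'aes-192-ecb';
SELECT AES_ENCRYPT('text', 'key');                     -- OK after this fix

-- The other modes require an init vector; omitting it raises an error:
SET block_encryption_mode = 'aes-256-cbc';
SELECT AES_ENCRYPT('text', 'key', '0123456789abcdef'); -- OK, IV provided
SELECT AES_ENCRYPT('text', 'key');                     -- error: no init vector
```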
* `A in (B)` -> `bitmap_contains(bitmap_union(B), A)`
Support bitmap runtime filter on Nereids (see the sketch after this list)
* GroupPlan -> Plan
* fmt
* fix target cast problem
remove test code
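A hedged sketch of a query shape this rewrite targets (table and column names are hypothetical):

```sql
-- The IN-subquery predicate can be served by a bitmap runtime filter:
-- build bitmap_union(o.user_id) on the subquery side, then probe the scan
-- of visits with bitmap_contains(<bitmap>, t.user_id).
SELECT t.user_id
FROM visits t
WHERE t.user_id IN (SELECT o.user_id FROM orders o WHERE o.dt = '2023-01-01');
```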
In the past, the pg catalog used the SQL `SELECT schema_name FROM information_schema.schemata WHERE schema_owner = '<UserName>';` to select the schemas of a user. However, this SQL cannot find all schemas a user can access, because:
- A user may not be the owner of a schema, but may have read permission on it.
- A user may inherit the permissions of its user group and thus have read permission on a schema.
For these reasons, we replace the SQL statement with `SELECT nspname FROM pg_namespace WHERE has_schema_privilege('<UserName>', nspname, 'USAGE');`
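For illustration, the new query with a concrete (hypothetical) user name:

```sql
-- Lists every schema on which user 'alice' has USAGE privilege, whether
-- owned directly, granted explicitly, or inherited through a group role.
SELECT nspname
FROM pg_namespace
WHERE has_schema_privilege('alice', nspname, 'USAGE');
```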
Collect the estimated output rows and the exact output rows for each plan node, and use them to measure the accuracy of the derived statistics. The estimated result is managed by ProfileManager; we will later fetch it via an HTTP request by query id.
* Upgrade log4j to 2.X
- binding log4j version to 2.18.0
- use log4j-1.2-api to complete a smooth upgrade
* Upgrade commons-fileupload to 1.5
* Upgrade commons-io to 2.7
* Upgrade commons-compress to 1.22
* Upgrade gson to 2.8.9
* Upgrade guava to 30.0-jre
* Binding jackson version to 2.14.2
* Upgrade netty-all to 4.1.89.Final
* Upgrade protobuf to 3.21.12
* Upgrade kafka-clients to 3.4.0
* Upgrade calcite version to 1.33.0
* Upgrade aws-java-sdk to 1.12.302
* Upgrade hadoop to 3.3.4
* Upgrade zookeeper to 3.4.14
* Binding tomcat-embed-core to 8.5.86
* Upgrade apache parent pom to 25
* Use hive-exec-core as the hive dependency, and add the missing hive-serde jar separately
* Basic public dependencies are extracted to parent dependencies
* Use jackson uniformly as the basic json tool
* Remove springloaded; spring-boot-devtools has the same functionality
* Change the scope of the spark-related dependencies to provided, since they are supplied at runtime
# Proposed changes
1. The new optimizer supports the combination of subquery and disjunction (see the example after this list). Using MarkJoin, it behaves the same as the old optimizer. For design details see: https://emmymiao87.github.io/jekyll/update/2021/07/25/Mark-Join.html
2. Implicit type conversion is performed when conjuncts are generated after subquery parsing.
3. Convert the unnesting of scalarSubquery in a filter from filter + join to join + conjuncts.
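A hedged example of the newly supported shape (tables and columns are hypothetical):

```sql
-- A subquery combined with a disjunction (OR); the new optimizer plans
-- this via MarkJoin, matching the old optimizer's behavior.
SELECT *
FROM t1
WHERE t1.a > 10
   OR t1.b IN (SELECT t2.c FROM t2);
```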
This PR proposes a new mechanism similar to C++ templates. The previous functions can be defined much more simply as template functions.
# array element extract template function
[['element_at', '%element_extract%'], 'E', ['ARRAY<E>', 'BIGINT'], 'ALWAYS_NULLABLE', ['E']],
# map element extract template function
[['element_at', '%element_extract%'], 'V', ['MAP<K, V>', 'K'], 'ALWAYS_NULLABLE', ['K', 'V']],
BTW, plain-type functions are not affected, and the legacy ARRAY_X / MAP_K_V forms are still supported for compatibility.
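For illustration, how the templated element_at behaves at the SQL level (a sketch, assuming the array()/map() constructors; values are arbitrary):

```sql
-- Array instantiation: E = INT, matching the (ARRAY<E>, BIGINT) signature.
SELECT element_at(array(1, 2, 3), 2);       -- returns 2
-- Map instantiation: K = V = VARCHAR, matching the (MAP<K, V>, K) signature.
SELECT element_at(map('k1', 'v1'), 'k1');   -- returns 'v1'
```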
Fix two problems:
1. Push aggregate functions inside a windowExpression down to the AggregateNode.
For example, the SQL:
select sum(sum(a)) over (order by b)
Plan:
windowExpression(sum(y) over (order by b))
+--- Agg(sum(a) as y, b)
2. Push other expressions to the upper project.
For example, the SQL:
select sum(a+1) over ()
Plan:
windowExpression(sum(y) over ())
+--- Project(a + 1 as y, ...)
     +--- Agg(a, ...)
1. Add a rule to split Limit, like Limit(Origin) ==> Limit(Global) -> Gather -> Limit(Local)
2. Add a rule: Limit -> Sort ==> TopN (see the sketch after this list)
3. Fix a bug in TopN.
4. Make the types of limit and offset long in TopN.
And because this rule is always beneficial, we add it in the rewrite phase.
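A sketch of the query shape the Limit -> Sort rule rewrites (table and column are hypothetical):

```sql
-- Before: Limit(10) on top of Sort(a); after the rewrite, a single TopN
-- node keeps only the top 10 rows ordered by a.
SELECT * FROM t ORDER BY a LIMIT 10;
```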
We met the error: Unknown column `__DORIS_DELETE_SIGN__` in 'default_cluster:db.table'.
That is because when we use an alias as the tableName to construct a Table, all parts of the name are lowercased if lowerCaseTableNames = 1.
To avoid this, we should extract the tableName from the alias and lowercase only the tableName.
1. Support GRANT role [, role] TO user_identity
2. Support REVOKE role [, role] FROM user_identity (see the examples after this list)
3. 'SHOW GRANTS' adds a column to display the roles owned by users.
4. 'ALTER USER' prohibits deleting a user's role.
5. Fix the logic that a roleName cannot start with RoleManager.DEFAULT_ROLE.
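Hedged examples of the newly supported statements (user and role names are hypothetical):

```sql
-- Grant and revoke multiple roles in one statement.
GRANT role_a, role_b TO 'jack'@'%';
REVOKE role_a, role_b FROM 'jack'@'%';
-- SHOW GRANTS now includes a column listing the roles the user owns.
SHOW GRANTS FOR 'jack'@'%';
```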
When replaying the restore of a table with reserve_dynamic_partition_enable=true, we must call registerOrRemoveDynamicPartitionTable with isReplay=true; otherwise the OBSERVER may fail to replay the restore audit log.
Fix a value-range compatibility problem with the unsigned int type.
When defining columns, map UNSIGNED INT to BIGINT for compatibility.
The problems are as follows:
It is inconsistent with the documentation.
We support the unsigned int type for compatibility with MySQL types, but an unsigned int column is created as INT at definition time, which can cause numerical overflow.
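A minimal sketch of the fix (the table definition is hypothetical):

```sql
-- UNSIGNED INT holds values up to 4294967295, but a signed INT tops out at
-- 2147483647; mapping the column to BIGINT covers the full unsigned range.
CREATE TABLE demo (
    id INT UNSIGNED    -- now created as BIGINT for compatibility
)
DISTRIBUTED BY HASH(id) BUCKETS 1
PROPERTIES ("replication_num" = "1");
```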
This PR does three things:
1. Use Druid instead of HikariCP in JdbcClient.
2. When downloading a UDF jar, append the jar package name to the local file name.
3. Refactor some JdbcResource code.
Fix three bugs:
1. `repeated_parent_def_level` should be the definition level of its repeated parent.
2. Failed to parse schemas like `decimal(p, s)`.
3. Wrong offsets were filled for the array type.
* [enhancement](transaction) Reduce the time DatabaseTransactionMgr holds the writeLock when clearing transactions
* fix ut
* remove unnecessary field from the remove-txn bdbje log
---------
Co-authored-by: caiconghui1 <caiconghui1@jd.com>
When a restore job has plenty of replicas, it may fail due to timeout. The error message is:
[RestoreJob.checkAndPrepareMeta():782] begin to send create replica tasks to BE for restore. total 381344 tasks. timeout: 600000
Currently, the max value of the timeout is fixed, which is not suitable for such cases.
* [enhancement](stream load pipe) Use the query id or load id to identify a stream load pipe instead of the fragment instance id
NewLoadStreamMgr already has the pipe and other info, so there is no need to save the pipe in the fragment state; FragmentState should be clearer.
But this PR will change the behaviour of BE.
I will pick this PR to Doris 1.2.3 and add load id support to FE. Users can then upgrade from 1.2.3 to 2.x.
Co-authored-by: yiguolei <yiguolei@gmail.com>