We met the error: Unknown column '__DORIS_DELETE_SIGN__' in 'default_cluster:db.table'.
That is because when we use an alias as the tableName to construct a Table, all parts of the name are lowercased if lowerCaseTableNames = 1.
To avoid this, we should extract the tableName from the alias and lowercase only the tableName.
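A minimal sketch of the idea (illustrative C++ only; the actual fix is in the FE, and the helper name is hypothetical):

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Lowercase only the last component (the table name) of a possibly
// qualified name like "db.Tbl" or "cluster:db.Tbl"; leave the rest intact.
std::string lower_table_name_only(const std::string& qualified) {
    size_t pos = qualified.find_last_of(".:");
    size_t start = (pos == std::string::npos) ? 0 : pos + 1;
    std::string result = qualified;
    std::transform(result.begin() + start, result.end(),
                   result.begin() + start,
                   [](unsigned char c) { return std::tolower(c); });
    return result;
}
```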
* add ut for cooldown on be
Describe your changes.
1. Support GRANT role [, role] TO user_identity
2. Support REVOKE role [, role] FROM user_identity
3. 'Show grants' adds a column to display the roles owned by each user
4. 'alter user' prohibits deleting a user's role
5. Fix the logic so that a roleName cannot start with RoleManager.DEFAULT_ROLE
When replaying the restore of a table with reserve_dynamic_partition_enable=true,
we must call registerOrRemoveDynamicPartitionTable with isReplay=true; otherwise the
OBSERVER may fail to replay the restore audit log.
Fix a value-range problem with unsigned int type compatibility.
When defining columns, map UNSIGNED INT to BIGINT for compatibility.
The problems are as follows:
It is not consistent with the documentation.
We support the unsigned int type for compatibility with MySQL types, but an unsigned int column is created as INT at definition time. This can cause numerical overflow.
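A minimal sketch of the mapping rule (illustrative; the real change is in FE column definition, and the function name is hypothetical):

```cpp
#include <string>

// Map MySQL-compatible integer types to storage types.
// An UNSIGNED INT can hold values up to 4294967295, which overflows a
// signed 32-bit INT, so it must be widened to BIGINT.
std::string map_column_type(const std::string& type, bool is_unsigned) {
    if (type == "INT" && is_unsigned) {
        return "BIGINT";  // widen to cover the full unsigned range
    }
    return type;
}
```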
This pr does three things:
1. Use Druid instead of HikariCP in JdbcClient.
2. When downloading a UDF jar, append the jar package name to the local file name.
3. Refactor some jdbcResource code.
During concurrent import, the same row location may be marked for deletion multiple times by different rowset versions.
Duplicate row locations need to be removed.
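A minimal sketch of the deduplication (illustrative; the struct is a stand-in for Doris's RowLocation):

```cpp
#include <algorithm>
#include <cstdint>
#include <tuple>
#include <vector>

struct RowLocation {
    int64_t rowset_id;
    uint32_t segment_id;
    uint32_t row_id;
    bool operator<(const RowLocation& o) const {
        return std::tie(rowset_id, segment_id, row_id)
             < std::tie(o.rowset_id, o.segment_id, o.row_id);
    }
    bool operator==(const RowLocation& o) const {
        return rowset_id == o.rowset_id && segment_id == o.segment_id &&
               row_id == o.row_id;
    }
};

// Remove duplicate delete marks before applying them.
void dedup(std::vector<RowLocation>& locs) {
    std::sort(locs.begin(), locs.end());
    locs.erase(std::unique(locs.begin(), locs.end()), locs.end());
}
```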
The BE uses EnginePublishVersionTask to publish all replicas of all tablets of a table for one transaction,
and EnginePublishVersionTask uses TabletPublishTxnTask to actually publish each tablet and make its rowset visible.
However, if a TabletPublishTxnTask fails, the tablet id is added to _error_tablet_ids but no error is returned,
so EnginePublishVersionTask reports no error to the FE; the FE then makes the transaction visible
and increments the partition's version.
But if you query this table, it returns an error like "MySQL [test]> select * from test12;ERROR 1105 (HY000): errCode = 2, detailMessage = [INTERNAL_ERROR]failed to initialize storage reader. tablet=14023.730105214.d742d664692db946-386daa993d84d89d, res=[INTERNAL_ERROR][9.134.167.25]fail to find path in version_graph. spec_version: 0-3, backend=9.134.167.25".
After this pr, _error_tablet_ids is reported to the FE, the transaction is not made visible, and an ErrMsg like "publish on tablet 14038 failed." is added.
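A minimal sketch of the behavioural change (illustrative; simplified signatures, not the actual Doris code):

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct Status {
    bool ok = true;
    std::string msg;
};

// After all TabletPublishTxnTasks finish, surface any per-tablet failure
// to the caller instead of swallowing it, so the FE will not make the
// transaction visible.
Status finish_publish(const std::vector<int64_t>& error_tablet_ids) {
    if (!error_tablet_ids.empty()) {
        return {false, "publish on tablet " +
                           std::to_string(error_tablet_ids.front()) +
                           " failed."};
    }
    return {};
}
```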
Signed-off-by: nextdreamblue <zxw520blue1@163.com>
Fix three bugs:
1. `repeated_parent_def_level` should be the definition level of its repeated parent.
2. Failed to parse schemas like `decimal(p, s)`.
3. Wrong offsets were filled for the array type.
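A minimal sketch of parsing a `decimal(p, s)` type string (illustrative; not the actual schema parser):

```cpp
#include <cstdio>
#include <string>

// Parse a type string like "decimal(p, s)" into precision and scale.
// Whitespace in the sscanf format matches any (possibly empty) run of
// spaces, so "decimal(10,2)" and "decimal(10, 2)" both succeed.
bool parse_decimal(const std::string& type, int* precision, int* scale) {
    int end = -1;
    std::sscanf(type.c_str(), "decimal( %d , %d ) %n", precision, scale, &end);
    return end != -1 && type[end] == '\0';
}
```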
This pr is a temporary circumvention fix.
In the regression case `inverted_index_p0/test_add_drop_index.groovy`, both a bitmap index and an inverted index are created
on the same table. Creating or dropping a bitmap index changes the table's state to `SCHEMA_CHANGE`, while creating or dropping
an inverted index does not change the table's state.
Before creating or dropping an inverted index, we check whether the table's state is `NORMAL`. Because the replay log for
the bitmap index had changed the table state and did not finish soon enough, the state had not changed back to `NORMAL`, so
replaying the log for the inverted index failed and FE startup failed.
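A minimal sketch of the state guard involved (illustrative; the real check lives in the FE):

```cpp
#include <stdexcept>

enum class TableState { NORMAL, SCHEMA_CHANGE, ROLLUP };

// The inverted-index alter path refuses to run unless the table is NORMAL;
// during replay, a bitmap-index job may still hold it in SCHEMA_CHANGE.
void check_can_alter_inverted_index(TableState state) {
    if (state != TableState::NORMAL) {
        throw std::runtime_error("table state is not NORMAL");
    }
}
```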
remove duplicate type definitions in function context
remove unused methods in function context
remove stale state from vexpr context: vexpr is stateless, function context saves the state, and both are cloned
remove the useless slot_size from all tuple and slot descriptors
remove the doris_udf namespace, it is useless
remove some unused macro definitions
init v_conjuncts in vscanner, so the same code does not need to be written in every scanner
use a unique ptr to manage the function context, since it can only belong to a single expr context
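A minimal sketch of that ownership change (illustrative; simplified stand-ins for the Doris classes):

```cpp
#include <memory>
#include <vector>

struct FunctionContext {
    // per-function state lives here
};

// Each expr context exclusively owns its function contexts, so a
// unique_ptr makes the single-owner invariant explicit and frees the
// contexts automatically when the expr context is destroyed.
struct VExprContext {
    std::vector<std::unique_ptr<FunctionContext>> fn_contexts;

    FunctionContext* register_fn_context() {
        fn_contexts.push_back(std::make_unique<FunctionContext>());
        return fn_contexts.back().get();
    }
};
```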
Issue Number: close #xxx
---------
Co-authored-by: yiguolei <yiguolei@gmail.com>
This is the first step to introduce the official hadoop libhdfs to Doris,
because the current hdfs client, libhdfs3, lacks some important features and is hard to maintain.
Download the hadoop 3.3.4 binary from the hadoop website: https://hadoop.apache.org/releases.html
Extract the libs and headers used by libhdfs, and pack them into hadoop_lib_3.3.4-x86.tar.gz
Upload it to https://github.com/apache/doris-thirdparty/releases/tag/hadoop-libs-3.3.4
TODO:
The hadoop libs for arm are missing; we need to find a way to build them
* [enhancement](transaction) Reduce the time DatabaseTransactionMgr holds the writeLock when clearing transactions (a sketch of the pattern follows this message)
* fix ut
* remove unnecessary field from the remove-txn bdbje log
---------
Co-authored-by: caiconghui1 <caiconghui1@jd.com>
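A minimal sketch of the lock-narrowing pattern from the first item above (illustrative C++; the actual manager is FE Java, and these names are hypothetical):

```cpp
#include <cstdint>
#include <mutex>
#include <shared_mutex>
#include <unordered_map>
#include <vector>

struct TxnState { bool expired = false; };

class TransactionMgr {
public:
    // Collect and erase the expired ids under the lock, but do the
    // expensive per-transaction cleanup after releasing it, so writers
    // are not blocked for the whole sweep.
    void clear_expired() {
        std::vector<int64_t> expired_ids;
        {
            std::unique_lock<std::shared_mutex> lock(_mutex);
            for (auto& [id, txn] : _txns) {
                if (txn.expired) expired_ids.push_back(id);
            }
            for (int64_t id : expired_ids) _txns.erase(id);
        }
        // heavy work (logging, notifying listeners) happens outside the lock
        for (int64_t id : expired_ids) on_cleared(id);
    }

private:
    void on_cleared(int64_t /*id*/) {}
    std::shared_mutex _mutex;
    std::unordered_map<int64_t, TxnState> _txns;
};
```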
Modify the default value of mem_limit to auto. auto means the process mem limit equals max(physical mem * 0.9, physical mem - 6.4G).
6.4G is the maximum memory reserved for the system.
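A minimal sketch of the formula (illustrative):

```cpp
#include <algorithm>
#include <cstdint>

// auto mem_limit: reserve up to 6.4 GiB for the system (or 10% of RAM,
// whichever is smaller), i.e. max(mem * 0.9, mem - 6.4G).
int64_t auto_mem_limit(int64_t physical_mem_bytes) {
    const int64_t reserved = static_cast<int64_t>(6.4 * (1L << 30));
    return std::max<int64_t>(static_cast<int64_t>(physical_mem_bytes * 0.9),
                             physical_mem_bytes - reserved);
}
```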
When a restore job has a large number of replicas, it may fail due to timeout. The error message is:
[RestoreJob.checkAndPrepareMeta():782] begin to send create replica tasks to BE for restore. total 381344 tasks. timeout: 600000
Currently, the max value of the timeout is fixed, which is not suitable for such cases.
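A sketch of one way to scale the timeout with the task count (an assumption about the shape of the fix; the constants are hypothetical):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical: give each create-replica task a per-task budget and keep
// the old fixed timeout as a floor, instead of one fixed maximum for
// every job size.
int64_t create_replica_timeout_ms(int64_t num_tasks) {
    const int64_t per_task_ms = 10;  // hypothetical per-task budget
    const int64_t min_ms = 600000;   // old fixed timeout as the floor
    return std::max(min_ms, num_tasks * per_task_ms);
}
```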
fix heap-use-after-free
The OrcReader has an internal FileInputStream. If the file is empty, the memory of the FileInputStream will leak.
Besides, there is a Statistics instance in the FileInputStream. The FileInputStream may be deleted if the orc reader
fails to initialize, but the Statistics may still be used when the orc reader is closed, causing a heap-use-after-free error.
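A minimal sketch of the lifetime fix with RAII (illustrative; simplified stand-ins for the real classes):

```cpp
#include <memory>

struct Statistics { /* counters updated while reading */ };

struct FileInputStream {
    Statistics stats;
};

// Keeping the stream in a unique_ptr owned by the reader guarantees it is
// freed exactly once, and checking it before use in close() avoids touching
// a stream that was already torn down after a failed init.
class OrcReader {
public:
    bool init(bool file_is_empty) {
        _stream = std::make_unique<FileInputStream>();
        if (file_is_empty) {
            _stream.reset();  // release instead of leaking or dangling
            return false;
        }
        return true;
    }
    void close() {
        if (_stream) {
            // safe: only touch statistics while the stream is alive
            (void)_stream->stats;
        }
    }

private:
    std::unique_ptr<FileInputStream> _stream;
};
```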
Potential memory leak
When initializing a file scanner in the file scan node, if the file scanner fails to prepare, the memory of the file scanner will leak.
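A minimal sketch of plugging the leak with RAII (illustrative; simplified signatures):

```cpp
#include <memory>
#include <utility>

struct Status {
    bool ok = true;
};

struct FileScanner {
    Status prepare() { return {false}; }  // may fail
};

// Constructing the scanner into a unique_ptr means a failed prepare()
// simply lets the pointer go out of scope and free the scanner, instead
// of leaking a raw allocation on the error path.
Status init_scanner(std::unique_ptr<FileScanner>* out) {
    auto scanner = std::make_unique<FileScanner>();
    Status st = scanner->prepare();
    if (!st.ok) {
        return st;  // scanner destroyed automatically here
    }
    *out = std::move(scanner);
    return {};
}
```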
segcompaction is async and runs in parallel with the load job. If the load job is
cancelling, its memory structures will be destroyed and cause segcompaction to
crash. This commit waits for segcompaction to finish before destruction.
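A minimal sketch of the wait-before-destroy pattern (illustrative; the real synchronization lives elsewhere in the BE):

```cpp
#include <condition_variable>
#include <mutex>

// Before tearing down load-job state, block until any in-flight
// segcompaction task has finished, so it never touches freed memory.
class SegcompactionGuard {
public:
    void task_started() {
        std::lock_guard<std::mutex> l(_m);
        _running = true;
    }
    void task_finished() {
        {
            std::lock_guard<std::mutex> l(_m);
            _running = false;
        }
        _cv.notify_all();
    }
    void wait_for_idle() {
        std::unique_lock<std::mutex> l(_m);
        _cv.wait(l, [this] { return !_running; });
    }

private:
    std::mutex _m;
    std::condition_variable _cv;
    bool _running = false;
};
```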
* [enhancement](stream load pipe) Use the query id or load id to identify a stream load pipe instead of the fragment instance id
NewLoadStreamMgr already has the pipe and other info, so we do not need to save the pipe into the fragment state, and FragmentState becomes clearer.
But this pr will change the behaviour of BE.
I will pick this pr to doris 1.2.3 and add load id support to the FE, so users can upgrade from 1.2.3 to 2.x (a sketch of the keyed lookup follows this message).
Co-authored-by: yiguolei <yiguolei@gmail.com>
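A minimal sketch of keying the pipe registry by load id (illustrative; simplified types, not the actual NewLoadStreamMgr):

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>
#include <utility>

struct UniqueId {
    int64_t hi = 0, lo = 0;
    bool operator<(const UniqueId& o) const {
        return hi != o.hi ? hi < o.hi : lo < o.lo;
    }
};

struct StreamLoadPipe {};

// Pipes are registered and looked up by the load id, which is stable for
// the whole load, rather than a fragment instance id, which differs per
// fragment instance.
class LoadStreamMgr {
public:
    void put(const UniqueId& load_id, std::shared_ptr<StreamLoadPipe> pipe) {
        std::lock_guard<std::mutex> l(_m);
        _pipes[load_id] = std::move(pipe);
    }
    std::shared_ptr<StreamLoadPipe> get(const UniqueId& load_id) {
        std::lock_guard<std::mutex> l(_m);
        auto it = _pipes.find(load_id);
        return it == _pipes.end() ? nullptr : it->second;
    }

private:
    std::mutex _m;
    std::map<UniqueId, std::shared_ptr<StreamLoadPipe>> _pipes;
};
```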