Commit Graph

6589 Commits

Author SHA1 Message Date
492a22dced select coordinator node from user's tag when exec streaming load (#27106) 2023-11-16 19:55:50 +08:00
754ca1fa46 [fix](Nereids) nested type coercion should not process struct (#27068) 2023-11-16 00:08:38 -06:00
2b401785ce [fix](Nereids) build array and map literal expression failed (#27060)
1. empty array and map literal
2. multi-layer nested array and map literal
2023-11-16 00:08:24 -06:00
343d58123d [Fix](nereids)Fix nereids fail to parse tablesample rows bug (#26981) 2023-11-16 12:23:37 +08:00
bf6a9383bc [fix](stats) table not exists error msg not print objects name (#27074) 2023-11-15 22:10:50 -06:00
6be74d22ea [fix](nereids) fix bug that querying information_schema.rowsets sends the fragment to only one of multiple BEs (#27025)
Fixed the bug of incomplete query results when querying information_schema.rowsets in the case of multiple BEs.

The reason is that the schema scanner sends the scan fragment to only one of multiple BEs, and that BE queries FE information through RPC. Since the rowsets information requires information from all BEs, the scan fragment needs to be sent to all BEs.
2023-11-16 12:08:22 +08:00
7e82e7651a [Improve](txn) Add some fuzzy test stub in txn (#26712) 2023-11-16 11:50:06 +08:00
624d372dcd [FIX](map)fix map element_at with decimal value (#27030) 2023-11-16 11:49:51 +08:00
f3ee6dd55a [feature](Nereids): eliminate sort under subquery (#26993) 2023-11-16 10:30:28 +08:00
a54cfb7558 [fix](backup) return if status not ok and reduce submitted jobs (#26940)
In prepareAndSendSnapshotTask(), if some table has an error, the status becomes not-OK, but the method does not return; the remaining tables still put their snapshot jobs into batchTask, and those jobs are submitted to BEs even though they need to be cancelled. So when the status is not OK, return early and do not submit the jobs.
2023-11-16 10:16:56 +08:00
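The early-return pattern described in the backup fix above can be sketched as follows. This is an illustrative Python stand-in, not Doris's actual Java code; all function names here are hypothetical.

```python
# Sketch of the fix: stop submitting snapshot jobs as soon as one table
# fails, instead of continuing with a not-OK status. Names are hypothetical.

def make_snapshot_job(table):
    # Pretend tables whose names start with "bad" fail to snapshot.
    if table.startswith("bad"):
        return False, None
    return True, f"snapshot({table})"

def submit(batch_task):
    pass  # stand-in for sending the batch of jobs to backends

def prepare_and_send_snapshot_tasks(tables):
    """Return (ok, submitted_jobs). Before the fix, a failing table set the
    status to not-OK but the loop kept going and batch_task was still
    submitted; now we return early and submit nothing."""
    batch_task = []
    for table in tables:
        ok, job = make_snapshot_job(table)
        if not ok:
            return False, []
        batch_task.append(job)
    submit(batch_task)
    return True, batch_task
```

With this shape, a single failing table cancels the whole submission rather than leaving orphaned jobs to clean up.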
612d9dd7c6 [fix](errmsg) fix multiple FE processes start err msg (#27009) 2023-11-16 10:16:35 +08:00
76d530e349 [fix](tablet sched) fix sched delete stale remain replica (#27050) 2023-11-15 21:26:22 +08:00
4dda5aad09 [fix](fe ut) fix unstable test DecommissionBackendTest (#26891) 2023-11-15 21:02:59 +08:00
26480b57ff [fix](fe ut) fix unstable ut TabletRepairAndBalanceTest (#27044) 2023-11-15 20:47:50 +08:00
83edcdead9 [enhancement](random_sink) change tablet search algorithm from random to round-robin for random distribution table (#26611)
1. fix a race condition when getting the tablet load index
2. change the tablet search algorithm from random to round-robin for random-distribution tables when load_to_single_tablet is set to false
2023-11-15 19:55:31 +08:00
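The two points above can be sketched together: a per-table counter advanced under a lock gives round-robin selection and removes the unsynchronized read/update race. This is a minimal Python model; the real logic is in Doris's C++/Java code and the class name here is invented.

```python
# Hypothetical sketch of round-robin tablet selection for a
# random-distribution table, replacing random choice.
import itertools
import threading

class RoundRobinTabletPicker:
    def __init__(self, tablet_ids):
        self._tablets = list(tablet_ids)
        self._counter = itertools.count()  # monotonically increasing index
        self._lock = threading.Lock()      # guards the index update (the race fix)

    def next_tablet(self):
        with self._lock:
            i = next(self._counter)
        # Cycle through tablets in order instead of picking one at random.
        return self._tablets[i % len(self._tablets)]
```

Round-robin spreads incoming load batches evenly across tablets, where random selection can transiently skew writes onto a few tablets.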
4e105e94a2 [fix](statistics) fix updated rows incorrect due to typo in code (#26979) 2023-11-15 05:25:46 -06:00
ad7dfd75d5 [enhancement](Nereids): make HyperGraph can be built from plan (#26592) 2023-11-15 19:06:14 +08:00
52d7725b36 [fix](auth) fix overwrite logic of user with domain (#27002)
Reproduce:
The DBA performs the following operations:
1. create user user1@['domain'];   // the domain resolves to 2 IPs: ip1 and ip2
2. create user user1@'ip1';
3. wait at least 10 seconds
4. grant all on *.*.* to user1@'ip1';  // returns an error: user1@'ip1' does not exist

This is because the daemon thread DomainResolver resolves the "domain" and overwrites the `user1@'ip1'`
that was created by the DBA.

This PR fixes it.
2023-11-15 18:19:54 +08:00
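The overwrite logic described above can be sketched as: when the resolver expands a domain into concrete IPs, it must skip identities the DBA created explicitly. This is a hedged Python illustration; the data structures and the "by_domain" flag are assumptions, not Doris's actual auth code.

```python
# Hypothetical sketch of the DomainResolver fix: domain-derived entries
# must not clobber explicitly created user identities.

def resolve_domain(user, resolved_ips, user_table):
    """user_table maps (user, host) -> {"by_domain": bool}.
    Entries with by_domain=False were created directly by the DBA."""
    for ip in resolved_ips:
        key = (user, ip)
        existing = user_table.get(key)
        if existing is not None and not existing["by_domain"]:
            # Explicitly created identity: leave it alone (the fix).
            continue
        user_table[key] = {"by_domain": True}
```

Before the fix, the loop unconditionally wrote every resolved IP, so `user1@'ip1'` lost the privileges granted to the explicit identity.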
2f529c1c7b [feature](Nereids): remove True in Join condition (#26951)
Remove `True` in Join Condition like `SELECT * FROM t1 JOIN t2 on True`;
2023-11-15 15:58:33 +08:00
760c6cdeab [fix](ut) enable running ut BDBJEJournalTest inside IDE #27012 2023-11-15 14:55:22 +08:00
a1d139080d [fix](nereids) partition prune is affected by non-partition-key condition (#26873)
stop propagating Context.childrenContainsNonInterestedSlots if the expr changed to TRUE
2023-11-15 14:46:59 +08:00
dbac12bae8 [fix](memory)Modify the default conf values of mem_limit and cache_last_version_interval_second (#26945)
mem_limit from 80% to 90%
cache_last_version_interval_second from 900 to 30
2023-11-15 14:02:58 +08:00
2c6d2255c3 [fix](Nereids) nested type literal type coercion and insert values with map (#26669) 2023-11-14 21:13:26 -06:00
df867a1531 [fix](catalog) Fix ClickHouse DataTime64 precision parsing (#26977) 2023-11-15 10:23:21 +08:00
82a88366f1 [fix](fe ut) make colcoate test wait time longer (#26999) 2023-11-15 08:18:48 +08:00
08044a35ae [fix](schema change) fix incorrect setting of schema change jobstate when replay editlog (#26992) 2023-11-14 23:53:22 +08:00
30d1e6036c [feature](runtime filter) New session variable runtime_filter_wait_infinitely (#26888)
New session variable: runtime_filter_wait_infinitely. If runtime_filter_wait_infinitely = true, the runtime filter consumer will keep waiting for the filter until the query times out.
2023-11-14 21:05:59 +08:00
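The waiting semantics can be sketched as: the consumer's deadline is either the fixed runtime-filter wait time or, with the new variable, the query timeout itself. A simplified Python model follows (the real wait loop is in the BE; the function and parameter names here are illustrative only).

```python
# Hypothetical model of the consumer-side wait: each loop iteration
# stands in for one millisecond of waiting for the runtime filter.

def wait_for_runtime_filter(poll_ready, wait_infinitely,
                            rf_wait_ms, query_timeout_ms):
    """poll_ready() -> bool reports whether the filter has arrived.
    Returns True if the filter arrived before the deadline."""
    deadline = query_timeout_ms if wait_infinitely else rf_wait_ms
    for _ in range(deadline):
        if poll_ready():
            return True
    return False  # gave up; the scan proceeds without the filter
```

The trade-off: waiting longer can avoid scanning rows the filter would have pruned, at the cost of delaying the scan start up to the query timeout.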
9a4fd5be79 [nereids](datetime) fix wrong result type of datetime add with interval as first arg (#26957)
An incorrect result data type causes a BE coredump:

drop table if exists testaaa;
create table testaaa(k1 tinyint, k2 smallint, k3 int, k4 bigint, k5 decimal(9,3), k6 char(5), k10 date, k11 datetime, k7 varchar(20), k8 double max, k9 float sum) engine=olap distributed by hash(k1) buckets 5 properties("storage_type"="column","replication_num"="1") ;

insert into testaaa values(1,1,1,1,9.3, "k6", "2023-11-14", "2023-11-14", "k7", 9.99, 9.99);


select  interval 10 year + k10 from testaaa;
The plan result type is DATE:

mysql [test]>explain verbose select   interval 10 year + k10  from testaaa;
+-------------------------------------------------------------------------------------------------------+
| Explain String(Nereids Planner)                                                                       |
+-------------------------------------------------------------------------------------------------------+
| PLAN FRAGMENT 0                                                                                       |
|   OUTPUT EXPRS:                                                                                       |
|     years_add(k10, INTERVAL 10 YEAR)[#11]                                                             |
|   PARTITION: UNPARTITIONED                                                                            |
|                                                                                                       |
|   HAS_COLO_PLAN_NODE: false                                                                           |
|                                                                                                       |
|   VRESULT SINK                                                                                        |
|      MYSQL_PROTOCAL                                                                                   |
|                                                                                                       |
|   64:VEXCHANGE                                                                                        |
|      offset: 0                                                                                        |
|      tuple ids: 1N                                                                                    |
|                                                                                                       |
| PLAN FRAGMENT 1                                                                                       |
|                                                                                                       |
|   PARTITION: HASH_PARTITIONED: k1[#0]                                                                 |
|                                                                                                       |
|   HAS_COLO_PLAN_NODE: false                                                                           |
|                                                                                                       |
|   STREAM DATA SINK                                                                                    |
|     EXCHANGE ID: 64                                                                                   |
|     UNPARTITIONED                                                                                     |
|                                                                                                       |
|   58:VOlapScanNode                                                                                    |
|      TABLE: default_cluster:test.testaaa(testaaa), PREAGGREGATION: OFF. Reason: No aggregate on scan. |
|      partitions=1/1 (testaaa), tablets=5/5, tabletList=945025,945027,945029 ...                       |
|      cardinality=1, avgRowSize=9885.0, numNodes=1                                                     |
|      pushAggOp=NONE                                                                                   |
|      projections: years_add(k10[#6], INTERVAL 10 YEAR)                                                |
|      project output tuple id: 1                                                                       |
|      tuple ids: 0                                                                                     |
|                                                                                                       |
| Tuples:                                                                                               |
| TupleDescriptor{id=0, tbl=testaaa, byteSize=8}                                                        |
|   SlotDescriptor{id=6, col=k10, colUniqueId=6, type=DATEV2, nullable=true, isAutoIncrement=false}     |
|                                                                                                       |
| TupleDescriptor{id=1, tbl=testaaa, byteSize=32}                                                       |
|   SlotDescriptor{id=11, col=null, colUniqueId=null, type=DATE, nullable=true, isAutoIncrement=false}  |
+-------------------------------------------------------------------------------------------------------+
39 rows in set (1 min 31.50 sec)
coredump stack:

F1109 20:11:37.677680 323805 assert_cast.h:61] Bad cast from type:doris::vectorized::ColumnVector to doris::vectorized::ColumnVector
*** Check failure stack trace: ***
    @     0x56489d591d3d  google::LogMessage::Fail()
    @     0x56489d594279  google::LogMessage::SendToLog()
    @     0x56489d5918a6  google::LogMessage::Flush()
    @     0x56489d5948e9  google::LogMessageFatal::~LogMessageFatal()
    @     0x56487a2a8a0c  assert_cast<>()
    @     0x5648893d8312  doris::vectorized::ColumnVector<>::insert_range_from()
    @     0x56488924a670  doris::vectorized::ColumnNullable::insert_range_from()
    @     0x56487a454475  doris::ExecNode::do_projections()
    @     0x56487a454b89  doris::ExecNode::get_next_after_projects()
*** Query id: a467995b35334741-b625042f56495aaf ***
*** tablet id: 0 ***
*** Aborted at 1699531898 (unix time) try "date -d @1699531898" if you are using GNU date ***
*** Current BE git commitID: 0d83327a7c ***
*** SIGABRT unknown detail explain (@0x190d64) received by PID 1641828 (TID 1642168 OR 0x7f6ff96c0700) from PID 1641828; stack trace: ***
    @     0x556ca2a3ab8f  std::_Function_handler<>::_M_invoke()
    @     0x556c9f322787  std::function<>::operator()()
    @     0x556ca29da0b0  doris::Thread::supervise_thread()
    @     0x7f71b9c38609  start_thread
 0# doris::signal::(anonymous namespace)::FailureSignalHandler(int, siginfo_t*, void*) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/common/signal_handler.h:417
 1# 0x00007F71B9E09090 in /lib/x86_64-linux-gnu/libc.so.6
 2# raise at ../sysdeps/unix/sysv/linux/raise.c:51
 3# abort at /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:81
 4# 0x0000556CC51F3729 in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
 5# 0x0000556CC51E8D3D in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
 6# google::LogMessage::SendToLog() in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
 7# google::LogMessage::Flush() in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
 8# google::LogMessageFatal::~LogMessageFatal() in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
 9# doris::vectorized::ColumnVector const& assert_cast const&, doris::vectorized::IColumn const&>(doris::vectorized::IColumn const&) in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
10# doris::vectorized::ColumnVector::insert_range_from(doris::vectorized::IColumn const&, unsigned long, unsigned long) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/vec/columns/column_vector.cpp:354
11# doris::vectorized::ColumnNullable::insert_range_from(doris::vectorized::IColumn const&, unsigned long, unsigned long) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/vec/columns/column_nullable.cpp:289
12# doris::ExecNode::do_projections(doris::vectorized::Block*, doris::vectorized::Block*) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/exec/exec_node.cpp:573
13# doris::ExecNode::get_next_after_projects(doris::RuntimeState*, doris::vectorized::Block*, bool*, std::function const&, bool) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/exec/exec_node.cpp:592
14# doris::pipeline::SourceOperator::get_block(doris::RuntimeState*, doris::vectorized::Block*, doris::pipeline::SourceState&) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/pipeline/exec/operator.h:413
15# doris::pipeline::PipelineTask::execute(bool*) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/pipeline/pipeline_task.cpp:259
16# doris::pipeline::TaskScheduler::_do_work(unsigned
2023-11-14 20:28:41 +08:00
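The root cause above is a type-derivation mismatch: the output slot was typed legacy DATE while the scanned column was DATEV2, so the BE's assert_cast between the two column types aborted. A simplified model of the corrected rule (derive the result type from the date operand regardless of operand order) might look like this; it is a Python sketch, not Doris's Java planner code.

```python
# Hypothetical sketch: result type of a date/interval addition should be
# taken from the date/datetime operand, even when the interval comes first
# (e.g. `INTERVAL 10 YEAR + k10`).

DATE_TYPES = {"DATE", "DATEV2", "DATETIME", "DATETIMEV2"}

def datetime_arith_result_type(left, right):
    """Return the result type of interval +/- date arithmetic."""
    date_operands = [t for t in (left, right) if t in DATE_TYPES]
    if not date_operands:
        raise ValueError("expression has no date/datetime operand")
    # Keep the date operand's exact type (DATEV2 stays DATEV2),
    # instead of falling back to legacy DATE.
    return date_operands[0]
```

With the operand-order bug, `("INTERVAL", "DATEV2")` was mapped to DATE, producing the mismatched TupleDescriptor shown in the plan above.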
2d8438fa1b [fix](forward) add exception msg for ForwardToMasterException (#26956)
Signed-off-by: nextdreamblue <zxw520blue1@163.com>
2023-11-14 20:20:29 +08:00
8b097af9fa [fix](move-memtable) support http stream (#26929) 2023-11-14 20:19:57 +08:00
fad6770225 [improvement](statistics)Multi bucket columns using DUJ1 to collect ndv (#26950)
Using DUJ1 to collect ndv for multiple bucket columns.
2023-11-14 20:13:37 +08:00
4889c1d029 [hotfix](priv) Fix restore snapshot user priv with add cluster in UserIdentity (#26969)
Signed-off-by: Jack Drogon <jack.xsuperman@gmail.com>
2023-11-14 19:48:54 +08:00
13bc6b702b [refactor](Job)Refactor JOB (#26845)
##  Motivation:
In the past, our JOB only supported Timer JOB, which could only provide scheduling for fixed-time tasks. Meanwhile, the JOB was solely responsible for execution, and during execution, there might be inconsistencies in states, where the task was executed successfully, but the JOB's recorded task status was not updated.
This inconsistency in task states recorded by the JOB could not guarantee the correctness of the JOB's status. With the gradual integration of various businesses into the JOB, such as the export job and mtmv job, we found that scaling became difficult, and the JOB became particularly bulky. Hence, we have decided to refactor JOB.

## Refactoring Goals:
- Provide a unified external registration interface so that all JOBs can be registered through this interface and be scheduled by the JobScheduler.

- The JobScheduler can schedule instant JOBs, timer JOBs, and manual JOBs.

- JOB should provide a unified external extension class. All JOBs can be extended through this extension class, which can provide special functionalities like JOB status restoration, Task execution, etc.

- Extended JOBs should manage task states on their own to avoid inconsistent state maintenance issues.

- Different JOBs should use their own thread pools for processing to prevent inter-JOB interference.
###  Design:
- The JOBManager provides a unified registration interface through which all JOBs can register and then be scheduled by the JobScheduler.
- The TimerJob periodically fetches JOBs that need to be scheduled within a time window and hands them over to the Time Wheel for triggering. To prevent excessive tasks in the Time Wheel, it distributes the tasks to the dispatch thread pool, which then assigns them to corresponding thread pools for execution.
- ManualJob or Instant Job directly assigns tasks to the corresponding thread pool for execution.
- The JOB provides a unified extension class that all JOBs can utilize for extension, providing special functionalities like JOB status restoration, Task execution, etc.
- To implement a new JOB, one only needs to implement AbstractJob.class and AbstractTask.class.
<img width="926" alt="image" src="https://github.com/apache/doris/assets/16631152/3032e05d-133e-425b-b31e-4bb492f06ddc">

## NOTICE
This will cause the master's metadata to be incompatible
2023-11-14 18:18:59 +08:00
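The extension point described in the design above (implement AbstractJob and AbstractTask, with each JOB owning its task state) can be sketched as follows. This is a hedged Python illustration: the class names mirror the commit message, but every method and state value here is an assumption, not the actual Java API.

```python
# Hypothetical sketch of the refactored JOB extension point. The task, not
# the scheduler, transitions its own state, avoiding the stale-status
# inconsistency the refactor set out to fix.
from abc import ABC, abstractmethod

class AbstractTask(ABC):
    def __init__(self):
        self.state = "PENDING"

    @abstractmethod
    def run(self):
        """Subclasses implement the actual work here."""

    def execute(self):
        self.state = "RUNNING"
        try:
            self.run()
            self.state = "SUCCESS"   # task owns its terminal state
        except Exception:
            self.state = "FAILED"

class AbstractJob(ABC):
    @abstractmethod
    def create_tasks(self):
        """Return the tasks this JOB wants scheduled."""

# A minimal concrete JOB: implement the two abstract classes and nothing else.
class EchoTask(AbstractTask):
    def run(self):
        pass

class EchoJob(AbstractJob):
    def create_tasks(self):
        return [EchoTask()]
```

A scheduler then only needs to call `create_tasks()` and dispatch each task's `execute()` to the JOB's own thread pool, matching the isolation goal above.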
bfa50f08c1 [enhancement](Nereids): add nereids profile (#26935) 2023-11-14 16:43:17 +08:00
cd25579bdf [fix](statistics)Fix external table show column stats type bug (#26910)
The show column stats result for external tables shows N/A in the method, type, trigger, and query_times columns. This PR fixes the bug so the correct values are shown.
2023-11-14 14:47:05 +08:00
23e2bded1a [fix](Nereids) column pruning under union broken unexpectedly (#26884)
introduced by PR #24060
2023-11-14 00:39:32 -06:00
1baf541532 [fix](config)Fix fe pom cdh download failed issue (#26913)
fix failure to download the net.sourceforge.czt.dev jar.

---------

Co-authored-by: Yijia Su <suyijia@selectdb.com>
2023-11-14 14:17:24 +08:00
de38ffe2b2 [Fix](multi-catalog) Fix NPE when replaying hms events (#26803)
Invoking ConnectContext.get() in the replayer thread of slave FE nodes may return null, so an NPE is thrown and slave nodes crash.
Co-authored-by: wangxiangyu <wangxiangyu@360shuke.com>
2023-11-14 13:55:25 +08:00
f1d90ffc4e [regression](nereids) add test case for partition prune (#26849)
* list selected partition name in explain
* add prune partition test case (multi-range key)
2023-11-14 11:51:32 +08:00
0a9d71ebd2 [Fix](Planner) fix varchar does not show real length (#25171)
Problem:
when we create a table with data type varchar(), we treat it as max length by default. But when we desc the table, it does not show the real length; it shows varchar().
Reason:
when upgrading from 2.0.1 to 2.0.2, we support the new feature of creating varchar(), and it is shown the same way as in the DDL schema, so users are confused about the varchar length.
Solution:
change the display of varchar() to varchar(65533), which is compatible with Hive.
2023-11-14 10:49:21 +08:00
e0934166f5 [bugfix](es-catalog)fix exception when querying ES table (#26788) 2023-11-14 10:47:37 +08:00
fef627c0ba [Fix](Txn) Fix transaction write to sequence column error (#26748) 2023-11-14 10:30:10 +08:00
df6e444e75 [improvement](log) log desensitization without displaying user info (#26912) 2023-11-14 08:30:00 +08:00
e7a8022106 [enhancement](binlog) Add dbName && tableName in CreateTableRecord (#26901)
Signed-off-by: Jack Drogon <jack.xsuperman@gmail.com>
2023-11-14 08:29:38 +08:00
b19abac5e2 [fix](move-memtable) pass num local sink to backends (#26897) 2023-11-14 08:28:49 +08:00
37ca129fa7 [test](fe) Add more FE UT for org.apache.doris.journal.bdbje (#26629) 2023-11-13 23:06:17 +08:00
2853efd4ee [fix](partial update) Fix NPE when the query statement of an update statement is a point query in OriginPlanner (#26881)
close #26882
We should not use the singleNodePlan to generate the rootPlanFragment when the query is inside an insert statement, because distributedPlanner will be null in that case.
introduced in #15491
2023-11-13 22:08:06 +08:00
f4d5e6dd55 [improvement](backend balance) improve capacity coefficient of be load score (#26874) 2023-11-13 21:51:31 +08:00
4d3983335a [log](tablet invert) add precondition check failed log (#26770) 2023-11-13 21:47:33 +08:00
ebc15fc6cc [fix](transaction) Fix concurrent schema change and txn cause dead lock (#26428)
Concurrent schema change and txn may cause a dead lock. An example:

1. Txn T commits but is not yet published;
2. A schema change or rollup runs on T's related partition, adding alter replica R;
3. sc/rollup adds a sched txn watermark M;
4. FE restarts;
5. After the FE restart, T's loadedTblIndexes is cleared because it is not saved to disk;
6. T publishes its version to all tablets, including sc/rollup's new alter replica R;
7. Since R does not contain the txn data, T fails and keeps waiting for R's data;
8. sc/rollup waits for txns before M to finish; only after that will it let R copy history data;
9. Since T never finishes, sc/rollup waits forever, so R never copies history data;
10. Txn T and sc/rollup wait for each other forever, causing a dead lock.

Fix: because sc/rollup ensures double write after the sched watermark M, when finishing a transaction and checking an alter replica:

- if the txn id is bigger than M, check it just like a normal replica;
- otherwise skip checking this replica; the BE will fix up the history data later.
2023-11-13 21:39:28 +08:00
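The replica-check rule in the fix above reduces to a small predicate: alter replicas are checked normally only for txns after the watermark M, since those are double-written. A Python sketch (the real code is Java in the FE; the function name is invented):

```python
# Hypothetical sketch of the finish-transaction replica check with a
# schema-change/rollup sched watermark M.

def should_check_replica(is_alter_replica, txn_id, watermark_m):
    """Decide whether a replica must be verified when finishing a txn."""
    if not is_alter_replica:
        return True            # normal replicas are always checked
    # Alter replicas double-write only for txns after the watermark;
    # earlier txns skip the check, and the BE copies history data later.
    return txn_id > watermark_m
```

Skipping the check for pre-watermark txns is what breaks the circular wait between the publishing txn and sc/rollup.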