# Proposed changes

Issue Number: close #6238

Co-authored-by: HappenLee <happenlee@hotmail.com>
Co-authored-by: stdpain <34912776+stdpain@users.noreply.github.com>
Co-authored-by: Zhengguo Yang <yangzhgg@gmail.com>
Co-authored-by: wangbo <506340561@qq.com>
Co-authored-by: emmymiao87 <522274284@qq.com>
Co-authored-by: Pxl <952130278@qq.com>
Co-authored-by: zhangstar333 <87313068+zhangstar333@users.noreply.github.com>
Co-authored-by: thinker <zchw100@qq.com>
Co-authored-by: Zeno Yang <1521564989@qq.com>
Co-authored-by: Wang Shuo <wangshuo128@gmail.com>
Co-authored-by: zhoubintao <35688959+zbtzbtzbt@users.noreply.github.com>
Co-authored-by: Gabriel <gabrielleebuaa@gmail.com>
Co-authored-by: xinghuayu007 <1450306854@qq.com>
Co-authored-by: weizuo93 <weizuo@apache.org>
Co-authored-by: yiguolei <guoleiyi@tencent.com>
Co-authored-by: anneji-dev <85534151+anneji-dev@users.noreply.github.com>
Co-authored-by: awakeljw <993007281@qq.com>
Co-authored-by: taberylyang <95272637+taberylyang@users.noreply.github.com>
Co-authored-by: Cui Kaifeng <48012748+azurenake@users.noreply.github.com>

## Problem Summary

### 1. Some code from ClickHouse

**ClickHouse is an excellent implementation of a vectorized execution engine, so we have referenced and learned a great deal from its data structures and function implementations. Our work is based on ClickHouse v19.16.2.2, and we would like to thank the ClickHouse community and its developers.**

The following comment has been added to code copied from ClickHouse, e.g.:

```cpp
// This file is copied from
// https://github.com/ClickHouse/ClickHouse/blob/master/src/Interpreters/AggregationCommon.h
// and modified by Doris
```

### 2. Supported exec nodes and queries

* vaggregation_node
* vanalytic_eval_node
* vassert_num_rows_node
* vblocking_join_node
* vcross_join_node
* vempty_set_node
* ves_http_scan_node
* vexcept_node
* vexchange_node
* vintersect_node
* vmysql_scan_node
* vodbc_scan_node
* volap_scan_node
* vrepeat_node
* vschema_scan_node
* vselect_node
* vset_operation_node
* vsort_node
* vunion_node
* vhash_join_node

With the vectorized exec engine you can run the SSB/TPC-H query sets and about 70% of the TPC-DS standard query set.

### 3. Data model support

The vectorized exec engine supports the **Dup/Agg/Unq** table models, and the Block Reader is vectorized. Vectorized Segment reading is a work in progress.

### 4. How to use

1. Set the session variable `set enable_vectorized_engine = true;` (required)
2. Set the session variable `set batch_size = 4096;` (recommended)

### 5. Differences from the original exec engine

See https://github.com/doris-vectorized/doris-vectorized/issues/294

## Checklist (Required)

1. Does it affect the original behavior: (No)
2. Have unit tests been added: (Yes)
3. Has documentation been added or modified: (No)
4. Does it need to update dependencies: (No)
5. Are there any changes that cannot be rolled back: (Yes)
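The two session variables from the "How to use" section above can be set from any MySQL-protocol client before running queries. A minimal hypothetical session (the table name is a placeholder):

```sql
-- Enable the vectorized engine for the current session (required).
SET enable_vectorized_engine = true;
-- Use a larger batch size with the vectorized engine (recommended).
SET batch_size = 4096;
-- Subsequent queries in this session now run on the vectorized exec nodes.
SELECT COUNT(*) FROM lineorder;  -- `lineorder` is a placeholder table name
```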
```cpp
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
//   http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

#ifndef DORIS_BE_SRC_QUERY_EXEC_SCAN_NODE_H
#define DORIS_BE_SRC_QUERY_EXEC_SCAN_NODE_H

#include <string>
#include <vector>

#include "exec/exec_node.h"
#include "gen_cpp/PaloInternalService_types.h"
#include "util/runtime_profile.h"

namespace doris {

class TScanRange;

// Abstract base class of all scan nodes; introduces set_scan_ranges().
//
// Includes ScanNode common counters:
//   BytesRead - total bytes read by this scan node
//
//   TotalRawHdfsReadTime - measures the total time spent in the disk-io-mgr's reading
//     threads for this node. For example, if we have 3 reading threads and each spent
//     1 sec, this counter will report 3 sec.
//
//   TotalReadThroughput - BytesRead divided by the total time spent in this node
//     (from Open to Close). For IO-bound queries, this should be very close to the
//     total throughput of all the disks.
//
//   PerDiskRawHdfsThroughput - the read throughput for each disk. If all the data
//     resides on disk, this should be the read throughput of the disk, regardless of
//     whether the query is IO-bound or not.
//
//   NumDisksAccessed - number of disks accessed.
//
//   AverageIoMgrQueueCapcity - the average queue capacity in the io mgr for this node.
//   AverageIoMgrQueueSize - the average queue size (for ready buffers) in the io mgr
//     for this node.
//
//   AverageScannerThreadConcurrency - the average number of active scanner threads. A
//     scanner thread is considered active if it is not blocked by IO. This number would
//     be low (less than 1) for IO-bound queries. For CPU-bound queries, this number
//     would be close to the max scanner threads allowed.
//
//   AverageHdfsReadThreadConcurrency - the average number of active hdfs reading threads
//     reading for this scan node. For IO-bound queries, this should be close to the
//     number of disks.
//
//   HdfsReadThreadConcurrencyCount=<i> - the number of samples taken when the hdfs read
//     thread concurrency is <i>.
//
//   ScanRangesComplete - number of scan ranges completed
class ScanNode : public ExecNode {
public:
    ScanNode(ObjectPool* pool, const TPlanNode& tnode, const DescriptorTbl& descs)
            : ExecNode(pool, tnode, descs) {}

    // Set up counters.
    Status prepare(RuntimeState* state) override;

    // Convert scan_ranges into node-specific scan restrictions. This should be
    // called after prepare().
    virtual Status set_scan_ranges(const std::vector<TScanRangeParams>& scan_ranges) = 0;

    bool is_scan_node() const override { return true; }

    RuntimeProfile::Counter* bytes_read_counter() const { return _bytes_read_counter; }
    RuntimeProfile::Counter* rows_read_counter() const { return _rows_read_counter; }
    RuntimeProfile::Counter* total_throughput_counter() const { return _total_throughput_counter; }

    // Names of ScanNode common counters.
    static const std::string _s_bytes_read_counter;
    static const std::string _s_rows_read_counter;
    static const std::string _s_total_throughput_counter;
    static const std::string _s_num_disks_accessed_counter;

protected:
    RuntimeProfile::Counter* _bytes_read_counter; // # bytes read from the scanner
    // # rows/tuples read from the scanner (including those discarded by eval_conjuncts())
    RuntimeProfile::Counter* _rows_read_counter;
    // Wall-clock aggregate read throughput [bytes/sec]
    RuntimeProfile::Counter* _total_throughput_counter;
    RuntimeProfile::Counter* _num_disks_accessed_counter;
};

} // namespace doris

#endif // DORIS_BE_SRC_QUERY_EXEC_SCAN_NODE_H
```