Commit Graph

4627 Commits

Author SHA1 Message Date
86d77084a4 [Fix](multi-catalog) fix oss access issue with aws s3 sdk (#20287) 2023-06-02 10:40:07 +08:00
8bec2b41db [pipeline](rpc) support closure reuse in pipeline exec engine (#20278) 2023-06-02 09:50:21 +08:00
608d2a3eca [Bug](exec) fix wrong result when pushing down min aggregation without group by (#20289)
sql """
CREATE TABLE t1_int (
num int(11) NULL,
dgs_jkrq bigint(20) NULL
) ENGINE=OLAP
DUPLICATE KEY(num)
COMMENT 'OLAP'
DISTRIBUTED BY HASH(num) BUCKETS 1
PROPERTIES (
"replication_allocation" = "tag.location.default: 1",
"storage_format" = "V2",
"light_schema_change" = "true",
"disable_auto_compaction" = "false",
"enable_single_replica_compaction" = "false"
);
"""
sql """insert into t1_int values(1,1),(1,2),(1,3),(1,4),(1,null);"""
qt_sql """
select min(dgs_jkrq) from t1_int;
"""

Before this change, the query returned the wrong result: 4.

After this change, it returns the correct result: 1.
2023-06-01 17:29:46 +08:00
f0513a861d [Improve](Scan) add a session variable to make scan run serial (#20220)
Parallel scanning can cause read amplification: for example, `select * from xx limit 1` actually requires only one row of data, but because multiple tablets are scanned in parallel, far more data is read, leading to performance bottlenecks in high-concurrency scenarios. This PR adds a session variable that enforces serial scanning to mitigate the issue.
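A sketch of the intended usage, assuming the new variable is named `enable_scan_run_serial` (the exact name is not stated in this log, so treat it as hypothetical):

```
-- hypothetical variable name: force tablets to be scanned serially
-- for high-concurrency, small-limit lookups
SET enable_scan_run_serial = true;
SELECT * FROM xx LIMIT 1;
```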
2023-06-01 15:06:35 +08:00
519f01133a [feature](decimal)support cast rounding half up and div precision increment in decimalv3. (#19811) 2023-06-01 13:09:58 +08:00
4387f47fb5 [pipeline](load) support pipeline load (#20217) 2023-06-01 11:42:43 +08:00
9e21318834 [refactor](dynamic table) Make segment_writer unaware of dynamic schema, and ensure parsing is exception-safe. (#19594)
1. make ColumnObject exception safe
2. introduce FlushContext and construct schema at memtable flush stage to make segment independent from dynamic schema
3. add more test cases
2023-06-01 10:25:04 +08:00
5b6b1b38a6 [Enhancement](merge-on-write) Performance optimization of calculations of delete bitmap between segments (#20153)
1. Use heap sort to find duplicated keys between segments and update the delete-bitmap. The old implementation traversed all keys in all segments, used each key to search for duplicates in earlier segments, and then marked them for deletion.

2. Trick: each time the heap top is popped as key1, the new heap top is key2, so the iterator can seek directly from key1 to key2 instead of advancing one key at a time.

3. Effect: This technique works well when there are many segments within the same rowset and the imported data is relatively ordered.
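For context, this optimization applies to merge-on-write unique-key tables; a minimal sketch of such a table, assuming the standard `enable_unique_key_merge_on_write` property:

```
-- merge-on-write unique-key table: duplicate keys across segments
-- are resolved via the delete bitmap that this PR speeds up
CREATE TABLE mow_t (
    k int,
    v int
) ENGINE=OLAP
UNIQUE KEY(k)
DISTRIBUTED BY HASH(k) BUCKETS 1
PROPERTIES (
    "replication_allocation" = "tag.location.default: 1",
    "enable_unique_key_merge_on_write" = "true"
);
```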
2023-06-01 10:12:59 +08:00
09e6b6580f [fix](checksum) delete predicates might be inconsistent with rowset readers in checksum task (#20251)
The BlockReader captures rowsets and initializes the delete_handler in different places. If a base compaction runs in between, the delete handlers obtained may be inconsistent with the captured rowsets. Therefore, both operations are now placed under the same lock.
2023-06-01 09:06:51 +08:00
6ee99c4138 [fix](load_profile) fix rows stat and add close_wait in sink (#20181) 2023-05-31 18:23:30 +08:00
1aefc26ca0 [Bug](memtable) fix a bug that occurred when inserting data into a duplicate table without keys (#20233) 2023-05-31 18:21:36 +08:00
6adb3fdf11 [fix](match_phrase) Fix the inconsistent query result for 'match_phrase' after creating index without support_phrase property (#20258)
If an inverted index is created without the support_phrase property, the match_phrase condition is kept and evaluated by the match function instead of the index.
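For reference, a hedged sketch of the two paths (table and data are hypothetical; the index syntax follows the other inverted-index examples in this log):

```
CREATE TABLE logs (
    id bigint,
    msg text NULL,
    INDEX idx_msg (`msg`) USING INVERTED
        PROPERTIES("parser" = "english", "support_phrase" = "true") COMMENT ''
) ENGINE=OLAP
DUPLICATE KEY(id)
DISTRIBUTED BY HASH(id) BUCKETS 1
PROPERTIES ("replication_allocation" = "tag.location.default: 1");

-- answered by the phrase index when support_phrase is true;
-- without the property, the condition now falls back to the match function
SELECT * FROM logs WHERE msg MATCH_PHRASE 'hello world';
```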
2023-05-31 18:09:50 +08:00
c03a19ea23 [improvement](bitmap) Using set to store a small number of elements to improve performance (#19973)
Test on SSB 100g.

Without a materialized view:

select lo_suppkey, count(distinct lo_linenumber) from lineorder group by lo_suppkey;

exec time: 4.388s

After creating a materialized view:

create materialized view customer_uv as select lo_suppkey, bitmap_union(to_bitmap(lo_linenumber)) from lineorder group by lo_suppkey;

the same query takes 12.908s, since the bitmap path is slower when each group holds only a few elements.

With this patch (which stores a small number of elements in a set instead), exec time: 5.790s
2023-05-31 16:13:42 +08:00
f9dfcb923d [Enhancement] Change Create Resource Group Grammar (#20249) 2023-05-31 15:23:24 +08:00
6eb99d1219 [chore](arm) support build with hadoop libhdfs on arm (#20256)
hadoop-3.3.4.3-for-doris already supports building on ARM.
2023-05-31 13:57:48 +08:00
ca88425bee [Enhancement](merge-on-write) optimize bloom filter for primary key index (#20182) 2023-05-31 09:49:15 +08:00
aae04d9680 [Chore](log) Remove some verbose log && Change log level (#20236) 2023-05-31 09:15:01 +08:00
56fa38de1d [Enhancement](JDBC Catalog) refactor jdbc catalog insert logic (#19950)
This PR refactors the old way of writing data to the JDBC External Table & JDBC Catalog, mainly including the following tasks:
1. Continuing the work of @BePPPower 's PR #18594, replacing the INSERT SQL string-splicing logic with off-heap memory operations, using preparedStatement.set to write data.
2. Adding write support for the largeint type, mainly adapting to java.math.BigInteger, which uses binary operations.
3. Deleting the SQL string-splicing logic from the JDBC External Table & JDBC Catalog write code.

ToDo: binary types, like bit, binary, blob...

Finally, special thanks to @BePPPower and @AshinGau for their work.

Co-authored-by: Tiewei Fang <43782773+BePPPower@users.noreply.github.com>
2023-05-30 22:03:39 +08:00
accaff1026 [Feature](compaction) wip: single replica compaction (#19237)
Currently, compaction is executed separately for each backend, and the reconstruction of the index during compaction leads to high CPU usage. To address this, we are introducing single replica compaction, where a specific primary replica is selected to perform compaction, and the remaining replicas fetch the compaction results from the primary replica.

The Backend (BE) requests replica information for all peers corresponding to a tablet from the Frontend (FE). This information includes the host where the replica is located and the replica_id. By calculating hash(replica_id), the replica with the smallest hash value is responsible for executing compaction, while the remaining replicas are responsible for fetching the compaction results from this replica.
The compaction task producer thread, before submitting a compaction task, checks whether the local replica should fetch from its peer. If it should, the task is then submitted to the single replica compaction thread pool.
When performing single replica compaction, the process begins by requesting rowset versions from the target replica. These rowset_versions are then compared with the local rowset versions. The first version that can be fetched is selected.
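The `enable_single_replica_compaction` table property already appears in the test snippet near the top of this log (default "false"); a hedged sketch of opting a multi-replica table into the feature:

```
-- with 3 replicas, the replica with the smallest hash(replica_id) compacts;
-- the other two fetch its compaction results
CREATE TABLE t_compact (
    k int,
    v int
) ENGINE=OLAP
DUPLICATE KEY(k)
DISTRIBUTED BY HASH(k) BUCKETS 1
PROPERTIES (
    "replication_allocation" = "tag.location.default: 3",
    "enable_single_replica_compaction" = "true"
);
```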
2023-05-30 21:12:48 +08:00
Pxl
7415135ad4 [Enhancement](execute) make assert_cast output the derived class name (#20212)
before:
F0530 11:02:41.989699 1154607 assert_cast.h:54] Bad cast from type:doris::vectorized::IDataType const* to doris::vectorized::DataTypeAggState const*

after:
F0530 11:24:28.390286 1292475 assert_cast.h:46] Bad cast from type:doris::vectorized::DataTypeNullable* to doris::vectorized::DataTypeAggState const*
2023-05-30 20:23:04 +08:00
1919355c04 [Feature](Inverted index) add MATCH_PHRASE query (#20156) 2023-05-30 19:28:57 +08:00
3d8440a1b7 [Feature-WIP](inverted index) support phrase for inverted index writer (#20193) 2023-05-30 17:07:45 +08:00
0c98355fff [fix](catalog) fix create catalog with resource replay issue and kerberos auth issue (#20137)
1. Fix create catalog with resource replay bug.
	If a user creates a catalog using `create catalog hive with resource xxx`, there is a bug when replaying the edit log:
	the resource may already be dropped, causing an NPE, and FE will fail to start.

	In this PR, I add a new FE config `disallow_create_catalog_with_resource`, default true,
	so that `with resource` is no longer allowed; it will be deprecated later.

	And also fix the replay bug to avoid the NPE.

2. Fix issue when creating 2 hive catalogs, one with and one without kerberos authentication (see the sketch after this list).

	When a user creates 2 hive catalogs, one using simple auth and the other using kerberos auth,
	queries may fail with an error like: `Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.`

	So I add a default property for hive catalogs: `"ipc.client.fallback-to-simple-auth-allowed" = "true"`.
	This property is added automatically when a user creates a hive catalog, to avoid the problem.

3. Fix calling `hdfsExists()` issue

	When `hdfsExists()` returns a non-zero code, we should check whether it hit a real error or the file simply does not exist.

4. Some code refactor

	Avoid import `org.apache.parquet.Strings`
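A sketch of the scenario from item 2, assuming the usual `hms`-type catalog syntax (the catalog name and metastore URI are hypothetical); after this PR the fallback property is added automatically, but it can also be set explicitly:

```
-- simple-auth catalog; the fallback property avoids the
-- "Server asks us to fall back to SIMPLE auth" error when a
-- kerberos-enabled hive catalog coexists in the same FE
CREATE CATALOG hive_simple PROPERTIES (
    "type" = "hms",
    "hive.metastore.uris" = "thrift://127.0.0.1:9083",
    "ipc.client.fallback-to-simple-auth-allowed" = "true"
);
```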
2023-05-30 16:57:39 +08:00
4475a69c57 [Fix](multi-catalog) Fix q03 in text_external_brown regression test by correctly handling text converter parsing errors. (#20190)
Issue Number: close #20189

Fix `q03` in `text_external_brown` regression test by correctly handling text converter parsing errors.
2023-05-30 15:08:28 +08:00
de08c4a57b [enhance](match) Support match query without inverted index (#19936) 2023-05-30 15:02:57 +08:00
bb12a1cb49 [Enhance](array function) add support for DecimalV3 for array_enumerate_uniq() (#17724) 2023-05-30 13:09:19 +08:00
c7b8c83a7f [Improvement](runtimefilter) Build bloom filter according to the exact build size for IN_OR_BLOOM_FILTER (#20166)
2023-05-30 12:55:30 +08:00
945cb56fb6 [Bug](segment iterator) remove DCHECK for block row count (#20199)
The DCHECK on the block row count in the segment iterator does not hold when `enable_common_expr_pushdown` is enabled, so it is removed.
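For reference, the variable mentioned above is a session variable, toggled like this:

```
-- enables the common-expression pushdown path under which the
-- removed DCHECK could fire spuriously
SET enable_common_expr_pushdown = true;
```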
2023-05-30 11:34:25 +08:00
9b32d42ee4 [Fix](multi-catalog) fix the nested type tests broken by #19518 (support insert-only transactional table). (#20194)
Fix `qt_nested_types_orc` in `test_tvf_p2`, which was broken by #19518 (support insert-only transactional table).

### Test case error
`qt_nested_types_orc` in `test_tvf_p2`
```
select count(array0), count(array1), count(array2), count(array3), count(struct0), count(struct1), count(map0)
from hdfs(
    "uri" = "hdfs://172.21.16.47:4007/catalog/tvf/orc/all_nested_types.orc",
    "format" = "orc",
    "fs.defaultFS" = "hdfs://172.21.16.47:4007")
```

**Error Message:**
errCode = 2, detailMessage = (172.21.0.101)[INTERNAL_ERROR]Wrong data type for colum 'struct1'
2023-05-30 09:55:40 +08:00
2abbc9f921 [Fix](multi-catalog) Fix parquet bugs introduced by #19758, 'replace the single pointer with an array of conjuncts in ExecNode'. (#20191)
Fix some parquet reader bugs introduced by #19758, 'replace the single pointer with an array of conjuncts in ExecNode'.
2023-05-30 09:55:12 +08:00
90b4e127e3 [Feature](inverted index) add parser_mode properties for inverted index parser (#20116)
We add a parser_mode property for the inverted index parser, used like this:
```
CREATE TABLE `inverted` (
  `FIELD0` text NULL,
  `FIELD1` text NULL,
  `FIELD2` text NULL,
  `FIELD3` text NULL,
  INDEX idx_name1 (`FIELD0`) USING INVERTED PROPERTIES("parser" = "chinese", "parser_mode" = "fine_grained") COMMENT '',
  INDEX idx_name2 (`FIELD1`) USING INVERTED PROPERTIES("parser" = "chinese", "parser_mode" = "coarse_grained") COMMENT ''
) ENGINE=OLAP;
```
2023-05-29 23:21:52 +08:00
Pxl
d1d0d9e5e8 [Chore](build) adjust some compile diagnostic (#20162) 2023-05-29 19:19:01 +08:00
f9478dbd9a [fix](function) Fix VcompoundPred execute const column #20158
To reproduce:

./run-regression-test.sh  --run -suiteParallel 1 -actionParallel 1 -parallel 1 -d query_p0/sql_functions/window_functions

select /*+ SET_VAR(query_timeout = 600) */ subq_0.`c1` as c0
from (
    select ref_1.`s_name` as c0,
           ref_1.`s_suppkey` as c1,
           ref_1.`s_address` as c2,
           ref_1.`s_address` as c3
    from regression_test_query_p0_sql_functions_window_functions.tpch_tiny_supplier as ref_1
    where (ref_1.`s_name` is NULL)
       or (ref_1.`s_acctbal` is not NULL)
) as subq_0
where (subq_0.`c3` is NULL)
   or (subq_0.`c2` is not NULL)
Reason:
FunctionIsNull and FunctionIsNotNull return a const column from execute, but their VectorizedFnCall::is_constant returns false, which breaks const-column handling in VCompoundPred::execute.

This PR converts the const column to a full column in VCompoundPred::execute. In the future, there will be a more thorough solution to such problems.
2023-05-29 18:16:58 +08:00
ab8125d56f [Improve](performance) introduce SchemaCache to cache TabletSchema & Schema (#20037)

1. When the system is under high-concurrency load with wide-table point queries, the frequent memory allocation and deallocation of Schema becomes an evident system bottleneck. Additionally, the initialization of TabletSchema and Schema also becomes a CPU hotspot. Therefore, a SchemaCache is introduced to cache these resources for reuse.

2. Wrap some variables in std::unique_ptr.

Performance:
| Configuration        | QPS | Avg latency | P99 latency |
|----------------------|-----|-------------|-------------|
| SchemaCache enabled  | 501 | 20ms        | 34ms        |
| SchemaCache disabled | 321 | 31ms        | 61ms        |

* handle schema change with schema version

* remove useless header

* rebase
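For context, a hedged sketch of the workload this cache targets (the table and query are hypothetical): high-concurrency wide-table point queries, where every execution previously rebuilt TabletSchema/Schema objects:

```
-- with SchemaCache, concurrent executions of point lookups like this
-- reuse the cached TabletSchema/Schema instead of re-initializing them
SELECT * FROM wide_table WHERE id = 12345;
```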
2023-05-29 17:34:53 +08:00
cc20c430f6 [fix](partial update) use correct tablet schema for rowset writer in publish task (#20117) 2023-05-29 16:57:18 +08:00
91dae8a5b6 [FIX](mysql_writer) fix mysql writer output of binary objects (#20154)
* fix struct export output data

* fix mysql writer output when binary is true
2023-05-29 16:53:33 +08:00
55ccddb62c [Conf](decimalv3) enable decimalv3 by default 2023-05-29 15:38:31 +08:00
Pxl
8376e5eefb [Chore](build) add non-virtual-dtor, remove no-embedded-directive/no-zero-length-array (#20118)
2023-05-29 14:42:47 +08:00
Pxl
bbb3af6ce6 [Feature](agg_state) support agg_state combinators (#19969)
support agg_state combinators state/merge/union
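A minimal sketch of the three combinators, assuming a source table `tbl(k, v)` and the usual `_state`/`_merge`/`_union` suffix naming:

```
-- state: compute an intermediate aggregation state per group
-- merge: finalize states into the ordinary sum() result
SELECT sum_merge(t.s) FROM (SELECT sum_state(v) AS s FROM tbl GROUP BY k) t;

-- union: combine several states into one state (still an agg_state)
SELECT sum_union(t.s) FROM (SELECT sum_state(v) AS s FROM tbl GROUP BY k) t;
```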
2023-05-29 13:07:29 +08:00
8378ab5e41 [Fix](inverted index) fix memory leak when inverted index writer does not finish correctly (#20028)
* [Fix](inverted index) fix memory leak when inverted index writer does not finish correctly

* [Update](inverted index) use smart pointer to avoid memory leak

* [Chore](format) code format

---------

Co-authored-by: airborne12 <airborne12@gmail.com>
2023-05-29 12:18:14 +08:00
a86134cb39 [fix](executor) Fixed an error with cast as time. #20144
before

mysql [(none)]>select cast("10:10:10" as time);
+-------------------------------+
| CAST('10:10:10' AS TIMEV2(0)) |
+-------------------------------+
| 00:00:00                      |
+-------------------------------+
after

mysql [(none)]>select cast("10:10:10" as time);
+-------------------------------+
| CAST('10:10:10' AS TIMEV2(0)) |
+-------------------------------+
| 10:10:10                      |
+-------------------------------+
In the past, we supported this syntax.

mysql [(none)]>select cast("2023:05:01 13:14:15" as time);
+------------------------------------------+
| CAST('2023:05:01 13:14:15' AS TIMEV2(0)) |
+------------------------------------------+
| 13:14:15                                 |
+------------------------------------------+
However, "10:10:10" is also a valid datetime.

mysql [(none)]>select cast("10:10:10" as datetime);
+-----------------------------------+
| CAST('10:10:10' AS DATETIMEV2(0)) |
+-----------------------------------+
| 2010-10-10 00:00:00               |
+-----------------------------------+
So here, the order of parsing has been adjusted.
2023-05-29 12:17:21 +08:00
9f8de89659 [refactor](exec) replace the single pointer with an array of 'conjuncts' in ExecNode (#19758)
Refactoring the filtering conditions in the current ExecNode from an expression tree to an array can simplify the process of adding runtime filters. It eliminates the need for complex merge operations and removes the requirement for the frontend to combine expressions into a single entity.

By representing the filtering conditions as an array, each condition can be treated individually, making it easier to add runtime filters without the need for complex merging logic. The array can store the individual conditions, and the runtime filter logic can iterate through the array to apply the filters as needed.

This refactoring simplifies the codebase, improves readability, and reduces the complexity associated with handling filtering conditions and adding runtime filters. It separates the conditions into discrete entities, enabling more straightforward manipulation and management within the execution node.
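For illustration (a hypothetical query, not from the PR): each predicate below now arrives at the ExecNode as a separate element of the conjuncts array, instead of being merged by the frontend into a single AND expression tree:

```
-- conjuncts array: [a > 1, b < 2, c = 3]
SELECT * FROM t WHERE a > 1 AND b < 2 AND c = 3;
```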
2023-05-29 11:47:31 +08:00
859b03dfdf [Improvement](topn) prevent memory usage of key topn increasing unlimited (#19978) 2023-05-29 10:16:15 +08:00
e0d9f7f955 [enhancement](load) add some profile items for load (#20141) 2023-05-29 09:54:03 +08:00
42239d635a [fix](tablet_manager_lock) fix create tablet timeout #20067 (#20069) 2023-05-28 23:05:13 +08:00
4573ee9a49 [enhance](PrefetchReader) abort load task when data size returned by S3 is smaller than requested (#19947)
We encountered a confusing situation where the buffered reader was trapped in an endless loop when calling readat. It turned out that the returned data size was less than requested: the actual data size was about 2MB, but each readat call only retrieved about 1MB.
2023-05-28 21:48:17 +08:00
9d44918036 [Improve](data-type) Clean up useless data type code (#20145)
* fix struct export output data

* delete useless data type code
2023-05-28 20:48:29 +08:00
c45da40ed7 [refactor-WIP](TaskWorkerPool) add specific classes for ALTER_TABLE, CLONE, STORAGE_MEDIUM_MIGRATE task (#20140) 2023-05-28 19:27:08 +08:00
ae352997b4 [Enhancement](alter inverted index) Improve alter inverted index performance with light weight add or drop inverted index (#19063) 2023-05-28 11:23:07 +08:00
da17c45c0b [enhance](FileWriter) enhance s3 file writer bvar to avoid counting aborted bytes (#20138)
* don't add bytes on each upload attempt, or aborted bytes would be counted

* allocate memory
2023-05-28 10:52:37 +08:00