Commit Graph

5755 Commits

Author SHA1 Message Date
8845c2cf44 [fix](bdbje) remove System.exit(-1) in BDBEnvironment.close() (#19335)
* https://github.com/apache/doris/issues/18766
2023-05-11 01:01:38 +08:00
0f6c69de53 [Fix](multi-catalog) Fix sync hms event failing when FE has just started. (#19344)
* [Fix](multi-catalog) Fix sync hms event failing when FE has just started.

---------

Co-authored-by: wangxiangyu@360shuke.com <wangxiangyu@360shuke.com>
2023-05-11 01:00:55 +08:00
b129c9901b [improvement](FQDN)Change the implementation of fqdn (#19123)
Main changes:
1. If fqdn is enabled in the configuration file, localAddr resolves to the fqdn instead of the IP when FE starts, and priority_networks no longer takes effect
2. The IP and hostname of Backend and Frontend are combined into one field, host. When fqdn is enabled it holds the hostname; otherwise it holds the IP address
3. Communication between cluster nodes uses the fqdn directly, and the various connection pools add authentication so that a change of the IP behind a domain name does not break connections between nodes
4. Polling to check whether the IP has changed is no longer required; fqdnManager is deleted
5. FEs now verify the legitimacy of peer nodes by having the sending node identify itself in the HTTP request header or the thrift message body, instead of by obtaining the client IP
6. When processing a heartbeat, if BE finds that the host it stores is inconsistent with the host stored by the master, it verifies the host's legitimacy and then updates its own host instead of directly reporting an error
7. Simplify the generation logic of the FE name

Scope of influence:
1. Establishing communication connections between cluster nodes
2. Determining whether two addresses refer to the same node via attributes such as IP
3. Log printing
4. Information display
5. Address splicing
6. k8s deployment
7. Upgrade compatibility

Test plan:
1. Change the IP addresses of FE and BE while keeping the fqdn unchanged, and verify that the cluster can still read and write data normally
2. Generate metadata with the master code, then run this PR with that metadata to verify compatibility with the old version (upgrading is no longer supported if fqdn was already enabled)
3. Deploy FE and BE clusters with k8s and verify that the cluster can read and write data normally
4. Upgrade an old cluster following https://doris.apache.org/zh-CN/docs/dev/admin-manual/cluster-management/fqdn?_highlight=fqdn#%E6%97%A7%E9%9B%86%E7%BE%A4%E5%90%AF%E7%94%A8fqdn
5. Use stream load with the fqdn of FE and BE separately to import data
6. Start transactions as different users and write data with insert statements
2023-05-11 00:44:48 +08:00
3a22af836e [fix](jdbc catalog) fix ClickHouse UInt64 type conversion error (#19463)
* [fix](jdbc catalog) fix ClickHouse UInt64 type conversion error

* add test case
2023-05-10 21:53:30 +08:00
d0a8cd0fc5 [fix](nereids) dphyper join reorder may lost some join conjuncts (#19318) 2023-05-10 19:02:35 +08:00
337732ae01 [fix](nereids) lost exchange before global limit merge node sometimes (#19396)
An exchange node should be added between the global and local limit.
2023-05-10 17:57:21 +08:00
894801f5ce [feature](load-refactor) Step1: InsertStmt as facade layer and run S3/Broker Load (#19142) 2023-05-10 17:48:50 +08:00
d20b5f90d8 [feature](executor) Automatically set the instance_num using the info from be. (#19345)
1. Fixed some regression errors (wrong results with a big instance_num due to an incorrect order by).
2. If parallel_fragment_exec_instance_num is set to 0, the concurrency of the pipeline execution engine is automatically set to half the number of CPU cores (see the example after the error message below).
3. Limit parallel_fragment_exec_instance_num so that it cannot be set to more than fe.conf::max_instance_num (default: 128):
```
mysql [(none)]>set parallel_fragment_exec_instance_num = 514;
ERROR 1231 (42000): errCode = 2, detailMessage = Variable 'parallel_fragment_exec_instance_num' can't be set to the value of '514(Should not be set to more than 128)'
```
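For point 2, a minimal sketch of requesting automatic concurrency (the variable name comes from this commit; plain session-level SET semantics are assumed):
```
SET parallel_fragment_exec_instance_num = 0;
-- the pipeline engine now picks the concurrency itself: half the CPU cores
```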
2023-05-10 17:07:41 +08:00
4483e3a6e1 [Improvement](scan) add a config for scan queue memory limit (#19439) 2023-05-10 13:14:23 +08:00
ab8cfbbfb6 [bugfix](regression-test) add some window function test (#19460)
Only the 2000-union case causes BE to use a lot of memory, so this PR enables the other tests and disables only the 2000-union case.



---------

Co-authored-by: yiguolei <yiguolei@gmail.com>
2023-05-10 12:06:02 +08:00
553068f7be [feat](Nereids): trace enumeration of DPHyp (#19394) 2023-05-10 11:57:35 +08:00
fae2e5fd22 [enhancement](statistics) implement automatic statistics analysis and support table-level statistics #19420
Add table-level statistics; support the SHOW TABLE STATS statement to show table-level statistics.
Implement automatic statistics analysis; support the ANALYZE ... WITH AUTO ... statement to analyze statistics automatically (sketch below).
TODO:

collate relevant p0 tests
supplement the design description in README.md
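A sketch of the two statement shapes named above; the exact clause syntax is an assumption and `tbl` is a placeholder:
```
-- table-level statistics (placeholder table name)
SHOW TABLE STATS tbl;
-- automatic statistics collection; exact option order may differ
ANALYZE TABLE tbl WITH AUTO;
```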
Issue Number: close #xxx
2023-05-10 11:47:34 +08:00
601565341b [fix](gson) avoid gson serde with EsRepository (#19385)
To avoid errors like:

class org.apache.doris.external.elasticsearch.EsRepository declares multiple JSON fields named runnable
2023-05-10 11:37:18 +08:00
78435823b6 [Fix](multi catalog)Return all partition values while reading hive table. (#19434)
Return all partition values while reading hive table.
Add a config item for the max size of the hive table to partition-list cache.
The default value is 100.
2023-05-10 10:55:33 +08:00
68eb420cab [fix](MySQL) the way Doris handles boolean type is consistent with MySQL (#19416) 2023-05-10 00:58:09 +08:00
096aa25ca6 [improvement](orc-reader) Implements ORC lazy materialization (#18615)
- Implements ORC lazy materialization, integrating with the implementations in https://github.com/apache/doris-thirdparty/pull/56 and https://github.com/apache/doris-thirdparty/pull/62.
- Refactor: move `execute_conjuncts()` and `execute_conjuncts_and_filter_block()` from `parquet_group_reader` to `VExprContext`, so they are shared by the parquet reader and the orc reader.
- Add session variables `enable_parquet_lazy_materialization` and `enable_orc_lazy_materialization` to control whether lazy materialization is enabled (example below).
- Modify `build.sh` to update the apache-orc submodule or download the package every time.
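A minimal example of the new switches, assuming ordinary boolean session variables:
```
SET enable_parquet_lazy_materialization = true;
SET enable_orc_lazy_materialization = true;
```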
2023-05-09 23:33:33 +08:00
1bc405c06f [fix](catalog) fix doris jdbc catalog largeint select error (#19407)
When I use mysql-jdbc 5.1.47 to create a Doris JDBC catalog, largeint columns cannot be selected.
When mysql-jdbc reads a largeint, it converts the value to a string because it is too long.

mysql> select `largeint` from type3;
ERROR 1105 (HY000): errCode = 2, detailMessage = (127.0.0.1)[INTERNAL_ERROR]Fail to convert jdbc type of java.lang.String to doris type LARGEINT on column: largeint. You need to check this column type between external table and doris table.
2023-05-09 17:34:48 +08:00
aeb3450151 [feature](graph)Support querying data from the Nebula graph database (#19209)
Support querying data from the Nebula graph database
This feature comes from the needs of commercial customers who use both Doris and Nebula and want to connect the two databases.

The changes mainly include:

* Add a new graph database JDBC type
* Adapt the types and map graph types to Doris types
2023-05-09 15:30:11 +08:00
e3d4723849 [fix](JDBC) set jdbc parameters to compatible with both MySQL and Doris when reading boolean type (#19399)
Fix errors when reading the boolean type from an external Doris cluster through the JDBC catalog:
```
ERROR 1105 (HY000): errCode = 2, detailMessage = (172.16.10.11)[INTERNAL_ERROR]Fail to convert jdbc type of java.lang.Integer to doris type BOOL on column: deleted. 
You need to check this column type between external table and doris table.
```
MySQL types and the return values of GetColumnTypeName and GetColumnClassName are documented at https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-type-conversions.html.
However, when tinyInt1isBit=false, GetColumnClassName for MySQL returns java.lang.Boolean, while for Doris it returns java.lang.Integer. To be compatible with both MySQL and Doris, the JDBC parameters should set tinyInt1isBit=true&transformedBitIsBoolean=true, as sketched below.
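A sketch of a JDBC catalog whose connection URL carries these parameters; the catalog name, host/port, database, credentials, and driver file are placeholders:
```
CREATE CATALOG doris_jdbc PROPERTIES (
    "type" = "jdbc",
    "user" = "root",
    "password" = "",
    -- connection URL with the two parameters this commit recommends
    "jdbc_url" = "jdbc:mysql://127.0.0.1:9030/db?tinyInt1isBit=true&transformedBitIsBoolean=true",
    "driver_url" = "mysql-connector-java-8.0.25.jar",
    "driver_class" = "com.mysql.cj.jdbc.Driver"
);
```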
2023-05-09 13:53:17 +08:00
4302ceaee8 [Improvement](data types) enhance show data types stmt (#18831) 2023-05-09 09:42:44 +08:00
af04c3acab [fix](sequence-column) Fix sequence_col column used default expr insert failed (#18933) 2023-05-08 17:18:25 +08:00
c7a04fa05a [improvement](JDBC Catalog)Added Presto connection to Presto/Trino (#19307) 2023-05-08 14:05:56 +08:00
7f0d6eb644 [log](fe)add log partitionInfo is null, fe not start service (#19143) 2023-05-08 14:04:16 +08:00
e78149cb65 [Enhancement](Export) add property for outfile/export and add test (#18997)
This PR does three things:
1. add the `delete_existing_files` property for outfile/export. If `delete_existing_files = true`, export/outfile will first delete all files under file_path (see the sketch after this list).
2. add p2 test for export
3. modify docs
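A sketch of the new property on an outfile statement; the target path is a placeholder and the usual storage-connection properties are omitted:
```
SELECT * FROM tbl
INTO OUTFILE "hdfs://host:port/path/to/result_"
FORMAT AS CSV
PROPERTIES (
    -- delete all files already under file_path before exporting
    "delete_existing_files" = "true"
);
```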
2023-05-08 14:02:20 +08:00
05c5c5949c [refactor](FileCache) set FE session variable enable_file_cache=false as default (#19327)
Users should set `enable_file_cache=true` in FE session variables and BE configuration to enable file cache.
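On the session side, a minimal example (the BE configuration item of the same name must also be enabled):
```
SET enable_file_cache = true;
```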
2023-05-08 13:53:51 +08:00
fb5b3029a7 [fix](meta) fix image file checksum error (#19363) 2023-05-08 10:00:09 +08:00
32273a7a9b [improvement](backend) Optimized error messages for insufficient replicas (#19211)
Optimized the error message shown when creating a table fails due to insufficient replicas.
2023-05-07 20:45:21 +08:00
abc73ac1eb [refactor](cluster)(step-1) remove cluster related stmt (#19355)
* [refactor](cluster)(step-1) remove cluster stmt
2023-05-07 18:44:42 +08:00
9edbfa37cd [Enhancement](Broker Load) New progress manager for showing loading progress status (#19170)
This work is at an early stage: the current progress is not accurate because the scan range is too coarse-grained for gathering information, and only the file scan node and import jobs support the new progress manager.

## How it works

for example, when we use the following load query:
```
LOAD LABEL test_broker_load
(
	DATA INFILE("XXX")
	INTO TABLE `XXX`
        ......
)
```

Initial progress: the query calls `BrokerLoadJob` to create the job, then the `coordinator` is called to calculate the scan ranges and their locations.
Update progress: BE reports runtime_state to FE, and FE updates the progress status according to jobID and fragmentID.

We can use `show load` to see the progress:

PENDING:
```
         State: PENDING
      Progress: 0.00%
```

LOADING:
```
         State: LOADING
      Progress: 14.29% (1/7)
```

FINISH:
```
         State: FINISHED
      Progress: 100.00% (7/7)
```

Currently, the full output of `show load\G` looks like:

```
*************************** 1. row ***************************
         JobId: 25052
         Label: test_broker
         State: LOADING
      Progress: 0.00% (0/7)
          Type: BROKER
       EtlInfo: NULL
      TaskInfo: cluster:N/A; timeout(s):250000; max_filter_ratio:0.0
      ErrorMsg: NULL
    CreateTime: 2023-05-03 20:53:13
  EtlStartTime: 2023-05-03 20:53:15
 EtlFinishTime: 2023-05-03 20:53:15
 LoadStartTime: 2023-05-03 20:53:15
LoadFinishTime: NULL
           URL: NULL
    JobDetails: {"Unfinished backends":{"5a9a3ecd203049bc-85e39a765c043228":[10080]},"ScannedRows":39611808,"TaskNumber":1,"LoadBytes":7398908902,"All backends":{"5a9a3ecd203049bc-85e39a765c043228":[10080]},"FileNumber":1,"FileSize":7895697364}
 TransactionId: 14015
  ErrorTablets: {}
          User: root
       Comment: 
```

## TODO:

1. The current partition granularity of a scan range is too coarse, resulting in uneven progress during loading.
2. Only broker load supports the new progress manager; add progress support for other query types.
2023-05-06 22:44:40 +08:00
2fe9ba7c2a [fix](jdbc catalog) fix trino jdbc catalog varchar type error (#19298) 2023-05-06 17:16:28 +08:00
4c6ca88088 Revert "[refactor](function) ignore DST for function from_unixtime (#19151)" (#19333)
This reverts commit 9dd6c8f87b73db238bfd38fb1d76f3796910f398.
2023-05-06 16:33:58 +08:00
3f6e5118e6 [enhancement](statistics) support periodic collection of statistics (#19247)
This PR enables periodic collection of statistics and is a precursor to automatic statistics collection. It mainly includes the following contents:

- Support periodic collection of statistics (sketch below).
- Change the Date type in statistics p0 to DateV2 (see [Enhancement](data-type) add FE config to prohibit create date and decimalv2 type #19077) for local testing; complement cases (remove Chinese characters, optimize code, etc.) to improve stability.
- Support configuring whether to keep records of statistics synchronization job info, which is convenient for p0 testing.
- Modify the statistics job table and add some auxiliary checks so that users do not perceive the modification; this logic will be removed once the table schema is stable.
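A sketch of what a periodic collection job could look like; the WITH PERIOD form and its interval unit are assumptions not spelled out in this commit message, and `tbl` is a placeholder:
```
-- re-collect statistics for tbl periodically (interval assumed to be in seconds)
ANALYZE TABLE tbl WITH PERIOD 86400;
```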
2023-05-06 14:53:06 +08:00
ccd22c508a [chore](fe) Fix the build on Centos 6 (#19255) 2023-05-06 14:50:56 +08:00
3287f350de [feature](table) implement the round robin selection be when create tablet (#19167) 2023-05-06 14:46:48 +08:00
ff6e0d3943 [Improvement](meta) support return no partition info for show_create_table (#19030)
Some tables have a large number of partitions; when you use the show create table stmt on them,
you get so many lines of result that a whole screen cannot show them all, even if you scroll up to the top.

show create table table2;
| table2 | CREATE TABLE `table2` (
  `k1` int(11) NULL COMMENT 'test column k1',
  `k2` int(11) NULL COMMENT 'test column k2'
) ENGINE=OLAP                                        
DUPLICATE KEY(`k1`, `k2`)
COMMENT 'test table1'          
PARTITION BY RANGE(`k1`)           
(PARTITION p01 VALUES [("-2147483648"), ("10")),
PARTITION p02 VALUES [("10"), ("100"))) 
 DISTRIBUTED BY HASH(`k1`) BUCKETS 1
PROPERTIES (                                                                                                                        
"replication_allocation" = "tag.location.default: 1",
"storage_format" = "V2",
"light_schema_change" = "true",
"disable_auto_compaction" = "false"
);


show brief create table table2;
| table2 | CREATE TABLE `table2` (  `k1` int(11) NULL COMMENT 'test column k1',
  `k2` int(11) NULL COMMENT 'test column k2'
) ENGINE=OLAP
DUPLICATE KEY(`k1`, `k2`)
COMMENT 'test table1'
DISTRIBUTED BY HASH(`k1`) BUCKETS 1
PROPERTIES (
"replication_allocation" = "tag.location.default: 1",
"storage_format" = "V2",
"light_schema_change" = "true",
"disable_auto_compaction" = "false"
); |
2023-05-06 14:45:08 +08:00
bd23db762d [minor](stats) Add doc for stats framework (#19311) 2023-05-06 13:30:55 +08:00
cdfbfd1f6b [fix](replica) Fix inconsistent replica id between FE and BE (#18688) 2023-05-06 11:06:29 +08:00
a72eee24f1 [fix](nereids) fix merge project with window function bug (#19280)
1. don't merge projects if any window function exists
2. bypass SimplifyArithmeticRule for decimalV3 type
2023-05-06 10:38:14 +08:00
42bac3343d [Refactor](StmtExecutor)(step-1) Extract profile logic from StmtExecutor and Coordinator (#19219)
Previously, we used the RuntimeProfile class directly, and because a profile has multiple levels, several RuntimeProfile instances had to be maintained.

I created several new classes for profile:

class Profile:
	The root profile of a execution task(query or load)
	
class SummaryProfile:
	The profile that contains summary info of a execution task,
	such as start time, end time, query id. etc.
	
class ExecutionProfile:
	The profile for a single Coordinator. Each Coordinator will
	have an ExecutionProfile.
The profile structure is as following:

Profile:
	SummaryProfile:
	ExecutionProfile 1:
		Fragment 0:
			Instance 0:
			Instance 1:
			...
		Fragment 1:
		...
	ExecutionProfile 2:
		...
As you can see, each Profile has a SummaryProfile and one or more ExecutionProfiles.
For most kinds of jobs, such as query/insert, there is only one ExecutionProfile. But for a broker load job there may be more than one ExecutionProfile, corresponding to each sub-task of the load job.

How to use
For query/insert, etc:

Each StmtExecutor has a Profile instance.
Each Coordinator has an ExecutionProfile instance.
StmtExecutor is responsible for the SummaryProfile and updates it during execution.
The Coordinator is responsible for the ExecutionProfile; it first adds the ExecutionProfile as a child of the Profile, then updates it periodically during execution.
For Load/Export, etc:

Each job has a Profile instance.
For each Coordinator of the job, its ExecutionProfile is added to the children of the job's Profile.
Behavior Change
The columns of show load profile/show query profile and the QueryProfile web UI have changed to:

| Profile ID | Task Type | Start Time | End Time | Total | Task State | User | Default Db| Sql Statement | Is Cached | Total Instances Num | Instances Num Per BE | Parallel Fragment Exec Instance Num | Trace ID |
The Query ID and Job ID columns are removed and replaced by Profile ID.
For a load job the profile ID is the job ID; for query/insert it is the query ID.
2023-05-06 09:01:51 +08:00
5210c04241 [Refactor](ScanNode) Split interface refactor (#19133)
Move the getSplits function to ScanNode and remove the Splitter interface.
For each kind of data source, create a specific ScanNode and implement the getSplits interface, for example HiveScanNode.
Remove FileScanProviderIf and move its code into each ScanNode.
2023-05-05 23:20:29 +08:00
c9fa10ac10 [fix](doc) avoid generate config doc automatically (#19302)
After #19246, compiling FE automatically generates the Config and Session Variables docs, overwriting the original ones.
This needs to be avoided because the feature is not ready to use yet.
2023-05-05 20:39:05 +08:00
159344792f [enhance](Nereids) make getExplorationRule static (#19278)
Make getExplorationRule static to avoid constructing a new ArrayList multiple times.
2023-05-05 19:58:24 +08:00
3e3262361c [fix](fe)havingClause should be substituted the same way as resultExprs (#19261)
Substitute havingClause the same way as resultExprs to prevent the "HAVING clause not produced by aggregation output" error.
2023-05-05 18:03:43 +08:00
96d729f719 [refactor](fs)(step3)use filesystem instead of old storage, new storage just access remote object storage (#19098)
see #18960

PR1: add the new storage file system template and move the old storage to a new package.
PR2: extract some methods from the old storage into the new file system.
PR3: use storages to access remote object storage and file systems to access files in local or remote locations. Some unit tests will be added.

This is PR3.
2023-05-05 16:20:20 +08:00
70236adc1f [Refactor](doc)(config)(variable) use script to generate doc for FE config and session variables (#19246)
The documentation for configs (FE and BE) and session variables is hard to maintain,
because developers need to modify both the code and the document,
and some of the config documentation is missing.

So I plan to write the documentation for configs and variables directly in the code, and use a
script to generate the documents automatically.

How To
This CL mainly changes:

Add fields to the Config and Session Variables annotations:

- description: the description of the config or variable item. It is a String array; the first element is in Chinese, the second in English.
- options: the valid options if the config or variable is an enum.

Add a script docs/generate-config-and-variable-doc.sh.

Simply run sh docs/generate-config-and-variable-doc.sh and it will generate the docs for FE configs and variables
and save them under docs/admin-manual/config/fe-config.md and docs/advanced/variables.md,
both in Chinese and in English.

There are template markdowns for this script to read and fill with the real doc content.

TODO
Too many descriptions still need to be filled in; I will finish them in the next PR. For now the original docs remain unchanged.
Find a way to check the description field of configs and variables, to make sure none are missed.
Generate docs for BE configs.
2023-05-05 14:42:43 +08:00
b6c7f3aeb8 [opt](FileCache) Add file cache metrics and management (#19177)
Add file cache metrics and management.
1. Get file cache metrics
> If file cache performance is poor, there are currently no metrics to investigate the cause. In practice, the hit ratio, disk usage, and removed-segment status are very important information.

API: `http://be_host:be_webserver_port/metrics`
File cache metrics for each base path start with the `doris_be_file_cache_` prefix. `hits_ratio` is the hit ratio of the cache since BE startup; `removed_elements` is the number of removed segment files since BE startup. Every cache path has three queues: index, normal, and disposable; the capacity ratio of the three queues is 1:17:2.
```
doris_be_file_cache_hits_ratio{path="/mnt/datadisk1/gaoxin/file_cache"} 0.500000
doris_be_file_cache_hits_ratio{path="/mnt/datadisk1/gaoxin/small_file_cache"} 0.500000
doris_be_file_cache_removed_elements{path="/mnt/datadisk1/gaoxin/file_cache"} 0
doris_be_file_cache_removed_elements{path="/mnt/datadisk1/gaoxin/small_file_cache"} 0

doris_be_file_cache_normal_queue_max_size{path="/mnt/datadisk1/gaoxin/file_cache"} 912680550400
doris_be_file_cache_normal_queue_max_size{path="/mnt/datadisk1/gaoxin/small_file_cache"} 8500000000
doris_be_file_cache_normal_queue_max_elements{path="/mnt/datadisk1/gaoxin/file_cache"} 217600
doris_be_file_cache_normal_queue_max_elements{path="/mnt/datadisk1/gaoxin/small_file_cache"} 102400

doris_be_file_cache_normal_queue_curr_size{path="/mnt/datadisk1/gaoxin/file_cache"} 14129846
doris_be_file_cache_normal_queue_curr_size{path="/mnt/datadisk1/gaoxin/small_file_cache"} 14874904
doris_be_file_cache_normal_queue_curr_elements{path="/mnt/datadisk1/gaoxin/file_cache"} 18
doris_be_file_cache_normal_queue_curr_elements{path="/mnt/datadisk1/gaoxin/small_file_cache"} 22

...
```
2. Release file cache
> Frequent swapping of segment files can seriously affect file cache performance. Adding a deletion interface helps users clean up the file cache.

API: `http://be_host:be_webserver_port/api/file_cache?op=release&base_path=${file_cache_base_path}`
Returns the number of released segment files. If `base_path` is not provided in the URL, all cache paths are released.
It is thread-safe to call this API: only segment files that are not currently being read are released.
```
{"released_elements":22}
```
3. Specify the base path to store cache data
> Currently, regression testing lacks file cache test cases, so the stability of file cache cannot be guaranteed. This interface is mainly used in regression-testing scenarios, where different queries use different paths to verify different usage cases and performance.

Users can set the session variable `file_cache_base_path` to specify the base path for storing cache data. The default is `file_cache_base_path="random"`, meaning a random path is chosen from the cache paths. If `file_cache_base_path` is not one of the base paths in the BE configuration, a random path is used as well.
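A minimal example pinning this session's cache data to one configured path (reusing a path from the metrics sample above):
```
SET file_cache_base_path = "/mnt/datadisk1/gaoxin/file_cache";
```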
2023-05-05 14:28:01 +08:00
Pxl
09b9aba243 [Bug](web) fix frontend web UI error (#19279)
* fix frontend web UI error

upgrade the servlet API version
2023-05-05 12:26:50 +08:00
9dd6c8f87b [refactor](function) ignore DST for function from_unixtime (#19151) 2023-05-05 11:51:49 +08:00
1a1aee3886 [fix](load) exclude canceled job when canceling load (#19268) 2023-05-05 10:31:16 +08:00
9813406757 [Enhancement](HttpServer) Add http interface authentication for BE (#17753) 2023-05-04 23:46:49 +08:00