Commit Graph

271 Commits

Author SHA1 Message Date
8679095e5c [feature](debug) support debug point used in debug code (#24502) 2023-09-25 17:56:12 +08:00
c943a05065 [fix](stats) Fix data size calculation of auto sample (#24672)
1. Fix data size calculation of auto sample; before this PR, the data size included all the replicas
2. Move some auto analyze related options to global session variables
3. Add some logs
2023-09-22 18:12:39 +08:00
c9b2f4cb92 [workload](pipeline) Add cgroup cpu controller (#24052) 2023-09-21 21:49:33 +08:00
e4b551e2ce [fix](Config): Remove unused config max_connection_scheduler_threads_num (#24597) 2023-09-20 18:11:56 +08:00
5eb8fe3d6e [improvement](type) modify the inner type display of the Array/Map/Struct type (#24459)
In the old code, when using the desc command to view a table schema,
it displayed as follows:
```
ARRAY<TINYINT(4)>
ARRAY<SMALLINT(6)>
ARRAY<INT(11)>
ARRAY<BIGINT(20)>
ARRAY<LARGEINT(40)>
```
However, for normal integer types the width is not displayed,
so I changed it to the following:
```
ARRAY<TINYINT>
ARRAY<SMALLINT>
ARRAY<INT>
ARRAY<BIGINT>
ARRAY<LARGEINT>
```
2023-09-20 17:03:03 +08:00
fc12362a6d [feature-wip](arrow-flight)(step2) FE support Arrow Flight server (#24314)
This is a POC; the design documentation will be updated soon
2023-09-20 14:42:54 +08:00
0be0b8ff58 [opt](stats) Support display of auto analyze jobs (#24135)
### Support display of auto analyze jobs

After this PR, users and DBAs can use the following grammar to check the execution status of auto analyze jobs:

```sql

SHOW AUTO ANALYZE [tbl_name] [WHERE STATE='SOME STATE']
```
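For example, a minimal usage sketch of the grammar above (the table name `lineitem` and the state value `FINISHED` are illustrative, not part of the commit):

```sql
-- list auto analyze jobs for one table, filtered by state (names are illustrative)
SHOW AUTO ANALYZE lineitem WHERE STATE='FINISHED';
```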

The record count of historical auto analyze jobs can be configured via the FE option auto_analyze_job_record_count; the default value is 2000

### Enhance auto analyze

After this PR, jobs that were created automatically will no longer execute outside a specific time frame.
2023-09-14 17:10:04 +08:00
4fbb25bc55 [Enhancement](function) Support date_trunc(date) and use it in auto partition (#24341)
Support date_trunc(date) and use it in auto partition
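A minimal sketch of how the new date overload might be used, assuming the existing `date_trunc(value, unit)` argument order; the literal and the unit are illustrative:

```sql
-- truncate a DATE value to the first day of its month (argument order is an assumption)
SELECT date_trunc('2023-09-14', 'month');
```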
2023-09-14 16:53:09 +08:00
1ef22d7f7c [Feature](variant) add variant type (#24170)
Add the variant type for metadata. Add persistent information for variant, including the paths of variant sub-columns, persisting them to the segment footer and the tablet schema of the rowset.
2023-09-14 14:21:53 +08:00
f8692bef4b [fix](io): use try with resource make io stream close automatically to avoid resource leak (#24297) 2023-09-14 11:51:30 +08:00
64337a8698 [Improve](metadata)Start the script to set metadata_failure_recovery (#24308) 2023-09-14 10:02:35 +08:00
786a721e03 [feat](stats) Support analyze with sample automatically (#23978)
1. Analyze with sample automatically when the table size is greater than huge_table_lower_bound_size_in_bytes (5G by default). Users can disable this feature via the FE option enable_auto_sample
2. Support grammar like `ANALYZE TABLE test WITH FULL` to force a full analyze regardless of table size (see the example below)
3. Fix bugs where table stats don't get updated properly when stats are dropped or only a few columns are analyzed
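For item 2, a minimal sketch using the grammar quoted above (the table name `test` comes from the commit message):

```sql
-- force a full (non-sampled) analyze regardless of table size
ANALYZE TABLE test WITH FULL;
```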
2023-09-13 19:42:10 +08:00
9e0d843501 [fix](publish) publish go ahead even if quorum is not met (#23806)
Co-authored-by: Yongqiang YANG <dataroaring@gmail.com>
2023-09-12 14:29:01 +08:00
232f120edc [Improve](Job)Support other types of Job query interfaces (#24172)
- Support MTMV job
- Task info: add create time and SQL
- Optimize scheduling logic
2023-09-12 13:55:56 +08:00
b5227af6a1 [Feature](partitions) Support auto partition FE part (#24079) 2023-09-11 17:48:19 +08:00
1df2e4454f [improvement](file-cache) increase virtual node number to make file cache more even (#24143)
The original virtual node number is `Math.max(Math.min(512 / backends.size(), 32), 2)`, which is too small,
causing uneven cache distribution when file cache is enabled.
2023-09-10 19:56:53 +08:00
Pxl
ab7c2b9d22 [Bug](type) fix wildcard char's tostring get wrong result (#24041)
fix wildcard char's tostring getting a wrong result
2023-09-07 20:25:38 +08:00
fdb7a44f57 Revert "[Feature](partitions) Support auto partition" (#24024)
* Revert "[Feature](partitions) Support auto partition (#23236)"

This reverts commit 6c544dd2011d731b8c9c51384c77bcf19c017981.

* Update config.h
2023-09-07 17:08:26 +08:00
6c544dd201 [Feature](partitions) Support auto partition (#23236)
Co-authored-by: zhangstar333 <2561612514@qq.com>
2023-09-06 16:26:45 +08:00
728ee90462 [improvement](deploy) Forbid LocalDeployManager drop node (#23875)
Forbid LocalDeployManager from dropping nodes, to prevent errors in the cluster.info file from causing nodes to be dropped.
2023-09-06 08:58:25 +08:00
44bb94d5e7 [fe](default parameters) change remote_fragment_exec_timeout_ms from 5s to 30s (#23909) 2023-09-06 00:16:23 +08:00
72b709d6a9 [opt](stats) split period collector from auto collector (#23622)
1. Split period analyze from auto collector
2. Analyze table incrementally by default
3. Rename StatisticsAutoAnalyzer to StatisticsAutoCollector
2023-09-04 17:04:16 +08:00
Pxl
bb3fadc5d3 [Bug](materialized-view) fix mv not match because cast and alias name (#23580)
fix mv not matching because of cast and alias name
2023-09-04 12:46:33 +08:00
45414db1ba [enhancement](table-meta) flush column unique ids for tables before 1.2 automatically (#23616) 2023-09-02 14:56:48 +08:00
e680d42fe7 [feature](information_schema) add metadata_name_ids for quickly getting catalogs, dbs, tables; add profiling table in order to be compatible with mysql (#22702)
Add `information_schema.metadata_name_ids` for quickly getting catalogs, dbs, and tables.

1. Table structure:
```mysql
mysql> desc  internal.information_schema.metadata_name_ids;
+---------------+--------------+------+-------+---------+-------+
| Field         | Type         | Null | Key   | Default | Extra |
+---------------+--------------+------+-------+---------+-------+
| CATALOG_ID    | BIGINT       | Yes  | false | NULL    |       |
| CATALOG_NAME  | VARCHAR(512) | Yes  | false | NULL    |       |
| DATABASE_ID   | BIGINT       | Yes  | false | NULL    |       |
| DATABASE_NAME | VARCHAR(64)  | Yes  | false | NULL    |       |
| TABLE_ID      | BIGINT       | Yes  | false | NULL    |       |
| TABLE_NAME    | VARCHAR(64)  | Yes  | false | NULL    |       |
+---------------+--------------+------+-------+---------+-------+
6 rows in set (0.00 sec) 


mysql> select * from internal.information_schema.metadata_name_ids where CATALOG_NAME="hive1" limit 1 \G;
*************************** 1. row ***************************
   CATALOG_ID: 113008
 CATALOG_NAME: hive1
  DATABASE_ID: 113042
DATABASE_NAME: ssb1_parquet
     TABLE_ID: 114009
   TABLE_NAME: dates
1 row in set (0.07 sec)
```

2. When you create / drop a catalog, there is no need to refresh the catalog.
```mysql
mysql> select count(*) from internal.information_schema.metadata_name_ids\G; 
*************************** 1. row ***************************
count(*): 21301
1 row in set (0.34 sec)


mysql> drop catalog hive2;
Query OK, 0 rows affected (0.01 sec)

mysql> select count(*) from internal.information_schema.metadata_name_ids\G; 
*************************** 1. row ***************************
count(*): 10665
1 row in set (0.04 sec) 


mysql> create catalog hive3 ... 
mysql> select count(*) from internal.information_schema.metadata_name_ids\G;                                                                        
*************************** 1. row ***************************
count(*): 21301
1 row in set (0.32 sec)
```

3. When you create / drop a table, there is no need to refresh the catalog.
```mysql
mysql> CREATE TABLE IF NOT EXISTS demo.example_tbl ... ;


mysql> select count(*) from internal.information_schema.metadata_name_ids\G; 
*************************** 1. row ***************************
count(*): 10666
1 row in set (0.04 sec)

mysql> drop table demo.example_tbl;
Query OK, 0 rows affected (0.01 sec)

mysql> select count(*) from internal.information_schema.metadata_name_ids\G; 
*************************** 1. row ***************************
count(*): 10665
1 row in set (0.04 sec) 

```

4. You can set a query timeout to prevent queries from taking too long.
```
fe.conf: query_metadata_name_ids_timeout

the time used to obtain all tables in one database
```
5. Add `information_schema.profiling` in order to be compatible with MySQL.

```mysql
mysql> select * from information_schema.profiling;
Empty set (0.07 sec)

mysql> set profiling=1;                                                                                 
Query OK, 0 rows affected (0.01 sec)
```
2023-08-31 21:22:26 +08:00
ca55bd88ad [Fix](Job)Fix the window time is not updated when no job is registered (#23628)
Fix inconsistent resume job grammar definition
Show Job task: add execution results
JOB allows defining update operations
2023-08-30 09:48:21 +08:00
e02747e976 [feature](Nereids) support struct type (#23597)
1. support struct data type
2. add array / map / struct literal syntax (see the sketch below)
3. fix array union / intersect / except type coercion
4. fix explicit cast data type check for array
5. fix bound function type coercion
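A hedged sketch of item 2; the constructor-style literals below (`array()`, `map()`, `struct()`) are assumptions about the syntax, not confirmed by the commit message:

```sql
-- illustrative literals for the three nested types (assumed constructor functions)
SELECT array(1, 2, 3), map('k1', 10, 'k2', 20), struct(1, 'a');
```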
2023-08-29 20:41:24 +08:00
6ac694aede [Configuration](multi-catalog) Modify default external cache expire time to 10 mins. (#23490)
Modify the default external cache expire time to 10 minutes.
2023-08-28 16:16:43 +08:00
ef2fc44e5c [Improve](Job)Allows modify the configuration of the Job queue and the number of consumer threads (#23547) 2023-08-28 12:01:49 +08:00
7cfb3cc0aa [fix](functions) fix function substitute for datetimeV1/V2 (#23344)
* fix

* function fe
2023-08-25 09:59:38 +08:00
6a4976921d [fix](auth)Disable column auth temporarily (#23295)
- Add config `enable_col_auth` to temporarily disable column permissions (because the old/new planner has bugs when selecting from a view); see the sketch below
- Restore the old optimizer to the previous authentication method
- Support authentication for the new optimizer (legacy issue: when querying a view, the permissions of the base table are authenticated; the view's own permissions should be authenticated and processed after the new optimizer is improved)
- fix: show grants for non-existent users
- fix: role `admin` cannot grant/revoke to/from user
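A hedged sketch of toggling the new option, assuming `enable_col_auth` is a runtime-mutable FE config (otherwise it would be set in fe.conf):

```sql
-- re-enable column-level permission checks (assumes the config is mutable at runtime)
ADMIN SET FRONTEND CONFIG ("enable_col_auth" = "true");
```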
2023-08-24 23:37:06 +08:00
35d0c9e71e [refactor](nereids) Refactor stats collection framework (#22963)
* remove auto analyze grammar
* refactor ResultRow
2023-08-23 10:05:57 +08:00
a4e041ea55 [improve](alter-job) Add a config for forbiding doing alter job (#23294) 2023-08-22 16:28:36 +08:00
b670dd0db7 [feature](Nereids) support array type (#22851)
FEATURE:
1. enable array type in Nereids
2. support generics on function signatures
3. support array and map type in type coercion and type check
4. add element_at and element_slice syntax in Nereids parser

REFACTOR:
1. remove AbstractDataType

BUG FIX:
1. remove FROM from nonReserved keyword list

TODO:
1. support lambda expression
2. use Nereids' way do function type coercion
3. use castIfnotSame when doing implicit cast on BoundFunction
4. let AnyDataType type coercion do same thing as function type coercion
5. add below array function
- array_apply
- array_concat
- array_filter
- array_sortby
- array_exists
- array_first_index
- array_last_index
- array_count
- array_shuffle shuffle
- array_pushfront
- array_pushback
- array_repeat
- array_zip
- reverse
- concat_ws
- split_by_string
- explode
- bitmap_from_array
- bitmap_to_array
- multi_search_all_positions
- multi_match_any
- tokenize
2023-08-22 09:47:55 +08:00
ae9f04f969 [fix](array) fix typeExtactMatch for array() type (#23264)
If we write SQL such as `select cast(array() as array<varchar(10)>)`,
castexpr in FE will call analyze() with `Type.matchExactType(childType, type, true);`.
Here the array type only checks contains_null, but it should also check the inner type to make array matchExactType correct.
2023-08-21 19:41:09 +08:00
0967d7ec04 [improvement](agg) Do not serialize bitmap to string (#23172) 2023-08-21 10:10:15 +08:00
10abbd2b62 [Feature](Export) support parallel export job using Job Schedule (#22854) 2023-08-18 22:24:42 +08:00
1f19d0db3e [improvement](tablet clone) improve tablet balance, scaling speed etc (#22317) 2023-08-17 22:30:49 +08:00
3efa06e63e [Fix](View)varchar type conversion error (#22987) 2023-08-16 11:49:04 +08:00
d7a5c37672 [improvement](tablet clone) update the capacity coefficient for calculating backend load score (#22857)
Update the capacity coefficient for calculating the backend load score:
1. Add FE config entry `backend_load_capacity_coeficient` to allow setting the capacity coefficient manually (see the sketch below);
2. Adjust the calculation of the capacity coefficient as below.

We emphasize disk usage when calculating the load score.
If a BE has a high used capacity percent, we should increase its load score,
so we increase the capacity coefficient with a BE's used capacity percent.

But this is not enough. For example, suppose the tablets have a big difference in data size.
Then for the two BEs below, their load scores may be the same:
BE A:  disk usage = 60%,  replica number = 2000  (it contains the big tablets)
BE B:  disk usage = 30%,  replica number = 4000  (it contains the small tablets)

But what we want is: first move some big tablets from A to B; after their disk usages are close,
move some small tablets from B to A; finally both their disk usages and replica numbers
are close.

To achieve this, when the max difference between all BEs' disk usages is >= 30%, we set the capacity coefficient to 1.0 and avoid the effect of replica num. After the disk usage difference decreases, we then decrease the capacity coefficient to make replica num effective.
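A hedged sketch of overriding the coefficient mentioned in item 1, assuming `backend_load_capacity_coeficient` is a runtime-mutable FE config (otherwise it goes into fe.conf); the value is illustrative:

```sql
-- pin the capacity coefficient manually instead of letting FE derive it from disk usage
ADMIN SET FRONTEND CONFIG ("backend_load_capacity_coeficient" = "0.5");
```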
2023-08-15 17:27:31 +08:00
94a7b44540 [Improvement](log) add config to control compression of fe log & fe audit log (#22865)
FE logs are large for a busy Doris cluster; if you want to preserve some historical logs, they cost too much disk space.
Enabling compression is a good way to save space,
and a gzip compressed text file can be viewed without decompression.
2023-08-11 14:08:08 +08:00
b9b9071c9b [improvement](create partition) create partition require quorum replicas succ (#22554) 2023-08-11 11:59:05 +08:00
8e5b4005dc [enhancement](data type) add use_mysql_bigint_for_largeint config to tell Doris to use bigint when returning the largeint type to MySQL JDBC (#22835) 2023-08-10 18:53:31 +08:00
f2658dc7bd [Feature](multi-catalog) Truncate char or varchar columns if size is smaller than file columns or not found in the file column schema. (#22318)
Truncate char or varchar columns if their size is smaller than that of the file columns, or if they are not found in the file column schema, controlled by the session var `truncate_char_or_varchar_columns`.
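A minimal usage sketch of the session variable named in the commit (the value shown is illustrative):

```sql
-- enable truncation of char/varchar columns that do not match the file schema
SET truncate_char_or_varchar_columns = true;
```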
2023-08-10 14:37:20 +08:00
77d3d4e324 [fix](cache) add sql cache conf cache_result_max_data_size (#22645)
The maximum row count limit of the SQL cache (`cache_result_max_row_count`) alone is not enough; if a row of data is too large, the FE may OOM.
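A hedged sketch, assuming `cache_result_max_data_size` is a runtime-mutable FE config like `cache_result_max_row_count` (the byte value is illustrative):

```sql
-- cap the total cached result size in bytes to protect FE memory (illustrative value)
ADMIN SET FRONTEND CONFIG ("cache_result_max_data_size" = "31457280");
```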
2023-08-09 14:46:23 +08:00
7bfcee6e71 [improvement](variable) add annotations for variables (#22292) 2023-08-08 22:16:42 +08:00
97adbaadb9 fix full auto analyze (#22650) 2023-08-07 11:41:38 +08:00
95aa4d8631 [Feature](Export) Supports concurrent export of table data (#21911) 2023-08-04 18:50:17 +08:00
672acb8784 [fix](show-table-status) fix hive view NPE and external meta cache refresh issue (#22377) 2023-08-04 16:55:10 +08:00
4f9969ce1e [feature](show-frontends-disk) Add Show frontend disks (#22040)
Co-authored-by: yuxianbing <yuxianbing@yy.com>
Co-authored-by: yuxianbing <iloveqaz123>
2023-08-03 14:04:48 +08:00