Currently we have implemented the plugin framework in FE.
This CL makes the original audit log logic pluggable.
It mainly implements the following classes:
1. AuditPlugin
The interface of audit plugin
2. AuditEvent
An AuditEvent contains all information about an audit event, such as a query, or a connection.
3. AuditEventProcessor
The audit event processor receives all audit events and delivers them to all installed audit plugins.
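A minimal sketch of how these pieces might fit together (names and fields are illustrative, not necessarily the exact Doris API):
```
// Illustrative sketch only; the real Doris interfaces may differ in names and details.
public interface AuditPlugin {
    // The processor asks whether this plugin is interested in a given event type.
    boolean eventFilter(AuditEvent.EventType type);

    // Called by the AuditEventProcessor for every event that passes the filter.
    void exec(AuditEvent event);
}

// An AuditEvent carries everything known about a query or connection event.
class AuditEvent {
    enum EventType { CONNECTION, QUERY }

    EventType type;
    long timestamp;
    String clientIp;
    String user;
    String db;
    String stmt;   // the statement text, for QUERY events
}
```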
This CL implements two audit module plugins:
1. The builtin plugin `AuditLogBuilder`, which acts the same as the previous logic, saving the
audit log to `fe.audit.log`.
2. An optional plugin `AuditLoader`, which periodically inserts the audit log into a Doris table
specified by the user. In this way, users can conveniently use SQL to query and analyze this
audit log table.
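A rough sketch of the `AuditLoader` idea (the buffer, the flush interval, and the load call are simplified assumptions, not the plugin's actual implementation):
```
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Buffer incoming audit rows and periodically flush them into the user-specified
// Doris audit table; sendToAuditTable() stands in for the real load request.
class AuditLoaderSketch {
    private final List<String> buffer = new ArrayList<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void start() {
        scheduler.scheduleAtFixedRate(this::flush, 60, 60, TimeUnit.SECONDS);
    }

    synchronized void onAuditEvent(String auditRow) {
        buffer.add(auditRow);
    }

    private synchronized void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        sendToAuditTable(new ArrayList<>(buffer));   // e.g. an INSERT or stream load into the audit table
        buffer.clear();
    }

    private void sendToAuditTable(List<String> rows) {
        // hypothetical: build and send the load request here
    }
}
```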
Some documents are added:
1. HELP docs of install/uninstall/show plugin.
2. Rename the `README.md` in `fe_plugins/` dir to `plugin-development-manual.md` and move
it to the `docs/` dir
3. `audit-plugin.md` to introduce the usage of `AuditLoader` plugin.
ISSUE: #3226
SchemaChangeJobV2 will use too much memory in FE, which may cause FullGC. But this data is useless after the job is done, so we need to clean it up.
NOTICE: update FE meta version to 80
select date_format(k10, '%Y%m%d') as myk10 from baseall group by myk10;
The output of the date_format function in the query above is stored in the MemPool during
query execution. If the query handles millions of rows, it consumes a lot of memory,
so the MemPool should be cleared at intervals.
This CL fixes a bug that could cause wrong answers for beta rowsets with a nullable column. The root cause is that NullBitmapBuilder is not reset when the current page doesn't contain NULL, which leads to a wrong null map being written for the next page.
Added a test case to reproduce the problem.
When creating a schema change job, we will create a corresponding shadow replica for each replica.
Here we should check the state of each replica and only create shadow replicas for replicas in the normal state.
The process here may need to be modified later. We should completely allow users to submit alter jobs
under any circumstances, and then in the job scheduling process, dynamically detect changes in the replicas
and do replica repairs, instead of forcing a check on submission.
This CL fixes the bug described in issue #3224 by:
1. Forbidding UDFs in the broker load process.
2. Improving the function checking logic to avoid an NPE when trying to
get the default database from ConnectionContext.
1. Change the word 'palo' to 'doris' in the conf files.
2. Set the default meta_dir to ${DORIS_HOME}/doris-meta.
3. Comment out FE meta_dir, leaving it as ${DORIS_HOME}/doris-meta, matching the existing default in FE Config.java.
4. Comment out BE storage_root_path, leaving it as ${DORIS_HOME}/storage, matching the existing default in BE config.h.
NOTICE: default config is changed.
After Doris supports aggregation materialized views on duplicate tables, the
metadata shown by the desc stmt is sometimes confusing. The reason is that
there is no grouping information in the desc stmt output.
For example:
There are two materialized views as follows:
1. create materialized view k1_k2 as select k1, k2 from table;
2. create materialized view deduplicated_k1_k2 as select k1, k2 from table group by k1, k2;
Before this commit, the metadata in the desc stmt is the same for both.
```
+-----------------------+-------+----------+------+-------+---------+-------+
| IndexName | Field | Type | Null | Key | Default | Extra |
+-----------------------+-------+----------+------+-------+---------+-------+
| k1_k2 | k1 | TINYINT | Yes | true | N/A | |
| | k2 | SMALLINT | Yes | true | N/A | |
| deduplicated_k1_k2 | k1 | TINYINT | Yes | true | N/A | |
| | k2 | SMALLINT | Yes | true | N/A | |
+-----------------------+-------+----------+------+-------+---------+-------+
```
So, we need to show the KeysType of the materialized view in the desc stmt.
Now, the desc stmt of all MVs is changed as follows:
```
+-----------------------+---------------+-------+----------+------+-------+---------+-------+
| IndexName | IndexKeysType | Field | Type | Null | Key | Default | Extra |
+-----------------------+---------------+-------+----------+------+-------+---------+-------+
| k1_k2 | DUP_KEYS | k1 | TINYINT | Yes | true | N/A | |
| | | k2 | SMALLINT | Yes | true | N/A | |
| deduplicated_k1_k2 | AGG_KEYS | k1 | TINYINT | Yes | true | N/A | |
| | | k2 | SMALLINT | Yes | true | N/A | |
+-----------------------+---------------+-------+----------+------+-------+---------+-------+
```
NOTICE: this modifies the columns of the `desc` stmt.
The bug is described in issue #3200.
This CL solves the problem by:
1. Refactoring the alter operation conflict checking logic by introducing the new classes `AlterOperations` and `AlterOpType` (see the sketch after this list).
2. Allowing add/drop of temporary partitions when the dynamic partition feature is enabled.
3. Allowing modification of a table's properties when the table has temporary partitions.
4. Making the property `dynamic_partition.enable` optional, defaulting to true.
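A hedged sketch of the conflict-checking idea behind `AlterOperations` / `AlterOpType` (the op types and the compatibility rule shown are assumptions, not the exact Doris logic):
```
import java.util.EnumSet;
import java.util.List;

// Classify each alter clause by an AlterOpType and reject statements that mix
// operation types which cannot run in one ALTER statement.
class AlterOperationsSketch {
    enum AlterOpType {
        SCHEMA_CHANGE,
        ADD_ROLLUP,
        PARTITION,              // add/drop formal partition
        TEMP_PARTITION,         // add/drop temporary partition
        MODIFY_TABLE_PROPERTY
    }

    private final EnumSet<AlterOpType> currentOps = EnumSet.noneOf(AlterOpType.class);

    void checkConflict(List<AlterOpType> clauseTypes) {
        for (AlterOpType op : clauseTypes) {
            // Assumed rule for this sketch: all clauses in one statement must share a type.
            if (!currentOps.isEmpty() && !currentOps.contains(op)) {
                throw new IllegalArgumentException(
                        "Conflicting alter operations: " + currentOps + " vs " + op);
            }
            currentOps.add(op);
        }
    }
}
```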
Doris supports choosing a storage medium when creating a table, and the cluster balance strategy
depends on the storage medium. Most users will not specify the storage medium when creating a table;
even if they know they should choose one, they have no idea what storage media the cluster has.
So I think we should make storage_medium and storage_cooldown_time configurable, and this should
be the admin's responsibility.
For example, if the cluster's storage medium is HDD but we need to change part of the machines to SSD,
the tablets created before the change are stored on HDD and cannot find a destination path to migrate to,
while users keep creating tables as usual. As a result, all tablets are stored on the old machines and
the new machines store only a few tablets. Without this config, the only way out is for the admin
to traverse all partitions in the cluster and change their storage_medium property, which increases
operational and maintenance costs.
So I add a FE config default_storage_medium, so that the default storage medium can be configured.
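A minimal sketch of the new config entry (the default value and the bare class shown are assumptions; in the real code this is an entry in FE's Config class):
```
// Illustrative only: default storage medium used when CREATE TABLE does not
// specify one; admins can override it in fe.conf, e.g. default_storage_medium = SSD.
public class Config {
    public static String default_storage_medium = "HDD";
}
```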
This PR reduces the time spent waiting for transactions in the same db to complete, by filtering the running transactions at the table level.
NOTICE: Update FE meta version to 79
Currently, in the Spark-Doris-Connector, when Spark iteratively obtains each row of data,
it needs to synchronously convert the Arrow format data into the row format required by Spark.
In order to speed up the conversion process, we can add an asynchronous thread in the Connector,
which is responsible for obtaining the Arrow format data from BE and converting it into the row
format required by Spark computation.
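A simplified sketch of that producer/consumer pipeline (class and method names are illustrative, not the actual connector code):
```
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// One background thread pulls Arrow batches from BE and converts them into rows,
// while the Spark-side iterator only takes ready rows from a bounded queue.
public class AsyncRowBatchReader {
    // Hypothetical abstractions standing in for the connector's Arrow reader.
    public interface ArrowBatch { List<List<Object>> toRows(); }
    public interface ArrowBatchSource { ArrowBatch nextBatch(); }   // null when BE has no more data

    private final BlockingQueue<List<Object>> rowQueue = new ArrayBlockingQueue<>(1024);
    private volatile boolean eof = false;

    public void start(ArrowBatchSource source) {
        Thread converter = new Thread(() -> {
            try {
                ArrowBatch batch;
                while ((batch = source.nextBatch()) != null) {      // fetch Arrow data from BE
                    for (List<Object> row : batch.toRows()) {       // convert to the row format Spark needs
                        rowQueue.put(row);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                eof = true;
            }
        }, "doris-arrow-to-row-converter");
        converter.setDaemon(true);
        converter.start();
    }

    // Called from Spark's row iterator; returns null when all rows have been consumed.
    public List<Object> next() throws InterruptedException {
        List<Object> row;
        while ((row = rowQueue.poll(100, TimeUnit.MILLISECONDS)) == null) {
            if (eof && rowQueue.isEmpty()) {
                return null;
            }
        }
        return row;
    }
}
```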
In our test environment, the Doris cluster used 1 FE and 7 BEs (32C+128G). When using the Spark-Doris-Connector
to query a table containing 67 columns, the original query, which returned 69 million rows of data,
took about 2.5 min; after the improvement it took about 1.6 min, a reduction of about 30%.
The subquery in the having clause should be rewritten too.
If not, ExprRewriteRule will not be applied to the subquery.
For example:
select k1, sum (k2) from table group by k1 having sum(k2) > (select t1 from table2 where t2 between 1 and 2);
```t2 between 1 and 2``` should be rewritten to ```t2 >= 1 and t2 <= 2```.
Fixes #3205. TPC-DS 14 will pass after this commit.
The main optimization points:
1. Use std::unordered_set instead of std::set, and use RowsetId.hi as RowsetId's hash value.
2. Minimize the scope of SpinLock in UniqueRowsetIdGenerator.
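The change itself is in BE C++ code; the Java-flavoured sketch below only illustrates the two points above (a hash container keyed by the already-unique `hi` part, and a critical section that covers nothing but the set update):
```
import java.util.HashSet;
import java.util.Set;

// Illustration only, not the actual BE implementation.
class UniqueRowsetIdIllustration {
    static final class RowsetId {
        final long hi, mi, lo;
        RowsetId(long hi, long mi, long lo) { this.hi = hi; this.mi = mi; this.lo = lo; }
        @Override public int hashCode() { return Long.hashCode(hi); }   // hi alone is unique per generator
        @Override public boolean equals(Object o) {
            if (!(o instanceof RowsetId)) return false;
            RowsetId other = (RowsetId) o;
            return hi == other.hi && mi == other.mi && lo == other.lo;
        }
    }

    private final Set<RowsetId> validIds = new HashSet<>();   // hash set instead of an ordered set
    private long nextHi = 0;

    RowsetId nextId(long mi, long lo) {
        synchronized (this) {   // keep the locked region as small as possible
            RowsetId id = new RowsetId(++nextHi, mi, lo);
            validIds.add(id);
            return id;
        }
    }
}
```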
Profile comparison:
* Run UniqueRowsetIdGeneratorTest.GenerateIdBenchmark 10 times
old version | new version
6s962ms | 3s647ms
6s139ms | 3s393ms
6s234ms | 3s686ms
6s060ms | 3s447ms
5s966ms | 4s127ms
5s786ms | 3s994ms
5s778ms | 4s072ms
6s193ms | 4s082ms
6s159ms | 3s560ms
5s591ms | 3s654ms
issue #2344
* Add install/uninstall Plugin statement
* Add show plugin statement
* Support installing plugins in two ways:
* Built-in Plugin: use PluginMgr's register method.
* Dynamic Plugin: installed by SQL statement; the process is:
1. check whether the Plugin is already installed
2. download the Plugin file from a remote source or copy it from a local source
3. extract the Plugin's .zip
4. read the Plugin's plugin.properties and validate its values
5. dynamically load the .jar and init the Plugin's main class (see the sketch after this list)
6. invoke Plugin's init method
7. register Plugin into PluginMgr.
8. update meta
* Support FE Plugin dynamic uninstall process
1. check whether the Plugin is installed
2. invoke Plugin's close method
3. delete Plugin from PluginMgr
4. update meta
* Add audit plugin interface
* Add plugin enable flags in Config
* Add the plugin install path in Config; by default, plugins will be installed in ${DORIS_FE_PATH}/plugins
* Add FE plugins project
* Add audit plugin demo
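A minimal sketch of steps 5-7 of the dynamic install process above (the no-arg `init()` call and the way the main class name is obtained are simplifications, not the exact Doris code):
```
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class DynamicPluginLoadSketch {
    // pluginJar is the extracted .jar; mainClassName comes from plugin.properties (step 4).
    public static Object loadAndInit(File pluginJar, String mainClassName) throws Exception {
        URL[] urls = { pluginJar.toURI().toURL() };
        // The class loader must stay alive for as long as the plugin is installed.
        URLClassLoader loader = new URLClassLoader(urls, DynamicPluginLoadSketch.class.getClassLoader());
        Class<?> mainClass = Class.forName(mainClassName, true, loader);   // step 5: load the main class
        Object plugin = mainClass.getConstructor().newInstance();
        mainClass.getMethod("init").invoke(plugin);                        // step 6: invoke the plugin's init method
        return plugin;                                                     // step 7: caller registers it in PluginMgr
    }
}
```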
The usage:
```
// install plugin and show plugins;
mysql>
mysql> install plugin from "/home/users/seaven/auditplugin.zip";
Query OK, 0 rows affected (0.05 sec)
mysql>
mysql> show plugins;
+-------------------+-------+---------------+---------+-------------+------------------------+--------+---------------------------------------+
| Name | Type | Description | Version | JavaVersion | ClassName | SoName | Sources |
+-------------------+-------+---------------+---------+-------------+------------------------+--------+---------------------------------------+
| audit_plugin_demo | AUDIT | just for test | 0.11.0 | 1.8.31 | plugin.AuditPluginDemo | NULL | /home/users/hekai/auditplugindemo.zip |
+-------------------+-------+---------------+---------+-------------+------------------------+--------+---------------------------------------+
1 row in set (0.00 sec)
mysql> uninstall plugin audit_plugin_demo;
Query OK, 0 rows affected (0.04 sec)
mysql> show plugins;
Empty set (0.00 sec)
```
TODO:
* Config.plugin_dir should be created if missing
Support the BE plugin framework, including:
* update the Plugin Manager to support a Plugin find method
* support the Builtin-Plugin register method
* the plugin install/uninstall process
* PluginLoader:
* dynamically install and check the Plugin's .so file
* dynamically uninstall and check the Plugin's status
* PluginZip:
* support downloading and extracting a plugin's remote/local .zip file
TODO:
* We should support a PluginContext to transmit the necessary system variables when the plugin's init/close methods are invoked
* Add the entry for the BE dynamic Plugin install/uninstall process, including:
* The FE sends the install/uninstall Plugin statement (via RPC)
* The FE meta update request with Plugin list information
* The FE operation request (update/query) with Plugin (maybe not needed)
* Add a way to upload the plugin status
* Load already installed Plugins when BE starts
All columns that belong to the top tupleIds of a query should be considered by the MV selector.
For example:
`select k1 from table group by k1 having sum(v1) >1;`
The candidate index should contain k1 and v1 columns instead of only k1.
A rollup that only has the k1 column should not be selected.
Issue #3174 describes this in detail.
Earlier we introduced `BlockManager` to separate data access logic from
underlying file read and write logic.
This CL further unifies all `SegmentV2` data access to the `BlockManager`,
removes the previous `FileManager` class, and moves the file cache to the `FileBlockManager`.
There are no logical changes in this CL.
After this CL, all user table data is read through the `WritableBlock` and `ReadableBlock`
returned by the `BlockManager`, and no file operations are performed directly.
Generate partition names based on the granularity, e.g.:
Year: prefix2020
Day: prefix20200325
Week: prefix2020_#, where # is the week of the year.
At the same time, for all granularities, align the partition range to 00:00:00.
#3153
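A sketch of the naming rule above (helper name and week-numbering details are assumptions, not the exact Doris implementation):
```
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.temporal.WeekFields;

public class DynamicPartitionNameSketch {
    // Build a partition name from the configured prefix, the time unit and the partition's start date.
    static String partitionName(String prefix, String timeUnit, LocalDate date) {
        switch (timeUnit.toUpperCase()) {
            case "YEAR":
                return prefix + date.format(DateTimeFormatter.ofPattern("yyyy"));       // e.g. prefix2020
            case "WEEK":
                int week = date.get(WeekFields.ISO.weekOfWeekBasedYear());
                return prefix + date.getYear() + "_" + String.format("%02d", week);     // e.g. prefix2020_13
            default:   // DAY
                return prefix + date.format(DateTimeFormatter.ofPattern("yyyyMMdd"));   // e.g. prefix20200325
        }
    }
}
```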
Implement support for subqueries in CASE WHEN statements, like:
```
SELECT CASE
WHEN (
SELECT COUNT(*) / 2
FROM t
) > k4 THEN (
SELECT AVG(k4)
FROM t
)
ELSE (
SELECT SUM(k4)
FROM t
)
END AS kk4
FROM t;
```
This statement will be rewritten to:
```
SELECT CASE
WHEN t1.a > k4 THEN t2.a
ELSE t3.a
END AS kk4
FROM t, (
SELECT COUNT(*) / 2 AS a
FROM t
) t1, (
SELECT AVG(k4) AS a
FROM t
) t2, (
SELECT SUM(k4) AS a
FROM t
) t3;
```
This commit supports non-correlated subqueries in the having clause.
For example:
select k1, sum(k2) from table group by k1 having sum(k2) > (select avg(k1) from table);
Non-scalar subqueries are also supported in Doris.
For example:
select k1, sum(k2) from table group by k1 having sum(k2) > (select avg(k1) from table group by k2);
Doris checks the number of rows returned by the subquery during execution.
If the subquery returns more than one row, the query throws an exception.
The implementation:
The entire outer query is treated as an inline view in a new query.
The subquery in the having clause becomes a where predicate of this new query.
After this commit, TPC-DS 23, 24 and 44 are supported.
This commit also supports subqueries in ArithmeticExpr.
For example:
select k1 from table where k1=0.9*(select k1 from t);
This resolves issue #3139.
When a user executes a query through some client library such as Python MySQLdb, e.g.:
"select * from tbl1;" (with a semicolon at the end of the statement)
the SQL parser will produce 2 statements: `SelectStmt` and `EmptyStmt`.
Here we discard the `EmptyStmt` to make it act like one single statement.
This is for compatibility. In Python MySQLdb, if the first `SelectStmt` produces
some warnings, the client will try to execute a `SHOW WARNINGS` statement right after the
SelectStmt, but before the execution of the `EmptyStmt`. So there will be an exception:
`(2014, "Commands out of sync; you can't run this command now")`
I think this is a flaw of Python MySQLdb.
However, in order to keep the user experience consistent, we remove all trailing EmptyStmt
to prevent errors (leaving at least one statement), as sketched below.
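A rough sketch of that trimming step (the types are stand-ins for the real parser classes):
```
import java.util.List;

class EmptyStmtTrimSketch {
    static class Stmt {}                      // stands in for the parser's statement base class
    static class EmptyStmt extends Stmt {}    // produced by a trailing semicolon

    // Drop trailing EmptyStmt entries, but always keep at least one parsed statement.
    static void trimTrailing(List<Stmt> stmts) {
        while (stmts.size() > 1 && stmts.get(stmts.size() - 1) instanceof EmptyStmt) {
            stmts.remove(stmts.size() - 1);
        }
    }
}
```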
But if the user executes statements like
`"select * from tbl1;;select 2"`
and the first `select * from tbl1` has warnings, Python MySQLdb will still throw the exception.