1. Add license/total lines/release badges.
2. Add monthly active contributor and contributor growth graphs.
3. Fix a pom.xml bug.
4. Modify some routine load logging on the BE side.
This CL mainly changes:
1. `storage_page_cache_limit` is now based on the config `mem_limit`;
the default is 20% of `mem_limit`.
2. `buffer_pool_limit` is now based on the config `mem_limit`;
the default is 20% of `mem_limit`.
3. `buffer_pool_clean_pages_limit` is now based on the config `buffer_pool_limit`;
the default is 50% of `buffer_pool_limit`. (A sketch of this derivation follows below.)
4. Fix some display bugs in the LRU cache hit ratio and usage ratio.
5. Fix a create-view bug: `notEvalNondeterministicFunction` should be reset after analysis.

Fix #6269
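A minimal sketch of how these percentage-based defaults can be derived, using a hypothetical `ParsedConfig` holder (illustrative only, not the actual Doris config code):

```cpp
#include <cstdint>

// Hypothetical holder for the resolved limits; illustrative only.
struct ParsedConfig {
    int64_t mem_limit = 0;                     // absolute bytes, already resolved
    int64_t storage_page_cache_limit = 0;      // default: 20% of mem_limit
    int64_t buffer_pool_limit = 0;             // default: 20% of mem_limit
    int64_t buffer_pool_clean_pages_limit = 0; // default: 50% of buffer_pool_limit
};

// Derive the defaults in dependency order: the clean-pages limit is
// computed from the already-derived buffer pool limit.
ParsedConfig derive_defaults(int64_t mem_limit) {
    ParsedConfig c;
    c.mem_limit = mem_limit;
    c.storage_page_cache_limit = mem_limit / 5;                 // 20%
    c.buffer_pool_limit = mem_limit / 5;                        // 20%
    c.buffer_pool_clean_pages_limit = c.buffer_pool_limit / 2;  // 50%
    return c;
}
```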
The outline of our changes is to reduce memory usage on the BE to avoid OOM, and to speed up the computation.
1. We do not need to do aggregation during load, since it has already been done in the Spark ETL job.
2. Based on 1, we do not need to serialize/deserialize bitmap/HLL objects.
Encapsulate some HTTP interfaces for better management and maintenance of Doris clusters.
The HTTP interfaces include getting cluster connection information, node information, and node configuration information, batch-modifying node configurations, and getting query profiles.
For details, please refer to the documents under:
`docs/zh-CN/administrator-guide/http-actions/fe/manager/`

Fix #6366
There is a bloom filter for each data page of a column that has a bloom filter index.
The `_has_null` flag helps to judge whether `null` exists in a data page:
if a `null` value is added to a data page, `_has_null` is set to `true`.
After the bloom filter for a data page is finished, `_has_null` should be reset to `false` to prepare for the next data page.
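A minimal sketch of the per-page reset pattern; `BloomFilterWriter` and `flush_page` are hypothetical names, not the actual Doris classes:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical per-page bloom filter writer; illustrative only.
class BloomFilterWriter {
public:
    void add_nulls(std::size_t count) { _has_null = _has_null || count > 0; }
    void add_value(uint64_t hash) { _hashes.push_back(hash); }

    // Called when a data page is full: emit the page's bloom filter,
    // then reset the per-page state for the next page.
    void flush_page() {
        // ... build and write a bloom filter from _hashes and _has_null ...
        _hashes.clear();
        _has_null = false;  // the fix: without this reset, the flag from a
                            // page containing nulls leaks into later pages
    }

private:
    std::vector<uint64_t> _hashes;
    bool _has_null = false;
};
```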
```
SELECT count(distinct products_id) FROM a_table as a WHERE 1=1 AND products_id in ( SELECT products_id from b_table );
```
The query above could return unstable results, because hash table construction errors may lead to unstable results.
The problem I want to solve is described in #6355.
This CL mainly changes:
1. Support compacting tablets that are under alter operations.
On the BE side, the compaction logic will select tablets whose state is `TABLET_NOTREADY` to do cumulative compaction. (A sketch of this selection follows the list.)
2. Remove the "alter_task" field from the tablet's meta on the BE side.
The "alter_task" field has not been used for a long time.
3. Support doing delete operations while a table is under an alter operation.
Previously, when a table was under an alter operation, executing a delete would return an error: "Table's state is not NORMAL".
Now, a delete can be executed successfully as long as the condition column is not under schema change,
and the delete condition will be applied to all materialized indexes.
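A minimal sketch of the tablet selection in change 1, with a hypothetical `Tablet` type (the actual Doris compaction picker is more involved):

```cpp
#include <memory>
#include <vector>

enum class TabletState { RUNNING, NOTREADY /* under an alter operation */ };

struct Tablet {  // hypothetical minimal tablet
    TabletState state = TabletState::RUNNING;
};

// Tablets in NOTREADY state (under alter) used to be skipped; now they
// are also eligible for cumulative compaction.
std::vector<std::shared_ptr<Tablet>> pick_cumulative_candidates(
        const std::vector<std::shared_ptr<Tablet>>& all_tablets) {
    std::vector<std::shared_ptr<Tablet>> picked;
    for (const auto& t : all_tablets) {
        if (t->state == TabletState::RUNNING || t->state == TabletState::NOTREADY) {
            picked.push_back(t);
        }
    }
    return picked;
}
```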
1. `StorageEngine::_delete_tablets_on_unused_root_path` will try to obtain each tablet shard's write lock in `TabletManager`:
```
StorageEngine::_delete_tablets_on_unused_root_path
TabletManager::drop_tablets_on_error_root_path
obtain each tablet shard's write lock
```
2. `TabletManager::build_all_report_tablets_info` and other methods will obtain tablet shard read locks frequently.
As a result, `StorageEngine::_delete_tablets_on_unused_root_path` may hold the `_store_lock` for a long time,
making it difficult for other threads to acquire the `_store_lock` for writing, such as in `StorageEngine::get_stores_for_create_tablet`.
Since `drop_tablets_on_error_root_path` is a small-probability event, `TabletManager::drop_tablets_on_error_root_path` should return early when its parameter `tablet_info_vec` is empty.
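A minimal sketch of the early-return fix (hypothetical signature, not the exact Doris code):

```cpp
#include <vector>

struct TabletInfo {};  // placeholder for the real TabletInfo

// Returning before touching any shard lock when there is nothing to drop
// keeps the common (empty) case from blocking readers of the shard locks.
void drop_tablets_on_error_root_path(const std::vector<TabletInfo>& tablet_info_vec) {
    if (tablet_info_vec.empty()) {
        return;  // small-probability event; skip taking every shard's write lock
    }
    // ... acquire each tablet shard's write lock and drop the tablets ...
}
```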
* [Optimize] Optimize the speed of converting integers to strings
* Use fmt and std::from_chars to make integer-to-string and string-to-integer conversions more efficient (see the sketch below)
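A minimal sketch of both directions using the fmt library and `std::from_chars` (illustrative; the actual Doris call sites differ):

```cpp
#include <charconv>      // std::from_chars (C++17)
#include <cstdint>
#include <string>
#include <string_view>
#include <fmt/format.h>  // fmt::format_int

// Integer -> string: fmt::format_int writes into a stack buffer and is
// much faster than snprintf-based conversion.
std::string int_to_string(int64_t v) {
    fmt::format_int f(v);
    return std::string(f.data(), f.size());
}

// String -> integer: std::from_chars is locale-independent and
// non-allocating, unlike strtol/stoi.
bool string_to_int(std::string_view s, int64_t* out) {
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), *out);
    return ec == std::errc() && ptr == s.data() + s.size();
}
```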
Co-authored-by: caiconghui <caiconghui@xiaomi.com>
In the RuntimeFilter's BloomFilter, a decimal column would get a wrong hash value because of a strict-aliasing violation:
```
decimal12_t decimal = { 12, 12 };
murmurhash3(decimal) in bloom filter: 2167721464
expected:                             4203026776
```
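A minimal sketch of an aliasing-safe fix: copy the object representation into a byte buffer before hashing, instead of reading the struct through a pointer of an unrelated type. The FNV-1a helper below stands in for murmurhash3, and the `decimal12_t` layout here is only illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

struct decimal12_t {  // illustrative layout
    int64_t integer;
    int32_t fraction;
};

// Byte-oriented FNV-1a, standing in for murmurhash3 in this sketch.
uint32_t fnv1a(const void* data, std::size_t len) {
    const auto* p = static_cast<const unsigned char*>(data);
    uint32_t h = 2166136261u;
    for (std::size_t i = 0; i < len; ++i) { h = (h ^ p[i]) * 16777619u; }
    return h;
}

uint32_t hash_decimal(const decimal12_t& d) {
    // memcpy into a char buffer is well-defined; casting &d to an
    // unrelated pointer type and reading through it violates strict
    // aliasing and can yield a wrong hash under optimization.
    char buf[sizeof(decimal12_t)];
    std::memcpy(buf, &d, sizeof(buf));
    return fnv1a(buf, sizeof(buf));
}
```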
Use `commitAsync` to commit offsets to Kafka instead of `commitSync`, which may block for a long time.
Also, assign a group.id to a routine load job if the user did not specify the "property.group.id" property, so that all consumers of the job use the same group.id instead of a random id for each consume task.

Fix #6316
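A minimal sketch of both points against librdkafka's C++ API (error handling trimmed; the group-id fallback name is a hypothetical placeholder):

```cpp
#include <string>
#include <librdkafka/rdkafkacpp.h>

// Configure a consumer, falling back to a per-job group.id when the user
// did not set "property.group.id" on the routine load job.
RdKafka::KafkaConsumer* create_consumer(const std::string& user_group_id,
                                        const std::string& job_id) {
    std::string errstr;
    RdKafka::Conf* conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
    const std::string group_id =
            user_group_id.empty() ? "doris_routine_load_" + job_id  // hypothetical fallback
                                  : user_group_id;
    conf->set("group.id", group_id, errstr);
    RdKafka::KafkaConsumer* consumer = RdKafka::KafkaConsumer::create(conf, errstr);
    delete conf;  // librdkafka copies the conf on create
    return consumer;
}

void commit_offsets(RdKafka::KafkaConsumer* consumer) {
    // commitAsync() returns immediately; commitSync() can block the
    // consume task for a long time waiting on the broker.
    consumer->commitAsync();
}
```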
If the size of a memtable is greater than the max segment size, the memtable will flush more than one segment file. In this case, a BE coredump would be triggered when flushing the memtable.
When the right table has null values in a string column, the runtime filter may coredump:
```
select count(*) from baseall t1 join test t2 where t1.k7 = t2.k7;
```
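A minimal sketch of a guard for this case, with hypothetical column/filter types (the real Doris join build loop differs): skip null build-side values instead of dereferencing them when populating the runtime filter:

```cpp
#include <optional>
#include <string>
#include <string_view>
#include <vector>

// Hypothetical runtime filter; insert hashes a value into a bloom filter.
struct RuntimeFilter {
    void insert(std::string_view v) { /* ... hash v into the bloom filter ... */ }
};

// Build the filter from the right (build) side, skipping nulls: a null
// string slot must not be dereferenced.
void build_filter(const std::vector<std::optional<std::string>>& right_col,
                  RuntimeFilter* filter) {
    for (const auto& v : right_col) {
        if (!v.has_value()) continue;  // guard against null values
        filter->insert(*v);
    }
}
```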
Currently, the functions lower()/upper() handle only one char at a time.
A vectorized implementation has been added, making them about 2x faster. Performance test results (100 runs each):

| String length | Vectorized | Non-vectorized |
| --- | --- | --- |
| 26 | 99,491 ns | 134,766 ns |
| 260 | 179,341 ns | 344,995 ns |
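A minimal sketch of the vectorizable approach: a branch-light per-byte loop over the whole buffer, which compilers auto-vectorize with SIMD, instead of a per-character function call (ASCII only, illustrative):

```cpp
#include <cstddef>

// Upper-case ASCII letters in place. A simple, dependency-free loop like
// this is auto-vectorized by modern compilers (SSE/AVX), which is where
// the ~2x speedup over one-char-at-a-time conversion comes from.
void to_upper_ascii(char* data, std::size_t len) {
    for (std::size_t i = 0; i < len; ++i) {
        char c = data[i];
        data[i] = (c >= 'a' && c <= 'z') ? static_cast<char>(c & ~0x20) : c;
    }
}
```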
## Proposed changes
Add transaction support for insert operations. When inserting a large number of rows, a transactional insert costs much less time than non-transactional inserts (about 1/1000 of the time).
### Syntax
```
BEGIN [ WITH LABEL label];
INSERT INTO table_name ...
[COMMIT | ROLLBACK];
```
### Example
commit a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
commit;
```
rollback a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
rollback;
```
commit a transaction with label:
```
begin with label test_label;
insert into Tbl values(11, 22, 33);
commit;
```
### Description
- `begin`: begins a transaction; subsequent inserts execute in the transaction until commit/rollback.
- `commit`: commits the transaction; the data in the transaction is inserted into the table.
- `rollback`: aborts the transaction; nothing is inserted into the table.
### Main implementation principle
1. Begin a transaction in the session; subsequent SQL statements are executed in the transaction.
2. The insert SQL is parsed to get the database name and table name, which are used to select a BE and create a pipe to accept the data.
3. All inserted values are sent to the BE and written into the pipe.
4. A thread gets the data from the pipe, then writes it to disk.
5. Commit completes the transaction and makes the data visible.
6. Rollback aborts the transaction.
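A minimal state-machine sketch of steps 1, 5, and 6; the names are hypothetical and the real FE logic lives in Java, so this C++ sketch is illustrative only:

```cpp
#include <optional>
#include <stdexcept>
#include <string>

// Hypothetical per-session transaction state for the begin/commit/rollback flow.
class SessionTxn {
public:
    void begin(std::optional<std::string> label = std::nullopt) {
        if (_active) throw std::runtime_error("transaction already open");
        _active = true;
        _label = std::move(label);  // set by BEGIN WITH LABEL, if any
    }
    void insert(const std::string& values) {
        if (!_active) throw std::runtime_error("no open transaction");
        // ... send the values to the BE-side pipe chosen at BEGIN time ...
    }
    void commit()   { require_active(); _active = false; /* make data visible */ }
    void rollback() { require_active(); _active = false; /* discard piped data */ }

private:
    void require_active() const {
        if (!_active) throw std::runtime_error("no open transaction");
    }
    std::optional<std::string> _label;
    bool _active = false;
};
```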
### Some restrictions on the use of this syntax
1. Only `insert` can be called in a transaction.
2. If an error occurs, `commit` will not succeed; the transaction will `rollback` directly.
3. By default, if part of the inserts in the transaction are invalid, `commit` will still insert the other, valid data into the table.
4. If you need `commit` to fail when any insert in the transaction is invalid, execute `set enable_insert_strict = true` before `begin`.
At present, some constant expression calculations are implemented on the FE side,
but they are incomplete, and some expressions cannot produce values completely consistent
with those calculated by the BE (such as some of the time functions).
Therefore, we provide a way to send all the constants in a SQL statement to the BE for calculation,
and only then begin to analyze and plan the SQL. This method can also solve the problem that some
complex constant calculations issued by BI tools cannot be processed on the FE side.
This feature is controlled by the session variable `enable_fold_constant_by_be`
(e.g. `set enable_fold_constant_by_be = true`), which is disabled by default.
* Revert "[Optimize] Put _Tuple_ptrs into mempool when RowBatch is initialized (#6036)"
This reverts commit f254870aeb18752a786586ef5d7ccf952b97f895.
* [BUG][Timeout][QueryLeak] Fix memory not being released in time; fix a core dump in BloomFilter