This bug was introduced by PR #7936, which changed the key type of connectionMap from Long to Integer,
so that connectionMap could no longer find the ConnectContext by connectionId.
Check the Java version when Doris starts, to avoid the poor user experience caused by an inconsistency
between the Java version used for compilation and the Java version used at runtime.
If the two versions are inconsistent, Doris will not start and a prompt message will be given.
1. Fix a BE crash caused by the destruction sequence. (close #8058)
2. Add a new BE config `compaction_task_num_per_fast_disk`.
This config specifies the maximum number of concurrent compaction tasks on a fast disk (typically SSD),
so that on high-speed disks we can execute more compaction tasks at the same time
and compact the data as soon as possible.
3. Avoid frequently selecting unqualified tablets to perform compaction.
4. Lower some log levels to reduce the log size of BE.
5. Modify some clone logic to handle errors correctly.
Hive Bitmap UDF provides UDFs for generating bitmaps and performing bitmap operations in Hive tables.
The bitmap in Hive is exactly the same as the Doris bitmap.
Bitmaps in Hive can be imported into Doris through Spark bitmap load.
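For illustration, a hedged sketch of how these UDFs might be used in Hive, assuming they have already been registered under the names `to_bitmap`, `bitmap_union`, and `bitmap_count`, and using a hypothetical table `user_visit_log`:
```sql
-- Hypothetical Hive query: build a bitmap of user ids per day and
-- count the distinct users. Assumes the Doris Hive Bitmap UDFs
-- to_bitmap, bitmap_union and bitmap_count are registered in Hive.
SELECT dt,
       bitmap_count(bitmap_union(to_bitmap(user_id))) AS uv
FROM user_visit_log
GROUP BY dt;
```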
Two-phase batch commit means:
during Stream Load, after the data is written, a message is returned to the client;
the data is invisible at this point and the transaction status is PRECOMMITTED.
The data becomes visible only after COMMIT is triggered by the client.
1. The user can invoke the following interface to trigger a commit operation for the transaction:
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:commit" \
http://fe_host:http_port/api/{db}/_stream_load_2pc
or
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:commit" \
http://be_host:webserver_port/api/{db}/_stream_load_2pc
2. The user can invoke the following interface to trigger an abort operation for the transaction:
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:abort" \
http://fe_host:http_port/api/{db}/_stream_load_2pc
or
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:abort" \
http://be_host:webserver_port/api/{db}/_stream_load_2pc
Framework code for statistics collection,
containing only the main data structures, with no implementation details.
This PR does not affect any existing code,
and users are not yet able to create statistics jobs.
1. Set both `tuple_offsets` and `new_tuple_offsets` in PRowBatch for compatibility.
2. Set the default of the FE config `repair_slow_replica` to false,
to avoid impacting the load process after upgrading (see the example after this list).
E.g., if there are only 2 replicas and one has a high version count, after the upgrade
that replica would be set to bad, and the load process would be stopped
because only 1 replica is alive.
3. Fix a bug that NodeChannel may be blocked at `close_wait()`,
because the `add_batch_finish` flag was not set after the last rpc finished.
4. Fix an NPE in RoutineLoadScheduler.
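Regarding item 2 above, the old behavior can still be restored at runtime; a minimal sketch, assuming `repair_slow_replica` is a mutable FE config:
```sql
-- Hedged sketch: re-enable slow replica repair at runtime
-- (assumes repair_slow_replica is a mutable FE config).
ADMIN SET FRONTEND CONFIG ("repair_slow_replica" = "true");
```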
1. Add `iceberg_table_creation_strict_mode` in `fe.conf` to control Iceberg external table creation when a data type is not supported in Doris.
2. Add `REFRESH` syntax to synchronize the Iceberg table and database (see the sketch after this list).
3. Support creating Iceberg external tables with specific column definitions.
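A minimal sketch of the `REFRESH` syntax, assuming it accepts either a table or a database name (the names below are hypothetical):
```sql
-- Hedged sketch: re-synchronize an Iceberg external table and database;
-- iceberg_db and logs_1 are hypothetical names.
REFRESH TABLE iceberg_db.logs_1;
REFRESH DATABASE iceberg_db;
```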
PR #7936 changed some FE log levels to debug, so when an error happens it is not easy to find out
which SQL caused the error.
So this PR adds the stmt id and query id to the error log, so that users can use these identifiers to find the SQL in fe.audit.log.
If the tableRef refers to a CTE or a view,
the tableRef will be reset during semantic analysis.
The new tableRef needs to inherit the lateral view property of the original tableRef
to ensure that the lateral view is not accidentally lost during parsing.
This PR mainly changes:
1. Change the definition of PBlock
The new PBlock consists of a set of PColumnMeta and a binary buffer.
The PColumnMeta records the metadata of all columns in the Block,
while the buffer stores the serialized binary data of all columns.
2. Refactor the serialize/deserialize methods of data types
Rewrite `serialize()/deserialize()` of IDataType, and add
a new method `get_uncompressed_serialized_bytes()` to get the total length
of the uncompressed serialized data of a column.
3. Rewrite the serialize/deserialize methods of Block
Now, when serializing a Block to a PBlock, it first gets the total length
of the uncompressed serialized data of all columns in the Block, then allocates
the memory and writes the serialized data to the buffer.
4. Use a brpc attachment to transmit the serialized column data
1. If the table or db has been dropped, acquiring the write lock fails, or we just skip it or throw an exception.
2. If we recover a table or db, we must ensure that the dropped state is unmarked only after the recover journal has been written.
3. db.dropTable corresponds to db.createTable. The table.markDropped method is not moved into db.dropTable,
because all metadata added to the db or catalog must be added after the recover journal is written, so markDropped
and unmarkDropped must be invoked outside the dropTable and createTable methods.
Support implementing UDFs through the GRPC protocol. This brings several benefits:
1. The UDF implementation language is not limited to C++; users can use any familiar language to implement UDFs.
2. UDFs are decoupled from Doris: a UDF will not cause a Doris coredump, the UDF's computing resources are separated from Doris, and the Doris services are not affected.
However, an RPC UDF has a fixed overhead, so its performance is much slower than a C++ UDF, especially when the amount of data is large.
Create the function like:
```
CREATE FUNCTION rpc_add(INT, INT) RETURNS INT PROPERTIES (
"SYMBOL"="add_int",
"OBJECT_FILE"="127.0.0.1:9999",
"TYPE"="RPC"
);
```
The function service needs to implement the `check_fn` and `fn_call` methods.
Note:
THIS IS AN EXPERIMENTAL FEATURE, THE INTERFACE AND DATA STRUCTURE MAY BE CHANGED IN FUTURE !!!
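For illustration, calling the function defined above from Doris might look like this (a hedged sketch):
```sql
-- Hedged sketch: invoke the rpc_add function defined above with constant arguments.
SELECT rpc_add(1, 2);
```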
This PR mainly includes the following two changes:
1. Shorten FE unit test time
In Doris's FE unit tests, starting a Doris cluster is a time-consuming operation.
In this PR, the unit tests of some small functions are merged into `QueryPlanTest`,
so that the same cluster is shared,
avoiding the problem that the overall FE unit test time is too long.
2. Refine the logic of PR #7851
Although the function was implemented correctly in PR #7851,
the logic is not concise enough.
This PR mainly simplifies the redundant code in terms of engineering implementation.
Change 1: Support an adaptive runtime filter: IN_OR_BLOOM_FILTER
The processing logic is:
If the number of rows in the right table < runtime_filter_max_in_num, the IN predicate takes effect.
If the number of rows in the right table >= runtime_filter_max_in_num, the Bloom filter takes effect.
Change 2: The default runtime filter type is changed to IN_OR_BLOOM_FILTER
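A minimal sketch of selecting this filter and tuning the row-count threshold, assuming both names below are exposed as session variables:
```sql
-- Hedged sketch: choose the adaptive runtime filter and set the
-- threshold; variable names are assumed from the description above.
SET runtime_filter_type = "IN_OR_BLOOM_FILTER";
SET runtime_filter_max_in_num = 1024;
```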
* [improvement](show) Support that users can use the SHOW DATA SKEW statement instead of the ADMIN version
This PR mainly does two things:
1. Support that users can use the SHOW DATA SKEW statement instead of the ADMIN version (see the example below).
2. Fix an FE UT failure caused by PR [improvement](rewrite) Make RewriteDateLiteralRule to be compatible with mysql #7876 and PR [feature-wip](iceberg) Step1: Support create Iceberg external table #7391.
Co-authored-by: caiconghui1 <caiconghui1@jd.com>
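For illustration, a hedged sketch of the non-admin form of the statement (db1 and tbl1 are hypothetical names):
```sql
-- Hedged sketch: show data skew without the ADMIN prefix;
-- db1.tbl1 is a hypothetical table.
SHOW DATA SKEW FROM db1.tbl1;
```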
The tuple ids of the empty set node must be exactly the same as the tuple ids of the origin root node.
In the issue, we found that once the tree where the root node is located has a window function,
the tuple ids of the empty set node cannot be calculated correctly.
This PR mainly fixes this problem.
In order to calculate the correct tuple ids,
instead of using the tuple ids obtained from the SelectStmt.getMaterializedTupleIds() function as in the past,
we directly use the tuple ids of the origin root node.
Although we tried to fix #7929 by modifying the SelectStmt.getMaterializedTupleIds() function,
that method can't get the tuple of the last correct window function,
so we use another way to construct the tuple ids of empty set nodes.
Fix bugs of lateral view with a CTE or a where clause.
The error cases can be found in the newly added tests in `TableFunctionPlanTest.java`.
But there are still some bugs that are not fixed, so the unit test is annotated with @Ignore.
This PR contains the change from #7824:
> Issue Number: close#7823
>
> After the subquery is rewritten, the rewritten stmt needs to be reset
> (that is, the content of the first analyze semantic analysis is cleared),
> and then the rewritten stmt can be reAnalyzed.
>
> The lateral view ref in the previous implementation forgot to implement the reset function.
> This caused it to keep the first error message in the second analyze.
> Eventually, two duplicate tupleIds appear in the new stmt and are associated with different tuples.
> From the explain string, the following syntax will have an additional wrong join predicate.
> ```
> Query: explain select k1 from test_explode lateral view explode_split(k2, ",") tmp as e1 where k1 in (select k3 from tbl1);
> Error equal join conjunct: `k3` = `k3`
> ```
>
> This PR mainly adds the reset function of the lateral view
> to avoid possible errors in the second analyze
> when the lateral view and subquery rewrite occur at the same time.
Close related #7389
Support create Iceberg external table in Doris.
This is the first step to support Iceberg external table.
### Create Iceberg external table
This PR describes two ways to create Iceberg external tables. Neither way requires explicitly specifying column definitions; Doris automatically converts them based on Iceberg's column definitions.
1. Create an Iceberg external table directly
```sql
CREATE [EXTERNAL] TABLE table_name
ENGINE = ICEBERG
[COMMENT "comment"]
PROPERTIES (
"iceberg.database" = "iceberg_db_name",
"iceberg.table" = "iceberg_table_name",
"iceberg.hive.metastore.uris" = "thrift://192.168.0.1:9083",
"iceberg.catalog.type" = "HIVE_CATALOG"
);
```
2. Create an Iceberg database and automatically create all the tables under that db.
```sql
CREATE DATABASE db_name
[COMMENT "comment"]
PROPERTIES (
"iceberg.database" = "iceberg_db_name",
"iceberg.hive.metastore.uris" = "thrift://192.168.0.1:9083",
"iceberg.catalog.type" = "HIVE_CATALOG"
);
```
### Show table creation
1. For individual tables you can view them with `help show create table`.
```sql
mysql> show create table iceberg_db.logs_1;
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| logs_1 | CREATE TABLE `logs_1` (
`level` varchar(-1) NOT NULL COMMENT "null",
`event_time` datetime NOT NULL COMMENT "null",
`message` varchar(-1) NOT NULL COMMENT "null"
) ENGINE=ICEBERG
COMMENT "ICEBERG"
PROPERTIES (
"iceberg.database" = "doris",
"iceberg.table" = "logs_1",
"iceberg.hive.metastore.uris" = "thrift://10.10.10.10:9087",
"iceberg.catalog.type" = "HIVE_CATALOG"
) |
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```
2. For an Iceberg database, you can view it with `help show table creation`.
```sql
mysql> show table creation from iceberg_db;
+--------+---------+---------------------+---------------------------------------------------------+
| Table | Status | Create Time | Error Msg |
+--------+---------+---------------------+---------------------------------------------------------+
| logs | fail | 2021-12-14 13:50:10 | Cannot convert unknown type to Doris type: list<string> |
| logs_1 | success | 2021-12-14 13:50:10 | |
+--------+---------+---------------------+---------------------------------------------------------+
2 rows in set (0.00 sec)
```
This is a new syntax for showing the table creation records in an Iceberg database.
Syntax:
```sql
SHOW TABLE CREATION [FROM db] [LIKE mask]
```