* [Feature] Support lateral view
The syntax:
```
select k1, e1 from test lateral view explode_split(k1, ",") tmp as e1;
```
`explode_split` is a Doris-specific function
which splits a string column by the specified separator string
and then converts the resulting elements into rows.
It is a combination of string splitting and a table function,
and its behavior is equivalent to Hive's `explode(split(string, string))`.
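For illustration, if a hypothetical table `test` held a single row with `k1 = 'a,b,c'`, the query above would produce one row per element:
```
select k1, e1 from test lateral view explode_split(k1, ",") tmp as e1;
+-------+------+
| k1    | e1   |
+-------+------+
| a,b,c | a    |
| a,b,c | b    |
| a,b,c | c    |
+-------+------+
```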
The implementation:
A table function operator is added to handle the lateral view syntax separately.
The query plan is as follows:
```
MySQL [test]> explain select k1, e1 from test_explode lateral view explode_split (k2, ",") tmp as e1;
+---------------------------------------------------------------------------+
| Explain String |
+---------------------------------------------------------------------------+
| PLAN FRAGMENT 0 |
| OUTPUT EXPRS:`k1` | `e1` |
| |
| RESULT SINK |
| |
| 1:TABLE FUNCTION NODE |
| | table function: explode_split(`k2`, ',') |
| | |
| 0:OlapScanNode |
| TABLE: test_explode |
+---------------------------------------------------------------------------+
```
* Add UT
* Add multi table function node
* Add session variable `enable_lateral_view` (see the example after this list)
* Fix UT
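The session variable gates the feature; assuming it defaults to off, it can presumably be enabled per session:
```
-- assumed to default to false
SET enable_lateral_view = true;
```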
I tested `hex` in a loop of 10 million iterations with random numbers:
the old `hex` averaged 4.92 s, and the optimized `hex` averaged 0.46 s, nearly 10x faster.
This reverts part of commit 11ec38dd6fd9f86632d83c47bd9d8bc05db69a2b (#6736),
because it causes the view query problem described in #6792.
The following bug fix is kept:
1. Fix the problem that the WITH statement cannot be printed when UNION is included in SQL
1. Replace `std::max` with a ternary expression; `std::max` is much heavier than the ternary operator.
2. Replace `std::set` with arrays; `std::set` is based on red-black trees, so traversal chases pointers and cache locality is poor.
3. Optimize the serialize function: computing `num_non_zero_registers` with fewer branches makes the serialization of `_registers` faster.
4. Tests show the performance improvement is significant.
When creating sync jobs, we should forbid different jobs from connecting to the same canal instance;
otherwise these jobs will compete with each other for the data produced by the same canal instance,
which may cause data inconsistency.
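For reference, a sketch of the statement shape, with illustrative identifiers and canal properties; the new check should reject a second job that reuses the same canal instance:
```
CREATE SYNC `test_db`.`job1`
(
    FROM `mysql_db`.`tbl1` INTO `tbl1`
)
FROM BINLOG
(
    "type" = "canal",
    "canal.server.ip" = "127.0.0.1",
    "canal.server.port" = "11111",
    "canal.destination" = "example",
    "canal.username" = "canal",
    "canal.password" = "canal"
);
-- Creating a second job with the same "canal.destination" = "example"
-- should now fail instead of silently competing for binlog data.
```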
When a query has multiple WHERE conditions, such as `where dt in (20210926,20210919) and hour<=13`,
the int * int product can overflow, and then the function extend_scan_key calls
`range.convert_to_fixed_value()` mistakenly. For a big `range [_low_value, _high_value)`,
a massive number of values is inserted into _fixed_values, finally resulting in OOM.
Remove some of the dynamic_cast calls to reduce the overhead caused by type conversion,
probably reducing the CPU consumption of Parquet file import by about 10%.
Fixes #6726
If a plan fragment contains a colocated agg plan node, it is a colocated fragment.
The scan ranges and backend ids of colocated fragment instances must be assigned differently from the ordinary scheduler logic:
tablets in the same bucket must fall on the same BE.
For example, for the same bucket in different partitions,
even though the tablet ids differ, the tablets must be scheduled to the same BE for the scan node.
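For context, colocated plan nodes arise from tables in the same colocate group; a minimal sketch of such a table, with an illustrative schema and group name:
```
CREATE TABLE t1
(
    k1 INT,
    v1 BIGINT SUM
)
AGGREGATE KEY(k1)
DISTRIBUTED BY HASH(k1) BUCKETS 8
PROPERTIES
(
    -- same-bucket tablets of tables in "group1" stay on the same BE
    "colocate_with" = "group1",
    "replication_num" = "3"
);
```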
1. This bug was introduced in #6582.
2. Optimize the "Address already in use" error message.
3. Add some documentation about compilation.
1. Add a custom thirdparty download URL.
2. Add a custom com.alibaba Maven jar package for DataX.
4. Fix a bug that BE crashes when closing the scan node, introduced in #6622.
* [Bug]: fix NullPointerException thrown when data is null
* [Bug]: distinguish between null and empty string
* [Feature]: flink-connector supports stream load parameters
* [Fix]: code style
* [Fix]: support JSON format import and use HttpClient for stream load
* [Fix]: remove System.out
* [Fix]: upgrade HttpClient version
* [Doc]: add JSON format import doc
Co-authored-by: wudi <wud3@shuhaisc.com>
1. Fix bug of UNKNOWN Operation Type 91
2. Support using the `resource_tag` property of a user to limit BE usage (see the sketch after this list)
3. Add new FE config `disable_tablet_scheduler` to disable tablet scheduler.
4. Add documents for resource tag.
5. Modify the default value of FE config `default_db_data_quota_bytes` to 1PB.
6. Add a new BE config `disable_compaction_trace_log` to disable the trace log of compaction time cost.
7. Modify the default value of BE config `remote_storage_read_buffer_mb` to 16MB.
8. Fix erroneous `show backends` results.
9. Add new BE config `external_table_connect_timeout_sec` to set the timeout when connecting to ODBC and MySQL tables.
10. Modify the issue template to enable blank issues, for release notes or other specific usage.
11. Fix a bug in the alpha_row_set `split_range()` function.
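A sketch of how items 2 and 3 are used, with illustrative host, tag, and user names:
```
-- Tag a BE (illustrative host and heartbeat port)
ALTER SYSTEM MODIFY BACKEND "host1:9050" SET ("tag.location" = "group_a");
-- Restrict user 'jack' to BEs carrying that tag
SET PROPERTY FOR 'jack' 'resource_tags.location' = 'group_a';
-- Item 3: disable the tablet scheduler via the new FE config
ADMIN SET FRONTEND CONFIG ("disable_tablet_scheduler" = "true");
```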
This reverts commit dedb57f87e31305db3e2a13e374ba4fd58043fca.
Reverts #6252
That commit may cause a tablet whose segments are all empty to never be compacted, resulting in a -235 error.
I will revert this commit; the problem will be solved in #6671.
1. Fix the problem that the WITH statement cannot be printed when `UNION` is included in SQL
2. In the `toSql` method, convert the normal VIEW into the final statement
3. Replace `selectStmt.originSql` with `selectStmt.toSql`
When using 3 FE followers and restarting the FEs, regardless of the order, the FEs may fail to start,
and bdb throws a RollbackException.
In this scenario, bdb suggests catching the exception, simply closing all ReplicatedEnvironment handles,
and then reopening them.
So we catch the RollbackException and reopen the ReplicatedEnvironment.
1. Fix a bug that sync jobs are not cancelled after deleting the database.
2. The MySQL and Doris tables should have a one-to-one correspondence;
if they do not, creating the task should fail.
3. Fix a bug that when the cluster has multiple FEs, a non-master FE cores when replaying the creation of a sync job.
4. Fix inconsistent data when updating key columns.
5. Fix a failure to synchronize data when there are multiple tables in a single sync job.
6. Fix a bug that resuming a paused sync job fails after restarting the master.
In some cases, the query plan thrift structure of a query may be very large
(for example, when there are many columns in SQL), resulting in a large number
of "send fragment timeout" errors.
This PR adds an FE config to control whether to transmit the query plan in a compressed format.
Using compressed format transmission can reduce the size by ~50%, but it may reduce
concurrency by ~10%. Therefore, in high-concurrency small-query scenarios,
you can choose to turn off compression.
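Assuming the config is runtime-mutable like other FE configs, it could be toggled as below; the config name here is a placeholder, not the actual name from this PR:
```
-- "query_plan_compression_enabled" is a placeholder; see the PR for the real config name
ADMIN SET FRONTEND CONFIG ("query_plan_compression_enabled" = "false");
```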
This demo includes: reading HDFS files and writing to Doris via stream load; reading Kafka message queues and writing to Doris via stream load; and reading Doris tables through the Spark Doris Connector to build a DataFrame dataset.