The buffered reader's `_cur_offset` should be initialized to the same value as the inner file reader's,
so that the reader starts reading at the right position.
Support starting consumption from a specified point in time, instead of a specific offset, when creating a Kafka routine load.
e.g.:
```
FROM KAFKA
(
"kafka_broker_list" = "broker1:9092,broker2:9092",
"kafka_topic" = "my_topic",
"property.kafka_default_offsets" = "2021-10-10 11:00:00"
);
or
FROM KAFKA
(
"kafka_broker_list" = "broker1:9092,broker2:9092",
"kafka_topic" = "my_topic",
"kafka_partitions" = "0,1,2",
"kafka_offsets" = "2021-10-10 11:00:00, 2021-10-10 11:00:00, 2021-10-10 12:00:00"
);
```
This PR also refactors the property analysis for creating or altering
routine load jobs, and unifies the analysis process in the `RoutineLoadDataSourceProperties` class.
The old colocate aggregation only covers the case where the child is a scan node.
In fact, as long as the child's data distribution meets the requirements,
a colocate aggregation can be performed no matter what plan node the child is.
This PR also corrects the data partition attribute of fragments:
the data partition of a fragment containing a scan node is hash partition rather than random.
This correction is what allows the planner to determine whether colocation is possible
from the actual distribution of the child fragments.
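A hypothetical illustration (`t1`, `t2`, `k1`, `v1`, `v2` are made-up names; both tables are assumed to be hash-distributed on `k1` and in the same colocate group):
```
-- Old behavior: colocate aggregation only when the child is a scan node.
SELECT k1, SUM(v1) FROM t1 GROUP BY k1;

-- New behavior: the child here is a colocate join whose output is still
-- hash-distributed on k1, so the aggregation on top of it can also be colocated.
SELECT t1.k1, SUM(t2.v2)
FROM t1 JOIN t2 ON t1.k1 = t2.k1
GROUP BY t1.k1;
```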
The `@Expose` annotation is used in the persistence logic of the old backup/restore.
The annotation is meant to exclude certain variables from serialization and deserialization.
However, it was used incorrectly, and Gson did not ignore the variables that should have been ignored.
This results in duplicate initialization when FE is restarted.
This PR uses the Doris-wrapped Gson directly and eliminates the use of the `@Expose` annotation.
Fixed `sortedTabletInfoList` being repeatedly initialized, which resulted in incorrect numbers.
Fixed #5852
Modify the Spark and Flink Doris connectors' requests to FE to fix a problem with the POST method:
the method should be the same as the one used when the request was originally sent.
1. Give some MemTrackers a reasonable parent MemTracker instead of the root tracker.
2. Make each MemTracker easy to trace.
3. Add a show level to MemTracker to reduce the number of trackers shown on the web page, providing a way to control how many trackers are displayed there.
Currently, `show data` does not support sorting. As the number of tables grows, this becomes inconvenient to manage, so sorting should be supported,
like:
```
mysql> show data order by ReplicaCount desc,Size asc;
+-----------+-------------+--------------+
| TableName | Size        | ReplicaCount |
+-----------+-------------+--------------+
| table_c   | 3.102 KB    | 40           |
| table_d   | .000        | 20           |
| table_b   | 324.000 B   | 20           |
| table_a   | 1.266 KB    | 10           |
| Total     | 4.684 KB    | 90           |
| Quota     | 1024.000 GB | 1073741824   |
| Left      | 1024.000 GB | 1073741734   |
+-----------+-------------+--------------+
```
The cause of the problem is that after a query is cancelled, OlapScanNode::transfer_thread still continues to schedule
OlapScanNode::scanner_thread until all tasks have been scheduled.
Although each task does not scan any data and exits quickly, it still consumes a lot of resources.
(Guess) This may be the cause of the bug (#5767) that causes the I/O to be full.
So after the query is cancelled, exit the scheduling loop in transfer_thread immediately; after all
scanner_threads have finished, transfer_thread also exits.
* Fix the issue that the hardware information on the Web UI home page cannot be loaded
* Add flink doris connector design document
* English document for the Flink Doris Connector design
Co-authored-by: zhangjf@shuhaisc.com <zhangfeng800729>
When a query is retried, the FE log cannot quickly associate the new and old queries by query id,
which increases the complexity of troubleshooting.
Modify the FE log printing logic to associate the new and old query ids; the printed log looks like this:
Query {old_query_id} {retry_times} times with new query id: {new_query_id}
1. relocation R_X86_64_32 against `__gxx_personality_v0' can not be used when making a shared object; recompile with -fPIC
2. warning: the use of `tmpnam' is dangerous, better use `mkstemp'
3. Death tests use fork(), which is unsafe particularly in a threaded context. For this test, Google Test couldn't detect the number of threads.
* Fix the issue that the hardware information on the Web UI home page cannot be loaded
* HTTP v2 is enabled by default
Co-authored-by: zhangjf@shuhaisc.com <zhangfeng800729>
* [Bug] Fix the bug that the database is not found when replaying the batch transaction remove log
[GlobalTransactionMgr.replayBatchRemoveTransactions():353] replay batch remove transactions failed. db 0
org.apache.doris.common.AnalysisException: errCode = 2, detailMessage = databaseTransactionMgr[0] does not exist
at org.apache.doris.transaction.GlobalTransactionMgr.getDatabaseTransactionMgr(GlobalTransactionMgr.java:84) ~[palo-fe.jar:3.4.0]
at org.apache.doris.transaction.GlobalTransactionMgr.replayBatchRemoveTransactions(GlobalTransactionMgr.java:350) [palo-fe.jar:3.4.0]
at org.apache.doris.persist.EditLog.loadJournal(EditLog.java:601) [palo-fe.jar:3.4.0]
at org.apache.doris.catalog.Catalog.replayJournal(Catalog.java:2452) [palo-fe.jar:3.4.0]
at org.apache.doris.master.Checkpoint.runAfterCatalogReady(Checkpoint.java:101) [palo-fe.jar:3.4.0]
at org.apache.doris.common.util.MasterDaemon.runOneCycle(MasterDaemon.java:58) [palo-fe.jar:3.4.0]
at org.apache.doris.common.util.Daemon.run(Daemon.java:116) [palo-fe.jar:3.4.0]
The id of the information_schema database is 0, and it has no transactions at all.
1. Reduce lock conflicts in the RuntimeProfile of BE.
2. Allow viewing the query profile while the query is still executing.
3. Reduce the wait time for `show proc '/current_queries'` (see the example after this list).
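For reference, the command mentioned in item 3:
```
SHOW PROC '/current_queries';
```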
1. Add a new dynamic partition property `create_history_partition`.
If set to true, Doris will create all partitions from `start` to `end` (see the sketch after this list).
2. Add a new FE config `max_dynamic_partition_num`
to limit the number of partitions created when creating a single table.
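A minimal sketch of how the new property might be used (table and column names are hypothetical, and the property values are examples only; `max_dynamic_partition_num` itself is an FE config, not a table property):
```
-- Hypothetical table: with create_history_partition = true, Doris creates all
-- partitions from dynamic_partition.start to dynamic_partition.end at creation time.
CREATE TABLE example_tbl (
    dt DATE,
    k1 INT,
    v1 INT
)
DUPLICATE KEY (dt, k1)
PARTITION BY RANGE (dt) ()
DISTRIBUTED BY HASH (k1) BUCKETS 8
PROPERTIES (
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.start" = "-7",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p",
    "dynamic_partition.buckets" = "8",
    "dynamic_partition.create_history_partition" = "true"
);
```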
1. Fix an NPE in ReplicasProcNode when the backend does not exist.
2. Forbid the create table like statement from specifying a view (see the illustration after this list).
3. Check the FE's own IP at startup to see whether it uses the original IP.
4. Modify the error message of tablet sink to show more detailed errors.
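A hypothetical illustration of item 2 (`t_copy` and `v_orders` are made-up names; `v_orders` is a view):
```
-- Now rejected: create table like may not reference a view.
CREATE TABLE t_copy LIKE v_orders;
```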
In the previous code, the routine load task did not catch the exception thrown when opening the transaction.
As a result, although the task could not be executed,
no exception was visible in show routine load; the routine load job simply appeared stuck.
This PR catches the QuotaExceedException when opening a transaction.
If the routine load task cannot be executed because the quota is exhausted,
the routine load will be paused and an error message will be presented to the user.
Similarly, other load methods will also catch similar exceptions and cancel the job.
1. Add `/api/compaction/run_status` to show the running compaction tasks.
2. Support running base and cumulative compaction for one tablet at the same time.
3. Modify some log levels.
4. Add a feedback document.
The distinct count result of a bitmap/hll column may be incorrect in spark load mode.
Fix some bugs in spark load to solve this problem:
1. FE is big-endian but BE is little-endian; BitmapValues should be converted to little-endian in FE's serialization.
2. Make BitmapUnionAggregator/HllUnionAggregator ignore `null` values.
3. Make sure encodeVarint64 in FE is consistent with BE.
Co-authored-by: weixiang <weixiang06@meituan.com>