The new session variable `close_join_reorder` is used to turn off all automatic join reorder algorithms.
If `close_join_reorder` is true, Doris will execute the query using the join order given in the original query.
1. Migrate some of the best practice articles to the blog
2. Rename the "performance tests and best practices" section to "performance tests and examples"
1. Reduce the memory occupied by HLL:
   - replace the inline array `uint64_t _explicit_data[1602]` with a `uint64_t` pointer
   - allocate the memory for explicit data only when it is really needed
2. Trace HLL memory usage
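The change above can be sketched as follows. This is a minimal illustration, not the actual Doris HyperLogLog class; the class name `ExplicitSet` and the accessors are assumptions, and only the capacity 1602 comes from the original description:

```cpp
#include <cstdint>
#include <memory>

// Hypothetical sketch: instead of embedding a fixed uint64_t array of
// 1602 slots in every HLL object (~12.8 KB each, even when empty), keep
// a pointer and allocate the explicit-value buffer on first insertion.
class ExplicitSet {
public:
    static constexpr int kExplicitSize = 1602; // capacity from the original array

    void insert(uint64_t hash) {
        if (_explicit_data == nullptr) {
            // Allocate lazily: empty HLL objects pay only for a pointer.
            _explicit_data = std::make_unique<uint64_t[]>(kExplicitSize);
        }
        if (_num_values < kExplicitSize) {
            _explicit_data[_num_values++] = hash;
        }
    }

    int size() const { return _num_values; }
    bool allocated() const { return _explicit_data != nullptr; }

private:
    // Was: uint64_t _explicit_data[1602]; now 8 bytes when unused.
    std::unique_ptr<uint64_t[]> _explicit_data;
    int _num_values = 0;
};
```

With many empty or small HLL columns in flight, deferring the allocation shrinks the per-object footprint from kilobytes to a pointer.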
1. Optimize `HighWaterMarkCounter::add()`: call `UpdateMax()` only if the delta is greater than 0, to reduce the number of function calls
2. Delete useless code lines to keep MemTracker clean: some data members are never set but their values are still checked, so those `if` conditions can never be met; remove this dead code
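The first optimization can be sketched like this. The class shape below is an assumption for illustration, not the actual Doris counter; the point is only that a non-positive delta can never raise the high-water mark, so the max-update path can be skipped entirely:

```cpp
#include <atomic>
#include <cstdint>

// Simplified sketch of a high-water-mark counter. Only add() with a
// positive delta can increase the peak, so update_max() is skipped
// for delta <= 0, saving a function call and a CAS loop per update.
class HighWaterMarkCounter {
public:
    void add(int64_t delta) {
        int64_t new_val =
                _current.fetch_add(delta, std::memory_order_relaxed) + delta;
        if (delta > 0) {
            update_max(new_val); // skipped entirely for delta <= 0
        }
    }

    int64_t current() const { return _current.load(); }
    int64_t peak() const { return _max.load(); }

private:
    void update_max(int64_t v) {
        int64_t old_max = _max.load(std::memory_order_relaxed);
        // compare_exchange_weak refreshes old_max on failure, so the
        // loop exits as soon as v is no longer above the stored max.
        while (v > old_max && !_max.compare_exchange_weak(old_max, v)) {
        }
    }

    std::atomic<int64_t> _current{0};
    std::atomic<int64_t> _max{0};
};
```

Memory trackers see frequent release (negative delta) traffic, so avoiding the max-update on that path cuts the call count roughly in half.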
In debug mode, when query memory is not enough, the BE may go down.
The FE sets `useStreamingPreagg` to true, but the BE function `CreateHashPartitions` checks that `is_streaming_preagg_` is false,
which causes a core dump.
```
*** Check failure stack trace: ***
@ 0x2aa48ad google::LogMessage::Fail()
@ 0x2aa6734 google::LogMessage::SendToLog()
@ 0x2aa43d4 google::LogMessage::Flush()
@ 0x2aa7169 google::LogMessageFatal::~LogMessageFatal()
@ 0x24703be doris::PartitionedAggregationNode::CreateHashPartitions()
@ 0x2468fd6 doris::PartitionedAggregationNode::open()
@ 0x1e3b153 doris::PlanFragmentExecutor::open_internal()
@ 0x1e3af4b doris::PlanFragmentExecutor::open()
@ 0x1d81b92 doris::FragmentExecState::execute()
@ 0x1d840f7 doris::FragmentMgr::_exec_actual()
```
We should remove `DCHECK(!is_streaming_preagg_)`.
The `defineExpr` in `Column` must be analyzed before calling its `treeToThrift` method.
And for `CreateReplicaTask`, there is no need to set `defineExpr` in `TColumn`.
Main changes:
1. Fix [Bug] Colocate group can not redistributed after dropping a backend #7019
2. Add detail msg about why a colocate group is unstable.
3. Add more suggestion when upgrading Doris cluster.
Add a new field `runningTxns` to the result of `SHOW ROUTINE LOAD`. E.g.:
```
Id: 11001
Name: test4
CreateTime: 2021-11-02 00:04:54
PauseTime: NULL
EndTime: NULL
DbName: default_cluster:db1
TableName: tbl1
State: RUNNING
DataSourceType: KAFKA
CurrentTaskNum: 1
JobProperties: {xxx}
CustomProperties: {"kafka_default_offsets":"OFFSET_BEGINNING","group.id":"test4"}
Statistic: {"receivedBytes":6,"runningTxns":[1001, 1002],"errorRows":0,"committedTaskNum":1,"loadedRows":2,"loadRowsRate":0,"abortedTaskNum":13,"errorRowsAfterResumed":0,"totalRows":2,"unselectedRows":0,"receivedBytesRate":0,"taskExecuteTimeMs":20965}
Progress: {"0":"10"}
ReasonOfStateChanged:
ErrorLogUrls:
OtherMsg:
```
So that users can view the status of the corresponding transactions of this job by executing `show transaction where id=xx`.
The jar files compiled by the Flink and Spark Connectors now carry the corresponding Flink/Spark version
and Scala version used at compile time, so that users can tell whether the version numbers match when using them.
Example output file name: `doris-spark-1.0.0-spark-3.2.0_2.12.jar`
Add a blog-sharing function to the document site, including a blog list and detail page. A guide on how to share blogs has also been added to the developer guide.
In the original code, checking `_encoding_map` and returning early causes some encoding methods to never be pushed into `default_encoding_type_map_` or `value_seek_encoding_map_` in the `EncodingInfoResolver` constructor.
E.g.:
```
EncodingInfoResolver::EncodingInfoResolver() {
    ...
    _add_map<OLAP_FIELD_TYPE_BOOL, PLAIN_ENCODING>();
    _add_map<OLAP_FIELD_TYPE_BOOL, PLAIN_ENCODING, true>();
    ...
}
```
The second `_add_map` call has no effect.
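A minimal sketch of the intended fix, with simplified, assumed types and member names (the real `_add_map` is a templated member of `EncodingInfoResolver`): the point is that a duplicate key in the shared encoding map must not short-circuit registration into the value-seek or default maps.

```cpp
#include <map>
#include <utility>

// Simplified stand-ins for the real field-type / encoding enums.
using FieldType = int;
using EncodingType = int;

struct EncodingInfoResolver {
    // Buggy version returned early when (type, encoding) was already in
    // _encoding_map; here each target map is populated independently.
    void add_map(FieldType type, EncodingType encoding, bool value_seek = false) {
        auto key = std::make_pair(type, encoding);
        if (_encoding_map.count(key) == 0) {
            _encoding_map[key] = encoding;
        }
        // NOT guarded by the _encoding_map check, so a second
        // registration of the same pair still takes effect here.
        if (value_seek) {
            _value_seek_encoding_map[type] = encoding;
        } else if (_default_encoding_type_map.count(type) == 0) {
            _default_encoding_type_map[type] = encoding;
        }
    }

    std::map<std::pair<FieldType, EncodingType>, EncodingType> _encoding_map;
    std::map<FieldType, EncodingType> _default_encoding_type_map;
    std::map<FieldType, EncodingType> _value_seek_encoding_map;
};
```

With the early return in place, the second `add_map(type, encoding, true)` call would hit the duplicate key and leave the value-seek map empty; moving the duplicate check off the fast-exit path restores it.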
The union (set operation) stmt also needs to analyze `OutFileClause`.
Whether a fragment is colocate only needs to be determined by checking the plan nodes belonging to that fragment.
Added a brpc stub cache check-and-reset API, used to test whether the brpc stub cache is available and to reset it.
Add a config used for automatically checking and resetting the brpc stub.
Schema change fails when memory allocation fails during row block sorting. However, it should do the internal sorting first before failing, in case there is enough memory after the internal sorting.
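The intended behavior can be sketched as follows. All names here (`MemorySorter`, `add_rows`, `internal_sort_and_flush`) are hypothetical stand-ins for the schema-change row-block handling; the point is the retry-after-flush order, not the real API:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: when buffering more rows would exceed the memory budget,
// first sort and flush what is already buffered (the "internal
// sorting"), and only report failure if the rows still do not fit.
struct MemorySorter {
    size_t mem_limit;          // bytes available for buffered rows
    size_t mem_used = 0;
    std::vector<int> buffered; // stand-in for buffered row blocks
    int flush_count = 0;

    bool add_rows(const std::vector<int>& rows, size_t bytes) {
        if (mem_used + bytes > mem_limit) {
            internal_sort_and_flush(); // free memory before giving up
            if (bytes > mem_limit) {
                return false; // still too large: a real failure
            }
        }
        buffered.insert(buffered.end(), rows.begin(), rows.end());
        mem_used += bytes;
        return true;
    }

    void internal_sort_and_flush() {
        std::sort(buffered.begin(), buffered.end());
        // ... a real implementation would write the sorted run out here ...
        buffered.clear();
        mem_used = 0;
        ++flush_count;
    }
};
```

Failing only after the flush-and-retry step means a schema change aborts solely when a single batch genuinely exceeds the budget, not merely because earlier batches are still resident.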