The tuple string slot's ptr and len are not assigned properly on the send side, so the receive side may crash in some situations.
Detailed description:
On the send side, when we call RowBatch::serialize(PRowBatch* output_batch) to pack a RowBatch, Tuple::deep_copy()
is called. For each string slot, only slots that are not null get ptr and len set to proper values; null string
slots keep their original state, so ptr may point anywhere and len may hold an unexpected value.
On the receive side, unpacking is done by RowBatch::RowBatch(const RowDescriptor&, const PRowBatch&...). In this
function, each string slot's offset is converted to a valid string_val->ptr, whether the slot is null or not.
However, some business logic depends on string_val->len == 0; for example, AggregateFuncTraits::init() and HyperLogLog::deserialize()
return correctly only when slice.size <= 0. So if string_val->len is set to 0 on the send side everything is fine; otherwise the server
may crash.
From a network-communication viewpoint, we should make sure we transfer correct data: it is the sender's responsibility to set the data to proper
values, without making any assumption about how the receive side will use it.
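A minimal sketch of the sender-side rule, assuming a StringValue-like slot layout (StringValue and sanitize_null_string_slot are illustrative names, not the actual Doris code):

```cpp
#include <cstdint>

// Simplified stand-in for the string slot stored in a tuple.
struct StringValue {
    char* ptr;
    int64_t len;
};

// On the send side, a null string slot must not be shipped with whatever
// garbage the tuple happened to hold: zero out ptr and len so the receiver
// never sees a random pointer, and logic that relies on len == 0
// (e.g. HyperLogLog::deserialize()) keeps working.
inline void sanitize_null_string_slot(bool is_null, StringValue* slot) {
    if (is_null) {
        slot->ptr = nullptr;
        slot->len = 0;
    }
}
```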
Transfer the RowBatch in the Protobuf request as a controller attachment
when the maximum length of the RowBatch in the Protobuf request is exceeded.
This avoids reaching the upper limit of the Protobuf request length (2GB),
and performance is also expected to improve.
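A rough sketch of the idea, assuming a brpc Controller; the threshold name and the serialization step are illustrative, not the actual Doris code:

```cpp
#include <brpc/controller.h>
#include <cstddef>
#include <string>

// Illustrative threshold; the hard cap being avoided is the 2GB protobuf request limit.
static const std::size_t kMaxRowBatchInRequest = 64UL * 1024 * 1024;

// Small batches stay inside the protobuf request; large batches are shipped
// as a raw controller attachment so the protobuf size limit no longer applies.
void fill_row_batch(brpc::Controller* cntl, const std::string& serialized_batch,
                    std::string* request_field) {
    if (serialized_batch.size() <= kMaxRowBatchInRequest) {
        *request_field = serialized_batch;                    // regular protobuf field
    } else {
        cntl->request_attachment().append(serialized_batch);  // bypasses protobuf
    }
}
```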
This is because the constant MAX_PHYSICAL_PACKET_LENGTH in FE should be 2^24 - 1,
but it was set to 2^24 - 2 by mistake.
2. Fix bitmap_to_string, which may fail when the result is larger than 2GB
The broker scan node has two tuple descriptors:
One is the dest tuple and the other is the src tuple.
The src tuple is used to read the lines of the original file,
and the dest tuple is used to save the converted lines.
The preceding filter is executed on the src tuple, so the src tuple descriptor should be used
to initialize the filter expression.
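A hedged sketch of the point above; the types below are simplified stand-ins for the Doris TupleDescriptor / RowDescriptor / expression classes, not the real API:

```cpp
// Simplified stand-ins for the BE descriptor and expression types.
struct TupleDescriptor {};

struct RowDescriptor {
    explicit RowDescriptor(const TupleDescriptor&) {}
};

struct FilterExpr {
    // The filter resolves its column references against the row descriptor
    // it is prepared with.
    void prepare(const RowDescriptor&) {}
};

// The pre-filter runs on the raw line just read from the file (the src tuple),
// so it must be prepared against the src tuple descriptor, not the dest one.
void init_pre_filter(FilterExpr* filter, const TupleDescriptor& src_tuple_desc) {
    RowDescriptor src_row_desc(src_tuple_desc);
    filter->prepare(src_row_desc);
}
```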
1. Setting _report_thread_active to false does not need to be protected by _report_thread_lock, because
_report_thread_active is a bool, and writing it is assumed to be thread-safe as long as its size is no larger than the machine word length.
2. The report_profile thread may terminate early: in report_profile(), the while (_report_thread_active) loop may
exit immediately while _report_thread_active is still false, because the thread calling open() may be scheduled out between
_report_thread_started_cv.wait(l) and _report_thread_active = true. We should not assume how much time elapses
between two schedulings of a thread.
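A minimal sketch of the handshake and one way to close the race, assuming member names similar to PlanFragmentExecutor's (this is illustrative, not the actual Doris code):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

class ReportDemo {
public:
    void open() {
        std::unique_lock<std::mutex> l(_report_thread_lock);
        _report_thread_active = true;       // set before the handshake, so the report
                                            // thread can never read a stale false
        _report_thread = std::thread(&ReportDemo::report_profile, this);
        _report_thread_started_cv.wait(l);  // wait until the thread is running
    }

    void stop_report_thread() {
        _report_thread_active = false;      // plain bool write, no lock taken
        if (_report_thread.joinable()) {
            _report_thread.join();
        }
    }

private:
    void report_profile() {
        {
            std::unique_lock<std::mutex> l(_report_thread_lock);
            _report_thread_started_cv.notify_one();
        }
        while (_report_thread_active) {
            // ... periodically send the runtime profile ...
        }
    }

    std::mutex _report_thread_lock;
    std::condition_variable _report_thread_started_cv;
    std::thread _report_thread;
    // The PR argues an aligned bool write is safe without the lock;
    // std::atomic<bool> would make this formally data-race free.
    bool _report_thread_active = false;
};
```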
Now minidump file will be created when BE crashes.
And users can manually trigger a minidump by sending SIGUSR1 to the BE process.
More details can be found in the minidump.md document.
Increase compatibility with mysql
1. Added two system tables: files and partitions
2. Improved the return logic of mysql error code to make the error code more compatible with mysql
3. Added lock/unlock tables statement and show columns statement for compatibility with mysql dump
4. Compatible with the mysqldump tool; now you can use mysqldump to dump data and table structures from Doris
Currently, using mysqldump may print an error message like:
```
$ mysqldump -h127.0.0.1 -P9130 -uroot test_query_qa > a
mysqldump: Error: 'errCode = 2, detailMessage = select list expression not produced by aggregation output (missing from GROUP BY clause?): `EXTRA`' when trying to dump tablespaces
```
This error message does not affect the exported file; you can add `--no-tablespaces` to avoid it.
BitShufflePageDecoder reuses the memory for storing decode results and allocates it directly from the
`ChunkAllocator`, which improves performance to a certain extent.
In the case of #6285, the total time consumption is reduced by 13.5%, the time share of `~Reader()`
drops from 17.65% to 1.53%, and memory allocation is unified under `ChunkAllocator` for centralized
management, which helps subsequent memory optimization.
This avoids the memory waste caused by `MemPool`, because a chunk can be freed at any time, but the
performance is lower than allocating from `MemPool`. The guess is that, without `MemPool`'s secondary
allocation of large chunks, a large number of small chunks are requested directly from `ChunkAllocator`, and more
time is spent on the locks in `pop_free_chunk` and `push_free_chunk` (but this is not proven by the flame graphs of BE's CPU and
contention).
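A self-contained sketch of the allocation pattern described above; `ChunkAllocatorLike` is a stand-in for the real `ChunkAllocator` (the actual Doris interface may differ):

```cpp
#include <cstddef>
#include <cstdlib>

// Stand-in for the chunk handle returned by the allocator.
struct Chunk {
    void* data = nullptr;
    std::size_t size = 0;
};

// Stand-in for doris::ChunkAllocator; the real one keeps per-core free lists
// guarded by locks in pop_free_chunk/push_free_chunk.
class ChunkAllocatorLike {
public:
    static ChunkAllocatorLike* instance() {
        static ChunkAllocatorLike inst;
        return &inst;
    }
    bool allocate(std::size_t size, Chunk* chunk) {
        chunk->data = std::malloc(size);
        chunk->size = size;
        return chunk->data != nullptr;
    }
    void free(const Chunk& chunk) { std::free(chunk.data); }
};

// Unlike a MemPool allocation, which lives as long as the pool, the chunk is
// handed back as soon as the decoded page is no longer needed.
void decode_page(std::size_t decoded_size) {
    Chunk chunk;
    if (!ChunkAllocatorLike::instance()->allocate(decoded_size, &chunk)) {
        return;  // allocation failed
    }
    // ... bit-shuffle decode into chunk.data ...
    ChunkAllocatorLike::instance()->free(chunk);  // can be freed at any time
}
```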
1. Replace all boost::shared_ptr with std::shared_ptr
2. Replace all boost::scoped_ptr with std::unique_ptr
3. Replace all boost::scoped_array with std::unique_ptr<T[]>
4. Replace all boost::thread with std::thread
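A small illustrative example of the mapping (the variable names are arbitrary):

```cpp
#include <memory>
#include <thread>

static void worker() { /* ... */ }

int main() {
    // boost::shared_ptr<int>     ->  std::shared_ptr<int>
    std::shared_ptr<int> counter = std::make_shared<int>(0);

    // boost::scoped_array<char>  ->  std::unique_ptr<char[]>
    std::unique_ptr<char[]> buffer(new char[1024]);

    // boost::thread              ->  std::thread (must be joined or detached)
    std::thread t(worker);
    t.join();
    return 0;
}
```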
Users can directly query the data in Hive tables from Doris, and can use joins to perform complex queries, without laboriously importing data from Hive.
Main changes list below:
FE:
Extend HiveScanNode from BrokerScanNode
HiveMetaStoreClientHelper communicates with Hive and HDFS.
BE:
Treat HiveScanNode as BrokerScanNode, treat HiveTable as BrokerTable.
broker_scanner.cpp: support reading columns from the HDFS path.
orc_scanner.cpp: support reading HDFS files.
POM:
Add hive.version=2.3.7, hive-metastore and hive-exec
Add hadoop.version=2.8.0, hadoop-hdfs
Upgrade commons-lang to fix incompatibility with Java 9 and later.
Thrift:
Add THiveTable
Add read_by_column_def in TBrokerRangeDesc
1. Reduce the memory occupied by HLL:
replace uint64_t _explicit_data[1602] with a uint64_t* pointer;
allocate memory for explicit data only when really needed
2. Trace HLL memory usage
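A hedged sketch of the lazy-allocation idea; the class name, capacity, and members below are illustrative and do not mirror the actual doris::HyperLogLog layout:

```cpp
#include <cstdint>

class HllLike {
public:
    HllLike() = default;
    HllLike(const HllLike&) = delete;
    HllLike& operator=(const HllLike&) = delete;
    ~HllLike() { delete[] _explicit_data; }

    void add_explicit(uint64_t hash) {
        if (_explicit_data == nullptr) {
            // Allocated only when the first explicit value arrives, instead of
            // embedding a large fixed array in every HLL object.
            _explicit_data = new uint64_t[kExplicitCapacity];
        }
        if (_explicit_size < kExplicitCapacity) {
            _explicit_data[_explicit_size++] = hash;
        }
    }

private:
    static const int kExplicitCapacity = 160;  // illustrative capacity
    uint64_t* _explicit_data = nullptr;        // was: a fixed uint64_t array
    int _explicit_size = 0;
};
```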
1. Optimize HighWaterMarkCounter::add(): call `UpdateMax()` only if the delta is greater than 0,
to reduce the number of function calls (see the sketch after this list).
2. Delete useless code lines to keep MemTracker clean:
some member variables are never set but their values are checked, so the if conditions are never met; these code paths are removed.
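A minimal sketch of the item 1 change, assuming an atomic counter with an illustrative layout (not the actual Doris HighWaterMarkCounter):

```cpp
#include <atomic>
#include <cstdint>

class HighWaterMarkCounterLike {
public:
    void add(int64_t delta) {
        int64_t new_value = _value.fetch_add(delta) + delta;
        if (delta > 0) {          // the optimization: skip UpdateMax() for <= 0 deltas
            update_max(new_value);
        }
    }

private:
    void update_max(int64_t v) {
        int64_t cur = _max.load();
        // Standard CAS loop that only moves the high-water mark upward.
        while (v > cur && !_max.compare_exchange_weak(cur, v)) {
        }
    }

    std::atomic<int64_t> _value{0};
    std::atomic<int64_t> _max{0};
};
```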
In debug mode, when query memory is not enough, BE may go down.
FE sets useStreamingPreagg to true, but the BE function CreateHashPartitions checks that is_streaming_preagg_ should be false,
which then causes a core dump:
```
*** Check failure stack trace: ***
@ 0x2aa48ad google::LogMessage::Fail()
@ 0x2aa6734 google::LogMessage::SendToLog()
@ 0x2aa43d4 google::LogMessage::Flush()
@ 0x2aa7169 google::LogMessageFatal::~LogMessageFatal()
@ 0x24703be doris::PartitionedAggregationNode::CreateHashPartitions()
@ 0x2468fd6 doris::PartitionedAggregationNode::open()
@ 0x1e3b153 doris::PlanFragmentExecutor::open_internal()
@ 0x1e3af4b doris::PlanFragmentExecutor::open()
@ 0x1d81b92 doris::FragmentExecState::execute()
@ 0x1d840f7 doris::FragmentMgr::_exec_actual()
```
We should remove DCHECK(!is_streaming_preagg_).
Checking _encoding_map in the original code and returning early causes some encoding methods to never be pushed to default_encoding_type_map_ or value_seek_encoding_map_ in the EncodingInfoResolver constructor.
E.g.:
```
EncodingInfoResolver::EncodingInfoResolver() {
    ...
    _add_map<OLAP_FIELD_TYPE_BOOL, PLAIN_ENCODING>();
    _add_map<OLAP_FIELD_TYPE_BOOL, PLAIN_ENCODING, true>();
    ...
}
```
The second _add_map call has no effect.
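A self-contained illustration of the bug, with simplified stand-ins for the resolver's maps (the real Doris types and template signature differ):

```cpp
#include <map>
#include <utility>

using Key = std::pair<int /*field type*/, int /*encoding*/>;

struct ResolverLike {
    std::map<Key, bool> _encoding_map;
    std::map<int, int> default_encoding_type_map_;
    std::map<int, int> value_seek_encoding_map_;

    void add_map(int type, int encoding, bool value_seek) {
        // Buggy version:
        //   if (_encoding_map.count({type, encoding}) > 0) return;
        // The second call for (BOOL, PLAIN_ENCODING) with value_seek = true would
        // hit that return, and value_seek_encoding_map_ would never be filled.
        if (_encoding_map.count({type, encoding}) == 0) {
            _encoding_map[{type, encoding}] = true;
        }
        if (default_encoding_type_map_.count(type) == 0) {
            default_encoding_type_map_[type] = encoding;
        }
        if (value_seek) {
            value_seek_encoding_map_[type] = encoding;
        }
    }
};
```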
Added a brpc stub cache check-and-reset API, used to test whether the brpc stub cache is available and to reset it.
Also added a config for automatically checking and resetting the brpc stubs.
Schema change fails when memory allocation fails during row block sorting.
However, it should do an internal sorting first before failing the schema change
due to the memory allocation failure, in case there is enough
memory after the internal sorting.
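A hedged sketch of the intended control flow; names and types are illustrative and do not mirror the actual SchemaChangeWithSorting code:

```cpp
#include <cstddef>
#include <new>
#include <utility>
#include <vector>

struct RowBlockLike {
    std::vector<char> mem;
};

static bool try_allocate(std::size_t bytes, RowBlockLike* block) {
    try {
        block->mem.resize(bytes);
        return true;
    } catch (const std::bad_alloc&) {
        return false;
    }
}

static void internal_sorting_and_flush(std::vector<RowBlockLike>* buffered) {
    // ... sort the buffered blocks and write them out ...
    buffered->clear();  // releases the memory held by the buffered blocks
}

// On allocation failure, flush what is already buffered via an internal sort,
// then retry; only fail the schema change if the retry still cannot allocate.
bool add_block(std::vector<RowBlockLike>* buffered, std::size_t bytes) {
    RowBlockLike block;
    if (!try_allocate(bytes, &block)) {
        internal_sorting_and_flush(buffered);
        if (!try_allocate(bytes, &block)) {
            return false;
        }
    }
    buffered->push_back(std::move(block));
    return true;
}
```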