the num segments should be read from rowset meta pb.
But the previous code error caused this value not to be set in some cases.
So when init the rowset meta and find that the num_segments is 0(not set),
we will try to calculate the num segments from AlphaRowsetExtraMetaPB,
and then set the num_segments field.
This should only happen in some rowsets converted from old version.
and for all newly created rowsets, the num_segments field must be set.
time(NULL) returns second-resolution timestamp, however all compaction related time in Tablet are in millis-resolution. Therefore should use UnixMillis() instead.
1. upgrade gutil code from imapla to new verison, include `cpuinfo`, `spinlock` and `linux_syscall_support `
2. impliments arm version utf8 check code
3. remove incompatible code from stopwatch
This CL supports arrow's zero copy read interface, which can make code
comply with arrow 0.15.
And the schema change unit test has some problem, I disable it in run-ut.sh
In some scenarios, when a user creates an olap table that is range partition by time, the user needs to periodically add and remove partitions to ensure that the data is valid. As a result, adding and removing partitions dynamically can be very useful for users.
* [Alter Table] No need to check whether table is stable when doing some kinds of alter operation.
Not all alter table operation require table to be stable. Such as rename, modify meta data.
Compaction task may sometimes consume much memory and results in OOM.
And currently, there is no good way to predict the mem consumption of
a compaction task, so I add a new BE config: max_compaction_concurrency
to limit the max concurrency of running compaction tasks manually.
When merge reads from one rowset with multi overlapping segments,
I introduce a priority queue(A Minimum heap data structure) for multipath merge sort,
to replace the old N*M time complexity algorithm.
This can significantly improve the read efficiency when merging large number of
overlapping data.
In mytest:
1. Compaction with 187 segments reduce time from 75 seconds to 42 seconds
2. Compaction with 3574 segments cost 43 seconds, and with old version, I kill the
process after waiting more than 10 minutes...
This CL only change the reads of alpha rowset. Beta rowset will be changed in another CL.
ISSUE: #2631
ISSUE #1553
This commit will remove function count_distinct().
We already have function multi_distinct_count as an alternative to help us calculate "count distinct" of any type value.
Besides, the count_distinct() function is with the the same symbol as count() function, which fails to express the meaning.
So I suggest to remove count_distinct() function.
Doris support AGG_KEYS/UNIQUE_KEYS/DUP_KEYS/ three storage model.
Among these three model, UNIQUE_KYES/DUP_KEYS is added after AGG_KEYS.
For historical tablet, the keys_type field to indicate storage model
may be missed for AGG_KEYS.
So upgrade from historical tablet, this situation should be taken into
consideration and set to be AGG_KEYS.