Fix a failing case in regression_test/suites/query_p0/group_concat/test_group_concat.groovy:
select
group_concat( distinct b1, '?'), group_concat( distinct b3, '?')
from
table_group_concat
group by
b2
exception:
lowestCostPlans with physicalProperties(GATHER) doesn't exist in root group
The root cause is that the literal '?' is pushed down to a slot by NormalizeAggregate. AggregateStrategies then treats that slot as a distinct parameter and generates an invalid PhysicalHashAggregate, which is rejected by ChildOutputPropertyDeriver.
I fix this bug by avoiding pushing literals down to slots in NormalizeAggregate, and by forbidding the generation of a streaming aggregate node when the group-by slots are empty.
get_query_ctx (holds query ctx map lock) ---> QueryCtx ---> runtime statistics mgr ---> allocate block memory ---> cancel query
The memtracker will try to cancel the query when memory is not available during allocation.
But the allocator is a fundamental API; if it calls upper-level APIs, it may deadlock.
It should not call any upper-level API during allocation.
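A minimal sketch of that deadlock shape, with illustrative names (not the real Doris code): a non-reentrant lock is held across an allocation path, and the allocator's failure handler calls back into an upper-level API that takes the same lock.

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

// Illustrative names only; not the real Doris code.
struct QueryCtx {};

std::mutex g_query_ctx_map_lock;
std::unordered_map<std::string, std::shared_ptr<QueryCtx>> g_query_ctx_map;

// Upper-level API: cancelling a query needs the map lock.
void cancel_query(const std::string& query_id) {
    std::lock_guard<std::mutex> lock(g_query_ctx_map_lock);
    g_query_ctx_map.erase(query_id);
}

// Hook invoked by the allocator when memory is not available.
void on_alloc_failure(const std::string& query_id) {
    cancel_query(query_id);  // calls back into an upper-level API
}

std::shared_ptr<QueryCtx> get_query_ctx(const std::string& query_id) {
    std::lock_guard<std::mutex> lock(g_query_ctx_map_lock);  // lock held...
    auto it = g_query_ctx_map.find(query_id);
    // ...so if any allocation on this path fails and the allocator calls
    // on_alloc_failure(), cancel_query() re-acquires g_query_ctx_map_lock
    // on the same thread. std::mutex is non-reentrant: deadlock.
    return it == g_query_ctx_map.end() ? nullptr : it->second;
}
```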
Previously, when FQDN was enabled, Doris called the DNS resolver to get an IP from a hostname
every time 1) the FE got a BE's grpc client, or 2) a BE got another BE's brpc client.
So under high concurrency, the DNS resolver could be overloaded and fail to resolve the hostname.
This PR mainly changes:
1. Add DNSCache for both FE and BE.
The DNSCache runs on every FE and BE node. It holds a cache whose key is the hostname and whose value is the IP.
Callers can get an IP by hostname from this cache; if the hostname is not cached, the cache will try to resolve it
and update itself.
In addition, DNSCache has a daemon thread that refreshes the cache every 1 minute, in case the IP
changes at any time. A sketch of the idea follows.
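A minimal C++ sketch of the idea, assuming a `resolve_hostname()` placeholder for the real resolver (the FE side is actually Java; all names here are illustrative):

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>

// Placeholder for a real getaddrinfo()-based resolver.
static std::string resolve_hostname(const std::string& hostname) {
    return "0.0.0.0";  // stand-in result
}

class DNSCache {
public:
    DNSCache() : _refresher([this] { refresh_loop(); }) {}
    ~DNSCache() {
        _stop = true;
        _refresher.join();
    }

    // Return the cached IP for a hostname, resolving and caching it on a miss.
    std::string get(const std::string& hostname) {
        {
            std::lock_guard<std::mutex> lock(_lock);
            auto it = _cache.find(hostname);
            if (it != _cache.end()) return it->second;
        }
        std::string ip = resolve_hostname(hostname);  // resolve outside the lock
        std::lock_guard<std::mutex> lock(_lock);
        _cache[hostname] = ip;
        return ip;
    }

private:
    // Daemon thread: re-resolve all cached hostnames every minute, in case
    // an IP changed. Sleeps in short slices so the destructor can stop it.
    void refresh_loop() {
        int slept_seconds = 0;
        while (!_stop) {
            std::this_thread::sleep_for(std::chrono::seconds(1));
            if (++slept_seconds < 60) continue;
            slept_seconds = 0;
            std::lock_guard<std::mutex> lock(_lock);
            for (auto& [hostname, ip] : _cache) ip = resolve_hostname(hostname);
        }
    }

    std::mutex _lock;
    std::unordered_map<std::string, std::string> _cache;
    std::atomic<bool> _stop{false};
    std::thread _refresher;  // declared last: starts after _stop is ready
};
```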
There are other implementations of this DNS cache:
1. 36fed13997
This is for the BE side, but it does not handle the IP change case.
2. https://github.com/apache/doris/pull/28479
This is for the FE side, but it only works on the Master FE; other FE nodes will not be aware of the IP change.
Also, there are a bunch of BackendServiceProxy instances, and that PR only handles the cache in one of them.
1. Fix a core dump in the variant type when calling the get_data_at function, which was not implemented.
2. Fix some cases that could not pass on the old planner with fold_constant_by_be = on.
3. Turn on enable_fold_constant_by_be = true.
Both global locks in fragment mgr should only protect the map logic; they must not be used to protect the cancel method.
The fragment ctx cancel method should be protected by its own lock.
Otherwise, query ctx cancel ---> pipelinex fragment cancel ---> query ctx cancel will deadlock.
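A minimal sketch of this locking rule, with illustrative names (not the actual FragmentMgr code): the global lock guards only the map, each fragment guards its own cancel state, and the map lock is dropped before calling into the fragment.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>

// Illustrative names; the real FragmentMgr / pipeline code differs.
struct FragmentCtx {
    std::mutex cancel_lock;  // protects cancel state only
    bool cancelled = false;

    void cancel() {
        std::lock_guard<std::mutex> lock(cancel_lock);
        if (cancelled) return;
        cancelled = true;
        // ...notify operators, release resources...
    }
};

class FragmentMgr {
public:
    void cancel(int64_t fragment_id) {
        std::shared_ptr<FragmentCtx> ctx;
        {
            // The global lock guards only the map lookup...
            std::lock_guard<std::mutex> lock(_map_lock);
            auto it = _ctx_map.find(fragment_id);
            if (it == _ctx_map.end()) return;
            ctx = it->second;
        }
        // ...and is released before calling into the fragment, so a cancel
        // chain that loops back (query cancel -> fragment cancel -> query
        // cancel) never re-acquires a lock it already holds.
        ctx->cancel();
    }

private:
    std::mutex _map_lock;  // protects _ctx_map only
    std::map<int64_t, std::shared_ptr<FragmentCtx>> _ctx_map;
};
```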
The standard says that the input parameter `pos` of std::vector::erase
must be valid and dereferenceable; the `end()` iterator cannot be used
as a value of `pos`. I did some tests and the crash only occurs when the
vector is empty. Fortunately, `local_files` is usually not empty.
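A minimal sketch of the guard (the helper name is made up): when the vector is empty, `begin() == end()`, and passing `end()` to `erase()` is undefined behavior.

```cpp
#include <string>
#include <vector>

// Hypothetical helper illustrating the guard. For an empty vector,
// begin() == end(), and erasing end() is undefined behavior.
void erase_front_if_any(std::vector<std::string>& local_files) {
    if (!local_files.empty()) {
        local_files.erase(local_files.begin());
    }
}
```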
The file meta cache on BE is used to cache metadata of external tables' files, such as Parquet footers.
This cache is limited by entry count, not by memory consumption.
So if a cached object is big (e.g., a large Parquet footer), the total memory consumption of this cache
can be large and cause OOM.
This PR mainly changes:
1. Add a new method `exceed_prune_limit()` to `CachePolicy` (see the sketch after this list).
For `ObjLRUCache`, it always returns true, so that a minor or full GC on BE will prune this cache each time.
2. Reduce the default capacity of the file meta cache from 20000 to 1000.
Also change the default capacity of the hdfs file handle cache from 20000 to 1000.
3. Change the judgement of whether to enable the file meta cache when querying.
If the number of files to be read is larger than 1/3 of the file meta cache's capacity, the file meta cache
will be disabled for this query, because the cache is useless when there are too many files.
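A minimal sketch of the new hook, with simplified signatures (the real `CachePolicy` interface has more members):

```cpp
// Simplified signatures; the real CachePolicy interface has more members.
class CachePolicy {
public:
    virtual ~CachePolicy() = default;
    // Whether this cache should be pruned by the BE's GC right now.
    virtual bool exceed_prune_limit() = 0;
};

class ObjLRUCache : public CachePolicy {
public:
    // Object-counted caches cannot bound their memory precisely, so always
    // let a minor or full GC prune them.
    bool exceed_prune_limit() override { return true; }
};
```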
In order to add common code to the value deleter of the LRU cache, make all LRU cache values inherit from the LRUCacheValueBase class and track memory in the destructor.
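A minimal sketch of that base class, using an atomic counter as a stand-in for Doris's real memory tracker (the constructor-side accounting is my assumption; the PR only states that tracking is released in the destructor):

```cpp
#include <atomic>
#include <cstddef>

// Stand-in for Doris's real memory tracker.
inline std::atomic<size_t> g_lru_cache_tracked_bytes{0};

// Every LRU cache value derives from this base, so the cache's value
// deleter runs the shared accounting logic via the virtual destructor.
class LRUCacheValueBase {
public:
    explicit LRUCacheValueBase(size_t bytes) : _bytes(bytes) {
        g_lru_cache_tracked_bytes.fetch_add(_bytes, std::memory_order_relaxed);
    }
    virtual ~LRUCacheValueBase() {
        // Release the tracked memory when the value is evicted or destroyed;
        // subclasses need no per-type deleter code for accounting.
        g_lru_cache_tracked_bytes.fetch_sub(_bytes, std::memory_order_relaxed);
    }

private:
    size_t _bytes;
};
```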