In the compaction case, the memory-map offsets arriving at the same OLAP convertor always start from 0 (the range [0, 0 + size)), but within one segment writer they should continue across the different pages.
For example:
last block's map offsets: [3, 6, 8, ..., 100]
this block's map offsets: [5, 10, 15, ..., 100]
The same convertor should record the last offset so that later incoming offsets follow it. After conversion, the current offsets become [105, 110, 115, ..., 200], and the column writer just calls append_data() to append pages with the correct offsets.
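As a minimal sketch of this offset continuation (illustrative Java; the real convertor lives in the BE's segment writer, and the class and method names here are assumptions):
```
// Illustrative sketch: shift each incoming block's local offsets so they
// continue after the previous block, as described above.
final class OffsetConvertor {
    private long lastOffset = 0; // last offset emitted for the previous block

    long[] convert(long[] blockOffsets) {
        long[] shifted = new long[blockOffsets.length];
        for (int i = 0; i < blockOffsets.length; i++) {
            shifted[i] = blockOffsets[i] + lastOffset;
        }
        if (shifted.length > 0) {
            lastOffset = shifted[shifted.length - 1]; // remember for the next block
        }
        return shifted;
    }
}
```
With the example above, converting [5, 10, 15, ..., 100] after a block ending at 100 yields [105, 110, 115, ..., 200], ready to pass to append_data().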
1. Add an HTTP interface for querying the q-error
2. Fix the selectivity calculation of inner join; previously it was always 0 when there was only one join condition
The problem is an exception thrown during analysis:
java.lang.IllegalStateException: exceptions :
errCode = 2, detailMessage = select list expression not produced by aggregation output (missing from GROUP BY clause?): xxx
The scenario is:
select aes_decrypt(xxx,xxx) as c0 from table group by c0;
Analysis of the problem:
The direct cause is a SlotRef mismatch, and this mismatch is due to a mismatched parameter count of the aes_decrypt function. When debugging, we can see that the SlotRef of the GROUP BY column is added to the ExprSubstitutionMap but cannot be matched against the select-list result columns. This happens because substituting an expr triggers re-analysis, so the implicit parameter gets added twice. The parameter count then no longer matches the function, the expression is not substituted with a SlotRef, and the exception is thrown.
Fix:
Add a call-once guard when appending the implicit third parameter to aes_decrypt-style functions: compare the child we want to add with the function's last child, and if they are the same, do not add it again.
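A hedged sketch of that guard (the helper class, its name, and its signature are illustrative, not the actual FE code):
```
import java.util.List;

final class ImplicitParamGuard {
    // Append the implicit parameter only if it is not already the last
    // child, so re-analysis during substitution cannot add it twice.
    static <E> void addOnce(List<E> children, E implicitParam) {
        if (children.isEmpty() || !children.get(children.size() - 1).equals(implicitParam)) {
            children.add(implicitParam);
        }
    }
}
```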
* when adding two non-existent FEs and then dropping those two non-existent FEs, we may hit an exception like this:
```
java.lang.IllegalArgumentException: com.sleepycat.je.config.IntConfigParam:
param je.rep.electableGroupSizeOverride doesn't validate, -1 is less than min of 0
at com.sleepycat.je.config.IntConfigParam.validate(IntConfigParam.java:47)
at com.sleepycat.je.config.IntConfigParam.validateValue(IntConfigParam.java:75)
at com.sleepycat.je.dbi.DbConfigManager.setVal(DbConfigManager.java:648)
at com.sleepycat.je.dbi.DbConfigManager.setIntVal(DbConfigManager.java:694)
at com.sleepycat.je.rep.ReplicationMutableConfig.setElectableGroupSizeOverrideVoid(ReplicationMutableConfig.java:523)
at com.sleepycat.je.rep.ReplicationMutableConfig.setElectableGroupSizeOverride(ReplicationMutableConfig.java:512)
at org.apache.doris.ha.BDBHA.removeUnReadyElectableNode(BDBHA.java:236)
at org.apache.doris.catalog.Env.dropFrontend(Env.java:2533)
```
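The trace shows the override handed to BDB JE going to -1. A hedged sketch of the kind of clamp that avoids it (an assumption about the fix, not the actual patch):
```
import com.sleepycat.je.rep.ReplicatedEnvironment;
import com.sleepycat.je.rep.ReplicationMutableConfig;

final class GroupSizeOverride {
    // BDB JE rejects a negative override (its minimum is 0), so clamp
    // the value before setting it when removing a not-yet-ready node.
    static void apply(ReplicatedEnvironment env, int electableCount, int unReadyCount) {
        ReplicationMutableConfig config = env.getRepMutableConfig();
        config.setElectableGroupSizeOverride(Math.max(0, electableCount - unReadyCount));
        env.setRepMutableConfig(config);
    }
}
```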
Nereids uses AutoCloseable to take and release table read locks.
However, if the AutoCloseable throws an exception while opening the resource, its close() function is never called.
So we close manually when an exception is thrown during the opening stage.
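A minimal sketch of the pattern, with a stand-in Lockable interface (the real FE types and names differ):
```
import java.util.ArrayList;
import java.util.List;

// Stand-in for the FE's lockable table interface (an assumption).
interface Lockable {
    void readLock();
    void readUnlock();
}

// If the constructor throws after some locks were taken, try-with-resources
// never calls close(), so we release the already-acquired locks manually.
final class TableReadLocks implements AutoCloseable {
    private final List<Lockable> locked = new ArrayList<>();

    TableReadLocks(List<? extends Lockable> tables) {
        try {
            for (Lockable table : tables) {
                table.readLock();
                locked.add(table);
            }
        } catch (RuntimeException e) {
            close(); // manual cleanup: close() would otherwise never run
            throw e;
        }
    }

    @Override
    public void close() {
        for (Lockable table : locked) {
            table.readUnlock();
        }
        locked.clear();
    }
}
```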
Add a new distributed cost model in Nereids. The new cost model models the cost of the pipeline execution engine by dividing cost into start and run costs:
* START COST: the cost from starting until the first tuple is emitted
* RUN COST: the cost from emitting the first tuple to emitting all tuples
For a parent operator and its child operator, we assume their timeline is:
```
child start ---> child run --------------------> finish
|---> parent start ---> parent run -> finish
```
Therefore, in the parallel model (reading the parent's costs on the right-hand side as the operator's own local costs), we get:
```
start_cost(parent) = start_cost(child) + start_cost(parent)
run_cost(parent) = max(run_cost(child), start_cost(parent) + run_cost(parent))
```
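As a small sketch (illustrative, not the actual Nereids cost classes), the accumulation rule looks like:
```
// Accumulate pipeline costs: the parent starts only after the child has
// started emitting, while their run phases overlap.
record Cost(double startCost, double runCost) {
    static Cost accumulate(Cost childAccumulated, Cost parentLocal) {
        double start = childAccumulated.startCost() + parentLocal.startCost();
        double run = Math.max(childAccumulated.runCost(),
                parentLocal.startCost() + parentLocal.runCost());
        return new Cost(start, run);
    }
}
```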
Make the generated CREATE TABLE statement usable in Hive.
Current issue: the type of the partition column is wrapped in backticks (``), which is not legal in Hive. One problem case:
CREATE TABLE t3p_parquet(
id int,
name string)
PARTITIONED BY (
dt int)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
'hdfs://path/to/t3p_parquet'
TBLPROPERTIES (
'transient_lastDdlTime'='1671700883')
Update the broadcast join cost estimate according to the BE implementation.
There is an enhancement on the BE: in a broadcast join, the BE builds only one hash table, not instanceNum hash tables.
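In the cost model this means the build cost no longer scales with the instance count; a hedged sketch (illustrative name, not the actual cost code):
```
// Previously the broadcast build cost was modeled as
// buildRows * instanceNum (one hash table per instance).
// With a single shared hash table, only buildRows counts.
static double broadcastBuildCost(double buildRows) {
    return buildRows;
}
```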
1. Support more expression types
2. Support derivation with histograms
3. Use StatisticRange to abstract the logic
4. Use Statistics rather than StatsDeriveResult
After adding a unique ID, unRankTest fails because each plan has a different ID in its string representation.
To avoid the effect of the unique ID, compare plans by their output rather than by their string form.
* [Feature](vectorized)(quantile_state): support vectorized quantile state functions
1. currently the quantile column only supports non-nullable values
2. add some regression test cases
3. set enable_quantile_state_type to true by default
---------
Co-authored-by: spaces-x <weixiang06@meituan.com>
1. introduce a new type `VARIANT` to encapsulate dynamically generated columns, hiding the details of the types and names of the newly generated columns
2. introduce a new expression `SchemaChangeExpr` to perform schema changes, for extensibility
WITH t0 AS(
SELECT report.date1 AS date2 FROM(
SELECT DATE_FORMAT(date, '%Y%m%d') AS date1 FROM cir_1756_t1
) report GROUP BY report.date1
),
t3 AS(
SELECT date_format(date, '%Y%m%d') AS date3
FROM cir_1756_t2
)
SELECT row_number() OVER(ORDER BY date2)
FROM(
SELECT t0.date2 FROM t0 LEFT JOIN t3 ON t0.date2 = t3.date3
) tx;
The DATE_FORMAT(date, '%Y%m%d') was calculated in the GROUP BY node, which is wrong; this expression should be calculated inside the subquery.
Some HDP/CDH Hive versions use gzip to compress the message body of the HMS NotificationEvent, so com.qihoo.finance.hms.event.MetastoreEventFactory cannot handle it correctly.
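A hedged sketch of a possible handling (class and method names are assumptions, not the actual patch): detect the gzip magic bytes in the event message body and decompress before parsing:
```
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

// Gzip streams start with magic bytes 0x1f 0x8b; if the NotificationEvent
// body is compressed, decompress it before handing it to the event factory.
final class EventMessageDecoder {
    static String decode(byte[] body) throws IOException {
        if (body.length >= 2 && (body[0] & 0xff) == 0x1f && (body[1] & 0xff) == 0x8b) {
            try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(body))) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        }
        return new String(body, StandardCharsets.UTF_8);
    }
}
```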