Hive stores all data whose partition column values are missing in a default partition named __HIVE_DEFAULT_PARTITION__.
Doris fails to read this partition when the partition column type is INT or any other type that
__HIVE_DEFAULT_PARTITION__ cannot be converted to.
This PR supports the Hive default partition by setting the missing partition column values to NULL.
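A minimal sketch of the effect (catalog, table, and column names are illustrative):
```sql
-- rows that Hive stored under __HIVE_DEFAULT_PARTITION__ are now returned
-- with the partition column set to NULL instead of failing the query
SELECT * FROM hive_catalog.db1.tbl1 WHERE part_col IS NULL;
```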
Loading a big local file causes an `[INTERNAL_ERROR]too many filtered rows` error, because the ByteBuffer from the MySQL client always reuses the same byte array,
so the later bytes overwrite the previous ones and the byte order sent over the network becomes wrong.
The fix is to copy the byte array and then fill it into the network.
* disable setting storage policy on MoW table
* fix error in regression test
* make the name of test table unique
* use Strings.isNullOrEmpty to replace equals
* fix error in if statement
* Support mapping ES date formats: default, yyyy-MM-dd HH:mm:ss, yyyy-MM-dd, epoch_millis
* Replace simple json with Jackson to resolve the random column order problem
* Add ES array doc version
Enhance the aggregate functions `collect_set` and `collect_list` to support an optional `max_size` parameter,
which limits the number of elements in the result array.
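A hypothetical usage example (table and column names are illustrative):
```sql
-- keep at most 3 distinct elements per group
SELECT k1, collect_set(v1, 3) FROM tbl GROUP BY k1;
-- keep at most 5 elements (duplicates allowed) per group
SELECT k1, collect_list(v1, 5) FROM tbl GROUP BY k1;
```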
Demo:
```
# HELP doris_fe_mtmv_job Total job number of mtmv.
# TYPE doris_fe_mtmv_job gauge
doris_fe_mtmv_job{type="TOTAL-JOB"} 1
doris_fe_mtmv_job{type="ACTIVE-JOB"} 1
# HELP doris_fe_mtmv_task Running task number of mtmv.
# TYPE doris_fe_mtmv_task gauge
doris_fe_mtmv_task{type="RUNNING-TASK"} 0
doris_fe_mtmv_task{type="PENDING-TASK"} 0
doris_fe_mtmv_task{type="FAILED-TASK"} 0
doris_fe_mtmv_task{type="TOTAL-TASK"} 1
```
When calling emitCsgCmp, we should check whether there are missed edges that should be used as connection edges. If a missed edge exists but cannot be used as a connection edge, emitCsgCmp should return and seek another plan.
Add a use_fix_replica session variable so that we can better debug replica inconsistency problems.
Its default is -1, which means no replica is fixed;
otherwise we choose the {use_fix_replica} smallest replica.
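A minimal usage sketch (the value 0 is illustrative):
```sql
-- always read a fixed replica instead of a random one, to help reproduce inconsistencies
SET use_fix_replica = 0;
-- restore the default behavior (no fixed replica)
SET use_fix_replica = -1;
```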
function pushdown: #10355
NGram BloomFilter Index apply like pushdown: #11579
This is enabled by default; make sure it stays active.
If the NGram BloomFilter Index is not used, this LIKE pushdown can be replaced by #15917, which can push down all expressions, including LIKE.
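A hypothetical query shape that benefits from this pushdown (table and column names are illustrative, and assume an NGram BloomFilter index exists on `msg`):
```sql
-- with LIKE pushdown enabled, the NGram BloomFilter index on msg can prune
-- data that cannot contain the substring 'timeout'
SELECT count(*) FROM logs WHERE msg LIKE '%timeout%';
```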
[WARNING:gensrc/thrift/parquet.thrift:22] Uncaptured doctext at on line 18.
[WARNING:gensrc/thrift/parquet.thrift:23] Uncaptured doctext at on line 22.
[WARNING:gensrc/thrift/parquet.thrift:436] Uncaptured doctext at on line 428.
WARNING in asset size limit: The following asset(s) exceed the recommended size limit (244 KiB). This can impact web performance
WARNING in entrypoint size limit: The following entrypoint(s) combined asset size exceeds the recommended limit
Warning : Macro "NonTerminator" has been declared but never used.
When doing a stream load with 2PC, if the table is dropped before the commit, the commit or abort fails and the transaction cannot finish.
Committing or aborting returns the error:
```
{
    "status": "ANALYSIS_ERROR",
    "msg": "errCode = 7, detailMessage = unknown table, tableId=52579"
}
```
After this PR, the abort succeeds.
The Oracle data type `NUMBER(p,s)` has some semantic differences from the Doris decimal type.
For the Oracle `NUMBER(p,s)` type (an illustrative mapping sketch follows this list):
1. If s < 0, the column is an integer. This `NUMBER(p,s)` has (p + |s|) significant digits,
and rounding is performed at position s.
e.g.: if we insert 1234567 into a `NUMBER(5,-2)` column, Oracle stores 1234500. In this case,
Doris uses an integer type (`TINYINT/SMALLINT/INT/.../LARGEINT`).
2. If s >= 0 and s < p, it behaves just like the Doris `DECIMAL(p,s)`.
3. If s >= 0 and s > p, the value is a decimal like 0.xxxxx.
p represents how many significant digits may appear after the decimal point,
and digits beyond position s are rounded. e.g.: we cannot insert 0.0123456 into a `NUMBER(5,7)` column,
because there must be two zeros right after the decimal point,
but we can insert 0.0012345 into `NUMBER(5,7)`. In this case, Doris uses `DECIMAL(s,s)`.
4. If we don't specify p and s, as in a plain `NUMBER`,
they are undetermined. In this case, Doris cannot determine p and s,
so Doris cannot determine the data type.
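An illustrative Oracle DDL sketch of the mapping rules above (table and column names are hypothetical; the comments show the Doris type chosen by each rule):
```sql
CREATE TABLE number_demo (
    c1 NUMBER(5, -2),  -- rule 1: s < 0, integer-like  -> Doris TINYINT/SMALLINT/INT/.../LARGEINT
    c2 NUMBER(10, 2),  -- rule 2: 0 <= s < p           -> Doris DECIMAL(10, 2)
    c3 NUMBER(5, 7),   -- rule 3: s > p                -> Doris DECIMAL(7, 7), i.e. DECIMAL(s, s)
    c4 NUMBER          -- rule 4: p and s unspecified  -> type cannot be determined
);
```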
1. Enhancement:
For a single-character column separator, csv_reader uses another method of splitting values.
2. BugFix:
Set `json` file format loading to be sensitive.
Support parsing map & struct types in the parquet & orc readers.
## Remaining Problems
1. Doris uses the array type to build the key and value columns of a `map`, but does not fill the offsets in the value column, so the offsets in the value column are wasted.
2. Parquet supports reading only the key or only the value column of a `map`; this PR does not support that yet.
3. Parquet supports reading partial columns of a `struct`; this PR does not support that yet.
This PR mainly changes:
When upgrading from an old version to master, the ADMIN_PRIV of a normal user may be lost.
This can only happen if:
1. A user is created with the ADMIN_PRIV privilege (a sketch of this step is shown after this entry).
2. Doris is upgraded to v1.2.x or master before the meta image containing the edit log from step 1 is generated.
In this case, the ADMIN_PRIV is lost from Global Privileges.
This PR rectifies this bug and sets ADMIN_PRIV in the right place.
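For reference, step 1 above corresponds to something like the following sketch (user name and the exact ON clause are illustrative and depend on the Doris version):
```sql
CREATE USER 'user1'@'%' IDENTIFIED BY 'password';
-- grant global ADMIN_PRIV to the user
GRANT ADMIN_PRIV ON *.*.* TO 'user1'@'%';
```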
Refactor the user's implicit role name.
In [feature](auth)Implementing privilege management with rbac model #16091, we refactored the Doris auth model by introducing RBAC, and each user now has an implicit role,
named with the prefix default_role_rbac_. But the name had a wrong format, like:
default_role_rbac_'default_cluster:user1'@'%'
This PR changes the role name's format to:
default_role_rbac_user1@%
default_role_rbac_user2@[domain]
NOTICE: this change may cause incompatible metadata, but since [feature](auth)Implementing privilege management with rbac model #16091 has not been released yet, we should fix it soon.
Add a new session variable show_user_default_role.
When set to true, the implicit role of each user is shown in the result of the show roles stmt. Default is false.
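A minimal usage sketch of the new variable:
```sql
SET show_user_default_role = true;
-- the result now includes each user's implicit default_role_rbac_* role
SHOW ROLES;
```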
This PR implements the list default partition referred to in the related #15507.
It is similar to Greenplum's default partition, which stores all data that does not satisfy any prior
partition key's constraints. The optimizer does not prune the default partition, which means the default
partition is scanned every time you select data from a table that has one.
Users can either create a table with a default partition or alter the table to add one.
```sql
PARTITION LIST(key) {
    PARTITION p1 VALUES IN (xx, xx),
    PARTITION DEFAULT
}
ALTER TABLE XXX ADD PARTITION DEFAULT
```
We do not support automatically migrating data in the default partition that satisfies a newly added
partition key's constraints into the newly added partition when altering the table to add a new partition.
Users should select from the default partition using the new constraints as predicates and insert the rows into the new partition:
```sql
insert into tbl select * from tbl partition default where partition_key=xx;
```
Consider the SQL below:
```sql
select sum(cc.qlnm) as qlnm
FROM
    outerjoin_A
    left join (SELECT
            outerjoin_B.b,
            coalesce(outerjoin_C.c, 0) AS qlnm
        FROM
            outerjoin_B
            inner JOIN outerjoin_C ON outerjoin_B.b = outerjoin_C.c
    ) cc on outerjoin_A.a = cc.b
group by outerjoin_A.a;
```
The coalesce(outerjoin_C.c, 0) was calculated in the agg node, which is wrong.
This PR corrects this, and the expression is now calculated in the inner join node.
1. Organize http documents
2. Add http interface authentication for FE
3. Support https interface for FE
4. Provide authentication interface
5. Add http interface authentication for BE
6. Support https interface for BE
For add or drop inverted index, replaying logModifyTableAddOrDropInvertedIndices creates a new schema change job, which gets a new CreateTime; a new schema change job should only be created when the log is not being replayed.
Add the hint NTH_OPTIMIZED_PLAN to let the optimizer select the n-th optimized plan. For example, you could use
```sql
select /*+SET_VAR("nth_optimized_plan"=2) */ * from table;
```
to select the second-best plan in the optimizer.
Support decoding nested array columns in the parquet reader:
1. FE should generate the right nested column type. FE doesn't check the nesting depth and legality, e.g. map\<array\<int\>, int\>.
2. `ParquetColumnReader` has removed the filtering of the page index to support nested array types.
It's too difficult to skip values in nested complex types; maybe we should support page index filtering and lazy reads in a later PR.
3. `ExternalFileScanNode` has a bug in creating the default value expression.
4. It may be slow to read repetition levels in a while loop. I'll optimize this in the next PR.
5. An array column has temporary `SchemaElement`s in its thrift definition;
we removed them and kept their parent in the former implementation.
The remaining parent should inherit the repetition and definition levels of its child.
This PR does two things:
1. Fix:
JdbcExecutor uses `column[0]` to judge the class type, but column[0] may be null!
2. Enhancement:
In the original logic, all fields of a JDBC catalog table are set to Nullable.
However, treating every field as nullable is inefficient. We can actually find out through JDBC
whether a field in the data source table is nullable, so we can set the corresponding fields in the Doris JDBC catalog to nullable or not accordingly.
The LoadScanProvider doesn't get the hidden columns from the stream load parameters,
which may cause the stream load delete operation to fail. This PR passes the hidden columns to the LoadScanProvider.
The body of a create view stmt is parsed twice.
In the second parse, we get the SQL string from the CreateViewStmt.viewDefStmt.toSql() function, which misses the select list.
Consider this SQL:
```sql
select *
from
    (select * from test_1) a
    inner join
    (select * from test_2) b
    on a.id = b.id
    inner join
    (select * from test_3) c
    on a.id = c.id
```
Because a.id comes from a subquery, the function getSrcSlotRef() is needed to find its source table.