```
CREATE ROW POLICY test_row_policy_1 ON test.table1
AS {RESTRICTIVE|PERMISSIVE} [TO user] [TO ROLE role] USING (id in (1, 2)); // add `to role`
DROP [ROW] POLICY [IF EXISTS] test_row_policy; // delete `for user` and `on table`
SHOW ROW POLICY [FOR user] [FOR ROLE role]; // add `for role`
```
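For illustration, a minimal usage sketch of the extended syntax (the role name `analyst` is hypothetical):

```sql
-- grant the policy to a role instead of a single user
CREATE ROW POLICY test_row_policy_1 ON test.table1
AS PERMISSIVE TO ROLE analyst USING (id in (1, 2));

SHOW ROW POLICY FOR ROLE analyst;

-- the user and table clauses are no longer required when dropping
DROP ROW POLICY IF EXISTS test_row_policy_1;
```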
Problem:
A result is returned even when a wrong access key, secret key, or bucket name is used, for example:
```sql
mysql> select * from demo.student
-> into outfile "s3://xxxx/exp_"
-> format as csv
-> properties(
-> "s3.endpoint" = "https://cos.ap-beijing.myqcloud.com",
-> "s3.region" = "ap-beijing",
-> "s3.access_key"= "xxx",
-> "s3.secret_key" = "yyyy"
-> );
+------------+-----------+----------+--------------------------------------------------+
| FileNumber | TotalRows | FileSize | URL                                              |
+------------+-----------+----------+--------------------------------------------------+
|          1 |         3 |       26 | s3://xxxx/exp_2ae166e2981d4c08-b577290f93aa82ba_ |
+------------+-----------+----------+--------------------------------------------------+
1 row in set (0.15 sec)
```
The reason is that we did not catch the error returned in the `close()` phase.
Sometimes the partitions of a Hive table may reside on different storage systems, e.g., some on HDFS and others on object storage (COS, etc.).
This PR mainly changes:
1. Fix the bug of accessing files via `cosn://`.
2. Add a new field `fs_name` to `TFileRangeDesc`.
This is because, when accessing a file, the BE gets an HDFS client from the HDFS client cache, and different files in one query request may have different fs names, e.g., some are `hdfs://` and some are `cosn://`. So we need to specify the fs name for each file; otherwise it may return an error such as:
`reason: IllegalArgumentException: Wrong FS: cosn://doris-build-1308700295/xxxx, expected: hdfs://172.xxxx:4007`
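As a hedged sketch of the user-visible failure (catalog and table names are hypothetical), any query touching partitions on both filesystems could hit the error above before this fix:

```sql
-- some partitions of this table live on hdfs://, others on cosn://;
-- without a per-file fs_name the BE reused an hdfs:// client for cosn:// files
select * from hive_catalog.db1.mixed_fs_table;
```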
Fix incorrect results when partition fields are null in ORC files.
### Root Cause
Theoretically, the underlying files of a Hive partitioned table should not contain the partition fields. But we found that in some user scenarios the partition fields do exist in the underlying ORC/Parquet files, with null values. As a result, the pushed-down predicate on the partition field is evaluated against these null values and filters incorrectly.
### Solution
We handle this case by reading only the non-partition fields. The Parquet reader already works this way; this PR applies the same handling to the ORC reader.
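As an illustration (table and partition column names are hypothetical), a query like the one below pushes the partition predicate down to the scan; before this fix, the ORC reader evaluated it against the all-NULL in-file column and filtered rows incorrectly:

```sql
-- `dt` is a partition column that also appears, all NULL, inside the ORC files;
-- the fix makes the reader take `dt` from the partition value, not the file
select count(*) from hive_catalog.db1.orders where dt = '2023-08-01';
```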
When upgrading from 1.2 to 2.x (any future version higher than 2.0), the default value of the parameter `broadcast_right_table_scale_factor` may not be upgraded from the old default 10.0 to the new default 0.0, which makes broadcast joins behave unexpectedly and may have a big performance impact. This PR forcibly resets the value to the new default 0.0 to make sure the behavior is correct.
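After upgrading, the value can be checked and, if necessary, set manually; a sketch assuming the parameter is exposed as a global session variable:

```sql
SHOW VARIABLES LIKE 'broadcast_right_table_scale_factor';
SET GLOBAL broadcast_right_table_scale_factor = 0.0;
```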
Currently, `hostname_to_ip` can only resolve IPv4 addresses, so a method is provided to resolve either IPv4 or IPv6 based on a parameter.
When `_heartbeat` calls `hostname_to_ip`, whether it resolves to IPv4 or IPv6 is determined by `BackendOptions.is_bind_ipv6`.
Additionally, a method is provided that first attempts to resolve the host to IPv4 and falls back to IPv6 if that fails.
Nereids doesn't support view-based table-valued functions, because a TVF view doesn't contain the proper qualifier (catalog, db and table name). This PR adds support for this.
Also, fix a bug where the explain output exprs of Nereids table-valued functions were incorrect.
A SQL like
`select count(1) from view`
would contain all the columns in the old planner's execution plan, which is slow because the BE needs to read all the columns in the data files. This PR improves the plan to contain only one column.
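A minimal sketch of the scenario, using the built-in `numbers` table-valued function (the view name is hypothetical):

```sql
-- a view defined on a table-valued function
CREATE VIEW tvf_view AS SELECT * FROM numbers("number" = "100");

-- previously unsupported by Nereids; the old planner also read every
-- column of the underlying data instead of just one
SELECT count(1) FROM tvf_view;
```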
* Revert "[fix](testcase) fix test case failure of insert null value into not null column (#20963)"
This reverts commit 55a6649da962fb170ddb40fea8ef26bdc552a51a.
Manually revert "fix in strict mode, return error for insert if datatype convert fails (#20378)"
This manually reverts commit 1b94b6368f5e871c9a0fe53dd7c64409079a4c9d
* fix case failure
We use two facilities to do predicate inference: PredicatePropagation and
PullUpPredicates. In the previous implementation, we used a set to save
the intermediate results of PredicatePropagation, with the goal of inferring
new predicates through equivalence relations. However, this is the wrong
approach, because it can infer wrong predicates through outer joins. For example:
```sql
select a.c1
from a
left join b on a.c2 = b.c2 and a.c1 = '1'
left join c on a.c2 = c.c2 and a.c1 = '2'
inner join d on a.c3=d.c3
```
the predicates `a.c1 = '1'` and `a.c1 = '2'` should not be inferred as
filters on relation `a`.
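By contrast, inference through an inner-join equivalence remains valid; a sketch:

```sql
-- from d.c3 = 1 and the inner-join condition a.c3 = d.c3,
-- the filter a.c3 = 1 may safely be inferred on relation a
select a.c1 from a inner join d on a.c3 = d.c3 where d.c3 = 1;
```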
This PR:
1. reverts the change from PR #22145, commit 3c58e9ba
2. removes the unreasonable restriction in PullUpPredicates
3. uses a new Filter node rather than a new otherCondition on the join node to
save inferred predicates
* support int128 in jsonb
* fix jsonb int128 write
* fix jsonb to json int128
* fix json functions for int128
* add nereids function jsonb_extract_largeint
* add testcase for json int128
* change docs for json int128
* add nereids function jsonb_extract_largeint
* clang format
* fix check style
* using int128_t = __int128_t for all int128
* use fmt::format_to instead of snprintf digit by digit for int128
* clang format
* delete useless check
* add warn log
* clang format
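For illustration, a hedged sketch of the new `jsonb_extract_largeint` function (the signature is assumed to match the existing `jsonb_extract_*` family; the literal is the int128 maximum):

```sql
select jsonb_extract_largeint('{"v": 170141183460469231731687303715884105727}', '$.v');
```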
This problem is caused by #21197.
Fixed an issue where the `csv_with_names` and `csv_with_names_and_types` file formats could not be exported on the Nereids optimizer when using `select ... into outfile`.
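A hedged sketch of the now-working export, reusing the placeholders from the earlier example:

```sql
select * from demo.student
into outfile "s3://xxxx/exp_"
format as csv_with_names
properties(
    "s3.endpoint" = "https://cos.ap-beijing.myqcloud.com",
    "s3.region" = "ap-beijing",
    "s3.access_key" = "xxx",
    "s3.secret_key" = "yyyy"
);
```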
- allow putting custom jars, such as `paimon-s3-0.4.0-incubating.jar`, in `${DORIS_HOME}/lib/java_extensions/custom_extension`
- add some notes for Paimon and FQDN
1. rename TVFProperties to Properties
2. add generating functions `explode` and `explode_outer` (see the sketch after this list)
3. fix `concat_ws` not being applicable to arrays
4. check the format of the second argument of `tokenize` on the FE
5. add test cases for `concat_ws`, `tokenize`, `explode`, `explode_outer` and `split_by_string`
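A short sketch of the new generating functions and the `concat_ws` fix (table and column names are hypothetical):

```sql
-- explode expands each array element into its own row;
-- explode_outer additionally keeps rows whose array is NULL or empty
select id, e
from tbl
lateral view explode(scores) t1 as e;

-- concat_ws applied to an array argument
select concat_ws(",", ["a", "b", "c"]);
```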