Commit Graph

7 Commits

Author SHA1 Message Date
0c98355fff [fix](catalog) fix create catalog with resource replay issue and kerberos auth issue (#20137)
1. Fix the create-catalog-with-resource replay bug.
	If a user creates a catalog using `create catalog hive with resource xxx`, the resource may already have been dropped when the edit log is replayed, causing an NPE and preventing FE from starting.

	This PR adds a new FE config, `disallow_create_catalog_with_resource`, which defaults to true, so that `with resource` is no longer allowed; the syntax will be deprecated later.

	It also fixes the replay logic to avoid the NPE.

2. Fix an issue when creating two hive catalogs, one with and one without kerberos authentication.

	When a user creates two hive catalogs, one using simple auth and the other using kerberos auth, queries may fail with an error like: `Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.`

	This PR adds a default property for hive catalogs: `"ipc.client.fallback-to-simple-auth-allowed" = "true"`. This property is added automatically when a user creates a hive catalog, which avoids the problem (see the sketch after this list).

3. Fix the handling of `hdfsExists()` return codes.

	When `hdfsExists()` returns a non-zero code, the caller should check whether it hit a real error or the file simply does not exist.

4. Some code refactoring

	Avoid importing `org.apache.parquet.Strings`.
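
	For illustration, the two catalogs from item 2 could be created with `PROPERTIES` instead of `with resource`. This is only a minimal sketch: the catalog names, metastore URIs, principal, and keytab path are placeholders, the kerberos-related property names follow common Doris hive catalog configuration, and only `ipc.client.fallback-to-simple-auth-allowed` comes from this commit (it is now added automatically, so listing it explicitly is optional).

	-- simple-auth hive catalog (all values are placeholders)
	CREATE CATALOG hive_simple PROPERTIES (
	    'type' = 'hms',
	    'hive.metastore.uris' = 'thrift://simple-metastore:9083'
	);

	-- kerberos hive catalog; the last property is the one this commit adds by default
	CREATE CATALOG hive_kerberos PROPERTIES (
	    'type' = 'hms',
	    'hive.metastore.uris' = 'thrift://secure-metastore:9083',
	    'hadoop.security.authentication' = 'kerberos',
	    'hadoop.kerberos.principal' = 'doris/_HOST@EXAMPLE.COM',
	    'hadoop.kerberos.keytab' = '/etc/security/doris.keytab',
	    'ipc.client.fallback-to-simple-auth-allowed' = 'true'
	);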
2023-05-30 16:57:39 +08:00
a041f8eabe [fix](fe) Fix SimpleDateFormat thread-safety issue by replacing it with DateTimeFormatter (#19265)
Replace SimpleDateFormat with DateTimeFormatter in the fe module because SimpleDateFormat is not thread-safe.
2023-05-11 22:50:24 +08:00
39f59f554a [improvement](dry-run)(tvf) support csv schema in tvf and add "dry_run_query" variable (#16983)
This CL mainly contains two changes:

Support specifying a csv schema manually in the s3/hdfs table-valued functions, for example:

s3 (
'URI' = 'https://bucket1/inventory.dat',
'ACCESS_KEY'= 'ak',
'SECRET_KEY' = 'sk',
'FORMAT' = 'csv',
'column_separator' = '|',
'csv_schema' = 'k1:int;k2:int;k3:int;k4:decimal(38,10)',
'use_path_style'='true'
)
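
A complete invocation might look like this (a sketch reusing the placeholder URI, keys, and schema from the snippet above; the selected columns and filter are arbitrary):

-- columns declared in csv_schema can be referenced directly in the query
select k1, k4
from s3(
    'URI' = 'https://bucket1/inventory.dat',
    'ACCESS_KEY' = 'ak',
    'SECRET_KEY' = 'sk',
    'FORMAT' = 'csv',
    'column_separator' = '|',
    'csv_schema' = 'k1:int;k2:int;k3:int;k4:decimal(38,10)',
    'use_path_style' = 'true'
)
where k2 > 0;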
Add a new session variable, dry_run_query

If set to true, the real query result is not returned; instead, only the number of rows the query would have returned is reported.
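
For example, the variable can be enabled for the current session before running the query below (a minimal sketch; `bigtable` is just the table used in this commit's example):

mysql> set dry_run_query = true;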

mysql> select * from bigtable;
+--------------+
| ReturnedRows |
+--------------+
| 10000000     |
+--------------+
This avoids the transmission time of a large result set and focuses on the real execution time of the query engine, which is useful for debugging and analysis.
2023-03-02 16:51:27 +08:00
4e92f63d7b [Fix](Load) Forbid developers from importing fastjson in fe (#16235) 2023-02-01 16:32:11 +08:00
dac0883635 [chore](checkstyle) Forbid importing all kinds of relocated guava (#12018) 2022-08-24 08:47:13 +08:00
7c950c7cd5 [feature](Nereids) support cross join in Nereids (#11502)
Support cross join in Nereids (see the example query after this list):

1. Add PhysicalNestedLoopJoin.
2. Translate PhysicalNestedLoopJoin to CrossJoinNode in PhysicalPlanTranslator.
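
As an illustration (table names t1 and t2 are hypothetical), a plain cross join like the one below is now planned as a PhysicalNestedLoopJoin and translated to a CrossJoinNode:

select * from t1 cross join t2;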
2022-08-08 22:14:27 +08:00
642499265c [fe-package] Reject illegal imports (#11311) 2022-07-29 14:22:23 +08:00