When forwarding a SHOW TABLET statement to the Master FE, if the result contains null, the FE will throw an exception.
Fix it by using `\N` instead of "null" in the result, to make the Thrift RPC happy.
* [SQL] Support WHERE, LIMIT, and ORDER BY clauses in the SHOW RESOURCES stmt.
Grammar:
```
SHOW RESOURCES
[WHERE
    [NAME [ = "your_resource_name" | LIKE "name_matcher"]]
    [RESOURCETYPE = ["SPARK"]]
]
[ORDER BY ...]
[LIMIT limit] [OFFSET offset];
```
issue #4501
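A hypothetical invocation based on the grammar above (the `Name` sort column is assumed from the SHOW RESOURCES output):
```
SHOW RESOURCES
WHERE RESOURCETYPE = "SPARK"
ORDER BY Name
LIMIT 10 OFFSET 0;
```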
Use Spring MVC REST to replace the original Netty HTTP REST.
A new package `org/apache/doris/httpv2` has been created,
and the original implementations under `org/apache/doris/http` remain unchanged.
This part of the code is not used at present, so it does not affect existing functions.
API document can be found in #4584
Proposal #4308
* [Bug] Fix a bug where tablet info with wrong capacity may cause FE OOM
* remove unused code
* fix
* remove empty line
Co-authored-by: caiconghui [蔡聪辉] <caiconghui@xiaomi.com>
For [Spark Load]:
1. Support decimal and largeint.
2. Add validation logic for char/varchar/decimal.
3. Check data loaded from Hive with strict mode (see the sketch after this list).
4. Support decimal/date/datetime aggregators.
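A minimal sketch of a Spark Load with strict mode enabled; the label, resource name, HDFS path, and table are hypothetical:
```
LOAD LABEL example_db.spark_load_label
(
    DATA INFILE("hdfs://hdfs_host:8020/user/doris/data/file.csv")
    INTO TABLE tbl1
)
WITH RESOURCE 'spark0'
PROPERTIES
(
    -- strict mode rejects rows whose values fail type validation
    "strict_mode" = "true"
);
```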
When Doris creates a database, the database's data quota and replica quota are initialized from configuration (Config.default_db_data_quota_bytes and Config.default_db_replica_quota_bytes).
In other words, FeConstants.default_db_data_quota_bytes is converted to Config.default_db_data_quota_bytes,
and Config.default_db_data_quota_bytes can be changed in the configuration file.
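For example, assuming both keys are runtime-mutable (otherwise set them in fe.conf before starting FE); the values below are illustrative:
```
ADMIN SET FRONTEND CONFIG ("default_db_data_quota_bytes" = "1099511627776");  -- 1 TiB
ADMIN SET FRONTEND CONFIG ("default_db_replica_quota_bytes" = "1073741824");
```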
When FE replays a rollup job (v1) whose table has been deleted, it throws a null pointer exception and exits.
This commit ignores the error and prints a warning log to avoid the FE exiting.
Fixed #4571
Change-Id: I302b554a94d42aee645db6b224cd989e00cd3ca6
Many tables are so large that they need separate partitions with an "HOUR" time unit.
But dynamic partitioning did not support the "HOUR" time unit; it was marked as "TODO".
This change implements the feature.
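A minimal sketch of a table using the new unit; the table, columns, and property values are hypothetical:
```
CREATE TABLE hourly_events (
    event_time DATETIME,
    k1 INT
)
DUPLICATE KEY(event_time, k1)
PARTITION BY RANGE(event_time) ()
DISTRIBUTED BY HASH(k1) BUCKETS 8
PROPERTIES (
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "HOUR",
    "dynamic_partition.start" = "-24",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p",
    "dynamic_partition.buckets" = "8"
);
```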
Add broker tracing log. When FE gets file status for distributing a load task to a broker,
the broker may return empty files without a correct error code.
Add this log to make it easy to track which broker processed the file status operation, so we can find the error log.
We store all table ids involved in a Load Job in TransactionState.
However, for Hadoop Load jobs, the table ids were set incorrectly.
This caused the WAITING_TXN phase to not correctly wait for the completion
of the previous load transaction when doing an alter table,
which caused some data version loss problems.
We can create an ODBC table using SQL like:
```
CREATE EXTERNAL TABLE `baseall_oracle` (
`k1` decimal(9, 3) NOT NULL COMMENT "",
`k2` char(10) NOT NULL COMMENT "",
`k3` datetime NOT NULL COMMENT "",
`k5` varchar(20) NOT NULL COMMENT "",
`k6` double NOT NULL COMMENT ""
) ENGINE=ODBC
PROPERTIES (
"host" = "192.168.0.1",
"port" = "8086",
"user" = "happenlee",
"password" = "doris",
"database" = "doris",
"table" = "baseall",
"driver" = "Oracle 19 ODBC driver",
"type" = "oracle"
);
```
Now only Oracle and MySQL databases are supported, and this feature is turned off by default by the conf `enable_odbc_table`.
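With `enable_odbc_table` turned on, the external table can be queried like a normal one:
```
SELECT k1, k3 FROM baseall_oracle WHERE k6 > 0;
```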
Main CL:
1. Copy the code from BE to implement the `str_to_date()` function in FE.
2. `str_to_date("2020-08-08", "%Y-%m-%d %H:%i:%s")` will return `2020-08-08 00:00:00` instead of `2020-08-08`.
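For example:
```
SELECT str_to_date("2020-08-08", "%Y-%m-%d %H:%i:%s");  -- now returns 2020-08-08 00:00:00
```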
"Illegal column/field reference 'table2.DORIS_DELETE_SIGN' of semi-/anti-join" may be reported
when executing a semi/anti join statement on a table with hidden columns.
This is because the filter conditions of a semi/anti join cannot be added in the WHERE clause.
Now we add the delete-flag-related WHERE predicate at the OlapScanNode level.
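A sketch of a statement that used to trigger the error; the table names are hypothetical, and `table2` is assumed to have batch delete enabled (i.e. a hidden DORIS_DELETE_SIGN column):
```
-- The delete-flag predicate is now attached to table2's OlapScanNode
-- instead of being rewritten into the WHERE clause.
SELECT t1.k1 FROM t1 LEFT SEMI JOIN table2 ON t1.k1 = table2.k1;
```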
1. Analyze which cache mode can be used by a query.
2. Query the cache before executing the query in StmtExecutor.
3. Two cache modes, SQL cache and partition cache, are implemented (see the sketch below).
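A sketch of toggling the two modes, assuming the session variables are named `enable_sql_cache` and `enable_partition_cache`:
```
SET GLOBAL enable_sql_cache = true;        -- reuse the full result of an identical SQL statement
SET GLOBAL enable_partition_cache = true;  -- cache per-partition results and merge them
```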
The param of the rand() function should be a literal, but the compiler previously did not
validate it; the check only happened at execution time.
This PR makes it validated at compile time, so misuse of rand() is found earlier.
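For example (table `t` and column `k1` are hypothetical):
```
SELECT rand(123) FROM t;  -- OK: the seed is a literal
SELECT rand(k1) FROM t;   -- now rejected at analysis time instead of failing at execution
```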
In DistributedPlanner, do not add unnecessary exchange nodes.
For case 1, we only need to check that the table's distribution hash keys are a subset of the aggregation keys (sketched below).
For case 2, we should check two conditions:
- the partition keys are also hash keys;
- the table's distribution hash keys are a subset of the aggregation keys.
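A sketch of case 1, assuming a hypothetical table `t` distributed by HASH(k1):
```
-- The grouping keys contain all of t's distribution hash keys, so the
-- aggregation can be computed locally without an extra exchange node.
SELECT k1, SUM(v1) FROM t GROUP BY k1;
```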
1. Fix writing the DPP result when DPP throws an exception.
2. Boolean values: true, false (case-insensitive), 0, 1.
3. Fix the wrong dest column used in the source data check.
4. Support * in the source file path.
5. If the job state is cancelled or finished, submitPushTasks would throw an "all partitions have no load data" exception,
because tableToLoadPartitions was already cleaned up.
#3433
During the historical data transformation of materialized views, the transformation may fail due to data quality.
Add an error status code, OLAP_ERR_DATE_QUALITY_ERR, to determine whether a data problem caused the failure.
#3344