* Support TRUNCATE TABLE stmt
Users can use the TRUNCATE TABLE stmt to empty a table or partitions completely.
Unlike DELETE, it drops the tablets directly, without any performance impact.
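A minimal JDBC sketch of how the statement can be issued. The connection URL, credentials, database, table and partition names are placeholders, and the PARTITION clause form is taken from the description above, so the exact syntax may differ in this version:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TruncateExample {
    public static void main(String[] args) throws Exception {
        // Doris FE speaks the MySQL protocol; host, port, user and db are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://fe_host:9030/example_db", "user", "password");
             Statement stmt = conn.createStatement()) {
            // Empty the whole table; tablets are dropped directly, no per-row deletion.
            stmt.execute("TRUNCATE TABLE sales");
            // Or empty only the listed partitions (partition name is made up).
            stmt.execute("TRUNCATE TABLE sales PARTITION (p2024)");
        }
    }
}
```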
* Fix a bug that a new partition should use a new ID
* Use equals() to compare Integer
* Fix a compile bug
* Fix a bug in single range partition
* Check table's state again after creating partition
* Avoid 'No more data to read' error when handling stream load rpc
1. Catch throwables of all stream load RPCs.
2. Avoid setting a null string as the error msg of the RPC result status.
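A hedged sketch of this defensive pattern; the Status type and addToErrorMsgs name below are simplified stand-ins, not the real RPC result classes:

```java
import java.util.ArrayList;
import java.util.List;

public class RpcErrorHandling {
    // Minimal stand-in for the real RPC result status type.
    static class Status {
        final List<String> errorMsgs = new ArrayList<>();
        void addToErrorMsgs(String msg) { errorMsgs.add(msg); }
    }

    static Status callStreamLoadRpc(Runnable rpcCall) {
        Status status = new Status();
        try {
            rpcCall.run();
        } catch (Throwable t) {           // catch Throwable, not just Exception
            String msg = t.getMessage();  // may legitimately be null
            // Never put a null string into the result status.
            status.addToErrorMsgs(msg == null ? t.getClass().getSimpleName() : msg);
        }
        return status;
    }

    public static void main(String[] args) {
        Status s = callStreamLoadRpc(() -> { throw new RuntimeException(); });
        System.out.println(s.errorMsgs); // prints [RuntimeException]
    }
}
```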
* Change setError_msgs to addToError_msgs
1. Only collect all error replicas if the publish task times out.
2. Add 2 metrics to monitor the success or failure of txns.
3. Change the publish timeout to Config.load_straggler_wait_second
Step 1: updateBeIdTaskMaps, remove unavailable BEs and add newly alive BEs.
Step 2: process timeout tasks. If a task has already been allocated to a BE but is not finished before DEFAULT_TASK_TIMEOUT, it will be discarded.
At the same time, the partitions belonging to the old task will be allocated to a new task. The new task, with a new signature, will be added to the needSchedulerRoutineLoadTask queue.
Step 3: process all needSchedulerRoutineLoadTasks and allocate each task to a BE. The task will then be executed by the BE.
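A simplified sketch of this scheduling loop. Only updateBeIdTaskMaps (here as a method body), DEFAULT_TASK_TIMEOUT and needSchedulerRoutineLoadTask come from the description above; all other names, types and the timeout value are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class RoutineLoadTaskScheduler {
    static final long DEFAULT_TASK_TIMEOUT_MS = 5 * 60 * 1000L; // assumed value

    static class Task {
        long signature;
        long beId = -1;
        long allocateTimeMs;
        final List<String> partitions = new ArrayList<>();
    }

    private final Map<Long, List<Task>> beIdToTasks = new HashMap<>();
    private final Queue<Task> needSchedulerRoutineLoadTasks = new LinkedList<>();
    private long nextSignature = 0;

    void scheduleOneRound(Set<Long> aliveBeIds) {
        // Step 1: updateBeIdTaskMaps -- drop unavailable BEs, register newly alive ones.
        beIdToTasks.keySet().removeIf(beId -> !aliveBeIds.contains(beId));
        for (Long beId : aliveBeIds) {
            beIdToTasks.putIfAbsent(beId, new ArrayList<>());
        }

        // Step 2: discard timed-out tasks; their partitions go to a new task
        // (with a new signature) that is pushed back into the scheduling queue.
        long now = System.currentTimeMillis();
        for (List<Task> tasks : beIdToTasks.values()) {
            Iterator<Task> it = tasks.iterator();
            while (it.hasNext()) {
                Task old = it.next();
                if (now - old.allocateTimeMs > DEFAULT_TASK_TIMEOUT_MS) {
                    it.remove();
                    Task renewed = new Task();
                    renewed.signature = nextSignature++;
                    renewed.partitions.addAll(old.partitions);
                    needSchedulerRoutineLoadTasks.add(renewed);
                }
            }
        }

        // Step 3: allocate every pending task to an alive BE; the BE then executes it.
        Task pending;
        while (!beIdToTasks.isEmpty()
                && (pending = needSchedulerRoutineLoadTasks.poll()) != null) {
            long beId = pickLeastLoadedBe();
            pending.beId = beId;
            pending.allocateTimeMs = now;
            beIdToTasks.get(beId).add(pending);
        }
    }

    // Pick the BE currently carrying the fewest tasks (a simple load heuristic).
    private long pickLeastLoadedBe() {
        long bestBe = -1;
        int fewest = Integer.MAX_VALUE;
        for (Map.Entry<Long, List<Task>> entry : beIdToTasks.entrySet()) {
            if (entry.getValue().size() < fewest) {
                fewest = entry.getValue().size();
                bestBe = entry.getKey();
            }
        }
        return bestBe;
    }
}
```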
* Rename partition version fields to reflect what they mean.
1. committedVersion(Hash) -> visibleVersion(Hash)
2. currentVersion(Hash) -> committedVersion(Hash)
3. Add some comments to make the code more readable.
* Check if editlog is null in CatalogIdGenerator
To avoid unit test failure
* Change PaloMetrics' name and Catalog's Id generator
1. Remove the 'Palo' prefix of the Metric classes.
2. Add a new CatalogIdGenerator to replace the old AtomicLong, to avoid too many edit logs (see the sketch after this list).
3. Add a new histogram to monitor the write latency of edit log writes.
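A minimal sketch of the batch-allocation idea behind the new id generator, not the actual implementation; the batch size and method names are assumptions. The point is that an edit log entry is written once per batch of ids instead of once per id:

```java
public class CatalogIdGenerator {
    private static final int BATCH_ID_INTERVAL = 1000; // assumed batch size

    private long nextId;
    private long batchEndId;

    public CatalogIdGenerator(long initValue) {
        nextId = initValue + 1;
        batchEndId = initValue;
    }

    // Caller is expected to hold the catalog lock.
    public synchronized long getNextId() {
        if (nextId > batchEndId) {
            // Reserve a whole batch and persist only the batch end, so a single
            // edit log write covers the next BATCH_ID_INTERVAL ids.
            batchEndId += BATCH_ID_INTERVAL;
            writeEditLog(batchEndId);
        }
        return nextId++;
    }

    private void writeEditLog(long idToPersist) {
        // Placeholder for the real edit-log write in the FE code.
        System.out.println("persist next id: " + idToPersist);
    }
}
```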
* Modify the next-id logic
* Fix a bug that Metric is not initialized before HISTO_EDIT_LOG_WRITE_LATENCY is used
* Fix a problem
Add path info of replicas in the catalog.
Also fix a bug that when check_none_row_oriented_table is called, the store is null and cannot be used to create the table.
Instead, OLAPHeader can be used to get the storage type information.
Currently, the cardinality, avgRowSize and numNodes stats in OlapScanNode are not set, so broadcastCost and partitionCost are both wrong and Doris cannot automatically choose the best join strategy.
We should therefore make the statistical information in OlapScanNode more precise.
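To show why these stats matter, here is a simplified version of the kind of broadcast-vs-partitioned cost comparison a distributed planner makes. The formulas are a common approximation, not necessarily the exact ones used in Doris:

```java
public class JoinStrategyChooser {
    enum JoinStrategy { BROADCAST, PARTITIONED }

    // Simplified textbook-style cost model, not the exact Doris formulas.
    // leftCard/leftRowSize: probe side; rightCard/rightRowSize: build side; numNodes: cluster size.
    static JoinStrategy choose(long leftCard, float leftRowSize,
                               long rightCard, float rightRowSize, int numNodes) {
        if (leftCard < 0 || rightCard < 0 || numNodes <= 0) {
            // Missing stats (the situation described above): no sound decision is
            // possible, so a default strategy has to be picked blindly.
            return JoinStrategy.BROADCAST;
        }
        // Broadcast: the right side is sent to every node.
        double broadcastCost = (double) rightCard * rightRowSize * numNodes;
        // Partitioned (shuffle): both sides are sent across the network once.
        double partitionCost = (double) leftCard * leftRowSize
                + (double) rightCard * rightRowSize;
        return broadcastCost <= partitionCost ? JoinStrategy.BROADCAST : JoinStrategy.PARTITIONED;
    }

    public static void main(String[] args) {
        // With real stats, a large build side on a 10-node cluster picks PARTITIONED.
        System.out.println(choose(1_000_000L, 100f, 500_000L, 100f, 10));
    }
}
```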
Allow users to specify the null ordering.
NULLS FIRST: specifies that NULL values should be returned before non-NULL values.
NULLS LAST: specifies that NULL values should be returned after non-NULL values.
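A small JDBC sketch using the new clause (host, credentials, table and column names are made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class NullOrderingExample {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://fe_host:9030/example_db", "user", "password");
             Statement stmt = conn.createStatement();
             // Rows with a NULL score come back before all non-NULL scores.
             ResultSet rs = stmt.executeQuery(
                     "SELECT name, score FROM students ORDER BY score ASC NULLS FIRST")) {
            while (rs.next()) {
                System.out.println(rs.getString("name") + " " + rs.getObject("score"));
            }
        }
    }
}
```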
* Change the lock type of the Catalog lock
Implement a QueryableReentrantLock to see which thread holds the lock.
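A sketch of the idea: java.util.concurrent.locks.ReentrantLock already tracks its owner but keeps getOwner() protected, so a thin subclass can expose it for diagnostics. This is an illustration, not necessarily the exact class used in the code:

```java
import java.util.concurrent.locks.ReentrantLock;

public class QueryableReentrantLock extends ReentrantLock {
    public QueryableReentrantLock(boolean fair) {
        super(fair);
    }

    // ReentrantLock.getOwner() is protected; expose it so a monitor thread
    // can log which thread is currently holding the catalog lock.
    @Override
    public Thread getOwner() {
        return super.getOwner();
    }

    public static void main(String[] args) throws InterruptedException {
        QueryableReentrantLock lock = new QueryableReentrantLock(true);
        Thread holder = new Thread(() -> {
            lock.lock();
            try { Thread.sleep(1000); } catch (InterruptedException ignored) {
            } finally { lock.unlock(); }
        }, "catalog-writer");
        holder.start();
        Thread.sleep(100);
        Thread owner = lock.getOwner();
        System.out.println("lock held by: " + (owner == null ? "nobody" : owner.getName()));
        holder.join();
    }
}
```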
* A key in an empty version has no min/max value
It should be ignored when reconstructing min/max statistics upon restart.
1. Fix error class in start_fe.sh and start_broker.sh.
2. Add log4j2.xml in fe/src/test/resources/ to run fe ut without log4j warnings.
3. Reduce the test file size in be ut.
* Add streaming load feature. You can execute 'help stream load;' to see more information.
Changed:
* The loading phase of a table can be parallelized, to reduce the load job execution time when multiple load jobs target a single table.
* Use RocksDB to save the header info of tablets in Backends, to reduce IO operations and increase restart speed.
Fixed:
* A lot of bugs fixed.
Added: Support getting column size and precision info of table or view using JDBC.
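For example, the standard JDBC DatabaseMetaData.getColumns() call can now return the size and precision columns (connection details, database and table names below are placeholders):

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ColumnMetaExample {
    public static void main(String[] args) throws Exception {
        // Host, port, credentials and names are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://fe_host:9030/example_db", "user", "password")) {
            DatabaseMetaData meta = conn.getMetaData();
            try (ResultSet rs = meta.getColumns("example_db", null, "sales", "%")) {
                while (rs.next()) {
                    System.out.printf("%s size=%d precision(digits)=%d%n",
                            rs.getString("COLUMN_NAME"),
                            rs.getInt("COLUMN_SIZE"),
                            rs.getInt("DECIMAL_DIGITS"));
                }
            }
        }
    }
}
```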
Updated: Change the Prometheus type name GAUGE to lowercase, to fit the latest Prometheus version.
Updated: The Backend IP saved in FE will be compared with the BE's local IP when doing heartbeat, to avoid false positive heartbeat responses.
Updated: Use the version_num of a tablet instead of calculating a nice value to select cumulative compaction candidates.
Fixed: Predicates should not be pushed down to a subquery which contains a limit clause.
Fixed: Fix the formula for calculating the BE load score.
Fixed: Fix a bug that in some edge cases, a non-Master Frontend may wait for an unnecessarily long timeout after forwarding a cmd to the Master FE.
Fixed: A bug that granting privs on more than one table does not work.
Fixed: Support 'Insert into' table which contains HLL columns.
Fixed: ExportStmt's toSql() method may throw a NullPointerException if the table does not exist.
Fixed: Remove unnecessary 'get capacity' operation to avoid IO impact.
Internal commit id: merge to c16bd603a53dfe2089ff95704c698a738c317792
1. No one can set the root password except for the root user itself.
2. NODE_PRIV cannot be granted.
3. ADMIN_PRIV and GRANT_PRIV can only be granted or revoked on *.* (see the example after this list).
4. No one can modify the privs of the default roles 'operator' and 'admin'.
5. No user can be granted to the role 'operator'.
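A small JDBC illustration of rule 3; the host, credentials, user names and the exact error behavior are assumptions, and the GRANT syntax follows the table-level privilege feature of this version:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class GrantRulesExample {
    public static void main(String[] args) throws Exception {
        // Connection details and user names are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://fe_host:9030/", "root", "root_password");
             Statement stmt = conn.createStatement()) {
            // Rule 3: ADMIN_PRIV and GRANT_PRIV are only accepted on *.*
            stmt.execute("GRANT ADMIN_PRIV ON *.* TO 'jack'@'%'");
            try {
                stmt.execute("GRANT GRANT_PRIV ON example_db.* TO 'jack'@'%'");
            } catch (SQLException expected) {
                // Expected to be rejected: GRANT_PRIV cannot be granted on a single db or table.
                System.out.println("rejected as expected: " + expected.getMessage());
            }
        }
    }
}
```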
Fixed: the running load limit should not be applied to the replay logic; otherwise replay or image loading will fail.
Changed: mitigate the problem of too many directories under the mini load directory.
Fixed: missing password and auth check when handling mini load requests in Frontend.
Fixed: DomainResolver should start after the Frontend transfers to a certain ROLE, not in the Catalog constructor.
Fixed: a bug that no one could set the password for the root user; now only the root user can set root's password.
Fixed: read null data twice
When reading data with a null value, in some cases the same data will be read twice by the storage engine, resulting in a wrong result. The reason for this problem is that when splitting, if the start key is the minimum value, the data with null is read again.
Fixed: add a flag to prevent the DomainResolver thread from starting twice.
Fixed: a mem leak when using ByteBuf to parse the auth info of http requests.
Fixed: add a new config 'disable_hadoop_load' (default false); set it to true to disable hadoop load.
Changed: add a detailed error msg to the show load result when submitting a hadoop load job.
Fixed: the Backend process should crash if it fails to save the header.
Added: expose Backend info to the user when an error is encountered on a Backend, to make debugging more convenient.
Fixed: Should remove fd from map when inputstream or outputstream is closed in Broker process.
Fixed: Change all files' line endings to Unix (LF) format.
Internal commit id: merge from dfcd0aca18eed9ff99d188eb3d01c60d419be1b8
2. Add 2 new procs, '/current_queries' and '/current_backend_instances', to monitor currently running queries.
3. Add a manual compaction API on Backend to trigger cumulative or base compaction manually.
4. Add a Frontend config 'max_bytes_per_broker_scanner' to limit the bytes per broker scanner. This is to limit the memory cost of a single broker load job.
5. Add a Frontend config 'max_unfinished_load_job' to limit the number of load jobs: if the number of running load jobs exceeds the limit, no more load jobs are allowed to be submitted.
6. A lot of bugs fixed.
1. The Apache HDFS broker supports HDFS HA and Hadoop Kerberos authentication.
2. New Backup and Restore functions. Use the FS Broker to back up your data to HDFS or restore it from HDFS.
3. Table-level privileges. Grant fine-grained privileges at the table level to specified users.
4. A lot of bugs fixed.
5. Performance improvements.