Modify the implementation of MemTracker:
1. Simplify and remove a lot of useless logic;
2. Add MemTrackerTaskPool as the ancestor of all query and load trackers, used to track the local memory usage of all executing tasks;
3. Add a consume/release cache: a consume/release is only propagated once the accumulated memory exceeds the parameter mem_tracker_consume_min_size_bytes;
4. Add a new memory leak detection mode (experimental, controlled by the parameter memory_leak_detection): throw an exception when the remaining tracked value exceeds the specified range at MemTracker destruction, and print the exact tracked values via HTTP;
5. Add a virtual MemTracker whose consume/release does not sync to its parent; it will be used later, when the TCMalloc hook is introduced, to track specified memory independently;
6. Modify the GC logic: register the buffers cached in DiskIoMgr as a GC function; more GC functions will be added later;
7. Change the global root node from Root MemTracker to Process MemTracker, and remove the Process MemTracker in exec_env;
8. Other adjustments: modify the macro that detects whether memory has reached the upper limit, modify the parameters and default behavior of MemTracker creation, modify the error message format in mem_limit_exceeded, extend and apply transfer_to, remove Metric from MemTracker, etc.;
Modify where MemTracker is used:
1. Add a MemPool constructor that creates a temporary tracker, avoiding a lot of redundant code;
2. Add trackers for global objects such as ChunkAllocator and StorageEngine;
3. Add more fine-grained trackers, such as one for ExprContext;
4. Remove FragmentMemTracker (the PlanFragmentExecutor mem_tracker, previously used to track scan memory independently) from RuntimeState, replacing it with _scanner_mem_tracker in OlapScanNode;
5. MemTracker is no longer recorded in ReservationTracker, and ReservationTracker will be removed later;
1. Avoid printing large strings in the error log
If a user loads an unqualified large string, the entire string is saved in the error log, making the log so large that it cannot be shown via `show load warnings on "url"`:
Err: `Got packet bigger than 'max_allowed_packet' bytes`
2. Remove duplicate help docs
Docs with the same title are not allowed; otherwise an error is thrown when the FE starts:
`java.lang.IllegalArgumentException: Multiple entries with same key:`
```
SET PROPERTY FOR 'jack' 'exec_mem_limit' = '2147483648';
SET PROPERTY FOR 'jack' 'load_mem_limit' = '2147483648';
```
The user-level property overrides the value of the corresponding session variable.
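A hypothetical illustration of this precedence (the values are arbitrary):
```
-- Session-level setting made by user 'jack':
SET exec_mem_limit = 1073741824;
-- User-level property set for 'jack':
SET PROPERTY FOR 'jack' 'exec_mem_limit' = '2147483648';
-- Subsequent queries from 'jack' run with exec_mem_limit = 2147483648,
-- because the user-level property overrides the session variable.
```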
1. Support SHOW LAST INSERT
In the current implementation, an insert operation returns a JSON string describing the result of the insert. However, this information is carried in the session-state tracking field of the MySQL protocol and is difficult to obtain programmatically.
Therefore, this change adds a new syntax, `SHOW LAST INSERT`, to explicitly obtain the result of the latest insert operation as a normal query result set, making it easy for the user to consume the insert result, for example:
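A sketch of the expected interaction (the transaction id, label, and row counts are illustrative):
```
mysql> INSERT INTO db1.t1 VALUES (1, "a"), (2, "b");
mysql> SHOW LAST INSERT\G
*************************** 1. row ***************************
    TransactionId: 64067
            Label: insert_ba8f33aea9544866-8ed77e2844d0cc9b
         Database: default_cluster:db1
            Table: t1
TransactionStatus: VISIBLE
       LoadedRows: 2
     FilteredRows: 0
```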
2. The `ReturnRows` field in fe.audit.log for an insert operation is now set to the number of rows loaded by the insert.
3. Fix a bug described in #8354
SQL to reproduce:
```
SELECT * FROM table WHERE FROM_UNIXTIME(d_datekey,'%Y-%m-%d %H:%i:%s') != '1970-08-20 00:11:43';
org.apache.doris.common.AnalysisException: errCode = 2, detailMessage = Unexpected exception: Illegal pattern character 'i'
at org.apache.doris.qe.StmtExecutor.analyze(StmtExecutor.java:584) ~[palo-fe.jar:3.4.0]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:345) ~[palo-fe.jar:3.4.0]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:318) ~[palo-fe.jar:3.4.0]
at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:221) ~[palo-fe.jar:3.4.0]
at org.apache.doris.qe.ConnectProcessor.dispatch(ConnectProcessor.java:361) ~[palo-fe.jar:3.4.0]
at org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:562) ~[palo-fe.jar:3.4.0]
at org.apache.doris.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:50) ~[palo-fe.jar:3.4.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:835) [?:?]
```
After the fix, FROM_UNIXTIME only supports the following format strings:
yyyy-MM-dd HH:mm:ss
yyyy-MM-dd
yyyyMMdd
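An illustration of the supported patterns (the timestamp value and outputs are examples; the actual result depends on the session time zone):
```
SELECT FROM_UNIXTIME(1196440219);                         -- default: 2007-12-01 00:30:19
SELECT FROM_UNIXTIME(1196440219, 'yyyy-MM-dd HH:mm:ss');  -- 2007-12-01 00:30:19
SELECT FROM_UNIXTIME(1196440219, 'yyyy-MM-dd');           -- 2007-12-01
SELECT FROM_UNIXTIME(1196440219, 'yyyyMMdd');             -- 20071201
```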
1. en: add convert_tz.md
2. en: fix curdate.md (modify the title)
3. en: add curtime.md
4. en: fix str_to_date.md (modify the title)
5. zh-CN: fix convert_tz.md (modify the description)
6. zh-CN: fix curdate.md (modify the title)
In some scenarios, users cannot find a suitable hash key to avoid data skew, so we provide an additional (random) data distribution for OLAP tables to avoid it.
Example:
```
CREATE TABLE random_table
(
    siteid INT DEFAULT '10',
    citycode SMALLINT,
    username VARCHAR(32) DEFAULT '',
    pv BIGINT SUM DEFAULT '0'
)
AGGREGATE KEY(siteid, citycode, username)
DISTRIBUTED BY RANDOM BUCKETS 10
PROPERTIES("replication_num" = "1");
```
Co-authored-by: caiconghui1 <caiconghui1@jd.com>
Because some code was recently moved to a new repository, the documentation for release and verification
needs to be reorganized. There are 5 relevant documents:
1. release-prepare.md
General instructions for the release and related preparation work.
2. release-doris-core.md
The Doris Core release process
3. release-doris-connectors.md
The Doris Connectors release process
4. release-complete.md
Steps to complete the release after the release vote has passed.
5. release-verify.md
Release verification methods.
1. Move `group_concat` from string-functions to aggregate-functions.
2. Add `json_array`/`json_object`/`json_quote` to the sidebar file.
3. Move `json_array`/`json_object`/`json_quote`/`get_json_double`/`get_json_int`/`get_json_string` to json-functions.
4. Change the `group_concat` document to uppercase.
Check the Java version when Doris starts, to prevent a bad user experience caused by a mismatch
between the Java version used for compilation and the one used at runtime.
If the compile-time and runtime Java versions are inconsistent, Doris will not start, and a prompt message will be given.
1. Fix a BE crash caused by destruction order. (close #8058)
2. Add a new BE config `compaction_task_num_per_fast_disk`
This config specifies the max number of concurrent compaction tasks on a fast disk (typically SSD),
so that on high-speed disks we can execute more compaction tasks at the same time,
to compact the data as soon as possible.
3. Avoid frequently selecting unqualified tablets for compaction.
4. Lower some log levels to reduce the BE log size.
5. Modify some clone logic to handle errors correctly.
Hive Bitmap UDF provides UDFs for generating bitmaps and performing bitmap operations in Hive tables.
The bitmap format in Hive is exactly the same as the Doris bitmap,
so bitmaps produced in Hive can be imported into Doris through the Spark bitmap load, as sketched below.
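A minimal sketch of the Hive side (the jar path, UDF class names, and table/column names are assumptions for illustration):
```
-- Register the UDFs in Hive (class names are illustrative):
CREATE TEMPORARY FUNCTION to_bitmap AS 'org.apache.doris.udf.ToBitmapUDAF'
    USING JAR 'hdfs:///path/to/hive-udf.jar';
CREATE TEMPORARY FUNCTION bitmap_count AS 'org.apache.doris.udf.BitmapCountUDF'
    USING JAR 'hdfs:///path/to/hive-udf.jar';

-- Build a bitmap per group and count the distinct ids in it:
SELECT dt, bitmap_count(to_bitmap(user_id)) FROM hive_table GROUP BY dt;
```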
Two-phase batch commit means:
during a Stream Load, after the data is written, a message is returned to the client;
at this point the data is invisible and the transaction status is PRECOMMITTED.
The data becomes visible only after COMMIT is triggered by the client.
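For example, the initial load enables two-phase commit through a header (the host, port, and file name are placeholders):
```
curl --location-trusted -u user:passwd -H "two_phase_commit:true" -T data.txt \
    http://fe_host:http_port/api/{db}/{table}/_stream_load
```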
1. The user can invoke the following interface to trigger a commit for the transaction:
```
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:commit" \
    http://fe_host:http_port/api/{db}/_stream_load_2pc
```
or
```
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:commit" \
    http://be_host:webserver_port/api/{db}/_stream_load_2pc
```
2. The user can invoke the following interface to trigger an abort for the transaction:
```
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:abort" \
    http://fe_host:http_port/api/{db}/_stream_load_2pc
```
or
```
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:abort" \
    http://be_host:webserver_port/api/{db}/_stream_load_2pc
```
1. set both `tuple_offsets` and `new_tuple_offsets` in PRowBatch for compatibility
2. Set the FE config `repair_slow_replica` default to false,
to avoid impacting the load process after upgrading.
E.g., if there are only 2 replicas and one has a high version count, after the upgrade
that replica would be set to bad, and the load process would stop
because only 1 replica is alive. The config can still be enabled at runtime, for example:
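A minimal sketch of overriding the default via `ADMIN SET FRONTEND CONFIG` (the value shown is illustrative):
```
ADMIN SET FRONTEND CONFIG ("repair_slow_replica" = "true");
```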
3. Fix a bug where NodeChannel may be blocked at `close_wait()`
because the `add_batch_finish` flag was not set after the last RPC finished.
4. Fix an NPE in RoutineLoadScheduler.