Commit Graph

14804 Commits

SHA1 Message Date
89215306d3 [improve](load) add switch for vertical segment writer (#26996) 2023-11-15 08:19:12 +08:00
82a88366f1 [fix](fe ut) make colocate test wait time longer (#26999) 2023-11-15 08:18:48 +08:00
add160b768 [improvement](regression-test) add more group commit regression-test (#26952) 2023-11-15 00:01:13 +08:00
88d909b4dd [test](regression) Add more alter stmt regression case (#26988) 2023-11-14 23:58:15 +08:00
08044a35ae (fix)[schema change] fix incorrect setting of schema change jobstate when replay editlog (#26992) 2023-11-14 23:53:22 +08:00
f1169d3c58 [regression-test](TRIM_DOUBLE_QUOTES) add case for TRIM_DOUBLE_QUOTES (#26998) 2023-11-14 23:52:40 +08:00
30d1e6036c [feature](runtime filter) New session variable runtime_filter_wait_infinitely (#26888)
New session variable: runtime_filter_wait_infinitely. If runtime_filter_wait_infinitely is set to true, the consumer of a runtime filter will keep waiting for it until the query times out.
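A minimal sketch of the consumer-side wait described above (not the Doris implementation; the type, the fixed wait budget, and the deadline handling are hypothetical):

```
#include <chrono>
#include <condition_variable>
#include <mutex>

struct RuntimeFilterConsumerSketch {
    std::mutex mu;
    std::condition_variable cv;
    bool filter_ready = false;      // set by the producer side when the filter arrives
    bool wait_infinitely = false;   // mirrors runtime_filter_wait_infinitely
    std::chrono::steady_clock::duration fixed_wait = std::chrono::milliseconds(1000); // hypothetical fixed budget
    std::chrono::steady_clock::time_point query_deadline{};                           // query timeout point

    // Returns true if the filter arrived before we gave up waiting.
    bool wait_for_filter() {
        std::unique_lock<std::mutex> lock(mu);
        auto until = wait_infinitely ? query_deadline
                                     : std::chrono::steady_clock::now() + fixed_wait;
        return cv.wait_until(lock, until, [&] { return filter_ready; });
    }
};

int main() {
    RuntimeFilterConsumerSketch consumer;
    consumer.wait_infinitely = false;
    consumer.fixed_wait = std::chrono::milliseconds(10);
    return consumer.wait_for_filter() ? 0 : 1; // no producer here, so the wait times out
}
```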
2023-11-14 21:05:59 +08:00
9a4fd5be79 [nereids](datetime) fix wrong result type of datetime add with interval as first arg (#26957)
Incorrect result data type causes a BE coredump:

drop table if exists testaaa;
create table testaaa(k1 tinyint, k2 smallint, k3 int, k4 bigint, k5 decimal(9,3), k6 char(5), k10 date, k11 datetime, k7 varchar(20), k8 double max, k9 float sum) engine=olap distributed by hash(k1) buckets 5 properties("storage_type"="column","replication_num"="1") ;

insert into testaaa values(1,1,1,1,9.3, "k6", "2023-11-14", "2023-11-14", "k7", 9.99, 9.99);


select  interval 10 year + k10 from testaaa;
The plan result type is DATE:

mysql [test]>explain verbose select   interval 10 year + k10  from testaaa;
+-------------------------------------------------------------------------------------------------------+
| Explain String(Nereids Planner)                                                                       |
+-------------------------------------------------------------------------------------------------------+
| PLAN FRAGMENT 0                                                                                       |
|   OUTPUT EXPRS:                                                                                       |
|     years_add(k10, INTERVAL 10 YEAR)[#11]                                                             |
|   PARTITION: UNPARTITIONED                                                                            |
|                                                                                                       |
|   HAS_COLO_PLAN_NODE: false                                                                           |
|                                                                                                       |
|   VRESULT SINK                                                                                        |
|      MYSQL_PROTOCAL                                                                                   |
|                                                                                                       |
|   64:VEXCHANGE                                                                                        |
|      offset: 0                                                                                        |
|      tuple ids: 1N                                                                                    |
|                                                                                                       |
| PLAN FRAGMENT 1                                                                                       |
|                                                                                                       |
|   PARTITION: HASH_PARTITIONED: k1[#0]                                                                 |
|                                                                                                       |
|   HAS_COLO_PLAN_NODE: false                                                                           |
|                                                                                                       |
|   STREAM DATA SINK                                                                                    |
|     EXCHANGE ID: 64                                                                                   |
|     UNPARTITIONED                                                                                     |
|                                                                                                       |
|   58:VOlapScanNode                                                                                    |
|      TABLE: default_cluster:test.testaaa(testaaa), PREAGGREGATION: OFF. Reason: No aggregate on scan. |
|      partitions=1/1 (testaaa), tablets=5/5, tabletList=945025,945027,945029 ...                       |
|      cardinality=1, avgRowSize=9885.0, numNodes=1                                                     |
|      pushAggOp=NONE                                                                                   |
|      projections: years_add(k10[#6], INTERVAL 10 YEAR)                                                |
|      project output tuple id: 1                                                                       |
|      tuple ids: 0                                                                                     |
|                                                                                                       |
| Tuples:                                                                                               |
| TupleDescriptor{id=0, tbl=testaaa, byteSize=8}                                                        |
|   SlotDescriptor{id=6, col=k10, colUniqueId=6, type=DATEV2, nullable=true, isAutoIncrement=false}     |
|                                                                                                       |
| TupleDescriptor{id=1, tbl=testaaa, byteSize=32}                                                       |
|   SlotDescriptor{id=11, col=null, colUniqueId=null, type=DATE, nullable=true, isAutoIncrement=false}  |
+-------------------------------------------------------------------------------------------------------+
39 rows in set (1 min 31.50 sec)
coredump stack:

F1109 20:11:37.677680 323805 assert_cast.h:61] Bad cast from type:doris::vectorized::ColumnVector to doris::vectorized::ColumnVector
*** Check failure stack trace: ***
F1109 20:11:37.680608 323800 assert_cast.h:61] Bad cast from type:doris::vectorized::ColumnVector to doris::vectorized::ColumnVector
*** Check failure stack trace: ***
F1109 20:11:37.680608 323800 assert_cast.h:61] Bad cast from type:doris::vectorized::ColumnVector to doris::vectorized::ColumnVectorF1109 20:11:37.681102 323808 assert_cast.h:61] Bad cast from type:doris::vectorized::ColumnVector to doris::vectorized::ColumnVector
*** Check failure stack trace: ***
    @     0x56489d591d3d  google::LogMessage::Fail()
    @     0x56489d591d3d  google::LogMessage::Fail()
    @     0x56489d591d3d  google::LogMessage::Fail()
    @     0x56489d594279  google::LogMessage::SendToLog()
    @     0x56489d594279  google::LogMessage::SendToLog()
    @     0x56489d594279  google::LogMessage::SendToLog()
    @     0x56489d5918a6  google::LogMessage::Flush()
    @     0x56489d5918a6  google::LogMessage::Flush()
    @     0x56489d5918a6  google::LogMessage::Flush()
    @     0x56489d5948e9  google::LogMessageFatal::~LogMessageFatal()
    @     0x56489d5948e9  google::LogMessageFatal::~LogMessageFatal()
    @     0x56489d5948e9  google::LogMessageFatal::~LogMessageFatal()
    @     0x56487a2a8a0c  assert_cast<>()
    @     0x56487a2a8a0c  assert_cast<>()
    @     0x56487a2a8a0c  assert_cast<>()
    @     0x5648893d8312  doris::vectorized::ColumnVector<>::insert_range_from()
    @     0x5648893d8312  doris::vectorized::ColumnVector<>::insert_range_from()
    @     0x5648893d8312  doris::vectorized::ColumnVector<>::insert_range_from()
    @     0x56488924a670  doris::vectorized::ColumnNullable::insert_range_from()
    @     0x56488924a670  doris::vectorized::ColumnNullable::insert_range_from()
    @     0x56488924a670  doris::vectorized::ColumnNullable::insert_range_from()
    @     0x56487a454475  doris::ExecNode::do_projections()
    @     0x56487a454475  doris::ExecNode::do_projections()
    @     0x56487a454475  doris::ExecNode::do_projections()
    @     0x56487a454b89  doris::ExecNode::get_next_after_projects()
    @     0x56487a454b89  doris::ExecNode::get_next_after_projects()
*** Query id: a467995b35334741-b625042f56495aaf ***
*** tablet id: 0 ***
*** Aborted at 1699531898 (unix time) try "date -d @1699531898" if you are using GNU date ***
*** Current BE git commitID: 0d83327a7c ***
*** SIGABRT unknown detail explain (@0x190d64) received by PID 1641828 (TID 1642168 OR 0x7f6ff96c0700) from PID 1641828; stack trace: ***
    @     0x556ca2a3ab8f  std::_Function_handler<>::_M_invoke()
    @     0x556c9f322787  std::function<>::operator()()
    @     0x556ca29da0b0  doris::Thread::supervise_thread()
    @     0x556c9f322787  std::function<>::operator()()
    @     0x7f71b9c38609  start_thread
    @     0x556ca29da0b0  doris::Thread::supervise_thread()
    @     0x7f71b9c38609  start_thread
 0# doris::signal::(anonymous namespace)::FailureSignalHandler(int, siginfo_t*, void*) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/common/signal_handler.h:417
 1# 0x00007F71B9E09090 in /lib/x86_64-linux-gnu/libc.so.6
 2# raise at ../sysdeps/unix/sysv/linux/raise.c:51
 3# abort at /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:81
 4# 0x0000556CC51F3729 in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
 5# 0x0000556CC51E8D3D in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
 6# google::LogMessage::SendToLog() in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
 7# google::LogMessage::Flush() in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
 8# google::LogMessageFatal::~LogMessageFatal() in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
 9# doris::vectorized::ColumnVector const& assert_cast const&, doris::vectorized::IColumn const&>(doris::vectorized::IColumn const&) in /mnt/hdd01/ci/branch20-deploy/be/lib/doris_be
10# doris::vectorized::ColumnVector::insert_range_from(doris::vectorized::IColumn const&, unsigned long, unsigned long) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/vec/columns/column_vector.cpp:354
11# doris::vectorized::ColumnNullable::insert_range_from(doris::vectorized::IColumn const&, unsigned long, unsigned long) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/vec/columns/column_nullable.cpp:289
12# doris::ExecNode::do_projections(doris::vectorized::Block*, doris::vectorized::Block*) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/exec/exec_node.cpp:573
13# doris::ExecNode::get_next_after_projects(doris::RuntimeState*, doris::vectorized::Block*, bool*, std::function const&, bool) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/exec/exec_node.cpp:592
14# doris::pipeline::SourceOperator::get_block(doris::RuntimeState*, doris::vectorized::Block*, doris::pipeline::SourceState&) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/pipeline/exec/operator.h:413
15# doris::pipeline::PipelineTask::execute(bool*) at /home/zcp/repo_center/doris_branch-2.0/doris/be/src/pipeline/pipeline_task.cpp:259
16# doris::pipeline::TaskScheduler::_do_work(unsigned
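The tuple descriptors above show the scanned column typed as DATEV2 while the projection output slot is planned as DATE, so the checked downcast during projection fails. A minimal sketch of that failure mode, using simplified stand-in types rather than the actual Doris columns:

```
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

struct IColumn { virtual ~IColumn() = default; };

template <typename T>
struct ColumnVector : IColumn {
    std::vector<T> data;
    void insert_range_from(const IColumn& src, size_t start, size_t len);
};

// Sketch of an assert_cast: a checked downcast that aborts on mismatch,
// like the "Bad cast from type ..." fatal log in the stack trace above.
template <typename To>
const To& assert_cast(const IColumn& col) {
    if (auto* p = dynamic_cast<const To*>(&col)) return *p;
    std::fprintf(stderr, "Bad cast\n");
    std::abort();
}

template <typename T>
void ColumnVector<T>::insert_range_from(const IColumn& src, size_t start, size_t len) {
    const auto& src_vec = assert_cast<ColumnVector<T>>(src);
    data.insert(data.end(), src_vec.data.begin() + start, src_vec.data.begin() + start + len);
}

int main() {
    ColumnVector<uint32_t> scanned_col;   // stand-in for what the scan produced (DATEV2-like)
    scanned_col.data = {20231114};
    ColumnVector<int64_t> planned_col;    // stand-in for what the planned DATE slot expects
    planned_col.insert_range_from(scanned_col, 0, 1); // aborts: concrete column types differ
}
```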
2023-11-14 20:28:41 +08:00
2d8438fa1b [fix](forward) add exception msg for ForwardToMasterException (#26956)
Signed-off-by: nextdreamblue <zxw520blue1@163.com>
2023-11-14 20:20:29 +08:00
8b097af9fa [fix](move-memtable) support http stream (#26929) 2023-11-14 20:19:57 +08:00
fad6770225 [improvement](statistics)Multi bucket columns using DUJ1 to collect ndv (#26950)
Using DUJ1 to collect ndv for multiple bucket columns.
2023-11-14 20:13:37 +08:00
1edeacd0a5 [enhance](regression) enhance docker network by add network subnet (#26862) 2023-11-14 20:06:20 +08:00
da732e5f5b [doc](fix) a new docs for k8s deploy by operator (#26508) 2023-11-14 20:05:28 +08:00
4889c1d029 [hotfix](priv) Fix restore snapshot user priv with add cluster in UserIdentity (#26969)
Signed-off-by: Jack Drogon <jack.xsuperman@gmail.com>
2023-11-14 19:48:54 +08:00
cdef768629 [fix](sink) crash caused by wild pointer of counter in VDataStreamSender (#26947)
If preparation fails, the counter _peak_memory_usage_counter will be a wild pointer.
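A minimal sketch of the failure mode and the usual guard (hypothetical simplified types, not the actual VDataStreamSender code): initialize the counter pointer to null and check it in close(), so a failed preparation cannot leave a wild pointer to dereference.

```
#include <cstdio>

struct RuntimeProfileCounter {            // stand-in for a profile counter
    long value = 0;
    void set(long v) { value = v; }
};

struct DataStreamSenderSketch {
    // Initialized to nullptr so it is never a wild pointer, even if prepare fails.
    RuntimeProfileCounter* _peak_memory_usage_counter = nullptr;
    RuntimeProfileCounter storage;

    bool prepare(bool fail_early) {
        if (fail_early) return false;     // counter is never created on this path
        _peak_memory_usage_counter = &storage;
        return true;
    }

    void close(long peak_bytes) {
        // Guard: only update the counter if prepare actually created it.
        if (_peak_memory_usage_counter != nullptr) {
            _peak_memory_usage_counter->set(peak_bytes);
        }
    }
};

int main() {
    DataStreamSenderSketch sender;
    sender.prepare(/*fail_early=*/true);  // preparation fails
    sender.close(1024);                   // safe: no wild-pointer dereference
    std::printf("closed safely\n");
}
```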

*** SIGSEGV address not mapped to object (@0x454d49545f) received by PID 16992 (TID 18856 OR 0x7f4d05444700) from PID 1296651359; stack trace: ***
 0# doris::signal::(anonymous namespace)::FailureSignalHandler(int, siginfo_t*, void*) at /root/doris/be/src/common/signal_handler.h:417
 1# os::Linux::chained_handler(int, siginfo*, void*) in /app/doris/Nexchip-doris-1.2.4.2-bin-x86_64/java8/jre/lib/amd64/server/libjvm.so
 2# JVM_handle_linux_signal in /app/doris/Nexchip-doris-1.2.4.2-bin-x86_64/java8/jre/lib/amd64/server/libjvm.so
 3# signalHandler(int, siginfo*, void*) in /app/doris/Nexchip-doris-1.2.4.2-bin-x86_64/java8/jre/lib/amd64/server/libjvm.so
 4# 0x00007F55C85B9400 in /lib64/libc.so.6
 5# doris::vectorized::VDataStreamSender::close(doris::RuntimeState*, doris::Status) at /root/doris/be/src/vec/sink/vdata_stream_sender.cpp:734
 6# doris::PlanFragmentExecutor::close() at /root/doris/be/src/runtime/plan_fragment_executor.cpp:543
 7# doris::PlanFragmentExecutor::~PlanFragmentExecutor() at /root/doris/be/src/runtime/plan_fragment_executor.cpp:95
 8# doris::FragmentExecState::~FragmentExecState() at /root/doris/be/src/runtime/fragment_mgr.cpp:112
 9# std::_Sp_counted_ptr<doris::FragmentExecState*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() at /root/ldb/ldb_toolchain/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/shared_ptr_base.h:348
10# doris::FragmentMgr::exec_plan_fragment(doris::TExecPlanFragmentParams const&, std::function<void (doris::RuntimeState*, doris::Status*)> const&) at /root/doris/be/src/runtime/fragment_mgr.cpp:855
11# doris::FragmentMgr::exec_plan_fragment(doris::TExecPlanFragmentParams const&) at /root/doris/be/src/runtime/fragment_mgr.cpp:592
12# doris::PInternalServiceImpl::_exec_plan_fragment_impl(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, doris::PFragmentRequestVersion, bool) at /root/doris/be/src/service/internal_service.cpp:463
13# doris::PInternalServiceImpl::_exec_plan_fragment_in_pthread(google::protobuf::RpcController*, doris::PExecPlanFragmentRequest const*, doris::PExecPlanFragmentResult*, google::protobuf::Closure*) at /root/doris/be/src/service/internal_service.cpp:305
14# doris::WorkThreadPool<false>::work_thread(int) at /root/doris/be/src/util/work_thread_pool.hpp:160
15# execute_native_thread_routine at ../../../../../libstdc++-v3/src/c++11/thread.cc:84
16# start_thread in /lib64/libpthread.so.0
17# clone in /lib64/libc.so.6
2023-11-14 19:05:49 +08:00
50fa96c185 [fix](memory)Fix MacOS perf_counters.cpp compile Error (#26942) 2023-11-14 18:53:04 +08:00
1976a06635 [Chore](docs)Hide the job document. (#26970)
After the JOB refactoring, the insert job is not supported yet.
2023-11-14 18:42:51 +08:00
13bc6b702b [refactor](Job)Refactor JOB (#26845)
## Motivation:
In the past, our JOB only supported timer JOBs, which could only schedule fixed-time tasks. Meanwhile, the JOB was solely responsible for execution, and during execution the states could become inconsistent: a task might execute successfully while the JOB's recorded task status was never updated.
Because of this inconsistency, the correctness of the JOB's status could not be guaranteed. As various businesses, such as the export job and mtmv job, were gradually integrated into JOB, we found that scaling became difficult and the JOB became particularly bulky. Hence, we decided to refactor JOB.

## Refactoring Goals:
- Provide a unified external registration interface so that all JOBs can be registered through this interface and be scheduled by the JobScheduler.

- The JobScheduler can schedule instant JOBs, timer JOBs, and manual JOBs.

- JOB should provide a unified external extension class. All JOBs can be extended through this extension class, which can provide special functionalities like JOB status restoration, Task execution, etc.

- Extended JOBs should manage task states on their own to avoid inconsistent state maintenance issues.

- Different JOBs should use their own thread pools for processing to prevent inter-JOB interference.
###  Design:
- The JOBManager provides a unified registration interface through which all JOBs can register and then be scheduled by the JobScheduler.
- The TimerJob periodically fetches JOBs that need to be scheduled within a time window and hands them over to the Time Wheel for triggering. To prevent excessive tasks in the Time Wheel, it distributes the tasks to the dispatch thread pool, which then assigns them to corresponding thread pools for execution.
- ManualJob or Instant Job directly assigns tasks to the corresponding thread pool for execution.
- The JOB provides a unified extension class that all JOBs can utilize for extension, providing special functionalities like JOB status restoration, Task execution, etc.
- To implement a new JOB, one only needs to implement AbstractJob.class and AbstractTask.class (a simplified sketch of this extension pattern follows the design diagram below).
![Design diagram](https://github.com/apache/doris/assets/16631152/3032e05d-133e-425b-b31e-4bb492f06ddc)
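A simplified sketch of the extension pattern described above (the actual FE classes are Java; the C++ below is illustrative only, and all interfaces are hypothetical):

```
#include <cstdio>
#include <memory>
#include <vector>

struct AbstractTask {
    virtual ~AbstractTask() = default;
    virtual void run() = 0;                                    // task execution
};

struct AbstractJob {
    virtual ~AbstractJob() = default;
    virtual std::vector<std::unique_ptr<AbstractTask>> create_tasks() = 0;
    virtual void on_task_finished(bool success) = 0;           // the job owns its task states
};

// A concrete job only needs to implement the two extension classes.
struct ExampleTask : AbstractTask {
    void run() override { std::printf("task ran\n"); }
};

struct ExampleJob : AbstractJob {
    int finished = 0;
    std::vector<std::unique_ptr<AbstractTask>> create_tasks() override {
        std::vector<std::unique_ptr<AbstractTask>> tasks;
        tasks.push_back(std::make_unique<ExampleTask>());
        return tasks;
    }
    void on_task_finished(bool success) override { if (success) ++finished; }
};

int main() {
    ExampleJob job;
    for (auto& t : job.create_tasks()) { t->run(); job.on_task_finished(true); }
    std::printf("finished tasks: %d\n", job.finished);
}
```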

## NOTICE
This will cause the master's metadata to be incompatible
2023-11-14 18:18:59 +08:00
6b8ec22436 exclude regression test workload_manager_p1 (#26736) 2023-11-14 17:55:42 +08:00
a2ae225c77 [Fix](row cache) invalid row cache using key encoded without sequence column (#26948) 2023-11-14 16:51:23 +08:00
bfa50f08c1 [enhancement](Nereids): add nereids profile (#26935) 2023-11-14 16:43:17 +08:00
573fa2063b [Fix](wal) Fix wal space back pressure core (#26907) 2023-11-14 16:10:25 +08:00
30c3986196 [chore][ci] add README.md for regression-test (#26876)
* [chore][ci] add README.md for regression-test
2023-11-14 15:28:46 +08:00
3585c7e216 [test](parquet)append parquet reader byte_array_decimal and rle_bool case (#26751) 2023-11-14 15:05:10 +08:00
a16517a061 [test](tvf) append tvf read hive_text file regression case. (#26790) 2023-11-14 15:03:19 +08:00
cd25579bdf [fix](statistics)Fix external table show column stats type bug (#26910)
The show column stats result for external tables shows N/A in the method, type, trigger and query_times columns. This PR fixes the bug so that the correct values are shown.
2023-11-14 14:47:05 +08:00
23e2bded1a [fix](Nereids) column pruning under union broken unexpectedly (#26884)
introduced by PR #24060
2023-11-14 00:39:32 -06:00
9b3ddd13eb [pipelineX](jdbc) prevent JDBC scan if disable Java support (#26919) 2023-11-14 14:20:36 +08:00
1baf541532 [fix](config)Fix fe pom cdh download failed issue (#26913)
Fix the failure to download the net.sourceforge.czt.dev jar.

---------

Co-authored-by: Yijia Su <suyijia@selectdb.com>
2023-11-14 14:17:24 +08:00
de38ffe2b2 [Fix](multi-catalog) Fix NPE when replaying hms events (#26803)
Invoking ConnectContext.get() in the replayer thread of slave FE nodes may return null, so an NPE is thrown and the slave nodes crash.
Co-authored-by: wangxiangyu <wangxiangyu@360shuke.com>
2023-11-14 13:55:25 +08:00
d2eea9b3ae [chore](macOS) Reduce the size of executables on macOS arm64 (#26894)
Like #15641, we should reduce the size of executables on macOS arm64. Otherwise, we cannot run doris_be and doris_be_test with the ASAN build type on macOS arm64.
2023-11-14 12:21:08 +08:00
39473cdf48 [performance](load) add vertical segment writer (#24403) 2023-11-14 11:53:09 +08:00
f1d90ffc4e [regression](nereids) add test case for partition prune (#26849)
* list selected partition name in explain
* add prune partition test case (multi-range key)
2023-11-14 11:51:32 +08:00
f6a9914bc7 [feature](move-memtable) support auto partition in sink v2 (#26914) 2023-11-14 11:39:44 +08:00
0a9d71ebd2 [Fix](Planner) fix varchar does not show real length (#25171)
Problem:
When we create a table with the data type varchar(), we treat it as the maximum length by default. But when we desc the table, it does not show the real length; it shows varchar().
Reason:
When we upgraded from version 2.0.1 to 2.0.2, we added support for creating varchar(), and desc shows it the same way as the DDL schema, so users may be confused about the actual length of the varchar.
Solved:
Change the display of varchar() to varchar(65533), which is compatible with Hive.
2023-11-14 10:49:21 +08:00
e0934166f5 [bugfix](es-catalog)fix exception when querying ES table (#26788) 2023-11-14 10:47:37 +08:00
cd7ad99de0 [improvement](regression-test) add chunked transfer json test (#26902) 2023-11-14 10:31:30 +08:00
de6ecd2035 [fix](tls) Manually track memory in Allocator instead of mem hook and ThreadContext life cycle to manual control (#26904)
Manually track query/load/compaction/etc. memory in the Allocator instead of in the mem hook.
The Mem Hook can still be used for code segments where memory cannot be tracked manually, and to find memory locations during debugging.
This causes some memory-tracking loss for queries (less than 10% compared to the past), but is expected to be more controllable.
Similarly, the Mem Hook will no longer track unowned memory to the orphan mem tracker by default, so the total memory of all MemTrackers will be less than before.
There is no longer a need to get the memory size from jemalloc in the Mem Hook on every alloc and free, which cost performance in the past.
Caching the bthread local in a pthread local for the memory hook is no longer required; in the past this caused core dumps inside bthread, which seems to be a bug in bthread.

ThreadContext life cycle under manual control:
In the past, the ThreadContext was created automatically the first time it was used (usually in the jemalloc hook on the first malloc) and was destroyed automatically when the thread exited.
Now the creation and destruction of the ThreadContext are controlled manually: it is created when the task thread starts and destroyed before the task thread ends.
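A minimal sketch (hypothetical types, not the Doris implementation) of the two changes described above: the allocator reports sizes to a tracker directly instead of relying on a malloc hook, and the per-thread context is created when the task thread starts and destroyed before it ends.

```
#include <atomic>
#include <cstdio>
#include <cstdlib>

struct MemTrackerSketch {
    std::atomic<long> consumed{0};
    void consume(long bytes) { consumed += bytes; }
    void release(long bytes) { consumed -= bytes; }
};

struct ThreadContextSketch {
    MemTrackerSketch* tracker = nullptr;   // tracker attached to the current task
};

// Created when the task thread starts and destroyed before it ends,
// instead of being created lazily inside a malloc hook.
thread_local ThreadContextSketch* g_thread_ctx = nullptr;

struct TrackingAllocatorSketch {
    void* alloc(size_t bytes) {
        void* p = std::malloc(bytes);
        // The allocator reports the size it asked for; no need to query
        // jemalloc for the usable size on every alloc/free.
        if (p && g_thread_ctx && g_thread_ctx->tracker) g_thread_ctx->tracker->consume((long)bytes);
        return p;
    }
    void free(void* p, size_t bytes) {
        if (p && g_thread_ctx && g_thread_ctx->tracker) g_thread_ctx->tracker->release((long)bytes);
        std::free(p);
    }
};

int main() {
    MemTrackerSketch query_tracker;
    ThreadContextSketch ctx{&query_tracker};
    g_thread_ctx = &ctx;                      // task thread start: attach the context
    TrackingAllocatorSketch allocator;
    void* p = allocator.alloc(4096);
    allocator.free(p, 4096);
    g_thread_ctx = nullptr;                   // task thread end: detach the context
    std::printf("tracked bytes now: %ld\n", query_tracker.consumed.load());
}
```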
Ran the 43 ClickBench query tests to compare against using the Mem Hook in the past.
2023-11-14 10:30:42 +08:00
fef627c0ba [Fix](Txn) Fix transaction write to sequence column error (#26748) 2023-11-14 10:30:10 +08:00
44f49c687d [typo](doc) Add documentation for synchronizing tables without primary keys. (#26774)
* add no primary-key sync doc
2023-11-14 10:10:54 +08:00
34edc578f1 [opt](MergeIO) use equivalent merge size to measure merge effectiveness (#26741)
`MergeRangeFileReader` is used to merge small IOs, and `max_amplified_read_ratio` controls the proportion of read amplification. However, in some extreme cases (e.g. when the ORC stripe size or Parquet row group size is less than 3MB), `max_amplified_read_ratio` does not control this well, resulting in a large number of small IOs.

After testing, the return time of a single IO remains basically unchanged for IO sizes smaller than 4KB on HDFS (512KB on OSS). Therefore, the equivalent IO size is used to measure merge effectiveness:
```
EquivalentIOSize = MergeSize / Request IOs
```
When `EquivalentIOSize` is greater than 4KB on HDFS, or 512KB on OSS, we consider this kind of merge effective.
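As a worked example (a sketch with made-up numbers, not the reader code): merging eight small requests into one 2MB read gives an equivalent IO size of 256KB, which passes the 4KB HDFS threshold but not the 512KB OSS threshold.

```
#include <cstdio>

constexpr long kKB = 1024;

struct MergedRead {
    long merge_size_bytes;   // total bytes actually read after merging
    long request_ios;        // number of small requests covered by the merge
};

// EquivalentIOSize = MergeSize / Request IOs
long equivalent_io_size(const MergedRead& m) {
    return m.request_ios == 0 ? 0 : m.merge_size_bytes / m.request_ios;
}

bool merge_is_effective(const MergedRead& m, bool on_oss) {
    long threshold = on_oss ? 512 * kKB : 4 * kKB;   // 512KB on OSS, 4KB on HDFS
    return equivalent_io_size(m) >= threshold;
}

int main() {
    MergedRead m{2 * 1024 * kKB, 8};                 // 2MB merged read covering 8 requests
    std::printf("equivalent IO size: %ld KB\n", equivalent_io_size(m) / kKB);  // 256 KB
    std::printf("effective on HDFS: %d, on OSS: %d\n",
                merge_is_effective(m, /*on_oss=*/false),   // 1: 256KB >= 4KB
                merge_is_effective(m, /*on_oss=*/true));   // 0: 256KB < 512KB
}
```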
2023-11-14 10:07:14 +08:00
6f82c798eb [fix](delta-writer) fix total received rows in delta writer incorrect (#26905) 2023-11-14 08:31:16 +08:00
ec40603b93 [fix](parquet) compressed_page_size has the same meaning in page v1 and v2 (#26783)
1. Parquet pages in format v2 are parsed incorrectly when using codecs other than snappy, because `compressed_page_size` has the same meaning in page v1 and v2: it always contains the bytes of the definition levels, repetition levels and compressed data (see the sketch below).
2. Add regression tests for decimal values stored as `fix_length_byte_array`, and for dictionary-encoded date/datetime types.
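A minimal sketch of the size accounting in point 1 (simplified header struct, not the actual reader): in page v2 the level bytes are stored uncompressed, so the bytes handed to the decompressor are `compressed_page_size` minus the two level lengths.

```
#include <cstdint>
#include <cstdio>

struct PageHeaderSketch {
    bool is_v2 = false;
    int32_t compressed_page_size = 0;            // levels + compressed data
    int32_t repetition_levels_byte_length = 0;   // v2 only, stored uncompressed
    int32_t definition_levels_byte_length = 0;   // v2 only, stored uncompressed
};

int32_t bytes_to_decompress(const PageHeaderSketch& h) {
    if (!h.is_v2) {
        // v1: the levels are compressed together with the data.
        return h.compressed_page_size;
    }
    // v2: skip the uncompressed level bytes before decompressing.
    return h.compressed_page_size
           - h.repetition_levels_byte_length
           - h.definition_levels_byte_length;
}

int main() {
    PageHeaderSketch v2{true, 10000, 128, 256};
    std::printf("v2 bytes for the codec: %d\n", bytes_to_decompress(v2)); // 9616
}
```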
2023-11-14 08:30:42 +08:00
df6e444e75 [improvement](log) log desensitization without displaying user info (#26912) 2023-11-14 08:30:00 +08:00
e7a8022106 [enhancement](binlog) Add dbName && tableName in CreateTableRecord (#26901)
Signed-off-by: Jack Drogon <jack.xsuperman@gmail.com>
2023-11-14 08:29:38 +08:00
b19abac5e2 [fix](move-memtable) pass num local sink to backends (#26897) 2023-11-14 08:28:49 +08:00
de62c00f4e [fix](move-memtable) init auto partition context in VRowDistribution::open (#26911) 2023-11-14 08:16:14 +08:00
37ca129fa7 [test](fe) Add more FE UT for org.apache.doris.journal.bdbje (#26629) 2023-11-13 23:06:17 +08:00
5ad49dceaa [fix](scanner_schedule) scanner hangs due to negative num_running_scanners (#26816)
* [fix] scanner hangs due to negative num_running_scanners

Before the patch, num_running_scanners was increased after submitting, so it could be decreased before being increased. Negative values could then be seen by get_block_from_queue, and an expected submit did not happen.
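A minimal sketch of the fix (hypothetical names, not the Doris scheduler): increase the counter before submitting and roll it back if the submit fails, so a scanner that finishes quickly can never drive the counter negative.

```
#include <atomic>
#include <cstdio>
#include <functional>

struct ScannerContextSketch {
    std::atomic<int> num_running_scanners{0};

    bool submit(const std::function<bool(std::function<void()>)>& pool_submit) {
        num_running_scanners.fetch_add(1);                 // before submit, not after
        bool ok = pool_submit([this] {
            // ... scan and enqueue blocks ...
            num_running_scanners.fetch_sub(1);             // scanner done
        });
        if (!ok) {
            num_running_scanners.fetch_sub(1);             // roll back on submit failure
        }
        return ok;
    }
};

int main() {
    ScannerContextSketch ctx;
    // A pool that runs the task immediately, so the decrement happens even
    // before the old post-submit increment would have run.
    auto inline_pool = [](std::function<void()> task) { task(); return true; };
    ctx.submit(inline_pool);
    std::printf("running scanners: %d\n", ctx.num_running_scanners.load()); // 0, never negative
}
```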

Co-authored-by: Mingyu Chen <morningman.cmy@gmail.com>
2023-11-13 23:03:49 +08:00
f698bb7be2 [Feature](inverted index) index tool add match_all and match_phrase (#26896) 2023-11-13 22:53:11 +08:00