This CL modifies `evalExpr()` of ExpressionFunctions so that it no longer rewrites a
`FunctionCallExpr` to a `NullLiteral` when a UDF has a null parameter, which fixes the
problem described in ISSUE: #2913
1. Add some comments to make the code easier to understand;
2. Make the metric `create_tablet_requests_failed` accurate;
3. Change some internal methods to use raw pointers directly instead of `shared_ptr`;
4. Move `using` declarations from `.h` files into `.cpp` files, because `using` in a
header is contagious to every file that includes it;
5. Some formatting changes, such as wrapping lines that are too long;
6. For parameters that need to be modified, use pointers instead of references.
No functional changes in this patch.
Thread pool design point:
All tasks submitted directly to the thread pool enter a FIFO queue and are
dispatched to a worker thread when one becomes free. Tasks may also be
submitted via ThreadPoolTokens. The token wait() and shutdown() functions
can then be used to block on logical groups of tasks.
A token operates in one of two ExecutionModes, determined at token
construction time:
1. SERIAL: submitted tasks are run one at a time.
2. CONCURRENT: submitted tasks may be run in parallel.
This isn't unlike submitting without a token, but the logical grouping that tokens
impart can be useful when a pool is shared by many contexts (e.g. to
safely shut down one context, to derive context-specific metrics, etc.).
Tasks submitted without a token or via ExecutionMode::CONCURRENT tokens are
processed in FIFO order. On the other hand, ExecutionMode::SERIAL tokens are
processed in a round-robin fashion, one task at a time. This prevents them
from starving one another. However, tokenless (and CONCURRENT token-based)
tasks can starve SERIAL token-based tasks.
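Below is a minimal usage sketch of the token mechanism described above. It assumes a Kudu-style builder and snake_case method names (`ThreadPoolBuilder`, `submit_func`, `new_token`), which may not match the actual Doris signatures exactly; error handling is omitted.
```
#include <memory>
#include "util/threadpool.h"

void thread_pool_token_example() {
    std::unique_ptr<ThreadPool> pool;
    // Build a shared pool; error handling is omitted for brevity.
    ThreadPoolBuilder("example_pool")
            .set_min_threads(2)
            .set_max_threads(8)
            .build(&pool);

    // Tokenless submission: FIFO dispatch to any free worker thread.
    pool->submit_func([] { /* do some work */ });

    // SERIAL token: tasks submitted through this token run one at a time,
    // giving a logical group its own ordering inside the shared pool.
    std::unique_ptr<ThreadPoolToken> token =
            pool->new_token(ThreadPool::ExecutionMode::SERIAL);
    token->submit_func([] { /* step 1 */ });
    token->submit_func([] { /* step 2, runs after step 1 */ });

    // Block on just this logical group, then shut down the whole pool.
    token->wait();
    pool->shutdown();
}
```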
Thread design point:
1. It is a thin wrapper around pthread that can register itself with the singleton ThreadMgr
(a private class implemented entirely in thread.cpp, which tracks all live threads so
that they may be monitored via the debug webpages). This class has a limited subset of
boost::thread's API. Construction is almost the same, but clients must supply a
category and a name for each thread so that they can be identified in the debug web
UI. Otherwise, join() is the only supported method from boost::thread.
2. Each Thread object knows its operating system thread ID (TID), which can be used to
attach debuggers to specific threads, to retrieve resource-usage statistics from the
operating system, and to assign threads to resource control groups.
3. Threads are shared objects, but in a degenerate way. They may only have
up to two referents: the caller that created the thread (parent), and
the thread itself (child). Moreover, the only two methods to mutate state
(join() and the destructor) are constrained: the child may not join() on
itself, and the destructor is only run when there's one referent left.
These constraints allow us to access thread internals without any locks.
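For illustration, here is a minimal sketch of creating and joining such a thread, assuming a Kudu-style `Thread::create(category, name, functor, &holder)` factory; the exact signature in Doris may differ.
```
#include "util/thread.h"

void spawn_worker_example() {
    scoped_refptr<Thread> worker;
    Status st = Thread::create(
            "storage_engine",        // category shown in the debug web UI
            "example_worker",        // per-thread name shown in the debug web UI
            [] { /* thread body */ },
            &worker);
    if (!st.ok()) {
        return;  // thread creation failed
    }
    // The parent and the thread itself are the only two referents; the child
    // may not join() on itself, so only the parent calls join().
    worker->join();
}
```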
1. When reading a column data page:
for compaction, schema_change, and check_sum we do not use the page cache;
for query, if config::disable_storage_page_cache is false, we use the page cache.
2. When reading a column index page:
if config::disable_storage_page_cache is false, we use the page cache (see the sketch below).
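A rough sketch of this decision as a single predicate; the enum and parameter names below are illustrative, not the real Doris identifiers.
```
enum class ReaderType { QUERY, COMPACTION, SCHEMA_CHANGE, CHECK_SUM };

bool use_page_cache(bool is_index_page, ReaderType reader_type,
                    bool disable_storage_page_cache) {
    if (disable_storage_page_cache) {
        return false;  // config::disable_storage_page_cache turns the cache off entirely
    }
    if (is_index_page) {
        return true;   // index pages go through the cache whenever it is enabled
    }
    // Data pages: only query reads use the cache; compaction, schema change and
    // checksum reads do not.
    return reader_type == ReaderType::QUERY;
}
```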
In the current implementation, the state of the table will not be set back until the next round of job scheduling, so there may be tens of seconds between job completion and the table state changing to NORMAL.
I also narrowed the synchronized scope by replacing the synchronized methods with synchronized blocks, which may solve the problem described in #2903.
1. MonoTime/MonoDelta
MonoTime: represents a particular point in time, relative to some fixed but unspecified reference point.
MonoDelta: represents an elapsed duration of time, i.e. the delta between two MonoTime instances.
2. CountDownLatch
This is a C++ implementation of Java's CountDownLatch (a short usage sketch of both utilities follows this list).
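The sketch below assumes Kudu-style method names (`MonoTime::Now`, `GetDeltaSince`, `ToMilliseconds`, `count_down`, `wait`), which may not match the Doris headers exactly.
```
#include "util/countdown_latch.h"
#include "util/monotime.h"

void timing_and_latch_example() {
    // MonoTime: a point on a monotonic clock; MonoDelta: the distance between two of them.
    MonoTime start = MonoTime::Now();
    // ... do some work ...
    MonoDelta elapsed = MonoTime::Now().GetDeltaSince(start);
    int64_t elapsed_ms = elapsed.ToMilliseconds();  // e.g. feed into a metric
    (void)elapsed_ms;

    // CountDownLatch: block until two worker tasks have each counted down once.
    CountDownLatch latch(2);
    // each worker calls latch.count_down() when it finishes ...
    latch.wait();  // returns once the internal count reaches zero
}
```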
scoped_refptr is used to replace std::shared_ptr; it is generally faster and smaller.
Advantages:
(1) only requires a single allocation, and ref count is on the same cache line as the object
(2) the pointer only requires 8 bytes (since the ref count is within the object)
(3) you can manually increase or decrease reference counts when more control is required
(4) you can convert from a raw pointer back to a scoped_refptr safely without worrying about double freeing
(5) since we control the implementation, we can implement features, such as debug builds that capture the stack trace of every referent to help debug leaks.
Disadvantages:
(1) the referred-to object must inherit from RefCounted
(2) does not support the weak_ptr use cases
The code submitted later will use this utility class.
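For illustration, here is a minimal usage sketch, assuming a Chromium/Kudu-style `ref_counted.h` that provides a `RefCountedThreadSafe` base class; the header path and base-class name are assumptions.
```
#include "gutil/ref_counted.h"

class Counter : public RefCountedThreadSafe<Counter> {
public:
    void add(int64_t v) { _value += v; }
    int64_t value() const { return _value; }

private:
    int64_t _value = 0;
};

void scoped_refptr_example() {
    // One allocation holds both the object and its ref count, and the smart pointer
    // itself is a single raw pointer (8 bytes).
    scoped_refptr<Counter> c(new Counter());
    c->add(1);

    // Converting a raw pointer back into a scoped_refptr is safe, because the count
    // lives inside the object rather than in a separate control block.
    Counter* raw = c.get();
    scoped_refptr<Counter> c2(raw);  // c and c2 now share the same Counter
}
```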
Currently only factory methods for various file types are provided.
In the future, utility methods that are common to all Env types can
be added here.
Mainly contains the following modifications:
1. Use `std::unique_ptr` to replace some raw pointers
2. Change some member methods into local static functions
3. Change some methods that do not need to be public to private
4. Some formatting changes, such as wrapping lines that are too long
5. Remove some useless variables
6. Add or modify some comments for easier understanding
No functional changes in this patch.
Fix a bug where using grouping sets without including all columns in a grouping set item produced wrong values.
Fix the grouping function check not working in the GROUP BY clause.
When we need to ensure that **a newly-created file** is fully
synchronized back to disk, we should call `fsync()` on the parent
directory, that is, the directory containing the newly-created file.
In other words, in this situation, we should call `fsync()` on
both the newly-created file and its parent directory.
Unfortunately, Doris currently does not fsync directories in any
scenario.
This patch adds `sync_dir()` interface first, laying the groundwork
for future fixes.
This patch also removes unneeded private method `dir_exists()`.
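For reference, here is a minimal sketch of what syncing a directory typically looks like with raw POSIX calls; the actual `sync_dir()` interface added by this patch may differ in signature and error handling.
```
#include <fcntl.h>
#include <unistd.h>
#include <string>

// Returns 0 on success, -1 on failure (errno is set by the failing call).
int sync_parent_dir(const std::string& dir_path) {
    int fd = ::open(dir_path.c_str(), O_RDONLY);  // a directory can be opened read-only
    if (fd < 0) {
        return -1;
    }
    // fsync() on the directory fd flushes the directory entry (the name -> inode
    // mapping), so the newly-created file is still reachable after a crash.
    int ret = ::fsync(fd);
    ::close(fd);
    return ret;
}
```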
Currently, the report from BE to FE is completed in the background
threads of `AgentServer` (`report_tablet_thread` and
`report_disk_stat_thread`). These two threads sleep in a standby state
after each report; if there is any need to report immediately, they are
notified and wake up right away to report.
For example, when the background thread (`disk_monitor_thread`) in
`StorageEngine` finds that some tablets were deleted, it notifies
`AgentServer` to trigger a report immediately.
In the current implementation, in order to report ASAP, a local variable
(`_is_drop_tables`) and two other flags are used to record whether
reporting is needed, and `StorageEngine::disk_monitor_thread` checks
the value of this variable every time it runs to determine whether
reporting needs to be triggered. This is actually superfluous, and it
may result in untimely notifications, as shown below:
```
(thread_1) (thread_2)
disk-monitor disk-stat-reporter
| |
| reporting
| |
notify_1 |
| |
| wait_for_notify(will wait until timeout or next notification)
| |
V V
```
When `report_tablet_thread` has not yet started waiting,
`StorageEngine::disk_monitor_thread` triggers a notification, so this
notification will not be received by `report_tablet_thread`,
and the BE will not report to the FE until the wait times out
or until the next round of `disk_monitor_thread` detection.
This change restructures the triggering implementation and solves the above problem.
This change also makes some methods (that do not need to be public) private.
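For illustration, a minimal sketch of the standard pattern that avoids the lost-notification race shown in the diagram: the trigger is recorded in a flag guarded by the same mutex as the condition variable, so a notify that fires before the reporter starts waiting is not lost. The class and member names here are illustrative, not the actual Doris ones.
```
#include <chrono>
#include <condition_variable>
#include <mutex>

class ReportTrigger {
public:
    // Called by disk_monitor_thread (or anyone else) to request an immediate report.
    void notify() {
        std::lock_guard<std::mutex> lock(_mu);
        _report_needed = true;   // the request is remembered even if nobody is waiting yet
        _cv.notify_one();
    }

    // Called by the report thread between reporting rounds.
    // Returns early if a report was requested, otherwise waits up to `timeout`.
    void wait_for_trigger(std::chrono::seconds timeout) {
        std::unique_lock<std::mutex> lock(_mu);
        _cv.wait_for(lock, timeout, [this] { return _report_needed; });
        _report_needed = false;  // consume the request before the next round
    }

private:
    std::mutex _mu;
    std::condition_variable _cv;
    bool _report_needed = false;
};
```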
In StorageEngine, the variable _min_percentage_of_error_disk was not
initialized (so it defaults to 0), which causes the process to exit
whenever one disk fails.
What we expect is to exit the process only when the number of
failed disks reaches a certain percentage.
Also, this variable should mean the maximum percentage of
error disks allowed, not the minimum, so change the configuration
name to max_percentage_of_error_disk.
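A rough sketch of the intended check; only the names mentioned above are reused, and the surrounding StorageEngine code is simplified.
```
#include <cstdlib>

void check_error_disks(int failed_disks, int total_disks,
                       int max_percentage_of_error_disk) {
    if (total_disks <= 0) {
        return;
    }
    // Exit only when the share of failed disks reaches the configured maximum,
    // instead of exiting as soon as a single disk fails.
    if (failed_disks * 100 >= total_disks * max_percentage_of_error_disk) {
        exit(1);
    }
}
```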
TabletsChannel may be written to after cancellation, leading to a core dump at DeltaWriter::write. We should check the state of TabletsChannel at the beginning of each operation.
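A rough sketch of the state check; the state names and methods below are illustrative, not the exact TabletsChannel interface.
```
#include <mutex>

class TabletsChannel {
public:
    // Returns false if the channel is no longer open, so the caller skips the write.
    bool add_batch(/* batch omitted for brevity */) {
        std::lock_guard<std::mutex> lock(_lock);
        // Check the state first: a write arriving after cancel() must be rejected
        // instead of being forwarded to DeltaWriters that were already released.
        if (_state != State::kOpened) {
            return false;
        }
        // ... dispatch rows to the per-tablet DeltaWriters ...
        return true;
    }

    void cancel() {
        std::lock_guard<std::mutex> lock(_lock);
        _state = State::kCancelled;
        // ... cancel and release the DeltaWriters ...
    }

private:
    enum class State { kInitialized, kOpened, kCancelled, kFinished };
    std::mutex _lock;
    State _state = State::kOpened;  // assume open() already succeeded, for brevity
};
```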
It is not necessary to perform compaction in the following cases:
1. A tablet has only 2 rowsets and the versions are [0-1] and [2-x]. In this case,
there is no need to perform base compaction, because the [0-1] version is an empty version.
Some tables are partitioned by day and each partition loads only one batch of data
per day, so a large number of tablets with rowsets [0-1][2-2] appear, and these tablets
do not need base compaction.
2. The initial value of the `last successful execution time of compaction` is 0,
so the very first check of the interval since the last successful compaction
always satisfies the condition to trigger cumulative compaction.
(A rough sketch of both checks follows this list.)
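The sketch below uses simplified placeholders for the tablet internals (rowset metadata fields, last-success timestamp); it only illustrates the two conditions described above.
```
#include <cstdint>
#include <vector>

struct RowsetMeta {
    int64_t start_version;
    int64_t end_version;
};

// Case 1: skip base compaction when the tablet only has the empty [0-1] rowset
// plus one other rowset.
bool need_base_compaction(const std::vector<RowsetMeta>& rowsets) {
    if (rowsets.size() == 2 && rowsets[0].start_version == 0 &&
        rowsets[0].end_version == 1) {
        return false;
    }
    return true;  // fall through to the normal policy checks
}

// Case 2: if compaction has never succeeded, the "time since last success" rule
// should not fire on the very first check.
bool interval_rule_satisfied(int64_t now_ms, int64_t last_success_ms,
                             int64_t interval_ms) {
    if (last_success_ms == 0) {
        return false;  // no successful compaction yet; don't treat 0 as "long ago"
    }
    return now_ms - last_success_ms >= interval_ms;
}
```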
When using an iterator over _tablet_map.tablet_arr (a `std::list`) to remove
a tablet, we should first remove the tablet from _partition_map to avoid
the iterator becoming invalid.
In `AgentServer`, each task type needs to be processed separately,
which leads to very long code that is hard to read and makes errors
easy to miss (for example, the handling of a task type may be
omitted, or the mapping between task type and worker pool may be wrong).
Fortunately, the code for each task_type is very similar, so this
is a good case for using a `MACRO`, which can greatly reduce the repeated
code and solve the problems above, as sketched below.
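A rough sketch of the idea with simplified stand-in types; the real code dispatches Thrift task requests to per-type worker pools, but the shape of the macro is the same.
```
#include <iostream>
#include <string>

enum class TaskType { CREATE_TABLET, DROP_TABLET, PUSH };

struct Task {
    TaskType type;
    bool has_create_req = false;
    bool has_drop_req = false;
    bool has_push_req = false;
};

struct WorkerPool {
    std::string name;
    void submit(const Task&) { std::cout << "submitted to " << name << "\n"; }
};

// One macro expansion per task type replaces a near-identical hand-written block:
// check that the type-specific request field is set, then hand off to the right pool.
#define HANDLE_TASK(TYPE, POOL, HAS_REQ)              \
    case TaskType::TYPE:                              \
        if (task.HAS_REQ) {                           \
            (POOL).submit(task);                      \
        } else {                                      \
            ok = false; /* request payload missing */ \
        }                                             \
        break;

bool dispatch(const Task& task, WorkerPool& create_pool, WorkerPool& drop_pool,
              WorkerPool& push_pool) {
    bool ok = true;  // reset per task, mirroring the status_code fix mentioned below
    switch (task.type) {
        HANDLE_TASK(CREATE_TABLET, create_pool, has_create_req)
        HANDLE_TASK(DROP_TABLET, drop_pool, has_drop_req)
        HANDLE_TASK(PUSH, push_pool, has_push_req)
        default:
            ok = false;  // unknown task type
            break;
    }
    return ok;
}
```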
This patch also fixes two small bugs:
1. The `_topic_subscriber` member was not released in the destructor.
2. In `submit_tasks()`, the `status_code` was not reset before
each task was processed, resulting in wrong judgments.
No functional changes in this patch.
When constructing `Schema` objects, two similar `init` functions
need to be called, and the required call order is only implicit, which
makes them easy to misuse. At the same time, some of the existing comments
are missing or out of date, which can be misleading.
This patch unifies the initialization logic of `Schema`.
No functional changes in this patch.
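For illustration, one common way to unify such initialization is to funnel every constructor through a single private `_init()`, so there is no implicit ordering for callers to get wrong. This is only a sketch with illustrative member names, not the actual `Schema` implementation.
```
#include <vector>

class Column { /* ... column metadata ... */ };

class Schema {
public:
    // Both public entry points delegate to the same _init(), so every construction
    // path runs the same steps in the same order.
    Schema(const std::vector<Column>& cols, size_t num_key_columns) {
        _init(cols, num_key_columns);
    }

    Schema(const Schema& other) { _init(other._cols, other._num_key_columns); }

private:
    void _init(const std::vector<Column>& cols, size_t num_key_columns) {
        _cols = cols;
        _num_key_columns = num_key_columns;
        // ... build the column-id -> index mapping, compute offsets, etc. ...
    }

    std::vector<Column> _cols;
    size_t _num_key_columns = 0;
};
```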