This CL mainly makes the following changes:
1. Add two new FE modules:
    1. `fe-common`: holds the common classes shared by other modules, currently only `jmockit`.
    2. `spark-dpp`: the Spark DPP application for Spark Load. All DPP-related classes, including unit tests, have been moved into this module.
2. Change `build.sh`:
    Add a new param `--spark-dpp` to compile the `spark-dpp` module alone, while `--fe` compiles all FE modules (see the example after this list).
    The output of the `spark-dpp` module is `spark-dpp-1.0.0-jar-with-dependencies.jar`, and it is installed to `output/fe/spark-dpp/`.
3. Fix some bugs of Spark Load.
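For example (the exact invocation may vary with your environment): `sh build.sh --spark-dpp` builds only the Spark DPP jar, while `sh build.sh --fe` builds all FE modules.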
This PR supports grammar like the following: INSTALL PLUGIN FROM [source] [PROPERTIES("KEY"="VALUE", ...)]
Users can set md5sum="xxxxxxx" in the properties, so we no longer need to provide a separate md5 URI.
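For example (with a hypothetical source URL): INSTALL PLUGIN FROM "http://example.com/my_plugin.zip" PROPERTIES("md5sum"="xxxxxxx")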
When FE uploads the spark archive, the broker may fail to write the file,
resulting in a bad file being uploaded to the repository.
Therefore, to prevent Spark from reading bad files, the upload is divided
into two steps: the first step uploads the file, and the second step renames the file with its MD5 value.
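A minimal Java sketch of the idea, with hypothetical helper names (the real upload goes through the broker; a local directory stands in for the remote repository here):

```java
import java.io.InputStream;
import java.nio.file.*;
import java.security.DigestInputStream;
import java.security.MessageDigest;

// Hedged sketch with hypothetical names: write under a temporary name first,
// then rename to the final name that embeds the MD5. Readers only look for
// the final name, so a half-written file is never picked up.
public class TwoStepUpload {
    static String md5Hex(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = new DigestInputStream(Files.newInputStream(file), md)) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) { /* drain the stream to update the digest */ }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static void upload(Path local, Path repoDir) throws Exception {
        Path tmp = repoDir.resolve(local.getFileName() + ".tmp");
        Files.copy(local, tmp, StandardCopyOption.REPLACE_EXISTING); // step 1: upload
        Path fin = repoDir.resolve("__lib_" + md5Hex(local) + "_" + local.getFileName());
        Files.move(tmp, fin, StandardCopyOption.REPLACE_EXISTING);   // step 2: rename
    }
}
```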
This PR ensures that a dropped db, table, or partition is in a normal state after being recovered by the user. Committed txns cannot be aborted, because the partitions' committed versions have already changed, and some tablets may already have new visible versions. If the user simply does not want the meta (db, table, or partition) anymore, use drop force instead of drop to skip the committed-txn check.
from/to_base64 may return an incorrect value when the input is null (#4130).
- Remove the duplicated base64 code.
- Fix the wrong base64-encoded string length, which could cause a memory error.
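The length arithmetic is the crux of the second fix; a small Java illustration (the actual fix is in the BE's C++ base64 code, this just shows the sizing):

```java
import java.util.Base64;

// Hedged illustration: the encoded size must be computed as 4 * ceil(n / 3)
// (plus a NUL terminator in C). Sizing the output buffer from the input
// length n undersizes it, which is the kind of memory error described above.
public class Base64Len {
    static int encodedLength(int n) {
        return 4 * ((n + 2) / 3); // padded base64 output length
    }

    public static void main(String[] args) {
        byte[] input = "doris".getBytes();
        String encoded = Base64.getEncoder().encodeToString(input);
        // 5 input bytes -> 8 output chars, not 5
        assert encoded.length() == encodedLength(input.length);
        System.out.println(encoded + " (" + encoded.length() + " chars)");
    }
}
```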
This PR replaces AtomicLong with LongAdder or volatile long in some scenarios.
In the statistical-summation scenario, LongAdder (introduced in JDK 1.8) performs better than AtomicLong under highly concurrent updates. And if we only need the get and set operations on a variable to be atomic, declaring the variable volatile is enough; AtomicLong is a little heavyweight for that.
NOTE: LongAdder is usually preferable to AtomicLong when multiple threads update a common sum used for purposes such as collecting statistics, not for fine-grained synchronization control such as auto-increment IDs.
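A minimal sketch of both replacements (a hypothetical class, not Doris's actual code):

```java
import java.util.concurrent.atomic.LongAdder;

// Hedged sketch: LongAdder for a write-heavy statistics counter. Many
// threads call onRowsLoaded(); sum() is only read occasionally, which is
// exactly the pattern where LongAdder beats AtomicLong.
public class LoadStats {
    private final LongAdder rowsLoaded = new LongAdder();
    // volatile is enough when we only need atomic get/set, no CAS loops.
    private volatile long lastLoadTimeMs;

    public void onRowsLoaded(long rows) { rowsLoaded.add(rows); }
    public long totalRows() { return rowsLoaded.sum(); }

    public void setLastLoadTimeMs(long t) { lastLoadTimeMs = t; }
    public long getLastLoadTimeMs() { return lastLoadTimeMs; }
}
```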
### Summary
When users use Spark Load, they have to upload the dependent jars to HDFS every time.
This CL adds a self-generated repository under the working_dir folder in HDFS for saving the dependencies of the Spark DPP program and the Spark platform.
Note that the dependencies we upload to the repository include:
1. `spark-dpp.jar`
2. `spark2x.zip`
The first is the DPP library built from the spark-dpp submodule; see PR #4146 for details about the spark-dpp submodule.
The second is the Spark 2.x platform library, which contains all jars in $SPARK_HOME/jars.
**The repository structure** will be like this:
```
__spark_repository__/
|-__archive_1_0_0/
| |-__lib_990325d2c0d1d5e45bf675e54e44fb16_spark-dpp.jar
| |-__lib_7670c29daf535efe3c9b923f778f61fc_spark-2x.zip
|-__archive_2_2_0/
| |-__lib_64d5696f99c379af2bee28c1c84271d5_spark-dpp.jar
| |-__lib_1bbb74bb6b264a270bc7fca3e964160f_spark-2x.zip
|-__archive_3_2_0/
| |-...
```
The following conditions will force FE to upload the dependencies:
1. FE finds that its dppVersion is absent in the repository.
2. The MD5 value of a remote file does not match the local file.
Before FE uploads the dependencies, it creates an archive directory named `__archive_{dppVersion}` under the repository.
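A minimal Java sketch of the upload decision, with hypothetical names; it only mirrors the naming scheme and the two conditions above:

```java
import java.util.Map;

// Hedged sketch with hypothetical names: decide whether a dependency must be
// (re-)uploaded. `remoteLibs` maps a file name (e.g. "spark-dpp.jar") to the
// MD5 embedded in its remote __lib_{md5}_{name} entry, or is null when the
// archive directory for this dppVersion is absent.
public class RepositoryCheck {
    static String archiveDir(String dppVersion) {
        return "__spark_repository__/__archive_" + dppVersion;
    }

    static String remoteLibName(String md5, String fileName) {
        return "__lib_" + md5 + "_" + fileName;
    }

    // Condition 1: the archive for this dppVersion is missing entirely.
    // Condition 2: the remote MD5 does not match the local file's MD5.
    static boolean needUpload(Map<String, String> remoteLibs,
                              String fileName, String localMd5) {
        if (remoteLibs == null) return true;
        return !localMd5.equals(remoteLibs.get(fileName));
    }
}
```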
#4076
1. The visibleVersionTime is updated when data is inserted into a partition.
2. GlobalTransactionMgr calls partition.updateVisibleVersionAndVersionHash(version, versionHash) when FE is restarted.
3. So if FE restarts, the visibleVersionTime may change, but the changed value is newer than the old value.
Currently, we only check the database's used data quota when creating or altering a table, or in some old-style load jobs, but not for routine load and stream load jobs. This PR provides a uniform solution that checks the db's used data quota when a data load job begins a new txn.
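A minimal sketch of the uniform check, with hypothetical names rather than the actual Doris API:

```java
// Hedged sketch with hypothetical names: one quota check, called from the
// single place where every load type (broker, routine, stream) begins a txn.
public class DataQuotaChecker {
    interface Db {
        long getDataQuotaBytes(); // configured quota
        long getUsedDataBytes();  // current data size of all replicas
    }

    static void checkDataQuota(Db db) throws Exception {
        long used = db.getUsedDataBytes();
        long quota = db.getDataQuotaBytes();
        if (used >= quota) {
            throw new Exception("database used data quota exceeded: "
                    + used + " >= " + quota);
        }
    }
    // beginTransaction(...) would call checkDataQuota(db) before creating the txn.
}
```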
This PR optimizes the logic for processing unfinishedTasks when a transaction's publish times out; we find the errorReplica along the
"db -> table -> partition -> index -> tablet(backendId) -> replica" path.
This CL mainly includes:
- Add some methods in `env` to get thread stats from Linux's system files.
- Support getting thread stats via HTTP.
- Register a page handler in BE to show thread stats, helping developers locate thread-related problems.
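The BE implementation is C++; the following Java sketch only illustrates where the data comes from (per-thread stat files under procfs):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

// Hedged sketch: read per-thread stats from /proc/<pid>/task/<tid>/stat,
// which is the Linux system file the CL's methods would be reading.
public class ThreadStats {
    public static void main(String[] args) throws IOException {
        long pid = ProcessHandle.current().pid();
        try (Stream<Path> tasks = Files.list(Path.of("/proc/" + pid + "/task"))) {
            tasks.forEach(tid -> {
                try {
                    // Each stat line holds utime, stime, etc. for one thread.
                    System.out.println(Files.readString(tid.resolve("stat")).trim());
                } catch (IOException e) {
                    // The thread may have exited between list() and read.
                }
            });
        }
    }
}
```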
This PR mainly does the following three things:
1. Add the thread name to the FE log to make problems easier to trace.
2. Add the agent_task_resend_wait_time_ms config to avoid sending duplicate agent tasks to BE.
3. Skip updating the replica version when the new version is lower than the replica version in FE.
A Unique Key table may load duplicate rows across different loads.
If duplicate rows exist between loads, compaction will merge those rows.
The statistics should take this merged number into consideration.
Currently we miss the merged number, so compaction encounters an error.
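A hedged sketch of the (simplified) row accounting that must hold once merged rows are counted; names and the exact formula are illustrative, not the actual BE check:

```java
// Simplified, hypothetical invariant: rows read by compaction must equal the
// rows written out plus the rows merged away by the unique-key merge.
// Omitting mergedRows makes this check fail and compaction error out.
public class CompactionRowCheck {
    static boolean rowCountConsistent(long inputRows, long outputRows, long mergedRows) {
        return inputRows == outputRows + mergedRows;
    }

    public static void main(String[] args) {
        // Two loads each wrote the same key once: 2 input rows merge into 1.
        System.out.println(rowCountConsistent(2, 1, 1)); // true
    }
}
```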
Add a log to record compaction point changes.
When OLAP_ERR_BE_SEGMENTS_OVERLAPPING happens,
the log will be used to track the bug. Add a link to issue #4134.
Using slice->data to create a HyperLogLog executes HyperLogLog(Slice(const char*)), and Slice(const char*) uses strlen(data) to calculate the size. But the slice in this unit test is not a C-string, so we need to pass the Slice itself (with its explicit length) instead.
This commit mainly supports creating bitmap_union, hll_union, and count materialized views.
* The main changes are as follows:
1. When creating a materialized view, Doris performs semantic analysis of the newly supported aggregate functions.
Only bitmap_union(to_bitmap(column)), hll_union(hll_hash(column)), and count(column) are supported.
2. Match the correct materialized view when querying.
After the user sends a query, if there is a possibility of matching a materialized view, the query is rewritten first.
Such as:
Table: k1 int, k2 int
MV: k1 int, mv_bitmap_union_k2 bitmap bitmap_union
mv_bitmap_union_k2 = to_bitmap(k2)
Query: select k1, count(distinct k2) from Table
Since the materialized view column matches the query column, the query is rewritten as:
Rewritten query: select k1, bitmap_union_count(mv_bitmap_union_k2) from Table
Then, when the materialized views are matched, the rewritten query can be matched to the materialized view table.
Sometimes the rewritten query may not match any materialized view, which means the rewriting failed. The query then needs to be re-parsed and executed again.
Related issue: #4017. The main changes are as follows:
1. Add expired_snapshot_rs_version_map and _expired_snapshot_rs_metas.
2. Add VersionedRowsetTracker to record compacted path versions.
3. Record the path version when rowsets are compacted.
4. In the GC process, add expired snapshot rowsets to the unused set for removal.
* [Bug] Fix bug that the tablet meta lock is taken twice
The tablet meta lock may already be held before calling
generate_tablet_meta_copy(), so we need to provide an unlocked
version of generate_tablet_meta_copy() (a sketch of this pattern follows below).
* fix typo
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
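A minimal Java sketch of the pattern (hypothetical names; the real code is C++ in BE): the locking wrapper delegates to an unlocked variant, so a caller that already holds the lock calls the unlocked version directly and the lock is never taken twice:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hedged sketch with hypothetical names, illustrating the *_unlocked pattern.
public class TabletMetaHolder {
    private final ReentrantReadWriteLock metaLock = new ReentrantReadWriteLock();
    private Object tabletMeta = new Object();

    // Public entry point: takes the lock, then delegates.
    public Object copyMeta() {
        metaLock.readLock().lock();
        try {
            return copyMetaUnlocked();
        } finally {
            metaLock.readLock().unlock();
        }
    }

    // Caller must already hold metaLock; this method never locks again.
    Object copyMetaUnlocked() {
        return tabletMeta; // the real code would make a deep copy here
    }
}
```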
This commit mainly supports loading into bitmap_union, hll_union, and count materialized views.
The main changes are as follows:
1. The insert stmt supports loading extended columns.
2. The load stmt supports loading extended columns.
Issue: #3344
Co-authored-by: HangyuanLiu <460660956@qq.com>
cpp-mustache is a C++ implementation of a Mustache template engine
with support for RapidJSON. In order to simplify RapidJSON object
building, we introduce the EasyJson class from Apache Kudu.
The result may be wrong when ORC load reads a negative decimal value.
When loading a negative decimal that has leading zeros in its fraction, the result is wrong.
E.g. for -0.0014, the ORC result is -14 (precision ... 0).
When `disable_colocate_balance` is set to false and some BEs are set to decommission,
`ColocateBalancer#balanceGroup` will choose decommissioned BEs to place tablets,
which is not right.
Fixes #4102
+ Build the materialized view function for schema_change here based on defineExpr.
+ This is a trick because the current storage layer does not support expression evaluation.
+ A count distinct materialized view will set mv_expr to to_bitmap or hll_hash.
+ A count materialized view will set mv_expr to count.
+ Support regenerating historical data when a new materialized view is created in BE.
+ Support the to_bitmap function.
+ Support the hll_hash function.
+ Support the count(field) function.
For #3344