Commit Graph

826 Commits

e20d905d70 Remove unused KUDU codes (#3175)
The KUDU table type has not been supported for a long time. Remove the code related to it.
2020-03-24 13:54:05 +08:00
d4c1938b5c Open datetime min value limit (#3158)
The min_value of datetime in olap/type.h is 0000-01-01 00:00:00, so we don't need to restrict the datetime minimum in tablet_sink.
2020-03-24 10:52:57 +08:00
dff3c0d57e Revert "Remove deep copy when doing hash table EvalRow (#3171)" (#3173) 2020-03-23 15:29:46 +08:00
dd8d748c55 Remove deep copy when doing hash table EvalRow (#3171)
Remove the varchar column deep copy in the partitioned hash table's EvalRow function.
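A minimal sketch of the idea with hypothetical names (not the actual EvalRow code): keep a pointer/length slice into the build row instead of copying the varchar payload into the hash table's own buffer.

```
#include <cstdint>

// StringValue-style slice: pointer + length, no ownership.
struct StringValue {
    const char* ptr;
    int64_t len;
};

// The build rows outlive the hash table during the join, so storing the
// slice is enough; no memcpy of the payload is needed.
inline void eval_varchar_slot(const StringValue& src, StringValue* dst) {
    *dst = src; // shallow copy instead of a deep copy of the bytes
}
```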
2020-03-21 09:52:49 +08:00
d29ed84b6a [Bug] Fix bug that right semi/anti join returns wrong results (#3167)
This bug was introduced by PR #3148.
Right semi/anti join cannot use `insert_unique` in the build phase of the join.
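A hedged sketch with hypothetical names: deduplicating the build side is only safe when the join never emits build rows.

```
// Right semi/anti joins emit (or suppress) build-side rows, so every
// duplicate build row must be kept; only left semi/anti joins may build
// a deduplicated hash table.
enum class JoinOp { LEFT_SEMI, LEFT_ANTI, RIGHT_SEMI, RIGHT_ANTI };

bool build_may_use_insert_unique(JoinOp op) {
    return op == JoinOp::LEFT_SEMI || op == JoinOp::LEFT_ANTI;
}
```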
2020-03-20 20:58:55 +08:00
47a3d5000b [UnitTest] Fix unit test bug in BetaRowset and PageCacheTest (#3157)
1. BlockManager has been added into StorageEngine, so StorageEngine should be
   initialized when starting the BetaRowset unit test.

2. The cache should not use the same buf to store a value; otherwise the
   address will be freed twice and cause a crash.
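A hedged illustration of the second point (not the actual cache code): every value inserted into the cache must own a distinct buffer, otherwise two cache deleters free the same address.

```
#include <cstddef>
#include <memory>
#include <vector>

// Allocate a distinct buffer per cached value; sharing one buf between
// two entries means eviction frees the same address twice and crashes.
std::vector<std::unique_ptr<char[]>> make_cache_values(size_t n, size_t size) {
    std::vector<std::unique_ptr<char[]>> values;
    values.reserve(n);
    for (size_t i = 0; i < n; ++i) {
        values.push_back(std::make_unique<char[]>(size));
    }
    return values;
}
```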
2020-03-20 20:37:50 +08:00
6beadfda71 [Bug] Fix delete predicate bug for segment v2 (#3164)
This bug occurs because the min and max wrapper fields are not initialized
when there is no predicate on that column.
2020-03-20 20:35:55 +08:00
2dc995df7b [CodeStyle] Rename new_partition_aggregation_node and new_partitioned_hash_table (#3166) 2020-03-20 19:59:01 +08:00
5a8fcd263f [CodeStyle] Delete obsolete code of partition_aggregation_node and partitioned_hash_table (#3162) 2020-03-20 16:25:29 +08:00
c08d6e4708 [tablet meta] Do some refactor on TabletMeta (#3136)
Remove the return value of some functions that always return OLAP_SUCCESS.
Optimize some loops.
2020-03-20 15:03:22 +08:00
2d3dbc2c42 Revert "[CodeStyle] Del obsolete code of partition_aggregation_node (#3154)" (#3160)
This reverts commit dae013d797c1c2c9e54246d5ace4bdd90b297d43.
2020-03-20 14:47:25 +08:00
5f004cb009 Revert "[CodeStyle] Remove unused PartitionedHashTable (#3156)" (#3159)
This reverts commit d3fd44f0a2fe076d2c62851babc162fcebe4d63b.
2020-03-20 14:42:40 +08:00
d3fd44f0a2 [CodeStyle] Remove unused PartitionedHashTable (#3156) 2020-03-20 12:19:08 +08:00
dae013d797 [CodeStyle] Del obsolete code of partition_aggregation_node (#3154) 2020-03-20 11:33:55 +08:00
f0db9272dd [Performance] Improve performance of hash join in some cases (#3148)
Improve the performance of hash join when the build table has too many duplicated rows; the duplicates cause hash table collisions and slow down probe performance.
In this PR, when the join type is semi join or anti join, we build a hash table without duplicated rows.
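The effect can be sketched with a set-like build table (a simplification with hypothetical names, not the actual `insert_unique` path):

```
#include <cstdint>
#include <unordered_set>
#include <vector>

// For semi/anti joins the probe side only asks "does this key exist?",
// so duplicated build keys just lengthen bucket chains. Inserting each
// key once keeps probes fast even on heavily skewed keys.
std::unordered_set<int64_t> build_semi_join_table(const std::vector<int64_t>& keys) {
    std::unordered_set<int64_t> table;
    table.reserve(keys.size());
    for (int64_t k : keys) {
        table.insert(k); // duplicates are silently dropped
    }
    return table;
}
```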
benchmark:
dataset: TPC-DS tables `store_sales` and `catalog_sales`
```
mysql> select count(*) from catalog_sales;
+----------+
| count(*) |
+----------+
| 14401261 |
+----------+
1 row in set (0.44 sec)

mysql> select count(distinct cs_bill_cdemo_sk) from catalog_sales;
+------------------------------------+
| count(DISTINCT `cs_bill_cdemo_sk`) |
+------------------------------------+
|                            1085080 |
+------------------------------------+
1 row in set (2.46 sec)

mysql> select count(*) from store_sales;
+----------+
| count(*) |
+----------+
| 28800991 |
+----------+
1 row in set (0.84 sec)

mysql> select count(distinct ss_addr_sk) from store_sales;
+------------------------------+
| count(DISTINCT `ss_addr_sk`) |
+------------------------------+
|                       249978 |
+------------------------------+
1 row in set (2.57 sec)
```

test queries:
query1: `select count(*) from (select store_sales.ss_addr_sk  from store_sales left semi join catalog_sales  on catalog_sales.cs_bill_cdemo_sk = store_sales.ss_addr_sk) a;`

query2: `select count(*) from (select catalog_sales.cs_bill_cdemo_sk from catalog_sales left semi join store_sales on catalog_sales.cs_bill_cdemo_sk = store_sales.ss_addr_sk) a;`

benchmark result:


||query1|query2|
|:--:|:--:|:--:|
|before|14.76 sec|3 min 16.52 sec|
|after|12.64 sec|10.34 sec|
2020-03-20 10:31:14 +08:00
12d1b072ef [Bug] Fix bug of union statement (#3137)
Fix a bug in const union queries like `select null union select null`. The type of the SlotDescriptor is null when the clause is `select null`, which causes a BE core dump and makes the FE find the wrong cast function.
2020-03-20 09:51:38 +08:00
b286f4271b Remove unused PreAggregtionNode (#3151) 2020-03-20 09:19:47 +08:00
0f14408f13 [Temp Partition] Support loading data into temp partitions (#3120)
Related issue: #2663, #2828.

This CL supports loading data into specified temporary partitions.

```
INSERT INTO tbl TEMPORARY PARTITIONS(tp1, tp2, ..) ....;

curl .... -H "temporary_partition: tp1, tp, .. "  ....

LOAD LABEL db1.label1 (
DATA INFILE("xxxx") 
INTO TABLE `tbl2`
TEMPORARY PARTITION(tp1, tp2, ...)
...
```

NOTICE: this CL changes the FE meta version to 77.

There are 3 major changes in this CL:

## Syntax reorganization

Reorganized the syntax related to `specify-partitions`: removed some redundant
syntax definitions and unified everything under one syntax entry.

## Meta refactor

In order to be able to support specifying temporary partitions, 
I made some changes to the way the partition information in the table is stored.

Partition information is now organized as follows:

The following two maps are reserved in OlapTable for storing formal partitions:

    ```
    idToPartition
    nameToPartition
    ```

Use the `TempPartitions` class for storing temporary partitions.

All the partition attributes of the formal partition and the temporary partition,
such as the range, the number of replicas, and the storage medium, are all stored
in the `partitionInfo` of the OlapTable.

In `partitionInfo`, we use two maps to store the range of formal partition
and temporary partition:

    ```
    idToRange
    idToTempRange
    ```

We use separate maps because the partition ranges of formal partitions and
temporary partitions may overlap; separate maps make checking the partition ranges easier.

All partition attributes except the partition range are stored in the same map,
keyed by partition id.
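A hedged sketch of the layout (the FE itself is Java; this C++-style sketch only illustrates the two-map idea):

```
#include <cstdint>
#include <map>

struct Range { int64_t lower, upper; };

// Formal and temporary partition ranges live in separate maps because
// their ranges may overlap; all other attributes (replica num, storage
// medium, ...) share one map keyed by partition id.
struct PartitionInfoSketch {
    std::map<int64_t, Range> id_to_range;       // formal partitions
    std::map<int64_t, Range> id_to_temp_range;  // temporary partitions
};
```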

## Method to get partition

A table may contain both formal and temporary partitions.
There are several methods to get a partition of a table,
typically divided into two categories:

1. Get partition by id
2. Get partition by name

According to different requirements, the caller may want to obtain
a formal partition or a temporary partition. The methods below
describe how to obtain the partition in the correct way.

1. Get by name

This type of request usually comes from a user with partition names, such as
`select * from tbl partition(p1);`.
Such a request carries clear information about whether to obtain a
formal or a temporary partition.
Therefore, we get the partition through this method:

`getPartition(String partitionName, boolean isTemp)`

To avoid modifying too much code, we keep `getPartition(String
partitionName)`, which is the same as:

`getPartition(partitionName, false)`

2. Get by id

This type of request usually means that the previous step has obtained
certain partition ids in some way,
so we only need to get the corresponding partition through this method:

`getPartition(long partitionId)`.

This method will try to get both formal partitions and temporary partitions.

3. Get all partition instances

Depending on the requirements, the caller may want to obtain all formal
partitions, all temporary partitions, or all partitions. Therefore we
provide 3 methods, and the caller chooses according to need:

`getPartitions()`
`getTempPartitions()`
`getAllPartitions()`
2020-03-19 15:07:01 +08:00
178bdcb16a Use DCHECK_GT instead when checking _tablet_map_lock_shard_size (#3138) 2020-03-18 19:41:04 +08:00
08e4035a41 1 (#3134) 2020-03-17 20:11:41 +08:00
d01b58bff6 Support 64 bit timestamp in from_unixtime (#3069)
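A hedged sketch of what a 64-bit conversion might look like (hypothetical helper, not the actual Doris builtin; the upper bound is an assumption):

```
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <ctime>

// Take the timestamp as int64_t and range-check it instead of truncating
// through a 32-bit integer. 253402300799 is 9999-12-31 23:59:59 UTC.
bool from_unixtime64(int64_t ts, char* buf, size_t len) {
    if (ts < 0 || ts > 253402300799LL) return false;
    time_t t = static_cast<time_t>(ts);
    struct tm tm_buf;
    if (gmtime_r(&t, &tm_buf) == nullptr) return false;
    return strftime(buf, len, "%Y-%m-%d %H:%M:%S", &tm_buf) != 0;
}
```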
2020-03-17 17:30:42 +08:00
33319d659b [Thrift] Fix a bug when ThriftClientImpl close() error (#3128)
In ThriftClientImpl close(), the underlying TTransport may throw an exception;
this patch catches the exception to avoid a crash.
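A hedged sketch of the fix, with a stand-in transport type (not the real TTransport API):

```
#include <exception>
#include <memory>
#include <stdexcept>

// Stand-in for apache::thrift::transport::TTransport.
struct Transport {
    void close() { throw std::runtime_error("connection reset by peer"); }
};

// close() on a broken transport may throw; catching here keeps connection
// teardown from crashing the process.
void safe_close(const std::shared_ptr<Transport>& transport) {
    try {
        if (transport) transport->close();
    } catch (const std::exception& e) {
        (void)e; // log a warning instead of propagating
    }
}
```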
2020-03-17 15:53:44 +08:00
0959abc1dc [ExceptNode] Implement except node (#3056)
Implement the except node,
supporting statements like:

``` 
select a from t1 except select b from t2
```
2020-03-17 10:54:40 +08:00
f6374fa9a5 Use default_rowset_type to replace compaction_rowset_type (#3101)
* use default_rowset_type to replace compaction_rowset_type

* segment v2 usage document
2020-03-16 22:23:48 +08:00
a80e9bf229 Fix broker scan node mem limit check (#3123) 2020-03-16 20:36:46 +08:00
ee06ce31ba [Bug] Fix bug that the file_block_mgr object was incorrectly destructed (#3122)
While a `block` is in use, some methods of the block manager are referenced,
so `file_block_mgr` should be a resident, globally unique object.

I put it in `StorageEngine`.

TODO: the `BlockManager`, `Env`  need to be reorganized.
2020-03-16 17:07:27 +08:00
64a06ea9d4 [UT] Fix some BE unit tests (#3110)
Also support graceful exit for StorageEngine to avoid hanging too long
in unit tests.
2020-03-16 13:31:44 +08:00
42931d22cb [Bug] tablet meta is not updated correctly after compaction (#3098)
This CL tries to fix a potential bug described in issue #3097, but I'm not sure this is the root cause.

Also remove lots of verbose logs and fix a memory leak.
2020-03-14 23:39:11 +08:00
aa540966c6 Output null for hll and bitmap column when select * (#2991) 2020-03-13 11:59:30 +08:00
c5660fcb9d [UT]Fix unit test for cgroup_util (#3094)
Co-authored-by: wangcong18 <wangcong18@xiaomi.com>
2020-03-12 22:59:40 +08:00
8276c6d7f8 Show BE version in 'show backends;' (#3074)
In a large-scale cluster we may rolling-upgrade BEs. This patch adds a
column named 'Version' to the 'show backends;' command, as well as to the
website '/system?path=//backends', to provide a way to check whether any
BE has missed the upgrade.
2020-03-12 22:15:13 +08:00
905070f4da [CodeStyle] Fix compile warning (#3076)
```
be/src/olap/rowset/segment_v2/ordinal_page_index.cpp:103:22: warning: ‘ordinal’ may be used
uninitialized in this function [-Wmaybe-uninitialized]
    _ordinals[i] = ordinal;
```
2020-03-11 18:17:29 +08:00
bf9612e28b [CodeStyle] Remove unnecessary forward declaration of WritableFile (#3075) 2020-03-11 18:17:11 +08:00
a77515fe03 [Backup] Fix backup job block at SNAPSHOTING phase (#3058)
This bug occurred when the BE made a snapshot: the version required by the FE had been merged into the cumulative version, so the snapshot task could not complete even after retries. To solve this problem, the BackupJob can now be set to CANCELLED, and the user can retry the job.

Fix #3057
2020-03-11 14:05:02 +08:00
608917c04d Use block layer to write files (#3064)
This is the second patch following 58b8e3f574614433ea9e0c427961f2efb3476c2a.

This patch uses the block layer to write files.
2020-03-11 12:11:25 +08:00
b9b9a11eae [Bug] Fix invalid rollback for stream load txn (#3054) 2020-03-09 22:07:36 +08:00
a1f5b57011 Support sharding tablet_map_lock into smaller map locks to improve the performance of tablet management tasks (#3051)
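A minimal sketch of the sharding idea (hypothetical names; the shard count is an assumption):

```
#include <array>
#include <cstddef>
#include <cstdint>
#include <shared_mutex>

// Hash the tablet id onto one of N smaller locks so that operations on
// different tablets rarely contend, instead of serializing on a single
// global tablet_map_lock.
constexpr size_t kTabletMapLockShards = 128; // assumed shard count

std::array<std::shared_mutex, kTabletMapLockShards> tablet_map_locks;

std::shared_mutex& lock_for_tablet(int64_t tablet_id) {
    return tablet_map_locks[static_cast<uint64_t>(tablet_id) % kTabletMapLockShards];
}
```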
2020-03-09 16:29:56 +08:00
dc07182bd4 [Intersect] Implements intersect node (#3034)
Implement the intersect node.
It now supports statements like `select a from t intersect select b from t1 intersect select 1;`.
2020-03-09 10:52:55 +08:00
c83729435f Write delete predicate into RowsetMeta upon upgrade from Doris-0.10 to Doris-0.11 (#3044)
If delete predicates exist in the meta in Doris-0.10, all of these predicates
should be retained. There is a confusing point in Doris-0.10: the delete
predicate only exists in OLAPHeaderMessage and PPendingDelta, not in PDelta.
This quirk results in this bug.
2020-03-07 11:16:48 +08:00
1d296e907d Fix orc load timestamp bug (#3047)
The timestamp value loaded from an ORC file is wrong: it has an offset compared with Hive and Spark.
Because the time zone of ORC's timestamp is stored inside ORC's stripe information, the timestamp obtained here is an offset timestamp, so parsing the timestamp with UTC yields the actual datetime literal.
2020-03-06 18:03:27 +08:00
fca6c4e523 Fix bitmap null crash (#3042) 2020-03-05 21:30:32 +08:00
4ed99e3c0c [Compile] Fix BE compile failure (#3040)
Fix a BE compile failure caused by a BloomFilterIndexWriter bug.
2020-03-05 11:38:42 +08:00
63051a3b37 [Bug] Fix int128 bloom filter write bug (#2995)
std::set::insert of an int128 core dumps because of a segmentation fault.
The reason is that __int128 is not properly aligned.
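A hedged illustration (not the Doris fix itself) of why misaligned __int128 access crashes:

```
#include <cstddef>
#include <cstdio>

int main() {
    // __int128 requires 16-byte alignment on x86-64.
    std::printf("alignof(__int128) = %zu\n", alignof(__int128)); // prints 16
    alignas(16) char buf[32];
    auto* ok = reinterpret_cast<__int128*>(buf); // 16-byte aligned: fine
    *ok = 42;
    // reinterpret_cast<__int128*>(buf + 8) would be misaligned: undefined
    // behavior, and aligned 128-bit loads/stores may fault.
    return 0;
}
```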
2020-03-05 09:15:11 +08:00
cc1a5fb8ea [Function] Support '%' in date format string (#3037)
e.g.:
select str_to_date('2014-12-21 12%3A34%3A56', '%Y-%m-%d %H%%3A%i%%3A%s');
select unix_timestamp('2007-11-30 10:30%3A19', '%Y-%m-%d %H:%i%%3A%s');

This also enables us to extract column fields from HDFS file paths that contain '%'.
2020-03-05 08:56:02 +08:00
50af594c66 [MemLimit] Normalize the setting of mem limit (#3033)
Normalize the setting of the mem limit to avoid unexpected exceptions.
For example, the user may not set the query mem limit in the query plan,
which may cause a BE crash.
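A hedged sketch of the normalization (hypothetical names and an assumed default, not the actual Doris code):

```
#include <algorithm>
#include <cstdint>

// Treat a missing/zero query mem limit as a default and clamp it to the
// process limit, rather than trusting an unset value from the plan.
int64_t normalize_query_mem_limit(int64_t requested, int64_t process_limit) {
    constexpr int64_t kDefaultQueryMemLimit = 2LL << 30; // 2GB (assumed default)
    if (requested <= 0) requested = kDefaultQueryMemLimit;
    return std::min(requested, process_limit);
}
```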
2020-03-05 08:47:45 +08:00
f17924650f [Config] Modify brpc max_body_size to 200M (#3030)
The default max size per row is 100KB and the default row batch size is 2048,
so a batch can reach roughly 100KB × 2048 = 200MB. We therefore change the default brpc max_body_size to 200MB to avoid query failures.
2020-03-04 15:30:27 +08:00
aa58cd99d9 Fix disks_total_capacity metric bug (#2988)
Currently the disks_total_capacity metric reports a user-specified capacity,
while disks_avail_capacity reports the disk's actual available capacity, so
disks_total_capacity may be less than disks_avail_capacity, and
UsedPct on the FE may be negative as a result.
We'd better use the disk's actual capacity for the disks_total_capacity metric.
2020-03-02 19:09:50 +08:00
0d1e28746e [Function] Support null_or_empty function (#2977)
It returns true if the string is empty or NULL. Otherwise it returns false.
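A minimal sketch of the semantics (hypothetical signature, not the actual builtin):

```
#include <cstddef>

// true iff the value is NULL or the empty string
bool null_or_empty(const char* ptr, size_t len) {
    return ptr == nullptr || len == 0;
}
```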
2020-03-01 17:35:45 +08:00
58b8e3f574 [Fs Block] Add block layer to storage-engine (#2983)
The abstraction of the Block layer, inspired by Kudu, lies between the "business
layer" and the "underlying file storage layer" (`Env`), making them no longer
strongly coupled.

In this way, for the business layer (such as `SegmentWriter`),
there is no need to directly do the file operation, which will bring better
encapsulation. An ideal situation in the future is: when we need to support a
new file storage system, we only need to add a corresponding type of
BlockManager without modifying the business code (such as `SegmentWriter`).

With the Block layer, there are some benefits:

1. First and foremost, the mapping relationship between data and `Env` is more
   flexible. For example, in the storage engine, the data of the tablet can be
   placed in multiple file systems (`Env`) at the same time. That is, one-to-many
   relationships can be supported: for example, one copy on local storage and
   one on remote storage.
2. The mapping relationship between blocks and files can be adjusted; it need
   not be one-to-one. For example, the data of multiple blocks can be stored
   in one physical file, which reduces the number of files that need to be
   opened during querying, like the `LogBlockManager` in Kudu.
3. We can move the opened-file cache under the Block layer, which can
   automatically close and reopen the files used by the upper layer, so that
   the upper business level does not need to be aware of file handle limits
   at all (a problem that is often encountered in production now).
4. Better automatic cleanup logic when there are exceptions. For example, a block
   that is not closed explicitly can automatically clean up its corresponding file,
   thereby avoiding generating most garbage files.
5. More convenient batch file creation and deletion. Some business operations,
   such as compaction, create multiple files. At present these files are
   processed one by one: 1) creation; 2) writing data; 3) fsync to disk. In
   fact this is not necessary: we only need to fsync the whole batch of files
   at the end. The advantage is that it gives the operating system more
   opportunities to merge IO, thereby improving performance. However, this
   operation is relatively tedious and need not be coupled with the business
   code; the Block layer is an ideal place to put it.

This is the first patch: it just adds the related classes, laying the groundwork
for later switching of the read and write logic.
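A hedged sketch of what such a block interface might look like (names are hypothetical and simplified, not the actual classes):

```
#include <cstddef>
#include <memory>
#include <string>

// The business layer (e.g. SegmentWriter) writes through WritableBlock and
// never touches files or Env directly.
class WritableBlock {
public:
    virtual ~WritableBlock() = default;
    virtual bool append(const char* data, size_t size) = 0;
    virtual bool close() = 0; // flushes and makes the block durable
};

// Maps a logical block id to whatever storage backs it: one local file per
// block, a slice of a shared file, or remote storage.
class BlockManager {
public:
    virtual ~BlockManager() = default;
    virtual std::unique_ptr<WritableBlock> create_block(const std::string& id) = 0;
};
```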
2020-03-01 10:48:00 +08:00
f2d2e4bffd [Unused] Remove unused GC function in DataDir (#3019) 2020-02-28 21:47:41 +08:00