Commit Graph

326 Commits

5032b7fe7a Support materialized view schema change in bitmap, hll and count fields [#3739] (#3873)
+ Build the materialized view function for schema_change here based on defineExpr.
+ This is a trick because the current storage layer does not support expression evaluation.
+ A count distinct materialized view will set mv_expr to to_bitmap or hll_hash.
+ A count materialized view will set mv_expr to count (a selection sketch follows below).
+ Support regenerating historical data in BE when a new materialized view is created:
    + Support the to_bitmap function
    + Support the hll_hash function
    + Support the count(field) function
For #3344
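As a rough illustration of the mv_expr selection above (the function and names here are hypothetical, not the actual schema-change code):
```
#include <string>

// Hypothetical sketch of the mv_expr selection described above; the real logic
// lives in the FE/BE schema-change code and these names are illustrative only.
std::string choose_mv_expr(const std::string& mv_agg_function) {
    if (mv_agg_function == "bitmap_union") return "to_bitmap";  // count distinct via bitmap
    if (mv_agg_function == "hll_union")    return "hll_hash";   // count distinct via HLL
    if (mv_agg_function == "count")        return "count";      // count(field)
    return "";  // no mv_expr: plain column copy, no expression evaluation needed
}
```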
2020-07-16 10:45:15 +08:00
9b0ad66b78 [runtime] Replace the thread pool in FragmentMgr (#4057) 2020-07-15 10:03:48 +08:00
1bfb105ec1 [Bug] Fix bug that routine load task throw exception when calling afterVisible() (#3979) 2020-07-01 09:22:33 +08:00
93a0b47d22 Revert "[Memory Engine] MemTablet creation and compatibility handling in BE (#3762)" (#3931)
This reverts commit ca96ea30560c9e9837c28cfd2cdd8ed24196f787.
2020-06-24 10:13:45 +08:00
c50a310f8f [optimize] Optimize spark load/broker load reading parquet format files (#3878)
Add a BufferedReader for reading parquet files via broker
2020-06-23 13:42:22 +08:00
f189a2e7b8 [Spark load][Be 1/1] BE handles push task (#3742)
1. Add a PushBrokerReader in push_handle.cpp.
2. PushBrokerReader wraps the ParquetScanner to support reading data from parquet format files through broker.
2020-06-22 19:57:58 +08:00
8cd36f1c5d [Spark Load] Support java version hyperloglog (#3320)
Mainly used in the Spark Load process to calculate approximate deduplication values and then serialize them to a parquet file.
Try to keep the same calculation semantics as BE's C++ version.
2020-06-21 09:37:05 +08:00
fdd65c50c4 [Bug] fix mem_tracker use-after-free & add UT for it (#3899) 2020-06-20 19:08:53 +08:00
51367abce7 [Bug] Fix bug that BE crashes when doing an Insert operation (#3872)
Main changes:
1. Fix the bug in `update_status(status)` of `PlanFragmentExecutor`.
2. When the FE Coordinator executes `execRemoteFragmentAsync()`, if it finds an RPC error, return a Future with an error code instead of throwing an exception.
3. Protect the `_status` in RuntimeState with a lock.
4. Move the `_runtime_profile` of RuntimeState before the `_obj_pool`, so that the profile will be destructed after the object pool (see the sketch below).
5. Remove the unused `ObjectPool` param in the RuntimeProfile constructor. If it is not removed, RuntimeProfile would depend on the `_obj_pool` in RuntimeProfile.
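Point 4 relies on a C++ guarantee: data members are destructed in reverse declaration order. A minimal illustration (the struct below mirrors the idea, not the real RuntimeState):
```
#include <iostream>

struct Member {
    const char* name;
    explicit Member(const char* n) : name(n) {}
    ~Member() { std::cout << "destroying " << name << "\n"; }
};

// Declaring _runtime_profile before _obj_pool guarantees it is destructed after it.
struct RuntimeStateLike {
    Member _runtime_profile{"_runtime_profile"};
    Member _obj_pool{"_obj_pool"};
};

int main() {
    RuntimeStateLike s;
    // On scope exit this prints "destroying _obj_pool", then "destroying _runtime_profile".
}
```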
2020-06-19 17:09:04 +08:00
ca96ea3056 [Memory Engine] MemTablet creation and compatibility handling in BE (#3762) 2020-06-18 09:56:07 +08:00
0224d49842 [Fix][Bug] Fix compile bug (#3888)
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
2020-06-16 18:42:04 +08:00
6c4d7c60dd [Feature] Add QueryDetail to store query statistics. (#3744)
1. Store the query statistics in memory.
2. Support a RESTful interface to get the statistics.
2020-06-15 18:16:54 +08:00
2211cb0ee0 [Metrics] Add metrics document and 2 new metrics of TCP (#3835) 2020-06-15 09:48:09 +08:00
8caedadb67 use scoped_refptr to new HashIndex (#3818) 2020-06-10 23:47:10 +08:00
e4dc2ec440 [StorageEngine] Make StorageEngine::open return more detailed info (#3761)
StorageEngine::open just returns a very vague status when it fails,
so we have to check logs to find the root cause, which is not
convenient when we run unit tests in CI dockers.
It would be better to return more detailed failure info pointing out
the root cause, for example an error status with the message
"file descriptors limit is too small".
2020-06-07 10:21:33 +08:00
3b6a781862 [Bug] Fix a bug that tablet's _preferred_rowset_type may be modified to BETA_ROWSET after cloning (#3750)
TabletMeta's _preferred_rowset_type is not initialized after object construction and
may hold a random value; this field is not updated when creating an ALPHA_ROWSET tablet,
and in that case it is not serialized into the pb. So when cloning an ALPHA_ROWSET
tablet from another BE, the newly created local tablet's _preferred_rowset_type field
may randomly be BETA_ROWSET and cannot be overwritten after cloning; new input
rows will then be written in BETA_ROWSET format, which is not what we expect.
This patch fixes the bug by giving _preferred_rowset_type a default value and updating
this field when creating any type of tablet, and adds a unit test and the related overloaded
equality operator functions.
2020-06-06 11:36:28 +08:00
70aa9d6ca8 [Memory Engine] Add MemTabletScan (#3734) 2020-06-03 15:42:38 +08:00
60f93b2142 Fix bitmap type (#3749) 2020-06-03 10:07:58 +08:00
7524c5ef63 [Memory Engine] Add MemSubTablet, MemTablet, WriteTx, PartialRowBatch (#3637) 2020-05-30 10:33:10 +08:00
c967eaf496 [Memory Engine] Add TabletType to PartitionInfo and TabletMeta (#3668) 2020-05-29 20:20:44 +08:00
e76f712bb3 [Bug] Fix erroneous data in json load 2020-05-28 17:28:33 +08:00
1cc78fe69b [Enhancement] Convert metric to Json format (#3635)
Add a JSON format for existing metrics, like this:
```
{
    "tags":
    {
        "metric":"thread_pool",
        "name":"thrift-server-pool",
        "type":"active_thread_num"
    },
    "unit":"number",
    "value":3
}
```
I add a new JsonMetricVisitor to handle the transformation (a rough sketch follows below).
It does not modify the existing PrometheusMetricVisitor and SimpleCoreMetricVisitor.
I also add:
1. A unit item to better describe the metric.
2. Cloning tablet statistics divided by database.
3. Whitespace instead of newlines in audit.log.
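As a rough sketch of the transformation (the actual JsonMetricVisitor API may differ), producing the JSON shape above with rapidjson:
```
#include <rapidjson/stringbuffer.h>
#include <rapidjson/writer.h>
#include <cstdint>
#include <string>

// Hypothetical helper that emits one metric in the JSON shape shown above;
// the real visitor walks Doris' metric registry instead of taking raw strings.
std::string metric_to_json(const std::string& metric, const std::string& name,
                           const std::string& type, const std::string& unit,
                           int64_t value) {
    rapidjson::StringBuffer buf;
    rapidjson::Writer<rapidjson::StringBuffer> w(buf);
    w.StartObject();
    w.Key("tags");
    w.StartObject();
    w.Key("metric"); w.String(metric.c_str());
    w.Key("name");   w.String(name.c_str());
    w.Key("type");   w.String(type.c_str());
    w.EndObject();
    w.Key("unit");   w.String(unit.c_str());
    w.Key("value");  w.Int64(value);
    w.EndObject();
    return buf.GetString();
}
```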
2020-05-27 08:49:30 +08:00
3ffc447b38 [OUTFILE] Support INTO OUTFILE to export query result (#3584)
This CL mainly changes:

1. Support the `SELECT INTO OUTFILE` command.
2. Support exporting query results to a file via Broker.
3. Support CSV export format with a specified column separator and line delimiter.
2020-05-25 21:24:56 +08:00
6788cacb94 Fix unit test failures (#3642)
Some unit tests failed due to glog: we changed the ut build dir, and the log path does not exist in the new build dir, so we change logging from file to stdout.
2020-05-25 18:55:19 +08:00
12ebd5d82b Remove some outdated tests (#3672) 2020-05-25 09:23:56 +08:00
ba7d2dbf7b [Function] Support utf-8 encoding in instr, locate, locate_pos, lpad, rpad (#3638)
Support utf-8 encoding for the string functions `instr`, `locate`, `locate_pos`, `lpad`, `rpad`,
and add unit tests for them.
2020-05-22 14:34:26 +08:00
16deac96a9 [UT][Bug] Fix the ut error of bitmap_intersect (#3664)
Change-Id: Id32fd9381119f30786acae9b4ac61b0d5ef9df48
2020-05-22 10:29:12 +08:00
ef8fd1fcbe [Load] Support loading json data into Doris via RoutineLoad or StreamLoad (#3553)
Doris supports loading json data via RoutineLoad or StreamLoad
2020-05-21 13:00:49 +08:00
0d66e6bd15 Support bitmap_intersect (#3571)
* Support bitmap_intersect

Support the aggregate function bitmap_intersect; it is mainly used to take the intersection of grouped data.
The function 'bitmap_intersect(expr)' calculates the intersection of bitmap columns and returns a bitmap object.
The definition is as follows:
FunctionName: bitmap_intersect,
InputType: bitmap,
OutputType: bitmap

The scenario is as follows:
Query which users satisfy the three tags a, b, and c at the same time.

```
select bitmap_to_string(bitmap_intersect(user_id)) from
(
    select bitmap_union(user_id) user_id from bitmap_intersect_test
    where tag in ('a', 'b', 'c')
    group by tag
) a
```
Closed #3552.

* Add docs of bitmap_union and bitmap_intersect

* Support null of bitmap_intersect
2020-05-20 21:12:02 +08:00
c54cb4b14e [Memory Engine] Add column reader/writer (#3580) 2020-05-20 11:09:30 +08:00
7fb74db0a1 [Trace] Introduce trace util to BE
Ref https://github.com/apache/incubator-doris/issues/3566
Introduce the trace utility from Kudu to BE. This utility has been widely used in Kudu,
and Impala has also imported it.
The trace util is used for tracing each phase in a thread, and can be dumped to a
string to see each phase's time cost and diagnose which phase costs more time.
The util stores a Trace object as a thread-local variable; we can add trace entries
that record the current file name, line number, user-specified symbols and a
timestamp to this object, and counters can also be added to the Trace
object. It can then be dumped to a human-readable string.
There are some helpful macros defined in trace.h, here is a simple example for
usage:
```
  scoped_refptr<Trace> t1(new Trace);            // New 2 traces
  scoped_refptr<Trace> t2(new Trace);
  t1->AddChildTrace("child_trace", t2.get());    // t1 add t2 as a child named "child_trace"

  TRACE_TO(t1, "step $0", 1);  // Explicitly trace to t1
  usleep(10);
  // ... do some work
  ADOPT_TRACE(t1.get());   // Explicitly adopt to trace to t1
  TRACE("step $0", 2);     // Implicitly trace to t1
  {
    // The time spent in this scope is added to counter t1.scope_time_cost
    TRACE_COUNTER_SCOPE_LATENCY_US("scope_time_cost");
    ADOPT_TRACE(t2.get());  // Adopt to trace to t2 for the duration of the current scope
    TRACE("sub start");     // Implicitly trace to t2
    usleep(10);
    // ... do some work
    TRACE("sub before loop");
    for (int i = 0; i < 10; ++i) {
      TRACE_COUNTER_INCREMENT("iterate_count", 1);  // Increase counter t2.iterate_count

      MicrosecondsInt64 start_time = GetMonoTimeMicros();
      usleep(10);
      // ... do some work
      MicrosecondsInt64 end_time = GetMonoTimeMicros();
      int64_t dur = end_time - start_time;
      // t2's simple histogram metric with name prefixed with "lbm_writes"
      const char* counter = BUCKETED_COUNTER_NAME("lbm_writes", dur);
      TRACE_COUNTER_INCREMENT(counter, 1);
    }
    TRACE("sub after loop");
  }
  TRACE("goodbye $0", "cruel world");     // Automatically restore to trace to t1
  std::cout << t1->DumpToString(Trace::INCLUDE_ALL) << std::endl;
```
output looks like:
```
0514 02:16:07.988054 (+     0us) trace_test.cpp:76] step 1
0514 02:16:07.988112 (+    58us) trace_test.cpp:80] step 2
0514 02:16:07.988863 (+   751us) trace_test.cpp:103] goodbye cruel world
Related trace 'child_trace':
0514 02:16:07.988120 (+     0us) trace_test.cpp:85] sub start
0514 02:16:07.988188 (+    68us) trace_test.cpp:88] sub before loop
0514 02:16:07.988850 (+   662us) trace_test.cpp:101] sub after loop
Metrics: {"scope_time_cost":744,"child_traces":[["child_trace",{"iterate_count":10,"lbm_writes_lt_1ms":10}]]}
```
Excluding the original source code, this patch
does the following work to adapt to Doris:
- Rename the "kudu" namespace to "doris"
- Update some names to the existing function names in Doris, e.g. strings::internal::SubstituteArg::kNoArg -> strings::internal::SubstituteArg::NoArg
- Use doris::SpinLock instead of kudu::simple_spinlock, which hasn't been imported
- Use manual malloc() and free() instead of kudu::Arena, which hasn't been imported
- Use a manual rapidjson::Writer instead of kudu::JsonWriter, which hasn't been imported
- Remove all TRACE_EVENT related unit tests since TRACE_EVENT is not imported this time
- Update CMakeLists.txt

NOTICE(#3622):
This is a "revert of revert" pull request.
This PR mainly synthesizes the PRs whose commits were
scattered and submitted due to the wrong merge method into a complete single commit.
2020-05-18 14:55:11 +08:00
69a63f6f53 Revert "[trace] Introduce trace util to BE" (#3614)
This revert is used to correct the mess in the commit
timeline caused by the wrong merge method.
2020-05-18 13:16:39 +08:00
bb7ae97845 [trace] Introduce trace util to BE
Ref https://github.com/apache/incubator-doris/issues/3566
Introduce the trace utility from Kudu to BE. This utility has been widely used in Kudu,
and Impala has also imported it.
The trace util is used for tracing each phase in a thread, and can be dumped to a
string to see each phase's time cost and diagnose which phase costs more time.
The util stores a Trace object as a thread-local variable; we can add trace entries
that record the current file name, line number, user-specified symbols and a
timestamp to this object, and counters can also be added to the Trace
object. It can then be dumped to a human-readable string.
There are some helpful macros defined in trace.h, here is a simple example for
usage:
```
  scoped_refptr<Trace> t1(new Trace);            // New 2 traces
  scoped_refptr<Trace> t2(new Trace);
  t1->AddChildTrace("child_trace", t2.get());    // t1 add t2 as a child named "child_trace"

  TRACE_TO(t1, "step $0", 1);  // Explicitly trace to t1
  usleep(10);
  // ... do some work
  ADOPT_TRACE(t1.get());   // Explicitly adopt to trace to t1
  TRACE("step $0", 2);     // Implicitly trace to t1
  {
    // The time spent in this scope is added to counter t1.scope_time_cost
    TRACE_COUNTER_SCOPE_LATENCY_US("scope_time_cost");
    ADOPT_TRACE(t2.get());  // Adopt to trace to t2 for the duration of the current scope
    TRACE("sub start");     // Implicitly trace to t2
    usleep(10);
    // ... do some work
    TRACE("sub before loop");
    for (int i = 0; i < 10; ++i) {
      TRACE_COUNTER_INCREMENT("iterate_count", 1);  // Increase counter t2.iterate_count

      MicrosecondsInt64 start_time = GetMonoTimeMicros();
      usleep(10);
      // ... do some work
      MicrosecondsInt64 end_time = GetMonoTimeMicros();
      int64_t dur = end_time - start_time;
      // t2's simple histogram metric with name prefixed with "lbm_writes"
      const char* counter = BUCKETED_COUNTER_NAME("lbm_writes", dur);
      TRACE_COUNTER_INCREMENT(counter, 1);
    }
    TRACE("sub after loop");
  }
  TRACE("goodbye $0", "cruel world");     // Automatically restore to trace to t1
  std::cout << t1->DumpToString(Trace::INCLUDE_ALL) << std::endl;
```
output looks like:
```
0514 02:16:07.988054 (+     0us) trace_test.cpp:76] step 1
0514 02:16:07.988112 (+    58us) trace_test.cpp:80] step 2
0514 02:16:07.988863 (+   751us) trace_test.cpp:103] goodbye cruel world
Related trace 'child_trace':
0514 02:16:07.988120 (+     0us) trace_test.cpp:85] sub start
0514 02:16:07.988188 (+    68us) trace_test.cpp:88] sub before loop
0514 02:16:07.988850 (+   662us) trace_test.cpp:101] sub after loop
Metrics: {"scope_time_cost":744,"child_traces":[["child_trace",{"iterate_count":10,"lbm_writes_lt_1ms":10}]]}
```
Excluding the original source code, this patch
does the following work to adapt to Doris:
- Rename the "kudu" namespace to "doris"
- Update some names to the existing function names in Doris, e.g. strings::internal::SubstituteArg::kNoArg -> strings::internal::SubstituteArg::NoArg
- Use doris::SpinLock instead of kudu::simple_spinlock, which hasn't been imported
- Use manual malloc() and free() instead of kudu::Arena, which hasn't been imported
- Use a manual rapidjson::Writer instead of kudu::JsonWriter, which hasn't been imported
- Remove all TRACE_EVENT related unit tests since TRACE_EVENT is not imported this time
- Update CMakeLists.txt
2020-05-18 11:10:25 +08:00
123e1394b1 [Delete] Allow delete duplicated non-key column using delete from (#3424) 2020-05-15 09:26:36 +08:00
8406723912 adapt to Doris 2020-05-13 12:13:47 +00:00
e066791e47 import original files 2020-05-13 19:03:20 +08:00
a7cfafe076 [Memory Engine] add core column related classes (#3508)
add core column related classes
2020-05-13 16:30:32 +08:00
63fecc7954 Remove unused ColumnType (#3532) 2020-05-11 18:57:47 +08:00
5a57ecca15 [Doris On ES] Fix bug of query failure in doc_value_mode when fields have no value (#3513)
#3479 

Here I try to explain the cause of the problem and how to fix it.

**The Cause of The problem**
Take the case in issue(#3479 ) as an example:
The general results are as follows:
```
GET table/_doc/_search
{"query":{"match_all":{}},"stored_fields":"_none_","docvalue_fields":["k1"],"sort":["_doc"],"size":100}

{
  "took": 6,
  "timed_out": false,
  "_shards": {
    ……
  },
  "hits": {
    "total": 3,
    "max_score": null,
    "hits": [
      {
        "_index": "table",
        "_score": null,
        "sort": [
          0
        ]
      },
      {
        "_index": "table",
        "_score": null,
        "fields": {
          "k1": [
            "kkk1"
          ]
        },
        "sort": [
          0
        ]
      },
      {
        "_index": "table",
        "_score": null,
        "sort": [
          0
        ]
      }
    ]
  }
}
```

But in Doris on ES, BE fetches data in parallel from all shards and uses `filter_path` to reduce network cost. The process is as follows:
```
GET table/_doc/_search?preference=_shards:1&filter_path=_scroll_id,hits.hits._source,hits.total,_id,hits.hits._source.fields,hits.hits.fields
{"query":{"match_all":{}},"stored_fields":"_none_","docvalue_fields":["k1"],"sort":["_doc"],"size":100}

{
  "hits": {
    "total": 0
  }
}

GET table/_doc/_search?preference=_shards:2&filter_path=_scroll_id,hits.hits._source,hits.total,_id,hits.hits._source.fields,hits.hits.fields
{"query":{"match_all":{}},"stored_fields":"_none_","docvalue_fields":["k1"],"sort":["_doc"],"size":100}
{
  "hits": {
    "total": 1
  }
}

GET table/_doc/_search?preference=_shards:3&filter_path=_scroll_id,hits.hits._source,hits.total,_id,hits.hits._source.fields,hits.hits.fields
{"query":{"match_all":{}},"stored_fields":"_none_","docvalue_fields":["k1"],"sort":["_doc"],"size":100}
{
  "hits": {
    "total": 1,
    "hits": [
      {
        "fields": {
          "k1": [
            "kkk1"
          ]
        }
      }
    ]
  }
}
```
*The scan worker on BE which processes the result of shard 2 will fail.*

**The reasons are as follows:**
1. "filter_path" causes the hits.hits object to not exist.
2. In the current implementation, if there are some data rows (total > 0), the hits.hits object must be an array.

**How To Fix It**

Two methods:
1. Modify "filter_path" to contain the hits.
Pros: the fixed code is very simple.
Cons: more network cost.
2. Deal with the case where fields are missing in a batch.
Pros: no loss of performance.
Cons: the code is more complex.

Performance comes first, so I use Method 2.

**Design**
1. Add a variable "_doc_value_mode" to the class "EsScrollParser" to indicate whether the data processed by this parser is in doc_value mode or not.
2. "_doc_value_mode" is passed from ESScollReader <- ESScanner <- ScrollQueryBuilder::build(), which determines whether the DSL enables doc_value mode.
3. When hits.hits of the response from ES is empty and total > 0, we know there are data rows, but the corresponding fields do not exist. EsScrollParser will use "_doc_value_mode" and _total to construct _total rows whose fields are assigned NULL (see the sketch below).
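A minimal sketch of the step-3 handling, assuming hypothetical row/batch types (the actual EsScrollParser interface differs):
```
#include <cstddef>
#include <vector>

// Hypothetical type: a row whose fields may be NULL (nullptr).
struct Row {
    std::vector<const char*> fields;
};

// When filter_path strips hits.hits but total > 0 in doc_value mode,
// synthesize `total` rows whose fields are all NULL.
std::vector<Row> fill_all_null_rows(bool doc_value_mode, size_t total,
                                    bool has_hits_array, size_t field_count) {
    std::vector<Row> batch;
    if (doc_value_mode && !has_hits_array && total > 0) {
        Row null_row;
        null_row.fields.assign(field_count, nullptr);  // every field is NULL
        batch.assign(total, null_row);                 // one NULL row per hit
    }
    return batch;
}
```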
2020-05-11 15:34:12 +08:00
b576e54fe6 [ASAN] Fix some address problems detected by ASAN (#3495)
LSAN-detected errors were fixed by a prior patch (#3326), but
there are still some ASAN-detected errors.
This patch tries to fix these errors to make Doris BE more robust.
Then we can add CI runs in LSAN/ASAN mode to detect memory errors
as early as possible.
2020-05-11 10:30:45 +08:00
e2c3c84e8d [ut] disable background scan context gc to speed up unit tests (#3524)
Each test case in ExternalScanContextMgrTest may cost 1 minute,
which is too long; we'd better disable the background scan context
gc to speed up unit tests.
2020-05-09 09:01:05 +08:00
a656a7ddd4 Support append_trailing_char_if_absent function (#3439) 2020-05-09 08:59:34 +08:00
94b3a2bd50 [Bug] Fix string functions not supporting multibyte strings (#3345)
Let string functions support utf8 encoding
2020-05-08 12:52:46 +08:00
94539e7120 Non-blocking OlapTableSink (#3143)
Implementation notes
NodeChannel
_cur_batch -> _pending_batches: when _cur_batch is filled up, move it to _pending_batches.
add_row() just produces batches.
try_send_and_fetch_status() tries to consume one pending batch. If there is an in-flight packet, skip sending in this round.
So we can add one sender thread to be in charge of try_send for all node channels.

IndexChannel
init(), open() stay the same.
Use for_each_node_channel() to expose the detailed changes of NodeChannel. (It's easier to read & modify.)
Sender thread
See the func OlapTableSink::_send_batch_process() (sketched below).

Why use polling?
If we used wait/notify, it would notify when a new batch is generated. We can't skip sending that batch, because it won't notify for the same batch again. So wait/notify can't simply avoid blocking; that's why I chose polling.
It's wasteful to continuously try_send(), but it's difficult to set a suitable polling interval. Thus, I add std::this_thread::yield() to give up the time slice and give priority to other processes/threads (if there are others waiting in the queue).
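A minimal sketch of the polling sender thread described above (NodeChannel here is a stand-in, not the actual OlapTableSink implementation):
```
#include <atomic>
#include <thread>
#include <vector>

struct NodeChannel {
    std::atomic<bool> packet_in_flight{false};
    // Try to consume one pending batch; skip this round if a packet is in flight.
    void try_send_and_fetch_status() {
        if (packet_in_flight.load()) return;  // previous RPC not finished yet
        // ... pop one batch from _pending_batches and send it via RPC ...
    }
};

void send_batch_process(std::vector<NodeChannel*>& channels,
                        const std::atomic<bool>& done) {
    while (!done.load()) {
        for (NodeChannel* ch : channels) {
            ch->try_send_and_fetch_status();
        }
        // Polling instead of wait/notify: yield the time slice rather than
        // sleeping for a hard-to-pick interval, giving other threads priority.
        std::this_thread::yield();
    }
}
```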
2020-05-07 10:43:41 +08:00
7399997433 [Memory Engine] Add hash index implementation (#3462) 2020-05-06 23:37:25 +08:00
b58b1b3953 [metrics] Make DorisMetrics a real singleton (#3417) 2020-05-04 09:20:53 +08:00
72f3082358 [Metrics] Add some metrics for container size in BE (#3246)
We can observe the workload of BE, and it's also a way to check
whether there is any problem in BE, like some container growing
too large and leading to OOM.

This patch adds the following metrics:
```
Name                                   Description
rowset_count_generated_and_in_use      The total count of rowset id generated and in use since BE last start
unused_rowsets_count                   The total count of unused rowset waiting to be GC
broker_count                           The total count of brokers in management
data_stream_receiver_count             The total count of data stream receivers in management
fragment_endpoint_count                The total count of fragment endpoints of data stream in management, should always equal to data_stream_receiver_count
active_scan_context_count              The total count of active scan contexts
plan_fragment_count                    The total count of plan fragments in executing
load_channel_count                     The total count of load channels in management
result_buffer_block_count              The total count of result buffer blocks for queries, each block has a limited queue size (default 1024)
result_block_queue_count               The total count of queues for fragments, each queue has a limited size (default 20, by config::max_memory_sink_batch_count)
routine_load_task_count                The total count of routine load tasks in executing
small_file_cache_count                 The total count of cached small files' digest info
stream_load_pipe_count                 The total count of stream load pipes, each pipe has a limited buffer size (default 1M)
tablet_writer_count                    The total count of tablet writers
brpc_endpoint_stub_count               The total count of brpc endpoints
```
2020-04-25 16:13:39 +08:00
ad6698cd31 [Performance] Use Google/CCTZ to replace boost for timezone functions (#3300)
NOTICE: the thirdparty dependencies need to be upgraded to add libcctz.
2020-04-23 09:26:04 +08:00
4a7a88ede1 [LSAN] Fix some memory leak detected by LSAN (#3326) 2020-04-22 22:59:44 +08:00
22e90f7260 [SegmentV2] Fix bloom filter bits buffer not initialize as 0 (#3372) 2020-04-22 19:50:05 +08:00