Commit Graph

196 Commits

SHA1 Message Date
67cdc723ac Fix bug that only REPLICA_MISSING repair needs to create a new replica (#590) 2019-01-25 17:56:42 +08:00
9d71a930a2 Fix bug that repair slot may not be released when clone finished (#589) 2019-01-25 16:49:15 +08:00
9a272f0592 Optimize something (#585)
1. Optimize the error msg of the tablet scheduler.
2. Fix missing helper nodes info when modifying Frontends.
3. Fix bug that the olap tablet's header lock is not released.
2019-01-25 14:27:33 +08:00
bc7e7409ca Allow repairing VERSION_INCOMPLETE tablet when ALTERing table (#583)
Previously we did not allow repairing a tablet if the table it belongs
to is under an ALTER process. But that could cause the alter job to
fail due to some replica's failure to load.
2019-01-24 15:39:05 +08:00
cd7a2c3fd5 Refactor CreateTableTest (#579) 2019-01-24 13:56:41 +08:00
aeb89ab4d3 Add ColocateMetaService (#562) 2019-01-24 11:20:12 +08:00
c82879cb2e Fix bug that heartbeat error msg may be null (#574) 2019-01-23 17:24:29 +08:00
079141e14a Add disk usage percent in SHOW BACKEND stmt (#571) 2019-01-23 14:08:33 +08:00
9e240d432a Fix replica version report bug (#569)
Replicas with a version hash equal to 0 should be handled correctly.
2019-01-22 16:48:24 +08:00
e11bdf2db5 Remove unit of measurement about query statistics in audit log (#568) 2019-01-22 14:38:39 +08:00
09df294898 Fix some bugs (#566)
1. Backup obj should set state to NORMAL.
2. Replica with version 1-0 should be handled correctly.
2019-01-22 12:21:55 +08:00
e80f6ed86a Fix incorrect HLL type length when creating table (#565) 2019-01-22 11:22:49 +08:00
f7155217bf Remove build rows counter in PartitionHashJoinNode (#557)
* Remove build rows counter in PartitionHashJoinNode
* Fix unit test fail in RuntimeProfileTest
* Add check for result type length in cast_to_string_val
2019-01-21 14:08:59 +08:00
54e98f6964 Auto fix missing version replica (#560) 2019-01-21 08:56:43 +08:00
51c128c8d1 Skip balance when colocate group is balancing (#548) 2019-01-18 14:13:42 +08:00
717285db1e Remove unused code about showing current queries (#552) 2019-01-18 09:53:40 +08:00
723ef04f51 Fix string truncation error in CastExpr. (#551) 2019-01-17 17:38:34 +08:00
d15bc83de0 Fix some bugs of alter table operation (#550)
1. Fix bug that querying a restored table failed after schema change.
2. Fix bug that adding a rollup to a restored table failed.
3. Optimize the info of SHOW ALTER TABLE stmt.
4. Optimize the info of some PROCs.
5. Optimize the tablet checker to avoid adding too many tasks to the scheduler.
2019-01-17 15:17:51 +08:00
5cb1c161a4 Fix colocate join balance bug (#547) 2019-01-17 14:58:08 +08:00
6bef41633c Add DORIS_THIRDPARTY env in docker image (#539)
* Add param of specified thirdparty path
1. The thirdparty path can be specified via build.sh: ./build.sh --thirdparty /specified/path/to/thirdparty
2. If only the --thirdparty param is given to build.sh, it will build both FE and BE.
3. Add unit tests for routine load stmt.
4. Remove source code from the docker image.

* Add DORIS_THIRDPARTY env in docker image
1. Set the DORIS_THIRDPARTY env in the docker image; build.sh will use /var/local/thirdparty instead of /source/code/thirdparty.
2. Remove the --thirdparty param from build.sh.

* Change image workdir to /root
2019-01-17 14:19:13 +08:00
0e5b193243 Add CPU and IO indicators to audit log (#531) 2019-01-17 12:43:15 +08:00
33b133c6ff Fix bug that internal retry of stream load returns wrong result (#541)
Add an internally generated timestamp as a unique identifier to distinguish a request from its retry.
2019-01-16 18:59:19 +08:00
798a66e6a0 Implement new tablet repair and balance framework (#336)
For more detail, see issue #540.
2019-01-16 13:29:17 +08:00
f20c99fd09 Support storage migration (#534)
Add a migration lock to lock the data when doing storage migration.
2019-01-15 12:53:24 +08:00
0fcbe15280 Change the default bdbje sync policy to SYNC (#519) 2019-01-10 19:06:29 +08:00
0b50617542 Fix BE core if WHEN expr is null in CASE-WHEN clause (#521)
#518
2019-01-10 13:40:28 +08:00
d372b04e42 Revert "Add CPU and IO indicators to audit log (#513)" (#520) 2019-01-10 12:44:09 +08:00
This reverts commit 5192e2f010308eefffa5271b0bdc947dfd9168ae.
2019-01-10 12:44:09 +08:00
5192e2f010 Add CPU and IO indicators to audit log (#513)
Record query consumption into the FE audit log. Its basic mode of work is as follows: one instance of the parent plan is responsible for accumulating the sub-plans' consumption and sending it to its parent; the BE coordinator will get the total consumption because it is a single instance.
2019-01-09 22:28:20 +08:00
69f9987abd EsTable without partition info (#511) 2019-01-09 11:14:19 +08:00
92b138121b Support IO and CPU indicators for current query (#497)
Helps locate big queries when the system is overloaded, by checking the consumption of the running parts of all current queries or of one specified query. Its basic mode of work is as follows: first, trigger the BEs to report RuntimeProfiles and wait a moment; second, calculate consumption from the RuntimeProfiles reported by the BEs. The consumption it reports is the cost of the ExecNodes running in the query at the time of the call.
2019-01-08 10:59:42 +08:00
cbf1f99a46 Fix parse es state failed in unit test (#502) 2019-01-07 14:13:26 +08:00
9bfd8d818a Add md5 property for UDF create statement (#500) 2019-01-06 19:45:04 +08:00
483c5a971e Add routine load statement (#456)
1. Add SQL parser and scanner support for routine load stmts, with new keywords such as KW_ROUTINE (ROUTINE) and KW_PAUSE.
2. Create routine load statement like (a worked example follows this entry):
      CREATE ROUTINE LOAD name ON database.table
      (properties of routine load)
      [PROPERTIES (key1=value1, )]
      FROM [KAFKA](type of routine load)
      (properties of this type)

      properties of routine load:
          The load properties of CreateRoutineLoadStmt are order-independent: both 'LoadColumnsInfo, PartitionNames xxx' and 'PartitionNames, ColumnsInfo xxx' are valid.
          [COLUMNS TERMINATED BY separator ]
          [(col1, ...)]
          [SET (k1=f1(xx), k2=f2(xx))]
          WHERE
          [PARTITION (p1, p2)]

      type of routine load:
          KAFKA

      different type has different properties
      properties of this type:
          k1 = v1
          k2 = v2
3. Pause/Resume/Stop routine load statement like
      PAUSE/RESUME/STOP ROUTINE LOAD jobName
4. DdlExecutor supports CreateRoutineLoadStmt and Pause/Resume/StopRoutineLoadStmt.
5. Pausing or stopping a routine load will immediately clear all tasks belonging to the job.
      Tasks which have not been committed will be aborted.
6. Resuming a routine load will change the job state so that it needs scheduling.
      The RoutineLoadJobScheduler will schedule it later.
7. Show routine load statement like
      SHOW ROUTINE LOAD jobName
8. All load properties implement LoadProperty, such as LoadColumnsInfo, PartitionNames, etc.
9. The SQL of LoadColumnsInfo is: COLUMNS (c1, c2, c3) SET (c1, c2, c3=c1+c2)
10. Add a check of routineLoadName: db.routineLoadName must be unique within the database while the job state is not a final state.
2019-01-04 13:49:49 +08:00
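A minimal CREATE ROUTINE LOAD sketch assembled from the grammar above. The job, database, table, column, and partition names are placeholders, and the Kafka property keys ("kafka_broker_list", "kafka_topic") are assumptions for illustration, not taken from this commit:

      CREATE ROUTINE LOAD my_job ON example_db.my_table
      COLUMNS TERMINATED BY ","
      (k1, k2, v1)
      SET (v1 = k1 + k2)
      PARTITION (p1, p2)
      FROM KAFKA
      (
          "kafka_broker_list" = "broker1:9092",  -- assumed property key
          "kafka_topic" = "my_topic"             -- assumed property key
      )
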
a51ce03595 Enhance the usability of Load operation (#490)
1. Add broker load error hub
A broker load error hub collects error messages during the load process and saves them as a file to the specified remote storage via broker, for cases where, in a broker/mini/streaming load process, the user may not be able to access the error log file on the Backend directly.
We also add a new header option, 'enable_hub', to the streaming load request; the default is false. If the broker load error hub is enabled, it significantly slows down streaming load because of the access to remote storage via broker, so users can disable the error load hub with this header option to avoid slowing down the load speed.

2. Show load error logs by using the SHOW LOAD WARNINGS stmt
We also provide an easier way to get load error logs. We implement the 'SHOW LOAD WARNINGS ON 'url'' stmt to show load error logs directly. The 'url' in the stmt is the one provided by the 'SHOW LOAD' stmt.
eg:
show load warnings on "http://192.168.1.1:8040/api/_load_error_log?file=__shard_2/error_log_xxx";

3. Support now() function in broker load
Users can map a column to now() in a broker load stmt, which means this column will be filled with the time when the ETL started.

4. Support more types of wildcard in broker load
Previously, we only supported the wildcard '*' to match file names; wildcards like '/path/to/20190[1-4]*' were not supported. (A broker load sketch combining points 3 and 4 follows this entry.)
2019-01-03 19:07:27 +08:00
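A hedged broker load sketch combining points 3 and 4 above. The label, path, table, column names, broker name, and credentials are placeholders, and the LOAD LABEL ... WITH BROKER form is the usual broker load shape rather than text from this commit:

      LOAD LABEL example_db.label_20190101
      (
          DATA INFILE("hdfs://host:port/path/to/20190[1-4]*")  -- bracket wildcard from point 4
          INTO TABLE my_table
          (k1, k2)
          SET (load_time = now())                              -- now() mapping from point 3
      )
      WITH BROKER 'my_broker' ("username" = "user", "password" = "passwd");
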
d1bdb55302 Fix bug that schema change on restored table will lose data (#489)
In the TabletInvertedIndex class, the instances of TabletMeta in
'tabletMetaMap' and 'tabletMetaTable' should be the same.
Otherwise, when we change the schema hash info of a TabletMeta
in 'tabletMetaMap', the TabletMeta in 'tabletMetaTable' is left
unchanged, which causes inconsistency of the meta.
2019-01-02 09:51:06 +08:00
ff7d3e5878 Unify the print method of TUniqueId (#487) 2018-12-29 16:22:38 +08:00
7380483394 Support UDF (#468)
Now users can create a UDF with the CREATE FUNCTION statement. Doris only
supports UDFs in this version; UDAF/UDTF will be supported later. (A sketch
follows this entry.)
2018-12-29 09:13:04 +08:00
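A hedged CREATE FUNCTION sketch. The function name, signature, symbol, and URL are placeholders, and the "symbol" and "object_file" property keys are assumptions for illustration; only the "md5" property is the one named in #500 above:

      CREATE FUNCTION my_add(INT, INT) RETURNS INT
      PROPERTIES (
          "symbol" = "my_add_symbol",                      -- assumed key: entry symbol in the library
          "object_file" = "http://host:port/libmyadd.so",  -- assumed key: where the UDF library is fetched from
          "md5" = "placeholder_md5_of_the_library"         -- checksum property added in #500
      );
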
3faf443f52 Fixed: Prometheus 2.6 metrics (#478) 2018-12-29 09:08:57 +08:00
46c70a16b1 Add more detail logs to debug streaming load (#484)
* Add more detail logs to debug streaming load

* fix bugs

* fix bugs
2018-12-28 19:42:09 +08:00
4655b96580 Fix bug that generating incorrect wSymbol in InPredicate (#472) 2018-12-27 19:43:15 +08:00
7d7934112f Fix fe ut (#469)
1. Fix StreamLoadScanNodeTest
2. Revert the fix for decimal values with scientific notation; this still needs to be fixed later.
2018-12-25 20:07:03 +08:00
d2cd8cf180 Make column names case insensitive in load stmt (#464)
1. Make column names case insensitive in broker load.
2. Make column names in stream load case insensitive too.
2018-12-25 16:57:41 +08:00
5b1e3d3f40 Optimize backup & restore process (#460)
1. Print broker address for debugging.
2. Do not let a backup job be cancelled if it is already in state UPLOAD_INFO.
3. Cancel tasks on Backends when the job is cancelled.
4. Show the detailed progress of backup and restore jobs.
5. Make the 'SHOW SNAPSHOT' result more readable (sketches of the related statements follow this entry).
6. Change the upload and download thread num of backup and restore in the Backend to 1.
2018-12-24 16:49:16 +08:00
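Hedged sketches of the statements behind points 4 and 5; the database and repository names are placeholders, and the exact output columns are not specified by this commit:

      SHOW BACKUP FROM example_db;
      SHOW RESTORE FROM example_db;
      SHOW SNAPSHOT ON `example_repo`;
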
0341ffde67 Revert commit 'Add log to detect empty load file' (#447)
It looks like we still need to send a push task without a file
to the Backend, or the load job will fail.
Fix it later.
2018-12-19 12:29:12 +08:00
5a6e5cfd07 Add log to detect empty load file (#445)
We found that a load file may not be generated for a rollup tablet;
add a log to observe this.
2018-12-18 12:44:36 +08:00
b9201ece0b Parse thrift port from cluster state (#443) 2018-12-18 11:28:42 +08:00
7f014bdb11 Check meta context when update partition version (#438)
Partition.updateVisibleVersionAndVersionHash() is the only method that
may call Catalog.getCurrentCatalogJournalVersion() in a non-replay thread.

So we have to check whether MetaContext is null. If MetaContext is null,
this is a non-replay thread, and we do not need to call Catalog.getCurrentCatalogJournalVersion().

Also modify the load logic to make delete job done more quickly.
2018-12-17 18:46:27 +08:00
45e42bd003 Redesign the access to meta version (#436)
Because the meta version is only used in catalog saving and loading,
it is currently a field of the Catalog class, and we can get this
version only by calling Catalog.getCurrentCatalogJournalVersion().

But in the restore process, we need to read metadata that was saved with
a specific meta version. So we need a flexible way to read metadata
using a specified meta version, not only the version from Catalog.

So we create a new class called MetaContext. Currently it has only one field,
'journalVersion', which stores the current journal version. It is a
thread-local variable, so we can create a MetaContext anywhere we want
and set the 'journalVersion' we want to use for reading meta.

Currently, there are 4 threads related to metadata saving and loading:

1. The Frontend starting thread, which calls Catalog.initialize() to load the image.
2. The Frontend state listener thread, which listens for state changes and calls
   transferToMaster() or transferToNonMaster().
3. The edit log replay thread, which is created when calling transferToNonMaster().
   It replays the edit log.
4. The checkpoint thread, which is created when calling transferToMaster(). It does
   the checkpoint periodically.

Notice that we get the 'current meta version' only when READING the meta (not WRITING),
so we only need to take care of the READING threads.
We create the MetaContext thread-local variable for these 4 threads, and the meta
contexts of threads 2, 3 and 4 inherit from thread 1's meta context, because thread 1
loads the original image file and gets the very first meta version.

We leave the name of Catalog.getCurrentCatalogJournalVersion() unchanged and just
change its implementation, because we don't want to change a lot of code this time.

On the other hand, we add the current meta version to the backup job info file when doing
a backup job, so that when restoring from a backup snapshot, we know which meta
version we should use to read the meta.
We also add a new property "meta_version" for the RESTORE stmt, so that we can specify
the meta version used for reading the backup meta. It is for those old backup snapshots
which do not have the meta version saved in the backup job info file. (A sketch follows
this entry.)
2018-12-17 10:05:16 +08:00
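A hedged RESTORE sketch showing the new "meta_version" property. The snapshot label, repository name, table name, timestamp, and version value are placeholders, and the RESTORE SNAPSHOT ... FROM ... ON ... PROPERTIES form is the usual restore shape rather than text from this commit:

      RESTORE SNAPSHOT example_db.snapshot_label
      FROM `example_repo`
      ON (`my_table`)
      PROPERTIES
      (
          "backup_timestamp" = "2018-12-01-12-00-00",  -- placeholder timestamp of the backup snapshot
          "meta_version" = "placeholder_version"       -- property added by this commit
      );
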
548da0546a Fix compile error in run-fe-ut.sh (#415) 2018-12-11 17:46:13 +08:00
81ee15ed25 Fix compile failure in RLTaskTxnCommitAttachment (#414) 2018-12-11 16:00:07 +08:00