- Support a suite block to specify multiple groups.
- TestAction supports comparing the result to an iterator, a local file, or an HTTP stream (see the sketch after this list).
- Support printing TeamCity service messages.
- Abandon the logic of generating a Groovy file for each SQL file.
- Support 3 levels of parallelism: script file, suite block, and thread action.
- Support specifying JAVA_OPTS for the boot shell.
- Avoid JVM Metaspace OOM.
- Use -d to run the suites in specific directories instead of -g; -g is now used to specify groups.
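For flavor, here is a rough plain-Java sketch of the kind of check behind the "compare result to a local file" case. The framework itself is Groovy-based, so the helper name and the tab-separated row format below are illustrative assumptions, not the framework's actual API.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;
import java.util.List;

public class ResultCompareSketch {
    // Hypothetical helper: render each result row tab-separated and compare
    // it line by line against an expected file; both sides must end together.
    static boolean matchesExpectedFile(Iterator<List<String>> rows, String expectedPath)
            throws IOException {
        try (BufferedReader expected = Files.newBufferedReader(Paths.get(expectedPath))) {
            while (rows.hasNext()) {
                String actual = String.join("\t", rows.next());
                // readLine() returns null at EOF, which also fails the equals check
                if (!actual.equals(expected.readLine())) {
                    return false;
                }
            }
            return expected.readLine() == null; // expected file must have no extra lines
        }
    }

    public static void main(String[] args) throws IOException {
        Iterator<List<String>> rows =
                List.of(List.of("1", "a"), List.of("2", "b")).iterator();
        System.out.println(matchesExpectedFile(rows, "expected/t.out"));
    }
}
```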
I hit a failure while reading HDFS files in a broker load. The error message was unclear, and I spent a lot of time locating the problem:
```
W0330 11:08:01.093812 2755268 broker_scan_node.cpp:364] Scanner[0] process failed. status=connect failed.
W0330 11:08:01.097682 2018787 fragment_mgr.cpp:234] Got error while opening fragment 712ae2b848324cb6-94a83d646173c1e9: Internal error: connect failed.
W0330 11:08:01.097702 2018787 tablet_sink.cpp:148] connect failed.
```
We should include more information in the error message when connecting to HDFS fails.
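The shape of the fix might look like the following, a minimal sketch assuming a Java read path through the Hadoop FileSystem API (the broker is a Java process; the log above is from the BE side). The wrapper name and message format are illustrative.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsOpenSketch {
    // Hypothetical wrapper: on failure, report the namenode URI and file path
    // instead of surfacing a bare "connect failed".
    static InputStream openHdfsFile(String namenodeUri, String filePath) throws IOException {
        try {
            FileSystem fs = FileSystem.get(URI.create(namenodeUri), new Configuration());
            return fs.open(new Path(filePath));
        } catch (IOException e) {
            throw new IOException("failed to open hdfs file, namenode=" + namenodeUri
                    + ", path=" + filePath + ", cause=" + e.getMessage(), e);
        }
    }
}
```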
1. Check the validity of sql_block_rule properties; the limitations of a sql_block_rule can't be set to negative values (see the sketch after this list).
2. Optimize the conditions for judging whether a query hits a sql_block_rule.
3. Check whether the sql_block_rule already exists when executing `SET PROPERTY FOR "user" "sql_block_rule" = "xxx"`.
4. Add UT in SqlBlockRuleMgrTest.java.
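A minimal sketch of the check in item 1 follows. The property names match the documented sql_block_rule limitations (partition_num, tablet_num, cardinality), but the method and exception shape here are illustrative, not the actual Doris code.

```java
public class SqlBlockRulePropertyCheck {
    // Reject a negative value for any numeric limitation of a sql_block_rule.
    static void checkLimitation(String name, long value) {
        if (value < 0) {
            throw new IllegalArgumentException(
                    "the value of " + name + " can't be negative: " + value);
        }
    }

    static void checkProperties(long partitionNum, long tabletNum, long cardinality) {
        checkLimitation("partition_num", partitionNum);
        checkLimitation("tablet_num", tabletNum);
        checkLimitation("cardinality", cardinality);
    }

    public static void main(String[] args) {
        checkProperties(10, 300, 10000000L); // ok
        checkProperties(10, -1, 0);          // throws IllegalArgumentException
    }
}
```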
I started a discussion on this earlier; you can find it in the mailing list:
https://lists.apache.org/thread/o770bc3k623kyfks2mzkt21qsc4g6328
To make it easier for everyone to organize the documents, I created a new-docs directory under the incubator-doris directory; its structure is shown below. For now I have only created the directory structure, and the content still needs to be rearranged.
In the data import section, to accommodate the viewing habits of existing users, the import documents are organized in two ways:
1. By usage scenario: this gives users clearer guidance. For example, if the data source is Kafka, the user can go directly to Routine Load.
2. By import method: this is the introduction to each of the import methods we provided before.
To make it easy for everyone to run and debug locally, I migrated the entire .vuepress setup from under the original docs directory. Once the reorganization is complete, you only need to delete the original docs directory and rename new-docs to docs. Running the site locally while organizing the documents also lets you see what each directory contains.
For local debugging, switch to the new-docs directory and run:
```
npm install
npm run dev
```
then open the site in a browser at http://ip:port/zh-CN or http://ip:port/en.
In #8319, I removed the mysql-connector-java dependency because of license incompatibility.
But we need a MySQL-compatible driver for the HTTP query API, so I chose mariadb-java-client,
which is under the LGPL.
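For reference, a minimal sketch of connecting through mariadb-java-client. MariaDB Connector/J registers itself with the JDBC service loader, so a plain `jdbc:mariadb://` URL works with DriverManager; the host, port, and credentials below are placeholders for a local FE.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MariaDbClientSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: 9030 is the FE's default MySQL-protocol port.
        String url = "jdbc:mariadb://127.0.0.1:9030/test";
        try (Connection conn = DriverManager.getConnection(url, "root", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```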
This PR fixes #8731 and refactors the `build.sh` script.
The `build.sh` script is currently responsible for compiling the following Doris components:
1. FE
- fe-common
- fe-core
- spark-dpp
- hive-udf
- java-udf
- ui
2. BE
- palo_be
- meta_tool
3. broker
In the FE module:
- The four submodules `fe-common`, `fe-core`, `spark-dpp`, and `ui` together form the Frontend.
- `spark-dpp`, `hive-udf`, and `java-udf` can be compiled separately to produce jar packages for individual use.
In the BE module:
- `palo_be` can be started as the BE process on its own.
- `meta_tool` can be compiled separately to produce a binary.
The modified `build.sh` script has the following changes:
1. There is no longer an option to compile `ui` separately; it is built together with `--fe`.
2. `fe`, `be`, `spark-dpp`, `hive-udf`, `java-udf`, `palo_be`, and `meta_tool` can each be compiled separately.
3. All components except `java-udf` are compiled by default (`java-udf` is still in development).
Remaining issues:
Several FE submodules have messy dependencies. For example, `java-udf` depends on `fe-core`, and `fe-core` depends on `spark-dpp`, which results in a large binary jar for `java-udf`. This needs to be reorganized later.