Commit Graph

116 Commits

6ea5218ee8 Revert "[Enhancement](env) Checking Master branch must use JDK17 (#31587)"
This reverts commit fa499cc200344eaaf837fd52211820dc7b7b9296.
2024-03-06 13:13:49 +08:00
fa499cc200 [Enhancement](env) Checking Master branch must use JDK17 (#31587)
Add a check of the JDK version in `env.sh`, and force the master branch to use Java 17
2024-03-06 13:05:58 +08:00
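A minimal sketch of what such a JDK check in `env.sh` might look like; the parsing and error wording are illustrative assumptions, not the committed code:
```bash
# Hypothetical sketch: refuse to run when the detected JDK major version is below 17.
if [[ -z "${JAVA_HOME}" ]]; then
    echo "Error: JAVA_HOME is not set." >&2
    exit 1
fi

# Parse the major version from `java -version` output (e.g. "17.0.9" -> 17).
java_version_str="$("${JAVA_HOME}/bin/java" -version 2>&1 | awk -F '"' '/version/ {print $2}')"
java_major="${java_version_str%%.*}"

if [[ "${java_major}" -lt 17 ]]; then
    echo "Error: the master branch requires JDK 17+, found ${java_version_str}." >&2
    exit 1
fi
```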
9508546cc8 [fix](script) Fix start_fe.sh missing METADATA_FAILURE_RECOVERY with foreground (#31258) 2024-02-23 19:03:28 +08:00
f9e16e08eb [fix](script) Fix start_fe.sh --image path check image not work (#30695) 2024-02-18 11:50:17 +08:00
ea427e8c51 [fix](JDK17) It will report an exception when we start BE with JDK17 and query an AVRO table: InaccessibleObjectException (#30541)
* [fix](JDK17) It will report an exception when we start BE with JDK17 and query an AVRO table: InaccessibleObjectException (#30003)
2024-01-30 15:33:40 +08:00
b1a9370004 [fix](glue)support access glue iceberg with credential list (#30473)
merge from #30292
2024-01-28 18:23:07 +08:00
9773fef4a1 [fix](class-loader) fix class loader conflict on BE side (#29942)
1. Make `hadoop-common` in the BE Java extensions `provided`.
2. BE Java extension jars must be loaded before the Hadoop jars
2024-01-16 18:37:06 +08:00
25428bd7fb [fix](kerberos) fix BE kerberos ccache renew, optimize kerbero options (#29291)
1. Remove the BE kinit and use JNI login with a keytab, because kinit cannot renew the TGT for Doris in many complex cases.
> This pull request supports a new instance from a keytab: https://github.com/apache/doris-thirdparty/pull/173, so we no longer need the kinit cmd, just login with a keytab and principal

2. Add `kerberos_ccache_path` to set the Kerberos credentials cache path manually.

3. Add `max_hdfs_file_handle_cache_time_ms` to set the HDFS fs handle cache time.
2024-01-16 18:35:29 +08:00
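For illustration only, the two new options from items 2 and 3 might appear in `be.conf` roughly like this; the values shown are placeholders, not defaults taken from the commit:
```bash
# be.conf (sketch) -- values below are illustrative placeholders
# Path of the Kerberos credentials cache, set manually instead of relying on kinit.
kerberos_ccache_path = /tmp/krb5cc_doris
# How long an HDFS file system handle stays in the cache, in milliseconds.
max_hdfs_file_handle_cache_time_ms = 28800000
```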
12af86176a [fix](class-loader) fix class loader conflict on BE side (#29942)
1. Make `hadoop-common` in the BE Java extensions `provided`.
2. BE Java extension jars must be loaded before the Hadoop jars
2024-01-14 15:53:33 +08:00
8fc9c18c85 [improvement](jdbc catalog) Put the jdbc connection pool parameters into catalog properties (#29195) 2024-01-12 11:40:28 +08:00
b31028196c Revert "[feature](script) Add check_jvm_xmx for start_fe.sh (#28989)" (#29630)
This reverts commit 8c9908c7b4f575b41e01ae2a81b4423da6a12c3c.
2024-01-07 14:02:02 +08:00
8c9908c7b4 [feature](script) Add check_jvm_xmx for start_fe.sh (#28989)
* When -Xmx is configured to more than 90% of total physical memory, start_fe.sh
  will refuse to start, because the FE would very likely be killed by the
  operating system.
2024-01-06 20:11:54 +08:00
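A rough sketch of such a check, reading total memory from /proc/meminfo; the function name, option parsing, and error wording are assumptions, not the committed code:
```bash
# Hypothetical check_jvm_xmx sketch: refuse to start when -Xmx exceeds 90% of physical memory.
check_jvm_xmx() {
    local java_opts="$1"
    # Pull out the last -Xmx setting, e.g. "Xmx8g"; only g/m suffixes are handled in this sketch.
    local xmx
    xmx="$(echo "${java_opts}" | grep -oE 'Xmx[0-9]+[gGmM]' | tail -n 1)"
    [[ -z "${xmx}" ]] && return 0

    local num="${xmx//[!0-9]/}" unit="${xmx: -1}" xmx_kb=0
    case "${unit}" in
        g|G) xmx_kb=$((num * 1024 * 1024)) ;;
        m|M) xmx_kb=$((num * 1024)) ;;
    esac

    local mem_total_kb
    mem_total_kb="$(awk '/MemTotal/ {print $2}' /proc/meminfo)"
    if ((xmx_kb * 10 > mem_total_kb * 9)); then
        echo "Error: -Xmx (${xmx#Xmx}) exceeds 90% of physical memory; FE may be killed by the OS." >&2
        exit 1
    fi
}

check_jvm_xmx "${JAVA_OPTS}"
```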
10368a71a4 [fix][security]security optimize for executable binary file doris_be access should be restricted (#29303) 2023-12-30 23:39:16 +08:00
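For illustration, the kind of restriction meant here, assuming the BE binary sits at `lib/doris_be`; the exact mode bits are an assumption:
```bash
# Restrict access to the BE executable so only the owner and group can read/execute it.
chmod 750 "${DORIS_HOME}/lib/doris_be"
```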
f87c807979 [enhancement](jdk) support doris fe running in jvm with jdk16+ (#26889) 2023-11-15 10:27:30 +08:00
d2eea9b3ae [chore](macOS) Reduce the size of executables on macOS arm64 (#26894)
Like #15641, we should reduce the size of executables on macOS arm64. Otherwise, we cannot run doris_be and doris_be_test with the ASAN build type on macOS arm64.
2023-11-14 12:21:08 +08:00
3faf3b4118 [chore] Print FE version even if it has been started (#26427)
In the previous implementation, `bin/start_fe.sh --version` would
complain that "Frontend running as process xxx. Stop it first."

To show the version:
1. `bin/start_fe.sh --version` will print version info to fe.out
2. `bin/start_fe.sh --console --version` will print version info to stdout
2023-11-07 22:33:02 +08:00
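One way to express that ordering in `start_fe.sh` (a sketch only; `RUN_VERSION`, `RUN_CONSOLE`, `FE_MAIN_CLASS`, and the log path are illustrative names, not the script's real variables):
```bash
# Sketch: handle --version before the "FE already running" pid check,
# so asking for the version never complains about a running FE.
if [[ "${RUN_VERSION}" -eq 1 ]]; then
    if [[ "${RUN_CONSOLE}" -eq 1 ]]; then
        "${JAVA}" ${JAVA_OPTS} "${FE_MAIN_CLASS}" --version                  # print to stdout
    else
        "${JAVA}" ${JAVA_OPTS} "${FE_MAIN_CLASS}" --version >>"${DORIS_HOME}/log/fe.out" 2>&1
    fi
    exit 0
fi
```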
2feed57f47 [Fix](fs_benchmark_tools) Fix run_fs_benchmark.sh classpath issue. (#26183)
Fix run_fs_benchmark.sh classpath issue.
2023-11-07 18:43:30 +08:00
9ea8efe5fa [coverage](fe)add jacoco coverage option on start_fe.sh (#25598) 2023-10-20 10:13:03 +08:00
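For illustration, enabling such coverage typically means attaching the JaCoCo agent through JAVA_OPTS; the toggle variable and paths below are assumptions:
```bash
# Hypothetical: attach the JaCoCo agent when coverage is requested (paths are placeholders).
if [[ "${ENABLE_JACOCO}" == "true" ]]; then
    JAVA_OPTS="${JAVA_OPTS} -javaagent:${JACOCO_HOME}/lib/jacocoagent.jar=destfile=${DORIS_HOME}/log/jacoco.exec,includes=org.apache.doris.*"
fi
```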
f41b6a5fc3 [minor](doc) update the doc for docker env and custom_lib dir (#25088)
1. Update the doc for `apache/doris:build-env-for-2.0`
2. Update the doc for `custom_dir`
2023-10-09 09:50:31 +08:00
07f9f27fa9 [improvement](start script) start script can not set http proxy (#25086)
BE clones snapshots over HTTP; if an HTTP proxy is set, snapshot cloning will fail. So the start script forbids setting the HTTP proxy environment variables.
2023-10-08 10:06:06 +08:00
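A minimal sketch of what "forbid the proxy env" can look like in the start script (the exact variable set cleared by the commit is not shown here):
```bash
# BE clones tablet snapshots over plain HTTP; a configured proxy breaks that path,
# so drop any proxy settings inherited from the environment before starting BE.
unset http_proxy
unset https_proxy
unset HTTP_PROXY
unset HTTPS_PROXY
```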
9d0f4c0094 [minor](be) set fd number check to 60000 for BE start script (#25078)
Modify the BE fd number check to 60000,
because the default fd limit on some systems is 65535, which is smaller than the previous threshold of 65536,
so reduce it to 60000 to let Doris start normally on most systems.
2023-10-07 19:02:39 +08:00
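A minimal sketch of such an fd-limit check, assuming it is based on `ulimit -n`:
```bash
# Sketch: require at least 60000 open-file descriptors before starting BE.
MIN_FD_LIMIT=60000
current_fd_limit="$(ulimit -n)"
if [[ "${current_fd_limit}" != "unlimited" && "${current_fd_limit}" -lt "${MIN_FD_LIMIT}" ]]; then
    echo "Error: open file limit is ${current_fd_limit}, please raise it to at least ${MIN_FD_LIMIT} (ulimit -n)." >&2
    exit 1
fi
```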
8a226bbd63 [fix](start_be) ignore output from command -v (#24739) 2023-09-21 19:57:43 +08:00
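The pattern referred to is roughly the following (a sketch):
```bash
# Use command -v only to test that java exists; discard its output so it never
# leaks into values captured by the caller.
if ! command -v java >/dev/null 2>&1; then
    echo "Error: java command not found in PATH." >&2
    exit 1
fi
```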
4c79a76491 [improve](script) echo infos if java cmd is not valid when starting be (#24714)
Co-authored-by: stephen <hello-stephen@qq.com>
2023-09-21 12:43:24 +08:00
9ccce61836 [chore](thirdparty)We need to issue an error when starting FE without setting the Java home environment (#23943)
Co-authored-by: yiguolei <676222867@qq.com>
2023-09-21 08:10:36 +08:00
1a553f7e14 [Improve](start-shell)Optimize fe&be startup (#24556)
- `sh start_fe/start_be --console` instructs the program to run in console mode.
- `sh start_fe/start_be --daemon` instructs the program to run in daemon mode.
- `sh start_fe/start_be` without options starts as a background execution and records output and error logs to the specified file
2023-09-19 23:00:59 +08:00
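A simplified sketch of dispatching the three launch modes; argument parsing and variable names are illustrative, not the committed script:
```bash
# Sketch: dispatch the three launch modes described above.
RUN_DAEMON=0
RUN_CONSOLE=0
for arg in "$@"; do
    case "${arg}" in
        --daemon) RUN_DAEMON=1 ;;
        --console) RUN_CONSOLE=1 ;;
    esac
done

if [[ "${RUN_CONSOLE}" -eq 1 ]]; then
    # Console mode: logs go to stdout/stderr.
    "${JAVA}" ${JAVA_OPTS} "${MAIN_CLASS}"
elif [[ "${RUN_DAEMON}" -eq 1 ]]; then
    # Daemon mode: detach and write logs to the configured out file.
    nohup "${JAVA}" ${JAVA_OPTS} "${MAIN_CLASS}" >>"${LOG_FILE}" 2>&1 </dev/null &
else
    # Default: background execution, output and error logs recorded to the specified file.
    "${JAVA}" ${JAVA_OPTS} "${MAIN_CLASS}" >>"${LOG_FILE}" 2>&1 </dev/null &
fi
```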
64337a8698 [Improve](metadata)Start the script to set metadata_failure_recovery (#24308) 2023-09-14 10:02:35 +08:00
e090b83e33 [improvement](script) support custom lib dir to save custom libs (#23887)
Sometimes, users need to add custom libs to the cluster, such as lzo.jar, orai18n.jar, etc.
Previously, these lib files were placed in fe/lib or be/lib.
But when upgrading the cluster, the lib dir is replaced by the new lib dir, so all custom libs are lost.

This PR adds a new custom_lib dir for FE and BE, where users can place custom lib files.
2023-09-05 11:54:19 +08:00
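A sketch of how such a `custom_lib` directory can be folded into the classpath; the loop and variable names are assumptions:
```bash
# Append every jar under ${DORIS_HOME}/custom_lib to the classpath so that
# user-provided libs (lzo.jar, orai18n.jar, ...) survive an upgrade of lib/.
if [[ -d "${DORIS_HOME}/custom_lib" ]]; then
    for jar in "${DORIS_HOME}/custom_lib"/*.jar; do
        [[ -e "${jar}" ]] || continue
        CLASSPATH="${CLASSPATH}:${jar}"
    done
fi
```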
774a771e0c [Improve](be)check swap (#18891)
Co-authored-by: Yongqiang YANG <98214048+dataroaring@users.noreply.github.com>
2023-09-05 09:39:55 +08:00
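Presumably the check inspects the swap configuration before starting BE; a hedged sketch, assuming /proc/meminfo is available:
```bash
# Sketch: warn if swap is enabled, since swapping makes BE latency
# and OOM behaviour unpredictable.
swap_total_kb="$(awk '/SwapTotal/ {print $2}' /proc/meminfo)"
if [[ "${swap_total_kb}" -gt 0 ]]; then
    echo "Warning: swap is enabled (${swap_total_kb} kB); it is recommended to disable swap (swapoff -a) before starting BE." >&2
fi
```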
2885de1d63 [chore](macOS) Fix invalid option errors in start_be.sh (#23861) 2023-09-05 09:07:53 +08:00
57ca7d66d3 [Fix](multi-catalog) Fix zlib init error by using doris's zlib shared library and jni.log does not output. (#23260) 2023-09-02 21:44:14 +08:00
ffadf09eec [fix](catalog)add custom jar (#23406)
- Allow putting custom jars in `${DORIS_HOME}/lib/java_extensions/custom_extension`, such as `paimon-s3-0.4.0-incubating.jar`
- Add some notes for Paimon and FQDN
2023-08-25 11:10:53 +08:00
5ba505ebf4 [fix](multi-catalog)fix avro and jdbc scanner dependency (#23015)
Add a preload-extensions module and put all conflicting dependencies into the pom.xml of preload-extensions
2023-08-20 19:28:17 +08:00
919bfd73f1 [improvement](multi-catalog)add scanner isolation class loader (#22247)
Add a scanner isolation class loader to make each plugin non-conflicting.
The BE will get scanner classes via JNI calls and use JniClassLoader to load them.
In the previous version, we always got scanner classes from the system class path by default,
so the classes for each scanner could not be isolated.
2023-08-10 10:02:46 +08:00
96a46302e8 [fix](stacktrace) Fix Jemalloc enable profile fail to run BE after rewrites dl_iterate_phdr (#22549)
Jemalloc heap profiling follows libgcc's way of backtracing by default.
Rewriting dl_iterate_phdr causes Jemalloc to fail to run after profiling is enabled.

TODO, two possible solutions:

- Make Jemalloc use GNU libunwind for prof backtracing, but my test failed:
--enable-prof-libunwind does not work (jemalloc/jemalloc#2504)

- ClickHouse/libunwind solves Jemalloc profile backtracing, but the ClickHouse/libunwind branch
has diverged from both GNU libunwind and LLVM libunwind, which leaves its fate to others.
2023-08-03 19:32:36 +08:00
bc87002028 [opt](conf) remote scanner thread num is changed to core num * 10 (#22427) 2023-08-01 23:09:49 +08:00
e8f4323e0f [Fix](jdbcCatalog) fix typo of some variable #22214 2023-07-26 08:34:45 +08:00
1afe090486 [improvement](memory) modify jemalloc conf in be.conf (#21943)
modify jemalloc conf in be.conf
    disable je_purge_all_arena_dirty_pages
2023-07-20 10:34:31 +08:00
fde73b6cc6 [Fix](multi-catalog) Fix hadoop short circuit reading can not enabled in some environments. (#21516)
Fix Hadoop short-circuit reading not being enabled in some environments.
- Revert #21430 because it causes a performance degradation issue.
- Add `$HADOOP_CONF_DIR` to `$CLASSPATH`.
- Remove the empty `hdfs-site.xml`, because in some environments it prevents Hadoop short-circuit reading from being enabled.
- Copy the hadoop common native libs (copied from https://github.com/apache/doris-thirdparty/pull/98) and add them to `LD_LIBRARY_PATH`, because in some environments `LD_LIBRARY_PATH` does not contain the hadoop common native libs, which also prevents Hadoop short-circuit reading from being enabled.
2023-07-06 15:00:26 +08:00
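The classpath and library-path pieces of that list might look roughly like this in `start_be.sh`; the native-lib directory shown is an illustrative assumption:
```bash
# Put the Hadoop configuration directory on the classpath so hdfs-site.xml etc.
# from the environment are picked up (needed for short-circuit reads).
if [[ -n "${HADOOP_CONF_DIR}" ]]; then
    CLASSPATH="${HADOOP_CONF_DIR}:${CLASSPATH}"
fi

# Make the bundled Hadoop common native libs (libhadoop.so, ...) visible,
# since some environments do not ship them on LD_LIBRARY_PATH.
export LD_LIBRARY_PATH="${DORIS_HOME}/lib/hadoop_hdfs/native:${LD_LIBRARY_PATH}"
```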
242a35fa80 [fix](s3) fix s3 fs benchmark tool (#21401)
1. Fix a concurrency bug in the S3 fs benchmark tool, to avoid crashes when running multi-threaded.
2. Add a `prefetch_read` operation to test the prefetch reader.
3. Add the `AWS_EC2_METADATA_DISABLED` env var in `start_be.sh` to avoid calling EC2 metadata when creating an S3 client.
4. Add the `AWS_MAX_ATTEMPTS` env var in `start_be.sh` to avoid warning logs from the S3 SDK.
2023-07-05 16:20:58 +08:00
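The two environment variables from items 3 and 4 would be exported in `start_be.sh` along these lines (the retry count shown is a placeholder, not the committed value):
```bash
# Skip the EC2 instance-metadata lookup when building S3 clients outside AWS.
export AWS_EC2_METADATA_DISABLED=true
# Cap the AWS SDK retry attempts to silence its noisy warning logs.
export AWS_MAX_ATTEMPTS=2
```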
9adbca685a [opt](hudi) use spark bundle to read hudi data (#21260)
Use spark-bundle instead of hive-bundle to read hudi data.

**Advantages** of using spark-bundle to read hudi data:
1. The performance of spark-bundle is more than twice that of hive-bundle
2. spark-bundle uses `UnsafeRow`, which reduces data copying and JVM GC time
3. spark-bundle supports `Time Travel`, `Incremental Read`, and `Schema Change`; these functions can be quickly ported to Doris

**Disadvantages** of using spark-bundle to read hudi data:
1. More dependencies make hudi-dependency.jar very cumbersome (from 138M to 300M)
2. spark-bundle only provides an `RDD` interface and cannot be used directly
2023-07-04 17:04:49 +08:00
88b2d81873 [Fix](multi-catalog) Add hadoop system classpath to CLASSPATH to resolve can not enable hadoop short circuit reading in some environments. (#21430)
Add the Hadoop system classpath to CLASSPATH so that Hadoop short-circuit reading can be enabled in some environments.
2023-07-03 14:51:34 +08:00
1dec592e91 [improvement](fs_bench) optimize the usage of fs benchmark tool for hdfs (#21154)
Optimize the usage of the fs benchmark tool:

1. Remove the `Open` benchmark; it is useless.
2. Remove the `Delete` benchmark; it is dangerous.
3. Add a `SingleRead` benchmark; the user can specify an existing file to test the read operation:

    `sh bin/run-fs-benchmark.sh --conf=conf/hdfs_read.conf --fs_type=hdfs --operation=single_read`

4. Modify `run-fs-benchmark.sh`: remove the `OPTS` section and use the options of `fs_benchmark_tool` directly.
5. Add some custom counters to the benchmark result, e.g.:

```
--------------------------------------------------------------------------------------------------------------------------------
Benchmark                                                                      Time             CPU   Iterations UserCounters...
--------------------------------------------------------------------------------------------------------------------------------
HdfsReadBenchmark/iterations:1/repeats:3/manual_time/threads:1              6864 ms         2385 ms            1 ReadRate=200.936M/s
HdfsReadBenchmark/iterations:1/repeats:3/manual_time/threads:1              3919 ms         1828 ms            1 ReadRate=351.96M/s
HdfsReadBenchmark/iterations:1/repeats:3/manual_time/threads:1              3839 ms         1819 ms            1 ReadRate=359.265M/s
HdfsReadBenchmark/iterations:1/repeats:3/manual_time/threads:1_mean         4874 ms         2011 ms            3 ReadRate=304.054M/s
HdfsReadBenchmark/iterations:1/repeats:3/manual_time/threads:1_median       3919 ms         1828 ms            3 ReadRate=351.96M/s
HdfsReadBenchmark/iterations:1/repeats:3/manual_time/threads:1_stddev       1724 ms          324 ms            3 ReadRate=89.3768M/s
HdfsReadBenchmark/iterations:1/repeats:3/manual_time/threads:1_cv          35.37 %         16.11 %             3 ReadRate=29.40%
HdfsReadBenchmark/iterations:1/repeats:3/manual_time/threads:1_max          6864 ms         2385 ms            3 ReadRate=359.265M/s
HdfsReadBenchmark/iterations:1/repeats:3/manual_time/threads:1_min          3839 ms         1819 ms            3 ReadRate=200.936M/s
```

- For `open_read` and `single_read`, add `ReadRate` as `bytes per second`.
- For `create_write`, add `WriteRate` as `bytes per second`.
- For `exists` and `rename`, add `ExistsCost` and `RenameCost` as `time cost per one operation`.
2023-06-26 11:37:14 +08:00
d49c412c59 [Feature](multi-catalog) Add hdfs benchmark tools. (#21074) 2023-06-25 09:35:27 +08:00
37c9a08e56 [Bug] The PID_DIR variable in the Doris stop script does not follow the conf file (#20881) 2023-06-22 10:26:26 +08:00
53b2fe5db6 [improvement](jdbc) Set the JDBC connection timeout to be conf (#21000) 2023-06-20 14:23:48 +08:00
9c30fb5a21 [fix](script)Fix the JAVA_OPTS version error of the BE start script (#20766) 2023-06-14 15:25:00 +08:00
5d2758cb8f [improvement](build) move add BE extension jars to java_extensions dir (#20740)
Follow #20185
Move all BE Java extension jars to the `be/lib/java_extensions/` dir.
Also remove the `udf` dir, used for BE native UDFs, which are deprecated since v1.2

The final output is:

```
output
├── be
│   ├── bin
│   ├── conf
│   ├── dict
│   ├── lib
│   │   └── java_extensions
│   │       ├── hudi-scanner-jar-with-dependencies.jar
│   │       ├── java-udf-jar-with-dependencies.jar
│   │       ├── jdbc-scanner-jar-with-dependencies.jar
│   │       ├── max-compute-scanner-jar-with-dependencies.jar
│   │       └── paimon-scanner-jar-with-dependencies.jar
│   ├── LICENSE-dist.txt
│   ├── licenses
│   ├── log
│   ├── NOTICE.txt
│   ├── storage
│   └── www
└── fe
    ├── bin
    ├── conf
    ├── doris-meta
    ├── lib
    ├── LICENSE-dist.txt
    ├── licenses
    ├── log
    ├── mysql_ssl_default_certificate
    ├── NOTICE.txt
    ├── spark-dpp
    └── webroot
```
2023-06-13 18:55:12 +08:00
8c4f3d4126 [chore](macOS) Fix JAVA_OPTS in start_be.sh (#19267)
We should set -XX:-MaxFDLimit on macOS if we enable Java support for BE; otherwise BE may fail to start up.
2023-05-08 14:01:10 +08:00
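A sketch of guarding that flag by platform; the `uname` test is an assumption about how the script detects macOS:
```bash
# On macOS the JVM otherwise caps open files via its own FD limit,
# which can prevent BE with Java support from starting.
if [[ "$(uname -s)" == "Darwin" ]]; then
    JAVA_OPTS="${JAVA_OPTS} -XX:-MaxFDLimit"
fi
```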
ec517a53a8 [Chore](build) upgrade clang-format version to 16 && move thrift to fe-common (#19155)
upgrade clang-format version to 16
move thrift to fe-common
fix a core dump in the pipeline engine when an operator is canceled but not prepared
2023-04-28 14:14:51 +08:00
7b02fa5cd6 [optimization](conf) optimization JAVA_OPTS for be conf and be bin (#19029) 2023-04-27 13:48:46 +08:00