Currently we report a spurious error when a decimal value has leading zeros: allowing `type_precision >= precision` can make the value overflow and the DCHECK fail, so when leading zeros are present we should require only `type_precision > precision` to get the right value.
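A minimal sketch of the idea, with a hypothetical helper (`significant_precision` is illustrative, not the actual Doris parser): leading zeros should not count toward the string's precision when comparing it against the type's precision.

```cpp
#include <cctype>
#include <cstddef>

// Count the significant digits of a decimal string, skipping leading
// zeros so that e.g. "0012.3" counts 3 digits rather than 5. The
// precision check against the type can then ignore leading zeros
// instead of tripping the DCHECK.
int significant_precision(const char* s, size_t len) {
    size_t i = 0;
    if (i < len && (s[i] == '+' || s[i] == '-')) ++i; // optional sign
    while (i < len && s[i] == '0') ++i;               // leading zeros don't count
    int digits = 0;
    for (; i < len && s[i] != 'e' && s[i] != 'E'; ++i) {
        if (isdigit(static_cast<unsigned char>(s[i]))) ++digits;
    }
    return digits;
}
```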
Casting to decimalv3 can currently produce a wrong result, because decimalv3 uses the type precision when parsing the string:
```
mysql [test]> select cast("9999e-1" as decimal(2,1));
+------------------------------------+
| cast('9999e-1' as DECIMALV3(2, 1)) |
+------------------------------------+
|                              999.9 |
+------------------------------------+
1 row in set (0.01 sec)
```
This PR fixes the cast, keeping the behavior the same as MySQL:
```
mysql> select cast('9999e-1' as decimalv3(2, 1));
+------------------------------------+
| cast('9999e-1' as DECIMALV3(2, 1)) |
+------------------------------------+
|                                9.9 |
+------------------------------------+
1 row in set (0.07 sec)
```
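A minimal sketch of the saturating behavior shown above, with hypothetical names and an int64 unscaled representation (the real decimalv3 code paths differ): an out-of-range value is clamped to the maximum representable value, so 999.9 becomes 9.9 for DECIMAL(2, 1).

```cpp
#include <cstdint>

// Clamp a parsed unscaled decimal to the range representable by
// DECIMAL(precision, scale), the way MySQL saturates an out-of-range
// cast: unscaled 9999 -> 99 at scale 1, i.e. 999.9 -> 9.9.
int64_t saturate_unscaled(int64_t unscaled, int precision) {
    int64_t max_unscaled = 1;
    for (int i = 0; i < precision; ++i) max_unscaled *= 10; // 10^precision
    --max_unscaled;                                         // e.g. 99 for precision 2
    if (unscaled > max_unscaled) return max_unscaled;
    if (unscaled < -max_unscaled) return -max_unscaled;
    return unscaled;
}
```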
* [improvement](shutdown) do not print thread pool error stack trace on shutdown
When a thread pool is shutting down, it should not print an error stack trace; that is very confusing.
The Arrow Flight server should not call shutdown if it is not enabled, because doing so prints an error stack.
Remove the service-unavailable status from Thrift because it is useless.
Part of this PR needs to be picked to the 2.0 branch.
Co-authored-by: yiguolei <yiguolei@gmail.com>
Current initialization dependency:
```
Daemon ─────────┬──► StorageEngine ──► ExecEnv ──► Disk/Mem/CpuInfo
                │
BackendService ─┘
```
However, the original code incorrectly initialized Daemon before StorageEngine.
This PR also stops and joins the threads of daemon services in their destructors, ensuring daemon services release resources in reverse order of initialization via RAII.
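A minimal sketch of the RAII pattern this relies on, using a hypothetical `DaemonService` rather than the actual Doris classes: members (and objects) are destroyed in reverse order of construction, so putting stop-and-join in the destructor gives the teardown order for free.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Stop and join the worker thread in the destructor, so services
// destroyed in reverse order of construction release their resources
// via RAII, matching the initialization dependency above.
class DaemonService {
public:
    DaemonService() : _worker([this] { run(); }) {}
    ~DaemonService() {
        _stop.store(true);                       // signal the loop to exit
        if (_worker.joinable()) _worker.join();  // wait for the thread
    }

private:
    void run() {
        while (!_stop.load()) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            // ... periodic daemon work ...
        }
    }
    std::atomic<bool> _stop {false};
    std::thread _worker;
};
```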
1. Do not split compressed data files
Some data files in Hive are compressed with gzip, deflate, etc.
These kinds of files cannot be split.
2. Support the lz4 block codec
For the Hive scan node, use the lz4 block codec instead of the lz4 frame codec.
3. Support the snappy block codec
For Hadoop snappy.
4. Optimize `count(*)` queries on CSV files
For a query like `select count(*) from tbl`, we only need to split lines, not columns (see the sketch below).
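A minimal sketch of the line-only scan, ignoring quoted fields and custom line delimiters for brevity: counting rows only requires finding line delimiters, never splitting columns.

```cpp
#include <cstddef>
#include <cstring>

// Count the rows in a CSV block for `select count(*)` by scanning for
// line delimiters only; column splitting is skipped entirely.
size_t count_rows(const char* buf, size_t len) {
    size_t rows = 0;
    const char* p = buf;
    const char* end = buf + len;
    while (p < end &&
           (p = static_cast<const char*>(memchr(p, '\n', end - p))) != nullptr) {
        ++rows;
        ++p; // move past the delimiter
    }
    return rows;
}
```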
Needs to be picked to branch-2.0 after this PR: #22304
Sometimes the partitions of a Hive table may be on different storage, e.g., some on HDFS and others on object storage (COS, etc.).
This PR mainly changes:
1. Fix the bug of accessing files via cosn.
2. Add a new field `fs_name` in TFileRangeDesc.
This is because, when accessing a file, the BE gets an hdfs client from the hdfs client cache, and different files in one query
request may have different fs names, e.g., some are `hdfs://` and some are `cosn://`. So we need to specify the fs name
for each file; otherwise it may return an error:
`reason: IllegalArgumentException: Wrong FS: cosn://doris-build-1308700295/xxxx, expected: hdfs://172.xxxx:4007`
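A minimal sketch of the idea, with hypothetical types standing in for the BE's hdfs client cache: keying the cache by the per-file `fs_name` lets `hdfs://` and `cosn://` files within one query each get a client for the right filesystem.

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

struct FsClient; // opaque filesystem handle (illustrative)

// Cache one client per fs_name carried in TFileRangeDesc, so files on
// different filesystems in the same query never share the wrong client.
class FsClientCache {
public:
    std::shared_ptr<FsClient> get(const std::string& fs_name) {
        std::lock_guard<std::mutex> lock(_mu);
        auto it = _clients.find(fs_name);
        if (it != _clients.end()) return it->second;
        auto client = _connect(fs_name); // connect to hdfs://... or cosn://...
        _clients.emplace(fs_name, client);
        return client;
    }

private:
    std::shared_ptr<FsClient> _connect(const std::string& fs_name);
    std::mutex _mu;
    std::map<std::string, std::shared_ptr<FsClient>> _clients;
};
```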
Currently `hostname_to_ip` can only resolve IPv4 addresses, so a method is provided to resolve IPv4 or IPv6 based on a parameter.
When `_heartbeat` calls `hostname_to_ip`, whether it resolves to IPv4 or IPv6 is determined by `BackendOptions.is_bind_ipv6`.
Additionally, a method is provided that first attempts to resolve the host to IPv4 and falls back to IPv6 if that fails (sketched below).
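A minimal sketch using `getaddrinfo`, with a signature that only approximates the Doris one: the family is selected by a flag, and a second helper tries IPv4 first, then IPv6.

```cpp
#include <arpa/inet.h>
#include <netdb.h>
#include <cstring>
#include <string>

// Resolve a hostname to IPv4 or IPv6 based on a flag, mirroring how the
// heartbeat could pick the family from BackendOptions.is_bind_ipv6.
bool hostname_to_ip(const std::string& host, bool ipv6, std::string* ip) {
    addrinfo hints;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = ipv6 ? AF_INET6 : AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    addrinfo* res = nullptr;
    if (getaddrinfo(host.c_str(), nullptr, &hints, &res) != 0) return false;
    char buf[INET6_ADDRSTRLEN];
    const void* addr = ipv6
            ? static_cast<const void*>(&((sockaddr_in6*)res->ai_addr)->sin6_addr)
            : static_cast<const void*>(&((sockaddr_in*)res->ai_addr)->sin_addr);
    const char* ok = inet_ntop(hints.ai_family, addr, buf, sizeof(buf));
    freeaddrinfo(res);
    if (ok == nullptr) return false;
    *ip = buf;
    return true;
}

// Fallback variant: try IPv4 first, then IPv6.
bool hostname_to_ip_any(const std::string& host, std::string* ip) {
    return hostname_to_ip(host, /*ipv6=*/false, ip) ||
           hostname_to_ip(host, /*ipv6=*/true, ip);
}
```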
* support int128 in jsonb
* fix jsonb int128 write
* fix jsonb to json int128
* fix json functions for int128
* add nereids function jsonb_extract_largeint
* add testcase for json int128
* change docs for json int128
* add nereids function jsonb_extract_largeint
* clang format
* fix check style
* using int128_t = __int128_t for all int128
* use fmt::format_to instead of snprintf digit by digit for int128 (see the sketch after this list)
* clang format
* delete useless check
* add warn log
* clang format
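A minimal sketch of the `fmt::format_to` change above, assuming fmt is built with 128-bit support (the default on GCC/Clang); this is illustrative, not the actual Doris helper.

```cpp
#include <iterator>
#include <string>
#include <fmt/format.h>

using int128_t = __int128_t;

// Let fmt format the whole 128-bit value in one call (fmt supports
// __int128 natively) instead of emitting digits one by one with snprintf.
std::string int128_to_string(int128_t value) {
    fmt::memory_buffer buf;
    fmt::format_to(std::back_inserter(buf), "{}", value);
    return fmt::to_string(buf);
}
```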
Currently we cannot support the nested types array/map,
so this PR aims to:
1. Add a format option for converting strings to the defined data type, consistent with the original from_string.
2. Support array and map nested inside array and map (see the sketch after this list).
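A minimal sketch of one piece of the nested parsing, a hypothetical top-level splitter that respects bracket depth (not the actual from_string code): the elements of an outer array are separated only at depth zero, so inner arrays and maps stay intact for a recursive parse.

```cpp
#include <string>
#include <vector>

// Split the top-level elements of a well-formed array literal such as
// "[[1,2],[3]]" into "[1,2]" and "[3]", tracking bracket depth so that
// nested arrays/maps are not split apart.
std::vector<std::string> split_top_level(const std::string& s) {
    std::vector<std::string> elems;
    if (s.size() < 2) return elems;               // nothing to split
    std::string body = s.substr(1, s.size() - 2); // strip outer brackets
    int depth = 0;
    std::string cur;
    for (char c : body) {
        if (c == '[' || c == '{') ++depth;
        if (c == ']' || c == '}') --depth;
        if (c == ',' && depth == 0) {
            elems.push_back(cur);
            cur.clear();
        } else {
            cur += c;
        }
    }
    if (!cur.empty()) elems.push_back(cur);
    return elems;
}
```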
Add a scanner isolation class loader to make each plugin non-conflicting.
The BE gets scanner classes via a JNI call and uses JniClassLoader to load them.
In the previous version, we always got scanner classes from the system class path by default,
so the classes of each scanner could not be isolated.
If there is a core dump here, it may cover up the real stack if the stack trace indicates heap corruption
(which led to invalid jemalloc metadata), such as a double free or use-after-free in the application.
Try sanitizers such as ASan, or build jemalloc with --enable-debug to investigate further.
We currently handle decimal parsing from string incorrectly:
if the given string's precision is larger than the defined decimal precision, we return an overflow error. But we should return an overflow error only when the digit part is longer than the type's digit length, detecting this as we traverse the given string into the decimal value (sketched below).
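A minimal sketch of the fixed rule, assuming no sign/exponent handling and precision ≤ 18 (the real parser covers more cases): overflow is detected digit by digit during traversal, so leading zeros and extra fractional digits never falsely trigger it.

```cpp
#include <cctype>
#include <cstddef>
#include <cstdint>

enum class ParseResult { OK, OVERFLOW_ERR };

// Parse a decimal string into an unscaled int64 value. Only an integer
// part longer than (precision - scale) digits is a real overflow;
// leading zeros are skipped and extra fraction digits are truncated.
ParseResult parse_decimal(const char* s, size_t len, int precision, int scale,
                          int64_t* out) {
    size_t i = 0;
    while (i < len && s[i] == '0') ++i; // leading zeros never overflow
    int int_digits = 0, frac_digits = 0;
    int64_t v = 0;
    bool after_point = false;
    for (; i < len; ++i) {
        if (s[i] == '.') { after_point = true; continue; }
        if (!isdigit(static_cast<unsigned char>(s[i]))) break;
        if (after_point) {
            if (frac_digits == scale) continue; // extra fraction digits: truncate
            ++frac_digits;
        } else if (++int_digits > precision - scale) {
            return ParseResult::OVERFLOW_ERR;   // too many integer digits
        }
        v = v * 10 + (s[i] - '0');
    }
    while (frac_digits++ < scale) v *= 10;      // pad to the target scale
    *out = v;
    return ParseResult::OK;
}
```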