Issue Number: close #xxx
This PR adds a JDBC catalog for the Doris multi-catalog feature.
Currently, the JDBC catalog only supports the MySQL DBMS.
TODO:
Support for PostgreSQL.
Support for other databases.
Problem summary
For the JDBC catalog, we can create a catalog like:
```
CREATE CATALOG jdbc4 PROPERTIES (
    "type"="jdbc",
    "jdbc.user"="root",
    "jdbc.password"="123456",
    "jdbc.jdbc_url" = "jdbc:mysql://127.0.0.1:13396/demo?yearIsDateType=false",
    "jdbc.driver_url" = "file:/mnt/disk2/ftw/tools/jar/mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar",
    "jdbc.driver_class" = "com.mysql.jdbc.Driver"
);
```
Note:
yearIsDateType is a JDBC connection parameter:
If the yearIsDateType configuration property is set to false, the returned object type is java.sql.Short. If set to true (the default), the returned object is of type java.sql.Date with the date set to January 1st, at midnight.
To stay compatible with MySQL, we force yearIsDateType=false in the FE. If the user sets yearIsDateType=true, the Doris FE will rewrite it to yearIsDateType=false.
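As a hedged usage sketch (the catalog name jdbc4 comes from the example above; the database and table names are hypothetical MySQL-side objects), the catalog can then be queried like any other Doris catalog:
```
-- Switch to the JDBC catalog created above.
SWITCH jdbc4;
-- Hypothetical database and table on the MySQL side.
SHOW DATABASES;
USE demo;
SELECT * FROM example_table LIMIT 10;
```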
Losing segment id info messes up the `_segment_id_to_value_in_dict_flags` map
in `InListPredicate`, causing two distinct segments to collide and eventually
crash the BE.
Signed-off-by: freemandealer <freeman.zhang1992@gmail.com>
JSON reader DCHECK fails because of missing TYPE_STRING.
Fix a bug where the TVF throws an NPE if no file is found.
The predicate conjuncts cannot be pushed down to the Parquet reader if this is a load task,
because the predicates should be applied to the columns of the destination table, not the columns of the source file.
Add a temporary broker load property "use_new_load_scan_node" to make the regression tests happy,
so that we can use the new load scan node for a certain job without setting the global FE config (a hedged example follows below).
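A minimal sketch of setting this property on a broker load job. The label, paths, table, and broker name here are hypothetical; only the "use_new_load_scan_node" property comes from this change:
```
-- Hypothetical label, path, table, and broker; the property below enables
-- the new load scan node for this job only.
LOAD LABEL demo_db.example_label
(
    DATA INFILE("hdfs://namenode_host:8020/path/to/file.csv")
    INTO TABLE example_table
    COLUMNS TERMINATED BY ","
)
WITH BROKER "example_broker"
PROPERTIES
(
    "use_new_load_scan_node" = "true"
);
```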
```
MySQL [db]> SELECT SUM(a.r[1]) as active_user_num,
                   SUM(a.r[2]) as active_user_num_1day,
                   SUM(a.r[3]) as active_user_num_3day,
                   SUM(a.r[4]) as active_user_num_7day
            FROM (
                SELECT user_id,
                       retention(day = '2022-11-01', day = '2022-11-02', day = '2022-11-04', day = '2022-11-07') as r
                FROM login_event
                WHERE (day >= '2022-11-01') AND (day <= '2022-11-21')
                GROUP BY user_id
            ) a;
ERROR 1105 (HY000): errCode = 2, detailMessage = sum requires a numeric parameter: sum(%element_extract%(a.r, 1))
```
If there are too many deleted tablets in RocksDB, many OLAP_ERR_TABLE_ALREADY_DELETED_ERROR errors occur during startup, and each one tries to collect the error stack. This costs a lot of time and makes the startup process take very long.
Co-authored-by: yiguolei <yiguolei@gmail.com>
1. Change jsonb_extract_string behavior: convert the value to a string instead of returning NULL if the element at the JSON path is not a string (see the example after this list).
2. Move the JSONB tutorial doc to the JSONB data type doc.
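A hedged illustration of the behavior change in item 1; the literal values are hypothetical and the exact call form is an assumption:
```
-- The element at '$.k' is a number, not a string.
-- Before this change: returns NULL. After this change: returns the string '123'.
SELECT jsonb_extract_string('{"k": 123}', '$.k');
```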
The BE crashes when querying a partitioned Hive table in text format with a partition column placed first in the select list (a hedged reproduction sketch follows the fix list below).
1. FE should use file slots to set the column mapping index of the CSV file.
2. BE should use `get_by_name` of the block to get the right column in the CSV reader.
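As a hedged reproduction sketch (catalog, database, table, and column names are all hypothetical), a query of this shape hit the crash before the fix:
```
-- `dt` is the Hive partition column and appears first in the select list;
-- the table is a text-format Hive table accessed through a Hive catalog.
SELECT dt, user_id
FROM hive_catalog.example_db.text_format_table
WHERE dt = '2022-11-01';
```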
1. Support IN bitmap syntax, like 'where k1 in (select bitmap_column from tbl)' (a hedged sketch follows this list).
2. Support bitmap runtime filter: generate a bitmap filter from the right table's bitmap and push it down to the left table's storage layer for filtering.
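A hedged sketch of the new syntax; the table and column names are hypothetical, and the IN-bitmap form comes from item 1 above:
```
-- t2.user_bitmap is a BITMAP column. The subquery result is turned into a
-- bitmap filter and pushed down to the storage layer of t1.
SELECT count(*)
FROM t1
WHERE k1 IN (SELECT user_bitmap FROM t2);
```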
Problem:
We frequently got the following error while running SELECT xxx INTO OUTFILE:
ERROR 1064 (HY000): RpcException, msg: Fail to write to broker, broker:TNetworkAddress(hostname=a.b.c.d, port=8111) failed:write() send(): Broken pipe
Reason:
We cache the broker thrift client in the BE.
The thrift client's isOpen check only returns a cached flag; it does not check whether the real socket is still open.
After we get a client from the cache, the socket may already be closed, so pwrite will fail.
How to fix:
Other interfaces, such as open and close, reopen the connection and retry, but pwrite does not retry.
Since pwrite carries a write offset, and the broker (server) side also checks the write offset, it is safe to retry pwrite.
Previously, the result was:
```
mysql> select array_position([1, null], null);
+--------------------------------------+
| array_position(ARRAY(1, NULL), NULL) |
+--------------------------------------+
| NULL |
+--------------------------------------+
1 row in set (0.02 sec)
```
After this commit, the result becomes:
```
mysql> select array_position([1, null], null);
+--------------------------------------+
| array_position(ARRAY(1, NULL), NULL) |
+--------------------------------------+
| 2 |
+--------------------------------------+
1 row in set (0.02 sec)
```
The run length of the null map is saved as `uint16_t`. Previously, the run length of the null map was
limited by `batch_size` in the `ParquetReader`, by setting `batch_size = std::min(batch_size, (size_t)USHRT_MAX)`.
This works well when the batch size is less than `USHRT_MAX`.
However, [Lazy read](https://github.com/apache/doris/pull/13917) merges empty batches until it reads
a non-empty batch or reaches the EOF of a row group, so `batch_size` may exceed `USHRT_MAX`
for non-predicate columns.
In addition, even if `batch_size` does not exceed `USHRT_MAX`, adjacent batches may still make
the run length exceed `USHRT_MAX` in `ColumnSelectVector::get_next_run`.
This PR makes hash table sharing for broadcast joins more robust:
Add a session variable to enable/disable this feature (a hedged usage sketch follows this list).
Do not block the hash join node's close function.
Use a shared pointer to share the hash table and runtime filter among broadcast join nodes.
A hash join node that doesn't need to build the hash table closes its right child without reading any data (the child closes the corresponding sender).
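A hedged usage sketch; the exact session variable name below is an assumption, not confirmed by this description:
```
-- Assumed variable name; enables or disables hash table sharing for
-- broadcast joins in the current session.
SET enable_share_hash_table_for_broadcast_join = true;
```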