1. Support function aliases such as mod/fmod and adddate/date_add.
2. Support multi-argument versions of some functions: week, yearweek.
3. Fix a bug where multi-argument function calls for the DATETIME type did not take effect for the DATE type.
Sometimes the BE is built on a machine with SIMD instruction sets such as AVX2,
but the binary is then copied to a machine without AVX2 support, where it crashes without any error message.
This PR checks for the required SIMD instructions during startup and prints an error message if they are missing.
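A minimal sketch of such a startup check, assuming the GCC/Clang builtin `__builtin_cpu_supports`; the exact instruction sets to verify depend on the flags the binary was built with, and `check_required_simd` is a hypothetical name:

```cpp
#include <cstdio>
#include <cstdlib>

// Verify at startup that the CPU supports the instruction sets this binary
// was compiled for, and fail with a clear message instead of crashing later
// with an illegal-instruction signal.
static void check_required_simd() {
#if defined(__AVX2__)
    if (!__builtin_cpu_supports("avx2")) {
        fprintf(stderr, "BE was built with AVX2, but this CPU does not support it.\n");
        exit(1);
    }
#endif
#if defined(__SSE4_2__)
    if (!__builtin_cpu_supports("sse4.2")) {
        fprintf(stderr, "BE was built with SSE4.2, but this CPU does not support it.\n");
        exit(1);
    }
#endif
}
```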
The Hive Bitmap UDFs provide UDFs for generating bitmaps and performing bitmap operations in Hive tables.
The bitmap format in Hive is exactly the same as the Doris bitmap,
so bitmaps in Hive can be imported into Doris through Spark bitmap load.
Two-phase batch commit means:
during a Stream Load, after the data is written, a message is returned to the client;
at this point the data is invisible, and the transaction status is PRECOMMITTED.
The data becomes visible only after a COMMIT is triggered by the client.
1. The user can invoke the following interface to trigger a commit operation for the transaction:
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:commit" \
http://fe_host:http_port/api/{db}/_stream_load_2pc
or
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:commit" \
http://be_host:webserver_port/api/{db}/_stream_load_2pc
2. The user can invoke the following interface to trigger an abort operation for the transaction:
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:abort" \
http://fe_host:http_port/api/{db}/_stream_load_2pc
or
curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:abort" \
http://be_host:webserver_port/api/{db}/_stream_load_2pc
Framework code for statistics collection,
containing only the main data structures, with no implementation details.
This PR does not affect any existing code,
and users are not yet able to create statistics jobs.
1. Set both `tuple_offsets` and `new_tuple_offsets` in PRowBatch for compatibility.
2. Set the FE config `repair_slow_replica` to false by default,
to avoid impacting the load process after upgrading.
E.g., if there are only 2 replicas and one of them has a high version count, after the upgrade
that replica would be marked as bad, and the load process would stop
because only 1 replica would be alive.
3. Fix a bug where NodeChannel may be blocked at `close_wait()`:
the `add_batch_finish` flag was not set after the last RPC finished (see the sketch after this list).
4. Fix an NPE in RoutineLoadScheduler.
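A minimal sketch of the blocking pattern behind item 3, with hypothetical member names modeled on `close_wait()` and `add_batch_finish`; the fix amounts to making sure the finish flag is set (and waiters are notified) on the completion path of the last RPC:

```cpp
#include <condition_variable>
#include <mutex>

class NodeChannel {
public:
    // Must run in the completion callback of the *last* add-batch RPC;
    // if it is forgotten, close_wait() below blocks forever.
    void mark_add_batch_finished() {
        {
            std::lock_guard<std::mutex> lock(_mutex);
            _add_batch_finished = true;
        }
        _cv.notify_all();
    }

    // Blocks until the last add-batch RPC has finished.
    void close_wait() {
        std::unique_lock<std::mutex> lock(_mutex);
        _cv.wait(lock, [this] { return _add_batch_finished; });
    }

private:
    std::mutex _mutex;
    std::condition_variable _cv;
    bool _add_batch_finished = false;
};
```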
Fix the following compilation error at MyForeachPartitionFunction.java:xxx:xxx:
java: org.apache.doris.demo.spark.demo.hdfs.MyForeachPartitionFunction is not abstract and does not override abstract method apply$mcVJ$sp(long) in scala.Function1
1. Add `iceberg_table_creation_strict_mode` to `fe.conf` to control Iceberg external table creation when a data type is not supported in Doris.
2. Add `REFRESH` syntax to synchronize Iceberg tables and databases.
3. Support creating Iceberg external tables with specific column definitions.
1. Added an example of the HTTP interface's return value in table-schema-action.md.
2. Corrected typos in error.md.
3. Modified the code comments in text_converter.hpp.
PR #7936 changed some FE log levels to debug, so when an error happens it is not easy
to find out which SQL caused the error.
This PR adds the stmt id and query id to the error log, so that users can use these identifiers to find the SQL in fe.audit.log.
If the underlying tableRef represents a CTE or a view,
the tableRef will be reset during semantic analysis.
The new tableRef needs to inherit the lateral view property of the original tableRef
to ensure that the lateral view is not accidentally lost during analysis.
Fix a bug where the ltrim result may be incorrect in some cases.
According to https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html,
for the built-in functions `int __builtin_clz(unsigned int x)` and `int __builtin_ctz(unsigned int x)`,
the result is undefined if x is 0, and GCC and Clang actually return different
results in that case. So we handle the case of 0 separately, as sketched below.
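A minimal sketch of the workaround, checking for 0 before calling the builtin (returning 32 is a natural convention for a 32-bit input, but any well-defined value works):

```cpp
#include <cstdint>

// __builtin_ctz(0) is undefined, and GCC and Clang disagree on the value it
// actually produces, so 0 must be handled before calling the builtin.
inline int count_trailing_zeros(uint32_t x) {
    return x == 0 ? 32 : __builtin_ctz(x);
}
```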
1. Reuse the Schema to avoid copying, because cloning a Schema generates many sub Field objects (see the sketch after this list).
2. Call the interfaces provided by Block to reduce the number of lines of code.
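A minimal sketch of the reuse pattern in item 1, with simplified stand-in types (the real Doris classes differ); the point is that consumers share one immutable Schema instead of each cloning it along with all of its sub Field objects:

```cpp
#include <memory>
#include <vector>

struct Field {};                               // stand-in for column metadata
struct Schema { std::vector<Field> fields; };  // stand-in for the real Schema

class Reader {
public:
    // Sharing a const Schema avoids a deep clone per reader.
    explicit Reader(std::shared_ptr<const Schema> schema)
        : _schema(std::move(schema)) {}

private:
    std::shared_ptr<const Schema> _schema;
};
```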
This PR mainly changes:
1. Change the definition of PBlock.
The new PBlock consists of a set of PColumnMeta and a binary buffer.
The PColumnMeta records the metadata of all columns in the Block,
while the buffer stores the serialized binary data of all columns.
2. Refactor the serialize/deserialize methods of the data types.
Rewrite `serialize()/deserialize()` of IDataType, and add
a new method `get_uncompressed_serialized_bytes()` that returns the total length
of a column's uncompressed serialized data.
3. Rewrite the serialize/deserialize methods of Block.
Now, when serializing a Block to a PBlock, we first compute the total length
of the uncompressed serialized data of all columns in the Block, then allocate
the memory once and write the serialized data to the buffer (see the sketch after this list).
4. Use a brpc attachment to transmit the serialized column data.
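A minimal sketch of the two-pass flow described above, with simplified stand-in types (the real code works on IDataType/Block/PBlock; only the method names `get_uncompressed_serialized_bytes()` and `serialize()` follow this PR):

```cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <vector>

// Stand-in for a column; pretend `data` is its serialized form.
struct Column {
    std::vector<char> data;

    // Total length of this column's uncompressed serialized data.
    std::size_t get_uncompressed_serialized_bytes() const { return data.size(); }

    // Writes the serialized bytes at `buf` and returns the advanced pointer.
    char* serialize(char* buf) const {
        if (!data.empty()) {
            std::memcpy(buf, data.data(), data.size());
        }
        return buf + data.size();
    }
};

// Pass 1: sum the sizes of all columns. Pass 2: allocate the buffer once
// and let every column write its bytes into it. The resulting buffer is
// what gets sent as a brpc attachment.
std::string serialize_block(const std::vector<Column>& columns) {
    std::size_t total = 0;
    for (const auto& col : columns) {
        total += col.get_uncompressed_serialized_bytes();
    }
    std::string buffer(total, '\0');
    char* pos = &buffer[0];
    for (const auto& col : columns) {
        pos = col.serialize(pos);
    }
    return buffer;
}
```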