The type and name parsing functions could read past the end of allocated memory
because they didn't check for the terminating null character. Also fixed the
printf format string used when the list of used tables is created.
Fixed the CDC testing connector to abort on error and added some extra output
to the cdc_datatypes test.
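As a rough illustration of the parsing issue (the function below is a hypothetical
example, not the actual avrorouter code), a name parser has to stop at the
terminating null character as well as at its delimiter, otherwise it can read past
the end of the allocation:

    #include <stddef.h>

    /* Hypothetical example: copy a name token, stopping at the delimiter,
     * the terminating null character or the end of the destination buffer,
     * whichever comes first. */
    static size_t copy_name(char *dest, size_t dest_size, const char *src)
    {
        size_t i = 0;

        while (src[i] != '\0' && src[i] != ' ' && i + 1 < dest_size)
        {
            dest[i] = src[i];
            i++;
        }

        dest[i] = '\0';
        return i;
    }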
The schema generator program needs to add the real_type and length fields
if the data types define them.
Also fixed a bug where the real_type and length fields were checked for
generated fields.
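A minimal sketch of the idea in C (hypothetical helper with hand-written JSON; the
real generator works differently): the extra properties are emitted only when the
data type defines them.

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical helper: format one Avro field definition, adding the
     * real_type and length properties only when the data type defines them. */
    static int format_field(char *buf, size_t size, const char *name,
                            const char *avro_type, const char *real_type,
                            int length)
    {
        if (real_type && length > 0)
        {
            return snprintf(buf, size,
                            "{\"name\": \"%s\", \"type\": \"%s\", "
                            "\"real_type\": \"%s\", \"length\": %d}",
                            name, avro_type, real_type, length);
        }

        return snprintf(buf, size, "{\"name\": \"%s\", \"type\": \"%s\"}",
                        name, avro_type);
    }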
The DATETIME(n) values generated by a MariaDB 10.0 server were not
interpreted correctly as the wrong algorithm was used to extract the
values.
DATETIME(0) values still do not work properly and they require further
debugging and changes to the code.
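As background, the classic pre-10.1 DATETIME value without fractional seconds is,
to my understanding, stored as a packed decimal-style integer of the form
YYYYMMDDhhmmss; a rough, illustrative unpacking (not the avrorouter code, and
ignoring the fractional seconds entirely) looks like this:

    #include <stdint.h>
    #include <stdio.h>

    struct dt_parts { int year, month, day, hour, minute, second; };

    /* Illustrative only: split a packed YYYYMMDDhhmmss integer into parts. */
    static void unpack_old_datetime(uint64_t value, struct dt_parts *dt)
    {
        dt->second = value % 100; value /= 100;
        dt->minute = value % 100; value /= 100;
        dt->hour   = value % 100; value /= 100;
        dt->day    = value % 100; value /= 100;
        dt->month  = value % 100; value /= 100;
        dt->year   = (int)value;
    }

    int main(void)
    {
        struct dt_parts dt;
        unpack_old_datetime(20180517093042ULL, &dt);
        printf("%04d-%02d-%02d %02d:%02d:%02d\n",
               dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second);
        return 0;
    }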
The Avro schema allows custom properties to be defined for the schema
fields. The avrorouter now stores extra information about the table in the
schema for later use.
Currently, this information is only generated by the avrorouter
itself. Further improvements to the schema generator scripts are still
needed.
When the binlog has been read to the end, it needs to be treated as if the
transaction or row limit had been hit. This causes all tables to be
flushed to disk before the files are indexed.
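A minimal sketch of that decision, with hypothetical names, just to show where the
end-of-binlog condition fits in:

    #include <stdbool.h>

    /* Hypothetical condition: reaching the end of the binlog triggers the
     * same flush as hitting the row or transaction limit. */
    static bool should_flush(bool end_of_binlog, unsigned rows, unsigned trx,
                             unsigned row_limit, unsigned trx_limit)
    {
        return end_of_binlog || rows >= row_limit || trx >= trx_limit;
    }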
When a MariaDB 10.0 DATETIME field with a custom length was defined, the
field offsets weren't calculated properly.
As there is no metadata for pre-10.1 DATETIME types with decimal
precision, the metadata (i.e. decimal count) needs to be gathered from the
CREATE TABLE statement. This information is then used to calculate the
correct field length when the value is decoded.
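A simplified sketch of how the decimal count could be picked out of the column
definition (hypothetical code, not the actual parser):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical example: read the decimal count from a column definition
     * such as "created DATETIME(3) NOT NULL". */
    static int datetime_decimals(const char *column_def)
    {
        const char *p = strstr(column_def, "DATETIME(");

        if (p)
        {
            return atoi(p + strlen("DATETIME("));
        }

        return 0; /* no precision given */
    }

    int main(void)
    {
        printf("%d\n", datetime_decimals("created DATETIME(3) NOT NULL"));
        return 0;
    }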
This change does not fix the incorrect interpretation of the old DATETIME
values. The converted values are still garbled because the value needs to be
shifted out of the decimal format before it can be properly converted.
The connection timeout check assumed that only valid, fully established
sessions should be timed out.
Since connections can be idle even before the session is fully established,
the check should be expanded to cover all connections regardless of the
session state.
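A simplified sketch of the expanded check, using made-up structures rather than the
real MaxScale DCB code:

    #include <stdbool.h>
    #include <time.h>

    /* Made-up connection structure standing in for a MaxScale client DCB. */
    struct connection
    {
        time_t last_activity;
        bool   session_established;
        struct connection *next;
    };

    /* Close every idle client connection, whether or not its session has
     * been fully established. */
    static void check_timeouts(struct connection *list, time_t now,
                               time_t timeout,
                               void (*close_cb)(struct connection *))
    {
        for (struct connection *c = list; c; c = c->next)
        {
            if (now - c->last_activity > timeout)
            {
                close_cb(c); /* session_established is no longer consulted */
            }
        }
    }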
The fixed length string processing assumed that the string lengths were
contained in the first byte. This is not true for large fixed length
strings that take more than 255 bytes to store, such as multi-byte
character strings that can take up to 1024 bytes.
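A minimal sketch of the corrected length handling (illustrative names, assuming a
little-endian two byte prefix for the larger strings):

    #include <stdint.h>
    #include <stddef.h>

    /* Read the length prefix of a fixed length string field: one byte when
     * the field fits in 255 bytes, two bytes for larger fields. */
    static size_t read_string_length(const uint8_t *ptr,
                                     unsigned field_max_bytes,
                                     size_t *prefix_bytes)
    {
        if (field_max_bytes > 255)
        {
            *prefix_bytes = 2;
            return (size_t)ptr[0] | ((size_t)ptr[1] << 8);
        }

        *prefix_bytes = 1;
        return ptr[0];
    }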
If a complete response is delivered in many buffers, calling
gwbuf_length() whenever the complete size is needed starts to hurt.
By caching the length of the data received so far and by updating
the length in clientReply(), gwbuf_length() will be called exactly
once for each buffer chain delivered to routeQuery().
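A simplified sketch of the caching, with stand-in buffer and session structures
(the real MaxScale types and the clientReply() signature differ):

    #include <stddef.h>

    /* Stand-ins for MaxScale's buffer chain and router session. */
    typedef struct gwbuf
    {
        size_t len;
        struct gwbuf *next;
    } GWBUF;

    typedef struct
    {
        size_t reply_len_so_far; /* cached length of the response so far */
    } SESSION_STATE;

    /* Walks the whole chain; this is the call we want to make only once per
     * delivered buffer chain. */
    static size_t gwbuf_length(const GWBUF *buf)
    {
        size_t total = 0;

        for (; buf; buf = buf->next)
        {
            total += buf->len;
        }

        return total;
    }

    /* Called from clientReply(): add the length of the newly delivered chain
     * to the cached total so later code can read it without re-walking. */
    static void update_reply_length(SESSION_STATE *state, const GWBUF *reply)
    {
        state->reply_len_so_far += gwbuf_length(reply);
    }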
test_poll was calling poll_init() twice, since it is already called by
init_test_env().
test_queuemanager was missing a number of frees. This doesn't fix it completely,
but it removes most of the leaks and Valgrind errors.
If the output buffer given to pcre2_substitute is too small, an error
value is written to the last parameter (the output length). That value
should not be used for calculations, so this patch passes a copy of it
as the parameter instead.
Coincidentally, this commit fixes the crashes in the query classifier tests.
Also, increase the buffer growth rate in utils.c.
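A minimal, self-contained sketch of the idea; the pattern and buffer sizes are made
up for illustration and this is not the MaxScale code itself:

    #define PCRE2_CODE_UNIT_WIDTH 8
    #include <pcre2.h>
    #include <stdio.h>

    int main(void)
    {
        int err;
        PCRE2_SIZE erroff;
        pcre2_code *re = pcre2_compile((PCRE2_SPTR)"cat",
                                       PCRE2_ZERO_TERMINATED, 0,
                                       &err, &erroff, NULL);
        if (re == NULL)
        {
            return 1;
        }

        PCRE2_UCHAR out[4];                 /* deliberately too small      */
        PCRE2_SIZE out_size = sizeof(out);  /* authoritative buffer size   */
        PCRE2_SIZE size_copy = out_size;    /* copy handed to pcre2        */

        int rc = pcre2_substitute(re, (PCRE2_SPTR)"the cat sat",
                                  PCRE2_ZERO_TERMINATED, 0, 0, NULL, NULL,
                                  (PCRE2_SPTR)"dog", PCRE2_ZERO_TERMINATED,
                                  out, &size_copy);

        if (rc == PCRE2_ERROR_NOMEMORY)
        {
            /* out_size is still valid here; size_copy must not be trusted */
            printf("buffer of %zu bytes was too small\n", (size_t)out_size);
        }

        pcre2_code_free(re);
        return 0;
    }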
If a client connects from an IPv4 address but the listener listens on an
IPv6 address, the client IP will be an IPv6-mapped IPv4 address,
e.g. ::ffff:127.0.0.1. A grant for an IPv4 address should still match an
IPv6-mapped IPv4 address.
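A small illustrative sketch of the matching idea (hypothetical helper, not the
actual MaxScale authenticator code):

    #include <stdio.h>
    #include <string.h>

    /* Treat an IPv6-mapped IPv4 client address (::ffff:a.b.c.d) as the plain
     * IPv4 address when matching it against grants. */
    static const char *strip_v6_mapping(const char *addr)
    {
        static const char prefix[] = "::ffff:";

        if (strncmp(addr, prefix, sizeof(prefix) - 1) == 0 &&
            strchr(addr, '.') != NULL)
        {
            return addr + sizeof(prefix) - 1;
        }

        return addr;
    }

    int main(void)
    {
        printf("%s\n", strip_v6_mapping("::ffff:127.0.0.1")); /* 127.0.0.1 */
        return 0;
    }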
When users were loaded, the permissions for the service user were
checked. The conditional that makes sure the check is executed only at
startup was checking the listener's users instead of the SQLite handle,
which caused every reload of the users to check the permissions.
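As a rough sketch of the corrected condition, using hypothetical field names:

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical listener structure with both the cached user data and
     * the SQLite handle that backs it. */
    struct listener
    {
        void *users;     /* previously (and wrongly) used for the check      */
        void *users_db;  /* SQLite handle: NULL only before the first load   */
    };

    /* The service permission check should run only when the SQLite handle
     * has not been created yet, i.e. on the very first load of the users. */
    static bool need_permission_check(const struct listener *l)
    {
        return l->users_db == NULL;
    }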