Field types and lengths weren't added in the avrorouter ALTER TABLE
handler. This caused crashes when an ALTER TABLE was done and new rows
were inserted afterwards.
The type and name parsing functions could read past the end of allocated
memory as they didn't check for the terminating null character. Also fixed
the printf format string used when the list of used tables is created.
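The kind of terminator check that was missing can be illustrated with a
small sketch; the function below is hypothetical and only shows the
pattern of stopping at the null byte instead of assuming a delimiter is
always present.

    #include <ctype.h>

    /* Hypothetical sketch: scan over an identifier but always stop at the
     * terminating null byte so the scan never runs past the end of the
     * string, even when the expected delimiter is missing. */
    static const char* skip_identifier(const char* ptr)
    {
        while (*ptr != '\0' && (isalnum((unsigned char)*ptr) || *ptr == '_'))
        {
            ptr++;
        }

        return ptr;
    }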
Fixed CDC testing connector to abort on error and added some extra output
to the cdc_datatypes test.
The schema generator program needs to add the real_type and length fields
if the data types define them.
Also fixed a bug where the real_type and length fields were checked even
for generated fields.
The Avro schema allows custom properties to be defined for the schema
fields. The avrorouter stores extra information about the table in the
schema for later use.
Currently, this information is only generated by the avrorouter
itself. Further improvements to the schema generator scripts still need to
be made.
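As an illustration, a field in a generated .avsc schema could carry the
extra properties alongside the standard ones; the column name, type and
values below are invented.

    {
        "name": "address",
        "type": ["null", "string"],
        "real_type": "varchar",
        "length": 64
    }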
When the binlog has been read, it needs to be treated as if the
transaction or row limit has been hit. This will cause all tables to be
flushed to disk before the files are indexed.
When a MariaDB 10.0 DATETIME field with a custom length was defined, the
field offsets weren't calculated properly.
As there is no metadata for pre-10.1 DATETIME types with decimal
precision, the metadata (i.e. decimal count) needs to be gathered from the
CREATE TABLE statement. This information is then used to calculate the
correct field length when the value is decoded.
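A minimal sketch of gathering the decimal count from the column
definition, assuming the type string looks like "DATETIME(3)"; the helper
is illustrative, not the actual avrorouter parser.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative only: extract the decimal count from a column type
     * string such as "DATETIME(3)". Returns 0 when no precision is given. */
    static int datetime_decimals(const char* type_def)
    {
        const char* paren = strchr(type_def, '(');
        return paren ? atoi(paren + 1) : 0;
    }

    int main(void)
    {
        printf("%d\n", datetime_decimals("DATETIME(3)")); /* prints 3 */
        printf("%d\n", datetime_decimals("DATETIME"));    /* prints 0 */
        return 0;
    }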
This change does not fix the incorrect interpretation of the old DATETIME
value. The converted values are still garbled due to the fact that the
value needs to be shifted out of the decimal format before it can be
properly converted.
The fixed length string processing assumed that the string length was
contained in the first byte. This is not true for large fixed length
strings that take more than 255 bytes to store, typically multi-byte
character strings that can take up to 1024 bytes.
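A minimal sketch of the corrected length handling, assuming that values of
fields whose maximum size exceeds 255 bytes carry a two-byte little-endian
length prefix; the helper and its signature are hypothetical.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical sketch: read the length prefix of a fixed length string
     * value. Fields that can take more than 255 bytes use a two-byte
     * little-endian prefix instead of a single byte. */
    static size_t read_string_length(const uint8_t* ptr, unsigned int max_bytes,
                                     size_t* prefix_size)
    {
        if (max_bytes > 255)
        {
            *prefix_size = 2;
            return ptr[0] | (ptr[1] << 8);
        }

        *prefix_size = 1;
        return ptr[0];
    }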
The order of the servers in the service definition could break the
master_accept_reads functionality.
When the first server defined in the service was a slave, it was always
picked as the first candidate for reads. The master was only considered as
a candidate for reads if no previous candidate was available. For this
reason, master_accept_reads only worked when the first server in the list
was the master.
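For reference, master_accept_reads is enabled in the readwritesplit
service definition. In a configuration like the hypothetical one below,
where a slave is listed first, the bug meant the master was never picked
for reads; server names and credentials are placeholders, and the exact
parameter placement may vary by MaxScale version.

    [Read-Write-Service]
    type=service
    router=readwritesplit
    servers=slave1,master1,slave2
    master_accept_reads=true
    user=maxuser
    password=maxpwd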
The Avro C API fails to write bytes of size zero. A workaround is to write
a single zero byte for each NULL field of type bytes.
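A minimal sketch of the workaround using the Avro C value API; error
handling and the surrounding record setup are omitted, and the helper
itself is hypothetical.

    #include <avro.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical sketch: instead of writing a zero-length buffer, which
     * the Avro C API rejects, write a single zero byte. */
    static int write_bytes_value(avro_value_t* field, void* data, size_t len)
    {
        static uint8_t zero = 0;

        if (len == 0)
        {
            return avro_value_set_bytes(field, &zero, 1);
        }

        return avro_value_set_bytes(field, data, len);
    }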
Also added an option to configure the Avro block size in case very large
records are written.
Binlog rotations weren't detected because the file names weren't
compared.
Moved the indexing of the binlogs to the end of the binlog
processing. This way the files can be flushed multiple times before they
are indexed.
The old DATETIME format wasn't processed properly, which caused
corruption of the following events.
A BLOB type value could be non-NULL but still have no data. In this case,
the value should be stored as a null Avro value.
The DECIMAL value type is now properly handled in the avrorouter. It is
processed into an Avro double value, whereas before it was ignored and
replaced with a zero integer.
Backported to the 2.0 branch.
The backtick was copied to the field name and converted to an underscore
when the name was transformed into a valid Avro identifier. This caused
one extra character to appear in the field name in the Avro schema files.
Some of the JSON errors weren't handled, which could cause problems when a
malformed schema definition is read.
Also added more error messages for situations where opening the files
fails.
When a DCB error occurs, the handleError entry point of the routers is
called. The caller of this entry point expects that the error handler
marks the DCB as handled. This behavior is wrong, as the error handler
should not need to keep track of whether it was already called.
Closing the DCB and the backend reference that uses it at the same time
makes the error handling code clearer and removes some of the assumptions
that the code made. It will cause the DCB to be closed in multiple places,
but the reason why a DCB is being closed is more visible in the code.
This change should remove all cases where a DCB is closed without a
tightly coupled backend reference.
If the `error_on_write` mode is used when a master loses the master state,
the backend would not get closed. This would allow a master that comes
back up to be used again, which is not intended.
Added a check for the validity of the backend DCBs before they are
closed. This should guarantee that only valid DCBs are closed by
readwritesplit.
However, this is not the correct solution to the problem. The DCB should
not be in an invalid state in the first place, and this fix only removes
the bad side effects of the double closing.
With the added logging in the readwritesplit error handler, more detailed
information should become available.
If a DCB is passed to the error handler and the corresponding backend
reference cannot be found, the DCB should not be closed.
Added extra logging for situations where the backend reference can't be
found or is in the wrong state.
MXS-1009. This commit adds a gwbuf_free after maxinfo_execute() to
free the buffer containing the SQL query after it has been processed.
Also, the parse tree in maxinfo_execute_query() is now freed. The
tree_free function was renamed to maxinfo_tree_free, since it is now
globally available.
This commit has additional changes (in relation to the 1.4.4 branch)
to remove errors caused by differences between the HTML and SQL sides
of MaxInfo.
This is not the optimal way to do error handling but it should solve all
problems that could arise from the multi-threaded model of MaxScale.
By taking a lock at the start of handleError, we'll be able to modify the
DCB error handling flag in a thread-safe manner. This should prevent
double error handling for all DCBs.
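The pattern can be sketched with a plain pthread mutex rather than
MaxScale's own locking primitives; the structure and function names below
are hypothetical.

    #include <pthread.h>
    #include <stdbool.h>

    /* Hypothetical sketch: take a lock before checking and setting the
     * "error already handled" flag so that only one thread performs the
     * error handling for a given DCB. */
    struct my_dcb
    {
        pthread_mutex_t lock;
        bool error_handled;
    };

    static bool claim_error_handling(struct my_dcb* dcb)
    {
        bool first = false;

        pthread_mutex_lock(&dcb->lock);

        if (!dcb->error_handled)
        {
            dcb->error_handled = true;
            first = true;
        }

        pthread_mutex_unlock(&dcb->lock);

        return first;
    }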
JSON does not have a concept of streams, and a common way to stream JSON
is to separate each JSON object with a newline. Adding a newline makes the
stream easier to parse, as compactly serialized JSON values do not contain
newlines.
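A minimal sketch of newline-delimited JSON output, assuming the Jansson
library; the record contents are made up.

    #include <stdio.h>
    #include <stdlib.h>
    #include <jansson.h>

    int main(void)
    {
        /* Two records streamed as newline-delimited JSON: each object is
         * serialized compactly (no embedded newlines) and followed by a
         * newline character. */
        for (int i = 0; i < 2; i++)
        {
            json_t* obj = json_pack("{s:i, s:s}", "id", i, "name", "example");
            char* text = json_dumps(obj, JSON_COMPACT);

            printf("%s\n", text);

            free(text);
            json_decref(obj);
        }

        return 0;
    }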
The prepared statements were routed according to their real type instead
of being routed to the master. This was caused by the change in the route
target function.
This commit adds a free() to null_auth_free_client_data, which plugs
the memory leak in maxinfo.
Also, this commit fixes some segfaults when multiple threads are
running status_row() or variable_row(). The functions use
statically allocated index variables, which often go out-of-bounds
in concurrent use. This fix changes the indexes to thread-specific
variables, which are allocated and deallocated per thread. This does seem
to slow the functions down somewhat.
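The thread-specific pattern can be sketched with a pthread key; the actual
maxinfo change may differ in detail and the names below are hypothetical.

    #include <pthread.h>
    #include <stdlib.h>

    /* Hypothetical sketch: replace a statically allocated index with a
     * per-thread value so that concurrent callers do not interfere. Each
     * thread allocates its own copy, which is freed when the thread exits. */
    static pthread_key_t index_key;
    static pthread_once_t index_once = PTHREAD_ONCE_INIT;

    static void make_index_key(void)
    {
        pthread_key_create(&index_key, free);
    }

    static int* get_thread_index(void)
    {
        pthread_once(&index_once, make_index_key);

        int* idx = pthread_getspecific(index_key);

        if (idx == NULL)
        {
            idx = calloc(1, sizeof(*idx));
            pthread_setspecific(index_key, idx);
        }

        return idx;
    }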
With the use_sql_variables_in=master option, readwritesplit should route
all user variable modifications and reads with user variables to the
master.
Previously, the modification of user variables was grouped together with
generic system variables, which caused all modifications to system
variables to go to the master only. The router requires a finer-grained
distinction between normal system variable modifications and user variable
modifications.
With the improvements to the query classifier, readwritesplit now properly
routes all user variable operations to the master and other system
variable modifications to all servers.
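To illustrate the distinction described above, with
use_sql_variables_in=master the statements touching the user variable
@myvar would go to the master, while the plain system variable change
would still be sent to all servers; the variable names are arbitrary.

    -- User variable write and read: routed to the master.
    SET @myvar = 1;
    SELECT @myvar;

    -- System variable modification: routed to all servers.
    SET autocommit = 1;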