The service for a dummy session will be NULL. If authentication fails for
a dummy session, no service-level actions should be taken.
Only the binlogrouter can trigger an authentication failure with a dummy
session, as it creates connections before the service itself has started.
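As a rough sketch of the resulting guard, assuming placeholder type and
function names rather than the actual MaxScale API:

    /* Sketch only: the names below are placeholders, not the MaxScale API. */
    #include <stddef.h>

    struct service;

    typedef struct session
    {
        struct service *service;   /* NULL for a dummy session */
    } session_t;

    /* Placeholder for whatever service-level reaction a failure normally triggers. */
    void service_level_action(struct service *svc);

    void handle_auth_failure(session_t *session)
    {
        if (session->service == NULL)
        {
            /* Dummy session, e.g. one created by the binlogrouter before the
             * service has started: take no service-level actions. */
            return;
        }

        service_level_action(session->service);
    }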
Whether a query explicitly defined a database was ignored by the
statement parsing function; it always assumed that the database was
explicitly defined. This caused the TABLE keyword in a CREATE TABLE
statement to be falsely identified as the current database.
This is a partial cherry-pick of 9f11fdd2c122c9b30548c7ab55df8dda643ec659.
Use a larger BLOB type for mxs812_1, as the inserted value exceeds the normal
BLOB size.
Add a baseline check to bulk_insert to verify that direct connections
work (at the moment they do not; this needs investigation).
Modified avro_alter to prevent data type conversion errors.
The readqueue should never be explicitly assigned and should only ever be
appended to. This guarantees that the packets are read and processed in
the correct order.
Also removed an unused function that dealt with readqueue manipulation.
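A minimal sketch of the intended pattern, using placeholder buffer and DCB
types rather than the actual GWBUF API: leftover data is always appended to
the readqueue, never assigned over it.

    /* Sketch only: placeholder types, not the real GWBUF/DCB definitions. */
    #include <stddef.h>

    typedef struct buffer
    {
        struct buffer *next;
    } buffer;

    typedef struct dcb
    {
        buffer *readqueue;
    } dcb;

    /* Append 'tail' to the end of the 'head' chain and return the chain head. */
    static buffer *buffer_append(buffer *head, buffer *tail)
    {
        if (head == NULL)
        {
            return tail;
        }

        buffer *end = head;

        while (end->next)
        {
            end = end->next;
        }

        end->next = tail;
        return head;
    }

    static void store_leftover(dcb *d, buffer *leftover)
    {
        /* Always append. Assigning 'd->readqueue = leftover' here could drop
         * previously read packets and break the processing order. */
        d->readqueue = buffer_append(d->readqueue, leftover);
    }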
When packets were routed individually, the check for whether a connection
qualifies for the persistent pool was done before the current command was
updated. In addition to this, the previous commit does not even appear to build.
The statement tracking was given the same buffer multiple times when a
large packet was spread across multiple buffers and a router with
RCAP_TYPE_STMT_INPUT was present.
The command tracking packet processing is redundant for routers that
require RCAP_TYPE_STMT, as the same processing is done later when the
buffer is split into packets and routed. By using the split packets, the
protocol module can simplify the command tracking a great deal for most
routers.
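A hedged sketch of what the per-packet tracking amounts to once complete
packets are available (this is not the actual protocol module code): the
command byte is simply the first payload byte of every packet that starts a
new command.

    #include <stdint.h>
    #include <stddef.h>

    #define MYSQL_HEADER_LEN 4   /* 3-byte payload length + 1-byte sequence id */

    /* Sketch only: 'packet' is assumed to hold one complete MySQL packet.
     * Only packets that begin a new command (sequence id 0) carry a command
     * byte as the first payload byte, so only those update the tracked command. */
    static void track_command(uint8_t *current_cmd, const uint8_t *packet, size_t len)
    {
        if (len > MYSQL_HEADER_LEN && packet[3] == 0)
        {
            *current_cmd = packet[MYSQL_HEADER_LEN];
        }
    }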
When backend authentication failed due to errors other than wrong
credentials, the users were unconditionally reloaded. This caused a spike
in user-reloading activity whenever authentication failed for other reasons.
Also fixed the test that checks for this to look for the correct error
message.
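A hedged sketch of the intended behaviour; the function and type names are
placeholders rather than MaxScale's API, and only the MySQL error code 1045
(ER_ACCESS_DENIED_ERROR) is a real constant.

    /* Sketch only: placeholder names, not the actual MaxScale API. */
    #define ER_ACCESS_DENIED_ERROR 1045

    struct service;

    /* Placeholder for the (rate-limited) reload of the service's user data. */
    void refresh_users(struct service *svc);

    void on_backend_auth_failure(struct service *svc, unsigned int error_code)
    {
        if (error_code == ER_ACCESS_DENIED_ERROR)
        {
            /* Wrong credentials: the cached users may be stale, so reload them. */
            refresh_users(svc);
        }
        /* Other errors (network problems, server shutting down, ...) do not
         * indicate stale users, so reloading would only create extra load. */
    }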
Added some code to detect server states in a consistent manner and created
the test. Multi-source replication appeared to cause problems for the test
system; these still need to be resolved.
Added --force flags to most direct `mysql` calls to prevent errors from
stopping the processing of remaining commands.
The new parameter allows master servers that are external to the monitor
configuration to be ignored. This makes it possible to use sub-trees of the
actual replication tree as fully fledged replication trees.
When a table map event is read after an ALTER TABLE, the old TABLE_MAP
object contains stale information. Because of this, and because it also
makes the code easier to read, the recycling of TABLE_MAP objects was
removed. In practice, there were no benefits to re-mapping the tables to a
different ID.
The Annotate_rows events were not processed, which caused the following
table map event to be ignored.
Also removed a false debug assertion. The byte count can be zero and the
pointer is not guaranteed to point to anything valid.
If a CREATE TABLE statement had a quoted keyword as the name of a field,
the calculated and actual column counts would differ.
In addition to this, one-line comments before the end of the statement
would truncate the SQL, because the whitespace was squashed
before the comment removal was done.
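A small sketch of the corrected processing order, with hypothetical helper
names; the point is only that comments must be stripped while the newlines
terminating them still exist.

    /* Sketch only: hypothetical helpers illustrating the processing order. */
    void remove_comments(char *sql);     /* strips '-- ...' up to the next newline */
    void squash_whitespace(char *sql);   /* collapses runs of whitespace to one space */

    void preprocess_statement(char *sql)
    {
        /* Remove one-line comments first, while their terminating newlines
         * are still present; only then squash the whitespace. Doing it in
         * the opposite order lets a comment swallow the rest of the SQL. */
        remove_comments(sql);
        squash_whitespace(sql);
    }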
If the provided config path refers to a directory, it can still
be opened and an attempt made to read it. However, as the read
will fail but end-of-file will not be reached, we can't rely upon
'feof()' for detecting when to bail out.
As it is a user error to provide a directory as the config path,
that will be detected and deemed an error in a subsequent commit.
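The underlying behaviour can be seen with a small standalone C example (not
MaxScale code): on Linux, fopen() on a directory succeeds, the read then
fails with EISDIR, ferror() becomes set, but feof() never does, so a loop
guarded only by feof() will not terminate normally.

    #include <stdio.h>

    int main(void)
    {
        /* "/tmp" is only an example path that is known to be a directory. */
        FILE *fp = fopen("/tmp", "r");

        if (fp == NULL)
        {
            perror("fopen");
            return 1;
        }

        char buf[128];
        size_t n = fread(buf, 1, sizeof(buf), fp);

        /* Expected on Linux: n == 0, feof() == 0, ferror() != 0.
         * A read loop must therefore also bail out on ferror(),
         * not only on feof(). */
        printf("read %zu bytes, feof=%d, ferror=%d\n", n, feof(fp), ferror(fp));

        fclose(fp);
        return 0;
    }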
Stop replicating from the master if unsupported binlog events are seen.
Also report an error message for unsupported events
(blr_read_events_all_events) at MaxScale start-up and with the
maxbinlogcheck utility.