When a slave transitions from catchup mode to up-to-date mode, an error
message is logged because the slave is at an unexpected position. This
error message should not be logged, because the situation is both possible
and expected.
The check for rotate event conditions was wrong, which led to false error
messages about unexpected binlog file and position combinations.
The position of the last event was reset every time a file was opened, which
caused problems when the binlog file was rotated. The slave's current position
was compared to the position where the last event started, and because the
last_event_pos variable didn't point to the rotate event of the previous binlog,
the slaves never received the rotate event.
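As an illustration of the corrected bookkeeping, here is a minimal C sketch; all names are illustrative, not the actual MaxScale identifiers. The last event position is updated only when an event is written, never when a binlog file is merely opened, so after a rotation it still points at the rotate event at the end of the previous file.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only; these are not the actual MaxScale identifiers. */
typedef struct
{
    char     current_file[256];    /* binlog file currently being written */
    char     last_event_file[256]; /* file of the last written event      */
    uint32_t last_event_pos;       /* offset where that event starts      */
} router_t;

static void open_binlog(router_t *router, const char *file)
{
    /* Only the current file changes; the last event bookkeeping is
     * deliberately left untouched. */
    snprintf(router->current_file, sizeof(router->current_file), "%s", file);
}

static void event_written(router_t *router, uint32_t start_pos)
{
    snprintf(router->last_event_file, sizeof(router->last_event_file),
             "%s", router->current_file);
    router->last_event_pos = start_pos;
}

static bool slave_position_is_expected(const router_t *router,
                                       const char *slave_file,
                                       uint32_t slave_pos)
{
    return strcmp(router->last_event_file, slave_file) == 0 &&
           slave_pos == router->last_event_pos;
}
```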
The name of the binlog file was added to the log message that is written when
a slave is behind the master but the same binlog file is in use. This makes
debugging the problem a bit easier.
The decision to send an event to a slave can now only be made in one place.
This will force all events to pass the same checks before they are sent to
the slaves.
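A hedged sketch of what such a single decision point can look like; the types and checks below are simplified stand-ins, not the actual MaxScale code. Every send path calls the same gatekeeper, so every event passes identical checks.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch; simplified stand-ins for the actual MaxScale code. */
typedef struct
{
    uint32_t binlog_pos; /* position the slave has reached       */
    bool     trx_safe;   /* transaction safety enabled           */
} slave_t;

typedef struct
{
    uint32_t pos;        /* offset where this event starts       */
    bool     inside_trx; /* event belongs to an open transaction */
} event_t;

/* The only place where the decision to send an event is made. */
static bool slave_should_receive(const slave_t *slave, const event_t *event)
{
    if (event->pos != slave->binlog_pos)
    {
        return false; /* the slave is not at the expected position */
    }
    if (slave->trx_safe && event->inside_trx)
    {
        return false; /* hold the event until the transaction is complete */
    }
    return true;
}
```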
The position of the next event to be written was used as the position
of the current event. This caused the checks for the position of the current
safe event to fail, so the non-transaction-safe version was used.
This only happened with events that are not executed inside a transaction,
i.e. DDL statements.
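Since a binlog event header carries the event size and the offset of the next event, the start of the current event can be derived from them. A small self-contained example (the struct is illustrative, not the MaxScale definition):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative header struct. A binlog event header carries the event size
 * and the offset of the *next* event, so the offset where the current event
 * starts is next_pos - event_size. */
typedef struct
{
    uint32_t event_size; /* total size of this event in bytes   */
    uint32_t next_pos;   /* offset of the next event in the log */
} event_header_t;

static uint32_t current_event_pos(const event_header_t *hdr)
{
    return hdr->next_pos - hdr->event_size;
}

int main(void)
{
    event_header_t hdr = { .event_size = 100, .next_pos = 500 };
    assert(current_event_pos(&hdr) == 400); /* the event starts at offset 400 */
    return 0;
}
```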
The message is logged when a DDL statement is executed. It should not be
logged if trx_safe is on, since the current_safe_event should always point
at the event we are sending. The current_safe_event is set to the wrong value,
which causes this message to be logged.
Due to the false positives this caused, the message is removed.
The message now states the location where it was called from and the number
of events received from the master. In addition, new logging was added for
cases where unsafe events are sent to slaves while transaction safety is enabled.
The duplicate event error message now logs the length of the slave's
write queue. This will tell how much data is still buffered inside MaxScale
when duplicate events are detected.
If a duplicate event is detected the state of the slave is set
to BLRS_ERRORED and the connection is closed. That way the
duplicate event will not break the slave, and it will pick
up its state when it reconnects.
When an event is sent to a slave, we store information about the
event and who sent it, so that we can detect if the same event is
sent twice. If a duplicate event is detected, we log information
about it.
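A minimal sketch of this detection logic, combining the points above; the state constant, fields and function names are illustrative stand-ins for the actual MaxScale ones such as BLRS_ERRORED.

```c
#include <inttypes.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch; stand-ins for the actual MaxScale identifiers. */
enum slave_state { SLAVE_RUNNING, SLAVE_ERRORED };

typedef struct
{
    enum slave_state state;
    char             last_file[256];  /* binlog file of the last event sent */
    uint32_t         last_pos;        /* position of that event             */
    const char      *last_sender;     /* code location that sent it         */
    size_t           write_queue_len; /* bytes still buffered for the slave */
} slave_t;

static bool send_event(slave_t *slave, const char *file, uint32_t pos,
                       const char *sender)
{
    if (strcmp(slave->last_file, file) == 0 && slave->last_pos == pos)
    {
        fprintf(stderr,
                "Duplicate event %s:%" PRIu32 " (first sent by %s, again by %s), "
                "%zu bytes still in the write queue; closing the slave.\n",
                file, pos, slave->last_sender, sender, slave->write_queue_len);
        slave->state = SLAVE_ERRORED; /* the connection would be closed here */
        return false;
    }

    /* ... the actual network write would happen here ... */
    snprintf(slave->last_file, sizeof(slave->last_file), "%s", file);
    slave->last_pos = pos;
    slave->last_sender = sender;
    return true;
}
```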
The service permission checks did not check for SELECT privileges on
mysql.tables_priv, which caused confusing error messages. The database
grant errors also did not include the MySQL error message, which is often
very helpful when resolving permission problems.
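A sketch of such a check using the MySQL C API directly (MaxScale's own helper functions may differ); the important part is that mysql_error() is included in the log message on failure.

```c
#include <stdbool.h>
#include <stdio.h>
#include <mysql.h>

/* Sketch only; the query and messages are illustrative. */
static bool check_tables_priv_access(MYSQL *conn)
{
    if (mysql_query(conn, "SELECT User FROM mysql.tables_priv LIMIT 1") != 0)
    {
        fprintf(stderr,
                "Error: unable to SELECT from mysql.tables_priv: %s\n",
                mysql_error(conn));
        return false;
    }

    MYSQL_RES *result = mysql_store_result(conn);
    if (result != NULL)
    {
        mysql_free_result(result);
    }
    return true;
}
```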
The readwritesplit assumed that the execution of a session command would
always succeed. This is not the case when a write to the backend server
fails, and such a failure is something that can happen.
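A rough sketch of the corrected handling, with illustrative types rather than the actual readwritesplit structures: the result of each write is checked and a failed backend is dropped instead of being counted as having executed the session command.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative types only, not the actual readwritesplit structures. */
typedef struct
{
    bool in_use;
    bool (*write)(void *dcb, const char *command); /* false on failure */
    void *dcb;
} backend_t;

static size_t execute_session_command(backend_t *backends, size_t n,
                                      const char *command)
{
    size_t succeeded = 0;

    for (size_t i = 0; i < n; i++)
    {
        if (!backends[i].in_use)
        {
            continue;
        }

        if (backends[i].write(backends[i].dcb, command))
        {
            succeeded++;
        }
        else
        {
            backends[i].in_use = false; /* the write failed, drop the backend */
        }
    }

    return succeeded; /* the caller fails the command if this is zero */
}
```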
The client side authentication assumed that it was processing contiguous memory.
This caused the authentication to fail when packets were received in multiple
parts. Transforming the buffer chain into one contiguous buffer fixes this problem.
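A generic sketch of that transformation; MaxScale's real buffer type is GWBUF and the names below are illustrative. The chain of packets is copied into one contiguous allocation before it is parsed.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative buffer chain; MaxScale's actual type is GWBUF. */
typedef struct buf
{
    uint8_t    *data;
    size_t      len;
    struct buf *next;
} buf_t;

static uint8_t *flatten_chain(const buf_t *head, size_t *total_len)
{
    size_t total = 0;
    for (const buf_t *b = head; b != NULL; b = b->next)
    {
        total += b->len;
    }

    uint8_t *out = malloc(total);
    if (out == NULL)
    {
        return NULL;
    }

    size_t offset = 0;
    for (const buf_t *b = head; b != NULL; b = b->next)
    {
        memcpy(out + offset, b->data, b->len);
        offset += b->len;
    }

    *total_len = total;
    return out;
}
```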
The default version string is now `5.5.5-10.0.0 <MaxScale version>-maxscale`.
This will fix Java connector issues related to version string processing.
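If the default does not suit a particular client, the reported string can still be overridden per service. A minimal configuration sketch, assuming the `version_string` service parameter and illustrative object names:

```ini
[Read-Write-Service]
type=service
router=readwritesplit
servers=server1
user=maxuser
password=maxpwd
# Illustrative override of the version string reported to clients.
version_string=5.5.5-10.0.0-maxscale
```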
Added a small explanation and an excerpt from a configuration file to
the dbfwfilter documentation. It demonstrates the use of both blacklist
and whitelist functionality in the same service.
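A hedged sketch of the kind of configuration the excerpt describes, with illustrative object names and file paths, assuming the dbfwfilter `action` parameter selects whitelist (`allow`) or blacklist (`block`) behavior and that the two filter instances are chained in the service definition:

```ini
# Illustrative names and paths; adjust to the actual installation.
[Whitelist-Filter]
type=filter
module=dbfwfilter
action=allow
rules=/etc/maxscale/whitelist.rules

[Blacklist-Filter]
type=filter
module=dbfwfilter
action=block
rules=/etc/maxscale/blacklist.rules

[My-Service]
type=service
router=readwritesplit
servers=server1
user=maxuser
password=maxpwd
filters=Whitelist-Filter|Blacklist-Filter
```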
The `allow` keyword can be used as a substitute for the `deny` keyword, but this
was not documented. Also, the fact that neither of them affects the actual
behavior of the filter was not stated very clearly.
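For example, a rules file sketch (illustrative rule names) where the two keywords are used interchangeably; what the filter does with a matching query is decided by its configuration, not by this keyword:

```
# The keyword after the rule name is purely syntactic; these rules behave the same.
rule no_wildcard deny wildcard
rule no_wildcard_too allow wildcard

users %@% match any rules no_wildcard no_wildcard_too
```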
It is possible that messages logged immediately before exiting are not flushed
to disk. Flushing all logs before exiting from the main function guarantees
that any relevant messages reach the disk.