If the slave's response differs from the master's and the slave sent an
error packet, log the contents of the error. This should make it obvious
what caused the failure.
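As a rough illustration (the function name and the buffer handling below are assumptions rather than MaxScale's actual code), the error code and the human-readable message can be read straight out of the error packet before it is logged:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

void log_error_packet(const std::vector<uint8_t>& packet)
{
    // The payload starts after the 4-byte header. An error packet begins
    // with the 0xFF marker, a 2-byte error code and, in the 4.1+ protocol,
    // a '#' plus a 5-character SQL state before the message text.
    if (packet.size() > 7 && packet[4] == 0xFF)
    {
        uint16_t code = packet[5] | (packet[6] << 8);
        size_t offset = 7;

        if (packet[offset] == '#' && packet.size() >= offset + 6)
        {
            offset += 6;  // Skip the '#' and the SQL state
        }

        std::string message(packet.begin() + offset, packet.end());
        fprintf(stderr, "Slave returned error %hu: %s\n", code, message.c_str());
    }
}
```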
The assertion in routeQuery that expects there to be at least one ongoing
query would be triggered if a query was received after a master had failed
but before the session was closed. To keep the internal logic consistent,
the error handler should only decrement the expected response count if the
session can continue.
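A minimal sketch of the intended bookkeeping, with illustrative class and member names rather than MaxScale's real ones:

```cpp
#include <cassert>

class Session
{
public:
    void route_query()
    {
        // The routing code assumes the counter is never negative.
        assert(m_expected_responses >= 0);
        m_expected_responses++;
    }

    void handle_error(bool session_can_continue)
    {
        if (session_can_continue)
        {
            m_expected_responses--;
        }
        // If the session cannot continue, the counter is left untouched:
        // the session is about to be closed, and decrementing it here could
        // let a late query trip the assertion in route_query.
    }

private:
    int m_expected_responses = 0;
};
```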
MySQLAuth now logs the server where the users were loaded from. As only
the initial loading of users causes a log message, it is still possible
for the source server to change without any indication of it.
By enabling the debug messages only at startup, we get log messages for
any daemon startup failures while excluding the verbose parsing errors that
malformed requests cause.
The hard limit of 10 seconds is too strict considering that infinite
refreshes were possible before the bug was fixed. This also makes testing
a lot easier in cases where rapid reloads are necessary.
The first node without a priority would be chosen as the candidate master
and the rest would be ignored. The code must check whether neither of the
two nodes has a priority and, if so, choose the better one.
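A simplified sketch of the comparison; the struct, the field names and the tiebreaker are assumptions made for illustration, not the monitor's actual code:

```cpp
struct Server
{
    bool has_priority;
    int  priority;
    long slaves_behind_master;  // assumed tiebreaker for illustration
};

const Server* pick_candidate(const Server* current, const Server* candidate)
{
    if (current->has_priority && candidate->has_priority)
    {
        // Both have priorities: assume the lower value wins here.
        return candidate->priority < current->priority ? candidate : current;
    }
    else if (candidate->has_priority)
    {
        return candidate;
    }
    else if (current->has_priority)
    {
        return current;
    }

    // Neither node has a priority: compare the nodes themselves instead of
    // blindly keeping the first candidate that was seen.
    return candidate->slaves_behind_master < current->slaves_behind_master
        ? candidate : current;
}
```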
Certain MariaDB connectors use direct execution to batch COM_STMT_PREPARE
and COM_STMT_EXECUTE without waiting for the COM_STMT_PREPARE to complete.
In these cases the COM_STMT_EXECUTE (and other COM_STMT commands as well)
will use the special ID 0xffffffff. When this is detected, it should be
substituted with the ID of the latest prepared statement.
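A sketch of the substitution, assuming the statement ID is a little-endian 32-bit value located right after the 4-byte packet header and the command byte; the function and constant names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>

static const uint32_t DIRECT_EXECUTION_ID = 0xFFFFFFFF;
static const size_t   MYSQL_HEADER_LEN = 4;

void fix_statement_id(uint8_t* packet, size_t len, uint32_t latest_prepared_id)
{
    const size_t id_offset = MYSQL_HEADER_LEN + 1;  // header + command byte

    if (len >= id_offset + 4)
    {
        uint32_t id = packet[id_offset]
            | (packet[id_offset + 1] << 8)
            | (packet[id_offset + 2] << 16)
            | (static_cast<uint32_t>(packet[id_offset + 3]) << 24);

        if (id == DIRECT_EXECUTION_ID)
        {
            // Replace the placeholder with the ID of the most recently
            // prepared statement.
            packet[id_offset]     = latest_prepared_id & 0xFF;
            packet[id_offset + 1] = (latest_prepared_id >> 8) & 0xFF;
            packet[id_offset + 2] = (latest_prepared_id >> 16) & 0xFF;
            packet[id_offset + 3] = (latest_prepared_id >> 24) & 0xFF;
        }
    }
}
```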
The stacktrace now removes leading paths from object files and common
source code prefixes. It also skips the first five frames which are always
inside the stacktrace printing code and the signal handler.
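Roughly along these lines; the number of skipped frames comes from the description above, while the rest is an illustrative sketch built on backtrace() and backtrace_symbols() rather than the actual implementation:

```cpp
#include <execinfo.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

void print_stacktrace()
{
    const int SKIP_FRAMES = 5;  // trace printing code and signal handler
    void* frames[128];
    int count = backtrace(frames, 128);
    char** symbols = backtrace_symbols(frames, count);

    if (symbols)
    {
        for (int i = SKIP_FRAMES; i < count; i++)
        {
            // backtrace_symbols lines look like
            // "/path/to/object(function+0x42) [0xdeadbeef]": strip the
            // leading path by keeping only what follows the last '/' that
            // occurs before the opening parenthesis.
            const char* line = symbols[i];
            const char* start = line;
            const char* paren = strchr(line, '(');

            for (const char* p = strchr(line, '/');
                 p && (!paren || p < paren);
                 p = strchr(p + 1, '/'))
            {
                start = p + 1;
            }

            fprintf(stderr, "  %s\n", start);
        }

        free(symbols);
    }
}
```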
This fixes the test failures that stem from users being created right
after maxscale has started. This should also make startups a bit smoother
now that the default value of users_refresh_time has been fixed.
This could result in infinite mutual recursion if no responses are
expected. Although this no longer happens now that MXS-2587 is fixed, the
code should not be there in the first place.
All COM_STMT_SEND_LONG_DATA commands and the COM_STMT_EXECUTE that follows
them must be sent to the same server. This implicitly works for masters,
but with multiple slave servers the data could be sent to the wrong server.
By using the code added for MXS-2521, this problem can now be easily solved
by checking what the previous command was.
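A simplified sketch of the decision, with assumed types and names:

```cpp
#include <cstdint>

static const uint8_t COM_STMT_SEND_LONG_DATA = 0x18;

struct Backend;  // stands in for the connection to one server

struct RoutingState
{
    uint8_t  prev_command = 0;
    Backend* prev_target = nullptr;
};

Backend* choose_target(RoutingState& state, uint8_t current_command,
                       Backend* normal_choice)
{
    Backend* target = normal_choice;

    if (state.prev_command == COM_STMT_SEND_LONG_DATA && state.prev_target)
    {
        // Stick to the server that already received the long data so that
        // the accumulated parameter data and the COM_STMT_EXECUTE that
        // follows it end up on the same server.
        target = state.prev_target;
    }

    state.prev_command = current_command;
    state.prev_target = target;
    return target;
}
```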
If a transaction replay fails, no queries must be routed before the
connection is closed. This could happen if the client receives the error
from the replay failure and closes the connection before the fake hangup
generated by the failure is processed.
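A minimal sketch of such a guard, with an assumed state value rather than the router's real bookkeeping:

```cpp
enum class SessionState
{
    ACTIVE,
    TRX_REPLAY_FAILED
};

// In this sketch, returning false means the query is not routed and the
// session will be closed.
bool route_query(SessionState state)
{
    if (state == SessionState::TRX_REPLAY_FAILED)
    {
        // The client has already received the replay error; the fake hangup
        // that closes the session simply has not been processed yet, so any
        // query that still arrives is rejected instead of being routed.
        return false;
    }

    // ... normal routing ...
    return true;
}
```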
When fake hangup events are delivered via DCBs, the current DCB would not
be updated. This would cause error messages without a session ID, which
makes failure analysis harder.
Systemd provides facilities to run commands before startup, which can be
used to prevent the problem caused by the fix for MXS-2578: upon upgrading
from 2.3.8 to 2.3.9, the /var/lib/maxscale directory would be removed if
it was empty.
The test appears to fail when the throttling is unable to keep the QPS
high enough for the test to pass. To reduce the likelihood of this, lower
the limit to 500 QPS.
In theory, the minimum delay of one millisecond in the delayed_call limits
the filter to a maximum QPS of 1000, as each query would wait for at least
a millisecond before being routed. This is yet to be proven, but it would
explain why the tests are having a hard time approaching that level of
QPS.