This allows the same verbose information to be logged in the cases where
it is useful. Mostly, this information can be used to figure out why a
certain session was closed.
By doing the reconnection only when a new query arrives, we prevent the
excessive reconnecting that is done when a server's actual and monitored
states are in conflict.
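A minimal sketch of that lazy-reconnect idea, using hypothetical names
(Backend, connect, route_query) rather than the actual readwritesplit
code:

    #include <cassert>

    // Hypothetical backend handle; not the real RWBackend class.
    struct Backend
    {
        bool connected = false;
        bool connect() { connected = true; return connected; }
    };

    // Reconnect lazily: only when a query actually needs the backend.
    // This avoids reconnect storms while the monitored state and the
    // real server state disagree.
    bool route_query(Backend& b /*, const Query& q */)
    {
        if (!b.connected && !b.connect())
        {
            return false;   // Routing fails; no background retry loop.
        }
        // ... route the query to b ...
        return true;
    }

    int main()
    {
        Backend b;
        assert(route_query(b));
    }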
If a master failed during an ongoing session command history replay, it
would be treated as if a normal session command had failed, which would
result in the already executed session command being re-executed on all
servers at the wrong logical position.
To fix this, the history replay must be distinguished from normal session
command execution. When a connection replaying the history fails, the
query routing simply needs to be attempted again.
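A rough illustration of that distinction, assuming a hypothetical phase
flag in place of the real router state:

    #include <iostream>

    // Hypothetical state; the real router tracks this differently.
    enum class SescmdPhase { NORMAL, HISTORY_REPLAY };

    // On a connection error, the handling depends on the phase: a
    // failure during history replay only means the routing attempt is
    // repeated, while a normal session command failure goes through
    // the usual error path.
    void handle_sescmd_error(SescmdPhase phase)
    {
        if (phase == SescmdPhase::HISTORY_REPLAY)
        {
            std::cout << "history replay failed: retry query routing\n";
        }
        else
        {
            std::cout << "session command failed: run error path\n";
        }
    }

    int main()
    {
        handle_sescmd_error(SescmdPhase::HISTORY_REPLAY);
        handle_sescmd_error(SescmdPhase::NORMAL);
    }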
When a query returns a WSREP error, it is usually not something the
client application expects. To prevent this from affecting the
client, it can be treated the same way a transaction rollback is treated:
ignore the error and try again.
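As an illustration, a simplified check that classifies such errors by
message text; the real code inspects the actual error packet, so this is
only a sketch:

    #include <cassert>
    #include <string>

    // Hypothetical check: treat errors whose message mentions WSREP
    // the same way as a transaction rollback, i.e. hide the error from
    // the client and let the retry logic run the statement again.
    bool is_wsrep_error(const std::string& errmsg)
    {
        return errmsg.find("WSREP") != std::string::npos;
    }

    bool should_retry(const std::string& errmsg)
    {
        return is_wsrep_error(errmsg);
    }

    int main()
    {
        assert(should_retry("WSREP has not yet prepared node for "
                            "application use"));
        assert(!should_retry("Unknown column 'a' in 'field list'"));
    }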
The expected response counter was not decremented if a transaction replay
was started. This caused the connections to hang which in turn caused the
failure of the mxs1507_trx_stress test case.
The assertion in routeQuery that expects there to be at least one ongoing
query would be triggered if a query was received after a master had failed
but before the session had closed. To make sure the internal logic stays
consistent, the error handler should only decrement the expected response
count if the session can continue.
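A simplified sketch of that accounting rule, using hypothetical names
(Session, expected_responses, can_continue):

    #include <cassert>

    // Hypothetical bookkeeping: the router counts the responses it
    // still expects. The error handler only decrements the counter
    // when the session keeps running; if the session is going to
    // close, routeQuery is never called again and the counter no
    // longer matters.
    struct Session
    {
        int expected_responses = 0;
        bool can_continue = true;
    };

    void handle_error(Session& s)
    {
        if (s.can_continue && s.expected_responses > 0)
        {
            --s.expected_responses;   // The failed backend won't answer.
        }
        // Otherwise leave the counter alone and let the session close.
    }

    int main()
    {
        Session s;
        s.expected_responses = 1;
        handle_error(s);
        assert(s.expected_responses == 0);
    }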
This could result in infinite mutual recursion if no responses are
expected. Although this does not happen now that MXS-2587 is fixed, the
code should not even be there.
If a transaction replay fails, no queries must be routed before the
connection is closed. This could happen if the client receives the error
from the replay failure and closes the connection before the fake hangup
generated by the replay failure is processed.
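A minimal sketch of such a guard, with a hypothetical replay_failed flag
standing in for the real session state:

    #include <cassert>

    // Hypothetical guard: once a transaction replay has failed, the
    // session is only waiting to be closed and routeQuery must reject
    // anything the client still manages to send before the close is
    // processed.
    struct Session
    {
        bool replay_failed = false;
    };

    bool route_query(Session& s /*, const Query& q */)
    {
        if (s.replay_failed)
        {
            return false;   // Do not route; the session is closing.
        }
        // ... normal routing ...
        return true;
    }

    int main()
    {
        Session s;
        s.replay_failed = true;
        assert(!route_query(s));
    }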
If a server fails mid-resultset, there's not a lot we can do to recover
the situation. A few cases could be handled (e.g. generate an ERR if the
resultset has proceeded to the row processing stage) but these fall
outside the scope of the original issue.
If one slave is executing a query while another is executing a session
command and the one executing the session command fails, the ongoing
query would get retried even though the failed server was not executing
it. If the failed server was only executing a session command, nothing
needs to be done.
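A simplified sketch of that decision, assuming a hypothetical per-backend
task state:

    #include <cassert>

    // Hypothetical per-backend state: retry the ongoing query only if
    // the failed backend was the one executing it. A backend that was
    // merely running a session command does not trigger a retry.
    enum class BackendTask { IDLE, QUERY, SESSION_COMMAND };

    bool should_retry_query(BackendTask failed_backend_task)
    {
        return failed_backend_task == BackendTask::QUERY;
    }

    int main()
    {
        assert(should_retry_query(BackendTask::QUERY));
        assert(!should_retry_query(BackendTask::SESSION_COMMAND));
    }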
If a resultset is followed by an ERR packet that is not expected
(e.g. server is shutting down), the packet must not be sent to the
client. This allows readwritesplit to replace the failing connection with
a new one, thus hiding server shutdowns from clients.
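A rough sketch of that filtering, with a hypothetical Reply struct in
place of the real protocol tracking:

    #include <cassert>

    // Hypothetical reply state: if a complete resultset has already
    // been seen and the server then sends an ERR (e.g. it is shutting
    // down), the ERR is swallowed so that the connection can be
    // replaced without the client ever seeing the error.
    struct Reply
    {
        bool resultset_complete = false;
    };

    bool forward_to_client(const Reply& reply, bool packet_is_err)
    {
        if (reply.resultset_complete && packet_is_err)
        {
            return false;   // Drop it; replace the connection instead.
        }
        return true;
    }

    int main()
    {
        Reply r;
        r.resultset_complete = true;
        assert(!forward_to_client(r, true));
        assert(forward_to_client(r, false));
    }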
As an error returned by the server is now stored inside RWBackend,
irrespective of whether it arrives on its own or, for example, last after
a result set, there is no need to examine the GWBUF in readwritesplit;
the information that is already stored can be used instead.
If the execution of a session command fails on a master, it is retried.
If the master is not available, the response will be returned from one of
the slaves.
A read should only be retried on a slave when the failing server was
waiting for a result and it was the last server from which a result was
expected.
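As a sketch, that condition could look roughly like this, with
hypothetical names standing in for the real backend bookkeeping:

    #include <cassert>

    // Hypothetical condition: a read is retried only when the failed
    // backend was actually waiting for a result and it was the last
    // backend from which a result was still expected.
    struct Backend
    {
        bool waiting_for_result = false;
    };

    bool should_retry_read(const Backend& failed, int results_expected)
    {
        return failed.waiting_for_result && results_expected == 1;
    }

    int main()
    {
        Backend waiting;
        waiting.waiting_for_result = true;
        Backend idle;
        assert(should_retry_read(waiting, 1));
        assert(!should_retry_read(waiting, 2));
        assert(!should_retry_read(idle, 1));
    }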
If the master fails when a session command is being executed with
delayed_retry enabled, a null query would get placed into the query
queue. This change simply prevents the crash and closes the session even
though the query could be retried.
A query should not be queued if no responses are expected. In that case
the code that executes queued queries should be dead code, and this
assertion would catch it if it is not.
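A minimal sketch of the assertion, with a hypothetical query queue:

    #include <cassert>
    #include <deque>
    #include <string>
    #include <utility>

    // Hypothetical query queue: queries are queued only while a
    // response is still outstanding. If nothing is expected, the query
    // can be routed immediately, so queueing it would mean the
    // surrounding logic is wrong.
    struct Session
    {
        int expected_responses = 0;
        std::deque<std::string> query_queue;

        void queue_query(std::string q)
        {
            assert(expected_responses > 0);   // Catches the dead code.
            query_queue.push_back(std::move(q));
        }
    };

    int main()
    {
        Session s;
        s.expected_responses = 1;
        s.queue_query("SELECT 1");
    }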
If a client requests an unknown binary protocol prepared statement handle,
a custom error shows the actual ID used instead of the "empty" ID of 0
that the backend sends.
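A rough sketch of how such an error string could be built; the exact
wording and helper name are illustrative, not the actual implementation:

    #include <cstdint>
    #include <iostream>
    #include <sstream>
    #include <string>

    // Hypothetical error construction: when the client uses a binary
    // protocol prepared statement handle that is not known, report the
    // ID the client actually sent instead of the ID of 0 found in the
    // backend's own error.
    std::string unknown_ps_error(uint32_t client_ps_id)
    {
        std::ostringstream ss;
        ss << "Unknown prepared statement handle (" << client_ps_id
           << ") given to the router";
        return ss.str();
    }

    int main()
    {
        std::cout << unknown_ps_error(12345) << '\n';
    }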
The code that checked that only non-empty queries are stored in the query
queue was left out when the query queue fix was backported to 2.3. Since
MXS-2464 is caused by a still unknown bug, the runtime check should help
figure out in which cases the problem occurs.
Added a test that makes sure the transaction replay cap is respected. Also
improved the logging to show how many transaction replay attempts have
been done and to log if a replay is not done due to too many attempts.
In most cases it is reasonable to stop attempting transaction replays
after a certain number of failed attempts. This prevents transactions from
being repeatedly replayed on the same server over and over again if, for
example, it keeps crashing.
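A minimal sketch of such a cap, with an illustrative limit rather than
the real configuration value:

    #include <cassert>

    // Hypothetical cap check: stop replaying a transaction after a
    // fixed number of failed attempts so a crashing server cannot keep
    // the same transaction bouncing forever.
    constexpr int MAX_REPLAY_ATTEMPTS = 5;   // illustrative limit

    bool can_replay(int attempts_done)
    {
        return attempts_done < MAX_REPLAY_ATTEMPTS;
    }

    int main()
    {
        assert(can_replay(0));
        assert(!can_replay(MAX_REPLAY_ATTEMPTS));
    }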
If
- transaction replay is enabled,
- an error is returned and
- the error is one of the recoverable Clustrix errors
we will retry the transaction.
If the replay succeeds, the client will not notice anything except a
short delay.
Note that the error message is looked for irrespective of whether the
backend is Clustrix or not. However, as errors are not common, the cost
of doing so can probably be ignored.
However, a bigger problem is that explicit knowledge of different
backends should *not* be coded into routers.
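For illustration, that message-based classification could look roughly
like this; the patterns below are placeholders, not the real list of
recoverable Clustrix errors:

    #include <array>
    #include <cassert>
    #include <string>

    // Hypothetical classification: scan the error message text for
    // substrings that identify the recoverable errors.
    bool is_recoverable_clustrix_error(const std::string& errmsg)
    {
        static const std::array<const char*, 2> patterns =
        {
            "GROUP_CHANGE",   // placeholder pattern
            "MVCC",           // placeholder pattern
        };

        for (const char* p : patterns)
        {
            if (errmsg.find(p) != std::string::npos)
            {
                return true;
            }
        }
        return false;
    }

    int main()
    {
        assert(is_recoverable_clustrix_error("GROUP_CHANGE during commit"));
        assert(!is_recoverable_clustrix_error("Syntax error"));
    }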
If a transaction replay has to be executed twice due to a failure of the
original candidate master, the query queue could contain replayed
queries. The replayed queries would be placed into the queue if a new
connection needed to be created before the transaction replay could
start.
Backported the changes that convert the query queue in readwritesplit into
a proper queue. This change combines both
5e3198f8313b7bb33df386eb35986bfae1db94a3 and
6042a53cb31046b1100743723567906c5d8208e2 into one commit.