This correctly triggers the session command response processing to accept
results from servers other than the current master backend if the session
can continue. If the session cannot continue, it will be stopped
immediately.
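A minimal sketch of that decision, with stand-in types rather than the real router classes (Backend, Session and process_sescmd_response are illustrative names): a session command reply from a backend other than the current master is accepted only if the session can still continue; otherwise the session is stopped.

    #include <string>

    // Stand-ins for the real router types.
    struct Backend
    {
        std::string name;
        bool is_master = false;
    };

    struct Session
    {
        bool can_continue = false;   // e.g. enough usable targets remain
        bool open = true;

        void stop()
        {
            open = false;
        }
    };

    // A session command reply from a backend other than the current master is
    // accepted only if the session can still continue; otherwise the session
    // is stopped immediately.
    bool process_sescmd_response(Session& session, const Backend& replier)
    {
        if (!replier.is_master && !session.can_continue)
        {
            session.stop();
            return false;
        }

        return true;   // accept the result and keep routing
    }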
The code passed a null GWBUF to gwbuf_append, which caused a crash. The
return value of the function that used it was also not handled correctly and
could be mistaken for a different error.
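A sketch of the two fixes, using a simplified Buffer type instead of the real GWBUF and gwbuf_append API: guard against a null buffer before appending, and let the caller report the actual cause of the failure instead of mistaking it for a different error.

    #include <cstdint>
    #include <vector>

    // Simplified stand-in for the buffer type; ownership handling of the real
    // gwbuf chain is omitted here.
    struct Buffer
    {
        std::vector<uint8_t> data;
    };

    Buffer* buffer_append(Buffer* head, Buffer* tail)
    {
        if (head == nullptr || tail == nullptr)
        {
            return nullptr;   // a null head previously led to a crash
        }

        head->data.insert(head->data.end(), tail->data.begin(), tail->data.end());
        return head;
    }

    enum class RouteResult { Ok, NoBuffer };

    RouteResult build_and_route(Buffer* head, Buffer* tail)
    {
        if (buffer_append(head, tail) == nullptr)
        {
            return RouteResult::NoBuffer;   // report the real cause, not a generic routing error
        }

        // ... hand the combined buffer to the routing code ...
        return RouteResult::Ok;
    }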
This allows the set of servers used by the service to also participate in
the cache value resolution. This prevents the most obvious problems, but any
abstraction of the servers will prevent it from working.
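A hypothetical illustration of what folding the server set into the cache key could look like; the function and its inputs are assumptions for the sketch, not the actual cache filter code.

    #include <cstddef>
    #include <functional>
    #include <set>
    #include <string>

    // Hypothetical cache key that folds the service's server set into the hash,
    // so two services backed by different servers do not share cache entries.
    std::size_t cache_key(const std::string& canonical_sql,
                          const std::set<std::string>& servers)
    {
        std::size_t key = std::hash<std::string>{}(canonical_sql);

        for (const auto& server : servers)   // std::set keeps the iteration order stable
        {
            key ^= std::hash<std::string>{}(server) + 0x9e3779b9 + (key << 6) + (key >> 2);
        }

        return key;
    }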
Session commands did not trigger a reconnection process, which caused
sessions to be closed in cases where recovery was possible.
Added a test case that verifies the patch fixes the problem.
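A rough sketch of the recovery path, with stand-in types: when a backend is not in use while a session command is being routed, a reconnection is attempted first, and the session only fails if no backend could be used.

    #include <vector>

    // Stand-in for a backend connection; connect() represents the real reconnect.
    struct Backend
    {
        bool in_use = false;

        bool connect()
        {
            in_use = true;
            return true;
        }
    };

    // Previously a backend that was no longer in use meant the session was
    // closed; here a reconnection is attempted before giving up.
    bool route_session_command(std::vector<Backend>& backends)
    {
        bool routed = false;

        for (auto& backend : backends)
        {
            if (!backend.in_use && !backend.connect())
            {
                continue;
            }
            // ... write the session command to this backend ...
            routed = true;
        }

        return routed;   // fail the session only if no backend could be used
    }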
If the session command could not be routed, the log message should contain
the actual command that was being routed. This makes failure analysis much
easier.
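A small sketch of the improved message, assuming a hypothetical helper; the point is only that the text of the failed command ends up in the log.

    #include <cstdio>
    #include <string>

    // Hypothetical helper: the failed session command itself is part of the message.
    void report_sescmd_failure(const std::string& command, const std::string& target)
    {
        std::fprintf(stderr, "Could not route session command to '%s': %s\n",
                     target.c_str(), command.c_str());
    }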
If a limit on the replication lag is configured, servers with unmeasured
replication lag should not be used. The code in question used them even
when a limit was set, because the value used for undefined lag was -1,
which always compares as lower than the limit.
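A sketch of the corrected check, assuming lag is stored in seconds and -1 marks an unmeasured value as described above; the names are illustrative.

    // Illustrative constant and units (seconds); a negative max_lag means no limit.
    constexpr int LAG_UNDEFINED = -1;

    bool lag_is_acceptable(int server_lag, int max_lag)
    {
        if (max_lag < 0)
        {
            return true;                // no limit configured, any server qualifies
        }

        if (server_lag == LAG_UNDEFINED)
        {
            return false;               // unmeasured lag must not pass the limit check
        }

        return server_lag <= max_lag;   // the old code compared -1 here and always passed
    }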
The code relied on last_read for the idle time calculation, which meant that
pings that had been written did not reset the idle time. This increased the
chance of multiple COM_PING packets being sent to a backend before a reply
was received.
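A sketch of the corrected idle time calculation, with illustrative field names: measuring idleness from the most recent of the last read and the last write means a written COM_PING resets the timer even before its reply arrives.

    #include <algorithm>
    #include <chrono>

    using Clock = std::chrono::steady_clock;

    // Illustrative connection bookkeeping; the field names are assumptions.
    struct Connection
    {
        Clock::time_point last_read;
        Clock::time_point last_write;   // updated when e.g. a COM_PING is written
    };

    // Idle time is measured from the most recent activity in either direction:
    // a ping that has been written but not yet answered resets the timer, so a
    // second COM_PING is not sent while the first is still outstanding.
    Clock::duration idle_time(const Connection& c, Clock::time_point now)
    {
        return now - std::max(c.last_read, c.last_write);
    }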
The use of the server state is not transactional across multiple uses of
the function. This means that any assertions on the target state can fail
if the monitor updates the state between target selection and the
assertion.
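A sketch of working from a single snapshot of the state, with illustrative status bits: both the selection and any later checks use the same value, so a concurrent monitor update cannot make them disagree.

    #include <atomic>
    #include <cstdint>

    // Illustrative status bits; the real server status flags differ.
    constexpr uint64_t STATUS_RUNNING = 1 << 0;
    constexpr uint64_t STATUS_MASTER  = 1 << 1;

    struct Server
    {
        std::atomic<uint64_t> status{0};   // updated concurrently by the monitor
    };

    // Target selection based on a single snapshot. Re-reading server.status for
    // a later assertion is not safe: the monitor may have changed it in between,
    // so any such check should use the same snapshot.
    const Server* select_master(const Server& server)
    {
        uint64_t snapshot = server.status.load();

        if ((snapshot & STATUS_MASTER) && (snapshot & STATUS_RUNNING))
        {
            return &server;
        }

        return nullptr;
    }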
The slave backend would be closed twice if it both responded with a
different result and was closed due to a hangup before the master
responded.
Added a test case that reproduced the problem.
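A sketch of guarding against the double close, with a simplified stand-in for the Backend interface: closing is made idempotent and the reply handling only closes a slave that is still open.

    // Simplified stand-in for the Backend interface.
    class Backend
    {
    public:
        void close()
        {
            if (m_open)     // already-closed backends are not closed again
            {
                m_open = false;
                // ... release the actual connection ...
            }
        }

        bool is_open() const
        {
            return m_open;
        }

    private:
        bool m_open = true;
    };

    // Only close the slave if it is still open: it may already have been closed
    // by a hangup that arrived before the master's reply.
    void handle_mismatching_reply(Backend& slave)
    {
        if (slave.is_open())
        {
            slave.close();
        }
    }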
It appears that rollback errors are possible outside of
transactions. Since this was not something we expected to see, logging them
as errors allows us to see why this happens in production deployments.
The errors that are ignored by readwritesplit are now stored as the
current close reason in the Backend. This allows the information about the
error to be retained so that it can be used later in the error handler to
display the true reason why the connection was closed.
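A simplified sketch of retaining the ignored error as the close reason; the member and method names are illustrative, not the actual Backend API.

    #include <string>
    #include <utility>

    // Simplified stand-in for the Backend.
    class Backend
    {
    public:
        void set_close_reason(std::string reason)
        {
            m_close_reason = std::move(reason);
        }

        const std::string& close_reason() const
        {
            return m_close_reason;
        }

    private:
        std::string m_close_reason;
    };

    // When an error is ignored, the reason is recorded on the backend...
    void on_ignored_error(Backend& backend, const std::string& error)
    {
        backend.set_close_reason(error);
    }

    // ...so the error handler can later report why the connection was closed.
    std::string describe_close(const Backend& backend)
    {
        return backend.close_reason().empty()
               ? std::string("Connection closed")
               : "Connection closed: " + backend.close_reason();
    }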