The settings are different from the other fields in that they should not change
on their own. Most manipulation functions only require the settings.
It also introduces a class for host and port data.
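A minimal sketch of such a host-and-port value class; the name HostPort and its interface are illustrative, not the actual MaxScale type:

    #include <string>
    #include <utility>

    // Illustrative value type bundling host and port data.
    class HostPort
    {
    public:
        HostPort(std::string host, int port)
            : m_host(std::move(host))
            , m_port(port)
        {
        }

        const std::string& host() const { return m_host; }
        int                port() const { return m_port; }

        bool operator==(const HostPort& rhs) const
        {
            return m_host == rhs.m_host && m_port == rhs.m_port;
        }

    private:
        std::string m_host;
        int         m_port;
    };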
If the slave's response differs from the master's and the slave sent an
error packet, log the contents of the error. This should make it obvious
what caused the failure.
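The gist of pulling the human-readable message out of a MySQL ERR packet, as a hedged sketch rather than the actual helper used here:

    #include <cstdint>
    #include <string>
    #include <vector>

    // Layout of an ERR packet: 4-byte header, 0xFF marker, 2-byte error code,
    // optional '#' + 5-byte SQL state, then the human-readable message.
    // Assumes the buffer holds one complete packet.
    std::string error_packet_message(const std::vector<uint8_t>& packet)
    {
        const size_t header_len = 4;
        size_t offset = header_len + 1 + 2;

        if (packet.size() <= offset || packet[header_len] != 0xFF)
        {
            return "";      // not a complete ERR packet
        }

        if (packet[offset] == '#')  // SQL state marker (protocol 4.1)
        {
            offset += 6;
        }

        return offset < packet.size()
            ? std::string(packet.begin() + offset, packet.end())
            : std::string();
    }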
The assertion in routeQuery that expects there to be at least one ongoing
query would be triggered if a query was received after the master had failed
but before the session was closed. To keep the internal logic consistent,
the error handler should only decrement the expected response count if the
session can continue.
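A compressed illustration of that rule, with invented names rather than the actual MaxScale members:

    #include <cassert>

    class ErrorBookkeeping
    {
    public:
        void on_query_routed()
        {
            ++m_expected_responses;     // one more response is now expected
        }

        void handle_error(bool session_can_continue)
        {
            if (session_can_continue)
            {
                // The failed backend never answers, so one fewer response is
                // expected; routeQuery's assumptions stay valid for later queries.
                assert(m_expected_responses > 0);
                --m_expected_responses;
            }
            // Otherwise the session is closing: leave the counter untouched so
            // the internal state stays consistent until the close completes.
        }

    private:
        int m_expected_responses = 0;
    };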
MySQLAuth now logs the server where the users were loaded from. As only
the initial loading of users causes a log message, it is still possible
for the source server to change without any indication of it.
The first node without a priority would be chosen as the candidate master
and the rest would be ignored. The code must check whether neither of the
two nodes has a priority and, if so, choose the better one.
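Roughly the comparison that is needed; the Node type, the "no priority" convention and the tie-breaking criterion below are placeholders, not the monitor's real rules:

    // Everything here is illustrative.
    struct Node
    {
        int  priority;          // <= 0 means no priority configured
        long replication_lag;
    };

    inline bool has_priority(const Node& n)
    {
        return n.priority > 0;
    }

    // Compare the current candidate with a new node instead of keeping whichever
    // prioritiless node happened to be seen first.
    const Node& better_candidate(const Node& current, const Node& candidate)
    {
        if (has_priority(current) != has_priority(candidate))
        {
            // A node with an explicit priority wins over one without.
            return has_priority(current) ? current : candidate;
        }

        // Both or neither have priorities: fall back to some other criterion,
        // here simply the smaller replication lag.
        return candidate.replication_lag < current.replication_lag ? candidate : current;
    }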
The backend DCBs do not have to send hangups in the close protocol API
method. If the conditions for the hangup were true, the session was
already stopping, meaning that the client DCB was already closed.
This could result in infinite mutual recursion if no responses are
expected. Although this no longer happens now that MXS-2587 is fixed, the
code should not be there at all.
If a transaction replay fails, no queries must be routed before the
connection is closed. This could happen if the client receives the error
from the replay failure and closes the connection before the fake hangup
generated by the replay failure is processed.
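A stripped-down sketch of the guard, assuming a simple replay-state enum rather than the router's real state machine:

    enum class TrxReplayState
    {
        INACTIVE,
        REPLAYING,
        FAILED
    };

    class Router
    {
    public:
        void on_replay_failed()
        {
            // An error has been sent to the client and a fake hangup queued;
            // the session is on its way down.
            m_replay_state = TrxReplayState::FAILED;
        }

        bool route_query()
        {
            if (m_replay_state == TrxReplayState::FAILED)
            {
                // The client may react to the replay error before the fake hangup
                // is processed; nothing may be routed in that window.
                return false;
            }
            // ... normal routing ...
            return true;
        }

    private:
        TrxReplayState m_replay_state = TrxReplayState::INACTIVE;
    };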
If the code that may remove items from m_nodes_by_id (Clustrix nodes
keyed by id) succeeds, the vector of health check URLs must be updated
even if the code that may add items to m_nodes_by_id subsequently fails.
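The ordering can be illustrated with a toy model of the node map and URL list; all names here are invented:

    #include <map>
    #include <string>
    #include <vector>

    struct Node
    {
        std::string health_check_url;
    };

    using NodesById = std::map<int, Node>;

    std::vector<std::string> health_check_urls(const NodesById& nodes)
    {
        std::vector<std::string> urls;
        for (const auto& kv : nodes)
        {
            urls.push_back(kv.second.health_check_url);
        }
        return urls;
    }

    // removed: whether the step that may remove items changed the map.
    // added:   whether the step that may add items succeeded and changed the map.
    void refresh(const NodesById& nodes, bool removed, bool added,
                 std::vector<std::string>& urls)
    {
        // Rebuild the URLs if *either* step changed the map; rebuilding only on
        // a successful addition would leave stale URLs behind whenever the
        // addition fails after a successful removal.
        if (removed || added)
        {
            urls = health_check_urls(nodes);
        }
    }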
The error was generated only for COM_STMT_EXECUTE commands even though all
PS commands should trigger it. In addition, two errors would be sent for
large packets when their trailing end arrived.
If an error is generated while a COM_CHANGE_USER is in progress, it would
always use sequence number 1. To handle this case properly and send the
correct sequence number, the COM_CHANGE_USER progress needs to be tracked
at the session level.
The information needs to be shared between the backend and client
protocols as the final OK to the COM_CHANGE_USER, with the sequence number
3, is the one that the backend server returns. Only after this response
has been received and routed to the client can the COM_CHANGE_USER
processing stop.
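A rough sketch of such session-level tracking; the field names and the exact sequence bookkeeping are assumptions, not the real protocol code:

    #include <cstdint>

    // State shared by the client and backend protocol handlers.
    struct ChangeUserTracking
    {
        bool changing_user = false;     // a COM_CHANGE_USER is in flight
    };

    // Client protocol: start tracking when the command is seen.
    void on_client_command(ChangeUserTracking& s, uint8_t command)
    {
        if (command == 0x11)            // COM_CHANGE_USER
        {
            s.changing_user = true;
        }
    }

    // Any MaxScale-generated error must use a sequence number that fits the
    // ongoing exchange instead of a hard-coded 1 (the value used here during
    // a COM_CHANGE_USER is an assumption).
    uint8_t error_sequence_number(const ChangeUserTracking& s)
    {
        return s.changing_user ? 3 : 1;
    }

    // Backend protocol: the OK with sequence number 3 is the server's final
    // answer to the COM_CHANGE_USER; after it has been routed to the client
    // the tracking can stop.
    void on_backend_ok_routed(ChangeUserTracking& s, uint8_t seqno)
    {
        if (s.changing_user && seqno == 3)
        {
            s.changing_user = false;
        }
    }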
If a server fails mid-resultset, there's not a lot we can do to recover
the situation. A few cases could be handled (e.g. generate an ERR if the
resultset has proceeded to the row processing stage) but these fall
outside the scope of the original issue.
As is explained in MDEV-19893:
    Client reads from socket, gets the packet from 1. with seqno=0, which
    it does not expect, since seqno is supposed to be incremented. Client
    complains, throws tantrums and exceptions.
To cater for clients that do not expect out-of-bound messages
(i.e. server-initiated packets with seqno 0), all messages generated by
MaxScale should use at least sequence number 1.
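For illustration, building a MaxScale-generated ERR packet with sequence number 1 might look roughly like this; it is not the actual helper, and the SQL state is assumed to be exactly five characters:

    #include <cstdint>
    #include <string>
    #include <vector>

    std::vector<uint8_t> make_error_packet(uint16_t code, const std::string& sqlstate,
                                           const std::string& message)
    {
        const uint8_t seqno = 1;    // never 0 for packets MaxScale itself generates
        // Payload: 0xFF marker, 2-byte error code, '#', 5-byte SQL state, message.
        uint32_t payload_len = static_cast<uint32_t>(1 + 2 + 1 + 5 + message.size());

        std::vector<uint8_t> packet;
        packet.reserve(4 + payload_len);
        packet.push_back(payload_len & 0xFF);           // 3-byte length, little-endian
        packet.push_back((payload_len >> 8) & 0xFF);
        packet.push_back((payload_len >> 16) & 0xFF);
        packet.push_back(seqno);
        packet.push_back(0xFF);
        packet.push_back(code & 0xFF);                  // 2-byte error code, little-endian
        packet.push_back((code >> 8) & 0xFF);
        packet.push_back('#');
        packet.insert(packet.end(), sqlstate.begin(), sqlstate.begin() + 5);
        packet.insert(packet.end(), message.begin(), message.end());
        return packet;
    }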
Deep-copying prevents subsequent modifications done by the caller from
affecting the data that can potentially be stored in the write queue of
the backend's DCB.
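A toy illustration of the difference; the real code deals with MaxScale's reference-counted GWBUF buffers, where a plain copy is typically shallow:

    #include <cstdint>
    #include <memory>
    #include <vector>

    using Buffer = std::vector<uint8_t>;

    struct Dcb
    {
        // Shallow: the queue and the caller share the same bytes, so a later
        // modification by the caller changes what eventually gets written.
        void write_shallow(std::shared_ptr<Buffer> buf)
        {
            m_write_queue.push_back(std::move(buf));
        }

        // Deep: the queue owns an independent copy the caller cannot touch.
        void write_deep(const Buffer& buf)
        {
            m_write_queue.push_back(std::make_shared<Buffer>(buf));
        }

        std::vector<std::shared_ptr<Buffer>> m_write_queue;
    };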
CentOS 6 uses a very old version of SQLite without support for URI filenames.
The PAM authenticator must therefore use a file-based database.
Commit cherry-picked to 2.4.0 from 2.3.
If a COM_STMT_EXECUTE has no metadata in it and it has more than one
parameter, it must be routed to the same backend where the previous
COM_STMT_EXECUTE with the same ID was routed. This prevents MDEV-19811,
which is triggered by MaxScale routing the queries to different backends.
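A sketch of the bookkeeping this requires, with invented types: remember the target of the previous COM_STMT_EXECUTE per statement id and reuse it when the metadata is missing:

    #include <cstdint>
    #include <map>
    #include <string>

    struct Backend
    {
        std::string name;
    };

    class PsTracker
    {
    public:
        // Called for every COM_STMT_EXECUTE that carries parameter metadata.
        void executed(uint32_t stmt_id, Backend* target)
        {
            m_targets[stmt_id] = target;
        }

        // Called for a COM_STMT_EXECUTE without metadata and more than one
        // parameter: it must go where the previous execution went, otherwise
        // the chosen backend has never seen the metadata (MDEV-19811).
        Backend* previous_target(uint32_t stmt_id) const
        {
            auto it = m_targets.find(stmt_id);
            return it != m_targets.end() ? it->second : nullptr;
        }

    private:
        std::map<uint32_t, Backend*> m_targets;
    };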