The error was only generated for COM_STMT_EXECUTE commands even though all
PS commands should trigger it. In addition, two errors would be sent for
large packets when the trailing end of the packet arrived.
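As a rough illustration (the helper and constant names are hypothetical,
not MaxScale's actual code), a single predicate covering every
binary-protocol PS command could look like this:

    #include <cstdint>

    // Sketch: command byte values from the MySQL binary protocol.  The
    // check covers every prepared-statement command, not only
    // COM_STMT_EXECUTE.
    constexpr uint8_t COM_STMT_PREPARE        = 0x16;
    constexpr uint8_t COM_STMT_EXECUTE        = 0x17;
    constexpr uint8_t COM_STMT_SEND_LONG_DATA = 0x18;
    constexpr uint8_t COM_STMT_CLOSE          = 0x19;
    constexpr uint8_t COM_STMT_RESET          = 0x1a;
    constexpr uint8_t COM_STMT_FETCH          = 0x1c;

    bool is_ps_command(uint8_t cmd)
    {
        switch (cmd)
        {
        case COM_STMT_PREPARE:
        case COM_STMT_EXECUTE:
        case COM_STMT_SEND_LONG_DATA:
        case COM_STMT_CLOSE:
        case COM_STMT_RESET:
        case COM_STMT_FETCH:
            return true;
        default:
            return false;
        }
    }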
If an error is generated while a COM_CHANGE_USER is in progress, it would
always use sequence number 1. To properly handle this case and send
the correct sequence number, the COM_CHANGE_USER progress needs to be
tracked at the session level.
The information needs to be shared between the backend and client
protocols, as the final OK to the COM_CHANGE_USER, with sequence number
3, is the one that the backend server returns. Only after this response
has been received and routed to the client can the COM_CHANGE_USER
processing stop.
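A minimal sketch of the session-level tracking (type and member names are
hypothetical, not the actual MaxScale code):

    #include <cstdint>

    // Sketch: the session remembers that a COM_CHANGE_USER is in flight so
    // that errors generated meanwhile use the right sequence number
    // instead of a hard-coded 1.
    struct SessionState
    {
        bool    changing_user = false;
        uint8_t reply_seqno   = 1;
    };

    void on_change_user_forwarded(SessionState& s)
    {
        s.changing_user = true;
        s.reply_seqno = 3;      // the backend's final OK arrives with seqno 3
    }

    void on_change_user_ok_routed(SessionState& s)
    {
        // Only once the backend's OK has been routed to the client does
        // the COM_CHANGE_USER processing stop.
        s.changing_user = false;
        s.reply_seqno = 1;
    }

    uint8_t generated_error_seqno(const SessionState& s)
    {
        return s.changing_user ? s.reply_seqno : 1;
    }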
If a server fails mid-resultset, there's not a lot we can do to recover
the situation. A few cases could be handled (e.g. generate an ERR if the
resultset has proceeded to the row processing stage) but these fall
outside the scope of the original issue.
As is explained in MDEV-19893:
Client reads from socket, gets the packet from 1. with seqno=0, which
it does not expect, since seqno is supposed to be incremented. Client
complains, throws tantrums and exceptions.
To cater for clients that do not expect out-of-band messages
(i.e. server-initiated packets with seqno 0), all messages generated by
MaxScale should use at least sequence number 1.
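A sketch of the rule when MaxScale itself builds an ERR packet (the
function name is hypothetical; the packet layout follows the MySQL
protocol):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <string>
    #include <vector>

    // Sketch: build a MySQL ERR packet whose sequence number is never 0,
    // so the client never sees an unexpected server-initiated packet.
    std::vector<uint8_t> make_err_packet(uint8_t seqno, uint16_t errnum,
                                         const char* sqlstate,  // 5 chars
                                         const std::string& msg)
    {
        if (seqno == 0)
        {
            seqno = 1;  // MaxScale-generated packets use at least seqno 1
        }

        // Payload: 0xFF header, 2-byte error code, '#' + SQLSTATE, message.
        size_t payload_len = 1 + 2 + 1 + 5 + msg.size();
        std::vector<uint8_t> pkt(4 + payload_len);
        pkt[0] = payload_len & 0xff;
        pkt[1] = (payload_len >> 8) & 0xff;
        pkt[2] = (payload_len >> 16) & 0xff;
        pkt[3] = seqno;
        pkt[4] = 0xff;
        pkt[5] = errnum & 0xff;
        pkt[6] = (errnum >> 8) & 0xff;
        pkt[7] = '#';
        std::memcpy(pkt.data() + 8, sqlstate, 5);
        std::memcpy(pkt.data() + 13, msg.data(), msg.size());
        return pkt;
    }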
Deep-copying prevents subsequent modifications done by the caller from
affecting the data that may end up stored in the write queue of the
backend's DCB.
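For illustration (the types are simplified stand-ins, not the real DCB
write queue), deep-copying at enqueue time looks roughly like this:

    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <vector>

    // Sketch: the write queue owns an independent copy of the data, so
    // later modifications made by the caller cannot affect queued packets.
    struct WriteQueue
    {
        std::deque<std::vector<uint8_t>> pending;

        void enqueue(const uint8_t* data, size_t len)
        {
            pending.emplace_back(data, data + len);    // deep copy
        }
    };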
CentOS 6 uses a very old version of SQLite without support for URI
filenames. The PAM authenticator must therefore use a file-based database.
Commit cherry-picked to 2.4.0 from 2.3.
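For illustration only (the database path and function are hypothetical,
not the authenticator's real code): URI filenames such as a shared
in-memory database require SQLite 3.7.7 or newer, so on an old SQLite the
database has to be opened through a plain file path instead.

    #include <sqlite3.h>

    sqlite3* open_user_db()
    {
        sqlite3* db = nullptr;
    #if SQLITE_VERSION_NUMBER >= 3007007
        // Modern SQLite: a shared in-memory database via a URI filename.
        sqlite3_open_v2("file::memory:?cache=shared", &db,
                        SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE
                        | SQLITE_OPEN_URI, nullptr);
    #else
        // Old SQLite (e.g. CentOS 6): no URI support, use a file on disk.
        sqlite3_open_v2("/tmp/pam_users.db", &db,
                        SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, nullptr);
    #endif
        return db;
    }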
If a COM_STMT_EXECUTE has no metadata in it and it has more than one
parameter, it must be routed to the same backend to which the previous
COM_STMT_EXECUTE with the same ID was routed. This prevents MDEV-19811,
which is triggered by MaxScale routing the queries to different backends.
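A sketch of the bookkeeping this requires (the names are hypothetical, not
the router's actual code):

    #include <cstdint>
    #include <unordered_map>

    struct Backend;     // stand-in for the real backend handle

    // Sketch: remember which backend each statement ID was last executed
    // on, so a COM_STMT_EXECUTE without parameter metadata can be sent
    // there again.
    class ExecTargets
    {
    public:
        void remember(uint32_t stmt_id, Backend* target)
        {
            m_targets[stmt_id] = target;
        }

        Backend* previous_target(uint32_t stmt_id) const
        {
            auto it = m_targets.find(stmt_id);
            return it != m_targets.end() ? it->second : nullptr;
        }

    private:
        std::unordered_map<uint32_t, Backend*> m_targets;
    };

The router would consult previous_target() before picking a backend
whenever the execute packet carries more than one parameter but no
metadata.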
mxs::Buffer::iterator is not a random-access iterator, so advancing it has
linear cost. This is too costly to do for every packet, even with
moderately sized resultsets.
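The cost can be illustrated with a plain forward-iterable chain of chunks
(a simplified stand-in for mxs::Buffer): reaching an offset means walking
every chunk before it.

    #include <cstddef>
    #include <cstdint>
    #include <list>
    #include <vector>

    using Chunk = std::vector<uint8_t>;
    using ChunkChain = std::list<Chunk>;

    // Sketch: random access over a chained buffer degenerates into a
    // linear walk, which is what makes per-packet offset lookups expensive.
    uint8_t byte_at(const ChunkChain& chain, size_t offset)
    {
        for (const Chunk& c : chain)
        {
            if (offset < c.size())
            {
                return c[offset];
            }
            offset -= c.size();
        }
        return 0;   // out of range, simplified for the sketch
    }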
By always restoring the ID, we are guaranteed to only store the query in
the form in which it was originally sent. This should be changed so that
the ID that the client sends can be used as-is on the backends.
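As a rough sketch of what restoring the ID means in the binary protocol
(the function is hypothetical; the offsets follow the MySQL packet
layout), the 4-byte statement ID sits right after the 4-byte packet header
and the command byte:

    #include <cstdint>

    // Sketch: overwrite the statement ID in a COM_STMT_* packet.  Before
    // the query is stored, the client's original ID is written back;
    // before it is sent to a backend, the backend-specific ID would be
    // written instead.
    void set_stmt_id(uint8_t* packet, uint32_t id)
    {
        packet[5] = id & 0xff;
        packet[6] = (id >> 8) & 0xff;
        packet[7] = (id >> 16) & 0xff;
        packet[8] = (id >> 24) & 0xff;
    }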
The connection counts are now always used to pick the best servers where
the initial connections are created. This covers both master and slave
selection. Reconnections done while routing queries still pick the "best"
server according to the slave selection criteria. This allows better
servers to be taken into use when `lazy_connect` is enabled.
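A minimal sketch of the selection, assuming a simple connection counter
per candidate (the types are illustrative, not the router's real ones):

    #include <algorithm>
    #include <vector>

    struct Candidate
    {
        int connections;    // current number of connections to the server
        // ... other server details omitted
    };

    // Sketch: pick the candidate with the fewest connections when the
    // initial connections are created.
    Candidate* best_by_connections(std::vector<Candidate*>& candidates)
    {
        auto it = std::min_element(
            candidates.begin(), candidates.end(),
            [](const Candidate* a, const Candidate* b) {
                return a->connections < b->connections;
            });
        return it != candidates.end() ? *it : nullptr;
    }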
1. Remove persistence of performance data
2. Move global CanonicalPerformance into SmartRouter object
3. Implement a new kill_all_others_v2. kill_all_others_v1 is left in
   place in case it should be fixed and used instead.
Does the measurements, the usage of the same and the persistence to file.
Kill is not implemented, so it waits for all clusters to fully respond.
The performance data is protected by a mutex and the persistence file is
written while the mutex is held. This obviously needs to be improved, but
this commit shows the working concept.
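A sketch of the intended improvement (the data layout and methods are
hypothetical; only the CanonicalPerformance name comes from the change
itself): snapshot the data under the mutex and do the slow file write
after releasing it.

    #include <fstream>
    #include <mutex>
    #include <string>
    #include <unordered_map>

    class CanonicalPerformance
    {
    public:
        void persist(const std::string& path) const
        {
            std::unordered_map<std::string, double> snapshot;
            {
                std::lock_guard<std::mutex> guard(m_lock);
                snapshot = m_data;      // copy while holding the mutex
            }
            // The potentially slow file write happens without the lock held.
            std::ofstream out(path);
            for (const auto& entry : snapshot)
            {
                out << entry.first << ' ' << entry.second << '\n';
            }
        }

    private:
        mutable std::mutex m_lock;
        std::unordered_map<std::string, double> m_data;
    };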