The configuration of the monitor instance is now performed in a
separate function, in preparation for moving the start function
to maxscale::MonitorInstance.
- All monitors (except MariaDBMon, for the time being) inherit
from that class.
- All common member variables have been moved to that class, but are
still manipulated in the derived classes.
In subsequent commits common functionality will be moved to
that class.
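A rough sketch of the intended class layout is shown below; apart from maxscale::MonitorInstance itself, the member names and the configure() hook are illustrative assumptions, not the exact interface:

    // Illustrative sketch; stand-in forward declarations replace the real headers.
    struct MXS_MONITOR;
    struct MXS_MONITORED_SERVER;
    struct MXS_CONFIG_PARAMETER;

    namespace maxscale
    {

    class MonitorInstance
    {
    public:
        virtual ~MonitorInstance() = default;

    protected:
        // Common state shared by all monitors; still updated by the derived classes.
        MXS_MONITOR*          m_monitor = nullptr;   // generic monitor handle
        MXS_MONITORED_SERVER* m_master = nullptr;    // current master, if any
        bool                  m_shutdown = false;    // set when the monitor should stop

        // Configuration was split out of start() so that start() itself can
        // later be moved into this class unchanged.
        virtual bool configure(const MXS_CONFIG_PARAMETER* params) = 0;
    };

    // Derived monitors inherit the common members and only implement the hooks.
    class GaleraMon : public MonitorInstance
    {
    protected:
        bool configure(const MXS_CONFIG_PARAMETER* params) override
        {
            (void)params;   // monitor-specific parameters would be read here
            return true;
        }
    };

    }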
To work around the limitation in session command handling with
multi-part results, all session commands are now treated as gathered
results. This allows session commands that return result sets to be used
with MaxScale.
This change should not cause problems with practical workloads, as they
usually do not return massive result sets for session commands.
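As a rough illustration of what treating a session command as a gathered result means, the sketch below tracks the backend response until it is complete before handing it onwards. It assumes the classic EOF-terminated text protocol (no CLIENT_DEPRECATE_EOF) and is not MaxScale's actual reply-tracking code:

    #include <cstdint>
    #include <vector>

    // Reply-gathering states for one session command.
    enum class ReplyState { START, COLUMN_DEFS, ROWS, DONE };

    static bool is_eof(const std::vector<uint8_t>& payload)
    {
        return !payload.empty() && payload[0] == 0xFE && payload.size() < 9;
    }

    // Feed one complete packet payload from the backend; returns true once the
    // whole response (plain OK/ERR or a full result set) has been gathered.
    bool gather_session_command_reply(ReplyState& state, const std::vector<uint8_t>& payload)
    {
        const uint8_t first = payload.empty() ? 0xFF : payload[0];

        switch (state)
        {
        case ReplyState::START:
            if (first == 0x00 || first == 0xFF)    // OK or ERR: nothing more follows
            {
                state = ReplyState::DONE;
            }
            else                                   // column count: a result set follows
            {
                state = ReplyState::COLUMN_DEFS;
            }
            break;

        case ReplyState::COLUMN_DEFS:
            if (is_eof(payload))                   // end of the column definitions
            {
                state = ReplyState::ROWS;
            }
            break;

        case ReplyState::ROWS:
            if (is_eof(payload) || first == 0xFF)  // end of the rows, or an error
            {
                state = ReplyState::DONE;
            }
            break;

        case ReplyState::DONE:
            break;
        }

        return state == ReplyState::DONE;
    }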
The optimal way to handle the multi-part responses would be to integrate
their handling into the result completion tracking process. This would
allow the prepared statement IDs to be extracted while the command is
being processed.
By relying on the server to tell us that it is requesting the loading of a
local infile, we can remove one state from the state machine that governs
the loading of local files. It also removes the need to handle error and
success cases separately.
A side-effect of this change is that execution of multi-statement LOAD
DATA LOCAL INFILE no longer hangs. This is done by checking whether the
completion of one command initiates a new load.
The current code recursively checks the reply state and clones the
buffers. Neither of these is required or desirable, but refactoring the
code will be done in a separate commit.
Added two helper functions that are used to detect requests for local
infiles and to extract the total packet length from a non-contiguous
GWBUF.
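A hedged sketch of what the two helpers do follows; the chunk structure stands in for a GWBUF chain and is not the real API:

    #include <cstddef>
    #include <cstdint>

    // The server requests a local file by replying with a packet whose payload
    // starts with the byte 0xFB, followed by the file name.
    bool is_local_infile_request(const uint8_t* payload, size_t len)
    {
        return len > 0 && payload[0] == 0xFB;
    }

    // Stand-in for a non-contiguous GWBUF: a linked list of chunks.
    struct Chunk
    {
        const uint8_t* data;
        size_t         len;
        const Chunk*   next;
    };

    // Total packet length = 3-byte little-endian payload length + 4-byte header.
    // The three length bytes may be split across chunks, so each byte is located
    // separately. Returns 0 if the header is not yet fully buffered.
    size_t packet_total_length(const Chunk* head)
    {
        uint32_t payload_len = 0;

        for (int i = 0; i < 3; i++)
        {
            size_t offset = i;
            const Chunk* c = head;

            while (c && offset >= c->len)
            {
                offset -= c->len;
                c = c->next;
            }

            if (!c)
            {
                return 0;
            }

            payload_len |= static_cast<uint32_t>(c->data[offset]) << (8 * i);
        }

        return payload_len + 4;
    }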
The logic was odd, as the permission checking function treats a disconnected
server as acceptable. The check is now done when the main loop starts, and
missing grants cause errors to be logged but do not stop the monitor.
The default database was not extracted correctly because the length of the
user name did not include the null terminator. In addition, the comparison
of the database name length used the less-than operator instead of the
correct greater-than operator.
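A simplified sketch of the corrected extraction, assuming a layout of <user>\0<database> at the point of parsing; the names and layout are illustrative, not the exact authenticator code:

    #include <cstddef>
    #include <cstring>

    // Copies the default database that follows the null-terminated user name.
    bool extract_default_db(const char* ptr, const char* end, char* db_out, size_t db_out_size)
    {
        // The fix: the skipped length must include the user name's null terminator.
        ptr += strlen(ptr) + 1;

        if (ptr >= end)
        {
            return false;   // no database present
        }

        size_t db_len = 0;
        while (ptr + db_len < end && ptr[db_len] != '\0')
        {
            db_len++;
        }

        // The fix: the database is invalid only when it is *longer* than the
        // buffer allows; the original check used '<' and rejected valid names.
        if (db_len > db_out_size - 1)
        {
            return false;
        }

        memcpy(db_out, ptr, db_len);
        db_out[db_len] = '\0';
        return true;
    }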
When the client reauthenticates via COM_CHANGE_USER, the new SHA1 needs to
be stored, as the backend connections rely on it being up to date.
This commit fixes the regression in the mxs548_short_session_change_user
test.
If the reauthentication of a client performing a COM_CHANGE_USER fails,
the users need to be reloaded. Without the reload, the reauthentication
will fail if new users were added after the users were last loaded.
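A rough sketch of the resulting behavior; the types and helper names here are assumptions for illustration, not MaxScale's actual functions:

    struct Service;                    // stand-in for the service owning the user cache
    struct Session { Service* service = nullptr; };
    struct ChangeUserRequest { /* user, auth token, default database */ };

    bool authenticate(Session* session, const ChangeUserRequest& req);   // assumed helper
    void refresh_users(Service* service);                                // assumed helper

    // On a failed COM_CHANGE_USER, reload the user accounts and retry once, so
    // that users added after the last load can still authenticate.
    bool reauthenticate(Session* session, const ChangeUserRequest& req)
    {
        if (authenticate(session, req))
        {
            return true;
        }

        refresh_users(session->service);
        return authenticate(session, req);
    }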
The parameter extraction caused a recursive lock of the server
spinlock. To work around this, an unlocked version of server_get_parameter
is needed.
Ideally, a lock-free setup would be used, but as this is a bug fix, that
will have to be done later on.
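A minimal sketch of the locked/unlocked split; a std::mutex stands in for the server spinlock and the signatures are illustrative rather than the exact ones in MaxScale:

    #include <cstring>
    #include <mutex>

    struct Param { const char* name; const char* value; Param* next; };
    struct Server { std::mutex lock; Param* parameters; };

    // Variant for callers that already hold server->lock, such as the parameter
    // extraction code; calling the locking variant there would lock recursively.
    const char* server_get_parameter_nolock(const Server* server, const char* name)
    {
        for (const Param* p = server->parameters; p; p = p->next)
        {
            if (strcmp(p->name, name) == 0)
            {
                return p->value;
            }
        }
        return nullptr;
    }

    // Public variant: takes the lock and delegates to the unlocked version.
    const char* server_get_parameter(Server* server, const char* name)
    {
        std::lock_guard<std::mutex> guard(server->lock);
        return server_get_parameter_nolock(server, name);
    }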
The re-authentication done in MaxScale caused multiple error packets to be
sent for the same COM_CHANGE_USER. In addition, a failed authentication
did not terminate the client connection.
The change in behavior requires the test case to be changed as well.
Since monitors are now freed at MaxScale exit, the server data should be freed. Also,
gtid domain variables are now initialized with a common constant.
Multi-statement SELECTs were properly detected and handled, but
multi-statement UPDATEs, for example, were not, with the result
that erroneous warnings were logged.
Now the responses are detected and handled properly.
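As an illustration of how such responses can be recognized, the sketch below checks the SERVER_MORE_RESULTS_EXIST flag in an OK packet; it simplifies the length-encoded integers to single bytes and is not the router's actual parsing code:

    #include <cstddef>
    #include <cstdint>

    constexpr uint16_t SERVER_MORE_RESULTS_EXIST = 0x0008;

    // OK packet layout (simplified): 0x00 header, affected rows (lenenc),
    // last insert id (lenenc), status flags (2 bytes), warning count (2 bytes).
    // This sketch assumes both length-encoded integers fit in a single byte.
    bool more_results_follow(const uint8_t* ok_payload, size_t len)
    {
        if (len < 7 || ok_payload[0] != 0x00)
        {
            return false;
        }

        uint16_t status = ok_payload[3] | (ok_payload[4] << 8);
        return (status & SERVER_MORE_RESULTS_EXIST) != 0;
    }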
If a connection is killed but the backend DCBs have not yet received their
thread IDs, the connections can be forcibly closed. This removes the
possibility of stale connections caused by an unfortunately timed KILL
query to a session that has partially connected to some servers.
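A hedged sketch of the idea with illustrative names; it is not the actual KILL-handling code:

    #include <cstdint>
    #include <string>
    #include <vector>

    struct BackendConn
    {
        uint64_t thread_id = 0;   // 0: handshake not complete, thread ID unknown
        bool     open = true;
    };

    void send_query(BackendConn& backend, const std::string& sql);   // assumed helper

    // Backends with a known thread ID are killed with a KILL query; backends that
    // never received their ID are simply closed so no stale connection remains.
    void kill_session_backends(std::vector<BackendConn>& backends)
    {
        for (auto& backend : backends)
        {
            if (backend.thread_id != 0)
            {
                send_query(backend, "KILL " + std::to_string(backend.thread_id));
            }
            else
            {
                backend.open = false;   // forcibly close the connection
            }
        }
    }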