KILL commands are now sent to the backends asynchronously. Because the
LocalClient class is used to connect to the servers, this creates an extra
connection on top of the connections already created by the session.
If the user does not have permission to execute the KILL, the error message
is currently lost. This could be solved by adding a "result handler" to the
LocalClient class that is called with the result.
All the information the LocalClient needs can be extracted when it is
constructed. The only obstacle is that the LocalClient is constructed at the
same time as the session. Since the client DCB is created before the session,
the shared data can safely be extracted directly from the DCB.
If the client sends data before authentication is complete, it must not be
discarded; it needs to be processed as if it had been sent in a separate
network packet.
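A rough sketch of the intended behaviour; the helper functions below are
hypothetical stand-ins for the real protocol code.

```cpp
struct DCB;    // MaxScale descriptor control block
struct GWBUF;  // MaxScale buffer type

// Hypothetical helpers standing in for the real protocol module code.
bool   authentication_complete(DCB* dcb);
void   stash_data(DCB* dcb, GWBUF* buffer);  // append to a per-DCB stash
GWBUF* take_stashed_data(DCB* dcb);          // detach and return the stash, or NULL
void   process_client_packet(DCB* dcb, GWBUF* buffer);

// Data that arrives before authentication has finished is kept; once
// authentication completes, it is processed as if it had arrived in its
// own network packet.
void handle_client_data(DCB* dcb, GWBUF* buffer)
{
    if (!authentication_complete(dcb))
    {
        stash_data(dcb, buffer);  // do not discard the early data
        return;
    }

    if (GWBUF* stashed = take_stashed_data(dcb))
    {
        process_client_packet(dcb, stashed);
    }

    process_client_packet(dcb, buffer);
}
```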
If a module command returns a JSON object, it is always returned to the
caller, regardless of whether the command itself succeeded. Otherwise, if the
command failed and has set an error message, that error message is returned
to the caller as a JSON object.
Since the module command interface was expanded to include a JSON output
parameter, there is no longer a need for an output DCB. As the JSON can be
printed by both maxadmin and the REST API, explicit output formatting can be
removed from the module commands.
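For illustration, a module command under the expanded interface could look
roughly like the sketch below; the command itself and its argument handling
are made up, and the exact signature should be checked against the modulecmd
header.

```cpp
#include <jansson.h>
#include <maxscale/modulecmd.h>

// Sketch of a module command using the JSON output parameter. If *output is
// set, that JSON is handed back to the caller (maxadmin or the REST API)
// regardless of the boolean result; if the command fails without producing
// output, the message set with modulecmd_set_error() is returned as JSON.
static bool example_command(const MODULECMD_ARG* argv, json_t** output)
{
    if (argv->argc < 1)
    {
        modulecmd_set_error("Not enough arguments");
        return false;  // the caller receives the stored error as a JSON object
    }

    *output = json_pack("{s:s}", "status", "ok");
    return true;       // the JSON above is returned as-is
}
```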
If the authenticator option is enabled, no users were loaded, and no errors
occurred during the user loading process, the service credentials are
injected.
The `script_timeout` and `journal_max_age` parameters were not handled in
the monitor alteration code.
Also added the missing documentation for `alter monitor` to the maxadmin
help output.
When the database firewall filter is used in white-list mode, `USE <db>`
should be allowed. The database can always be specified when connecting, so
restricting `USE <db>` serves no purpose.
The result of the authentication should be ignored, but the scramble that is
calculated as a side effect still needs to be stored. This can be done by
altering the SQL used to fetch the matching row so that it matches only on
the username, not the network address.
Also expanded the test case to cover the use of bad credentials.
The avrorouter now uses the parameters from the source service. This removes
the need to define the same parameters in both the avrorouter service and the
binlogrouter service.
Added some missing configuration sanity checks and updated the tutorial to
reflect the new configuration method introduced in 2.1.
The documentation stated that the binlogrouter would use the cache
directory to store the binary log files. In reality, there was no default
value and the service would fail to start without a `binlogdir` parameter.
The router now uses the data directory (/var/lib/maxscale/) to store the
binary logs.
Set the default value of `mariadb10-compatibility` to true. This is in line
with the fact that most installations should use the router with a MariaDB
10.0 server or newer.
If a prepared statement fails to execute on a backend server, no prepared
statement ID is returned. As the connection to the slave backend will be
closed when the result of a session command differs from the master's
response, there's no need to even attempt extracting the response.
This change avoids triggering a false positive in
`mxs_mysql_extract_ps_response` when an attempt is made to extract a
COM_STMT_PREPARE response from a reply that is not one.
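Roughly, the guard amounts to the following; the helpers and the command
constant are written out here as stand-ins, with `extract_ps_response()`
standing in for the real function named above.

```cpp
#include <cstdint>

struct DCB;    // MaxScale backend DCB
struct GWBUF;  // MaxScale buffer type

// Hypothetical stand-ins; extract_ps_response() represents
// mxs_mysql_extract_ps_response().
bool reply_is_complete(GWBUF* reply);
bool reply_is_error(GWBUF* reply);
void extract_ps_response(DCB* backend, GWBUF* reply);

// Parse the COM_STMT_PREPARE response only when it is a complete, successful
// reply. A failed session command on a slave closes the connection anyway,
// so the extraction can be skipped entirely.
void handle_session_command_reply(DCB* backend, GWBUF* reply, uint8_t command)
{
    const uint8_t COM_STMT_PREPARE = 0x16;

    if (command == COM_STMT_PREPARE && reply_is_complete(reply) && !reply_is_error(reply))
    {
        extract_ps_response(backend, reply);
    }
}
```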
When readwritesplit routes queued queries, it needs to adjust the currently
executed command of the protocol modules. This is not a true fix but rather a
workaround for the problems with queued query execution.
The correct solution would be to move the queued query handling into the
client protocol module so that all components see the same state.
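In practice the workaround amounts to something along these lines; the
helpers are hypothetical names for what readwritesplit and the protocol
module do.

```cpp
#include <cstdint>

struct GWBUF;        // MaxScale buffer type
struct MXS_SESSION;  // MaxScale session

// Hypothetical stand-ins for the protocol and routing code.
uint8_t command_of(GWBUF* query);                               // MySQL command byte of the packet
void    set_current_command(MXS_SESSION* session, uint8_t cmd); // update the client protocol state
void    route_query(MXS_SESSION* session, GWBUF* query);

// Workaround sketch: before a queued query is routed, readwritesplit updates
// the client protocol module's notion of the currently executed command so
// that the eventual response is interpreted against the right command.
void route_stored_query(MXS_SESSION* session, GWBUF* query)
{
    set_current_command(session, command_of(query));
    route_query(session, query);
}
```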
Asserting that only a complete COM_STMT_PREPARE response is returned when the
prepared statement preparation is extracted guarantees that the protocol
works as expected.
If a prepared statement sends large amounts of data, the target server to
which the data is sent is tracked. The tracked target was not reset after a
multi-packet query completed, and the target itself was used to check whether
the session was processing a multi-packet query.
Changed the check to use the boolean variable instead of the target and added
a reset of the tracked target after a multi-packet query completes.
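Conceptually the change looks like the following, with illustrative member
and helper names.

```cpp
struct GWBUF;    // MaxScale buffer type
struct Backend;  // stand-in for the routing session's backend reference

// Illustrative routing-session state: the boolean flag now drives the check,
// while the tracked target only records where the data is being sent.
struct RoutingSession
{
    bool     large_query = false;          // true while a multi-packet query is in flight
    Backend* large_query_target = nullptr; // where the data is being sent
};

bool is_last_packet(GWBUF* buffer);  // hypothetical helper
void route_to_target(Backend* target, GWBUF* buffer);

void route_large_query_packet(RoutingSession* rses, GWBUF* buffer)
{
    if (rses->large_query)  // check the flag, not the tracked target
    {
        route_to_target(rses->large_query_target, buffer);

        if (is_last_packet(buffer))
        {
            // Multi-packet query completed: reset both the flag and the target.
            rses->large_query = false;
            rses->large_query_target = nullptr;
        }
    }
}
```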
The new parameter allows the session to be "locked" to the master server
after a stored procedure is called. This will keep the session state
consistent if the stored procedure call modifies the state of the session.
The monitor performs simple monitoring of a group replication cluster. It
expects no extra parameters beyond the common monitor parameters and only
sets the master and slave status bits on the servers. It is part of the
experimental package because the module is still highly experimental.
A further improvement would be to make use of the synced state.
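As a sketch of what each monitoring pass does, assuming the standard monitor
API for pending status bits (member names may differ between versions, and
the primary-detection helper is hypothetical):

```cpp
#include <maxscale/monitor.h>

// Hypothetical helper: true if the server is the current group replication primary.
static bool is_group_primary(MXS_MONITORED_SERVER* server);

// Sketch of the per-tick status update: only the master and slave status
// bits are touched, as described above.
static void update_server_status(MXS_MONITOR* monitor)
{
    for (MXS_MONITORED_SERVER* ptr = monitor->monitored_servers; ptr; ptr = ptr->next)
    {
        monitor_clear_pending_status(ptr, SERVER_MASTER | SERVER_SLAVE);
        monitor_set_pending_status(ptr, is_group_primary(ptr) ? SERVER_MASTER : SERVER_SLAVE);
    }
}
```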