If the client sends data before authentication is complete, it must not be
discarded; it needs to be processed as if it had been sent in a separate
network packet.
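A minimal sketch of the buffering idea, with hypothetical names (the actual
protocol code is structured differently):

    #include <cstdint>
    #include <deque>
    #include <vector>

    using Packet = std::vector<uint8_t>;

    struct ClientSession
    {
        bool authenticated = false;
        std::deque<Packet> pending;  // data that arrived before auth finished

        void route(Packet p) { /* hand the packet to the router */ }

        // Called for every network read from the client.
        void on_read(Packet data)
        {
            if (!authenticated)
            {
                pending.push_back(std::move(data));  // keep it, don't discard
            }
            else
            {
                route(std::move(data));
            }
        }

        // Called once authentication completes: replay the buffered data
        // one packet at a time, as if each had arrived separately.
        void on_auth_complete()
        {
            authenticated = true;
            while (!pending.empty())
            {
                route(std::move(pending.front()));
                pending.pop_front();
            }
        }
    };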
If a module command returns a JSON object, it is always returned to the
caller, regardless of whether the command itself succeeded. If the command
failed without producing any JSON output but has set an error message, that
message is returned wrapped in a JSON object.
Since the module command interface was expanded to include a JSON output
parameter, there is no longer a need for an output DCB. As the JSON can be
printed by both maxadmin and the REST API, this allows the removal of
explicit output formatting in module commands.
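A sketch of what a command handler might look like with this interface. The
json_t** output parameter and modulecmd_set_error() are recalled from the
MaxScale module command API, so treat the exact signatures as approximate;
do_work() is purely hypothetical:

    #include <maxscale/modulecmd.h>
    #include <jansson.h>

    static bool do_work() { return true; }  // hypothetical stand-in

    // Any JSON stored in *output is returned to the caller whether or
    // not the command succeeds. If the command fails without producing
    // JSON, the message passed to modulecmd_set_error() is wrapped in a
    // JSON object instead.
    static bool my_command(const MODULECMD_ARG* args, json_t** output)
    {
        if (do_work())
        {
            *output = json_pack("{s:s}", "status", "ok");
            return true;
        }

        modulecmd_set_error("The work could not be done");
        return false;
    }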
The yargs framework combined with the pkg packaging causes the executable
name to be mangled on installation. For this reason, the usage should be
explicitly added to each command.
The `script_timeout` and `journal_max_age` parameters weren't handled in
the monitor alteration code.
Also added missing documentation to maxadmin help output for
`alter monitor`.
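For example, with a hypothetical monitor named MySQL-Monitor, something along
these lines should now work (the values are illustrative):

    maxadmin alter monitor MySQL-Monitor script_timeout=120 journal_max_age=28800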
When the database firewall filter is used in white-list mode,
'USE <db>' should be allowed: the client can always specify the
database when connecting, so restricting 'USE <db>' serves no purpose.
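For reference, white-list mode is the configuration where the filter's action
is set to allow, along the lines of this hypothetical snippet (names and the
rule file path are illustrative):

    [Database-Firewall]
    type=filter
    module=dbfwfilter
    rules=/etc/maxscale-rules/rules.txt
    action=allow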
If a script variable resolves to an empty string, the replacement attempt
will fail with an out-of-memory error, as the subsequent realloc call
requires a positive value for the new size.
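A minimal sketch of the failure mode and the guard, with hypothetical names:

    #include <cstdlib>

    // If the variable expands to an empty string, new_size can be 0.
    // realloc(ptr, 0) may return NULL even on success, so a plain NULL
    // check misreads the result as out-of-memory. Requesting at least
    // one byte keeps the check meaningful.
    char* resize_buffer(char* buf, std::size_t new_size)
    {
        return static_cast<char*>(std::realloc(buf, new_size ? new_size : 1));
    }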
The test now checks that both users with passwords and users without
passwords work through MaxScale when skip_authentication is enabled. In
addition, the test makes sure that a wrong password and a missing password
are properly detected.
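For context, a sketch of the kind of configuration the test exercises; the
service and server names are hypothetical, and skip_authentication defers the
password check to the backend:

    [RW-Split-Service]
    type=service
    router=readwritesplit
    servers=server1
    user=maxuser
    password=maxpwd
    skip_authentication=true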
The result of the authentication should be ignored, but the scramble that is
calculated as a side-effect still needs to be stored. This can be done by
altering the SQL used to fetch the matching row so that it matches only on
the username, not on the network address (see the sketch below).
Also expanded the test case to cover the use of bad credentials.
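A minimal sketch of the query change, with hypothetical table and column
names:

    // Matching on the user name alone guarantees that a row is found,
    // so the scramble is computed and stored even when authentication
    // is going to fail anyway.
    static const char MATCH_USER_ONLY[] =
        "SELECT password FROM mysql_users WHERE user = ?";
    // previously: "... WHERE user = ? AND host = ?"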
The avrorouter now uses the parameters from the source service. This removes
the need to define the same parameters redundantly in the avrorouter service
when they are already defined in the binlogrouter service.
Added some missing configuration sanity checks and updated the tutorial to
reflect the new configuration method introduced in 2.1.
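A hedged sketch of the resulting configuration style; the service names and
parameter values are illustrative, with the avrorouter reading settings such
as binlogdir and filestem from the service named in source:

    [Replication-Service]
    type=service
    router=binlogrouter
    server_id=2
    binlogdir=/var/lib/maxscale
    filestem=binlog

    [Avro-Service]
    type=service
    router=avrorouter
    source=Replication-Service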
The documentation stated that the binlogrouter would use the cache
directory to store the binary log files. In reality, there was no default
value and the service would fail to start without a binlogdir parameter.
The router now uses the data directory (/var/lib/maxscale/) to store the
binary logs.
Set the default value of mariadb10-compatibility to true, in line with the
fact that most installations use the router with a MariaDB 10.0 server or
newer.
The binlogrouter tests can safely use the sync_slaves functionality of the
test framework as long as a sensible timeout is used.
Cleaned up the tests by removing redundant code and by allocating objects on
the stack, thus removing the need for manual memory management.
If a prepared statement fails to execute on a backend server, no prepared
statement ID is returned. As the connection to the slave backend will be
closed when the result of a session command differs from the master's
response, there's no need to even attempt extracting the response.
This change avoids triggering a false positive in
mxs_mysql_extract_ps_response when it is asked to extract a
COM_STMT_PREPARE response from a reply that isn't one.
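A sketch of the guard, with hypothetical helpers around the real
mxs_mysql_extract_ps_response(); in the MySQL protocol an error reply starts
with a 0xFF header byte and carries no statement ID:

    #include <cstdint>

    constexpr uint8_t COM_STMT_PREPARE_CMD = 0x16;

    static bool reply_is_error(const uint8_t* payload)
    {
        return payload[0] == 0xFF;  // MySQL error packet header
    }

    static void on_session_command_reply(uint8_t command, const uint8_t* payload)
    {
        // Only a successful COM_STMT_PREPARE reply contains a statement
        // ID; on error the slave connection gets closed anyway, so the
        // extraction is skipped entirely.
        if (command == COM_STMT_PREPARE_CMD && !reply_is_error(payload))
        {
            // mxs_mysql_extract_ps_response(...) would run here
        }
    }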
When readwritesplit is routing any queued queries, the currently executed
command of the protocol modules needs to be adjusted by
readwritesplit. This is not a true fix but rather a workaround for the
problems of queued query execution.
The correct solution would be to move the queued query handling into the
client protocol module so that all components see the same state.
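A rough sketch of the workaround, with entirely hypothetical types; the point
is that readwritesplit updates the protocol's notion of the command in flight
before routing a queued query:

    #include <cstdint>
    #include <queue>
    #include <vector>

    using Buffer = std::vector<uint8_t>;

    struct Protocol
    {
        uint8_t current_command = 0;  // what the protocol thinks is in flight
    };

    struct Session
    {
        Protocol protocol;
        std::queue<Buffer> queued;

        // MySQL packet layout: 3-byte length, 1-byte sequence number,
        // then the command byte.
        static uint8_t command_of(const Buffer& q)
        {
            return q.size() > 4 ? q[4] : 0;
        }

        void route(Buffer q) { /* forward to a backend */ }

        void route_queued()
        {
            while (!queued.empty())
            {
                Buffer q = std::move(queued.front());
                queued.pop();
                // Workaround: keep the protocol state consistent with
                // the query actually being sent.
                protocol.current_command = command_of(q);
                route(std::move(q));
            }
        }
    };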
Asserting that only a complete COM_STMT_PREPARE response is processed when
the prepared statement response is extracted guarantees that the protocol
works as expected.
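A minimal sketch of such an assertion, with hypothetical types:

    #include <cassert>
    #include <cstdint>

    struct Reply
    {
        uint8_t command;
        bool complete;  // true once the whole response has been read
    };

    static void extract_ps_id(const Reply& reply)
    {
        // The extraction must only ever see one complete
        // COM_STMT_PREPARE response.
        assert(reply.command == 0x16 /* COM_STMT_PREPARE */ && reply.complete);
        // ... extract the statement ID here
    }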
The test inserted 200k rows with autocommit enabled which made it very
slow. Wrapping the inserts inside a transaction gives the test a
significant speed boost.
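A sketch of the change, using a hypothetical query() helper:

    #include <string>

    static void query(const std::string& sql) { /* send to the server */ }

    static void insert_rows()
    {
        // One transaction instead of 200k autocommitted inserts: the
        // per-statement commit overhead otherwise dominates the runtime.
        query("BEGIN");
        for (int i = 0; i < 200000; i++)
        {
            query("INSERT INTO test.t1 VALUES (" + std::to_string(i) + ")");
        }
        query("COMMIT");
    }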