The multi-statement detection did not check for the existence of
semicolons before doing the heavier processing.
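As an illustration of the fast path (the helper names here are hypothetical, not the actual detection code), a single scan for a semicolon is enough to rule out the common case before any heavier classification runs:

```
#include <cctype>
#include <cstring>
#include <string>

// Heavier check: scan the statement while tracking quoted strings and
// report whether a statement separator is followed by more SQL text.
// (Illustrative only; a real classifier also has to handle comments.)
static bool classify_multi_statement(const std::string& sql)
{
    char quote = '\0';

    for (std::size_t i = 0; i < sql.size(); i++)
    {
        char c = sql[i];

        if (quote != '\0')
        {
            if (c == quote)
            {
                quote = '\0';
            }
        }
        else if (c == '\'' || c == '"' || c == '`')
        {
            quote = c;
        }
        else if (c == ';')
        {
            // Anything other than trailing whitespace after the
            // semicolon means that a second statement follows.
            for (std::size_t j = i + 1; j < sql.size(); j++)
            {
                if (!std::isspace(static_cast<unsigned char>(sql[j])))
                {
                    return true;
                }
            }
        }
    }

    return false;
}

bool is_multi_statement(const std::string& sql)
{
    // Fast path: without a single semicolon the query cannot contain
    // multiple statements, so the heavier scan is skipped entirely.
    if (std::memchr(sql.data(), ';', sql.size()) == nullptr)
    {
        return false;
    }

    return classify_multi_statement(sql);
}
```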
Calculate the packet length only once for the result state management.
Replaced the original version of the function with the reference version
and used it everywhere. Added runtime assertions to check that an invalid
DCB is never processed.
The DCB passed as the clientReply parameter is guaranteed to match one
of the DCBs in the RWBackends. By using a reference, the need to copy a
shared_ptr is removed (along with the atomic operations that it implies),
thus reducing the overhead in clientReply and the functions it uses.
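A minimal, self-contained illustration of the overhead being avoided (the types and function names are placeholders, not the actual readwritesplit interface): copying a shared_ptr bumps an atomic reference count on entry and exit, while passing it by const reference does not.

```
#include <memory>
#include <vector>

struct Backend
{
    int id = 0;
};

using SBackend = std::shared_ptr<Backend>;

// Copying the shared_ptr increments (and later decrements) an atomic
// reference count on every call.
int reply_by_value(SBackend backend)
{
    return backend->id;
}

// Passing the shared_ptr by const reference avoids those atomic
// operations; this is safe because the caller keeps the pointer alive
// for the duration of the call, which is exactly the guarantee above.
int reply_by_reference(const SBackend& backend)
{
    return backend->id;
}

int main()
{
    std::vector<SBackend> backends = {std::make_shared<Backend>()};

    reply_by_value(backends[0]);     // use count 1 -> 2 -> 1
    reply_by_reference(backends[0]); // use count stays at 1
}
```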
The result collection did not reset properly when a non-resultset was
returned for a request. As collected results need to be distinguishable
from single-packet responses, a new buffer type was added.
The new buffer type is used by readwritesplit, which uses result
collection for the preparation of prepared statements.
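A rough sketch of the idea, with hypothetical type names rather than the real buffer flags: tagging the buffer lets downstream code tell a collected result apart from an ordinary single-packet response.

```
#include <cstdint>

// Hypothetical buffer type bits; the real flags differ, this only
// illustrates marking a collected result so that it can be told apart
// from an ordinary single-packet response.
enum buffer_type : std::uint32_t
{
    BUF_TYPE_UNDEFINED        = 0,
    BUF_TYPE_COLLECTED_RESULT = 1 << 0,
};

struct Buffer
{
    std::uint32_t type = BUF_TYPE_UNDEFINED;
    // ... payload omitted
};

inline bool is_collected_result(const Buffer& buf)
{
    // The router can reset its result collection state based on this
    // flag instead of guessing from the packet contents.
    return buf.type & BUF_TYPE_COLLECTED_RESULT;
}
```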
Moved the current command tracking to the RWBackend class, as the command
tracked by the protocol can change before a response to the executed
command is received.
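A hypothetical sketch of the approach (not the actual RWBackend interface): the command is recorded on the backend when the query is written, so the value is still correct when that backend's response arrives, even if the protocol's shared command state has moved on.

```
#include <cstdint>

class RWBackend
{
public:
    void write(std::uint8_t command /*, GWBUF* buffer */)
    {
        m_current_command = command;  // snapshot taken at write time
        // ... hand the buffer over to the protocol module
    }

    std::uint8_t current_command() const
    {
        return m_current_command;     // read when the reply comes back
    }

private:
    std::uint8_t m_current_command = 0;
};
```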
Removed a false debug assertion in the mxs_mysql_extract_ps_response
function that was triggered when a very large prepared statement response
was processed in multiple parts.
Inlined the frequently used getter/setter type functions. Profiling
shows that inlining the RWBackend get/set functions for the reply state
manipulation reduces the relative cost of these functions to acceptable
levels. Inlining the Backend state function did not have as large an
effect, but it appears to contribute a slight performance boost.
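For illustration, a sketch of what the inlined accessors look like when defined in the class body (the state names are made up for the example): the compiler can then inline the calls and the per-call overhead seen in profiling disappears.

```
enum class ReplyState
{
    START,
    RSET_COLDEF,
    RSET_ROWS,
    DONE,
};

class RWBackend
{
public:
    // Defined in the class body (i.e. in the header), so calls can be
    // inlined at every call site.
    ReplyState get_reply_state() const
    {
        return m_reply_state;
    }

    void set_reply_state(ReplyState state)
    {
        m_reply_state = state;
    }

private:
    ReplyState m_reply_state = ReplyState::START;
};
```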
The result processing code did unnecessary work to confirm that the result
buffers were contiguous. The code also assumed that multiple packets could
be routed at the same time when, in fact, only one contiguous result packet
is returned at a time.
By assuming that the buffers are contiguous and contain only one packet,
most of the copying and buffer manipulation can be avoided.
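A small sketch of why this is cheap under those assumptions (the function name is illustrative): a MySQL/MariaDB packet begins with a 4-byte header holding a 3-byte little-endian payload length, so a contiguous single-packet buffer can be measured once, in place, without copying.

```
#include <cstdint>

// A MySQL/MariaDB packet starts with a 4-byte header: a 3-byte
// little-endian payload length followed by a sequence number.
constexpr std::uint32_t MYSQL_HEADER_LEN = 4;

// With the buffer known to be contiguous and to hold exactly one
// packet, the total length is computed once, in place, without any
// copying or buffer concatenation.
inline std::uint32_t packet_length(const std::uint8_t* data)
{
    std::uint32_t payload = data[0] | (data[1] << 8) | (data[2] << 16);
    return MYSQL_HEADER_LEN + payload;
}
```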
If a module command returns a JSON object, it will always be
returned to the caller, irrespective of whether the command
itself succeeded or not.
Otherwise, if the command failed and has set an error message,
that error message will be returned wrapped in a JSON object.
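The decision order can be summarised with a hedged sketch (all names are placeholders; a real implementation would build the object with a proper JSON library instead of formatting strings by hand):

```
#include <string>

struct Json
{
    std::string text;
};

Json error_to_json(const std::string& message)
{
    return Json{"{\"errors\": [{\"detail\": \"" + message + "\"}]}"};
}

// ok:     whether the module command itself succeeded
// output: JSON produced by the command, possibly empty
// error:  error message set by the command, possibly empty
Json command_result(bool ok, const Json& output, const std::string& error)
{
    if (!output.text.empty())
    {
        // Whatever JSON the command produced is always passed back,
        // whether the command succeeded or not.
        return output;
    }

    if (!ok && !error.empty())
    {
        // A failed command with an error message gets that message
        // wrapped in a JSON object.
        return error_to_json(error);
    }

    return Json{};
}
```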
Since the module command interface was expanded to include a JSON output
parameter, there is no longer a need for an output DCB. As the JSON can be
printed by both maxadmin and the REST API, this allows the removal of
explicit output formatting in module commands.
The `script_timeout` and `journal_max_age` parameters weren't handled in
the monitor alteration code.
Also added missing documentation to maxadmin help output for
`alter monitor`.
The avrorouter now uses the parameters from the source service. This
removes the need to define redundant parameters in the avrorouter
service when they are already defined in the binlogrouter service.
Added some missing configuration sanity checks and updated the tutorial to
reflect the new configuration method introduced in 2.1.
The documentation stated that the binlogrouter would use the cache
directory to store the binary log files. In reality, there was no default
value and the service would fail to start without a binlogdir parameter.
The router now uses the data directory (/var/lib/maxscale/) to store the
binary logs.
Set the default value of mariadb10-compatibility to true. This is in line
with the fact that most installations should use the router with a MariaDB
10.0 server or newer.
If a prepared statement fails to execute on a backend server, no prepared
statement ID is returned. As the connection to the slave backend will be
closed when the result of a session command differs from the master's
response, there's no need to even attempt extracting the response.
This change avoids triggering a false positive in
mxs_mysql_extract_ps_response when it is asked to extract a
COM_STMT_PREPARE response from a response that isn't one.
When readwritesplit routes queued queries, it needs to adjust the
currently executed command of the protocol modules. This is not a true
fix but a workaround for the problems of queued query execution.
The correct solution would be to move the queued query handling into the
client protocol module so that all components see the same state.
If a prepared statement sends large amounts of data, the target server
where the data is sent is tracked. The tracked target was not reset
after a multi-packet query completed, and the target itself was used to
check whether the session was processing a multi-packet query.
Changed the check to use a boolean variable instead of the target and
added a reset of the tracked target after the multi-packet query
completes.
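A sketch of the fix with hypothetical member names: the multi-packet check now reads a boolean flag, and the tracked target is cleared when the multi-packet query finishes.

```
struct Backend;

class Session
{
public:
    bool is_large_query() const
    {
        return m_large_query;            // previously inferred from the target
    }

    Backend* large_query_target() const
    {
        return m_large_query_target;
    }

    void start_large_query(Backend* target)
    {
        m_large_query = true;
        m_large_query_target = target;   // where the remaining data is sent
    }

    void finish_large_query()
    {
        m_large_query = false;
        m_large_query_target = nullptr;  // this reset was previously missing
    }

private:
    bool     m_large_query = false;
    Backend* m_large_query_target = nullptr;
};
```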
The new parameter allows the session to be "locked" to the master server
after a stored procedure is called. This will keep the session state
consistent if the stored procedure call modifies the state of the session.
The removal of a server from a service is intended to affect only new
sessions.
Added a test that checks that existing connections are kept open even if
the server is removed from the service.
The enums exposed by the connector are not intended to be used by the
users of the library. The fact that the protocol and other modules used
them violated how the library is intended to be used.
Adding an internal mapping into MaxScale also removes some of the
dependencies that the core has on the connector.
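A hedged sketch of what such an internal mapping looks like: modules work with an internal enumeration, and a single translation point maps the raw value coming off the wire (or from the connector) onto it, so the connector's own enums never leak into the rest of the code base. The internal names are invented for the example; the numeric values are from the MySQL wire protocol.

```
#include <cstdint>

enum class InternalCommand
{
    UNKNOWN,
    QUIT,
    QUERY,
    STMT_PREPARE,
};

InternalCommand map_command(std::uint8_t wire_value)
{
    switch (wire_value)
    {
    case 0x01: return InternalCommand::QUIT;          // COM_QUIT
    case 0x03: return InternalCommand::QUERY;         // COM_QUERY
    case 0x16: return InternalCommand::STMT_PREPARE;  // COM_STMT_PREPARE
    default:   return InternalCommand::UNKNOWN;
    }
}
```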
Cleaned up the MaxScale version of the mysql.h header by removing all
unused includes. This revealed a large number of dependencies on these
removed includes in other files, which needed to be fixed.
Also sorted all includes in the changed files by type and in alphabetical
order. Removed the explicit revision history from modified files.
With GTID master registration, the filename is always taken from the GTID
repository. The filestem option is not written if binlog_name is not set,
as it is not needed by GTID registration. router->fileroot is set when the
last name is loaded from the GTID repository.