If the master succeeds in executing a session command but the slave
fails, the error message from the slave can help explain why it failed.
At the moment this is mainly relevant for the inspection of test results.
Previously, if the server had no GTIDs, the method would fail, leading
to a confusing error message. This could even stop the monitor from
working entirely if a recent server version (10.X) did not have any
GTID events.
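As a rough sketch of the kind of guard involved (the function and
variable names below are hypothetical, not the actual monitor code):

    #include <string>

    // Illustrative only: treat an empty GTID position as a normal state
    // instead of a parse failure.
    bool update_gtids(const std::string& gtid_current_pos)
    {
        if (gtid_current_pos.empty())
        {
            // The server simply has no GTID events yet; not an error.
            return true;
        }

        // ... parse and store the GTID position here ...
        return true;
    }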
'maxctrl list sessions' will now show the connection time and idleness
in addition to the id, user, host and service of the session. Further,
the columns have been reordered somewhat so that the id, user and host
are shown first, and the service last.
The transaction replay could get mixed up with new queries if the
client managed to execute one while the delayed routing was taking
place. A proper way to solve this would be to cork the client DCB until
the transaction is fully replayed. As that change would be considerably
more complex than simply labeling the queries that are being retried,
the corking implementation is left for later when a more complete
solution can be designed.
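As a rough sketch of the labeling approach, with hypothetical names;
the real routing code is more involved:

    #include <deque>
    #include <string>

    // Queries re-sent by the transaction replay are tagged so they are not
    // confused with new client queries that arrive while the delayed
    // routing is still in progress.
    struct QueuedQuery
    {
        std::string sql;
        bool is_replayed = false;   // set only by the replay logic
    };

    class ReplayState
    {
    public:
        void route(QueuedQuery q)
        {
            if (m_replaying && !q.is_replayed)
            {
                // A new client query arrived mid-replay: hold it until the
                // transaction has been fully replayed.
                m_pending.push_back(std::move(q));
                return;
            }

            // ... hand the query to the backend ...
        }

    private:
        bool m_replaying = false;
        std::deque<QueuedQuery> m_pending;
    };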
This commit also adds some of the missing info logging for the
transaction replay, which makes the analysis of failures easier.
If the client sends two different sets of capability bits during the
authentication phase of an SSL enabled connection, both sets need to be
combined. This prevents the capabilities from degrading mid-connection,
which is what happens when Oracle Connector/J drops the SSL capability
bit mid-authentication.
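In essence the fix amounts to OR-ing the capability sets together
instead of overwriting them. A simplified sketch, with a hypothetical
struct (the CLIENT_SSL value comes from the client/server protocol):

    #include <cstdint>

    // The SSL capability bit from the MySQL/MariaDB client protocol.
    constexpr uint32_t CLIENT_SSL = 1 << 11;

    struct AuthState
    {
        uint32_t client_capabilities = 0;

        // Combine instead of overwrite: if the first handshake packet
        // advertised SSL, a later packet without the bit (as Connector/J
        // sends) cannot downgrade the connection.
        void update_capabilities(uint32_t caps)
        {
            client_capabilities |= caps;
        }
    };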
Servers with a zero weight would always be used over ones that have a
non-zero weight. This means the behavior was inverted, which caused the
mxs2054_hybrid_cluster test to fail in 2.3.
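A simplified sketch of the intended ordering, with hypothetical types;
the actual selection logic weighs more factors:

    #include <vector>

    struct Candidate
    {
        int weight;        // configured weight, 0 means "last resort only"
        int connections;   // current load
    };

    // Intended ordering: a weighted server wins over a zero-weight one,
    // ties broken by load. Inverting the weight comparison gives the buggy
    // behavior where zero-weight servers are always picked first.
    const Candidate* choose(const std::vector<Candidate>& servers)
    {
        const Candidate* best = nullptr;

        for (const auto& c : servers)
        {
            if (!best
                || c.weight > best->weight
                || (c.weight == best->weight && c.connections < best->connections))
            {
                best = &c;
            }
        }

        return best;
    }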
Also fixed a typo in the deprecation message.
The blocking of the nodes that happens before it could cause the
connections to break. This also removes the need to fix the
replication, which takes time.
Commit a9e236497963251f8b4afa07484b88ad97e73a03 changed where the PS ID
for a binary protocol command is replaced with the internal form. This
caused prepared statements that are also session commands to always be
routed with the external ID.
As the external ID is almost always the master's ID, the bug resulted
in odd side-effects whose true cause was only revealed once the error
message sent by the slave was included in the log messages.
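A sketch of the kind of external-to-internal translation involved, with
hypothetical names; the point is that the translation has to be applied
regardless of whether the command is also a session command:

    #include <cstdint>
    #include <unordered_map>

    // Mapping between the statement ID the client sees (the external ID,
    // usually the master's) and the router's internal ID that is valid on
    // every backend.
    class PsIdMap
    {
    public:
        void add(uint32_t external_id, uint32_t internal_id)
        {
            m_map[external_id] = internal_id;
        }

        // Skipping this translation for prepared statements that are also
        // session commands routes them with the external ID, which only
        // the master understands.
        uint32_t to_internal(uint32_t external_id) const
        {
            auto it = m_map.find(external_id);
            return it != m_map.end() ? it->second : external_id;
        }

    private:
        std::unordered_map<uint32_t, uint32_t> m_map;
    };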
If the service didn't require the collection of complete packets, the
user reauthentication done with COM_CHANGE_USER would be skipped. This
caused the change_user test to fail.
By temporarily switching to full packet collection mode for the duration
of the COM_CHANGE_USER, we avoid duplicating the code for the streaming
router types.
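A minimal sketch of the temporary switch, using hypothetical names for
the collection modes:

    #include <cstdint>

    enum class CollectMode
    {
        STREAMING,      // no complete packet collection (streaming routers)
        FULL_PACKETS    // collect complete packets before processing
    };

    constexpr uint8_t MYSQL_COM_CHANGE_USER = 0x11;   // protocol command byte

    struct ClientProtocol
    {
        CollectMode mode = CollectMode::STREAMING;
        CollectMode saved_mode = CollectMode::STREAMING;

        void on_command(uint8_t command)
        {
            if (command == MYSQL_COM_CHANGE_USER)
            {
                // Reauthentication needs the whole packet: switch to full
                // collection for the duration of the COM_CHANGE_USER.
                saved_mode = mode;
                mode = CollectMode::FULL_PACKETS;
            }
        }

        void on_change_user_complete()
        {
            // Restore the original mode once the reauthentication is done.
            mode = saved_mode;
        }
    };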
Exclude systemd usage if the library is not installed. Only what is
necessary is excluded; this keeps the object size the same and still
compiles most of the code.
The systemd watchdog is notified at a little more than 2/3 of the
systemd-configured time. To set and enable the watchdog, add e.g.
WatchdogSec=30s to the service config (maxscale.service).
For building: install libsystemd-dev.
The next commit will modify the CMake configuration and code to
conditionally compile the new code based on the existence of
libsystemd-dev.
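A sketch of the notification logic: sd_watchdog_enabled() and
sd_notify() are the actual libsystemd calls, while the surrounding
functions are illustrative only:

    #include <cstdint>
    #include <systemd/sd-daemon.h>

    // Returns the interval at which WATCHDOG=1 should be sent, or 0 if the
    // watchdog is not enabled for this service.
    uint64_t watchdog_interval_us()
    {
        uint64_t usec = 0;

        // With WatchdogSec=30s in maxscale.service this yields 30 seconds
        // in microseconds; notifying at roughly 2/3 of it keeps well
        // within the deadline.
        if (sd_watchdog_enabled(0, &usec) > 0)
        {
            return usec * 2 / 3;
        }

        return 0;
    }

    void notify_watchdog()
    {
        // Tell systemd the process is still making progress.
        sd_notify(0, "WATCHDOG=1");
    }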
If the test uses two MaxScales, they are automatically stopped after the
test. This prevents the second MaxScale from interfering with subsequent
tests.
If the initial handshake that is sent by the accepting thread is buffered,
the subsequent flushing of it is done by the owning thread. As
cross-thread buffer usage is not allowed, the initial handshake must be
sent by the owning thread.
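A sketch of the general pattern, with hypothetical Worker and DCB types
standing in for the real ones:

    #include <functional>

    struct Worker
    {
        // In the real code this queues the task on the worker's event
        // loop; simplified here to a direct call.
        void execute(std::function<void()> task) { task(); }
    };

    struct DCB
    {
        Worker* owner;                                    // thread owning this connection
        void write(const char* /*data*/, int /*len*/) {}  // owner-only operation
    };

    void send_handshake(DCB* dcb, const char* handshake, int len)
    {
        // Rather than writing (and possibly buffering) from the accepting
        // thread, hand the write to the owning thread so no buffer is
        // ever touched cross-thread.
        dcb->owner->execute([dcb, handshake, len]() {
            dcb->write(handshake, len);
        });
    }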
If a PS command was routed multiple times, the ID was not reverted to
the external ID in the failure cases. This prevented prepared
statements from being re-routed correctly.
By exposing a (currently undocumented) debug endpoint that lets one
monitor interval pass, we make the reuse of the monitor waiting
functionality a lot easier. With it, when MaxScale is started by the test
framework it knows that at least one monitor interval will have passed for
all monitors and that the system is ready to accept queries.
As the ssh_node_f function supports full shell syntax, all of the work can
be done with a single ssh connection. This removes the overhead that each
extra ssh connection adds.
The collection of the various artifacts generated by a test case and
the core dump detection are now done in the same SSH command. This
removes the extra overhead that a separate connection added.
There were a total of five SSH connections opened at the start of each
test. Only two of these are currently required: the SSL certificate
directory check and the actual command that restarts MaxScale. Two of the
three remaining commands, stopping of MaxScale and copying of the
configuration, can be made conditional or combined into other
commands.
The stopping of MaxScale is done to prevent it from interfering with the
cluster setup process. As MaxScale does nothing if nothing is wrong, it is
safe to make the restart conditional so that it is done only when a
problem in the cluster setup is detected.
The final SSH command, the MaxScale health check via maxadmin, can be
removed as it is redundant: the daemonization already covers this by
exiting only after MaxScale is ready.
A certain templated parameter was only substituted when the VMs were
provisioned. This needs to be handled by the test framework to allow
changes to the Galera cluster configuration.
Also made the startup of the "lesser" nodes parallel to minimize the
startup time.
The Galera configurations need pre-processing before they can be used.
Switched to std::endl to automatically flush the output at the end of
each line. This makes it easier to see what is happening when the tests
are run by buildbot. Also removed the extra startup of the servers that
was done right after installing the database.
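The flushing difference in question, as a small standalone example (the
messages are illustrative):

    #include <iostream>

    int main()
    {
        // "\n" only appends a newline; the text may sit in the stream
        // buffer, which hides progress when the output is captured by
        // buildbot.
        std::cout << "preprocessing galera configuration" << "\n";

        // std::endl appends a newline and flushes, so the line shows up
        // in the captured log immediately.
        std::cout << "starting node" << std::endl;

        return 0;
    }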
Grouped all binlogrouter and avrorouter tests so that they are executed as
the last tests. This helps prevent some side effects that result from the
"aggressive" replication modifications the tests do. Also removed some
commented out test cases.