The test case covers a few bugs that were fixed by the previous
commits. The first part of the test covers the case where master
reconnection fails while the session command history is being executed.
The second part makes sure that exceeding the session command history
limit prevents master reconnections from taking place.
The test now uses standard library threads and lambda functions to make
the code simpler. The test was also made more robust by ignoring any
errors that are caused by the exhaustion of available client-side TCP
ports.
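A minimal sketch of that pattern, assuming a hypothetical connect_fn
callable and that client-side port exhaustion surfaces as a
"Cannot assign requested address" error (the strerror text for
EADDRNOTAVAIL); this is an illustration, not the actual test code:

    #include <atomic>
    #include <functional>
    #include <string>
    #include <thread>
    #include <vector>

    // connect_fn is a hypothetical callable that attempts one client
    // connection and, on failure, stores the error message.
    int run_connection_storm(int num_threads,
                             std::function<bool(std::string&)> connect_fn,
                             int attempts_per_thread)
    {
        std::atomic<int> real_failures{0};
        std::vector<std::thread> threads;

        for (int i = 0; i < num_threads; i++)
        {
            threads.emplace_back([&]() {
                for (int j = 0; j < attempts_per_thread; j++)
                {
                    std::string error;
                    if (!connect_fn(error)
                        && error.find("Cannot assign requested address") == std::string::npos)
                    {
                        // Only count errors that are not caused by
                        // ephemeral port exhaustion on the client side.
                        ++real_failures;
                    }
                }
            });
        }

        for (auto& t : threads)
        {
            t.join();
        }

        return real_failures;
    }

Counting only the remaining failures keeps the test from flaking when
the machine simply runs out of ephemeral ports under heavy connection
churn.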
The hard limit of 10 seconds is too strict when taking into account the
fact that infinite refreshes were possible before the bug was fixed.
This also makes testing a lot easier where rapid reloads are necessary.
Certain MariaDB connectors use direct execution to batch the
COM_STMT_PREPARE and COM_STMT_EXECUTE commands without waiting for the
COM_STMT_PREPARE to complete. In these cases the COM_STMT_EXECUTE (and
other COM_STMT commands as well) uses the special ID 0xffffffff. When
this is detected, it should be substituted with the ID of the latest
statement that was prepared.
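A minimal sketch of the substitution, assuming the standard binary
protocol layout (4-byte packet header, 1-byte command byte, 4-byte
little-endian statement ID) and handling only COM_STMT_EXECUTE for
brevity; the fix_statement_id() helper is illustrative, not MaxScale's
actual code:

    #include <cstdint>
    #include <cstring>
    #include <vector>

    constexpr uint8_t COM_STMT_EXECUTE_CMD = 0x17;
    constexpr uint32_t SPECIAL_STMT_ID = 0xffffffff;

    // Replace the special 0xffffffff ID with the ID of the latest
    // prepared statement. The same substitution would apply to the
    // other COM_STMT commands.
    void fix_statement_id(std::vector<uint8_t>& packet, uint32_t latest_prepared_id)
    {
        // 4-byte header + command byte + 4-byte statement ID
        if (packet.size() < 9 || packet[4] != COM_STMT_EXECUTE_CMD)
        {
            return;
        }

        uint32_t id = 0;
        std::memcpy(&id, packet.data() + 5, sizeof(id));  // assumes a little-endian host

        if (id == SPECIAL_STMT_ID)
        {
            std::memcpy(packet.data() + 5, &latest_prepared_id, sizeof(latest_prepared_id));
        }
    }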
The test appears to fail when the throttling is unable to keep the QPS
high enough. To reduce the likelihood of this, lower the limit to
500 QPS.
In theory, the minimum delay of one millisecond in the delayed_call limits
the filter to a maximum QPS of 1000 as each query would wait for at least
a millisecond before being routed. This is yet to be proven but it would
explain why the tests are having a hard time approaching that level of
QPS.
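The arithmetic can be illustrated with a small standalone program (an
illustration of the reasoning, not the filter's code): if each query is
delayed one at a time by at least one millisecond, at most 1000 of them
can complete per second.

    #include <chrono>
    #include <iostream>
    #include <thread>

    int main()
    {
        using namespace std::chrono;

        const auto min_delay = milliseconds(1);  // mirrors the one millisecond minimum delay
        const auto run_time = seconds(2);

        long long queries = 0;
        const auto start = steady_clock::now();

        while (steady_clock::now() - start < run_time)
        {
            std::this_thread::sleep_for(min_delay);  // each "query" waits at least 1 ms
            ++queries;
        }

        // Prints a value close to, and never meaningfully above, 1000 QPS.
        std::cout << "QPS: " << queries / run_time.count() << "\n";
        return 0;
    }

In practice std::this_thread::sleep_for tends to overshoot one
millisecond, which would push the achieved rate below the theoretical
1000 QPS ceiling.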
Syncing the slaves should prevent replication lag from affecting the
test. The added logging will help determine which error caused the
failure.
Before the MXS-2250 fix, the following ends with an error:
CREATE TEMPORARY TABLE t (f INT);
DESCRIBE t;
The reason is that the first statement is sent to the master (and the
temporary table will not be replicated to the slaves) while the latter
is sent to some slave.
The bug appears when a session command that is executed on the master
fails. The logic in the code doesn't take this case into consideration
when it processes failed connections.
'build' and 'run_test' copy public ssh keys to all created VMs.
The legacy code takes keys from different locations, e.g.
~/build-scripts/team_keys, which causes errors when such files are
missing.
Now the public keys are taken from the defined file or, if none is
defined, from .ssh/id_rsa.pub of the current host machine.
The mdbci call can happen before the maxscale object is created, which
causes a segfault. To avoid it, the ssh call for
'maxscale --version-full' is moved to the end of the Testconnections
constructor.
Previously the Maxscale version check was in 'check_backend', but this
was removed from the test execution process. Now 'maxscale --version-full'
is added to the node creation function, call_mdbci().
Since UNIONs that would allow a column to be masked are rejected, there
is no need to check that masking is performed; we will never get that
far.