Basic tests can be executed with a 15-machine master/slave backend. These tests have the label BIG_REPL_BACKEND. The default template was modified to support the big backend. The tests are temporarily labeled UNSTABLE to prevent them from being executed nightly.
For the big tests, maxscale.cnf is automatically generated for any number of nodes.
maxscale-system-test was changed so that it controls the test environment by itself.
Every test checks which machines are running, compares them with the list of needed
machines, and starts new VMs if any are missing from the running machines list.
Tests execute MDBCI commands, so the MDBCI executable should be in the PATH.
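A minimal sketch of that environment check, assuming MDBCI is in the PATH; the exact MDBCI subcommands and the helper names below are illustrative assumptions, not the framework's real API:

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// Returns true if the command exited with status 0.
static bool run_cmd(const std::string& cmd)
{
    return std::system(cmd.c_str()) == 0;
}

// Starts any VM from `needed_nodes` that does not appear to be running.
void ensure_vms_running(const std::string& config,
                        const std::vector<std::string>& needed_nodes)
{
    for (const auto& node : needed_nodes)
    {
        // Hypothetical liveness check: ask MDBCI for the node's network info.
        if (!run_cmd("mdbci show network " + config + "/" + node + " > /dev/null 2>&1"))
        {
            // Bring up only the missing node.
            run_cmd("mdbci up " + config + "/" + node);
        }
    }
}
```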
The Galera documentation tells us to use the galera_new_cluster command to
start a new Galera cluster. This should prevent the problems of nodes
failing to join the cluster either on the initial startup or after a node
goes down.
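A rough sketch of that startup sequence, assuming password-less ssh to the backend nodes; the hostnames and the helper are placeholders rather than the framework's real API:

```cpp
#include <cstdlib>
#include <string>

// Runs a command on a backend node over ssh (placeholder helper).
static void run_on_node(const std::string& host, const std::string& cmd)
{
    std::system(("ssh " + host + " 'sudo " + cmd + "'").c_str());
}

int main()
{
    // The first node bootstraps a brand new cluster.
    run_on_node("galera-000", "galera_new_cluster");

    // The remaining nodes join it through a normal service start.
    for (int i = 1; i < 4; i++)
    {
        run_on_node("galera-00" + std::to_string(i), "systemctl start mariadb");
    }

    return 0;
}
```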
When connecting to a node, a database can now be optionally given as a
parameter. This makes testing with different databases easier as the need
to use the explicit functions is removed.
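With the plain MySQL C API the same idea looks like the sketch below; the framework's own connection wrapper may differ, so treat this function as illustrative:

```cpp
#include <mysql.h>
#include <cstdio>

// Opens a connection; `db` may be nullptr, in which case no default database
// is selected and one has to be chosen later with USE or fully qualified names.
MYSQL* connect_node(const char* host, int port,
                    const char* user, const char* password,
                    const char* db)
{
    MYSQL* conn = mysql_init(nullptr);

    if (!mysql_real_connect(conn, host, user, password, db, port, nullptr, 0))
    {
        fprintf(stderr, "Connection failed: %s\n", mysql_error(conn));
        mysql_close(conn);
        return nullptr;
    }

    return conn;
}
```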
As the ssh_node_f function supports full shell syntax, all of the work can
be done with a single ssh connection. This removes the overhead that each
extra ssh connection adds.
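The sketch below shows the general idea of a printf-style helper that hands the whole command to one remote shell; the name and behavior are assumptions modeled on ssh_node_f, not its actual implementation:

```cpp
#include <cstdarg>
#include <cstdio>
#include <cstdlib>
#include <string>

// Formats a printf-style command and runs it over a single ssh connection.
static int run_over_one_connection(const std::string& host, const char* format, ...)
{
    char cmd[4096];

    va_list args;
    va_start(args, format);
    vsnprintf(cmd, sizeof(cmd), format, args);
    va_end(args);

    // The whole string goes to one remote shell, so &&, ;, pipes and
    // redirections all work and only one connection is opened.
    std::string full = "ssh " + host + " \"" + cmd + "\"";
    return std::system(full.c_str());
}

int main()
{
    // Three steps, one connection.
    run_over_one_connection("node-000",
                            "sudo systemctl stop mariadb && "
                            "sudo rm -f /tmp/test-%d.log && "
                            "sudo systemctl start mariadb", 42);
    return 0;
}
```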
A certain templated parameter was only substituted when the VMs were
provisioned. This needs to be handled by the test framework to allow
changes to the Galera cluster configuration.
The startup of the "lesser" nodes was also made parallel to minimize the
startup time.
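A minimal sketch of the parallel startup with std::thread; the hostnames and the start command are placeholders:

```cpp
#include <cstdlib>
#include <string>
#include <thread>
#include <vector>

// Starts the database service on one node (placeholder implementation).
static void start_node(int i)
{
    std::system(("ssh galera-00" + std::to_string(i)
                 + " 'sudo systemctl start mariadb'").c_str());
}

// Node 0 is assumed to be already bootstrapped; the rest start concurrently.
void start_lesser_nodes(int n_nodes)
{
    std::vector<std::thread> threads;

    for (int i = 1; i < n_nodes; i++)
    {
        threads.emplace_back(start_node, i);
    }

    for (auto& t : threads)
    {
        t.join();
    }
}
```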
The galera configurations need pre-processing before they can be
used. Switched to std::endl to automatically flush the output at the end
of each line. This makes it easier to see what is happening when the tests
are run by buildbot. Also removed the extra startup of the servers that
was done right after installing the database.
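The difference is simply that std::endl flushes the stream after writing the newline, for example:

```cpp
#include <iostream>

int main()
{
    std::cout << "Starting node 0" << std::endl;  // newline + flush, visible immediately
    std::cout << "Starting node 1\n";             // newline only, may stay buffered
    return 0;
}
```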
If the replication is broken between the nodes, it is now fixed in
parallel on all nodes instead of doing it one server at a time.
This reduces the time from about 120 seconds to 13 seconds. The time was
measured by running the check_backend test first with all backends broken
and then with the fixed backends, subtracting the time of the latter from the
former.
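A sketch of the parallel repair; the hostnames, credentials and CHANGE MASTER settings are placeholder assumptions:

```cpp
#include <mysql.h>
#include <string>
#include <thread>
#include <vector>

// Re-points one slave at node 0 and restarts replication (placeholder settings).
static void repair_slave(const std::string& host)
{
    MYSQL* conn = mysql_init(nullptr);

    if (mysql_real_connect(conn, host.c_str(), "repl", "repl", nullptr, 3306, nullptr, 0))
    {
        mysql_query(conn, "STOP SLAVE");
        mysql_query(conn,
                    "CHANGE MASTER TO MASTER_HOST='node-000', MASTER_USER='repl', "
                    "MASTER_PASSWORD='repl', MASTER_USE_GTID=slave_pos");
        mysql_query(conn, "START SLAVE");
    }

    mysql_close(conn);
}

// Repairs every slave concurrently, so the total time is roughly that of the
// slowest node instead of the sum of all nodes.
void fix_replication_in_parallel(const std::vector<std::string>& slaves)
{
    std::vector<std::thread> workers;

    for (const auto& host : slaves)
    {
        workers.emplace_back(repair_slave, host);
    }

    for (auto& t : workers)
    {
        t.join();
    }
}
```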
See the script directory for the method. The script to run in the top-level
MaxScale directory is called maxscale-uncrustify.sh, which uses
another script, list-src, from the same directory (so you need to set
your PATH). The uncrustify version was 0.66.
The test methods that take printf style input now have the printf
attribute. This enables format checks, making oversights less likely.
Also fixed any existing errors in the code. Only the one in
test_binlog_fnc.cpp would've had an actual effect.
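With GCC/Clang the attribute looks like the example below; the class and method names here are illustrative, not the test framework's actual ones:

```cpp
#include <cstdarg>
#include <cstdio>

class TestLogger
{
public:
    // For a member function, argument 1 is the implicit `this`, so the format
    // string is argument 2 and the variadic arguments start at 3.
    void tprintf(const char* format, ...) __attribute__((format(printf, 2, 3)));
};

void TestLogger::tprintf(const char* format, ...)
{
    va_list args;
    va_start(args, format);
    vprintf(format, args);
    va_end(args);
}

int main()
{
    TestLogger log;
    log.tprintf("Node %d is %s\n", 0, "running");
    // log.tprintf("Node %d\n", "zero");  // would now produce a -Wformat warning
    return 0;
}
```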
The test should stop MaxScale when it is fixing the replication to prevent
the triggering of the standalone master detection.
Also removed leading spaces from the messages and fixed a possible crash with a
NULL value given to `ssh_node`.
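The intended order of operations, as a sketch with stubbed-out methods standing in for the real framework calls:

```cpp
#include <iostream>

// Stubs standing in for the real framework; only the ordering matters here.
struct TestStub
{
    void stop_maxscale()   { std::cout << "stopping MaxScale" << std::endl; }
    void fix_replication() { std::cout << "repairing replication" << std::endl; }
    void start_maxscale()  { std::cout << "starting MaxScale" << std::endl; }
};

int main()
{
    TestStub test;

    // With MaxScale stopped, the monitor cannot see a single running backend
    // during the repair and trigger standalone master detection.
    test.stop_maxscale();
    test.fix_replication();
    test.start_maxscale();

    return 0;
}
```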
Added test cases that verify that the functionality works as
expected. Also made Mariadb_nodes::change_master less verbose when one of
the nodes is down.
When the test changes the master, it should reset the slave configuration
on the new master. This way no circular replication topologies are formed
and the monitor can be expected to perform correctly.
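A sketch of that reset on the promoted node; the connection details are placeholders:

```cpp
#include <mysql.h>
#include <cstdio>

// Clears the slave configuration on the node that becomes the new master.
int reset_slave_on_new_master(const char* host)
{
    MYSQL* conn = mysql_init(nullptr);

    if (!mysql_real_connect(conn, host, "repl", "repl", nullptr, 3306, nullptr, 0))
    {
        fprintf(stderr, "%s\n", mysql_error(conn));
        mysql_close(conn);
        return 1;
    }

    // RESET SLAVE ALL also forgets the old connection settings, so the
    // promoted node no longer replicates from anyone and no replication
    // circle can form.
    mysql_query(conn, "STOP SLAVE");
    mysql_query(conn, "RESET SLAVE ALL");

    mysql_close(conn);
    return 0;
}
```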
The test checks that failover works even when the master of the monitored
cluster is a slave to an external master. The test also verifies that the
servers do not get unexpected status labels.
Now that mariadbmon supports failover and switchover, a test may
change the master to some other node than node 0.
Consequently, as part of fixing the replication, gtid_slave_pos
of node 0 must be reset as well.
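A sketch of that reset on node 0, assuming an already open connection; the statements are plain MariaDB SQL:

```cpp
#include <mysql.h>

// Resets node 0's replication state so it can be re-pointed cleanly.
void reset_gtid_slave_pos(MYSQL* node0)
{
    // gtid_slave_pos can only be changed while no slave threads are running.
    mysql_query(node0, "STOP ALL SLAVES");
    mysql_query(node0, "RESET SLAVE ALL");

    // Clear the stale position left over from when node 0 was itself a slave.
    mysql_query(node0, "SET GLOBAL gtid_slave_pos = ''");
}
```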