In some configurations the access rights on the /etc/my.cnf.d/* files are
very restrictive and the server cannot read its configuration. This leads
to broken settings and makes it impossible to set up replication during
the test.
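
A sketch of a possible workaround, assuming the framework offers some
"run this command over ssh on node i" primitive (such as the ssh_node_f()
helper mentioned further down); the function and parameter names here are
illustrative only:

    #include <functional>
    #include <string>

    // Hypothetical sketch: make the drop-in config files readable on every
    // backend node so the server can pick up its settings again.
    static void fix_cnf_permissions(int n_nodes,
                                    const std::function<void(int, const std::string&)>& ssh_on_node)
    {
        for (int i = 0; i < n_nodes; i++)
        {
            ssh_on_node(i, "chmod 644 /etc/my.cnf.d/*");
        }
    }
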
Because the SSL certificates were copied to the backends incorrectly and
maxscale.cnf contained a wrong setting, it was not possible to activate
backend SSL.
Additionally, one more MaxScale restart was added to the 'sql_queries'
test to reproduce the SSL bug in 2.4.1, and ssl.cnf was tuned in order to
reproduce the SSL bug.
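
For reference, backend SSL is enabled per server in maxscale.cnf along
these lines; the certificate paths below are placeholders and depend on
where the test copies the certificates to:

    [server1]
    type=server
    address=192.168.0.11
    port=3306
    protocol=MySQLBackend
    ssl=required
    ssl_cert=/path/to/certs/client-cert.pem
    ssl_key=/path/to/certs/client-key.pem
    ssl_ca_cert=/path/to/certs/ca.pem
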
By moving the setup of the test environment from the constructor to a
separate setup() function, it is possible to introduce virtual functions
and make it easier to do things differently depending on whether the
backend is MariaDB, Galera or Clustrix.
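
A simplified, hypothetical sketch of the resulting structure (only the
setup() function and the MariaDB/Galera/Clustrix distinction come from the
description above, all class names and bodies are illustrative):

    #include <iostream>

    // Base class: common environment setup lives in a virtual setup()
    // instead of the constructor, so derived classes can override parts of it.
    class Backend_nodes
    {
    public:
        virtual ~Backend_nodes() = default;

        virtual void setup()
        {
            std::cout << "reading network configuration" << std::endl;
        }

        virtual void start_replication() = 0;
    };

    class Mariadb_backend : public Backend_nodes
    {
    public:
        void start_replication() override
        {
            std::cout << "CHANGE MASTER TO ... on the slaves" << std::endl;
        }
    };

    class Galera_backend : public Backend_nodes
    {
    public:
        void setup() override
        {
            Backend_nodes::setup();
            std::cout << "pre-process Galera configuration" << std::endl;
        }

        void start_replication() override
        {
            std::cout << "bootstrap the cluster, then start the joiners" << std::endl;
        }
    };
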
Implementations of check_replication() and start_replication() for
Clustrix allow fix_replication() to be used for Clustrix nodes as well,
without extra checks.
Also, the nodes are now checked several times after a restart, to wait for
nodes that are not yet running right after the server daemon restarts.
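
The waiting could look roughly like this; the attempt count, the interval
and the shape of the health check are all illustrative:

    #include <unistd.h>
    #include <functional>

    // Hypothetical helper: retry a per-node health check a few times, because
    // a node may not be ready immediately after the server daemon restarts.
    static bool wait_for_node(const std::function<bool()>& node_is_up,
                              int attempts = 10, int interval_sec = 5)
    {
        for (int i = 0; i < attempts; i++)
        {
            if (node_is_up())
            {
                return true;
            }
            sleep(interval_sec);
        }
        return false;
    }
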
Basic tests can be executed with a 15-machine master/slave backend. These
tests have the label BIG_REPL_BACKEND, and the default template was
modified to support the big backend. The tests are temporarily labeled
UNSTABLE to prevent them from being executed nightly.
For the big tests, maxscale.cnf is automatically generated for any number
of nodes.
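
Generating the server sections for an arbitrary number of nodes only needs
a loop; this sketch shows the idea rather than the actual generator used
by the tests:

    #include <cstdio>

    // Hypothetical sketch: write one [serverN] section per backend node
    // into the generated maxscale.cnf.
    static void print_server_sections(FILE* cnf, int n_nodes, const char** ips, int port)
    {
        for (int i = 0; i < n_nodes; i++)
        {
            fprintf(cnf,
                    "[server%d]\n"
                    "type=server\n"
                    "address=%s\n"
                    "port=%d\n"
                    "protocol=MySQLBackend\n\n",
                    i + 1, ips[i], port);
        }
    }
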
maxscale-system-test was changed so that it controls the test environment
by itself. Every test checks which machines are running, compares that
with the list of machines it needs, and starts new VMs if they are missing
from the list of running machines.
The tests execute MDBCI commands, so the MDBCI executable should be in the
PATH.
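
A hypothetical sketch of how a test could bring up a missing VM by
shelling out to MDBCI; the configuration and node names are placeholders,
and the exact mdbci invocation depends on the local setup:

    #include <cstdio>
    #include <cstdlib>
    #include <string>

    // Hypothetical sketch: start one node of an MDBCI configuration if it
    // is not running. Assumes the mdbci executable is in PATH.
    static bool ensure_vm_running(const std::string& config_name, const std::string& node_name)
    {
        std::string cmd = "mdbci up " + config_name + "/" + node_name;
        if (system(cmd.c_str()) != 0)
        {
            fprintf(stderr, "Failed to start VM %s\n", node_name.c_str());
            return false;
        }
        return true;
    }
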
The Galera documentation tells us to use the galera_new_cluster command to
start a new Galera cluster. This should prevent the problems of nodes
failing to join the cluster either on the initial startup or after a node
goes down.
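
In the tests this amounts to bootstrapping the cluster on the first node
and then starting MariaDB on the rest so they join it; a sketch assuming
systemd-managed servers and some "run this over ssh on node i" primitive:

    #include <functional>
    #include <string>

    // Hypothetical sketch: node 0 bootstraps a new cluster, the others join it.
    static void start_galera_cluster(int n_nodes,
                                     const std::function<void(int, const std::string&)>& ssh_on_node)
    {
        ssh_on_node(0, "galera_new_cluster");
        for (int i = 1; i < n_nodes; i++)
        {
            ssh_on_node(i, "systemctl start mariadb");
        }
    }
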
When connecting to a node, a database can now be optionally given as a
parameter. This makes testing with different databases easier as the need
to use the explicit functions is removed.
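
The connection helper could, for example, take the database as a defaulted
parameter; the name and signature below are illustrative, not the
framework's actual API:

    #include <mysql.h>
    #include <string>

    // Hypothetical sketch: connect to a node, optionally selecting a database.
    static MYSQL* open_node_conn(const std::string& host, int port,
                                 const std::string& user, const std::string& password,
                                 const std::string& db = "")   // empty string: no default database
    {
        MYSQL* conn = mysql_init(nullptr);
        if (conn && !mysql_real_connect(conn, host.c_str(), user.c_str(), password.c_str(),
                                        db.empty() ? nullptr : db.c_str(), port, nullptr, 0))
        {
            mysql_close(conn);
            return nullptr;
        }
        return conn;
    }
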
As the ssh_node_f function supports full shell syntax, all of the work can
be done with a single ssh connection. This removes the overhead that each
extra ssh connection adds.
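
For example, several preparation steps can be chained into one command
string; this is an illustrative fragment only, and the exact ssh_node_f()
signature (node, sudo flag, printf-style format) is an assumption:

    // One ssh connection instead of three: the commands are chained with &&
    // and executed on the remote node by a single ssh_node_f() call.
    repl->ssh_node_f(0, true,
                     "mkdir -p /etc/my.cnf.d && "
                     "cp /tmp/extra.cnf /etc/my.cnf.d/ && "
                     "systemctl restart mariadb");
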
A certain templated parameter was only substituted when the VMs were
provisioned. This needs to be handled by the test framework to allow
changes to the Galera cluster configuration.
Also made the startup of the "lesser" nodes parallel to minimize the
startup time.
The galera configurations need pre-processing before they can be
used. Switched to std::endl to automatically flush the output at the end
of each line. This makes it easier to see what is happening when the tests
are run by buildbot. Also removed the extra startup of the servers that
was done right after installing the database.
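
The difference is simply that std::endl flushes the stream, while a bare
newline may sit in the buffer until much later:

    #include <iostream>

    int main()
    {
        // Reaches the buildbot log immediately: newline plus flush.
        std::cout << "Starting node 1" << std::endl;
        // Newline only; output may stay buffered for a while.
        std::cout << "Starting node 2" << "\n";
        return 0;
    }
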
If the replication is broken between the nodes, it is now fixed in
parallel on all nodes instead of doing it one server at a time.
This reduces the time from about 120 seconds to 13 seconds. The time was
measured by running the check_backend test first with all backends broken
and then with the backends fixed, subtracting the time of the latter from
the former.
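
One way to do the repair in parallel is to launch one task per node and
wait for all of them; a sketch assuming a single-node fix_node()-style
function exists:

    #include <future>
    #include <vector>

    // Hypothetical sketch: repair every node concurrently instead of one at a time.
    static bool fix_all_nodes(int n_nodes, bool (*fix_node)(int node))
    {
        std::vector<std::future<bool>> tasks;
        for (int i = 0; i < n_nodes; i++)
        {
            tasks.push_back(std::async(std::launch::async, fix_node, i));
        }

        bool ok = true;
        for (auto& t : tasks)
        {
            ok = t.get() && ok;   // wait for every node, remember any failure
        }
        return ok;
    }
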
See the script directory for the method. The script to run in the
top-level MaxScale directory is called maxscale-uncrustify.sh, which uses
another script, list-src, from the same directory (so you need to set
your PATH). The uncrustify version was 0.66.