Basic tests can be executed with a 15-machine master/slave backend. These tests have the label BIG_REPL_BACKEND. The default template was modified to support the big backend. The tests are temporarily labeled UNSTABLE to prevent their nightly execution.
For the big tests, maxscale.cnf is automatically generated for any number of nodes.
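A sketch of how the generation could loop over the nodes; the server-section format matches maxscale.cnf, but the generator function itself is illustrative, not the framework's real template code:

    #include <fstream>
    #include <string>
    #include <vector>

    // Illustrative generator: writes one [serverN] section per backend node.
    // The real template handling in the test framework is more involved.
    void write_server_sections(std::ofstream& cnf,
                               const std::vector<std::string>& node_ips)
    {
        for (size_t i = 0; i < node_ips.size(); i++)
        {
            std::string name = "server" + std::to_string(i + 1);
            cnf << "[" << name << "]\n"
                << "type=server\n"
                << "address=" << node_ips[i] << "\n"
                << "port=3306\n"
                << "protocol=MySQLBackend\n\n";
        }
    }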
maxscale-system-test was changed so that it controls the test environment by itself.
Every test checks which machines are running, compares the result with the list
of machines it needs, and starts new VMs if any are missing from the running set.
The tests execute MDBCI commands, so the MDBCI executable must be in the PATH.
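A minimal sketch of that check, assuming a hypothetical helper that receives the set of running VMs (for example from parsing MDBCI output) and shells out to MDBCI; the command and names here are illustrative, not the framework's real API:

    #include <cstdlib>
    #include <set>
    #include <string>

    // Hypothetical helper: start any needed VMs that are not already running.
    void ensure_nodes_running(const std::set<std::string>& needed,
                              const std::set<std::string>& running)
    {
        for (const std::string& node : needed)
        {
            if (running.count(node) == 0)
            {
                // MDBCI must be in the PATH for this to work
                std::string cmd = "mdbci up " + node;
                system(cmd.c_str());
            }
        }
    }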
When connecting to a node, a database can now optionally be given as a
parameter. This makes testing with different databases easier, as the need
to select the database with explicit function calls is removed.
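A sketch of what the optional parameter could look like, assuming a helper built on the MySQL C API; the function name and defaults are illustrative:

    #include <mysql.h>
    #include <string>

    // Illustrative connection helper: 'db' defaults to empty, in which case
    // no default database is selected for the connection.
    MYSQL* open_conn(const std::string& host, int port,
                     const std::string& user, const std::string& password,
                     const std::string& db = "")
    {
        MYSQL* conn = mysql_init(nullptr);
        const char* db_arg = db.empty() ? nullptr : db.c_str();
        return mysql_real_connect(conn, host.c_str(), user.c_str(),
                                  password.c_str(), db_arg, port,
                                  nullptr, 0);
    }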
If replication is broken between the nodes, it is now fixed on all nodes in
parallel instead of one server at a time.
This reduces the repair time from about 120 seconds to 13 seconds. The time
was measured by running the check_backend test first with all backends broken
and then with the fixed backends, subtracting the time of the latter from the
former.
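A minimal sketch of the parallel approach; the per-node repair function here is a hypothetical stand-in for the framework's real repair logic:

    #include <thread>
    #include <vector>

    // Hypothetical per-node repair; the real logic lives in the framework
    void fix_replication_on(int node)
    {
        // ... reset and restart replication on this node ...
    }

    // Repair all nodes concurrently instead of one at a time
    void fix_all_nodes(int node_count)
    {
        std::vector<std::thread> workers;
        for (int i = 0; i < node_count; i++)
        {
            workers.emplace_back(fix_replication_on, i);
        }
        for (std::thread& t : workers)
        {
            t.join();
        }
    }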
See the script directory for the method. The script to run in the top-level
MaxScale directory is called maxscale-uncrustify.sh, which uses
another script, list-src, from the same directory (so you need to set
your PATH). The uncrustify version used was 0.66.
The test checks that failover works even when the master of the monitored
cluster is a slave to an external master. The test also verifies that the
servers do not get unexpected status labels.
The users are now created on the slaves as well as on the master. This
allows static binlog coordinates to be used on the slaves, and the
replication initialization boils down to a set of SQL queries.
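For illustration, the initialization could reduce to queries along these lines; the host, credentials, and coordinates are placeholders, not the test framework's real values:

    #include <mysql.h>

    // Illustrative slave initialization with static binlog coordinates
    void init_slave(MYSQL* slave)
    {
        mysql_query(slave, "STOP SLAVE");
        mysql_query(slave,
                    "CHANGE MASTER TO MASTER_HOST = 'master-host', "
                    "MASTER_PORT = 3306, "
                    "MASTER_USER = 'repl', MASTER_PASSWORD = 'repl', "
                    "MASTER_LOG_FILE = 'mar-bin.000001', MASTER_LOG_POS = 4");
        mysql_query(slave, "START SLAVE");
    }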
* refactor test backend fixing
* restore compatibility with 5.5 for backend restore
* remove backend configuration scripts
* adapt the test backend restore function for the 2.2 test framework
The test is composed of a few parts.
1: Test that failover happens on master failure.
2: Test that a server with the slave SQL thread stopped is not promoted.
3: Test that a server with log_slave_updates=1 is promoted before others.
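As a sketch of part 1, assuming hypothetical helpers for blocking a node and reading the monitor's view of a server; the helper names are illustrative, not the framework's real API:

    #include <string>
    #include <unistd.h>

    // Hypothetical helpers, named for illustration only
    void block_node(int node);
    void unblock_node(int node);
    std::string get_server_status(const std::string& name);
    void report_failure(const std::string& msg);

    void test_failover_on_master_failure()
    {
        block_node(0);   // simulate master failure
        sleep(10);       // give the monitor time to react

        // Expect one of the remaining servers to be promoted
        if (get_server_status("server2").find("Master") == std::string::npos)
        {
            report_failure("Failover did not promote a new master");
        }

        unblock_node(0);
    }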
The ssh_node function now supports printf-style arguments. This is used to
simplify command execution on the nodes.
Currently, in addition to its old usage, it is used to drop extra
databases when replication is started.
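A sketch of how a printf-style wrapper could be built on top of the existing function; the parameters of the real ssh_node are simplified here:

    #include <cstdarg>
    #include <cstdio>
    #include <string>

    int ssh_node(int node, const std::string& command);  // existing function, simplified

    // printf-style variant: formats the command before passing it on
    int ssh_node_f(int node, const char* format, ...)
    {
        va_list args;
        va_start(args, format);
        char command[1024];
        vsnprintf(command, sizeof(command), format, args);
        va_end(args);
        return ssh_node(node, command);
    }

This makes one-off commands shorter, e.g. dropping a database becomes a single call with the database name as a format argument.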
When a backend is waiting for a response but no statement is stored for
the session, the buffer where the stored statement is copied is not
modified. This means that it needs to be initialized to a NULL value.
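In code this amounts to initializing the target pointer before the fetch is attempted; a minimal sketch where GWBUF is MaxScale's buffer type and the accessor is a hypothetical stand-in for the real routine:

    #include <cstddef>

    struct GWBUF;  // MaxScale's buffer type

    // Hypothetical accessor: copies the stored statement into *out if one
    // exists, and leaves *out untouched otherwise.
    bool fetch_stored_stmt(void* session, GWBUF** out);

    void route_stored(void* session)
    {
        GWBUF* stmt = NULL;  // must start as NULL: the fetch may not modify it
        fetch_stored_stmt(session, &stmt);

        if (stmt != NULL)
        {
            // ... route the stored statement ...
        }
    }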
Added a test that checks that the behavior works as expected even with
persistent connections. A second test reproduces the crash by executing
parallel SET commands while slaves are blocked.
There is still a behavioral problem in readwritesplit. If a session
command is being executed and it fails on a slave, an error is sent to the
client. In this case it would not be necessary to close the session if the
master is still alive.