The test did not use the wait_for_monitor function to sync with the
monitor. Using it speeds up the test greatly by removing unnecessary
sleeps.
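For illustration, a sync point in a test would look roughly like this
(a sketch; the TestConnections members and the exact signature of
wait_for_monitor are assumptions):

    // Sketch; TestConnections member names and the exact signature
    // of wait_for_monitor() are assumptions.
    void test_main(TestConnections& test)
    {
        test.repl->block_node(0);            // bring the master down
        test.maxscales->wait_for_monitor();  // sync with the next
                                             // monitor tick instead
                                             // of a fixed sleep()
        // ... verify that a new master was promoted ...
    }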
Also reduced the amount of data inserted into the cluster. There's no
real need to test with large amounts of data, as this is only a
functional test.
When a read-only transaction failed due to a connection error, no
message was logged. Added an info-level message for the case where a
backend connection is closed before the session is in the correct
state, and a debug assertion that the router session is never already
closed when the handleError method is called.
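Roughly, the change amounts to the following (a sketch only; the
signature and member names are assumptions, while mxb_assert and
MXS_INFO are MaxScale's assert and logging macros):

    // Sketch; the signature and member names are assumptions.
    void RWSplitSession::handleError(GWBUF* errmsg, DCB* backend_dcb)
    {
        // Debug assertion: the router session must never already be
        // closed when handleError is called.
        mxb_assert(!m_closed);

        if (!m_session_ready)
        {
            // Info-level message for a backend closing before the
            // session is in the correct state.
            MXS_INFO("Backend connection closed before the session "
                     "was in the correct state.");
        }

        // ... existing error handling, now also logging the
        // connection error of a failed read-only transaction ...
    }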
The removal of connections and the slave status updating are now
separated into their own function. As the MariaDBServer object now
contains the updated slave connections, keeping track of removed
connections is no longer required.
The two cases are now separated. In switchover, the promotion and
demotion targets can swap connections between each other without worry.
In failover, the two connection lists must be merged semi-intelligently.
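A merge of that kind could look roughly like this (a self-contained
sketch; SlaveStatus and its fields are assumed names, not the actual
monitor types):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Assumed minimal representation of one slave connection.
    struct SlaveStatus
    {
        std::string master_host;
        int         master_port;
    };

    // Failover: merge the demotion target's slave connections into
    // the promotion target's own list, skipping connections the
    // promotion target already has.
    std::vector<SlaveStatus> merge_slave_conns(
        const std::vector<SlaveStatus>& promotion_conns,
        const std::vector<SlaveStatus>& demotion_conns)
    {
        std::vector<SlaveStatus> merged = promotion_conns;
        for (const auto& conn : demotion_conns)
        {
            bool duplicate = std::any_of(
                merged.begin(), merged.end(),
                [&conn](const SlaveStatus& c) {
                    return c.master_host == conn.master_host
                           && c.master_port == conn.master_port;
                });
            if (!duplicate)
            {
                merged.push_back(conn);
            }
        }
        return merged;
    }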
The slave connections of the two servers are now saved to the
operation descriptor object at the start of the operation. This allows
the slave status to be updated during the operation.
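In other words, the descriptor might carry something like this (a
sketch; the type and field names are assumptions):

    #include <string>
    #include <vector>

    class MariaDBServer;  // the monitor's server object

    // Assumed minimal representation of one slave connection.
    struct SlaveStatus
    {
        std::string master_host;
        int         master_port;
    };

    // Sketch of the operation descriptor; names are assumptions.
    struct ServerOperation
    {
        MariaDBServer* promotion_target;
        MariaDBServer* demotion_target;
        // Slave connections captured when the operation starts, so
        // the slave status can be updated while the operation runs.
        std::vector<SlaveStatus> promotion_conns;
        std::vector<SlaveStatus> demotion_conns;
    };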
If the test fails, there's no point in continuing with the load
generation, as it only serves to slow things down. In a few cases the
test caused a std::bad_alloc to be thrown, which prematurely stopped
the ctest run.
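For example, the load generator can bail out as soon as the test has
failed (a sketch; TestConnections::ok() returning the accumulated
test result and the insert_batch helper are assumptions):

    // Sketch; test.ok() and insert_batch() are assumptions.
    void generate_load(TestConnections& test, int n_batches)
    {
        for (int i = 0; i < n_batches && test.ok(); i++)
        {
            insert_batch(test);  // hypothetical helper doing inserts
        }
    }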
Now the test program will do the following (a sketch of the flow
appears after the list):
1) Write to each node in a Galera cluster and verify that the data
ends up in the slave.
2) At the end of 1) execute STOP SLAVE and START SLAVE to check that
replication can be stopped and started again (won't work unless
each node has the same server_id and value for @@log_bin_basename).
3) Block the node BLR is replicating from and expect it to connect
   to the next configured master and replication to continue working.
   Do that for all nodes.
4) Stop and restart MaxScale and expect 3) to still work. That checks
   that BLR saves all necessary information in master.ini and is
   capable of reading it back.
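A rough sketch of that flow (all framework and helper names are
assumptions, not the actual test code):

    // Sketch of the test flow; all names are assumptions.
    void run(TestConnections& test)
    {
        // 1) Write to each Galera node, verify the data reaches
        //    the slave.
        for (int node = 0; node < test.galera->N; node++)
        {
            insert_on_node(test, node);   // hypothetical helper
            verify_on_slave(test, node);  // hypothetical helper
        }

        // 2) Check that replication can be stopped and restarted.
        execute_on_slave(test, "STOP SLAVE");
        execute_on_slave(test, "START SLAVE");

        // 3) Block each node in turn; BLR should fail over to the
        //    next configured master and replication should keep
        //    working.
        for (int node = 0; node < test.galera->N; node++)
        {
            test.galera->block_node(node);
            verify_replication_works(test);  // hypothetical helper
            test.galera->unblock_node(node);
        }

        // 4) Restart MaxScale; BLR must restore its state from
        //    master.ini and 3) must still work.
        test.maxscales->restart_maxscale();
        verify_replication_works(test);
    }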
It should be possible to START SLAVE and STOP SLAVE irrespective
of which Galera node updates are made to. That will be the case if
@@log_slave_updates is on and each node in the Galera cluster has the
same server_id. Otherwise it will fail with the current incarnation
of BLR.
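For illustration, the precondition can be checked on each node with
SELECT @@server_id, @@log_slave_updates (a sketch using the MySQL C
API; host and credentials are placeholders):

    #include <mysql.h>
    #include <stdio.h>

    // Print the settings that must match on every Galera node for
    // STOP SLAVE/START SLAVE to work regardless of which node is
    // written to. Host, user and password are placeholders.
    void print_replication_settings(const char* host)
    {
        MYSQL* conn = mysql_init(NULL);
        if (mysql_real_connect(conn, host, "user", "pass",
                               NULL, 3306, NULL, 0))
        {
            if (mysql_query(conn,
                            "SELECT @@server_id, @@log_slave_updates")
                == 0)
            {
                MYSQL_RES* res = mysql_store_result(conn);
                MYSQL_ROW row = mysql_fetch_row(res);
                // @@log_slave_updates must be 1 and @@server_id
                // identical on all nodes.
                printf("%s: server_id=%s log_slave_updates=%s\n",
                       host, row[0], row[1]);
                mysql_free_result(res);
            }
        }
        mysql_close(conn);
    }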