The CDC connector can be built directly into the core testing library for
testing purposes. This way we remove an unnecessary dependency on an
external library. This commit fixes the linkage failure of the cdc_datatypes test.
- Start 4 threads where each thread sits in a loop and performs
20% updates and 80% selects. Each thread has a table of its own.
- The main thread executes the following in a loop.
- Perform a switchover from the current master to the next node (i.e. the
current node + 1, modulo the number of nodes).
- Keep on doing that for 1.5 minutes.
The expectation is that the switchover will succeed, that is, after the
operation there will be a new master.
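Roughly, the load pattern and the switchover loop could look like the C++ sketch
below; run_query(), switchover() and the pacing between switchovers are
placeholders for whatever the test framework actually provides (the real
switchover is issued through maxadmin to the monitor).

    #include <atomic>
    #include <chrono>
    #include <random>
    #include <string>
    #include <thread>
    #include <vector>

    // Placeholder helpers: in the real test these would go through the test
    // framework (a client connection for the queries, maxadmin for the
    // monitor's switchover command).
    static void run_query(const std::string& /*sql*/) { /* send over a client connection */ }
    static void switchover(int /*new_master*/, int /*old_master*/) { /* issued via maxadmin */ }

    static std::atomic<bool> keep_running{true};

    // Each worker owns one table and issues roughly 20% updates, 80% selects.
    static void worker(int id)
    {
        std::mt19937 gen(id);
        std::uniform_int_distribution<int> dist(0, 99);
        const std::string table = "test.t" + std::to_string(id);

        while (keep_running)
        {
            if (dist(gen) < 20)
            {
                run_query("UPDATE " + table + " SET x = x + 1");
            }
            else
            {
                run_query("SELECT * FROM " + table + " LIMIT 1");
            }
        }
    }

    int main()
    {
        std::vector<std::thread> workers;
        for (int i = 0; i < 4; i++)
        {
            workers.emplace_back(worker, i);
        }

        const int n_nodes = 4;
        int master = 0;
        auto end = std::chrono::steady_clock::now() + std::chrono::seconds(90);

        while (std::chrono::steady_clock::now() < end)
        {
            int next = (master + 1) % n_nodes;      // simply the next node, wrapping around
            switchover(next, master);
            master = next;
            std::this_thread::sleep_for(std::chrono::seconds(5));
        }

        keep_running = false;
        for (auto& t : workers)
        {
            t.join();
        }
        return 0;
    }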
- Start 4 threads where each thread sits in a loop and performs
20% updates and 80% selects. Each thread has a table of its own.
- The main thread executes the following in a loop.
- Take down the current master and wait a while (failover assumed
to happen).
- Bring the old master node back up and wait a while.
- Keep on doing that for 1.5 minutes.
At the end check that:
- There is one 'Master'.
- The other nodes are either
- 'Slave' or
- 'Running', in which case it is verified that this is because the node
could not be rejoined.
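A rough sketch of that final check; the status strings and the two helpers are
stand-ins for whatever the real test reads from the monitor (e.g. maxadmin
"list servers") and for how it decides why a node was left out.

    #include <cassert>
    #include <string>

    // Placeholder stand-ins: canned states instead of the monitor's answers.
    static std::string server_status(int node) { return node == 0 ? "Master, Running" : "Slave, Running"; }
    static bool rejoin_was_impossible(int /*node*/) { return true; }

    int main()
    {
        const int n_nodes = 4;
        int masters = 0;

        for (int i = 0; i < n_nodes; i++)
        {
            std::string status = server_status(i);

            if (status.find("Master") != std::string::npos)
            {
                masters++;
            }
            else if (status.find("Slave") != std::string::npos)
            {
                // OK: the node was rejoined and is replicating again.
            }
            else
            {
                // A plain 'Running' node is acceptable only when it could
                // not be rejoined.
                assert(rejoin_was_impossible(i));
            }
        }

        assert(masters == 1);   // exactly one 'Master' at the end
        return 0;
    }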
The test uses standard setup (1xMaster, 3xSlaves).
1. Shut down the master (server 1), check that auto-failover promotes
a new master.
2. Stop MaxScale.
3. Start server 1 and add some events to it so that it can no longer rejoin
the cluster.
4. Start MaxScale, check that server 1 does not join.
5. Set the current master to replicate from server 1, turning it into a relay
master.
6. Check that server 1 is master, all others are slaves (due to auto-rejoin).
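Steps 3 and 5 boil down to statements along the following lines (the helper,
the node numbers, host and credentials are placeholders; the current master is
assumed to be server 2 after the failover).

    #include <string>

    // Placeholder: runs a statement directly against the given backend node,
    // bypassing MaxScale (which is stopped at this point in the test).
    static void query_node(int /*node*/, const std::string& /*sql*/) { /* ... */ }

    int main()
    {
        // Step 3: write directly to server 1 so that its GTID history diverges
        // from the rest of the cluster and it can no longer rejoin.
        query_node(1, "CREATE TABLE IF NOT EXISTS test.diverge (id INT)");
        query_node(1, "INSERT INTO test.diverge VALUES (1)");

        // Step 5: point the current master (assumed to be server 2 after the
        // failover) at server 1, turning it into a relay master. Host and
        // credentials are placeholders.
        query_node(2, "CHANGE MASTER TO MASTER_HOST='server1-host', MASTER_PORT=3306, "
                      "MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_USE_GTID=slave_pos");
        query_node(2, "START SLAVE");
        return 0;
    }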
A subset of the checks done at connection creation time need to be done at
query routing time. This guarantees that the connection is closed if the
server no longer qualifies as a valid candidate.
Added a test case that checks that a change in the replication topology
correctly breaks the connection.
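As a sketch, such a test case might look like the following; the two query
helpers are placeholders for the framework's routed connection through
MaxScale and its direct connection to a backend node.

    #include <string>

    // Placeholder stand-ins for the test framework.
    static bool query_via_maxscale(const std::string& /*sql*/) { return true; /* false on error */ }
    static void query_node(int /*node*/, const std::string& /*sql*/) { /* ... */ }

    int main()
    {
        // While the backend is a valid candidate, queries on the routed
        // connection succeed.
        bool before = query_via_maxscale("SELECT 1");

        // Change the replication topology: the slave serving this session
        // stops replicating from the current master.
        query_node(2, "STOP SLAVE");

        // With routing-time checks in place, the next query on the same
        // connection should fail because the server no longer qualifies.
        bool after = query_via_maxscale("SELECT 1");

        return (before && !after) ? 0 : 1;      // test verdict
    }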
The cdc_datatypes test did not use the correct connector; instead, it used
a stale version of the MaxScale CDC Connector. The connector should
be treated as an external dependency and thus cloned at configuration
time.
- 1 master, 3 slaves
- "stop slave" on server 2
- "disable" log-bin on server 3
- set multi-source replication on server 4
- take down master
- no slave should be promoted
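The per-server operations map to roughly the following statements (MariaDB
syntax; the helper, host and credentials are placeholders), leaving no slave
that qualifies for promotion.

    #include <string>

    // Placeholder: runs a statement directly against the given backend node.
    static void query_node(int /*node*/, const std::string& /*sql*/) { /* ... */ }

    int main()
    {
        // Server 2: slave threads stopped, so it is not replicating at all.
        query_node(2, "STOP SLAVE");

        // Server 3: binary logging disabled. log_bin is not a dynamic variable,
        // so the test edits the server configuration and restarts the node
        // instead of running a statement.

        // Server 4: an extra named replication connection makes it a
        // multi-source slave. Host and credentials are placeholders.
        query_node(4, "CHANGE MASTER 'extra' TO MASTER_HOST='external-host', MASTER_PORT=3306, "
                      "MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_USE_GTID=slave_pos");
        query_node(4, "START SLAVE 'extra'");
        return 0;
    }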
- 1 master, 3 slaves
- stop maxscale so it does not autorejoin later on
- stop & reset slave on servers 3 & 4
- add data to server 4
- restart maxscale, check that server 3 is rejoined but not server 4
- manually set server 1 to replicate from server 4, creating a relay master
- check that servers 2 & 3 are redirected, making server 1 just a slave
- switchover master to server 1, check that it's the master
Also, moved some common functions into their own files. These functions
are used by multiple tests.
auto_failover=true
auto_rejoin=false
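In the MaxScale configuration these parameters go into the monitor section; a
minimal sketch of such a section, with the server list and credentials as
placeholders:

    [MySQL-Monitor]
    type=monitor
    module=mysqlmon
    servers=server1,server2,server3,server4
    user=maxuser
    password=maxpwd
    auto_failover=true
    auto_rejoin=false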
This test tests the following:
- Regular master-slave setup
- Create a table, insert some data
- Sync all slaves
- Stop a slave
- Insert some more data
- Sync remaining slaves
- Stop the master
- Expect the failover mechanism to pick a new master (server2)
- Bring up the slave
- Perform a switchover from server2 to server4
- Should fail
Currently it does fail, but only due to a timeout.
[mysqlmon] MASTER_GTID_WAIT() timed out on slave 'server4'.
There should be some check that would ensure that the failure happens
faster than that.
This test tests the following:
- Regular master-slave setup
- Create a table, insert some data
- Sync all slaves
- Stop a slave
- Insert some more data
- Sync remaining slaves
- Stop the master
- Expect the failover mechanism to pick a new master
- Bring up the slave
- Expect the slave to be rejoined
- The test starts with the usual setup of 1 master and 3 slaves.
- Then the master is taken down and it is checked that the failover
mechanism promotes one of the slaves to master.
- This is repeated until only a single master is left (no slaves remain).
The same test now has two versions. In the automatic version failover
begins automatically. In the manual version failover is started with
maxadmin. The tests are otherwise identical.
Added some code to detect server states in a consistent manner and created
the test. The multi-source replication appeared to cause problems for the
test system; this still needs to be resolved.
Added --force flags to most direct `mysql` calls to prevent errors from
stopping the processing of remaining commands.
The test is composed of a few parts.
1: Test that failover happens on master failure.
2: Test that a server with slave sql thread stopped is not promoted.
3: Test that a server with log_slave_updates=1 is promoted before others.
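Parts 2 and 3 are set up roughly as follows; the query helper is a placeholder
for a direct connection to the node.

    #include <string>

    // Placeholder: runs a statement directly against the given backend node.
    static void query_node(int /*node*/, const std::string& /*sql*/) { /* ... */ }

    int main()
    {
        // Part 2: stop only the SQL thread on one slave; per the test, this
        // server must not be promoted.
        query_node(2, "STOP SLAVE SQL_THREAD");

        // Part 3: log_slave_updates is not dynamic, so the preferred candidate
        // has log_slave_updates=1 in its server configuration from the start.
        return 0;
    }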
The test suite now compiles Jansson instead of letting the CDC connector
do it. This way the connector can be used as a very simple static library
with a dependency on the Jansson library.
The cdc-connector did not build on Ubuntu Trusty due to the wrong order of
linker flags. Moving the crypt and crypto linkage to be after
cdc-connector appears to fix it.
The test wasn't being built because it is not part of the test suite. The
executable should be built even though it is not added to the test suite.
Changed the management script to only add the configuration and added a
call to it at the start of the test.
The MaxCtrl test suite is now a part of the regression test suite. The
cluster tests are expected to fail as that is yet to be implemented.
Also fixed the return value of TestConnections::ssh_maxscale.