- 1 master, 3 slaves
- "stop slave" on server 2
- "disable" log-bin on server 3
- set multi-source replication on server 4
- take down master
- no slave should be promoted
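A minimal sketch of how a test might construct this scenario; the framework helpers used here (execute_query, repl->nodes, repl->IP, repl->port, stop_node) and the replication credentials are assumptions, not the actual test code:

```cpp
#include "testconnections.h"

int main(int argc, char** argv)
{
    TestConnections test(argc, argv);
    test.repl->connect();

    // Server 2: no running slave connection.
    execute_query(test.repl->nodes[1], "STOP SLAVE");

    // Server 3: binary logging disabled (needs a restart with log-bin
    // turned off, not shown here).

    // Server 4: a second, named replication connection makes it a
    // multi-source slave and thus an invalid promotion target.
    execute_query(test.repl->nodes[3],
                  "CHANGE MASTER 'extra' TO MASTER_HOST='%s', MASTER_PORT=%d, "
                  "MASTER_USER='repl', MASTER_PASSWORD='repl', "
                  "MASTER_USE_GTID=slave_pos",
                  test.repl->IP[1], test.repl->port[1]);
    execute_query(test.repl->nodes[3], "START SLAVE 'extra'");

    // Take down the master; the monitor should refuse to promote anyone.
    test.repl->stop_node(0);

    return test.global_result;
}
```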
Verifying that the requested replication type matches the one currently in
use allows the node reset to automatically set up the correct replication
type. This means that a test which calls
`Mariadb_nodes::require_gtid(true)` before initializing the
TestConnections class is given a replication setup that uses GTID
coordinates instead of file-and-position coordinates.
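For illustration, the call order this implies in a test (a sketch; the exact surrounding code is assumed):

```cpp
#include "testconnections.h"

int main(int argc, char** argv)
{
    // Request GTID coordinates before the framework initializes: the
    // node reset then sets up GTID-based replication automatically.
    Mariadb_nodes::require_gtid(true);
    TestConnections test(argc, argv);

    // ... test body runs against a GTID-based replication setup ...

    return test.global_result;
}
```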
The users are now created on the slaves as well as on the master. This
allows static binlog coordinates to be used on the slaves, so the
replication initialization boils down to a set of SQL queries.
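With the users present on every node, pointing a slave at the master reduces to something like the following sketch, where the connection handle (slave), the replication user, and the static coordinates are all placeholders:

```cpp
// Sketch: configure a slave with fixed binlog coordinates. Because the
// users exist on the slaves too, no per-slave coordinate lookup is
// needed; position 4 is the start of the first binlog file.
execute_query(slave,
              "CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d, "
              "MASTER_USER='repl', MASTER_PASSWORD='repl', "
              "MASTER_LOG_FILE='mar-bin.000001', MASTER_LOG_POS=4",
              master_ip, master_port);
execute_query(slave, "START SLAVE");
```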
- 1 master, 3 slaves
- stop MaxScale so that it does not auto-rejoin servers later on
- stop & reset slave on servers 3 & 4
- add data to server 4
- restart MaxScale, check that server 3 is rejoined but not server 4
- manually set server 1 to replicate from server 4, creating a relay master (see the sketch after this list)
- check that servers 2 & 3 are redirected, making server 1 just a slave
- switchover master to server 1, check that it's the master
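The relay-master step might look roughly like this inside the test (execute_query and the repl arrays are assumed framework helpers):

```cpp
// Sketch: point server 1 at server 4, turning server 1 into a relay
// master until the monitor redirects servers 2 and 3.
execute_query(test.repl->nodes[0],
              "CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d, "
              "MASTER_USER='repl', MASTER_PASSWORD='repl', "
              "MASTER_USE_GTID=current_pos",
              test.repl->IP[3], test.repl->port[3]);
execute_query(test.repl->nodes[0], "START SLAVE");
```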
Also, moved some common functions into their own files. These functions
are used by multiple tests.
* refactor test backend fixing
* restore compatibility with 5.5 for backend restore
* remove backend configuration scripts
* adapt the test backend restore function to the 2.2 test framework
auto_failover=true
auto_rejoin=false
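These two settings correspond to a mysqlmon monitor section along these lines; the section name, server list, and credentials are placeholders:

```ini
[MySQL-Monitor]
type=monitor
module=mysqlmon
servers=server1,server2,server3,server4
user=monitor_user
password=monitor_pw
auto_failover=true
auto_rejoin=false
```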
This test covers the following:
- Regular master-slave setup
- Create a table, insert some data
- Sync all slaves
- Stop a slave
- Insert some more data
- Sync remaining slaves
- Stop the master
- Expect the failover mechanism to pick a new master (server2)
- Bring up the slave
- Perform a switchover from server2 to server4
- Expect the switchover to fail
Currently it does fail, but only due to a timeout:

`[mysqlmon] MASTER_GTID_WAIT() timed out on slave 'server4'.`

There should be a check that makes the failure happen faster than
waiting for the timeout.
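One conceivable pre-check, sketched purely as an illustration of the idea rather than the actual mysqlmon code, would be to verify that the promotion target has a slave connection configured before waiting on MASTER_GTID_WAIT():

```cpp
#include <mysql.h>

// Illustrative only: fail fast when the promotion target has no slave
// connection configured, instead of letting MASTER_GTID_WAIT() run
// into its timeout.
static bool target_has_slave_connection(MYSQL* target)
{
    bool configured = false;

    if (mysql_query(target, "SHOW SLAVE STATUS") == 0)
    {
        if (MYSQL_RES* res = mysql_store_result(target))
        {
            configured = mysql_num_rows(res) > 0;
            mysql_free_result(res);
        }
    }

    return configured;
}
```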
This test covers the following:
- Regular master-slave setup
- Create a table, insert some data
- Sync all slaves
- Stop a slave
- Insert some more data
- Sync remaining slaves
- Stop the master
- Expect the failover mechanism to pick a new master
- Bring up the slave
- Expect the slave to be rejoined
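A sketch of how the rejoin expectation might be asserted, assuming the stopped slave was server4 and that server2 was promoted (find_field and add_result are framework helpers; their use here is illustrative):

```cpp
// Sketch: after restarting the old slave, check that auto_rejoin has
// redirected it to the new master. Assumes server2 (index 1) was the
// server promoted by the failover.
char master_host[256] = "";
find_field(test.repl->nodes[3], "SHOW SLAVE STATUS", "Master_Host", master_host);
test.add_result(strcmp(master_host, test.repl->IP[1]) != 0,
                "Slave should replicate from the new master");
```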
Previously, the rejoin would only be run on servers with a connected slave
IO thread. This patch runs the rejoin also on slaves which cannot connect
to a downed old master when the configured master hostname or port differs
from the current cluster master server.
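The extended criterion, expressed as a self-contained sketch with illustrative names rather than the actual mysqlmon source:

```cpp
#include <string>

struct SlaveStatus
{
    bool io_connected;        // slave IO thread connected to its master
    std::string master_host;  // configured master host
    int master_port;          // configured master port
};

// A slave is eligible for rejoin if its IO thread is connected, or if
// its configured master differs from the current cluster master (i.e.
// it still points at the downed old master it cannot reach).
bool eligible_for_rejoin(const SlaveStatus& ss,
                         const std::string& master_host, int master_port)
{
    bool points_elsewhere = ss.master_host != master_host
                            || ss.master_port != master_port;
    return ss.io_connected || points_elsewhere;
}
```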
With this change, resetting the slave no longer produces:

`ERROR 1198 (HY000) at line 28: This operation cannot be performed as you have a running slave ''; run STOP SLAVE '' first`
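The fix boils down to stopping replication before resetting it; a minimal sketch using MariaDB's multi-source statements (the connection handle is assumed):

```cpp
// Stop all named slave connections first, then reset; resetting a
// running slave is what raises ERROR 1198.
execute_query(slave, "STOP ALL SLAVES");
execute_query(slave, "RESET SLAVE ALL");
```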
- The test starts with the usual setup of 1 master and 3 slaves.
- Then the master is taken down and the test checks that the failover
  mechanism promotes one of the slaves to master.
- This is continued until only a single master is left (no slaves), as
  sketched below.
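In outline, the test loop looks something like the following sketch; stop_node is an assumed framework helper and find_master is a hypothetical helper that queries the monitor for the promoted server:

```cpp
// Sketch: kill the current master repeatedly and expect failover to
// promote a remaining slave each time, until one server is left.
int master = 0;
for (int remaining = 4; remaining > 1; remaining--)
{
    test.repl->stop_node(master);
    sleep(10);                     // give the monitor time to fail over
    master = find_master(test);    // hypothetical: query monitor state
    test.add_result(master < 0, "Failover should have promoted a slave");
}
```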
The backend DCBs didn't have a valid service pointer whereas the client
DCBs had one. The necessity of the pointer can be questioned as a similar
pointer is located in the session.
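Since the session already stores the pointer, code that needs the service can reach it through the DCB's session; a sketch where the field names follow MaxScale conventions but are assumptions here:

```cpp
#include <maxscale/dcb.h>
#include <maxscale/session.h>

// Sketch: resolve the service through the session instead of a
// per-DCB pointer; works for both client and backend DCBs.
static SERVICE* service_of(const DCB* dcb)
{
    return dcb->session ? dcb->session->service : NULL;
}
```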
Combined functions that differed only in their default values into
one. Removed unused functions. Used std::string where possible to make the
functions easier to use. Hid code that isn't used externally.
The same test now has two versions. In the automatic version failover
begins automatically. In the manual version failover is started with
maxadmin. The tests are otherwise identical.
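In the manual version the failover is triggered with a module command along these lines (the monitor name is a placeholder):

```
maxadmin call command mysqlmon failover MySQL-Monitor
```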
The service for a dummy session will be NULL. If authentication fails for
a dummy session, then no service level actions should be taken.
Only the binlogrouter can trigger authentication failure with a dummy
session as it creates connections before the service itself has started.
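A sketch of the guard this implies; the function name and surrounding logic are illustrative, not the actual MaxScale source:

```cpp
#include <maxscale/dcb.h>
#include <maxscale/session.h>

// Skip service-level handling when authentication fails for a dummy
// session: a dummy session has no service attached, so there is nothing
// to act on. Only the binlogrouter hits this path, as it opens
// connections before its service has started.
void handle_auth_failure(DCB* dcb)
{
    if (dcb->session->service == NULL)
    {
        return;  // dummy session: no service-level actions
    }

    // ... normal handling, e.g. refreshing service users ...
}
```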
The current failover code is better at detecting abnormal situations and
refuses to act in one.
In the test environment, the slaves can have binlog events far ahead of
the master, which causes the failover to stop. To fix this, the slave
binlogs are now deleted when a test begins.
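A sketch of that cleanup, assuming the usual framework helpers (execute_query, repl->nodes, repl->N) and that node 0 is the master:

```cpp
// Delete the binlogs on every slave at test start so that no slave has
// events ahead of the master; RESET MASTER removes the binlog files.
for (int i = 1; i < test.repl->N; i++)
{
    execute_query(test.repl->nodes[i], "STOP SLAVE");
    execute_query(test.repl->nodes[i], "RESET MASTER");
    execute_query(test.repl->nodes[i], "START SLAVE");
}
```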