Gtid_Slave_Pos may contain multiple triplets even with single-source
replication if the domain has changed at some point. For failover, we
only need the values for the current domain, so the GTID parsing now
accepts an optional domain parameter. The Gtid class still stores only
one triplet of values.
When parsing the SHOW SLAVE STATUS result, Gtid_IO_Pos is parsed first.
The domain found there is then used to read the matching triplet from
Gtid_Slave_Pos.
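A minimal sketch of such parsing, not the actual MaxScale implementation: the triplet list is split on commas and, when a domain is given, only the matching triplet is accepted.
```cpp
#include <cstdint>
#include <sstream>
#include <string>

struct Gtid
{
    int64_t  domain = -1;
    int64_t  server_id = -1;
    uint64_t sequence = 0;
};

// Parse "domain-server_id-sequence[,...]". If 'domain' is >= 0, return the
// triplet of that domain; otherwise accept the first well-formed triplet.
bool parse_gtid(const std::string& text, Gtid* out, int64_t domain = -1)
{
    std::istringstream triplets(text);
    std::string triplet;
    while (std::getline(triplets, triplet, ','))
    {
        Gtid gtid;
        char dash1 = 0, dash2 = 0;
        std::istringstream parser(triplet);
        if (parser >> gtid.domain >> dash1 >> gtid.server_id >> dash2 >> gtid.sequence
            && dash1 == '-' && dash2 == '-'
            && (domain < 0 || gtid.domain == domain))
        {
            *out = gtid;
            return true;
        }
    }
    return false;
}
```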
When selecting the new master server, Gtid_IO_Pos is checked to
select the slave with the latest event in its relay log. If there is a
tie, the slave that has processed most events wins.
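A simplified comparison capturing this rule might look as follows; it uses GTID sequence numbers only and assumes the applied position stands in for "processed events", ignoring domains and slave health, which a real selection would also need to consider.
```cpp
#include <cstdint>

struct Candidate
{
    uint64_t io_pos_sequence;     // latest event received into the relay log
    uint64_t slave_pos_sequence;  // latest event applied by the SQL thread
};

// Return true if 'a' is a better new-master candidate than 'b'.
bool is_better_candidate(const Candidate& a, const Candidate& b)
{
    if (a.io_pos_sequence != b.io_pos_sequence)
    {
        return a.io_pos_sequence > b.io_pos_sequence;   // most events received
    }
    return a.slave_pos_sequence > b.slave_pos_sequence; // tie: most processed
}
```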
It's possible that the winning slave has unprocessed events. In
this case, failover waits for the slave to complete processing the
log. The maximum wait is defined by the monitor parameter
"failover_timeout", which defaults to 90 seconds. If the time runs out,
the failover ends in failure.
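A rough sketch of the waiting step, assuming a predicate that stands in for "the relay log has been fully processed" and using the 90 second default described above:
```cpp
#include <chrono>
#include <functional>
#include <thread>

// Poll until the relay log is processed or the timeout expires.
bool wait_for_relay_log(const std::function<bool()>& relay_log_processed,
                        std::chrono::seconds timeout = std::chrono::seconds(90))
{
    auto deadline = std::chrono::steady_clock::now() + timeout;
    while (!relay_log_processed())
    {
        if (std::chrono::steady_clock::now() >= deadline)
        {
            return false;   // time ran out, the failover fails
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    return true;
}
```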
The Gtid struct was separated to its own definition to handle gtid:s
easier.
From the documentation:
* `never`: When there is an active transaction, no data will be returned
from the cache, but all requests will always be sent to the backend.
The cache will be populated inside _explicitly_ read-only transactions.
Inside transactions that are not explicitly read-only, the cache will
be populated _until_ the first non-SELECT statement.
* `read_only_transactions`: The cache will be used and populated inside
_explicitly_ read-only transactions. Inside transactions that are not
explicitly read-only, the cache will be populated, but not used
_until_ the first non-SELECT statement.
* `all_transactions`: The cache will be used and populated inside
_explicitly_ read-only transactions. Inside transactions that are not
explicitly read-only, the cache will be used and populated _until_ the
first non-SELECT statement.
When deciding whether the cache should be consulted or not,
the value of the configuration parameter 'cache_in_transaction'
is taken into account as well.
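For illustration, a cache filter section along the following lines would select the middle behaviour described above; the parameter name is spelled here as in the text, so the exact name should be checked against the cache filter documentation:
```
[Cache]
type=filter
module=cache
cache_in_transaction=read_only_transactions
```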
The SlaveStatus info is now in a separate class, although it's
still embedded in the MYSQL_SERVER_INFO class. Both classes now
use std::string instead of char pointers.
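Roughly, the split looks like the sketch below; the field selection is illustrative, not the full definitions.
```cpp
#include <string>

// Illustrative fields only; not the actual MaxScale definitions.
class SlaveStatus
{
public:
    std::string master_host;        // std::string instead of char*
    int         master_port = 0;
    bool        slave_io_running = false;
    bool        slave_sql_running = false;
    std::string gtid_io_pos;
};

class MYSQL_SERVER_INFO
{
public:
    std::string version_string;
    SlaveStatus slave_status;       // the separate class is still embedded
};
```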
As the passive parameter is only used by the failover and the failover can
only be initiated by the monitor, there is no real need to synchronize the
reads and writes of this parameter.
As all runtime changes are protected by the runtime lock, only partial
reads are of concern. For the supported platforms, this is not a practical
problem and it only confuses the reader when other variables are modified
without atomic operations.
The master failure was assumed to be the only master-related event for
each monitoring loop. If the master was switched by an external actor, the
monitor tracking would be out of sync.
The helper function provides map-like access to row values. This is used
to retrieve the values for all MariaDB 10.0+ versions, as the returned
results differ between 10.1 and 10.2.
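The helper can be pictured roughly as follows (a sketch, not the actual implementation): column names from the result set are mapped to indices once, after which values can be fetched by name regardless of which columns a given server version returns.
```cpp
#include <map>
#include <string>
#include <vector>

class QueryResultRow
{
public:
    QueryResultRow(const std::vector<std::string>& columns,
                   const std::vector<std::string>& values)
    {
        for (size_t i = 0; i < columns.size() && i < values.size(); i++)
        {
            m_values[columns[i]] = values[i];
        }
    }

    // Return the value of 'column', or an empty string if the column is
    // missing from this server version's output.
    std::string get(const std::string& column) const
    {
        auto it = m_values.find(column);
        return it != m_values.end() ? it->second : std::string();
    }

private:
    std::map<std::string, std::string> m_values;
};
```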
Using timestamps to detect whether MaxScale was active or passive can
cause problems if multiple events happen at the same time. This can be
avoided by separating events into actively observed and passively observed
events. This clarifies the logic by removing the ambiguity of timestamps.
As the monitoring threads are separate from the worker threads, it is
prudent to use atomic operations to modify and read the state of
MaxScale. This imposes a happens-before relation between MaxScale
being set into passive mode and events being classified as passively
observed.
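In essence (a sketch rather than the actual code): the passive flag is an atomic, so a monitor thread storing it and a worker thread loading it are ordered by a happens-before edge.
```cpp
#include <atomic>

std::atomic<bool> maxscale_passive{false};

void set_passive(bool passive)
{
    maxscale_passive.store(passive, std::memory_order_release);
}

bool currently_passive()
{
    // Events observed after this load sees 'true' are classified as
    // passively observed.
    return maxscale_passive.load(std::memory_order_acquire);
}
```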
The master failure can now be verified by checking whether the slaves are
still connected to the master. If the slaves do not receive any events from
the master within a configurable time limit, the connections are considered
to be down.
Added two parameters for controlling whether the check is done and for how
long the monitor waits before doing the failover.
The slave heartbeat count and period are collected from the SHOW ALL
SLAVES STATUS output. This, in addition to the relay log position, is used
to calculate the point in time when a slave has last interacted with the
master.
By using this timestamp, the monitor can enforce a minimum "timeout" for
the master before a failover is performed.
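One way to picture the bookkeeping this enables (an illustrative sketch, not the monitor's actual calculation): remember the last seen heartbeat count and relay log position and refresh the "last heard from the master" timestamp whenever either one advances.
```cpp
#include <cstdint>
#include <ctime>

struct MasterContactTracker
{
    int64_t  heartbeats = -1;
    uint64_t relay_log_pos = 0;
    time_t   latest_contact = 0;

    void update(int64_t slave_received_heartbeats, uint64_t current_relay_log_pos)
    {
        if (slave_received_heartbeats != heartbeats
            || current_relay_log_pos != relay_log_pos)
        {
            heartbeats = slave_received_heartbeats;
            relay_log_pos = current_relay_log_pos;
            latest_contact = time(nullptr);
        }
    }

    // True if the master has been silent, from this slave's point of view,
    // for at least 'timeout' seconds.
    bool master_silent_for(time_t timeout) const
    {
        return latest_contact != 0 && time(nullptr) - latest_contact >= timeout;
    }
};
```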
Now SHOW ALL SLAVES STATUS reports new fields:
- Retried_transactions
- Max_relay_log_size
- Executed_log_entries
- Slave_received_heartbeats
- Slave_heartbeat_period
- Gtid_Slave_Pos
As the Avro C API depends on the Jansson library, the build scripts must
build Jansson as well. This is not optimal, as the Jansson version then
needs to be updated in two places.
Currently the binlog server does not send these event types to slaves:
- MARIADB10_START_ENCRYPTION_EVENT
- IGNORABLE_EVENT
It also skips events with LOG_EVENT_IGNORABLE_F flag.
This modification allows sending events with that flag.
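A sketch of the change in the filtering logic; the constant values are placeholders rather than the real binlog definitions.
```cpp
#include <cstdint>

const uint8_t  IGNORABLE_EVENT = 1;                   // placeholder value
const uint8_t  MARIADB10_START_ENCRYPTION_EVENT = 2;  // placeholder value
const uint16_t LOG_EVENT_IGNORABLE_F = 0x80;          // placeholder value

bool skip_event(uint8_t event_type, uint16_t flags)
{
    if (event_type == MARIADB10_START_ENCRYPTION_EVENT
        || event_type == IGNORABLE_EVENT)
    {
        return true;    // these event types are still not sent to slaves
    }

    // Previously: if (flags & LOG_EVENT_IGNORABLE_F) { return true; }
    (void)flags;        // the flag alone no longer causes an event to be skipped
    return false;
}
```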
The test wasn't built because it is not part of the test suite. The
executable should be built even though it is not added to the test suite.
Changed the management script to only add the configuration and added a
call to it at the start of the test.
Moved mon_process_failover() from monitor.cc to mysql_mon.cc. Renamed
some functions and variables related to previous failover functionality
to avoid confusion.
The get_server_info function takes the monitor handle and a database and
returns the corresponding MYSQL_SERVER_INFO struct. This hides a part of
the actual implementation of the info struct from the monitor code,
allowing future refactoring to be done. It also makes the code a bit more
readable.
The values in the MYSQL_SERVER_INFO struct can now be updated with the
update_slave_status function.
Also moved the number of configured and running slave configurations into
the info struct. This removes the need to pass output parameters.
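In outline (the container and the field set here are illustrative, not the actual MaxScale definitions): callers ask the monitor handle for the info struct of a server and never touch the storage directly.
```cpp
#include <unordered_map>

struct MYSQL_SERVER_INFO
{
    bool slave_configured = false;
    int  n_slaves_configured = 0;   // counts moved into the info struct
    int  n_slaves_running = 0;
};

struct MXS_MONITORED_SERVER;        // opaque server handle

struct MYSQL_MONITOR
{
    std::unordered_map<const MXS_MONITORED_SERVER*, MYSQL_SERVER_INFO> server_info;
};

MYSQL_SERVER_INFO* get_server_info(MYSQL_MONITOR* handle,
                                   const MXS_MONITORED_SERVER* db)
{
    // operator[] default-constructs an entry on first access
    return &handle->server_info[db];
}
```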
If something is SELECTed that should be cached for some users, but not
for the current user, the cached entry is nevertheless updated. That way
the cached data will always be the most recently fetched value, and this
behaviour can also be used for explicitly updating a cache entry.
The MYSQL_SERVER_INFO struct is updated first and then the server status
is updated. This allows the function to be called without it affecting the
server state.
The crash happens if the slave is not configured for replication or the
connection is broken while the results are read. Adding the missing
return value checks fixes it.
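The missing checks are of the following general shape, using the standard MySQL C API (the query and parsing details are illustrative):
```cpp
#include <mysql.h>

// Both the query and the result fetch can fail (e.g. the server is not
// configured as a slave, or the connection drops while the result is read),
// so both return values must be tested before the rows are used.
bool query_slave_status(MYSQL* conn)
{
    if (mysql_query(conn, "SHOW ALL SLAVES STATUS") != 0)
    {
        return false;                   // query failed, connection may be broken
    }

    MYSQL_RES* result = mysql_store_result(conn);
    if (result == NULL)
    {
        return false;                   // no result set, nothing to parse
    }

    MYSQL_ROW row = mysql_fetch_row(result);
    bool have_row = (row != NULL);      // empty result: not configured as a slave
    // ... parse the row here ...
    mysql_free_result(result);
    return have_row;
}
```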