Currently, the state change explanations are only added to mariadbmon. They
are less relevant for Galera clusters, since the clusters themselves explain
why they change their states, but they should still be added to make the
clusters easier to analyze.
The most commonly encountered event that is not explained is the loss of the
Slave status. This usually happens because either the IO thread or the SQL
thread has stopped. Printing the states of both threads, along with the
latest error, should hint at what caused the outage.
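As an illustration only, not the monitor's actual code, the relevant fields
could be read with the MySQL C API roughly as follows; the helper name and
the exact fields printed are assumptions:

    #include <mysql.h>
    #include <cstdio>
    #include <map>
    #include <string>

    // Hypothetical helper: read the slave thread states and the latest error
    // from SHOW SLAVE STATUS and print them when the Slave status is lost.
    void log_slave_status_loss(MYSQL* conn)
    {
        if (mysql_query(conn, "SHOW SLAVE STATUS") != 0)
        {
            fprintf(stderr, "Query failed: %s\n", mysql_error(conn));
            return;
        }

        MYSQL_RES* res = mysql_store_result(conn);
        if (!res)
        {
            return;
        }

        std::map<std::string, std::string> row_data;
        unsigned int nfields = mysql_num_fields(res);
        MYSQL_FIELD* fields = mysql_fetch_fields(res);

        if (MYSQL_ROW row = mysql_fetch_row(res))
        {
            for (unsigned int i = 0; i < nfields; i++)
            {
                row_data[fields[i].name] = row[i] ? row[i] : "";
            }
        }

        mysql_free_result(res);

        // The thread states and the latest error usually explain the outage.
        fprintf(stderr, "Slave_IO_Running: %s, Slave_SQL_Running: %s, Last_Error: %s\n",
                row_data["Slave_IO_Running"].c_str(),
                row_data["Slave_SQL_Running"].c_str(),
                row_data["Last_Error"].c_str());
    }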
In 2.5, this information can be exposed through the REST API, where the
monitors can add extra information to the server JSON.
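A minimal sketch of what attaching such data could look like with the
jansson library; the function name and the JSON keys are assumptions, not
the actual monitor interface:

    #include <jansson.h>

    // Hypothetical example: attach replication diagnostics to a server's
    // JSON representation. The key names used here are assumptions.
    json_t* add_monitor_diagnostics(json_t* server_json)
    {
        json_t* diag = json_object();
        json_object_set_new(diag, "slave_io_running", json_string("No"));
        json_object_set_new(diag, "slave_sql_running", json_string("Yes"));
        json_object_set_new(diag, "last_error", json_string("<latest replication error>"));

        json_object_set_new(server_json, "monitor_diagnostics", diag);
        return server_json;
    }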
During switchover and failover, server events are altered. The ALTER
EVENT command automatically changes the event's character set and collation
to those of the connection running the query, which may cause the event to
become invalid.
This was fixed by setting the connection's character set and collation to
the values stored in the event definition just before altering it.
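A minimal sketch of the idea, assuming the event's character set and
collation have already been read from its definition; the helper and its
parameters are hypothetical:

    #include <mysql.h>
    #include <string>

    // Hypothetical sketch: set the session character set and collation to the
    // values stored in the event definition before running ALTER EVENT, so the
    // automatic rewrite done by ALTER EVENT keeps the original values.
    bool alter_event_preserving_charset(MYSQL* conn,
                                        const std::string& charset,    // event's character_set_client
                                        const std::string& collation,  // event's collation_connection
                                        const std::string& alter_sql)  // the ALTER EVENT statement
    {
        std::string set_stmt = "SET character_set_client = '" + charset
                             + "', collation_connection = '" + collation + "'";

        return mysql_query(conn, set_stmt.c_str()) == 0
            && mysql_query(conn, alter_sql.c_str()) == 0;
    }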
The current code assumes that the variable names are in lowercase. This
fixes the Galera monitoring that was broken by commit
43068d20b43a34d5f3b4b4db0fcce701b3cd7cad. In addition, lowercase names make
comparisons with std::string easier.
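A small sketch of the kind of normalization this implies; the helper name is
an assumption:

    #include <algorithm>
    #include <cctype>
    #include <string>

    // Convert a variable name to lowercase so that later std::string
    // comparisons do not depend on the case the server happened to use.
    std::string to_lower(std::string name)
    {
        std::transform(name.begin(), name.end(), name.begin(),
                       [](unsigned char c) { return std::tolower(c); });
        return name;
    }

    // Example: to_lower("WSREP_LOCAL_STATE") == "wsrep_local_state"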
The diagnostics_json call could access the std::unordered_map at the same
time it was being updated by the monitoring thread. This is undefined
behavior, which in the case of MXS-3059 manifested as a segfault.
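A hypothetical sketch of how such a data race can be avoided by guarding the
map with a mutex; the actual fix in MaxScale may be structured differently:

    #include <mutex>
    #include <string>
    #include <unordered_map>

    // Guard the shared map so the REST API thread and the monitoring thread
    // never touch it concurrently.
    class ServerInfo
    {
    public:
        void update(const std::string& key, const std::string& value)
        {
            std::lock_guard<std::mutex> guard(m_lock);   // monitoring thread
            m_values[key] = value;
        }

        std::unordered_map<std::string, std::string> snapshot() const
        {
            std::lock_guard<std::mutex> guard(m_lock);   // diagnostics_json caller
            return m_values;                             // copy taken under the lock
        }

    private:
        mutable std::mutex m_lock;
        std::unordered_map<std::string, std::string> m_values;
    };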
The mon_ping_or_connect_to_db function resets the MYSQL handle, which caused
the error message to be lost. Returning a new enumeration value for
authentication errors solves this problem.
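A hypothetical sketch of the idea; the enumeration and its value names are
assumptions, not MaxScale's actual definitions:

    // A distinct return value for authentication failures lets the caller
    // report the right reason even though the MYSQL handle (and with it the
    // error message) has already been reset.
    enum class ConnectResult
    {
        OLDCONN_OK,     // existing connection still works
        NEWCONN_OK,     // new connection created successfully
        REFUSED,        // connection refused (network error, server down)
        ACCESS_DENIED,  // authentication error, reported separately
        TIMEOUT         // connection attempt timed out
    };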
The password values are now masked with asterisks. This tells whether a
password is set or not but it does not expose any information about the
password itself.
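A trivial sketch of the masking, with a hypothetical helper name:

    #include <string>

    // Replace a configured password with a fixed mask so diagnostics output
    // only reveals whether a password has been set at all.
    std::string mask_password(const std::string& password)
    {
        return password.empty() ? "" : "*****";
    }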
Galeramon will now only use the larger cluster if a split-brain situation
occurs. If the clusters are of equal size, the one whose UUID compares less
is used. This guarantees that all MaxScale instances that see the same
picture end up using the same cluster.
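A hypothetical sketch of the selection rule described above:

    #include <string>

    // Prefer the larger cluster; on equal sizes pick the one whose UUID
    // compares less, so every MaxScale that sees the same clusters makes the
    // same choice.
    struct Cluster
    {
        std::string uuid;
        int         size;
    };

    const Cluster& choose_cluster(const Cluster& a, const Cluster& b)
    {
        if (a.size != b.size)
        {
            return a.size > b.size ? a : b;
        }
        return a.uuid < b.uuid ? a : b;
    }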