If there have been any changes in the bootstrap servers specified
for the Clustrix monitor, then the persisted connection information
is not used. Otherwise, if a bootstrap server had been changed and
were inaccessible, we might connect to a different cluster than the
intended one.
Persisted information about dynamically detected nodes must be used
only if the bootstrap information has not changed, as otherwise we
risk using information that is no longer valid.
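A minimal sketch of the intended check, with illustrative names
that are not the actual monitor code:

    // Sketch: only trust persisted dynamic-node information if the
    // configured bootstrap servers are identical to the bootstrap
    // servers that were persisted alongside it.
    #include <set>
    #include <string>

    bool can_use_persisted_info(const std::set<std::string>& configured,
                                const std::set<std::string>& persisted)
    {
        // Any difference means the persisted data may describe a
        // different cluster, so it must be discarded.
        return configured == persisted;
    }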
"Once you eliminate the impossible, whatever remains, no matter
how improbable, must be the truth." Arthur Conan Doyle
Since server objects are never destroyed, currently the only
explanation for the crash described in MXS-2446 is that a server
created at runtime could not, immediately after the creation, be
found using its name.
If the nodes change while a multi HTTP GET is in progress, the
corresponding delayed call must be cancelled. Otherwise we would
eventually end up attempting to update the state of the nodes
using the wrong result.
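One way to implement such cancellation is a generation counter that
is checked when the delayed result arrives; a hypothetical sketch,
not the actual monitor code:

    // Sketch: bump a generation counter whenever the node set
    // changes; a delayed callback applies its result only if the
    // generation it captured when it was issued is still current.
    #include <atomic>
    #include <cstdint>

    class NodeUpdater
    {
    public:
        void nodes_changed()
        {
            ++m_generation;    // invalidates in-flight HTTP results
        }

        uint64_t current_generation() const
        {
            return m_generation.load();
        }

        // Called when the multi HTTP GET completes.
        void on_http_result(uint64_t generation_at_start /*, results */)
        {
            if (generation_at_start != m_generation.load())
            {
                return;    // stale result; the nodes changed meanwhile
            }
            // ... update the state of the nodes from the results ...
        }

    private:
        std::atomic<uint64_t> m_generation {0};
    };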
The STL regex implementations have proven to be unreliable on older
systems, and replacing the regex with hand-written code for version
extraction is less likely to break.
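For illustration, the version extraction can be done with plain
scanning; a sketch that assumes a version string beginning with
"major.minor.patch":

    // Sketch: extract the leading "major.minor.patch" numbers from
    // a version string without using <regex>.
    #include <cstdio>
    #include <string>

    bool extract_version(const std::string& version,
                         int* major, int* minor, int* patch)
    {
        return std::sscanf(version.c_str(), "%d.%d.%d",
                           major, minor, patch) == 3;
    }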
If 'dynamic_node_detection' has been set to false, then the
Clustrix monitor will not dynamically figure out what nodes are
available, but will instead use the bootstrap nodes as such.
With 'dynamic_node_detection' being false, the Clustrix monitor
will do no cluster checks, but will simply ping the health port
of each server.
'dynamic_node_detection' specifies whether the Clustrix monitor
should dynamically figure out what nodes there are, or just rely
upon static information.
'health_check_port' specifies the port to be used when performing
the health check ping.
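A hypothetical monitor section using both settings; the section,
server and credential names are illustrative, and 3581 is assumed
to be the default health check port:

    [Clustrix-Monitor]
    type=monitor
    module=clustrixmon
    servers=bootstrap1,bootstrap2
    user=monitor_user
    password=monitor_pw
    dynamic_node_detection=false
    health_check_port=3581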
If the monitor setting "replication_master_ssl" is set to on, any
CHANGE MASTER TO command will set MASTER_SSL=1. If it is set to off
or left unset, MASTER_SSL is left unchanged to match existing
behaviour.
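For example, with the setting on, a generated command could look
like the following; the host, port and credentials are illustrative:

    CHANGE MASTER TO MASTER_HOST='192.168.0.101', MASTER_PORT=3306,
        MASTER_USER='repl', MASTER_PASSWORD='repl_pw',
        MASTER_USE_GTID=slave_pos, MASTER_SSL=1;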
At runtime the Clustrix monitor will save information about
detected nodes to an sqlite3 database, and delete that information
if a node disappears.
At startup, if the monitor fails to connect to a bootstrap node,
it will try to connect to any of the persisted nodes and start
from there.
This means that in general it is sufficient if the Clustrix
monitor can connect to a bootstrap node at the very first startup;
thereafter it will get by even if the bootstrap node disappears
for good.
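The shape of the persisted data might look roughly like the
following; this is a hypothetical schema for illustration only:

    -- Sketch: one row per dynamically detected node; the row is
    -- deleted when the node disappears from the cluster.
    CREATE TABLE IF NOT EXISTS clustrix_nodes (
        id          INTEGER PRIMARY KEY,
        ip          TEXT NOT NULL,
        mysql_port  INTEGER NOT NULL,
        health_port INTEGER NOT NULL
    );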
Information about the detected Clustrix nodes is now stored in a
Clustrix monitor specific sqlite database. It will be used for
bootstrapping the Clustrix monitor, in case a statically defined
bootstrap server is unavailable.
In this context master should be interpreted as "can be read
from and written to".
Marking the nodes as master requires fewer changes in RWS to make
it usable with a Clustrix cluster.
The cluster check can only be made after the monitor has been
started. If it were done when the monitor is configured, it would
at startup run before the services are available, and hence they
would not be populated with the dynamically discovered servers.
Storing all the runtime errors makes it possible to return all of
them via the REST API. MaxAdmin will still only show the latest
error, but MaxCtrl will now show all errors if more than one error
occurs.
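Schematically, the change amounts to keeping a list of errors
instead of a single message; illustrative code, not the actual
implementation:

    // Sketch: accumulate runtime errors instead of overwriting a
    // single latest-error string.
    #include <string>
    #include <vector>

    class RuntimeErrors
    {
    public:
        void add(std::string msg)
        {
            m_errors.push_back(std::move(msg));
        }

        std::string latest() const    // what MaxAdmin shows
        {
            return m_errors.empty() ? "" : m_errors.back();
        }

        const std::vector<std::string>& all() const    // what MaxCtrl shows
        {
            return m_errors;
        }

    private:
        std::vector<std::string> m_errors;
    };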
Because runtime changes are performed one at a time, adding
replication credentials to a mariadbmon which didn't have any would
cause an error to be printed, and the monitor would not start.
This is now fixed by allowing replication_user without
replication_password. This is not an ideal solution, as a
configuration file with only replication_user would be accepted.
Also, when adding the credentials to a monitor at runtime,
replication_user must be given first to avoid the error.
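For example, when adding the credentials at runtime the user would
be set first; the monitor name is illustrative and the exact
maxctrl syntax may differ between versions:

    maxctrl alter monitor MariaDB-Monitor replication_user repl_user
    maxctrl alter monitor MariaDB-Monitor replication_password repl_pw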
All servers are now updated in their own threads simultaneously.
This should reduce the possibility of significantly different
GTIDs being shown for different servers.
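A minimal sketch of the idea, with a hypothetical per-server
update function:

    // Sketch: query every server in its own thread so that the
    // status snapshots are taken as close together as possible.
    #include <thread>
    #include <vector>

    struct MonitorServer { /* server state */ };

    // Hypothetical per-server update: connect, query, store.
    void update_server(MonitorServer* srv)
    {
        // ... query the server and refresh *srv ...
        (void)srv;
    }

    void update_all(const std::vector<MonitorServer*>& servers)
    {
        std::vector<std::thread> threads;
        threads.reserve(servers.size());

        for (MonitorServer* srv : servers)
        {
            threads.emplace_back(update_server, srv);
        }

        for (std::thread& t : threads)
        {
            t.join();    // wait until every server has been updated
        }
    }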
The functions are now in MonitorServer. Disk space can only be
checked during specific ticks. If a server misses a tick (e.g.
because it is down), it will be checked after
disk_space_check_interval has passed.
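The timing rule could look roughly like this; a sketch, not the
actual implementation:

    // Sketch: a disk space check is due once the configured
    // interval has elapsed since the last check.
    #include <chrono>

    using Clock = std::chrono::steady_clock;

    bool disk_space_check_due(Clock::time_point last_check,
                              std::chrono::milliseconds interval)
    {
        return Clock::now() - last_check >= interval;
    }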
This fixes some situations where MaxAdmin/MaxCtrl would block and wait
until a monitor operation or tick is complete. This also fixes a deadlock
caused by calling monitor diagnostics inside a monitor script.
Concurrency is enabled by adding one mutex per server object to protect
array-like fields from concurrent reading/writing.
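Schematically, the per-server locking might look like this; the
names and fields are illustrative:

    // Sketch: each server carries its own mutex, so readers such
    // as the diagnostics code and the monitor thread do not need
    // a single global lock.
    #include <mutex>
    #include <string>
    #include <vector>

    struct MonitorServer
    {
        std::mutex lock;    // protects the fields below
        std::vector<std::string> slave_status;    // array-like field
    };

    std::vector<std::string> read_slave_status(MonitorServer& srv)
    {
        std::lock_guard<std::mutex> guard(srv.lock);
        return srv.slave_status;    // copy out under the lock
    }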
Previously, runtime monitor modifications could directly alter
monitor fields, which could leave the text-form parameters and
reality out of sync. Also, the configure function was not called
for the entire monitor object, only for the module implementation.
Now, all modifications go through the overridden configure
function, which calls the base-class function. As most
configuration changes are given in text form, this removes the
need for specific setters. The only exceptions are the server
add/remove operations, which must modify the text-form serverlist.
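The pattern as a simplified sketch; the class names and the
parameter type are illustrative, not the actual MaxScale API:

    // Sketch: the derived monitor's configure() first lets the
    // base class apply the common, text-form parameters, then
    // applies the module-specific ones.
    #include <map>
    #include <string>

    using Parameters = std::map<std::string, std::string>;

    class Monitor
    {
    public:
        virtual ~Monitor() = default;

        virtual bool configure(const Parameters& params)
        {
            // ... apply the common monitor settings ...
            (void)params;
            return true;
        }
    };

    class MariaDBMonitor : public Monitor
    {
    public:
        bool configure(const Parameters& params) override
        {
            if (!Monitor::configure(params))    // base class first
            {
                return false;
            }
            // ... apply the module-specific settings ...
            return true;
        }
    };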