Once the monitor has been able to connect to a Clustrix node and
obtain the set of Clustrix nodes, it will primarily use those nodes
when looking for a Clustrix node to be used as the "hub".
With this change it is sufficient (but perhaps unwise) to provide
a single bootstrap node in the configuration file.
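For instance, a minimal configuration along the following lines
should now be enough. The names, address and credentials below are
illustrative, and the module name clustrixmon is an assumption:

    [Bootstrap-Server-1]
    type=server
    address=10.2.224.101
    port=3306
    protocol=mariadbbackend

    [Clustrix-Monitor]
    type=monitor
    module=clustrixmon
    servers=Bootstrap-Server-1
    user=monitor_user
    password=monitor_password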
Some other rearrangements and renamings of functions have also been
made.
Whitespace in section names has been deprecated since 2.2 and will
be rejected in 2.4. Consequently, the configuration files of the
system tests must be updated.
The convention needs to be that a runtime object creating other
objects incorporates its own name in the name of any object it
creates. Together with the '@@' prefix, that ensures that the
created name will be reasonably globally unique.
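For illustration, a sketch of how such a name could be put together;
the helper and the monitor name are hypothetical, not actual MaxScale
code:

    #include <string>

    // '@@' marks the name as dynamically created, and embedding the
    // name of the creating object makes the result reasonably
    // globally unique.
    std::string created_name(const std::string& creator, const std::string& object)
    {
        return "@@" + creator + ":" + object;
    }

    // created_name("TheMonitor", "node-2") == "@@TheMonitor:node-2"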
Names starting with '@@' can no longer be used in configuration
files. Subsequent commits will prevent such names from being used
when objects are created dynamically.
The QueryResult object remembers whether a conversion failed. This
makes checking for errors more convenient, as just one check per row
is required. The conversion functions always return a valid value,
even when the conversion fails.
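A self-contained sketch of the idea; this is not MaxScale's actual
QueryResult interface, the class and method names are illustrative:

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    class Result
    {
    public:
        explicit Result(std::vector<std::string> values)
            : m_values(std::move(values)) {}

        bool next_row()
        {
            m_error = false;     // the error state is reset per row
            return static_cast<size_t>(++m_row) < m_values.size();
        }

        // Never fails loudly: returns a valid value (0 on failure)
        // and remembers that the conversion failed.
        int64_t get_int()
        {
            try { return std::stoll(m_values[m_row]); }
            catch (...) { m_error = true; return 0; }
        }

        bool error() const { return m_error; }

    private:
        std::vector<std::string> m_values;
        int m_row = -1;
        bool m_error = false;
    };

    int main()
    {
        Result r({"1", "not-a-number", "3"});
        while (r.next_row())
        {
            int64_t v = r.get_int();
            if (r.error())       // just one check per row
                continue;
            std::cout << v << "\n";
        }
    }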
The class MonitorManager contains monitor-related functions that should not
be called from modules. MonitorManager can access private fields and methods
of the monitor.
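A minimal sketch of the arrangement; the members shown are
illustrative, not MaxScale's actual declarations:

    class Monitor
    {
    public:
        // The public interface is all that modules may use.
        virtual ~Monitor() = default;

    private:
        // Only MonitorManager can reach the private parts.
        friend class MonitorManager;
        bool m_running = false;
    };

    class MonitorManager
    {
    public:
        static void start(Monitor* pMonitor) { pMonitor->m_running = true; }
        static void stop(Monitor* pMonitor)  { pMonitor->m_running = false; }
    };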
If a server cannot be used, close the associated MYSQL connection.
Further, when an existing connection is used, verify that the server
is still part of the quorum.
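A sketch of the handling, assuming the MySQL C API; the quorum check
against system.membership is a stand-in for whatever test the monitor
actually performs:

    #include <mysql.h>
    #include <string>

    // Hypothetical check: the node must still appear in
    // system.membership.
    bool still_in_quorum(MYSQL* pCon, int nid)
    {
        std::string sql = "SELECT nid FROM system.membership WHERE nid = "
                          + std::to_string(nid);
        if (mysql_query(pCon, sql.c_str()) != 0)
            return false;

        MYSQL_RES* pResult = mysql_store_result(pCon);
        bool found = pResult && mysql_num_rows(pResult) > 0;
        if (pResult)
            mysql_free_result(pResult);
        return found;
    }

    // Close the connection of a server that can no longer be used.
    void release_connection(MYSQL*& pCon)
    {
        if (pCon)
        {
            mysql_close(pCon);
            pCon = nullptr;
        }
    }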
From system.membership we can find out which servers exist in the
cluster, while system.nodeinfo contains information about those
servers. If a node goes down, it will disappear from system.nodeinfo,
but not from system.membership. Consequently, we must start from
system.membership and then fetch more information from
system.nodeinfo.
Incidentally, a query like
SELECT ms.nid, ni.iface_ip
FROM system.membership AS ms
LEFT JOIN system.nodeinfo AS ni ON ms.nid=ni.nodeid;
should provide all information in one go, but it seems that such joins
are not supported on the system tables.
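So the fetching has to be done in two steps; the sketch below assumes
the MySQL C API and simply merges the two result sets:

    #include <mysql.h>
    #include <cstdlib>
    #include <map>
    #include <string>

    // nid -> iface_ip; the ip stays empty for members that are down,
    // since they are absent from system.nodeinfo.
    std::map<int, std::string> fetch_nodes(MYSQL* pCon)
    {
        std::map<int, std::string> nodes;

        if (mysql_query(pCon, "SELECT nid FROM system.membership") == 0)
        {
            if (MYSQL_RES* pResult = mysql_store_result(pCon))
            {
                while (MYSQL_ROW row = mysql_fetch_row(pResult))
                    nodes.emplace(atoi(row[0]), "");  // every member, up or down
                mysql_free_result(pResult);
            }
        }

        if (mysql_query(pCon, "SELECT nodeid, iface_ip FROM system.nodeinfo") == 0)
        {
            if (MYSQL_RES* pResult = mysql_store_result(pCon))
            {
                while (MYSQL_ROW row = mysql_fetch_row(pResult))
                {
                    auto it = nodes.find(atoi(row[0]));
                    if (it != nodes.end() && row[1])
                        it->second = row[1];
                }
                mysql_free_result(pResult);
            }
        }

        return nodes;
    }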
The node infos of the Clustrix servers are now kept around and
updated based upon changing conditions, instead of regularly being
re-created. Further, the server is now looked up by name only right
after having been created (and only because runtime_create_server()
is currently being used).
The state of the dynamically created server is now updated directly
as a result of the health-check ping, while the state of the bootstrap
servers is updated during the tick()-call according to the monitor
"protocol".
Both the replication lag and the message printing state are saved in
SERVER, although the values are mostly used by readwritesplit. A log
message is printed both when a server goes over the limit and when it
comes back below it. Because of concurrency issues, a message may be
printed multiple times before the different threads detect the new
message state.
Documentation updated to explain the change.
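A self-contained sketch of the message state and of the benign race
described above; SERVER is reduced to a single atomic flag here:

    #include <atomic>
    #include <cstdio>

    struct Server
    {
        std::atomic<bool> over_limit{false};  // the message printing state
    };

    void check_lag(Server& server, int lag, int limit)
    {
        bool over = lag > limit;

        if (over != server.over_limit.load())
        {
            // Another thread may interleave here and print the same
            // message before the state below has been updated.
            std::printf("Server went %s the replication lag limit.\n",
                        over ? "over" : "back below");
            server.over_limit.store(over);
        }
    }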
MaxScale server objects are now created for all Clustrix nodes.
Currently the name is "Clustrix-Server-N" where N is the number
of the node.
The server is created using runtime_create_server(), which has been
modified so that it optionally will not persist the created server.
That is probably just a temporary solution, as a monitor should not
need to include .../core/internal-stuff.
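A sketch of the creation, with an assumed declaration; the actual
signature of runtime_create_server() may well differ:

    #include <cstdio>
    #include <string>

    // Assumed declaration; the real one lives in a core-internal
    // header, which is what makes this a temporary solution.
    bool runtime_create_server(const char* zName, const char* zAddress,
                               const char* zPort, const char* zProtocol,
                               const char* zAuthenticator, bool persist);

    bool create_node_server(int nid, const std::string& ip)
    {
        char name[64];
        std::snprintf(name, sizeof(name), "Clustrix-Server-%d", nid);

        // persist == false: the created server is not written to a
        // persisted configuration file.
        return runtime_create_server(name, ip.c_str(), "3306",
                                     "mariadbbackend", nullptr, false);
    }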