Since the settings are now protected fields, all related functions were
moved inside the monitor class. mon_ping_or_connect_to_db() is now a method
of MXS_MONITORED_SERVER. The connection settings class is defined inside the
server since that is the class actually using the settings.
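
A minimal sketch of the new layout, assuming C++; only
MXS_MONITORED_SERVER and mon_ping_or_connect_to_db() come from the
change itself, while the nested ConnectionSettings shape and all other
member names are illustrative assumptions:

    #include <string>

    class MXS_MONITORED_SERVER
    {
    public:
        // The connection settings class is nested inside the server, the
        // class that actually uses the settings.
        class ConnectionSettings
        {
        public:
            std::string username;
            std::string password;
            int connect_timeout = 3;    // seconds; assumed default
        };

        // Formerly a free function taking the server as a parameter, now
        // a method with direct access to the protected settings.
        bool mon_ping_or_connect_to_db()
        {
            // A real implementation would ping an existing connection and
            // reconnect on failure; these are stubs.
            return ping() || connect(m_settings);
        }

    protected:
        ConnectionSettings m_settings;  // reachable only via methods

    private:
        bool ping() { return false; }
        bool connect(const ConnectionSettings&) { return true; }
    };

    int main()
    {
        MXS_MONITORED_SERVER server;
        return server.mon_ping_or_connect_to_db() ? 0 : 1;
    }

Making the helper a method lets it read the protected settings directly
instead of receiving the server as a parameter.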
Currently the bit is set in such a way that it may be overwritten by
the monitor if the bit is set while the monitor is executing its
monitoring loop.
Names starting with '@@' can no longer be used in configuration files.
Subsequent commits will prevent such names from being used when objects
are created dynamically.
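
As an illustration of the rule, a hypothetical validation helper; the
function name is an assumption:

    #include <string>

    // Hypothetical helper illustrating the naming rule.
    bool is_valid_object_name(const std::string& name)
    {
        // Names beginning with '@@' are reserved and rejected.
        return name.compare(0, 2, "@@") != 0;
    }

    int main()
    {
        return is_valid_object_name("@@internal") ? 1 : 0;    // rejected
    }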
The class MonitorManager contains monitor-related functions that should not
be called from modules. MonitorManager can access private fields and methods
of the monitor.
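
A minimal sketch of this access pattern, assuming it is implemented
with C++ friendship; the member names are illustrative:

    #include <string>

    class Monitor
    {
    public:
        std::string name() const
        {
            return m_name;
        }

    private:
        friend class MonitorManager;    // grants MonitorManager private access

        std::string m_name = "my-monitor";
        bool m_running = false;
    };

    class MonitorManager
    {
    public:
        // Lives outside the modules; may touch the monitor's private fields.
        static void start(Monitor& monitor)
        {
            monitor.m_running = true;   // legal only because of the friendship
        }
    };

    int main()
    {
        Monitor monitor;
        MonitorManager::start(monitor);
        return 0;
    }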
Both the replication lag and the message printing state are saved in SERVER,
although the values are mostly used by readwritesplit. A log message is printed
both when a server goes over the limit and when it comes back below.
Because the message state is read and updated by multiple threads
without locking, a message may be printed several times before the
different threads observe the new message state.
Documentation updated to explain the change.
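
A rough sketch of the behaviour described above, with assumed field
names; the unsynchronized check-then-set on the message state is what
allows duplicate messages:

    #include <atomic>
    #include <cstdio>

    struct SERVER
    {
        std::atomic<int>  rlag{0};                 // replication lag in seconds
        std::atomic<bool> rlag_msg_printed{false}; // message printing state
    };

    void check_rlag(SERVER& server, int limit)
    {
        int lag = server.rlag.load();

        if (lag > limit && !server.rlag_msg_printed.load())
        {
            // Another thread can pass the same test before this store is
            // visible, so the message may be printed more than once.
            server.rlag_msg_printed.store(true);
            std::printf("Server went over the replication lag limit: %ds\n",
                        lag);
        }
        else if (lag <= limit && server.rlag_msg_printed.load())
        {
            server.rlag_msg_printed.store(false);
            std::printf("Server is back below the replication lag limit: %ds\n",
                        lag);
        }
    }

    int main()
    {
        SERVER server;
        server.rlag.store(10);
        check_rlag(server, 5);    // over the limit: message printed
        server.rlag.store(2);
        check_rlag(server, 5);    // back below the limit: message printed
        return 0;
    }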
MaxScale server objects are now created for all Clustrix nodes.
Currently the name is "Clustrix-Server-N" where N is the number
of the node.
The servers are created using runtime_create_server(), which has been
modified so that it can optionally skip persisting the created server.
That is probably just a temporary solution, as a monitor should not
need to include .../core/internal-stuff.
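
A sketch of the creation call, assuming a signature along these lines;
the parameters shown are illustrative, not the real internal API:

    #include <cstdio>
    #include <string>

    // Stand-in for the modified function; the real signature is internal.
    bool runtime_create_server(const std::string& name,
                               const std::string& address,
                               int port,
                               bool persist)    // new: optionally skip persisting
    {
        std::printf("created %s at %s:%d (persist=%d)\n",
                    name.c_str(), address.c_str(), port, (int)persist);
        return true;
    }

    int main()
    {
        // One MaxScale server object per Clustrix node, Clustrix-Server-N.
        for (int n = 1; n <= 3; ++n)
        {
            std::string name = "Clustrix-Server-" + std::to_string(n);
            runtime_create_server(name, "192.168.0." + std::to_string(n),
                                  3306, /*persist=*/false);
        }
        return 0;
    }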
The function stores the current server status in the monitored
server's mon_prev_status and pending_status fields.
To be used at the start of the monitor loop, before the pending
status fields are updated.
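
A minimal sketch of the helper, with assumed field and function names
beyond those mentioned above:

    #include <cstdint>

    struct MXS_MONITORED_SERVER
    {
        uint64_t server_status   = 0;    // current status bits (assumed name)
        uint64_t mon_prev_status = 0;
        uint64_t pending_status  = 0;
    };

    // Called at the start of the monitor loop, before pending_status is
    // updated by the tick.
    void stash_current_status(MXS_MONITORED_SERVER& server)
    {
        server.mon_prev_status = server.server_status;
        server.pending_status  = server.server_status;
    }

    int main()
    {
        MXS_MONITORED_SERVER server;
        server.server_status = 0x1;    // e.g. a "running" bit
        stash_current_status(server);
        return server.pending_status == server.mon_prev_status ? 0 : 1;
    }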
There is a race condition between the addition of the DCB into epoll
and the execution of the event that initializes the protocol pointer of
the DCB and sends the handshake to the client. If a hangup event
occurred before the handshake was sent, the DCB could be freed before
the code that sends the handshake was executed.
By picking the worker that owns the DCB before the DCB is placed into
the owner's epoll instance, we make sure no events arrive on the DCB
while control is transferred from the accepting worker to the owning
worker.
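
An illustrative sketch of the new ordering, with assumed types and
names: the owner is recorded on the DCB first, the handshake task runs
on that worker, and only then is the file descriptor added to the
owner's epoll instance:

    #include <cstdio>

    struct DCB;

    struct Worker
    {
        int id;

        // Run a task in the context of this worker (stubbed as a direct call).
        void execute(void (*task)(DCB*), DCB* dcb)
        {
            task(dcb);
        }

        // Only after this call can events arrive on the DCB.
        void add_to_epoll(DCB* dcb);
    };

    struct DCB
    {
        int     fd    = -1;
        Worker* owner = nullptr;    // picked before any events can arrive
    };

    void Worker::add_to_epoll(DCB* dcb)
    {
        std::printf("fd %d now polled by worker %d\n", dcb->fd, id);
    }

    void send_handshake(DCB* dcb)
    {
        std::printf("handshake sent on fd %d\n", dcb->fd);
    }

    void accept_client(DCB* dcb, Worker& owner)
    {
        dcb->owner = &owner;                   // 1. pick the owning worker
        owner.execute(send_handshake, dcb);    // 2. initialize, send handshake
        owner.add_to_epoll(dcb);               // 3. only now can events arrive
    }

    int main()
    {
        Worker worker{0};
        DCB dcb;
        dcb.fd = 42;
        accept_client(&dcb, worker);
        return 0;
    }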
When poll_add_dcb was called for a DCB that had once been in the
polling system but was subsequently removed, the DCB would appear twice
in the worker's list of DCBs. This caused a hang when the DCB was the
last one in the worker's list and dcb_foreach_local was called.
To prevent this problem, DCBs are now added to and removed from the
workers directly, instead of indirectly via poll_add_dcb and
poll_remove_dcb.
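
A sketch of the direct registration, with assumed names; keeping one
list per worker and adding and removing entries directly makes a
duplicate entry impossible even when a DCB re-enters the polling
system:

    #include <algorithm>
    #include <cassert>
    #include <vector>

    struct DCB
    {
        int fd;
    };

    class Worker
    {
    public:
        void register_dcb(DCB* dcb)
        {
            // Direct registration: at most one entry per DCB.
            if (std::find(m_dcbs.begin(), m_dcbs.end(), dcb) == m_dcbs.end())
            {
                m_dcbs.push_back(dcb);
            }
        }

        void deregister_dcb(DCB* dcb)
        {
            m_dcbs.erase(std::remove(m_dcbs.begin(), m_dcbs.end(), dcb),
                         m_dcbs.end());
        }

        // The equivalent of dcb_foreach_local: iterate this worker's DCBs.
        template <class Fn>
        void foreach_dcb(Fn fn)
        {
            for (DCB* dcb : m_dcbs)
            {
                fn(dcb);
            }
        }

    private:
        std::vector<DCB*> m_dcbs;
    };

    int main()
    {
        Worker worker;
        DCB dcb{42};
        worker.register_dcb(&dcb);
        worker.deregister_dcb(&dcb);
        worker.register_dcb(&dcb);    // re-added after removal: one entry

        int count = 0;
        worker.foreach_dcb([&count](DCB*) { ++count; });
        assert(count == 1);
        return 0;
    }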