If the routing of a session command fails due to problems with the
backend connections, a more verbose error message is logged. The
added status information in the Backend class makes it much easier
to track the original cause of the problem, as it records where,
when and why the connection was closed.
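A minimal sketch of the idea; the names are hypothetical, not the
real Backend members:

    #include <string>

    // Hypothetical sketch: the Backend remembers why it was closed so
    // the cause can be included in later error messages.
    class Backend
    {
    public:
        enum class CloseReason
        {
            NONE,
            CLOSED_NORMALLY,
            FATAL_ERROR,
            LOST_WHILE_IDLE
        };

        void close(CloseReason reason)
        {
            m_reason = reason;
        }

        std::string close_status() const
        {
            switch (m_reason)
            {
            case CloseReason::CLOSED_NORMALLY: return "closed normally";
            case CloseReason::FATAL_ERROR:     return "closed due to a fatal error";
            case CloseReason::LOST_WHILE_IDLE: return "lost while idle";
            default:                           return "not closed";
            }
        }

    private:
        CloseReason m_reason = CloseReason::NONE;
    };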
When looking for a master, if one exists it should be found
even if it is not connectible. The fact that it is not connectible
should be dealt with when a connection is created.
If a master is found but it is being drained, the connection attempt
is rejected when the master failure mode is fail_instantly.
In that case the logged message makes it clear that the draining is
the reason the connection attempt failed.
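A minimal sketch of that check, using stand-in types rather than the
real router types:

    #include <cstdio>
    #include <string>

    struct Server                    // hypothetical stand-in
    {
        std::string name;
        bool draining = false;
    };

    enum class FailureMode { FAIL_INSTANTLY, FAIL_ON_WRITE };

    // Sketch: draining is only acted upon when the connection is created,
    // and the log message names it as the reason for the failure.
    bool connect_to_master(const Server& master, FailureMode mode)
    {
        if (master.draining && mode == FailureMode::FAIL_INSTANTLY)
        {
            std::printf("Cannot create connection to master '%s': the server "
                        "is being drained and the master failure mode is "
                        "fail_instantly.\n",
                        master.name.c_str());
            return false;
        }
        // ... otherwise proceed with the normal connection attempt ...
        return true;
    }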
The schema router now deals with the being-drained bit the same way
it deals with the maintenance bit, that is, a server being drained
will simply be ignored.
TODO: This behaviour needs to be documented.
TODO: It seems the schema router simply ignores the maintenance bit
once a connection has been established.
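A minimal sketch of the filtering, assuming hypothetical status flags:

    #include <vector>

    struct Server                    // hypothetical stand-in
    {
        bool in_maintenance = false;
        bool being_drained = false;
    };

    // Sketch: a drained server is skipped exactly like a server in
    // maintenance, so neither is considered for new connections.
    std::vector<Server*> usable_servers(const std::vector<Server*>& all)
    {
        std::vector<Server*> rv;
        for (Server* s : all)
        {
            if (!s->in_maintenance && !s->being_drained)
            {
                rv.push_back(s);
            }
        }
        return rv;
    }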
If a server was not chosen as the target of a routing hint, the server
status was not logged. By logging the server state in the message, it
is easier to figure out why a server wasn't chosen as the routing
target.
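The gist of the improved message, sketched with stand-in types:

    #include <cstdio>
    #include <string>

    struct Server                    // hypothetical stand-in
    {
        std::string name;
        std::string status;          // e.g. "Down" or "Being Drained, Running"
    };

    // Sketch: the server state is included in the message, so the reason
    // the hinted server was skipped is visible in the log.
    void log_hint_not_used(const Server& s)
    {
        std::printf("Server '%s' was requested by a routing hint but was not "
                    "chosen as the target. Server state: %s.\n",
                    s.name.c_str(), s.status.c_str());
    }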
Once the monitor has been able to connect to a Clustrix node
and obtain the Clustrix nodes, it will primarily use those nodes
when looking for a Clustrix node to be used as the "hub".
With this change it is sufficient (but perhaps unwise) to provide
a single bootstrap node in the configuration file.
Some other rearrangements and renamings of functions have also been
made.
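A minimal sketch of the hub selection order, with hypothetical names:

    #include <vector>

    struct Node                      // hypothetical stand-in
    {
        bool healthy = false;
    };

    // Sketch: the nodes obtained from the cluster itself are tried first;
    // the bootstrap nodes from the configuration file are only a fallback.
    const Node* choose_hub(const std::vector<Node>& cluster_nodes,
                           const std::vector<Node>& bootstrap_nodes)
    {
        for (const std::vector<Node>* nodes : {&cluster_nodes, &bootstrap_nodes})
        {
            for (const Node& node : *nodes)
            {
                if (node.healthy)
                {
                    return &node;
                }
            }
        }
        return nullptr;
    }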
The convention needs to be that a runtime object creating other
objects incorporates its own name in the name of any object it
creates. Together with the '@@' prefix, that ensures the created
name is reasonably globally unique.
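For example (the ':' separator is an assumption; only the '@@' prefix
is given above):

    #include <string>

    // Sketch: the creator's name is embedded in the created object's
    // name, e.g. create_object_name("Clustrix-Monitor", "node-1")
    // yields "@@Clustrix-Monitor:node-1".
    std::string create_object_name(const std::string& creator,
                                   const std::string& object)
    {
        return "@@" + creator + ":" + object;
    }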
The QueryResult-object remembers if a conversion failed. This makes checking
for errors more convenient, as just one check per row is required. The conversion
functions always return a valid value.
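A minimal sketch of the pattern, not the actual QueryResult interface:

    #include <cstdint>
    #include <cstdlib>
    #include <string>

    // Sketch: a conversion failure sets a sticky flag and a valid default
    // is returned, so one error() check per row suffices.
    class QueryResult
    {
    public:
        int64_t get_int(const std::string& value)
        {
            char* end = nullptr;
            int64_t rv = std::strtoll(value.c_str(), &end, 10);
            if (end == value.c_str() || *end != '\0')
            {
                m_error = true;   // remember the failure
                rv = 0;           // still return a valid value
            }
            return rv;
        }

        bool error() const { return m_error; }
        void reset_error() { m_error = false; }

    private:
        bool m_error = false;
    };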
If a server cannot be used, close the associated MYSQL connection.
Further, when an existing connection is used, verify that the server
is still part of the quorum.
From system.membership we can find out what servers exist in the
cluster while system.nodeinfo contains information about those
servers. If a node goes down, it will disappear from system.nodeinfo,
but not from system.membership. Consequently, we must start from
system.membership and then fetch more information from system.nodeinfo.
Incidentally, a query like

    SELECT ms.nid, ni.iface_ip
    FROM system.membership AS ms
    LEFT JOIN system.nodeinfo AS ni ON ms.nid=ni.nodeid;

should provide all information in one go, but it seems that such
joins are not supported on the system tables.
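Since the join is not available, the two result sets can be combined
client side; a minimal sketch with hypothetical types:

    #include <map>
    #include <set>
    #include <string>

    // Sketch: membership holds the nids from system.membership, nodeinfo
    // maps nodeid to iface_ip from system.nodeinfo. A node that is down
    // is present in membership but missing from nodeinfo.
    std::map<int, std::string> join_node_info(const std::set<int>& membership,
                                              const std::map<int, std::string>& nodeinfo)
    {
        std::map<int, std::string> rv;
        for (int nid : membership)
        {
            auto it = nodeinfo.find(nid);
            rv[nid] = (it != nodeinfo.end()) ? it->second : "";  // "" => down
        }
        return rv;
    }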
The node infos of the Clustrix servers are now kept around and
updated based upon changing conditions instead of regularly being
re-created.
Further, the server is now looked up by name only right after
having been created (and that only due to runtime_create_server()
currently being used).
The state of the dynamically created server is now updated directly
as a result of the health-check ping, while the state of the bootstrap
servers is updated during the tick()-call according to the monitor
"protocol".
Both the replication lag and the message printing state are saved in SERVER,
although the values are mostly used by readwritesplit. A log message is printed
both when a server goes over the limit and when it comes back below.
Because of concurrency issues, a message may be printed multiple times before
different threads detect the new message state.
Documentation updated to explain the change.
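A minimal sketch of the state handling, which also shows why the
duplicates are possible (the load and the store are separate steps):

    #include <atomic>
    #include <cstdio>

    struct LagState                  // hypothetical stand-in for the SERVER fields
    {
        std::atomic<bool> over_limit{false};
    };

    // Sketch: a message is logged only when the over/under state changes.
    // Two threads may both observe the change before either stores the
    // new state, so the message can be printed more than once.
    void check_lag(LagState& state, long lag, long limit, const char* name)
    {
        bool over = lag > limit;
        if (over != state.over_limit.load())
        {
            state.over_limit.store(over);
            std::printf("Replication lag of '%s' is %s the limit (%ld/%ld).\n",
                        name, over ? "over" : "back below", lag, limit);
        }
    }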
MaxScale server objects are now created for all Clustrix nodes.
Currently the name is "Clustrix-Server-N" where N is the number
of the node.
The server is created using runtime_create_server() that has been
modified so that it optionally will not persist the created server.
That is probably just a temporary solution as a monitor should not
need to include .../core/internal-stuff.
Now the monitor
- will frequently ping the health port of each server
- will less frequently check the actual number of available nodes
  from system.membership
and act accordingly, as sketched below.
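A minimal sketch of that tick logic, with an assumed interval:

    // Hypothetical sketch, not the real monitor class.
    class ClustrixMonitor
    {
    public:
        void tick()
        {
            ping_health_ports();                  // every tick

            if (++m_ticks % CLUSTER_CHECK_TICKS == 0)
            {
                check_cluster_membership();       // only every Nth tick
            }
        }

    private:
        static const int CLUSTER_CHECK_TICKS = 10;  // assumed interval

        void ping_health_ports() { /* HTTP ping of each node's health port */ }
        void check_cluster_membership() { /* SELECT ... FROM system.membership */ }

        int m_ticks = 0;
    };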
Currently, the updated servers are the ones listed in the conf
file. Subsequently this will be changed so that the servers listed
in the configuration file are only used for bootstrapping the monitor
and server objects are then created dynamically according to what is
found in the cluster.
There is a race condition between the addition of the DCB into epoll
and the execution of the event that initializes the protocol pointer
for the DCB and sends the handshake to the client. If a hangup event
occurred before the handshake was sent, the DCB could be freed before
the code that sends the handshake was executed.
By picking the worker who owns the DCB before the DCB is placed into the
owner's epoll instance, we make sure no events arrive on the DCB while the
control is transferred from the accepting worker to the owning
worker.
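A minimal sketch of the ordering, with stand-in types; the point is
that the epoll registration happens on the owner, after the handshake:

    #include <functional>

    struct DCB { /* ... */ };        // hypothetical stand-ins
    struct Worker
    {
        void execute(std::function<void()> task) { task(); }  // runs tasks serially
        void add_to_epoll(DCB*) {}
    };

    Worker* pick_owner()             // assumed helper: choose the owner first
    {
        static Worker w;             // stub for the sketch
        return &w;
    }

    // Sketch: because the DCB enters the owner's epoll instance only after
    // the handshake code has run on the owner, no hangup event can free
    // the DCB before the handshake is sent.
    void accept_client(DCB* dcb)
    {
        Worker* owner = pick_owner();

        owner->execute([owner, dcb]()
        {
            // ... initialize the protocol and send the handshake here ...
            owner->add_to_epoll(dcb);   // events can only arrive after this
        });
    }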
If the connection to the master is lost, knowing what type of error
caused the call to handleError helps deduce the real reason for it.
Logging the idle time of the connection helps detect when the
wait_timeout of a connection is exceeded.
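A minimal sketch of such a log message, with hypothetical names:

    #include <cstdio>
    #include <ctime>

    enum class ErrorType { PERMANENT, TRANSIENT };   // hypothetical stand-in

    // Sketch: log the error type and the connection's idle time, which
    // makes an exceeded wait_timeout easy to recognize.
    void log_master_error(ErrorType type, std::time_t last_used)
    {
        double idle = std::difftime(std::time(nullptr), last_used);
        std::printf("Lost connection to the master (%s error). The connection "
                    "had been idle for %.0f seconds.\n",
                    type == ErrorType::PERMANENT ? "permanent" : "transient",
                    idle);
    }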
If the session doesn't match the required username or remote address,
the match data is not allocated. This also doubles as a replacement
for the active member variable.
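A minimal sketch of the idea, with hypothetical names:

    #include <memory>
    #include <string>

    struct MatchData { /* regex state, counters, ... */ };   // hypothetical

    // Sketch: the match data is allocated only when the session matches
    // the required user and remote address; a null pointer then doubles
    // as the old 'active' flag.
    std::unique_ptr<MatchData> create_match_data(const std::string& user,
                                                 const std::string& remote,
                                                 const std::string& match_user,
                                                 const std::string& match_remote)
    {
        if ((match_user.empty() || user == match_user)
            && (match_remote.empty() || remote == match_remote))
        {
            return std::make_unique<MatchData>();
        }
        return nullptr;   // not allocated: the session is not active
    }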