If 'dynamic_node_detection' has been set to false, the Clustrix monitor will not dynamically detect which nodes are available, but will instead use the bootstrap nodes as such.
With 'dynamic_node_detection' set to false, the Clustrix monitor performs no cluster checks, but simply pings the health port of each server.
'dynamic_node_detection' specifies whether the Clustrix monitor should dynamically detect which nodes exist, or rely solely upon static information.
'health_check_port' specifies the port to be used when performing the health check ping.
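A rough configuration sketch; the section name, server list, credentials and port value are illustrative rather than taken from the change itself, and the module is assumed to be clustrixmon:

```ini
[Clustrix-Monitor]
type=monitor
module=clustrixmon
servers=bootstrap1,bootstrap2
user=monitor_user
password=monitor_pw
dynamic_node_detection=false
health_check_port=3581
```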
Added core functionality for UNIX domain sockets in servers. Currently the `address` parameter accepts both network addresses and socket paths, but a separate `socket` parameter is still needed.
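For illustration only (the server name, socket path and protocol module are assumptions), a socket path can currently be given through `address`:

```ini
[server1]
type=server
address=/tmp/mariadb.sock
protocol=MariaDBBackend
```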
If the monitor setting "replication_master_ssl" is set to on, any CHANGE MASTER TO command will have MASTER_SSL=1. If set to off or left unset, MASTER_SSL is left unchanged to match existing behaviour.
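As a sketch of the effect, with an illustrative host, port and user that are not part of the change, the generated command would then include the SSL flag:

```sql
CHANGE MASTER TO
    MASTER_HOST = '192.168.0.100',
    MASTER_PORT = 3306,
    MASTER_USER = 'repl',
    MASTER_SSL = 1;
```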
At runtime the Clustrix monitor saves information about detected nodes to an sqlite3 database and deletes that information if a node disappears.
At startup, if the monitor fails to connect to a bootstrap node, it will try to connect to any of the persisted nodes and start from there.
This means that, in general, it is sufficient that the Clustrix monitor can connect to a bootstrap node at its very first startup; thereafter it will manage even if the bootstrap node disappears for good.
Information about the detected Clustrix nodes is now stored in a Clustrix monitor specific sqlite database. This is used for bootstrapping the Clustrix monitor in case a statically defined bootstrap server is unavailable.
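A minimal sketch of the persistence idea using the sqlite3 C API; the table and column names are assumptions, not those of the actual monitor:

```cpp
#include <sqlite3.h>
#include <cstdio>
#include <string>

// Runs a statement and logs any error.
bool run_sql(sqlite3* db, const std::string& sql)
{
    char* err = nullptr;
    if (sqlite3_exec(db, sql.c_str(), nullptr, nullptr, &err) != SQLITE_OK)
    {
        std::fprintf(stderr, "sqlite3 error: %s\n", err ? err : "unknown");
        sqlite3_free(err);
        return false;
    }
    return true;
}

// Called when a new node is detected.
bool persist_node(sqlite3* db, const std::string& ip, int port)
{
    return run_sql(db, "INSERT OR REPLACE INTO clustrix_nodes(ip, port) VALUES ('"
                       + ip + "', " + std::to_string(port) + ");");
}

// Called when a previously detected node disappears.
bool forget_node(sqlite3* db, const std::string& ip)
{
    return run_sql(db, "DELETE FROM clustrix_nodes WHERE ip = '" + ip + "';");
}
```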
The largest part of the code deals with the start of a response. Moving this into a subfunction makes the function clearer, as it removes a switch statement nested inside another switch statement.
Processing the packets one at a time ensures that the reply state is updated correctly regardless of how many packets are received. This removes the need for the clunky code that used modutil_count_signal_packets to detect the end of the result set.
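A simplified sketch of the per-packet idea; `update_reply_state` is a hypothetical stand-in for the real reply-state handling:

```cpp
#include <cstddef>
#include <cstdint>

constexpr size_t MYSQL_HEADER_LEN = 4;

// Walks a contiguous buffer one MariaDB/MySQL packet at a time and returns
// how many bytes were fully processed; a trailing partial packet is left alone.
size_t process_packets(const uint8_t* data, size_t len,
                       void (*update_reply_state)(const uint8_t* packet, size_t packet_len))
{
    size_t offset = 0;

    while (len - offset >= MYSQL_HEADER_LEN)
    {
        // 3-byte little-endian payload length followed by a sequence byte
        size_t payload = data[offset] | (data[offset + 1] << 8) | (data[offset + 2] << 16);
        size_t packet_len = MYSQL_HEADER_LEN + payload;

        if (len - offset < packet_len)
        {
            break;  // Only part of the packet has arrived, wait for more data
        }

        update_reply_state(data + offset, packet_len);  // State updated per packet
        offset += packet_len;
    }

    return offset;
}
```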
Internally, durations are stored in milliseconds, but runtime changes made via SQL are given in seconds. Consequently, the provided value must be multiplied by 1000 before being stored.
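A minimal sketch of the conversion, with hypothetical names:

```cpp
#include <cstdint>

struct Config
{
    int64_t timeout_ms = 0;  // Durations are stored internally in milliseconds
};

// The value coming from a runtime SQL change is given in seconds.
void set_timeout_from_sql(Config& cfg, int64_t seconds)
{
    cfg.timeout_ms = seconds * 1000;  // Convert to milliseconds before storing
}
```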
Given the assumption that queries are rarely 16MB long, and that realistically the only time that happens is during a large dump of data, we can limit the size of a single read to at most one MariaDB/MySQL packet at a time. This change allows the network throttling to engage much sooner and reduces the maximum throttling overshoot to 16MB.
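A sketch of the capping idea, assuming a hypothetical read loop that consults the 4-byte packet header before deciding how much to read next:

```cpp
#include <cstddef>
#include <cstdint>

constexpr size_t MYSQL_HEADER_LEN = 4;

// Returns how many more bytes may be read: at most the remainder of the
// current packet, so throttling is re-evaluated packet by packet.
size_t next_read_limit(const uint8_t* header, size_t buffered)
{
    if (buffered < MYSQL_HEADER_LEN)
    {
        return MYSQL_HEADER_LEN - buffered;          // Finish reading the header first
    }

    size_t payload = header[0] | (header[1] << 8) | (header[2] << 16);
    size_t packet_len = MYSQL_HEADER_LEN + payload;  // Never much more than 16MB
    return packet_len > buffered ? packet_len - buffered : 0;
}
```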
By logging the connection ID for each created connection, failures can be
traced back from the backend server all the way up to the client
application.
If a transaction replay has to be executed twice due to a failure of the
original candidate master, the query queue could contain replayed
queries. The replayed queries would be placed into the queue if a new
connection needs to be created before the transaction replay can start.
Backported the changes that convert the query queue in readwritesplit into
a proper queue. This change combines both
5e3198f8313b7bb33df386eb35986bfae1db94a3 and
6042a53cb31046b1100743723567906c5d8208e2 into one commit.
By passing the raw password deeper into the authentication code, it can be used to verify that the user can access certain systems. Right now, this is not required by the simple salted password comparison done in MaxScale.
By storing the queries in the query queue and routing them once the transaction replay is done, we prevent two problems (see the sketch after this list):
* Multiple transaction replays would overwrite the m_interrupted_query
buffer that was used to store any queries executed during the
transaction replay.
* Incorrect ordering of queries when the query queue is not empty and a
new query is executed during transaction replay.
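A simplified sketch of the approach, not the actual readwritesplit implementation:

```cpp
#include <cstdio>
#include <deque>
#include <string>

class ReplaySession
{
public:
    // Queries arriving during a replay, or while older queries are still
    // queued, are buffered instead of overwriting a single shared buffer.
    void route_query(const std::string& query)
    {
        if (m_replaying || !m_query_queue.empty())
        {
            m_query_queue.push_back(query);
        }
        else
        {
            send_to_backend(query);
        }
    }

    // Once the replay completes, the queue is drained in its original order.
    void on_replay_done()
    {
        m_replaying = false;
        while (!m_query_queue.empty())
        {
            send_to_backend(m_query_queue.front());
            m_query_queue.pop_front();
        }
    }

private:
    void send_to_backend(const std::string& query)
    {
        std::printf("routing: %s\n", query.c_str());  // Stand-in for real routing
    }

    bool m_replaying = true;
    std::deque<std::string> m_query_queue;
};
```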
If the session starts with no master but one later becomes available, the code would unconditionally use the master's name in a log message when a transaction is started.
Allowing transactions to the master to end even if the server is in maintenance mode makes it possible to terminate connections at a known point. This helps prevent interrupted transactions, which can reduce errors that are visible to the clients.
In this context, master should be interpreted as "can be read from and written to".
Marking the nodes as master requires fewer changes in RWS to make it usable with a Clustrix cluster.
If set to true and any of the other blocking-related parameters is also true, then a statement that cannot be fully parsed will be blocked. The default is true.
If a server with zero weight was chosen as the only candidate, it was possible that the starting minimum value was smaller than the server score. This would mean that a candidate wouldn't be chosen if the score was too high. To prevent this, the scores are capped to a value smaller than the initial minimum score.
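A sketch of the capping idea with hypothetical names; the essential point is that the cap is always below the starting minimum:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct Server
{
    double score;  // Load score; can become very large for a zero-weight server
};

// Scores are clamped below the starting minimum so that even a lone
// zero-weight server can win the comparison and be chosen.
const Server* choose_candidate(const std::vector<Server>& servers)
{
    const double START_MIN = std::numeric_limits<double>::max();
    const double SCORE_CAP = START_MIN / 2;
    double best = START_MIN;
    const Server* candidate = nullptr;

    for (const auto& s : servers)
    {
        double score = std::min(s.score, SCORE_CAP);
        if (score < best)
        {
            best = score;
            candidate = &s;
        }
    }

    return candidate;
}
```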
Queries such as SHOW TABLES FROM db1 are now routed to the backend containing db1. This gives the correct result as long as db1 is not sharded across multiple backends.
Increasing counter sizes from int to long for averages.
Rename random functions to end with _co instead of _exclusive to indicate a closed-open range (closed at the lower bound, open at the upper), and to allow the future suffixes oc, cc and oo.
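A sketch of the convention, assuming a helper built on the standard <random> facilities; the _co suffix marks the closed-open range [low, high):

```cpp
#include <cstdint>
#include <random>

// Returns a uniformly distributed value in [low, high).
int64_t random_int_co(int64_t low, int64_t high)
{
    static std::mt19937_64 gen{std::random_device{}()};
    std::uniform_int_distribution<int64_t> dist(low, high - 1);  // distribution bounds are inclusive
    return dist(gen);
}
```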
The code only handled the basic version of the command, returning incorrect results if modifiers were used. The code has now been removed, causing the command to be routed to the backend of the current database. This gives correct results as long as that backend contains all the tables of the database, i.e. no table sharding.
Adding the same task twice isn't allowed. The API of the housekeeper tasks
might have to be changed in a way that makes it possible for the caller to
know whether a task has been added.
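One possible shape for such a change, with hypothetical names; returning a bool lets the caller know whether the task was actually added:

```cpp
#include <set>
#include <string>

class Housekeeper
{
public:
    // Returns false if a task with the same name already exists.
    bool add_task(const std::string& name)
    {
        return m_tasks.insert(name).second;
    }

private:
    std::set<std::string> m_tasks;
};
```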