If a transaction replay has to be executed twice due to a failure of the
original candidate master, the query queue could contain replayed
queries. The replayed queries would be placed into the queue if a new
connection needed to be created before the transaction replay could start.
Backported the changes that convert the query queue in readwritesplit into
a proper queue. This change combines both
5e3198f8313b7bb33df386eb35986bfae1db94a3 and
6042a53cb31046b1100743723567906c5d8208e2 into one commit.
By passing the raw password deeper into the authentication code, it can be
used to verify the user can access some systems. Right now, this is not
required by the simple salted password comparison done in MaxScale.
By storing the queries in the query queue and routing them once the
transaction replay is done, we prevent two problems:
* Multiple transaction replays would overwrite the m_interrupted_query
buffer that was used to store any queries executed during the
transaction replay.
* Incorrect ordering of queries when the query queue is not empty and a
new query is executed during transaction replay.
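A minimal sketch of the mechanism, assuming hypothetical names: the class,
its members and the use of std::string as a stand-in for the query buffer
are illustrative only, not the actual readwritesplit code. Queries that
arrive during a replay are queued and routed in their original order once
the replay has finished.

    #include <deque>
    #include <iostream>
    #include <string>

    class ReplaySession
    {
    public:
        explicit ReplaySession(bool replaying) : m_replaying(replaying) {}

        void route(const std::string& query)
        {
            if (m_replaying)
            {
                // During replay the query is queued instead of overwriting a
                // single "interrupted query" slot, so nothing is lost and
                // the original order is kept.
                m_query_queue.push_back(query);
            }
            else
            {
                route_to_backend(query);
            }
        }

        void on_replay_done()
        {
            m_replaying = false;
            // Route the queued queries in their original order.
            while (!m_query_queue.empty())
            {
                route_to_backend(m_query_queue.front());
                m_query_queue.pop_front();
            }
        }

    private:
        void route_to_backend(const std::string& query)
        {
            std::cout << "routing: " << query << '\n';
        }

        bool m_replaying;
        std::deque<std::string> m_query_queue;
    };

    int main()
    {
        ReplaySession session(true);
        session.route("SELECT 1");   // arrives while the replay is running
        session.route("SELECT 2");
        session.on_replay_done();    // both are routed, in order
    }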
If the session starts with no master but one later becomes available, the
code would unconditionally use the master's name in a log message when a
transaction was started.
Allowing transactions to the master to finish even if the server is in
maintenance mode makes it possible to terminate connections at a known
point. This helps prevent interrupted transactions, which in turn reduces
the errors that are visible to the clients.
If a server with zero weight was chosen as the only candidate, it was
possible that the starting minimum value was smaller than the server
score. This would mean that a candidate wouldn't be chosen if the score
was too high. To prevent this, the values are capped to a value smaller
than the initial minimum score.
Queries such as SHOW TABLES FROM db1 are now routed to the backend with db1.
This gives the correct result as long as db1 is not sharded to multiple
backends.
Increasing counter sizes from int to long for averages.
Rename random functions to end with _co instead of _exclusive to indicate
a [closed, open[ range, and to allow the future suffixes oc, cc and oo.
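A short sketch of what the suffix convention means; rand_int_co below is a
hypothetical example written for illustration, not one of the renamed
functions.

    #include <random>

    // The _co suffix marks a [closed, open[ range: min can be returned,
    // max never is.
    int rand_int_co(std::mt19937& gen, int min, int max)
    {
        std::uniform_int_distribution<int> dist(min, max - 1);
        return dist(gen);
    }

    int main()
    {
        std::mt19937 gen(42);
        return rand_int_co(gen, 0, 10);   // always in [0, 10[
    }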
The code only handled the basic version of the command, returning incorrect
results if modifiers were used. The code is now removed, causing the command
to be routed to the backend of the current database. This will give correct
results as long as that backend contains all the tables of the database,
i.e. no table sharding.
Adding the same task twice isn't allowed. The API of the housekeeper tasks
might have to be changed in a way that makes it possible for the caller to
know whether a task has been added.
Connections to servers being drained should not be closed the way
connections to servers in maintenance mode are. The change in functionality
between 2.3 and develop caused the connections to be discarded if the
server was in either maintenance or drain mode.
Using a std::deque to store the queries retains the exact state of the
object thus removing the need to parse the query again. It also removes
the need to split the queue into individual packets which makes the code
cleaner.
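A sketch of the design choice, under the assumption that a query object
carries its classification with it; the Query struct below is a made-up
stand-in for the real buffer type.

    #include <cstdint>
    #include <deque>
    #include <string>

    // Because whole objects are stored, the classification computed when
    // the query first arrived travels with it and does not have to be
    // recomputed, and there is no concatenated buffer that would later need
    // to be split back into packets.
    struct Query
    {
        std::string sql;
        uint32_t    type_mask;   // classification result, computed once
    };

    int main()
    {
        std::deque<Query> queue;
        queue.push_back({"SELECT 1", 0x01});
        queue.push_back({"SELECT 2", 0x01});

        // Dequeued objects are routed as-is, state included.
        Query next = queue.front();
        queue.pop_front();
        (void)next;
    }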
Moved the more verbose parts of the routing code into subfunctions and
arranged it so that more relevant parts are closer to each other. The SQL
statement that is being delayed is now also included in the message.
When a readwritesplit session has a connection to a master server, servers
of the same rank as the master are used. If no master connection is
available, the server with the highest rank among all connected servers is
used. If there are no open connections, the server with the best rank is
chosen and a connection to it is made.
Connections whose rank differs from the current rank of the session will
be discarded. This reduces the use of servers with different ranks when the
master server of a session fails. Without the active pruning of
connections, slave connections to primary clusters without masters would
remain in use even after the primary master fails. This guarantees a full
switchover to a secondary cluster if a master change occurs.
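A rough sketch of the pruning; the Connection struct is a made-up stand-in
for the actual backend connection objects.

    #include <vector>

    struct Connection
    {
        int  rank;
        bool open;
    };

    // Connections whose rank differs from the session's current rank are
    // closed so that they cannot be used again after a master change.
    void prune_connections(std::vector<Connection>& connections, int session_rank)
    {
        for (auto& c : connections)
        {
            if (c.open && c.rank != session_rank)
            {
                c.open = false;   // discard the connection
            }
        }
    }

    int main()
    {
        std::vector<Connection> conns = {{1, true}, {2, true}, {1, true}};
        prune_connections(conns, 1);   // the rank-2 connection is discarded
    }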
If a master with a better rank and a slave with a worse rank were
available and master_accept_reads wasn't enabled, the slave would be
preferred over the master. The check for master_accept_reads was done
twice and also in the wrong place.
Although the default value is the maximum value of a signed 32-bit
integer, the value is stored as a 64-bit integer. The integer type
conversion functions return 64-bit values so storing it as one makes
sense.
Currently values higher than the default are allowed but the accepted
range of input should be restricted in the future.
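A one-line illustration of the point; the name DEFAULT_VALUE is used here
only for the sketch.

    #include <cstdint>
    #include <limits>

    // The default fits in 32 bits, but the storage matches the 64-bit
    // return type of the integer conversion helpers.
    constexpr int64_t DEFAULT_VALUE = std::numeric_limits<int32_t>::max();
    static_assert(DEFAULT_VALUE == 2147483647, "default is INT32_MAX");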
Readwritesplit now respects server ranks. When servers are selected for
either routing or connection creation, the servers are partitioned by
their rank into sets of servers. These sets of servers are never mixed so
the end result is that only servers of the same rank are considered for
candidacy.
The master selection is slightly different: the server with the best rank
that is capable of acting as a master is chosen. This means that a session
can have a master with a lower rank and slaves with higher ranks than the
master. In most cases this actually is the preferred behavior as the rank
is used to prioritize usage but not outright prevent it.
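An illustrative sketch of the selection, not the actual implementation: the
types are made up and "lower value = better rank" is only an assumption
made for the sketch.

    #include <map>
    #include <string>
    #include <vector>

    struct Server
    {
        std::string name;
        int         rank;
        bool        can_be_master;
    };

    // Candidates are grouped into rank buckets; buckets are never mixed, so
    // only servers of the same rank compete with each other.
    std::map<int, std::vector<Server>> partition_by_rank(const std::vector<Server>& servers)
    {
        std::map<int, std::vector<Server>> buckets;
        for (const auto& s : servers)
        {
            buckets[s.rank].push_back(s);
        }
        return buckets;
    }

    // The master is simply the best-ranked server capable of acting as one,
    // regardless of how the slaves rank.
    const Server* select_master(const std::vector<Server>& servers)
    {
        const Server* best = nullptr;
        for (const auto& s : servers)
        {
            if (s.can_be_master && (best == nullptr || s.rank < best->rank))
            {
                best = &s;
            }
        }
        return best;
    }

    int main()
    {
        std::vector<Server> servers = {{"primary-1", 1, true},
                                       {"replica-1", 1, false},
                                       {"dr-site-1", 2, false}};
        auto buckets = partition_by_rank(servers);
        const Server* master = select_master(servers);
        (void)buckets;
        (void)master;
    }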
The connection creation is now internal to RWSplitSession. This makes the
code more readable by removing the need to pass parameters and allowing
easier reuse of existing functions. The various conditions required to
create connections are now also checked in only one place.
Readwritesplit now picks the best available master if no open master
connection is available. This is required if the server rank is to be
taken into account when master selection is done.
If routing a queued query caused it to be put back on the query queue, the
order in which the queue was reorganized was wrong. The first query would
get appended as the last query, which reversed the order of the queue.
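A tiny example of the ordering rule, with std::string standing in for the
real buffer type.

    #include <cassert>
    #include <deque>
    #include <string>

    int main()
    {
        std::deque<std::string> queue = {"Q1", "Q2", "Q3"};

        // Q1 is taken off the queue but routing it has to be postponed.
        std::string q = queue.front();
        queue.pop_front();

        queue.push_front(q);    // correct: the order stays Q1, Q2, Q3
        // queue.push_back(q);  // wrong: the order would become Q2, Q3, Q1

        assert(queue.front() == "Q1");
    }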
The discarding of connections in maintenance mode must be done after any
results have been written to them. This prevents closing of the connection
before the actual result is returned.
The candidate selection code used default values that would cause reads
past the end of buffers. The code could also dereference the end iterator,
which causes undefined behavior.
Previously, runtime monitor modifications could directly alter monitor
fields, which could leave the text-form parameters and reality out of sync.
Also, the configure function was not called for the entire monitor object,
only for the module implementation.
Now, all modifications go through the overridden configure function, which
calls the base-class function. As most configuration changes are given in
text form, this removes the need for specific setters. The only exceptions
are the server add/remove operations, which must modify the text-form
serverlist.
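A simplified sketch of the pattern with made-up class names and a parameter
name chosen only for illustration; the overridden configure() calls the
base-class version first so that the text-form parameters remain the single
source of truth.

    #include <map>
    #include <string>

    using Parameters = std::map<std::string, std::string>;

    class MonitorBase
    {
    public:
        virtual ~MonitorBase() = default;

        // Stores the text-form parameters as the authoritative copy.
        virtual bool configure(const Parameters& params)
        {
            m_params = params;
            return true;
        }

    protected:
        Parameters m_params;
    };

    class MyMonitor : public MonitorBase
    {
    public:
        bool configure(const Parameters& params) override
        {
            if (!MonitorBase::configure(params))   // the base class goes first
            {
                return false;
            }
            // Module-specific settings are derived from the same text form,
            // so the parameters and the runtime state cannot drift apart.
            auto it = params.find("monitor_interval");
            m_interval_ms = (it != params.end()) ? std::stoi(it->second) : 2000;
            return true;
        }

    private:
        int m_interval_ms = 2000;
    };

    int main()
    {
        MyMonitor monitor;
        monitor.configure({{"monitor_interval", "5000"}});
    }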
Queries in the query queue need to be explicitly parsed since they are
stored in a single buffer and thus share the query classification
information. In the next major version this should be changed into an
array of individual buffers instead of a shared buffer.
If a session command is executed when lazy_connect is enabled and no
connections have been created, a connection must be made. This makes sure
that the session isn't closed and that the client receives a response.