The Listener::create method now takes a set of configuration parameters
from which it constructs a listener. This removes the duplicated code and
makes the behavior of listener creation similar to that of other objects
in MaxScale. It also allows the configuration parameters to be stored in
the listener object itself.
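In rough outline (the types and names below are illustrative, not the
actual MaxScale ones):

    // Illustrative sketch only; the real MaxScale types differ. The point
    // is that creation is driven by a single set of configuration
    // parameters which the listener then keeps.
    #include <memory>
    #include <string>

    struct ListenerConfig              // hypothetical parameter set
    {
        std::string name;
        std::string protocol;
        std::string address;
        int         port;
    };

    class Listener
    {
    public:
        // Construct a listener from the parameters and store them in the
        // object itself; return nullptr if the parameters are invalid.
        static std::unique_ptr<Listener> create(const ListenerConfig& config)
        {
            if (config.port <= 0 || config.address.empty())
            {
                return nullptr;
            }
            return std::unique_ptr<Listener>(new Listener(config));
        }

    private:
        explicit Listener(const ListenerConfig& config)
            : m_config(config)
        {
        }

        ListenerConfig m_config;       // stored in the listener itself
    };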
Added a test that makes sure the transaction replay cap is respected. Also
improved the logging to show how many transaction replay attempts have been
made and to log if a replay is not attempted due to too many attempts.
In most cases it is reasonable to stop attempting transaction replays
after a certain number of failed attempts. This prevents transactions from
being repeatedly replayed on the same server over and over again if, for
example, it keeps crashing.
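A minimal sketch of the cap check, assuming hypothetical member names
(the real readwritesplit code differs):

    // Sketch with hypothetical names; the real readwritesplit code differs.
    #include <cstdio>

    struct ReplayState
    {
        int num_replays = 0;   // replay attempts made so far
        int max_replays = 5;   // the configured cap

        // Return true if another replay may be attempted and log the
        // attempt count, or log why the replay is not done.
        bool can_replay()
        {
            if (num_replays >= max_replays)
            {
                std::printf("Not replaying transaction: %d attempts "
                            "already made.\n", num_replays);
                return false;
            }

            ++num_replays;
            std::printf("Transaction replay attempt %d of %d.\n",
                        num_replays, max_replays);
            return true;
        }
    };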
Currently it would be too laborious to use duration suffixes when saving
generated configs and to handle the suffixes when changes are made
dynamically using maxctrl.
Doing that will be trivial once the new configuration mechanism has been
adopted everywhere, which will not happen before MaxScale 2.5.
So, in MaxScale 2.4 duration suffixes will be accepted in manually
created configuration files, but no warning will be logged if a
suffix is not used.
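For example, a hand-written configuration could use a suffix like this
(the monitor_interval parameter is used purely as an illustration):

    [MariaDB-Monitor]
    type=monitor
    module=mariadbmon
    monitor_interval=2500ms

Both 2500ms and a plain 2500 are accepted, and neither form logs a
warning.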
Other routing hints are now considered if the first one fails. The order is
inverted compared to e.g. the namedserver filter settings because of how
routing hints are stored. If all hints are unsuccessful, the query is routed
to any slave.
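In sketch form, with simplified and hypothetical types standing in for
the real ones:

    // Sketch with simplified, hypothetical types; the real code differs.
    #include <string>
    #include <vector>

    struct Hint
    {
        std::string target;
    };

    // Placeholder stand-ins for the actual routing calls.
    static bool route_to_target(const std::string& target)
    {
        return !target.empty();
    }

    static bool route_to_any_slave()
    {
        return true;
    }

    // The hints are walked in their stored order, which is the inverse of
    // e.g. the namedserver filter settings. If every hint fails, fall
    // back to routing to any slave.
    static bool route_with_hints(const std::vector<Hint>& hints)
    {
        for (const auto& hint : hints)
        {
            if (route_to_target(hint.target))
            {
                return true;
            }
        }

        return route_to_any_slave();
    }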
The code that selects which worker to assign the DCB to is now completely
in the Listener class. This removes the need to change the ownership of a
DCB after it has been allocated.
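Sketched with hypothetical types, the idea is roughly:

    // Sketch with hypothetical types: the listener selects the owning
    // worker up front, so the DCB never changes ownership afterwards.
    #include <cstddef>
    #include <utility>
    #include <vector>

    struct Worker
    {
        int id;
    };

    struct DCB
    {
        Worker* owner;   // fixed at creation
        int     fd;
    };

    class Listener
    {
    public:
        explicit Listener(std::vector<Worker*> workers)
            : m_workers(std::move(workers))
        {
        }

        // Pick a worker (round-robin here purely for illustration) and
        // create the DCB already owned by it.
        DCB* accept_connection(int fd)
        {
            Worker* worker = m_workers[m_next++ % m_workers.size()];
            return new DCB{worker, fd};
        }

    private:
        std::vector<Worker*> m_workers;
        std::size_t m_next = 0;
    };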
If
- transaction replay is enabled,
- an error is returned, and
- the error is one of the recoverable Clustrix errors,
we will retry the transaction.
If it succeeds, the client will notice nothing but a short delay.
Note that the error message is looked for irrespective of whether the
backend is Clustrix or not. However, as errors are not common, the cost
of doing so is probably negligible.
However, a bigger problem is that explicit knowledge of different
backends should *not* be coded into routers.
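In sketch form, the check looks roughly like this; the error string and
the helper names are illustrative, not the exact ones used:

    // Sketch only: the error string and helper names are illustrative,
    // not the exact ones used in readwritesplit.
    #include <string>

    // The message is checked regardless of whether the backend actually
    // is Clustrix; as errors are rare, the extra comparison is cheap.
    static bool is_recoverable_clustrix_error(const std::string& errmsg)
    {
        // Hypothetical example of a recoverable error message.
        return errmsg.find("group change in progress") != std::string::npos;
    }

    static bool should_retry_transaction(bool replay_enabled,
                                         const std::string& errmsg)
    {
        return replay_enabled && is_recoverable_clustrix_error(errmsg);
    }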
The DCB callbacks shouldn't be used to send more events as they cause the
callback to be called recursively. The recursive calls caused rows to be
sent before the schemas for the rows were sent. Queuing the events via the
worker mechanism prevents this.
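The idea in simplified form, with hypothetical types standing in for the
actual worker mechanism:

    // Simplified sketch with hypothetical types: instead of sending the
    // next event directly from inside a DCB callback, which would recurse,
    // the event is queued and dispatched later from the event loop.
    #include <deque>
    #include <functional>
    #include <utility>

    class Worker
    {
    public:
        // Queue work to run on the next loop iteration.
        void post(std::function<void()> work)
        {
            m_queue.push_back(std::move(work));
        }

        // Called from the event loop; FIFO order guarantees that a schema
        // event queued first is sent before the row events that follow it.
        void run_pending()
        {
            while (!m_queue.empty())
            {
                auto work = std::move(m_queue.front());
                m_queue.pop_front();
                work();
            }
        }

    private:
        std::deque<std::function<void()>> m_queue;
    };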
The Replicator is now used in the avrorouter as an alternative to the
binlogrouter based setup. This also allows the avrorouter to automatically
handle master failovers and to start replication from GTID coordinates.
Repurposed the Replicator from the CDC integration project as a
replication event processing service. It is similar to the CDC version of
the Replicator and is still in the same namespace but it lacks all of the
cross-thread communication that was a part of the integration project.
If a transaction replay had to be executed twice due to a failure of the
original candidate master, the query queue could contain replayed queries.
The replayed queries would be placed into the queue if a new connection
needed to be created before the transaction replay could start.
Backported the changes that convert the query queue in readwritesplit into
a proper queue. This change combines both
5e3198f8313b7bb33df386eb35986bfae1db94a3 and
6042a53cb31046b1100743723567906c5d8208e2 into one commit.
By passing the raw password deeper into the authentication code, it can be
used to verify the user can access some systems. Right now, this is not
required by the simple salted password comparison done in MaxScale.
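For reference, a sketch of a mysql_native_password style check using
OpenSSL; the function and variable names are illustrative, and note that
it needs only the stored double SHA1 hash, not the raw password:

    // Sketch of a mysql_native_password style check using OpenSSL; the
    // names are illustrative and the actual authenticator code differs.
    #include <openssl/sha.h>
    #include <cstring>

    // token:    client reply, SHA1(password) XOR SHA1(scramble . stored)
    // stored:   SHA1(SHA1(password)) from the user account data
    // scramble: the 20-byte nonce sent to the client
    static bool check_native_password(const unsigned char* token,
                                      const unsigned char* stored,
                                      const unsigned char* scramble)
    {
        unsigned char buf[SHA_DIGEST_LENGTH * 2];
        std::memcpy(buf, scramble, SHA_DIGEST_LENGTH);
        std::memcpy(buf + SHA_DIGEST_LENGTH, stored, SHA_DIGEST_LENGTH);

        unsigned char mask[SHA_DIGEST_LENGTH];
        SHA1(buf, sizeof(buf), mask);        // SHA1(scramble . stored)

        unsigned char step1[SHA_DIGEST_LENGTH];
        for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
        {
            step1[i] = token[i] ^ mask[i];   // recovers SHA1(password)
        }

        unsigned char check[SHA_DIGEST_LENGTH];
        SHA1(step1, sizeof(step1), check);   // should equal stored

        return std::memcmp(check, stored, SHA_DIGEST_LENGTH) == 0;
    }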
By storing the queries in the query queue and routing them once the
transaction replay is done, we prevent two problems:
* Multiple transaction replays would overwrite the m_interrupted_query
buffer that was used to store any queries executed during the
transaction replay.
* Incorrect ordering of queries when the query queue is not empty and a
new query is executed during transaction replay.
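A sketch of the mechanism, with hypothetical names:

    // Sketch with hypothetical names: queries arriving during a replay go
    // to the back of the query queue instead of a single interrupted-query
    // buffer, and the queue is drained in order once the replay finishes.
    #include <deque>
    #include <string>
    #include <utility>

    class Session
    {
    public:
        void start_replay()
        {
            m_replaying = true;
        }

        void handle_query(const std::string& query)
        {
            if (m_replaying || !m_query_queue.empty())
            {
                m_query_queue.push_back(query);   // hold until replay ends
            }
            else
            {
                route(query);
            }
        }

        void on_replay_done()
        {
            m_replaying = false;

            while (!m_query_queue.empty())
            {
                auto query = std::move(m_query_queue.front());
                m_query_queue.pop_front();
                route(query);
            }
        }

    private:
        void route(const std::string&) { /* real routing goes here */ }

        bool m_replaying = false;
        std::deque<std::string> m_query_queue;
    };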
If the session started with no master but one later became available, the
code would unconditionally use the master's name in a log message when a
transaction was started.
Allowing transactions to the master to end even if the server is in
maintenance mode makes it possible to terminate connections at a known
point. This helps prevent interrupted transactions, which reduces the
errors that are visible to the clients.
If a server with zero weight was chosen as the only candidate, it was
possible that the starting minimum value was smaller than the server
score. This would mean that a candidate wouldn't be chosen if the score
was too high. To prevent this, the values are capped to a value smaller
than the initial minimum score.
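In sketch form (hypothetical names):

    // Sketch with hypothetical names: every score is capped strictly
    // below the starting minimum, so some candidate always wins the
    // comparison even when a zero-weight server inflates its score.
    #include <algorithm>
    #include <limits>
    #include <vector>

    struct Server
    {
        double score;
    };

    static Server* choose_candidate(std::vector<Server>& servers)
    {
        Server* best = nullptr;
        double best_score = std::numeric_limits<double>::max();

        for (auto& server : servers)
        {
            double score = std::min(server.score,
                                    std::numeric_limits<double>::max() / 2);
            if (score < best_score)
            {
                best_score = score;
                best = &server;
            }
        }

        return best;
    }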
Queries such as SHOW TABLES FROM db1 are now routed to the backend with db1.
This gives the correct result as long as db1 is not sharded to multiple
backends.
Increasing counter sizes from int to long for averages.
Rename random functions to end with _co instead of _exclusive to
indicate the range [closed, open[, and to allow future suffixes oc, cc
and oo.
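For example (a hypothetical signature that mirrors the naming scheme):

    // Hypothetical signature mirroring the naming scheme: _co marks a
    // closed-open range [min, max[, leaving room for _oc, _cc and _oo.
    #include <random>

    // Returns a value in [min, max[: min included, max excluded.
    // Assumes max > min.
    static int rand_int_co(std::mt19937& gen, int min, int max)
    {
        std::uniform_int_distribution<int> dist(min, max - 1);
        return dist(gen);
    }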
The code only handled the basic version of the command, returning incorrect
results if modifiers were used. The code is now removed, causing the command
to be routed to the backend of the current database. This will give correct
results as long as that backend contains all the tables of the database,
i.e. no table sharding.
Adding the same task twice isn't allowed. The API of the housekeeper tasks
might have to be changed in a way that makes it possible for the caller to
know whether a task has been added.
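One possible shape for such a change (hypothetical, not the actual
housekeeper API):

    // Hypothetical shape for such a change, not the actual housekeeper
    // API: returning a bool lets the caller see that a task with the
    // same name was already present.
    #include <functional>
    #include <map>
    #include <string>
    #include <utility>

    class Housekeeper
    {
    public:
        // Returns false if a task with this name already exists; the
        // existing task is left untouched.
        bool add_task(const std::string& name, std::function<void()> task)
        {
            return m_tasks.emplace(name, std::move(task)).second;
        }

    private:
        std::map<std::string, std::function<void()>> m_tasks;
    };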