By resetting the replay state, the transaction replay can start again on a
new server. This allows the replay process to work when a master server is
shutting down.
By delaying the replay for a second, we give the monitor a small chance to
adapt to master failures. It'll also prevent rapid re-querying if multiple
transaction replay attempts are allowed.
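A rough sketch of the idea (hypothetical names, not the real readwritesplit
code; a real implementation would use the worker's delayed-call mechanism
rather than a detached thread):

    #include <chrono>
    #include <functional>
    #include <thread>

    struct Server {};                              // illustrative stand-in
    Server* pick_new_server() { return nullptr; }  // placeholder target selection

    // Hypothetical helper: run a task after a delay.
    void delayed_call(std::chrono::milliseconds delay, std::function<void()> fn)
    {
        std::thread([delay, fn] {
            std::this_thread::sleep_for(delay);
            fn();
        }).detach();
    }

    struct TrxReplay
    {
        Server* target = nullptr;

        void restart()
        {
            target = nullptr;  // reset the replay state: forget the failed server

            // Wait a second before restarting so that the monitor has a
            // chance to react to the master failure.
            delayed_call(std::chrono::seconds(1), [this] {
                target = pick_new_server();
                // ... re-execute the stored transaction on the new target
            });
        }
    };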
A transaction that just completed will still go through the start_trx_replay
function, as from the client protocol's point of view the transaction is
still open. The debug assertion did not take this into account and would
fail if a successful commit was the last thing done on the master that
failed.
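The corrected condition is roughly of this shape (a hedged sketch; the real
assertion in the router differs in detail):

    #include <cassert>

    struct TrxState
    {
        bool client_trx_open;  // what the client protocol believes
        bool trx_completed;    // a COMMIT already succeeded on the old master
    };

    void start_trx_replay(const TrxState& s)
    {
        // Old check: assert(s.client_trx_open && !s.trx_completed);
        // It fails when a successful commit was the last thing done on
        // the failed master: the client still sees an open transaction.
        assert(s.client_trx_open || s.trx_completed);
        // ... decide whether to replay or just deliver the commit result
    }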
Also fixed the formatting.
If a Galera cluster dropped down to a single node, the last node would not
be considered valid. During the failure of the second to last node, the
master would also temporarily lose its master status.
The behavior was changed to always keep the cluster UUID until the cluster
size drops down to zero. This guarantees that the same cluster is used as
long as possible.
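In rough pseudocode terms, the changed rule looks like this (illustrative
names only, not the monitor's actual code):

    #include <string>

    struct GaleraState
    {
        std::string cluster_uuid;  // UUID of the cluster being tracked
        int cluster_size = 0;      // nodes currently in the cluster
    };

    void update_cluster(GaleraState& state, const std::string& seen_uuid, int seen_size)
    {
        if (seen_size == 0)
        {
            state.cluster_uuid.clear();      // cluster is gone, forget it
        }
        else if (state.cluster_uuid.empty())
        {
            state.cluster_uuid = seen_uuid;  // adopt the first cluster we see
        }
        // Otherwise keep the stored UUID even if the cluster shrinks to a
        // single node, so the same cluster stays in use as long as possible.
        state.cluster_size = seen_size;
    }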
The TOC was added only to the long documentation files to make them easier
to navigate. Also modified the headings for the Avro and Binlog routers and
did some minor cleanup.
When a server is stopping, it'll send an error to the client before
terminating the TCP connection. The code in readwritesplit would detect
this error and create a hangup event on the DCB. This would cause it to
appear as if the TCP connection was broken and the router would
immediately try to reconnect to the same server.
By ignoring the error and allowing the connection to die on its own, we
avoid immediately reconnecting and retrying any transactions on the
stopping server. This increases the chances that the monitor will see it
first and assign the server states correctly before the transaction replay
is attempted.
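Conceptually, the error handling changed along these lines (a hedged sketch;
the exact check in the backend handler differs):

    #include <cstdint>

    constexpr uint16_t ER_SERVER_SHUTDOWN = 1053;  // "Server shutdown in progress"

    bool should_trigger_hangup(uint16_t mysql_errno)
    {
        if (mysql_errno == ER_SERVER_SHUTDOWN)
        {
            // Ignore the error and let the TCP connection die on its own
            // so the monitor can update the server states before any
            // transaction replay is attempted.
            return false;
        }
        return true;  // other errors are handled as before
    }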
The assertion would hold true for a single worker, but it cannot be
guaranteed to hold on a multi-worker system where the statistics are
distributed across the workers.
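As an illustration (hedged, not the actual statistics code), a reader that
sums per-worker counters sees only an approximate snapshot:

    #include <cstdint>
    #include <vector>

    struct WorkerStats
    {
        int64_t opened = 0;
        int64_t closed = 0;
    };

    // Each worker updates only its own slot; the reader sums the slots
    // without synchronization, so the totals are only approximately
    // consistent while traffic is flowing.
    int64_t current_connections(const std::vector<WorkerStats>& per_worker)
    {
        int64_t total = 0;
        for (const auto& w : per_worker)
        {
            total += w.opened - w.closed;
        }
        // With one worker the value is exact; with many workers an
        // exact-equality assertion on it can fire on a torn snapshot.
        return total;
    }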
Enabling the feature by default prevents the master connection from dying
during times when there are very few or no writes. Having a modest ping
interval of 300 seconds serves to minimize the amount of extra work that
both MaxScale and the server have to do while still keeping the
connections in good shape.
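For illustration, an explicitly configured service would look something like
this (the commit makes an equivalent value the default; parameter placement
may differ between versions):

    [RW-Split-Router]
    type=service
    router=readwritesplit
    servers=server1,server2
    user=maxuser
    password=maxpwd
    # Ping connections that have been idle for 300 seconds
    connection_keepalive=300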
Used only with server versions that support it. Using the time limit
ensures that the server interrupts the query at the same point where
Connector-C would cut the connection. This prevents lingering queries.
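For illustration (a hedged sketch, not the actual query code), the limit can
be applied by wrapping the statement; MariaDB 10.1 and later support
max_statement_time:

    #include <string>

    // Make the server interrupt the query at the same point where
    // Connector-C would cut the connection.
    std::string add_query_time_limit(const std::string& sql, int timeout_seconds)
    {
        // Only sent to servers that support SET STATEMENT ... FOR.
        return "SET STATEMENT max_statement_time=" + std::to_string(timeout_seconds)
            + " FOR " + sql;
    }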
Also cleans up some associated error messages.
Rewrote the bug561.sh test as the error_messages test, which covers what the
script tested as well as some new parts that were previously untested. This
revealed a bug in the error messages: MaxScale always returns the database
name in the error.
When connecting to a node, a database can now be optionally given as a
parameter. This makes testing with different databases easier as the need
to use the explicit functions is removed.
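A hedged sketch of the API shape (illustrative names, not the exact
test-framework signatures):

    #include <string>

    struct Connection {};

    // The database can now optionally be given when connecting; callers
    // no longer need the explicit per-database helper functions.
    Connection open_conn(int node, const std::string& db = "test")
    {
        (void)node;  // sketch only: connect to the node and select 'db'
        (void)db;
        return Connection{};
    }

    // Usage: open_conn(0) uses the default database, while
    // open_conn(0, "other_db") selects another one.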
RPM packages by default strip all executables on some systems after
installation. To work around this, the post-install processing needs to be
prevented. This does not affect the post-install scripts used to create the
directories required by MaxScale.
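With CPack's RPM generator, the workaround could be expressed roughly like
this (a hedged sketch of a CMake define, not necessarily the exact change):

    # Disable the RPM __spec_install_post step, which is what strips
    # executables on some systems. MaxScale's own post-install scripts
    # are unaffected by this.
    set(CPACK_RPM_SPEC_MORE_DEFINES "%define __spec_install_post /bin/true")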
Fetching the users may potentially take longer than the watchdog
timeout. To ensure that MaxScale is not killed by systemd, we must
ensure that the notifications are generated also when MaxScale is
synchronously fetching the users.
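A hedged sketch of the intended usage, with a stand-in for the real RAII
type (see the watchdog workaround changes below):

    // Stand-in for the real type: its constructor starts the extra
    // watchdog notifications and its destructor stops them.
    namespace mxs
    {
    struct WatchdogWorkaround
    {
        WatchdogWorkaround()  { /* start secondary notifications */ }
        ~WatchdogWorkaround() { /* stop secondary notifications */ }
    };
    }

    void service_refresh_users()
    {
        mxs::WatchdogWorkaround workaround;  // keeps the systemd watchdog fed

        // ... synchronous, potentially slow fetching of the users ...
    }   // normal per-worker notifications resume here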
The maxscale-system-test/set_env.sh script was slow because it checked
whether the MariaDB start command is 'mysql' or 'mysqld'. Now all versions
can be started with 'mysql', except MySQL 5.5 which is not supported.
Some rearrangements to ensure that what should be private
can be kept private.
- WatchdogNotifier made a friend.
- WatchdogWorkaround defined in RoutingWorker and made a friend.
- mxs::WatchdogWorker defined with 'using'.
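In shape, roughly (a hedged sketch that follows the bullets above; the real
declarations differ in detail):

    class WatchdogNotifier;

    class RoutingWorker
    {
        friend class WatchdogNotifier;  // needs the private notification state

    public:
        class WatchdogWorkaround;       // RAII guard, defined below
        friend class WatchdogWorkaround;

    private:
        void start_watchdog_workaround() {}
        void stop_watchdog_workaround() {}
    };

    class RoutingWorker::WatchdogWorkaround
    {
    public:
        explicit WatchdogWorkaround(RoutingWorker* w)
            : m_w(w)
        {
            m_w->start_watchdog_workaround();
        }
        ~WatchdogWorkaround()
        {
            m_w->stop_watchdog_workaround();
        }
    private:
        RoutingWorker* m_w;
    };

    namespace mxs
    {
    using WatchdogWorkaround = RoutingWorker::WatchdogWorkaround;
    }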
The systemd watchdog mechanism requires notifications at
regular intervals. If a synchronous operation of some kind
is performed by a worker, then those notifications will not
be generated.
This change provides each worker with a secondary thread that
can be used for triggering those notifications when the worker
itself is busy doing other stuff. The effect is that there will
be an additional thread for each worker, but most of the time
that thread will be idle.
So far this adds only the mechanism; in subsequent changes it
will be taken into use.
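A minimal sketch of such a secondary notifier (illustrative only; the real
WatchdogNotifier differs, and the sd_notify() call from
<systemd/sd-daemon.h> is left commented out to keep the sketch
self-contained):

    #include <atomic>
    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <thread>

    class WatchdogNotifier
    {
    public:
        WatchdogNotifier()
            : m_thread([this] { run(); })
        {
        }

        ~WatchdogNotifier()
        {
            {
                std::lock_guard<std::mutex> guard(m_lock);
                m_running = false;
            }
            m_cond.notify_one();
            m_thread.join();
        }

        // The worker arms the notifier before a blocking operation and
        // disarms it afterwards.
        void arm()    { m_armed = true; }
        void disarm() { m_armed = false; }

    private:
        void run()
        {
            std::unique_lock<std::mutex> guard(m_lock);
            while (m_running)
            {
                // Mostly idle: wake up well within the watchdog timeout.
                m_cond.wait_for(guard, std::chrono::seconds(10));

                if (m_armed)
                {
                    // sd_notify(0, "WATCHDOG=1");  // feed the watchdog here
                }
            }
        }

        std::atomic<bool> m_armed{false};
        bool m_running = true;
        std::mutex m_lock;
        std::condition_variable m_cond;
        std::thread m_thread;
    };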