When fake hangup events were delivered via DCBs, the current DCB was not
updated. This caused error messages to be logged without a session ID,
which makes failure analysis harder.
It was possible that a one-second outage that caused immediate rejection
of network connections would exhaust all of the query retry attempts
within a very short period of time. By preventing rapid reconnections,
query_retries becomes a more effective error-filtering mechanism.
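In outline, with illustrative names and delay values (not the actual
implementation):

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    class ReconnectGuard
    {
    public:
        // Refuse reconnection attempts that come too soon after the
        // previous one, so a short outage cannot burn through every
        // retry at once.
        bool can_reconnect()
        {
            auto now = Clock::now();
            if (now - m_last_attempt < std::chrono::seconds(1))
            {
                return false;
            }
            m_last_attempt = now;
            return true;
        }

    private:
        Clock::time_point m_last_attempt{};
    };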
If a COM_STMT_EXECUTE contains no parameter metadata and has more than one
parameter, it must be routed to the same backend as the previous
COM_STMT_EXECUTE with the same statement ID. This avoids MDEV-19811, which
is triggered when MaxScale routes the executes to different backends.
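A sketch of the rule (types and names are illustrative; in the protocol,
"no metadata" corresponds to the new-params-bound flag being 0):

    #include <cstdint>
    #include <unordered_map>

    struct Backend;

    // Where each prepared statement was last executed.
    std::unordered_map<uint32_t, Backend*> last_execute_target;

    Backend* route_stmt_execute(uint32_t stmt_id, uint16_t param_count,
                                bool has_metadata, Backend* candidate)
    {
        // Without the parameter metadata, a backend that has not seen
        // the earlier COM_STMT_EXECUTE cannot interpret the parameters
        // (MDEV-19811), so reuse the previous target.
        if (!has_metadata && param_count > 1)
        {
            auto it = last_execute_target.find(stmt_id);
            if (it != last_execute_target.end())
            {
                return it->second;
            }
        }
        last_execute_target[stmt_id] = candidate;
        return candidate;
    }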
By having a separate FINISHED state and a STOPPED state, it is possible to
know at which point in a worker's lifetime an event takes place. Posting
messages before a worker has started is allowed, but posting them after
the worker has stopped is not.
This fixes the avrorouter-related failures and all other failures that
stem from worker messages being ignored at startup.
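In outline (the names are assumed for illustration; the real Worker class
is more involved):

    #include <atomic>

    enum class WorkerState
    {
        CREATED,    // not yet started; posting is allowed
        STARTED,    // event loop running
        FINISHED,   // event loop has returned
        STOPPED     // fully stopped; posting is an error
    };

    class Worker
    {
    public:
        // Messages posted before the worker starts are queued and
        // processed once the event loop runs; after the worker has
        // stopped they would be silently ignored, so reject them.
        bool post(/* Message msg */)
        {
            return m_state.load() != WorkerState::STOPPED;
        }

        bool is_stopped() const
        {
            return m_state.load() == WorkerState::STOPPED;
        }

    private:
        std::atomic<WorkerState> m_state{WorkerState::CREATED};
    };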
By stopping the REST API before the workers and moving the shutdown to the
same worker that handles REST API requests, we prevent the hang on
shutdown. This also makes the signal handler signal-safe.
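The handler itself is then reduced to something async-signal-safe; a
sketch of the pattern (helper names are hypothetical):

    #include <csignal>

    // A signal handler may only use async-signal-safe facilities, so
    // it merely records the request.
    static volatile std::sig_atomic_t shutdown_requested = 0;

    extern "C" void sigterm_handler(int)
    {
        shutdown_requested = 1;
    }

    // The worker that serves REST API requests polls the flag from
    // its event loop and performs the real shutdown: the REST API is
    // stopped first, then the remaining workers.
    void rest_api_worker_tick()
    {
        if (shutdown_requested)
        {
            // stop_rest_api();
            // stop_all_workers();
        }
    }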
If a worker has been stopped, tasks must not be executed on it. To prevent
this, the calling code should check whether the worker has been
stopped. This does not prevent the case where a message is successfully
posted to a worker but the worker is stopped before it processes it.
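With a worker like the one sketched earlier, the caller-side guard and the
remaining race look like this:

    void run_on_worker(Worker& worker)
    {
        if (!worker.is_stopped())
        {
            // The worker can still stop between the check and the
            // processing of the message: a successfully posted message
            // may never be handled.
            worker.post();
        }
    }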
In MaxScale, a "deprecated" parameter is not in use and can be ignored.
Leaving such parameters out of serialized configuration files avoids
warning messages.
It is an error to register the same task multiple times, but for a
maintenance release it is simpler and less risky to ignore such an
attempt (which BLR makes) than to change the behaviour.
Allowing a task to be registered anew causes behaviour akin
to a leak.
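A minimal sketch of the tolerant registration (names assumed):

    #include <map>
    #include <string>

    struct Task;

    std::map<std::string, Task*> registered_tasks;

    // A name that is already registered is ignored instead of being
    // added again, which would effectively leak the earlier entry.
    bool register_task(const std::string& name, Task* task)
    {
        return registered_tasks.emplace(name, task).second;
    }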
Before this change, the masking could be bypassed simply by
> set @@sql_mode='ANSI_QUOTES';
> select concat("ssn") from person;
The reason is that the query classifier is not aware of whether
ANSI_QUOTES is enabled, so it does not know that what appears above to be
the string "ssn" is actually the field name `ssn`. Consequently, the
SELECT is not blocked and the result is returned in cleartext.
It is now possible to instruct the query classifier to report all string
arguments of functions as fields, which prevents the bypass above.
However, it also means that there may be false positives, since genuine
string arguments are then treated as field names as well.
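For the masking filter this is exposed as a filter parameter; a
configuration sketch (the rule file path is illustrative and the
parameter name, treat_string_arg_as_field, is an assumption here):

    [MyMasking]
    type=filter
    module=masking
    rules=/etc/maxscale/masking_rules.json
    treat_string_arg_as_field=true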
Reduced the default cache size from 40% to 15%. Most workloads don't
benefit from that much memory, and the old default caused problems in
live environments.
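Assuming this refers to the query classifier cache, the size can still be
set explicitly where a workload benefits from a larger cache:

    [maxscale]
    query_classifier_cache_size=1G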
If a query spans more than a single packet, it can never be successfully
classified, because the complete SQL is never available to the query
classifier. For this reason, it is pointless to cache such queries.
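The detection is cheap, since the MariaDB/MySQL packet header encodes the
payload length; a sketch:

    #include <cstdint>

    // Packet header: 3-byte little-endian payload length followed by
    // a 1-byte sequence number. A payload length of 0xFFFFFF means the
    // statement continues in the next packet.
    inline uint32_t payload_length(const uint8_t* header)
    {
        return header[0] | (header[1] << 8) | (header[2] << 16);
    }

    inline bool spans_multiple_packets(const uint8_t* header)
    {
        return payload_length(header) == 0xFFFFFF;
    }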
The new `force=yes` option closes all connections to the server that is
being put into maintenance mode. Open connections are closed immediately,
without waiting for pending results to be returned.
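Usage sketch (assuming the corresponding maxctrl flag is named --force):

    maxctrl set server db-server-1 maintenance --force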
Given the assumption that queries are rarely 16MB long and that
realistically the only time that happens is during a large dump of data,
we can limit the size of a single read to at most one MariaDB/MySQL packet
at a time. This change allows the network throttling to engage much
sooner and reduces the maximum throttling overshoot to 16MB.
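A sketch of a packet-bounded read (POSIX I/O, partial-read handling
trimmed):

    #include <unistd.h>

    #include <cstdint>
    #include <vector>

    // Read at most one MariaDB/MySQL packet: a 4-byte header followed
    // by up to 0xFFFFFF bytes of payload. Capping each read at one
    // packet lets throttling react after at most 16MB instead of after
    // whatever happened to be readable.
    bool read_one_packet(int fd, std::vector<uint8_t>& out)
    {
        uint8_t header[4];
        if (read(fd, header, sizeof(header)) != ssize_t(sizeof(header)))
        {
            return false;
        }
        uint32_t len = header[0] | (header[1] << 8) | (header[2] << 16);
        out.assign(header, header + sizeof(header));
        out.resize(sizeof(header) + len);
        return read(fd, out.data() + sizeof(header), len) == ssize_t(len);
    }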
The load_persisted_configs parameter now controls whether persisted
runtime changes are loaded on startup. The files are still generated, as
persisting the current state of MaxScale makes problem analysis easier.
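To start with only the static configuration:

    [maxscale]
    load_persisted_configs=false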
The function allocated a constant-sized chunk of memory for every
message, which was excessive as well as potentially dangerous when used
with large strings.
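The usual fix is the two-pass vsnprintf pattern, sketched here (not
necessarily the exact change that was made):

    #include <cstdarg>
    #include <cstdio>
    #include <string>

    std::string format_message(const char* fmt, ...)
    {
        va_list args;
        va_start(args, fmt);
        va_list copy;
        va_copy(copy, args);
        // First pass: measure the formatted length without writing.
        int len = vsnprintf(nullptr, 0, fmt, args);
        va_end(args);
        std::string msg;
        if (len > 0)
        {
            // Second pass: format into a buffer of exactly that size.
            msg.resize(len);
            vsnprintf(&msg[0], len + 1, fmt, copy);
        }
        va_end(copy);
        return msg;
    }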