It is an error to register the same task multiple times, but
for a maintenance release it is simpler and less risky to simply
ignore such an attempt (which BLR makes).
Allowing a task to be registered anew causes behaviour akin
to a leak.
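As a rough illustration of the ignore-instead-of-error behaviour (the
registry type and names below are assumptions for the sketch, not
MaxScale's actual task API):

    #include <functional>
    #include <map>
    #include <mutex>
    #include <string>

    using Task = std::function<void()>;  // placeholder task type

    static std::mutex registry_lock;
    static std::map<std::string, Task> tasks;

    // Returns true if the task was registered, false if the name was
    // already taken. A duplicate registration is silently ignored
    // instead of being treated as an error or replacing the old task.
    bool register_task(const std::string& name, Task task)
    {
        std::lock_guard<std::mutex> guard(registry_lock);
        return tasks.emplace(name, std::move(task)).second;
    }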
This makes iterating over packets in buffers faster while still
satisfying the requirements for forward iterators. Omitting operator+=
makes it clear that this is not a random access iterator.
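A minimal sketch of such an iterator over a chain of packet buffers
(the types are illustrative, not the actual MaxScale classes):

    #include <iterator>

    struct Packet
    {
        Packet* next;
        // ... payload ...
    };

    class PacketIterator
    {
    public:
        using iterator_category = std::forward_iterator_tag;

        explicit PacketIterator(Packet* p) : m_packet(p) {}

        // Only single-step advance is provided; the absence of
        // operator+= signals that this is not a random access iterator.
        PacketIterator& operator++()
        {
            m_packet = m_packet->next;
            return *this;
        }

        Packet& operator*() const { return *m_packet; }

        bool operator!=(const PacketIterator& rhs) const
        {
            return m_packet != rhs.m_packet;
        }

    private:
        Packet* m_packet;
    };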
Necessary if the firewall should be able to block columns when
'ANSI_QUOTES' is enabled and " is used instead of backticks.
Without this, the following
> set @@sql_mode='ANSI_QUOTES';
> select "ssn" from person;
will not be blocked if the database firewall has been configured
to block the column ssn.
The masking filter will now consider all string arguments to
functions to be fields. This is done in order to prevent bypassing
of the masking with
> set @@sql_mode='ANSI_QUOTES';
> select concat("ssn") from masking;
This may lead to false positives, but that cannot be avoided.
Before this change, the masking could be bypassed simply by
> set @@sql_mode='ANSI_QUOTES';
> select concat("ssn") from person;
The reason is that, as the query classifier is not aware of whether
'ANSI_QUOTES' is enabled, it does not know that what appears above to
be the string "ssn" is actually the field name `ssn`. Consequently,
the select will not be blocked and the result will be returned in
cleartext.
It's now possible to instruct the query classifier to report all string
arguments of functions as fields, which prevents the bypass shown above.
However, this also means that there may be false positives.
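For illustration, enabling the behaviour in the filter section might
look as follows; the parameter name treat_string_arg_as_field is
inferred from this description and should be checked against the
documentation of the release in question:

    [Masking]
    type=filter
    module=masking
    rules=/etc/maxscale-masking-rules.json
    # Assumed parameter name: treat string arguments to functions as fields
    treat_string_arg_as_field=true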
Now considers other routing hints if the first one fails. The order is
inverted compared to e.g. the namedserver filter settings because of how
routing hints are stored. If all hints are unsuccessful, the query is
routed to any slave.
If the DCB was closed before the handshake for the LocalClient connection
was received, gw_decode_mysql_server_handshake would use the closed
DCB to log the connection ID. Clearing out the pointer prevents this.
The DCB callbacks shouldn't be used to send more events as they cause the
callback to be called recursively. The recursive calls caused rows to be
sent before the schemas for the rows were sent. Queuing the events via the
worker mechanism prevents this.
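A minimal stand-in for the queuing pattern (illustrative, not MaxScale's
actual worker API): tasks posted from a callback run only after the
callback has returned, so a schema event is always sent before any row
events queued after it.

    #include <functional>
    #include <queue>

    static std::queue<std::function<void()>> worker_queue;

    // Called from inside a DCB callback: defer the work instead of
    // triggering another callback recursively.
    void post(std::function<void()> task)
    {
        worker_queue.push(std::move(task));
    }

    // Called by the worker between I/O events: runs the deferred tasks
    // in the order they were posted.
    void drain()
    {
        while (!worker_queue.empty())
        {
            worker_queue.front()();
            worker_queue.pop();
        }
    }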
Reduced the default cache size from 40% to 15%. Most cases don't benefit
from that much memory and the defaults have caused problems in live
environments.
If a query spans more than a single packet, it will never be successfully
classified due to the fact that the complete SQL is never available to the
query classifier. For this reason, it is pointless to cache such queries.
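As an illustration, a statement that continues past a single packet can
be recognised from the protocol header alone (a sketch, not the actual
cache code):

    #include <cstdint>

    // The MySQL packet header stores the payload length as a 3-byte
    // little-endian integer. A length of 0xFFFFFF means the statement
    // continues in a following packet, i.e. the SQL here is incomplete.
    bool spans_multiple_packets(const uint8_t* header)
    {
        uint32_t payload_len = header[0] | (header[1] << 8) | (header[2] << 16);
        return payload_len == 0xFFFFFF;
    }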
The STL regex implementations have proven to be unreliable on older
systems, so the regex used for version extraction has been replaced with
hand-written code that is less prone to breakage.
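A minimal sketch of regex-free extraction, assuming a "major.minor.patch"
prefix such as "10.3.8-MariaDB" (not the actual MaxScale implementation):

    #include <cstdlib>

    struct Version
    {
        long major = 0;
        long minor = 0;
        long patch = 0;
    };

    Version parse_version(const char* str)
    {
        Version v;
        char* end;
        v.major = strtol(str, &end, 10);              // "10"
        if (*end == '.')
        {
            v.minor = strtol(end + 1, &end, 10);      // "3"
            if (*end == '.')
            {
                v.patch = strtol(end + 1, &end, 10);  // "8"
            }
        }
        return v;
    }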
If the monitor setting "replication_master_ssl" is set to on, any CHANGE
MASTER TO command will have MASTER_SSL=1. If set to off or unset,
MASTER_SSL is left unchanged to match existing behaviour.
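For illustration, with the setting on a generated command would take the
following form (host and port are placeholders):
> CHANGE MASTER TO MASTER_HOST='10.0.0.2', MASTER_PORT=3306, MASTER_SSL=1;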
The largest part of the code deals with the start of a response. Moving
this into a subfunction makes the function clearer, as it removes a
switch statement nested inside another switch statement.
By processing the packets one at a time, the reply state is updated
correctly regardless of how many packets are received. This removes the
need for the clunky code that used modutil_count_signal_packets to detect
the end of the result set.
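A rough sketch of the per-packet loop; this assumes a helper that splits
one complete MySQL packet off the front of the buffer, and the
surrounding names are illustrative rather than the actual router code:

    GWBUF* packet;

    // Handle one complete packet at a time so the reply state stays
    // correct no matter how the network fragmented the result set.
    while ((packet = modutil_get_next_MySQL_packet(&buffer)))
    {
        update_reply_state(packet);  // OK, EOF and error packets advance the state
        forward_to_client(packet);
    }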
The new `force=yes` option closes all connections to the server that is
being put into maintenance mode. The open connections are closed
immediately, without waiting for any pending results to return.
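As a usage illustration (the MaxCtrl flag is an assumption based on the
option name; the REST API form would be force=yes):
> maxctrl set server server1 maintenance --force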
Given the assumption that queries are rarely 16MB long and that
realistically the only time that happens is during a large dump of data,
we can limit the size of a single read to at most one MariaDB/MySQL packet
at a time. This change allows the network throttling to engage a lot
sooner and reduces the maximum overshoot of the throttling to 16MB.
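A simplified sketch of reading at most one packet per call, assuming a
blocking socket and ignoring short reads (not the actual MaxScale read
path):

    #include <cstdint>
    #include <unistd.h>
    #include <vector>

    // Read the 4-byte MySQL packet header, then at most one payload.
    // The payload length is a 3-byte little-endian integer, so a single
    // read can never pull in more than one packet (~16MB).
    std::vector<uint8_t> read_one_packet(int fd)
    {
        uint8_t header[4];
        if (read(fd, header, 4) != 4)
        {
            return {};
        }

        uint32_t len = header[0] | (header[1] << 8) | (header[2] << 16);
        std::vector<uint8_t> payload(len);
        read(fd, payload.data(), len);
        return payload;
    }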
By logging the connection ID for each created connection, failures can be
traced back from the backend server all the way up to the client
application.
If a transaction replay has to be executed twice due to a failure of the
original candidate master, the query queue could contain replayed
queries. The replayed queries would be placed into the queue if a new
connection needed to be created before the transaction replay could start.
Backported the changes that convert the query queue in readwritesplit into
a proper queue. This change combines both
5e3198f8313b7bb33df386eb35986bfae1db94a3 and
6042a53cb31046b1100743723567906c5d8208e2 into one commit.