Fixed a leak in load_utils.cc and the cache filter. Also replaced all
instances of json_object_set with json_object_set_new to make sure the
reference-stealing variant is used wherever the reference is meant to be
stolen.
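For context, a minimal sketch of the difference between the two Jansson
calls (the surrounding code is illustrative, not the actual MaxScale
source):

    // json_object_set() only borrows the value: the caller keeps its
    // reference and must eventually json_decref() it, otherwise the
    // value leaks.
    json_t* obj = json_object();
    json_t* value = json_string("hello");
    json_object_set(obj, "key", value);
    json_decref(value);

    // json_object_set_new() steals the reference, so no separate
    // json_decref() of the value is needed (or allowed).
    json_object_set_new(obj, "key", json_string("hello"));
    json_decref(obj);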
The call to gwbuf_length will fail if the pointed-to buffer isn't the head
of the chain. To prevent this, the length is calculated before the buffer
is appended.
Also fixed the use of gwbuf_append. The return value should be assigned,
and the code shouldn't rely on the value passed to the function being
correct.
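A rough sketch of the corrected pattern (the variable names are
illustrative and the gwbuf_* behavior is assumed from the description
above):

    // gwbuf_length() must be called on the head of a chain, so take the
    // length while 'extra' is still a chain of its own.
    size_t extra_len = gwbuf_length(extra);

    // gwbuf_append() returns the head of the resulting chain, so the
    // return value must be assigned instead of trusting the pointer that
    // was passed in.
    head = gwbuf_append(head, extra);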
In some cases the dbfwfilter is too strict and SQL that would not match a
rule is blocked due to it not being fully parsed. To allow a more lenient
mode of operation, the requirement for full parsing must be made
configurable.
By incrementing the counters when the session is created, we know that the
counter will always be decremented correctly. This does cause the listener
session to be counted as an actual session, but this is already the case in
the statistics calculations and is something we have to live with in 2.3.
This change also makes it possible to overshoot the connection count
limitation, as the session creation is delayed until authentication
fails. Both of these problems are fixed in 2.4.
The events are similar to normal query events except that they have an
extra 13 bytes of static data. This data is of no relevance to MaxScale
and thus can be ignored. This also allows the reuse of the same query
event code for execute load query events.
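As an illustration, assuming the extra bytes sit at the start of the
event-specific data, the reuse could look roughly like this (the names and
parsing details are made up, not the actual MaxScale code):

    // EXECUTE_LOAD_QUERY events carry 13 extra bytes of fixed data
    // (file id, start and end offsets of the file name and a
    // duplicate-handling flag) on top of a normal query event. The
    // filter ignores them, so the rest can be fed to the existing
    // query event handling.
    constexpr size_t EXEC_LOAD_QUERY_EXTRA = 13;

    void handle_execute_load_query(const uint8_t* body, size_t len)
    {
        handle_query_event(body + EXEC_LOAD_QUERY_EXTRA,
                           len - EXEC_LOAD_QUERY_EXTRA);
    }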
The use of a regular expression allows multiple rewrite rules to be
combined into one. This allows more versatile conversions but, given the
simple nature of regular expressions, also makes accidental changes more
likely.
Added mxs::pcre2_substitute, a more C++-friendly version of
mxs_pcre2_substitute. This makes string replacement a lot easier to do
when the source and destination are not C strings.
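The actual mxs::pcre2_substitute interface is not shown here; purely as an
illustration, a std::string-based wrapper around PCRE2 could look something
like this:

    #define PCRE2_CODE_UNIT_WIDTH 8
    #include <pcre2.h>
    #include <string>
    #include <vector>

    // Replace all matches of 're' in 'subject' with 'replacement',
    // growing the output buffer until PCRE2 reports it is big enough.
    bool substitute(const pcre2_code* re, const std::string& subject,
                    const std::string& replacement, std::string* out)
    {
        pcre2_match_data* mdata = pcre2_match_data_create_from_pattern(re, nullptr);
        std::vector<PCRE2_UCHAR> buf(subject.size() + 16);
        PCRE2_SIZE size = buf.size();
        int rc;

        while ((rc = pcre2_substitute(
                    re, (PCRE2_SPTR)subject.c_str(), subject.size(), 0,
                    PCRE2_SUBSTITUTE_GLOBAL | PCRE2_SUBSTITUTE_OVERFLOW_LENGTH,
                    mdata, nullptr,
                    (PCRE2_SPTR)replacement.c_str(), replacement.size(),
                    buf.data(), &size)) == PCRE2_ERROR_NOMEMORY)
        {
            buf.resize(size);      // 'size' now holds the required length
            size = buf.size();
        }

        if (rc >= 0)
        {
            out->assign((const char*)buf.data(), size);
        }

        pcre2_match_data_free(mdata);
        return rc >= 0;
    }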
When rewrite_src and rewrite_dest have different lengths, the slave must
use GTID based replication. This removes the need for one-to-one matching
between the slave's relay log and the master's binlog which gets broken
when event lengths are modified due to event rewriting.
The replication events use a redundant format that has both the length of
the event and the position of the next event. The length can be modified
so that the next event position of the previous event and the length of
the current event can be different. This includes overlap of the events,
where the next event position of an event is "inside" the current event.
The next event position must retain its original value as that allows
replication slaves to reconnect with the correct position when file and
position based replication is used. For GTID replication, the slave asks
for the coordinates from the master and uses those.
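As an illustration, the standard 19-byte replication event header carries
both fields; a sketch of patching only the length while leaving the next
event position untouched (the helper name is made up):

    #include <cstdint>
    #include <cstring>

    // Replication event header layout:
    //   0..3   timestamp
    //   4      event type
    //   5..8   server id
    //   9..12  event length
    //   13..16 next event position
    //   17..18 flags
    void set_event_length(uint8_t* header, uint32_t new_length)
    {
        // Only the length field is rewritten; the next event position at
        // offset 13 keeps its original value so that file-and-position
        // based slaves can still reconnect with valid coordinates.
        uint8_t le[4] = {uint8_t(new_length),
                         uint8_t(new_length >> 8),
                         uint8_t(new_length >> 16),
                         uint8_t(new_length >> 24)};
        memcpy(header + 9, le, sizeof(le));
    }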
When a slave receives a heartbeat event from a master, it checks that the
binlog name matches and that the next event position in the event is not
behind the slave's relay log position. These events must be modified to
contain a fake next event position that will never be reached by the
slave. This makes sure that the simple sanity checks never fail even if
we've caused the slave's relay log to be ahead of the master's binlog.
The parameters allow rudimentary database rewriting in the replication
stream. This is still very limited as the replacement must have the same
length as the original. In theory it could be shorter without causing
problems but making it longer is not easy.
The binlogfilter needs to read results one packet at a time but it needs
resultsets to be collected into a single buffer. This behavior is
guaranteed implicitly when the binlogrouter is used but is not present
when the filter is used without it. To support the use of the binlogfilter
with readconnroute, the filter must properly declare the capabilities.
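In MaxScale a filter states these requirements through its capability
flags. A rough sketch of such a declaration (the specific RCAP_TYPE_* flag
names here are an assumption, not taken from the binlogfilter source):

    // Capability flags reported by the filter module. The two flags are
    // assumed names for "deliver output one packet at a time" and
    // "collect resultsets into a single buffer"; the real binlogfilter
    // may use a different combination.
    uint64_t BinlogFilter::getCapabilities()
    {
        return RCAP_TYPE_STMT_OUTPUT | RCAP_TYPE_RESULTSET_OUTPUT;
    }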
If an existing cache entry should be updated, but the new value
is larger than the maximum size of the cache, then the cache cannot
be updated, but the old value must be removed.
Whether or not we succeed in removing the entry, an error result
must be returned. Earlier, OK was returned but no node was
allocated, which then caused a crash.
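A condensed sketch of the corrected logic (the cache interface and names
below are illustrative, not the actual cache filter code):

    // If the new value no longer fits, the stale entry is removed and an
    // error is reported regardless of whether the removal succeeded.
    // Returning OK without allocating a node is what previously caused
    // the crash.
    cache_result_t update_entry(Cache& cache, const Key& key, const Value& value)
    {
        if (value.size() > cache.max_size())
        {
            cache.remove(key);            // best effort, result not checked
            return CACHE_RESULT_ERROR;    // never report OK here
        }

        return cache.put(key, value);
    }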
The `global` parameter causes the time window defined by the `time`
parameter to be applied at the instance level instead of the session
level. This means that a write from one connection will cause all other
connections to use the master for a certain period of time.
Using a configurable time window for consistency is not good as it is not
absolute and cannot adjust to how servers behave.
One example that demonstrates this is when a slave is normally lagging
behind by less than a second but some event causes the lag to spike up to
several seconds. In this case the configured time window would no longer
guarantee consistency.
Another reason to avoid a "static" time window is the fact that it
prevents load balancing in the cases where slaves catch up to the master
within the time window. This happens when `time` is configured to a higher
value to avoid inconsistencies at all costs.
Added a test case that verifies that the feature works.