It's now possible to specify in the config parameter declaration
that the smallest allowed unit is seconds. For parameters whose
granularity is seconds, allowing a duration to be specified in
milliseconds would open up the possibility of hard-to-detect errors.
Now the desired type must be specified when getting a duration.
The type also dictates how durations without suffixes should be
interpreted.
That removes the need to remember to convert a returned
millisecond duration to a second duration.
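
As a rough illustration of the idea (a sketch with invented names, not
MaxScale's actual API), the unit is carried in the type the caller asks
for, and a value without a suffix is read in that unit:

    #include <cassert>
    #include <chrono>
    #include <cstdlib>
    #include <string>

    // Sketch: parse "10", "10s" or "10ms"; the template argument names the
    // unit the caller wants and the unit used for suffix-less values.
    template <class Duration>
    Duration get_duration(const std::string& value)
    {
        char* end = nullptr;
        long count = std::strtol(value.c_str(), &end, 10);
        std::string suffix(end);

        std::chrono::milliseconds ms(0);
        if (suffix == "ms")
            ms = std::chrono::milliseconds(count);
        else if (suffix == "s")
            ms = std::chrono::seconds(count);
        else
            ms = Duration(count);  // no suffix: use the requested unit
        return std::chrono::duration_cast<Duration>(ms);
    }

    int main()
    {
        // The requested type dictates how a suffix-less value is interpreted,
        // and the returned value is always in the unit that was asked for.
        assert(get_duration<std::chrono::seconds>("10") == std::chrono::seconds(10));
        assert(get_duration<std::chrono::milliseconds>("10") == std::chrono::milliseconds(10));
    }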
The router now considers the other routing hints if the first one fails.
The order is inverted compared to e.g. the namedserver filter settings
because of how routing hints are stored. If all hints are unsuccessful,
the query is routed to any slave.
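
A sketch of the fallback loop, with invented helper names (not the
actual router code):

    #include <optional>
    #include <string>
    #include <vector>

    struct Hint { std::string target; };          // illustrative hint type
    using Server = std::string;

    // Stub: pretend only "server2" currently resolves to a usable target.
    std::optional<Server> find_target(const Hint& hint)
    {
        return hint.target == "server2" ? std::optional<Server>(hint.target)
                                        : std::nullopt;
    }

    Server any_slave() { return "some-slave"; }   // last-resort fallback

    // Hints are tried in stored order; because they are stored in reverse,
    // this is the inverse of the order in the namedserver filter settings.
    Server route_with_hints(const std::vector<Hint>& hints)
    {
        for (const auto& hint : hints)
        {
            if (auto target = find_target(hint))  // next hint is tried on failure
                return *target;
        }
        return any_slave();                       // all hints were unsuccessful
    }

    int main()
    {
        route_with_hints({{"server1"}, {"server2"}});  // resolves to "server2"
    }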
The code that selects which worker to assign the DCB to is now completely
in the Listener class. This removes the need to change the ownership of a
DCB after it has been allocated.
The DCB is now fully allocated on the thread that owns it. This guarantees
that the owner is always correct when it is used.
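
The shape of this, as a self-contained sketch (the types and the inline
execute are stand-ins, not the real Worker API):

    #include <functional>
    #include <vector>

    struct Worker
    {
        // Stand-in for the real cross-thread mechanism: runs the task inline.
        void execute(std::function<void()> task) { task(); }
    };

    struct DCB
    {
        explicit DCB(Worker* w) : owner(w) {}
        Worker* const owner;   // set at construction, never reassigned
    };

    int main()
    {
        std::vector<Worker> workers(4);
        Worker* chosen = &workers[1];   // the Listener picks the worker first

        chosen->execute([chosen]() {
            DCB dcb(chosen);            // allocated on the owning thread
            (void)dcb;
        });
    }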
The code in poll_add_dcb still manipulates which worker the DCB is
assigned to. This needs to be removed and the detection of special needs
(maxadmin, maxinfo) must be moved into the listener.
If
- transaction replay is enabled,
- an error is returned and
- the error is one of the recoverable Clustrix errors
we will retry the transaction.
If the replay succeeds, the client will notice nothing but
a short delay.
Note that the error message is looked for irrespective of whether
the backend is Clustrix or not. However, as errors are not common,
the cost of doing that is probably negligible.
However, a bigger problem is that explicit knowledge of different
backends should *not* be coded into routers.
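
A minimal sketch of the decision itself (the error list is a
placeholder, not the real set of recoverable Clustrix errors):

    #include <set>
    #include <string>

    static const std::set<std::string> recoverable_errors = {
        "placeholder-clustrix-error-1",   // stand-ins for the real codes
        "placeholder-clustrix-error-2",
    };

    // Called for every error from a backend, whatever the backend type;
    // since errors are rare, the extra lookup is effectively free.
    bool should_replay(bool transaction_replay_enabled, const std::string& error)
    {
        return transaction_replay_enabled && recoverable_errors.count(error) > 0;
    }

    int main()
    {
        return should_replay(true, "placeholder-clustrix-error-1") ? 0 : 1;
    }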
If the DCB was closed before the handshake for the LocalClient connection
was received, gw_decode_mysql_server_handshake would use the closed
DCB to log the connection ID. Clearing out the pointer prevents this.
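
The shape of the fix, sketched with invented names:

    // Sketch: the handshake handler only uses the DCB pointer if it is
    // still set; closing the DCB clears the pointer first.
    struct DCB { unsigned session_id; };

    struct LocalClient
    {
        DCB* m_dcb = nullptr;                 // cleared when the DCB closes

        void on_dcb_close() { m_dcb = nullptr; }

        void on_server_handshake()
        {
            if (m_dcb)                        // the DCB may already be gone
            {
                log_connection_id(m_dcb->session_id);
            }
        }

        static void log_connection_id(unsigned) { /* logging elided */ }
    };

    int main()
    {
        LocalClient client;
        client.on_dcb_close();        // DCB closed before the handshake...
        client.on_server_handshake(); // ...so the closed DCB is never touched
    }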
The DCB callbacks shouldn't be used to send more events as they cause the
callback to be called recursively. The recursive calls caused rows to be
sent before the schemas for the rows were sent. Queuing the events via the
worker mechanism prevents this.
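
A toy model of the difference (the Worker here is a stand-in for the
real queuing mechanism):

    #include <functional>
    #include <queue>

    struct Worker
    {
        std::queue<std::function<void()>> tasks;

        // Posting defers the event instead of invoking a callback directly,
        // so nothing runs until the current callback has returned.
        void post(std::function<void()> task) { tasks.push(std::move(task)); }

        void drain()
        {
            while (!tasks.empty())
            {
                auto task = std::move(tasks.front());
                tasks.pop();
                task();
            }
        }
    };

    int main()
    {
        Worker worker;
        worker.post([] { /* send schema */ });
        worker.post([] { /* send rows  */ });
        worker.drain();   // the schema goes out strictly before the rows
    }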
Reduced the default cache size from 40% to 15%. Most cases don't benefit
from that much memory and the defaults have caused problems in live
environments.
If a query spans more than a single packet, it will never be successfully
classified because the complete SQL is never available to the query
classifier. For this reason, it is pointless to cache such queries.
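
The detection itself is cheap; a sketch based on the MySQL wire protocol,
where a payload length of 0xffffff means the statement continues in a
further packet:

    #include <cstdint>

    constexpr uint32_t MAX_PAYLOAD = 0xffffff;

    bool spans_multiple_packets(const uint8_t* header)
    {
        // The first three header bytes are the little-endian payload length.
        uint32_t len = header[0] | (header[1] << 8) | (header[2] << 16);
        return len == MAX_PAYLOAD;
    }

    bool should_cache(const uint8_t* header)
    {
        return !spans_multiple_packets(header);  // pointless to cache these
    }

    int main()
    {
        const uint8_t single[4] = {0x10, 0x00, 0x00, 0x00};  // 16-byte payload
        const uint8_t multi[4]  = {0xff, 0xff, 0xff, 0x00};  // continues further
        return should_cache(single) && !should_cache(multi) ? 0 : 1;
    }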
The avrorouter now uses the Replicator as an alternative to the
binlogrouter based setup. This also allows the avrorouter to automatically
handle master failovers and to start replication from GTID coordinates.
Repurposed the Replicator from the CDC integration project as a
replication event processing service. It is similar to the CDC version of
the Replicator and is still in the same namespace but it lacks all of the
cross-thread communication that was a part of the integration project.
The STL regex implementations have proven to be unreliable on older
systems, and replacing the regex-based version extraction with
hand-written code is less likely to break.
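
Hand-written extraction of a "major.minor.patch" prefix is short and
portable; a sketch, not the exact code:

    #include <cassert>
    #include <cstdlib>

    struct Version { long major, minor, patch; };

    // Parse the leading numbers of a string like "10.4.7-MariaDB" without
    // any regex machinery.
    Version extract_version(const char* str)
    {
        Version v{0, 0, 0};
        char* end = nullptr;
        v.major = std::strtol(str, &end, 10);
        if (*end == '.') v.minor = std::strtol(end + 1, &end, 10);
        if (*end == '.') v.patch = std::strtol(end + 1, &end, 10);
        return v;
    }

    int main()
    {
        Version v = extract_version("10.4.7-MariaDB");
        assert(v.major == 10 && v.minor == 4 && v.patch == 7);
    }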
If 'dynamic_node_detection' has been set to false, then the
Clustrix monitor will not dynamically figure out what nodes are
available, but instead use the bootstrap nodes as such.
With 'dynamic_node_detection' being false, the Clustrix monitor
will do no cluster checks, but simply ping the health port of
each server.
'dynamic_node_detection' specifies whether the Clustrix monitor
should dynamically figure out what nodes there are, or just rely
upon static information.
'health_check_port' specifies the port to be used when performing
the health check ping.
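
As a configuration excerpt (section, server and credential names are
placeholders; the port value is only an example):

    [Clustrix-Monitor]
    type=monitor
    module=clustrixmon
    servers=bootstrap1,bootstrap2
    user=monitor_user
    password=monitor_pw
    dynamic_node_detection=false
    health_check_port=3581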
Added core functionality for UNIX domain sockets in servers. Currently the
address parameter accepts both network addresses and socket paths, but a
separate `socket` parameter is needed.
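
A hypothetical server section putting the socket path in the address
parameter, as currently accepted (all names are placeholders):

    [LocalServer]
    type=server
    address=/tmp/mariadb.sock
    protocol=MariaDBBackend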