If a PS command was routed multiple times, the ID was not reverted to
the external ID in the failure cases. This prevented prepared statements
from being re-routed correctly.
When the connection to the master is broken while the session is not
configured to use the read-only modes and the monitor can still connect
to the server, the connection is closed and an error is sent to the
client. To leave some trace of this problem in the MaxScale logs, a
message should always be logged when a network error occurs.
As the router is the only one that knows what backends a particular
statement has been sent to, it is the responsibility of the router
to keep the session bookkeeping up to date. If it doesn't, we will
know what statements a session has received (provided at least some
component in the routing chain has RCAP_TYPE_STMT_INPUT capability),
but not how long their processing took. Currently only readwritesplit
does that.
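To illustrate the idea, here is a minimal sketch with hypothetical names
(not the actual MaxScale API): the router notes when a statement goes out
to a backend and, at clientReply() time, records how long its processing
took.

    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <map>

    // Hypothetical per-session bookkeeping: statement id -> start time.
    class SessionBookkeeping
    {
    public:
        using Clock = std::chrono::steady_clock;

        // Called by the router when it forwards a statement to a backend.
        void statement_sent(uint64_t id)
        {
            m_started[id] = Clock::now();
        }

        // Called from clientReply() once the complete response has arrived.
        void statement_completed(uint64_t id)
        {
            auto it = m_started.find(id);
            if (it != m_started.end())
            {
                auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                    Clock::now() - it->second).count();
                std::cout << "statement " << id << " took " << ms << "ms\n";
                m_started.erase(it);
            }
        }

    private:
        std::map<uint64_t, Clock::time_point> m_started;
    };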
All queries are stored, not just COM_QUERY, as that makes the overall
bookkeeping simpler; at clientReply() time we do not need to decide
whether to record the information, we can just do it.
When session information is queried for, we report as much information
as we have available.
The causal_reads_timeout default value is too long when considering the
behavioral changes that MXS-2141 introduced. With a 10 second default
value, a result is returned to the client in a reasonable amount of time.
With causal_reads enabled, the query would return with an error if the
slave was not able to catch up to the master fast enough. By automatically
retrying the query on the master, we're guaranteed that a valid result is
always returned to the client.
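A rough sketch of that behavior, using hypothetical types and an assumed
error check rather than the real readwritesplit code: when the causal read
times out on the slave, the same statement is routed once more, this time
to the master.

    #include <string>

    enum class Target { SLAVE, MASTER };

    struct Reply
    {
        bool        is_error;
        std::string message;
    };

    // Assumed check: did the slave fail to catch up within the timeout?
    bool is_causal_read_timeout(const Reply& reply)
    {
        return reply.is_error
               && reply.message.find("timed out") != std::string::npos;
    }

    // Retry on the master when the slave could not catch up in time, so the
    // client always receives a valid result instead of an error.
    Target next_target(const Reply& reply, Target current)
    {
        if (current == Target::SLAVE && is_causal_read_timeout(reply))
        {
            return Target::MASTER;
        }
        return current;
    }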
When a connection to a server is lost and the session command history is
disabled, the session will continue as long as at least one connection is
open. Previously the open connection calculation used the same code that
was used when a new session was created, which only inspected the
configured server count instead of the actual open connection count.
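The fix amounts to counting the connections that are actually open; a
simplified sketch (the RWBackend type and in_use() call are stand-ins for
the real definitions):

    #include <algorithm>
    #include <cstddef>
    #include <memory>
    #include <vector>

    // Simplified stand-in for the real backend type.
    struct RWBackend
    {
        bool open = false;
        bool in_use() const { return open; }
    };

    using SRWBackend = std::shared_ptr<RWBackend>;

    // Count the connections that are actually open instead of the
    // configured server count.
    size_t open_connections(const std::vector<SRWBackend>& backends)
    {
        return static_cast<size_t>(
            std::count_if(backends.begin(), backends.end(),
                          [](const SRWBackend& b) { return b && b->in_use(); }));
    }

    // The session can continue as long as at least one connection is open.
    bool can_continue(const std::vector<SRWBackend>& backends)
    {
        return open_connections(backends) > 0;
    }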
The table creation was not detected as the function used to extract the
table name did not return the fully qualified names. Even if it did return
a fully qualified name, it wouldn't have been correctly processed.
When a read-only transaction fails due to a connection error, no message
was logged. Also added an info level message for the case where a backend
connection is closed before the session is in the correct state, and a
debug assertion that the router session should never be closed when the
handleError method is called.
By biasing the values of all counter-type scores to positive integers,
the server weights are always taken into use. This fixes the case where
weights were ignored until all score base values were larger than zero
(the mxs922_server test).
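A minimal sketch of the idea with made-up names: biasing a zero-based
counter by one keeps the score positive, so the weight always affects the
comparison instead of being multiplied away by a zero.

    #include <cstdint>

    // A zero counter would make every weighted score zero and hide the
    // weights; adding one keeps the score positive so the weight always
    // influences the result (lower score = better candidate).
    double weighted_score(int64_t counter, double inverse_weight)
    {
        return static_cast<double>(counter + 1) * inverse_weight;
    }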
The debug assertion is wrong as the code was changed to prioritize hints
over the router target selection. Also removed the superfluous check for
the master, slave and relay master states, as they are implied by the fact
that the connection is in use.
The readwritesplit transaction management was a large part of the
clientReply function. Moving it into a separate function clarifies the
clientReply function by hiding the comments and details of the transaction
management.
By splitting the processing and state querying into two separate
functions, the result can be inspected multiple times without triggering
the result processing.
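The pattern looks roughly like this (hypothetical class and member
names): one function consumes the data and updates the state, while a
separate const accessor only reports that state and can be called any
number of times without re-processing anything.

    #include <string>

    class ResultTracker
    {
    public:
        // Processes one chunk of data and updates the internal state; a
        // trailing 0xfe byte stands in for the real end-of-result detection.
        void process(const std::string& packet)
        {
            if (!packet.empty() && packet.back() == '\xfe')
            {
                m_complete = true;
            }
        }

        // Only inspects the state; safe to call repeatedly.
        bool is_complete() const
        {
            return m_complete;
        }

    private:
        bool m_complete = false;
    };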
The collection of resultsets needs to be disabled by default when a
response is received to cover the cases where an error is returned.
The collection of results should also not be set for queries that do not
generate any responses.
The read-write distribution in readwritesplit is now stored in a map
partitioned by the servers that the router has used. Currently, the
statistics for removed servers aren't dropped so some filtering still
needs to be added.
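Conceptually the storage looks like the following, with hypothetical
types rather than the actual readwritesplit members: one read/write
counter pair per server the router has used.

    #include <cstdint>
    #include <map>
    #include <string>

    // Hypothetical per-server counters for the read-write distribution.
    struct QueryDistribution
    {
        uint64_t reads = 0;
        uint64_t writes = 0;
    };

    // Keyed by server name; entries for servers removed from the
    // configuration are currently kept, hence the filtering still to come.
    using ServerStats = std::map<std::string, QueryDistribution>;

    void account_read(ServerStats& stats, const std::string& server)
    {
        ++stats[server].reads;
    }

    void account_write(ServerStats& stats, const std::string& server)
    {
        ++stats[server].writes;
    }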
The additions to the server.h header used C++, which caused C programs
to fail to compile. Moved the implementation of the EMAverage
class into the private Server class in the server.hh header and exposed it
via functions in the server.h header. Also temporarily moved
almost_equal_server_scores into the public server.hh as there is no
service.hh header.
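The general technique is to keep the class in the C++-only header and
expose it to C code through plain functions; the sketch below uses the
real header names but assumed function signatures and an assumed
EMAverage implementation.

    /* server.h - included from both C and C++, so C declarations only. */
    #ifdef __cplusplus
    extern "C" {
    #endif

    struct SERVER;

    /* Plain C entry points that forward to the C++ implementation. */
    double server_response_time_average(const struct SERVER* server);
    void   server_add_response_sample(struct SERVER* server, double millis);

    #ifdef __cplusplus
    }
    #endif

    // server.hh - C++ only: an exponentially weighted average (sketch).
    class EMAverage
    {
    public:
        explicit EMAverage(double alpha) : m_alpha(alpha) {}

        void add(double sample)
        {
            m_average = m_have_sample
                        ? m_alpha * sample + (1 - m_alpha) * m_average
                        : sample;
            m_have_sample = true;
        }

        double average() const { return m_average; }

    private:
        double m_alpha;
        double m_average = 0;
        bool   m_have_sample = false;
    };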
See the script directory for the method. The script to run in the top level
MaxScale directory is called maxscale-uncrustify.sh, which uses
another script, list-src, from the same directory (so you need to set
your PATH). The uncrustify version was 0.66.
Changes that allow slow or new servers to quickly contribute samples
towards the server average. The most important changes are to not ignore
the first N samples and to apply an average to the server as soon as one
is available.
The new ResponseStat::make_valid() will use filter samples to add an
average if no averages have yet been added, even if the number of filter
samples is less than the filter limit.
The math becomes simpler when the weight is inverted, i.e. a simple
multiplication gives the (inverse) score. Inverse weights are normalized
to the range [0..1], where a lower number means a higher weight.
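For example, with illustrative numbers and a made-up helper: an inverse
weight of 0.25 quarters a server's average response time before the
comparison, so it beats an equally fast server whose inverse weight is
1.0.

    #include <cassert>

    // Inverse score: the average response time scaled by the normalized
    // inverse weight in [0..1]. Lower is better, so a lower inverse weight
    // (i.e. a higher configured weight) makes the server more attractive.
    double inverse_score(double avg_response_ms, double inverse_weight)
    {
        return avg_response_ms * inverse_weight;
    }

    int main()
    {
        double favored  = inverse_score(8.0, 0.25);  // 2.0
        double standard = inverse_score(8.0, 1.0);   // 8.0
        assert(favored < standard);
        return 0;
    }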
The enum select_criteria_t is used to provide a std::function that takes
the backends as a vector (rather than the prior pairwise compares) and
returns the best backend.
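A hedged sketch of the shape of that mapping, with simplified types and
illustrative enumerators rather than the real signatures:

    #include <algorithm>
    #include <functional>
    #include <vector>

    // Simplified backend: only the data the selection function needs.
    struct Backend
    {
        double avg_response_ms;
        int    connection_count;
    };

    enum select_criteria_t
    {
        LOWEST_RESPONSE_TIME,
        LEAST_CURRENT_OPERATIONS
    };

    using BackendSelect =
        std::function<const Backend*(const std::vector<Backend>&)>;

    // Map the enum to a function that sees all candidates at once and
    // returns the best one, instead of doing pairwise compares.
    BackendSelect get_backend_select(select_criteria_t criteria)
    {
        if (criteria == LOWEST_RESPONSE_TIME)
        {
            return [](const std::vector<Backend>& all) -> const Backend* {
                auto it = std::min_element(
                    all.begin(), all.end(),
                    [](const Backend& a, const Backend& b) {
                        return a.avg_response_ms < b.avg_response_ms;
                    });
                return it == all.end() ? nullptr : &*it;
            };
        }

        return [](const std::vector<Backend>& all) -> const Backend* {
            auto it = std::min_element(
                all.begin(), all.end(),
                [](const Backend& a, const Backend& b) {
                    return a.connection_count < b.connection_count;
                });
            return it == all.end() ? nullptr : &*it;
        };
    }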
This commit refactors slave selection. The compare is still pair-wise but
is isolated into a small run_comparison() function. The function
get_slave_candidate() is used when new connections are created; it was
both moved and modified (the move was required due to scoping), so the
diff is larger than the actual change.
The slave selection for routing: get_slave_backend() now contains the
filtering logic from the old get_slave_backend() and compare_backends(),
the latter of which has been removed.
Backend functions mostly take shared_ptr<SRWBackend> in various forms (as
is, as a const reference, or in a container). Ideally the shared_ptr would
be used only where it is really needed, and either naked pointers or
references to RWBackend would be used elsewhere. This refactor does not
address that issue, but compounds it by using even deeper shared_ptr
structures. Fixing that in a future commit.
The main piece of code, slave selection (backend_cmp_response_time), uses
the existing pair-wise comparison of slaves. This will be changed to
selection using all available slaves, along with removal of hard-coded
values.
This is to support calculating the average from a session, and to allow
the slave selection criteria to route based on averages. This commit, like
the next one, has TODOs which you should feel free to comment on; some
things are still undecided.