In most cases it is reasonable to stop attempting transaction replays
after a certain number of failed attempts. This prevents a transaction from
being replayed on the same server over and over again if, for example, the
server keeps crashing.
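
As a hedged sketch, the resulting behaviour would be expressed in the service
configuration roughly as below; the parameter names and the value are
assumptions based on the description, not taken from the text, and other
required service parameters are omitted.

    # Hypothetical readwritesplit configuration limiting replay attempts
    [RW-Split-Router]
    type=service
    router=readwritesplit
    transaction_replay=true
    transaction_replay_attempts=5
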
The lazy connection creation reduces the burden that short sessions place
on the backend servers. It also prevents the problems caused by early
disconnections that happen when only one server is used but multiple
connections are created. This does not solve MXS-619 but it does mitigate
it to an acceptable level.
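
A hedged configuration sketch: lazy connection creation is presumably
enabled with a service parameter along these lines (the parameter name is an
assumption, not taken from the text; other required service parameters are
omitted).

    # Hypothetical readwritesplit configuration enabling lazy connections
    [RW-Split-Router]
    type=service
    router=readwritesplit
    lazy_connect=true
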
This commit also changes the weighting algorithm to prefer existing
connections over unopened ones. This helps avoid the flip-flopping that
happens when the absolute scores are very similar. The hard-coded value
might need to be tuned once testing is done.
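
A minimal sketch of that preference, assuming a scoring scheme where a lower
score wins: the score of a backend without an open connection is inflated by
a constant factor, so an already-connected backend wins whenever the raw
scores are close. The names and the factor are illustrative, not the
committed values.

    double adjusted_score(double raw_score, bool has_open_connection)
    {
        constexpr double NO_CONNECTION_PENALTY = 1.5; // hard-coded value, may need tuning
        return has_open_connection ? raw_score : raw_score * NO_CONNECTION_PENALTY;
    }
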
Replaces uses of config_get_param() in modules with either contains() or
get_string(). config_get_param() itself is moved to the internal headers,
as it exposes the internals of a config setting.
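
A sketch of the replacement pattern, using a stand-in type that mimics the
contains()/get_string() interface named above; the class and the parameter
name are placeholders, not the actual MaxScale definitions.

    #include <string>

    struct Params
    {
        bool contains(const std::string& key) const;
        std::string get_string(const std::string& key) const;
    };

    // Module code no longer pokes at the raw parameter object returned by
    // config_get_param(); it asks the parameter collection directly.
    std::string read_address(const Params& params)
    {
        return params.contains("address") ? params.get_string("address") : "";
    }
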
This commit adds a new parameter that, when enabled, prunes the session
command history to a known length. This makes it possible to keep a
client-side pooled connection open indefinitely at the cost of making
reconnections theoretically unsafe. In practice the maximum history length
can be set to a value that encompasses a single session using the pooled
connection with no risk to session state integrity. The default history
length of 50 commands is quite likely to be adequate for the majority of
use-cases.
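
A hedged configuration sketch of the new behaviour with the 50-command
default; the parameter names are assumptions based on the description, not
taken from the text.

    # Hypothetical readwritesplit configuration with a pruned history
    [RW-Split-Router]
    type=service
    router=readwritesplit
    prune_sescmd_history=true
    max_sescmd_history=50
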
By storing the server statistics object inside the session, the lookup
involved in getting a worker-local value is avoided. Since the lookup would
otherwise be done multiple times for a single query, it is beneficial to
cache the value in the session.
As the worker-local value is never deleted, it is safe to store a
reference to it in the session. It is also never updated concurrently so
no atomic operations are necessary.
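
A sketch of that caching; the types are illustrative stand-ins, not the
actual MaxScale classes. The worker-local statistics entry is looked up once
when the session is created and the reference is reused for every query;
because the entry is never deleted and never updated from another thread, a
plain reference and plain increments are safe.

    #include <cstdint>

    struct ServerStats
    {
        uint64_t queries = 0;
    };

    class Session
    {
    public:
        explicit Session(ServerStats& worker_local_stats)
            : m_stats(worker_local_stats) // single lookup, done at session creation
        {
        }

        void on_query_routed()
        {
            ++m_stats.queries; // no atomics needed, the value is worker-local
        }

    private:
        ServerStats& m_stats;
    };
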
The assertion holds true for a single worker but it cannot be guaranteed to
hold on a multi-worker system where the statistics are distributed across
the workers.
Enabling the feature by default prevents the master connection from dying
during periods when there are few or no writes. A modest ping interval of
300 seconds minimizes the extra work that both MaxScale and the server have
to do while still keeping the connections in good shape.
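
A hedged configuration sketch: the 300-second ping interval described above
corresponds to a service-level setting along these lines (the parameter name
is an assumption, not taken from the text; other required service parameters
are omitted).

    # Hypothetical service configuration pinging idle connections every 300 seconds
    [RW-Split-Router]
    type=service
    router=readwritesplit
    connection_keepalive=300
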
The causal_reads_timeout default value is too long when considering the
behavioral changes that MXS-2141 introduced. With a 10-second default
value, a result is returned to the client in a reasonable amount of time.
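
The shorter default expressed as readwritesplit configuration; the
causal_reads parameter shown alongside it is an assumption about how the
feature is enabled, and the value format may differ between MaxScale
versions.

    # causal_reads_timeout with the shorter default, in seconds
    [RW-Split-Router]
    type=service
    router=readwritesplit
    causal_reads=true
    causal_reads_timeout=10
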
The read-write distribution in readwritesplit is now stored in a map
partitioned by the servers that the router has used. Currently, the
statistics for removed servers aren't dropped so some filtering still
needs to be added.
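
An illustrative shape of that per-server distribution; the real router keys
the map by its server objects rather than by name, and the entries for
removed servers are the ones that still need filtering.

    #include <cstdint>
    #include <map>
    #include <string>

    struct Distribution
    {
        uint64_t reads = 0;
        uint64_t writes = 0;
    };

    // One entry per server that the router has used.
    std::map<std::string, Distribution> route_stats;
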
See the script directory for the method. The script to run in the top-level
MaxScale directory is called maxscale-uncrustify.sh; it uses another
script, list-src, from the same directory (so you need to add that
directory to your PATH). The uncrustify version used was 0.66.
The math becomes simpler when the weight is inverted: a simple
multiplication yields the (inverse) score. Inverse weights are normalized to
the range [0..1], where a lower number means a higher weight.
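
A sketch of that simplification, assuming a load metric where lower is
better: the score is the metric multiplied by the normalized inverse weight,
so a server with a higher weight (a lower inverse weight) ends up with a
lower, better score. The names are illustrative.

    double score(double metric, double inverse_weight) // inverse_weight in [0..1]
    {
        return metric * inverse_weight;
    }
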
The enum select_criteria_t is used to provide a std::function that takes
the backends as a vector (rather than the prior pairwise comparisons) and
returns the best backend. This is to support calculating the average from a
session, and to allow the slave selection criteria to route based on those
averages. This commit, like the next one, has TODOs covering still-undecided
things; feel free to comment on them.
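
A sketch of the shape this implies; the type names are illustrative
stand-ins rather than the actual definitions.

    #include <functional>
    #include <vector>

    struct Backend;

    // The selection criterion sees all candidate backends at once and picks
    // the best one, instead of being a pairwise comparator.
    using BackendSelectFunction =
        std::function<Backend*(const std::vector<Backend*>& candidates)>;
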
The configuration updating in readwritesplit was the inspiration for the
mxs::rworker_local type. Due to this, taking it into use simply means that
the type changes from Config to mxs::rworker_local<Config>.
The configuration doesn't need to be contained in a shared pointer, as each
session holds its own version of it. This removes most of the overhead in
configuration reloading. The only thing that's left is any overhead added
by the use of thread-local storage.
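
A sketch of the resulting declaration change and read path; only
mxs::rworker_local is taken from the text, while the surrounding class is
illustrative, the fragment assumes the MaxScale headers that declare Config
and mxs::rworker_local, and the read accessor shown is an assumption.

    class RWSplit
    {
    public:
        const Config& config() const
        {
            // Each worker (and, through it, each session) reads its own
            // copy, so the read path needs no locking and no shared pointer.
            return *m_config;
        }

    private:
        // Before: Config m_config;
        mxs::rworker_local<Config> m_config;
    };
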
By using the worker-local data mechanism, data can be efficiently cached
on the local worker. This avoids all synchronization on reads and only
requires synchronization on a configuration update.
As an additional observation, testing of std::mutex against the MaxScale
SPINLOCK shows that std::mutex far outperforms it even on uncontended
workloads.