The common monitor parameters are now stored as module-style
parameters. This makes the error reporting as well as the type checks for
the parameters consistent with parameters declared by the modules.
The same mechanism that is used for modules can be used for the
configuration of the core objects. This removes the need for redundant
code that validates various values, as that validation is already present
in the code that modules use.
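As a rough illustration of the idea (the names and types below are
illustrative, not the actual MaxScale declarations), a module-style
parameter table lets the core perform the type checks and produce
consistent error messages:

    // Hypothetical sketch of a module-style parameter table.
    #include <string>
    #include <vector>

    enum class ParamType { COUNT, BOOL, STRING, ENUM };

    struct ParamSpec
    {
        std::string name;          // parameter name as it appears in the config
        ParamType   type;          // used for generic type checking
        std::string default_value; // applied when the parameter is not set
    };

    // Declaring monitor parameters the same way module parameters are declared
    // lets the core do the validation and report errors consistently.
    static const std::vector<ParamSpec> monitor_params =
    {
        {"monitor_interval",      ParamType::COUNT, "2000"},
        {"backend_read_timeout",  ParamType::COUNT, "3"},
    };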
The auto_failover feature is a more reliable solution and should be used
instead. Several unused parameters were removed, although they can still be
defined in the config file. Updated the documentation on the relevant parts.
Replaced router_options with configuration parameters in the createInstance
router entry point. The same needs to be done for the filter API, as barely
any filters use the feature.
Some routers (binlogrouter) still support router_options but using it is
deprecated. This had to be done as their use wasn't deprecated in 2.2.
To prepare the routers for the eventual removal of the router_options
parameter, the API option arguments should not be used. The parameters can
be substituted by tokenizing the value of the parameter that is still
stored as part of the service.
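A minimal sketch of such tokenization, assuming a simple comma-separated
key=value format for router_options (the helper name is hypothetical):

    #include <map>
    #include <sstream>
    #include <string>

    // Turn "key1=value1,key2=value2" into a parameter map so the router
    // no longer depends on the API option arguments.
    std::map<std::string, std::string> tokenize_router_options(const std::string& options)
    {
        std::map<std::string, std::string> params;
        std::istringstream stream(options);
        std::string token;

        while (std::getline(stream, token, ','))
        {
            auto pos = token.find('=');
            if (pos != std::string::npos)
            {
                params[token.substr(0, pos)] = token.substr(pos + 1);
            }
        }

        return params;
    }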
If a router parameter had no default value, the previous value would be
returned as an empty string. A debug assertion would be triggered when a
parameter of this type was altered.
When a new router parameter is encountered and the alteration fails, the
modified value in the service needs to be removed. Previously, the new
value would have been stored in the service with an empty value, which
would have caused problems.
The schemarouter now also uses versioned configurations implemented by
shared pointers to configuration objects. Moved all the configuration
management into the Config class. Removed router options from
schemarouter.
Extended the test to cover modifications to readconnroute as well as to
check that invalid parameters are detected.
Also allowed modifications to router_options at runtime.
Readconnroute can now be configured at runtime. The changes to
configuration processing allow the removal of router_options now that the
parameters are parsed inside the router.
Transaction migration in the case of a changed master never worked, as
transaction replay would only be triggered when the master failed. To cover
this case, transaction replay just needs to be started when the need
for a transaction migration is detected.
To help diagnose the behavior, the Trx class no longer logs a message when
a transaction is closed. This is now done by readwritesplit which has more
knowledge of the context in which the transaction is closed.
Moved transaction statistics calculations into a member function and
placed all target type specific processing into their respective
functions.
Also inverted the connection keepalive check so that it also covers hinted queries.
The parameter accepts both counts and percentages, which requires special
handling in the router. This needs to be done whenever the configuration is
updated.
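A sketch of how such a value could be resolved, assuming "50%" means a
share of the available servers and a plain number an absolute count (the
function name and exact rules are assumptions):

    #include <cstdlib>
    #include <string>

    // Convert a configured value into a concrete server count. The same
    // string can resolve to a different number whenever the configuration
    // or the set of servers changes.
    int resolve_target_count(const std::string& value, int n_servers)
    {
        if (!value.empty() && value.back() == '%')
        {
            int pct = std::atoi(value.substr(0, value.size() - 1).c_str());
            return n_servers * pct / 100;
        }

        return std::atoi(value.c_str());
    }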
If a transaction is replayed, queued commands must not be processed. The
exception to this rule is when pending session commands are executed
before the first statement in the replayed transaction is executed.
If transaction replaying was enabled and a result was returned in more
than one call to clientReply, a NULL value would be added to the statement
which in turn would trigger a debug assertion.
Similarly, any following statements in the transaction would be executed
regardless of whether the result was complete.
Renamed the statement execution function to better describe what it does.
Extended the basic functional test case to cover this.
Added a new router API entry point that allows configuration changes after
the instance has been created. This makes alterations to most service
parameters at runtime possible.
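A minimal sketch of what such an entry point might look like; the actual
signature in the router API may differ:

    // Hypothetical shape of the new entry point: the core hands the updated
    // parameters to the router instance, which decides whether it can apply
    // them without being destroyed and recreated.
    struct ConfigParams;   // parsed configuration parameters (placeholder)

    class RouterInstance
    {
    public:
        // Returns true if the new configuration was taken into use.
        virtual bool configure(ConfigParams* params) = 0;
        virtual ~RouterInstance() = default;
    };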
An alternative to reconfiguration would have been the creation of a new
service and the eventual destruction of the old one. This would be a more
complicated and costly method, but from an architectural point of view it
is interesting.
The actual implementation of the configuration change is left to the
router. Currently, only readwritesplit performs reconfiguration as
implementing it with versioned configurations is very easy.
Versioned configurations can be considered an adequate first step, but they
are not an optimal solution as they cause a bottleneck in the reference
counting of the shared configuration. Thread-specific configuration
definitions would make for a more efficient solution, but the
implementation is more complex.
By using a shared pointer instead of a plain object, we can replace the
router configuration without it affecting existing sessions. This is a
change that is required to enable runtime reconfiguration of
readwritesplit.
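A simplified illustration of the approach, not the actual readwritesplit
code: the instance holds a shared pointer to the current configuration,
each session copies the pointer when it starts, and reconfiguration just
swaps in a new object:

    #include <memory>
    #include <mutex>

    struct Config
    {
        int max_slave_connections = 255;  // example parameter
    };

    class Router
    {
    public:
        void reconfigure(const Config& new_config)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_config = std::make_shared<Config>(new_config);
        }

        std::shared_ptr<const Config> current_config() const
        {
            std::lock_guard<std::mutex> guard(m_lock);
            return m_config;   // a session keeps this for its whole lifetime
        }

    private:
        mutable std::mutex m_lock;
        std::shared_ptr<const Config> m_config = std::make_shared<Config>();
    };

Existing sessions keep the version they started with while new sessions
pick up the latest one; the shared reference count is what can become a
bottleneck compared to thread-specific copies.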
Previously, the status bits were assigned only to running servers. Due
to the changes done in the monitoring algorithm, the slave and master
status bits are now assigned to servers that are down. This change broke a
number of tests and deviates from previous behavior.
To keep the old behavior and to fix the tests, the status bits are not
assigned to servers that are down.
The runtime configuration of a MaxScale can now be exported to a single
file. This allows modifications made via runtime configuration commands to
be "committed" for later use.
With the global configuration parameter 'query_classifier_cache'
the query classification cache can be turned on. At the moment it
does not matter what value it has; its presence simply enables the
caching.
Eventually you will be able to specify how much memory the cache
is allowed to consume.
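Assuming the standard maxscale.cnf syntax, enabling it could look like this
(the value itself is currently ignored):

    [maxscale]
    query_classifier_cache=true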
Now takes a structure that, if present, enables the query
classification caching and specifies the properties of the
cache.
For the time being, no actual properties are available.
The sql mode affects the result of the query classification.
Consequently, we cannot use the cached result if it was generated
when the sql mode was something other than what it currently is.
The mapping from a canonical statement to the query classification
result is maintained by the class QCInfoCache, of which there exists
an instance per thread. That way no locking is needed, but the
information will be cached multiple times (a smaller price to pay).
Currently the information is stored in a regular
std::unordered_map, which means that the amount of memory consumed
will just keep on growing unless the number of canonical
statements used by clients happens to have an upper bound.
The LRU cache used in the cache filter, which provides a means of
putting a bound on both the amount of memory used and the number of
items, will be generalized and taken into use here as well.
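A simplified sketch of the idea (QCInfoCache itself stores richer
information; the names below are illustrative):

    #include <string>
    #include <unordered_map>

    struct ClassificationInfo
    {
        int type_mask = 0;   // e.g. read/write classification bits
        int sql_mode  = 0;   // mode the result was generated under
    };

    class QCInfoCacheSketch
    {
    public:
        const ClassificationInfo* peek(const std::string& canonical, int sql_mode) const
        {
            auto it = m_cache.find(canonical);
            // A hit is usable only if the sql mode matches the one in
            // effect when the result was produced.
            return (it != m_cache.end() && it->second.sql_mode == sql_mode)
                   ? &it->second : nullptr;
        }

        void insert(const std::string& canonical, const ClassificationInfo& info)
        {
            m_cache[canonical] = info;  // unbounded for now; an LRU bound is planned
        }

    private:
        std::unordered_map<std::string, ClassificationInfo> m_cache;
    };

    // One instance per worker thread avoids locking at the cost of
    // duplicating entries across threads.
    thread_local QCInfoCacheSketch this_thread_qc_cache;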
The key is now the canonical statement itself, which means that
a fair amount of memory will be used. To preserve memory it might
make sense to use a hashed value instead, although that at least
in principle opens up the possibility for unintended collisions.
This feature will also be made configurable.
Minor cleanup:
- Unit variables placed in an anonymous namespace.
- Unit variables accessed via a this_unit struct.
This is the first commit of many where the mapping from canonical
statement to query classification result is introduced.
The mapping is placed above the actual query classifier, so that all
query classifiers will benefit from it.
The mapping will be maintained per thread so that there will not be
any synchronization issues. Further, initially a simple map without
an upper bound will be used; if this is found to provide measurable
benefits, the map will then be replaced with an LRU mechanism so
that it becomes possible to specify just how much memory the mapping
may use.
The evq_length field held the number of descriptors returned by
the last epoll_wait() call. As such it is highly temporal and not
particularly meaningful.
That has now been removed and instead the average number of
returned descriptors is maintained. That information changes slowly
and thus carries some meaning.
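An illustrative sketch of maintaining such an average around the
epoll_wait() call (not the actual polling loop):

    #include <sys/epoll.h>

    struct PollStats
    {
        double avg_event_count = 0.0;
        long   n_polls = 0;
    };

    void poll_once(int epoll_fd, PollStats& stats)
    {
        struct epoll_event events[64];
        int nfds = epoll_wait(epoll_fd, events, 64, 100);

        if (nfds >= 0)
        {
            ++stats.n_polls;
            // Cumulative moving average of returned descriptors per call.
            stats.avg_event_count += (nfds - stats.avg_event_count) / stats.n_polls;
        }

        // ... dispatch events ...
    }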
It is now possible to prevent the masking filter from rejecting
statements that use functions in conjunction with fields to be
masked. This makes it possible to drop the masking filter's blanket
rejection and replace it with more detailed firewall rules.