All internal code is now inside an anonymous namespace to prevent its
use outside of the compilation unit.
Also fixed the wrong return type of ResourceWatcher::etag.
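As a minimal sketch of the pattern (the names below are only illustrative), helpers that are needed in a single compilation unit get internal linkage by being placed in an anonymous namespace:

    // some_module.cc -- illustrative compilation unit
    namespace
    {

    // Only visible inside this translation unit (internal linkage).
    int helper_sum(int a, int b)
    {
        return a + b;
    }

    }

    // Public entry point declared in the corresponding header.
    int public_api_call(int a, int b)
    {
        return helper_sum(a, b);
    }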
Executing the commands inside a worker thread allows further improvements
to job queuing but mainly it fixes the problem of loading users when
listeners are allocated at runtime.
When a runtime listener was being created, it was allocated in the admin
thread whereas the listeners created at startup were allocated in the
"main" thread. This caused a minor difference in how administrative
functions were handled by the REST API and MaxAdmin. The only real problem
is that listener allocation must be done inside a worker thread, as it
lazily initializes some resources.
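As a rough illustration of the approach (this is not MaxScale's actual worker API, only a sketch using C++11 facilities for brevity), the thread that receives the command only queues it and the worker thread performs the allocation, so any lazily initialized resources end up in the right thread:

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // Minimal worker with a task queue; shutdown handling is omitted.
    class Worker
    {
    public:
        Worker()
            : m_thread(&Worker::run, this)
        {
        }

        // Called from e.g. the admin thread: only enqueues the command.
        void execute(std::function<void()> task)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_tasks.push(task);
            m_cond.notify_one();
        }

    private:
        // The worker thread drains the queue, so anything the task lazily
        // initializes is created in the worker thread, not the admin thread.
        void run()
        {
            for (;;)
            {
                std::unique_lock<std::mutex> guard(m_lock);
                m_cond.wait(guard, [this] { return !m_tasks.empty(); });
                std::function<void()> task = m_tasks.front();
                m_tasks.pop();
                guard.unlock();
                task();
            }
        }

        std::queue<std::function<void()>> m_tasks;
        std::mutex m_lock;
        std::condition_variable m_cond;
        std::thread m_thread;
    };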
The memory for the rate limit struct was allocated but it was not assigned
to the service. Also corrected a false debug assertion in
service_refresh_users.
The internal header directory conflicted with in-source builds causing a
build failure. This is fixed by renaming the internal header directory to
something other than maxscale.
The renaming pointed out a few problems in a couple of source files that
appeared to include internal headers when the headers were in fact public
headers.
Fixed maxctrl in-source builds by making the copying of the sources
optional.
The error message was not 100% accurate about the value. In addition to
that, neither the value itself nor the monitor or parameter names were
printed in the error message.
The state of the monitored servers is only persisted if the states of the
servers have changed. This removes the unnecessary disk IO caused by
writing the monitor journal.
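A minimal sketch of the idea, with hypothetical names; the journal write is skipped whenever the collected server states equal the ones stored last time:

    #include <stdint.h>
    #include <vector>

    typedef std::vector<uint64_t> ServerStates; // one status word per server

    static ServerStates previous_states;

    void maybe_store_journal(const ServerStates& current_states)
    {
        if (current_states != previous_states)
        {
            // write_journal_to_disk(current_states); // the actual disk IO
            previous_states = current_states;
        }
    }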
The new `ssl_verify_peer_certificate` parameter controls whether the peer
certificate is verified. This allows self-signed certificates to be
properly used with MaxScale.
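For example, a server section that enables SSL could look roughly like the following; the paths and names are placeholders:

    [MyServer]
    type=server
    address=192.168.0.10
    port=3306
    ssl=required
    ssl_ca_cert=/path/to/ca-cert.pem
    # Allow a self-signed certificate by not verifying the peer certificate
    ssl_verify_peer_certificate=false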
With "--daemon" or "-n" MaxScale can now be told to run in daemon
mode, that is, it forks and the parent exits. This is the default
behaviour, but a flag to this effect is needed if the default
behaviour is changed.
MaxScale now refuses to run as root. However, it is possible to
start MaxScale as root, as long as the user to run MaxScale as is
provided as a command line argument.
It is possible to run as root by invoking MaxScale as root and
by specifying the MaxScale user to be root.
Only used in conjunction with queued connections, which are not
enabled anyway. Once that is back on the table, it would be better to
use some standard data structures.
This commit introduces maxscale::future, maxscale::packaged_task
and maxscale::thread that are modeled after C++11 std::future,
std::packaged_task and std::thread as described here:
http://en.cppreference.com/w/cpp/thread
The standard classes rely upon rvalue references (and move
constructors) introduced by C++11. As the C++ compilers we must use
are pre-C++11, that feature is obviously not present. The absence of
rvalue references is circumvented by implementing regular copy
constructors and assignment operators as if the arguments were rvalue
references.
In practice the above means that when one of these objects is copied,
the state is _moved_, leaving the copied-from object in a default
initialized state. Some care is needed to ensure that unintended
copying does not occur.
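A simplified sketch of the pattern (not the actual class definitions): the copy constructor and copy assignment operator transfer the state and reset the source, mimicking what a C++11 move constructor would do:

    #include <cstddef>

    // Illustrative only: "copying" transfers the state and leaves the
    // source object in a default initialized state.
    class moving_handle
    {
    public:
        moving_handle()
            : m_state(NULL)
        {
        }

        explicit moving_handle(int* state)
            : m_state(state)
        {
        }

        // Copy constructor that behaves like a C++11 move constructor.
        moving_handle(const moving_handle& other)
            : m_state(other.m_state)
        {
            other.m_state = NULL;
        }

        // Copy assignment that behaves like a C++11 move assignment.
        moving_handle& operator=(const moving_handle& other)
        {
            if (this != &other)
            {
                m_state = other.m_state;
                other.m_state = NULL;
            }
            return *this;
        }

    private:
        mutable int* m_state; // mutable so the "copied" source can be reset
    };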
As the passive parameter is only used by the failover and the failover can
only be initiated by the monitor, there is no real need to synchronize the
reads and writes of this parameter.
As all runtime changes are protected by the runtime lock, only partial
reads are of concern. For the supported platforms, this is not a practical
problem and it only confuses the reader when other variables are modified
without atomic operations.
The master failure was assumed to be the only master-related event for
each monitoring loop. If the master was switched by an external actor, the
monitor tracking would be out of sync.
The helper function provides map-like access to row values. This is used
to retrieve the values for all MariaDB 10.0+ versions as there are
differences in the returned results between 10.1 and 10.2.
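A sketch of what such a helper might look like on top of the MySQL C API; the function name is hypothetical and the real helper lives in the monitor code:

    #include <mysql.h>
    #include <map>
    #include <string>

    // Build a column-name-to-value map for one row so that values can be
    // looked up by name regardless of the column order in the resultset.
    std::map<std::string, std::string> row_to_map(MYSQL_RES* result, MYSQL_ROW row)
    {
        std::map<std::string, std::string> values;
        unsigned int columns = mysql_num_fields(result);
        MYSQL_FIELD* fields = mysql_fetch_fields(result);

        for (unsigned int i = 0; i < columns; i++)
        {
            // NULL values are stored as empty strings in this sketch.
            values[fields[i].name] = row[i] ? row[i] : "";
        }

        return values;
    }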
Using timestamps to detect whether MaxScale was active or passive can
cause problems if multiple events happen at the same time. This can be
avoided by separating events into actively observed and passively observed
events. This clarifies the logic by removing the ambiguity of timestamps.
As the monitoring threads are separate from the worker threads, it is
prudent to use atomic operations to modify and read the state of
MaxScale. This imposes a happens-before relation between MaxScale
being set into passive mode and events being classified as passively
observed.
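As an illustration of the intent (C++11 atomics are used here for brevity and the flag name is hypothetical), an atomic store and load is enough to give the required ordering:

    #include <atomic>

    // Passive/active flag shared between the monitor threads and the
    // workers; atomic store/load provides the happens-before relation.
    static std::atomic<bool> maxscale_is_passive(false);

    // Called when MaxScale is switched into passive mode.
    void set_passive(bool passive)
    {
        maxscale_is_passive.store(passive, std::memory_order_seq_cst);
    }

    // Called by the monitor when classifying an event.
    bool event_is_passively_observed()
    {
        return maxscale_is_passive.load(std::memory_order_seq_cst);
    }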
Moved mon_process_failover() from monitor.cc to mysql_mon.cc. Renamed
some functions and variables related to previous failover functionality
to avoid confusion.
The JSON API specification states that all resources must support direct
modification of resource relationships by providing only the definition
for a particular relationship type to a /:type/:id/relationships/:type
endpoint.
The relevant part of the JSON API specification:
http://jsonapi.org/format/#crud-updating-to-many-relationships
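In practice this means a request of roughly the following shape is accepted; the resource names and IDs here are only placeholders:

    PATCH /v1/monitors/My-Monitor/relationships/servers

    {
        "data": [
            { "type": "servers", "id": "server1" },
            { "type": "servers", "id": "server2" }
        ]
    }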
The test did not properly move the relationships from the old monitor to
the new one. The test passed because the relationship modification was not
really tested.
When pre-parsing the configuration file, the existence of environment
variables is only checked for the [maxscale] section. For other sections
a nicer error message is obtained if the complaint is made when the
configuration file is actually loaded.
A mechanism for providing a custom error message from the pre-parsing
function was added.
If 'substitute_variables' has been set to true, then the value of
a parameter like `some_param=$SOME_VAR` is replaced with the value
of the environment variable 'SOME_VAR'.
It is a fatal error to refer to a variable that does not exist.
With this variable set to true, if $VAR is used as a value in the
configuration file, then `$VAR` will be replaced with the value of
the environment variable VAR.
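For example (a sketch; the section and variable names are placeholders):

    [maxscale]
    substitute_variables=true

    [MyServer]
    type=server
    # Replaced with the value of the environment variable DBSERVER_ADDRESS,
    # or treated as a fatal error if the variable does not exist.
    address=$DBSERVER_ADDRESS
    port=3306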
When binary data was processed, it was possible that the values were
misinterpreted as OK packets which caused debug assertions to trigger.
In addition to this, readwritesplit did not handle the case when all
packets were routed individually.
If multiple queries that only generate OK packets were executed, the
result returned by the server would consist of a chain of OK packets. This
special case needs to be handled by modutil_count_signal_packets.
The current implementation is very ugly as it simulates a result with at
least one resultset in it. A better implementation would hide it behind a
simple boolean return value and an internal state object.
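For reference, a rough sketch of what recognizing a chain of OK packets involves (the real logic lives in modutil and must also handle GWBUF chains, partial packets and other packet types):

    #include <stddef.h>
    #include <stdint.h>

    // Count complete packets in a contiguous buffer whose payload starts
    // with the OK header byte 0x00.
    size_t count_ok_packets(const uint8_t* data, size_t len)
    {
        size_t count = 0;
        size_t offset = 0;

        while (offset + 4 <= len)
        {
            // 3-byte little-endian payload length followed by the sequence id.
            uint32_t payload = data[offset] | (data[offset + 1] << 8) | (data[offset + 2] << 16);

            if (offset + 4 + payload > len)
            {
                break; // Partial packet, wait for more data.
            }

            if (payload > 0 && data[offset + 4] == 0x00)
            {
                count++; // Looks like an OK packet.
            }

            offset += 4 + payload;
        }

        return count;
    }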
A multi-statement command can return multiple resultsets in one response.
To accommodate this, both the readwritesplit and modutil code must be
altered.
By ignoring complete resultsets in readwritesplit, the code can deduce
whether a result is complete or not.
The original offset needs to be separately tracked to assert that an OK
packet is not the first packet in the buffer. The functional offset into
the buffer is modified to reduce the need to iterate over buffers that
have already been processed.