The state of the monitored servers is now persisted only if it has
changed. This removes unnecessary disk I/O caused by writing to the
monitor journal.
The new `ssl_verify_peer_certificate` parameter controls whether the peer
certificate is verified. This allows self-signed certificates to be
properly used with MaxScale.
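A minimal sketch of how this might look in a server section of maxscale.cnf; the address, paths and surrounding SSL parameters are illustrative placeholders, only `ssl_verify_peer_certificate` is the parameter introduced here:

```
[dbserv1]
type=server
address=192.168.0.101
port=3306
ssl=required
ssl_ca_cert=/etc/maxscale/ssl/ca.pem
# Accept certificates that cannot be verified against the CA, e.g. self-signed ones
ssl_verify_peer_certificate=false
```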
This is only used in conjunction with queued connections, which are not
enabled anyway. Once that feature is on the table again, it would be
better to use some standard data structures.
This builds on commit 1287b0e595a5f99026f66df7eeaef091b8ffc774: it fixes
a bug introduced in that commit and cleans up the original code.
This commit introduces maxscale::future, maxscale::packaged_task
and maxscale::thread that are modeled after C++11 std::future,
std::packaged_task and std::thread as described here:
http://en.cppreference.com/w/cpp/thread
The standard classes rely upon rvalue references (and move
constructors) introduced by C++11. As the C++ compilers we must use
predate C++11, that feature is not available. The absence of rvalue
references is circumvented by implementing regular copy constructors
and assignment operators as if their arguments were rvalue references.
In practice this means that when one of these objects is copied, the
state is _moved_, leaving the copied-from object in a default-initialized
state. Some care is needed to ensure that unintended copying does not
occur.
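As a rough illustration of that copy-as-move behaviour (a hypothetical simplified class, not the actual maxscale::thread or maxscale::future implementation):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical pre-C++11 "copy acts as move" class, similar in spirit to
// std::auto_ptr: copying transfers the state and leaves the source in its
// default-initialized state.
class movable
{
public:
    movable() : m_state(NULL) {}
    explicit movable(int* state) : m_state(state) {}

    // Copy constructor treats its argument as if it were an rvalue reference.
    movable(movable& other)
        : m_state(other.m_state)
    {
        other.m_state = NULL;
    }

    movable& operator=(movable& other)
    {
        if (this != &other)
        {
            m_state = other.m_state;
            other.m_state = NULL;
        }
        return *this;
    }

    int* state() const { return m_state; }

private:
    int* m_state;
};

int main()
{
    int value = 42;
    movable a(&value);
    movable b = a;              // "copying" moves the state out of a

    assert(a.state() == NULL);  // a is back in its default-initialized state
    assert(b.state() == &value);
    return 0;
}
```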
The helper function provides map-like access to row values. This is used
to retrieve the values for all MariaDB 10.0+ versions as there are
differences in the returned results between 10.1 and 10.2.
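A sketch of what such map-like access could look like on top of the MySQL C API; the function name and the use of std::map are assumptions for illustration, not the actual helper:

```cpp
#include <map>
#include <string>
#include <mysql.h>

// Hypothetical helper: turn one result row into a column name -> value map
// so that values can be looked up by name regardless of column order, which
// may differ between server versions.
std::map<std::string, std::string> row_to_map(MYSQL_RES* result, MYSQL_ROW row)
{
    std::map<std::string, std::string> values;
    unsigned int n_columns = mysql_num_fields(result);
    MYSQL_FIELD* fields = mysql_fetch_fields(result);

    for (unsigned int i = 0; i < n_columns; i++)
    {
        values[fields[i].name] = row[i] ? row[i] : "";
    }

    return values;
}
```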
Using timestamps to detect whether MaxScale was active or passive can
cause problems if multiple events happen at the same time. This can be
avoided by separating events into actively observed and passively observed
events. This clarifies the logic by removing the ambiguity of timestamps.
As the monitoring threads are separate from the worker threads, it is
prudent to use atomic operations to modify and read the state of
MaxScale. This imposes a happens-before relation between MaxScale being
set into passive mode and events being classified as passively observed.
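A sketch of the idea using GCC-style atomic built-ins; the variable and function names are made up for illustration and do not correspond to the actual MaxScale code:

```cpp
// 1 = active, 0 = passive (hypothetical flag).
static int maxscale_active = 1;

// Called when MaxScale is switched between active and passive mode.
void set_active(bool active)
{
    __atomic_store_n(&maxscale_active, active ? 1 : 0, __ATOMIC_SEQ_CST);
}

// Called from the monitor threads when classifying an observed event. The
// atomic load synchronizes with the store above, establishing the
// happens-before relation between the mode change and the classification.
bool event_is_actively_observed()
{
    return __atomic_load_n(&maxscale_active, __ATOMIC_SEQ_CST) == 1;
}
```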
Moved mon_process_failover() from monitor.cc to mysql_mon.cc. Renamed
some functions and variables related to previous failover functionality
to avoid confusion.
With this variable set to true, if `$VAR` is used as a value in the
configuration file, it will be replaced with the value of the
environment variable VAR.
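For example, assuming the environment variable MAXSCALE_PASSWORD is set when MaxScale starts, a service section such as the following (the section name and other values are placeholders) would have the substitution applied:

```
[MyService]
type=service
router=readwritesplit
user=$MAXSCALE_USER
passwd=$MAXSCALE_PASSWORD
```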
Returning the results of a query as a set of packets is currently more
efficient. This is mainly due to the fact that each individual packet for
single packet routing is allocated from the heap, which causes a
significant loss in performance.
Took the new capability into use in readwritesplit and modified the
reply_is_complete function to work with non-contiguous results.
The backend protocol command tracking didn't check whether the session was
the dummy session. The DCB's session is always set to this value when it
is put into the persistent pool.
Ordering the members as 8 + 4 + 4 bytes ensures a size of 16 with 8-byte
alignment, which means that 'data' is certain to be 8-byte aligned. A
4 + 8 + 4 ordering might result in something else in some environment.
The GWBUF shared buffer and its data are now allocated in one
chunk so that the data directly follows the shared buffer.
That way, creating a GWBUF involves 2 instead of 3 calls to
malloc and freeing one involves 2 instead of 3 calls to free.
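A simplified sketch of the single-allocation layout; the member names are placeholders and the real SHARED_BUF has more fields, but the 8 + 4 + 4 ordering and the data placement are the point:

```cpp
#include <cstdint>
#include <cstdlib>

// Hypothetical simplified shared buffer header: 8 + 4 + 4 bytes gives a
// 16 byte struct with 8 byte alignment, so the data that follows the
// header is also 8 byte aligned.
struct shared_buf
{
    void*    bufobj;   // 8 bytes: pointer to an associated buffer object
    int32_t  refcount; // 4 bytes: reference count
    uint32_t info;     // 4 bytes: buffer type flags
    // The buffer data is placed directly after this struct.
};

// One malloc covers both the header and the data...
shared_buf* shared_buf_alloc(size_t size)
{
    return static_cast<shared_buf*>(malloc(sizeof(shared_buf) + size));
}

// ...and the data area starts right after the header.
uint8_t* shared_buf_data(shared_buf* sbuf)
{
    return reinterpret_cast<uint8_t*>(sbuf + 1);
}
```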
The result collection did not reset properly when a non-resultset was
returned for a request. As collected results need to be distinguishable
from single packet responses, a new buffer type was added.
The new buffer type is used by readwritesplit which uses result collection
for preparation of prepared statements.
Moved the current command tracking to the RWBackend class as the command
tracked by the protocol can change before a response to the executed
command is received.
Removed a false debug assertion in the mxs_mysql_extract_ps_response
function that was triggered when a very large prepared statement response
was processed in multiple parts.
Inlined the getter/setter type functions that are often used. Profiling
shows that inlining the RWBackend get/set functions for the reply state
manipulation reduces the relative cost of the function to acceptable
levels. Inlining the Backend state function did not have as large an
effect, but it appears to contribute a slight performance boost.
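An illustration of the kind of change involved (hypothetical member names); defining the getter and setter in the class definition lets the compiler inline them at the call sites on the hot path:

```cpp
// Hypothetical reply-state accessors defined in the header so that calls
// to them can be inlined instead of going through a function call.
enum reply_state_t
{
    REPLY_STATE_START,
    REPLY_STATE_DONE
};

class RWBackendExample
{
public:
    reply_state_t reply_state() const
    {
        return m_reply_state;
    }

    void set_reply_state(reply_state_t state)
    {
        m_reply_state = state;
    }

private:
    reply_state_t m_reply_state;
};
```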
The parameter handling for monitors can now be done in a consistent manner
by establishing a rule that the monitor owns the parameter object as long
as it is running. This will allow parameters to be added and removed
safely both from outside and inside monitors.
Currently this functionality is only used by mysqlmon to disable failover
after an attempt to perform a failover has failed.
KILL commands are now sent to the backends in an asynchronous manner. As
the LocalClient class is used to connect to the servers, this will cause
an extra connection to be created on top of the original connections
created by the session.
If the user does not have the permission to execute the KILL, the error
message is currently lost. This could be solved by adding a "result
handler" to the LocalClient class, which is called with the result.
When the LocalClient is constructed, it is possible to extract all the
needed information at that time. The only obstacle is the fact that the
LocalClient is constructed at the same time the session is. Since the
client DCB is created before the session, it is safe to extract the shared
data directly from it.
The total timeout for retrying interrupted queries can now be configured
with the `query_retry_timeout` parameter. It controls, in seconds, the
total time that the query can take.
The connector's own connection, read and write timeouts aren't good
configuration values to use for abstracted queries, as the time it takes
to execute a query can consist of connects, reads and writes. This is
caused by the use of MYSQL_OPT_RECONNECT, which hides the fact that the
connector reconnects to the server when a query is attempted.
The new `query_retries` parameter controls how many times an interrupted
query is retried. This retrying of interrupted queries will reduce the
rate of false positives that MaxScale monitors detect.
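For example, the following global section (illustrative values) would retry an interrupted query twice while allowing at most five seconds in total:

```
[maxscale]
query_retries=2
query_retry_timeout=5
```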