Master failure can now be verified by checking whether the slaves are still
connected to the master. If the slaves do not receive any events from the
master, the connections are considered to be down after a configurable limit.
Added two parameters: one controlling whether the check is done and one for
how long the monitor waits before performing the failover.
The slave heartbeat count and period are collected from the SHOW ALL
SLAVES STATUS output. This, in addition to the relay log position, is used
to calculate the point in time when a slave has last interacted with the
master.
By using this timestamp, the monitor can enforce a minimum "timeout" for
the master before a failover is performed.
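A minimal sketch of the idea, with hypothetical helper and field names
mirroring the SHOW ALL SLAVES STATUS output:

    #include <ctime>

    // Sketch only: track when a slave last interacted with the master using
    // the heartbeat counter and the relay log position. When either value
    // changes between monitor passes, the contact timestamp is refreshed.
    struct SlaveStatus
    {
        long   received_heartbeats;   // Slave_received_heartbeats
        double heartbeat_period;      // Slave_heartbeat_period in seconds
        long   relay_log_pos;         // relay log position
    };

    void update_last_contact(const SlaveStatus& prev, const SlaveStatus& now,
                             time_t* last_contact)
    {
        if (now.received_heartbeats != prev.received_heartbeats ||
            now.relay_log_pos != prev.relay_log_pos)
        {
            *last_contact = time(nullptr);
        }
    }

    // The monitor only performs the failover once the master has been silent
    // for at least the configured timeout.
    bool failover_allowed(time_t last_contact, time_t timeout)
    {
        return time(nullptr) - last_contact >= timeout;
    }
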
SHOW ALL SLAVES STATUS now reports the following new fields:
- Retried_transactions
- Max_relay_log_size
- Executed_log_entries
- Slave_received_heartbeats
- Slave_heartbeat_period
- Gtid_Slave_Pos
Currently the binlog server doesn't send these event types to slaves:
- MARIADB10_START_ENCRYPTION_EVENT
- IGNORABLE_EVENT
It also used to skip events with the LOG_EVENT_IGNORABLE_F flag. This
modification allows sending events with that flag.
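Roughly, the filtering change looks like this (a sketch; the event type
values and surrounding code are placeholders, only the flag and type names
come from the replication protocol):

    #include <cstdint>

    // Placeholder values: only the names correspond to the real event types.
    enum event_type_t
    {
        MARIADB10_START_ENCRYPTION_EVENT = 1,
        IGNORABLE_EVENT = 2
    };

    const uint16_t LOG_EVENT_IGNORABLE_F = 0x80;  // replication event flag

    // Before this change, events carrying LOG_EVENT_IGNORABLE_F were also
    // skipped; now only the two event types above are filtered out.
    bool should_send_to_slave(int event_type, uint16_t flags)
    {
        (void)flags;  // the ignorable flag no longer drops the event
        return event_type != MARIADB10_START_ENCRYPTION_EVENT
            && event_type != IGNORABLE_EVENT;
    }
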
Moved mon_process_failover() from monitor.cc to mysql_mon.cc. Renamed
some functions and variables related to previous failover functionality
to avoid confusion.
The get_server_info function takes the monitor handle and a database and
returns the corresponding MYSQL_SERVER_INFO struct. This hides part of the
actual implementation of the info struct from the monitor code, allowing
future refactoring. It also makes the code a bit more readable.
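A sketch of the shape of the accessor, assuming the per-server structs are
stored in a map owned by the monitor handle (the container and field names
are assumptions):

    #include <unordered_map>

    struct MYSQL_SERVER_INFO
    {
        int slave_configured;   // number of configured slave connections
        int slave_running;      // number of running slave connections
        // ... other per-server monitoring state
    };

    struct MYSQL_MONITOR
    {
        // one info struct per monitored database
        std::unordered_map<const void*, MYSQL_SERVER_INFO> server_info;
    };

    // Callers get the info struct without knowing how it is stored, so the
    // storage can be refactored later without touching the monitor logic.
    MYSQL_SERVER_INFO* get_server_info(MYSQL_MONITOR* handle, const void* database)
    {
        return &handle->server_info[database];
    }
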
The values in the MYSQL_SERVER_INFO struct can now be updated with the
update_slave_status function.
Also moved the number of configured and running slave configurations into
the info struct. This removes the need to pass output parameters.
If something is SELECTed that should be cached for some users, but not for
the current one, the cached entry is nevertheless updated. That way the
cached data will always be the last fetched value, and this behaviour can
also be used for explicitly updating a cache entry.
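A sketch of the resulting behaviour (standalone pseudocode, not the cache
filter's actual API; the rule check is stubbed out):

    #include <map>
    #include <string>

    std::map<std::string, std::string> cache;  // query -> cached resultset

    // Whether the cache rules allow this user to be served from the cache.
    bool user_may_use_cache(const std::string& user)
    {
        return user != "admin";  // stub rule for illustration only
    }

    // Called when a SELECT result arrives from the server: the entry is
    // refreshed regardless of who issued the query, so the cache always
    // holds the last fetched value and a non-cached user can refresh it.
    void store_result(const std::string& query, const std::string& result)
    {
        cache[query] = result;
    }

    // Called before routing a SELECT: only allowed users are served from
    // the cache.
    bool lookup(const std::string& user, const std::string& query, std::string* out)
    {
        if (!user_may_use_cache(user))
        {
            return false;
        }
        auto it = cache.find(query);
        if (it == cache.end())
        {
            return false;
        }
        *out = it->second;
        return true;
    }
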
The MYSQL_SERVER_INFO struct is updated first and then the server status
is updated. This allows the function to be called without it affecting the
server state.
When binary data was processed, it was possible that the values were
misinterpreted as OK packets which caused debug assertions to trigger.
In addition to this, readwritesplit did not handle the case when all
packets were routed individually.
The authentication phase expects full packets. If the packets aren't
complete, a debug assertion would be hit. To detect this, the result of the
buffer extraction needs to be checked.
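The check itself amounts to verifying standard MySQL packet framing before
handing data to the authenticator; a standalone sketch:

    #include <cstdint>
    #include <cstddef>

    // A MySQL packet is complete when the buffer holds the 4-byte header
    // plus the payload length encoded in the first three bytes. The
    // authentication code may only be given buffers for which this holds;
    // otherwise the data must be buffered until the rest arrives.
    bool contains_complete_packet(const uint8_t* data, size_t len)
    {
        if (len < 4)
        {
            return false;
        }

        uint32_t payload_len = data[0] | (data[1] << 8) | (data[2] << 16);
        return len >= payload_len + 4u;
    }
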
A multi-statement can return multiple resultsets in one response. To
accommodate this, both the readwritesplit and modutil code must be altered.
By ignoring complete resultsets in readwritesplit, the code can deduce
whether a result is complete or not.
When the LEAST_BEHIND_MASTER routing criterion was used, the info-level
logging function would fall through to the default case. In debug builds,
this would trigger a debug assertion.
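The bug had roughly this shape (sketch; the enum values and the assertion
are stand-ins):

    #include <cassert>
    #include <cstdio>

    enum select_criteria_t
    {
        LEAST_GLOBAL_CONNECTIONS,
        LEAST_CURRENT_OPERATIONS,
        LEAST_BEHIND_MASTER
    };

    void log_criteria(select_criteria_t criteria)
    {
        switch (criteria)
        {
        case LEAST_GLOBAL_CONNECTIONS:
            printf("least global connections\n");
            break;

        case LEAST_CURRENT_OPERATIONS:
            printf("least current operations\n");
            break;

        case LEAST_BEHIND_MASTER:   // the case that was missing
            printf("least behind master\n");
            break;

        default:
            assert(!"unexpected routing criteria");  // previously hit in debug builds
            break;
        }
    }
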
Returning the results of a query as a set of packets is currently more
efficient. This is mainly because each individual packet in single-packet
routing is allocated from the heap, which causes a significant loss in
performance.
Took the new capability into use in readwritesplit and modified the
reply_is_complete function to work with non-contiguous results.
When the router requires statement-based output, the gathering of complete
packets can be skipped, as the process of splitting the complete packets
into individual packets implies that only complete packets are handled.
Also added a quicker check for the stored protocol command than a call to
protocol_get_srv_command.
The backend protocol command tracking didn't check whether the session was
the dummy session. The DCB's session is always set to this value when it
is put into the persistent pool.
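A standalone sketch of the missing guard (the types and field names are
assumptions):

    #include <cstdint>

    struct Session
    {
        bool is_dummy;  // true for the placeholder session used by pooled DCBs
    };

    struct BackendDCB
    {
        Session* session;
        uint8_t  current_command;
    };

    // A DCB sitting in the persistent pool has its session set to the dummy
    // session; tracking a command for it would leave stale state on the
    // pooled connection, so the update is skipped.
    void track_backend_command(BackendDCB* dcb, uint8_t command)
    {
        if (dcb->session && !dcb->session->is_dummy)
        {
            dcb->current_command = command;
        }
    }
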
If a query was processed in the client protocol module when a prepared
statement was being executed by the backend module, the current command
would get overwritten. This caused a debug assertion in readwritesplit to
trigger as the result was neither a single packet nor a collected result.
The RCAP_TYPE_STMT_INPUT capability guarantees that a buffer contains a
complete packet. This information can be used to track the currently
executed command based on the buffer contents, which allows asynchronicity
between the client and backend protocols. In practice this only comes into
play when routers queue queries for later execution.
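A sketch of the extraction: because RCAP_TYPE_STMT_INPUT guarantees a
complete packet, the command byte can be read directly from the buffer (the
helper name is hypothetical):

    #include <cstdint>
    #include <cstddef>

    // With statement input the buffer starts at a packet boundary, so byte 4
    // (the first payload byte) is the MySQL command byte. Tracking the
    // command per routed buffer keeps the backend state correct even when
    // the router queues queries for later execution.
    uint8_t extract_command(const uint8_t* packet, size_t len)
    {
        return len > 4 ? packet[4] : 0;  // 0 (COM_SLEEP) used as "no command"
    }
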
The multi-statement detection did not check for the existence of
semicolons before doing the heavier processing.
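The added pre-check is essentially this (sketch):

    #include <cstring>
    #include <cstddef>

    // A statement without any semicolon cannot be a multi-statement, so the
    // heavier quote- and comment-aware scan can be skipped in the common case.
    bool might_be_multi_statement(const char* sql, size_t len)
    {
        return memchr(sql, ';', len) != nullptr;
    }
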
Calculate the packet length only once for the result state management.
Replace the original version of the function with the reference version
and use it everywhere. Added runtime assertions to check that an invalid
DCB is never processed.
The DCB passed as the clientReply parameter is guaranteed to match one of
the DCBs in the RWBackends. By using a reference, the need to copy a
shared_ptr is removed (along with the atomic operation that it implies),
thus reducing the overhead in clientReply and the functions it uses.
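In terms of signatures, the change is of this form (sketch; SRWBackend
stands for the shared_ptr typedef used by readwritesplit):

    #include <memory>

    class RWBackend
    {
        // reply state, DCB pointer, etc.
    };

    typedef std::shared_ptr<RWBackend> SRWBackend;

    // Before: the shared_ptr is copied for every reply, which implies an
    // atomic reference count increment and decrement.
    void handle_reply_by_value(SRWBackend backend)
    {
        (void)backend;
    }

    // After: the backend matching the replying DCB is owned by the session,
    // so a reference is sufficient and the atomic operations disappear.
    void handle_reply(const SRWBackend& backend)
    {
        (void)backend;
    }
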
The result collection did not reset properly when a non-resultset was
returned for a request. As collected results need to be distinguishable
from single packet responses, a new buffer type was added.
The new buffer type is used by readwritesplit which uses result collection
for preparation of prepared statements.
Moved the current command tracking to the RWBackend class as the command
tracked by the protocol can change before a response to the executed
command is received.
Removed a false debug assertion in the mxs_mysql_extract_ps_response
function that was triggered when a very large prepared statement response
was processed in multiple parts.
Inlined the getter/setter type functions that are often used. Profiling
shows that inlining the RWBackend get/set functions for the reply state
manipulation reduces the relative cost of the function to acceptable
levels. Inlining the Backend state function did not have as large an
effect but it appears to contribute a slight performance boost.
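Concretely, the accessors are defined in the class body so they are
implicitly inline (sketch; the member names are assumptions):

    enum reply_state_t
    {
        REPLY_STATE_START,
        REPLY_STATE_DONE
    };

    class RWBackend
    {
    public:
        // Defined inside the class, these are implicitly inline, so the hot
        // clientReply path pays no call overhead for reply state bookkeeping.
        reply_state_t reply_state() const     { return m_reply_state; }
        void set_reply_state(reply_state_t s) { m_reply_state = s; }

    private:
        reply_state_t m_reply_state = REPLY_STATE_START;
    };
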
The result processing code did unnecessary work to confirm that the result
buffers are contiguous. The code also assumed that multiple packets can be
routed at the same time when in fact only one contiguous result packet is
returned at a time.
By assuming that the buffers are contiguous and contain only one packet,
most of the copying and buffer manipulation can be avoided.