The GTID update was only considered successful if both the current GTID
position and the binlog GTID position were non-empty. As a result, the
update would always fail for a slave with no binlogged events.
This change in behavior caused the mysqlmon_failover_auto and
mysqlmon_failover_manual tests to break. The tests disable the binary log
on one of the servers, which caused that server to be left out of the
rejoining process.
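A minimal sketch of the problematic check and a relaxed variant that
tolerates an empty binlog position; the struct and field names are
illustrative, not the actual monitor code:

#include <string>

// Hypothetical GTID state of a monitored server; the names are
// placeholders, not MaxScale's monitor structures.
struct GtidState
{
    std::string current_pos;   // e.g. value of @@gtid_current_pos
    std::string binlog_pos;    // e.g. value of @@gtid_binlog_pos
};

// Problematic check: requires both positions, so a slave that has never
// written a binlog event (empty binlog_pos) can never update successfully.
bool gtid_update_ok_strict(const GtidState& g)
{
    return !g.current_pos.empty() && !g.binlog_pos.empty();
}

// Relaxed check: only the current GTID position is required, which lets a
// binlog-less slave take part in the rejoin.
bool gtid_update_ok_relaxed(const GtidState& g)
{
    return !g.current_pos.empty();
}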
The router did not take large packets into account when determining
whether the server would respond. This caused the response counts to be
off by one for all large packets.
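For context, a large query is split into multiple wire packets, each with
a payload of 0xFFFFFF bytes except the last, and only the first part
should count as a request. A rough sketch of such counting over raw wire
data, with names of my own choosing rather than the router's actual code:

#include <cstddef>
#include <cstdint>
#include <vector>

// Payload length that signals the logical packet continues in the next
// wire packet.
constexpr uint32_t MYSQL_MAX_PAYLOAD = 0xFFFFFF;

// Count how many logical packets (and hence expected responses) a chunk
// of wire data contains. `data` is assumed to start at a packet header.
size_t count_logical_packets(const std::vector<uint8_t>& data)
{
    size_t count = 0;
    size_t offset = 0;
    bool continuation = false;  // true while inside a split large packet

    while (offset + 4 <= data.size())
    {
        // 3-byte little-endian payload length followed by a sequence id
        uint32_t payload = data[offset]
                         | (data[offset + 1] << 8)
                         | (data[offset + 2] << 16);

        if (!continuation)
        {
            ++count;  // only the first part of a large packet is a request
        }

        continuation = (payload == MYSQL_MAX_PAYLOAD);
        offset += 4 + payload;
    }

    return count;
}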
Accidental modifications of scripts and programs are more likely if the
owner has write permissions on the file. The write permissions are not
required and can therefore be removed.
The same problem that caused maxadmin to lock up also caused maxinfo to
lock up: concurrent access to the legacy administrative functions led to
deadlocks.
Deprecating a parameter at the module level means that the parameter
should no longer be used, but using it does not cause an error. If a
deprecated parameter is used, it is removed from the configuration.
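A hedged sketch of that behavior, using hypothetical container types and
parameter names rather than MaxScale's actual configuration code:

#include <cstdio>
#include <map>
#include <set>
#include <string>

// Illustrative only: the config map and the deprecated-name set are
// placeholders, not MaxScale's configuration types.
void remove_deprecated_params(std::map<std::string, std::string>& config,
                              const std::set<std::string>& deprecated)
{
    for (auto it = config.begin(); it != config.end();)
    {
        if (deprecated.count(it->first))
        {
            // Using a deprecated parameter is not an error, but the value
            // is dropped so the module never sees it.
            printf("Warning: parameter '%s' is deprecated and is ignored.\n",
                   it->first.c_str());
            it = config.erase(it);
        }
        else
        {
            ++it;
        }
    }
}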
Creating the EOF packet is not needed, as the last packet of a result set
is always guaranteed to be of the correct type. This also allows
non-resultset responses to be processed correctly, as the internal packet
number will be at 0 when the last result arrives.
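For reference, the terminating packet of a text-protocol resultset can be
recognised directly from its header byte, which is why the type of the
last packet can be checked rather than constructed. A minimal,
protocol-level helper, independent of the filter's real code:

#include <cstddef>
#include <cstdint>

// A classic EOF packet has header byte 0xFE and a payload shorter than 9
// bytes (longer 0xFE-prefixed payloads are length-encoded integers, not
// EOF). `packet` points at the 4-byte wire header.
bool is_eof_packet(const uint8_t* packet, size_t len)
{
    if (len < 5)
    {
        return false;
    }

    uint32_t payload = packet[0] | (packet[1] << 8) | (packet[2] << 16);
    return packet[4] == 0xFE && payload < 9;
}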
Cleaned up some of the function names and changed the signatures to
better suit their use cases.
Used angle bracket includes, combined some of the more unwieldy
conditionals into functions, and added more comments.
The two-part shutdown procedure for the housekeeper was not needed and
caused problems if SIGINT wasn't raised. Since the main thread returns to
the main function, a single shutdown function is all the housekeeper
needs.
Moved all the shutdown-related code into Housekeeper::stop so that the
destructor no longer has to wait for the thread.
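A condensed sketch of the single shutdown function, using std::thread and
placeholder member names; the real Housekeeper carries more state:

#include <atomic>
#include <chrono>
#include <thread>

class Housekeeper
{
public:
    void start()
    {
        m_running = true;
        m_thread = std::thread(&Housekeeper::run, this);
    }

    // Single shutdown entry point: signal the thread and wait for it
    // here, so the destructor itself no longer has to do the waiting.
    void stop()
    {
        m_running = false;

        if (m_thread.joinable())
        {
            m_thread.join();
        }
    }

    ~Housekeeper()
    {
        stop();  // defensive only; the normal path is an explicit stop()
    }

private:
    void run()
    {
        while (m_running)
        {
            // ... run periodic housekeeping tasks ...
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }

    std::atomic<bool> m_running{false};
    std::thread m_thread;
};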
The master-down verification through slaves won't work with this commit. It
needs to be redesigned to handle multiple slave connections or be removed.
Also, only the first row of the slave status data is used by the monitor, so
multiple slave connections are still handled incorrectly.
The possibility of having multiple cache rules in a cache
configuration file is now handled throughout the cache
filter.
The major difference is that, whereas earlier you queried
the Cache directly both for whether data should be stored
to the cache and for whether data in the cache should be
used, you now query the Cache for whether data should be
stored to the cache and, if so, get a CacheRules object
from which you subsequently query whether data from the
cache should be used.
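Roughly, the calling pattern changes along the lines of the
toy example below; the class names mirror the commit, but
the members and signatures are assumptions, not the real
cache filter API:

#include <string>

// Toy stand-ins for the cache filter types.
struct CacheRules
{
    bool should_use(const std::string& user) const
    {
        return user != "skip_cache_user";  // placeholder rule
    }
};

struct Cache
{
    CacheRules rules;

    // Decides whether the result of `query` should be stored and, if so,
    // hands back the rule set that matched via `selected`.
    bool should_store(const std::string& db, const std::string& query,
                      const CacheRules** selected) const
    {
        bool store = query.rfind("SELECT", 0) == 0;  // placeholder rule
        *selected = store ? &rules : nullptr;
        return store;
    }
};

int main()
{
    Cache cache;
    const CacheRules* rules = nullptr;

    // New pattern: ask the Cache whether to store, then ask the returned
    // CacheRules object whether cached data may be used.
    if (cache.should_store("test", "SELECT 1", &rules)
        && rules->should_use("bob"))
    {
        // ... serve or populate the cache ...
    }

    return 0;
}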
It's now possible to have a rules file with an array of rule
objects, e.g.
[
    {
        "store": [ ... ],
        "use": [ ... ]
    },
    {
        "store": [ ... ],
        "use": [ ... ]
    }
]
This commit only contains the low-level modifications for
supporting that; the upper-level modifications are made in
another commit.
The resultset processing for MySQL requires some extra work, as the server
does not set the SERVER_MORE_RESULTS_EXIST flag in the last EOF packet.
Instead, the first EOF packet has the SERVER_PS_OUT_PARAMS flag, which needs
to be interpreted as a SERVER_MORE_RESULTS_EXIST flag for the second EOF
packet.
Also corrected the EOF packet handling to do the flag checks in the code
that deals with the EOF packets.
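A sketch of the flag handling described above; the flag values and the
EOF packet layout come from the MySQL protocol, while the surrounding
logic is illustrative rather than the filter's actual code:

#include <cstdint>

// Standard status flag values from the MySQL client/server protocol.
constexpr uint16_t SERVER_MORE_RESULTS_EXIST = 0x0008;
constexpr uint16_t SERVER_PS_OUT_PARAMS      = 0x1000;

// Extract the status flags from a complete EOF packet. `packet` points at
// the 4-byte wire header, so the payload starts at offset 4: 0xFE, two
// bytes of warnings, then two bytes of status flags.
uint16_t eof_status_flags(const uint8_t* packet)
{
    return packet[7] | (packet[8] << 8);
}

// The workaround sketched by the commit: when the first EOF packet
// carries SERVER_PS_OUT_PARAMS, treat the resultset as if
// SERVER_MORE_RESULTS_EXIST had been set, so the second EOF packet is
// still expected.
bool more_results_expected(const uint8_t* first_eof)
{
    uint16_t flags = eof_status_flags(first_eof);
    return (flags & SERVER_MORE_RESULTS_EXIST)
        || (flags & SERVER_PS_OUT_PARAMS);
}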
As the modutil_state parameter is now used for more than large packet
tracking, the correct solution is to store this state object in the
readwritesplit session instead of reducing it to a boolean value.
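Conceptually the change goes in this direction, with the tracking state
kept on the session rather than a flag derived from it; the struct below
is a placeholder, not the actual modutil_state definition:

#include <cstdint>

// Placeholder for the packet-tracking state; the real modutil_state
// carries the tracker's full context, not just a large-packet flag.
struct PacketTrackingState
{
    bool     in_large_packet = false;  // inside a split >16MB packet
    uint32_t partial_bytes   = 0;      // bytes still missing from it
};

// Previously the session kept only a boolean derived from the state,
// which loses information between calls. Storing the state object itself
// lets the next invocation of the packet counter resume where it left off.
struct RWSplitSession
{
    PacketTrackingState tracking_state;  // was: a single bool
};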