When the test finishes and is about to check whether MaxScale is alive,
the servers should be cleared from maintenance mode and the replication
should be fixed. This way the test will clean up after itself.
When the test changes the master, it should reset the slave configuration
on the new master. This way no circular replication topologies are formed
and the monitor can be expected to perform correctly.
If a DCB is closed before a response to the handshake packet is received,
the DCB's session will point to the dummy session. In this case no error
should be written to the DCB.
This is a cherry-pick of commit f53e112bf49766f1cc55516c2d7ee571461d483f
from the 2.2 branch.
If the avrorouter is being built and the required libraries are not found,
the configuration process should fail. Including the command that bypasses
this check in the error message makes it easier to disable this part if it
is not needed.
The message now states the implications of missing permissions. If the
MaxScale user does not have the permissions to view all databases, it will
only see its own databases.
A linefeed is whitespace, so given the rules
"\n"+ return '\n'
{SPACE} ;
a line consisting of a space followed by a linefeed will be matched
as space and not as a linefeed, and hence will cause the parser to
barf.
"INTERVAL N <unit>" is now handled as an expression in itself and
as such will cause both statements such as
"SELECT '2008-12-31 23:59:59' + INTERVAL 1 SECOND;"
and
"select id from db2.t1 where DATE_ADD("2017-06-15", INTERVAL 10 DAY) < "2017-06-15";"
to be handled correctly. The compare test program contains some
heuristic checking, as the embedded parser will in all cases
report date manipulation as the use of the add_date_interval()
function.
After a temporary table is created, readwritesplit will check whether a
query drops or targets that temporary table. The check for the query type
was missing from the part of the code that handles dropping the table. The
part that reads from temporary tables was already checking that the query
is a text form query.
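As a rough illustration (not the actual readwritesplit code; the function
names are made up for the example), only packets whose command byte is
COM_QUERY carry a text form query that can be inspected for temporary
table usage:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MYSQL_HEADER_LEN 4   /* 3 byte payload length + 1 byte sequence id */
    #define MYSQL_COM_QUERY  0x03

    /* A packet is a text form query only if its command byte is COM_QUERY. */
    static bool is_text_query(const uint8_t* packet, size_t len)
    {
        return len > MYSQL_HEADER_LEN && packet[MYSQL_HEADER_LEN] == MYSQL_COM_QUERY;
    }

    static void route_query(const uint8_t* packet, size_t len)
    {
        if (is_text_query(packet, len))
        {
            /* ...check whether the statement drops or targets a temporary table... */
        }
    }
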
Added a debug assertion to the query parsing function in qc_sqlite to
catch this type of interface misuse.
MySQLAuth requires the SHOW DATABASES privilege to see all the databases
so it should be checked that the current user has the permission. A
missing permission will cause errors that are hard to resolve.
With a granularity of 1 second, the load will, from a human
perspective, reflect the current situation. That also means
that the maxadmin output shows "natural" steps: 1s, 1m and 1h.
When the IO thread of a relay master is stopped, the knowledge that it is
not a real master but a relay master is lost. To prevent this loss of
information, the master server's server_id value should always be stored
if it is available.
If a server is removed from a service, readconnroute will not verify that
the server it is connected to is still the same root master. This fixes
the regression of MXS-1418.
When auto_failover has been disabled due to failure to perform automatic
failover, the test should reset the number of failures. Resetting it
before the check for the log message allows the test to still fail if the
message is missing.
Test that `list servers` works with both monitored and unmonitored
servers. The test doesn't check that the GTID is present as the test setup
uses file and position based replication.
When TSV output is requested, the output should not contain the ANSI color
codes. This appears to be a "feature" of the table generation library but
it is quite simple to work around.
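As an illustration of the workaround (the actual change is in MaxCtrl's
JavaScript code, not in C), the ANSI SGR color sequences can simply be
filtered out of the generated output:

    /* Remove ANSI SGR color sequences (ESC '[' ... 'm') from a string in place. */
    static void strip_ansi_colors(char* s)
    {
        char* out = s;

        while (*s)
        {
            if (*s == '\x1b' && s[1] == '[')
            {
                s += 2;

                while (*s && *s != 'm')   /* skip the sequence parameters */
                {
                    s++;
                }

                if (*s == 'm')
                {
                    s++;
                }
            }
            else
            {
                *out++ = *s++;
            }
        }

        *out = '\0';
    }
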
When the servers were iterated, each related monitor was requested. This
caused as many requests for monitors as there were servers.
By creating a set of unique monitor names and requesting each monitor only
once, we avoid the redundant requests.
The function getResource had two different implementations that did very
different things. The newer version, that returns the raw JSON object that
represents a resource, was renamed to getJson.
Added the `GTID` field to make it easier to track GTID values of the
servers. This is done by requesting the related monitors for each server
and injecting an extra parameter into the server resource.
By definition, the load is calculated using the following formula:
L = 100 * ((T - t) / T)
where T is a time period and t the time of that period that the worker
spends in epoll_wait(). So, if there is so much work that epoll_wait()
always returns immediately, then the load is 100 and if the thread
spends the entire period in epoll_wait(), then the load is 0.
The basic idea is that the timeout given to epoll_wait() is adjusted
so that epoll_wait() will always return roughly at 10 seconds interval.
By making a note of when we are about to enter epoll_wait() and when we
return from it, we have all the information we need for calculating the
load.
Due to the nature of things, we will not be able to calculate the load
at exact 10-second boundaries, but it will be pretty close. And the load
is always calculated using the true length of the period.
We will then calculate 1 minute load by averaging the load value for 6
consecutive 10-second periods and the 1 hour load by averaging the load
value of 60 consecutive 1 minute loads.
So, while the 10-second load represents the load of the most recently
measured 10-second period (and not the load of the most recent 10
seconds), the 1 minute load and the 1 hour load represent the load of
the most recent minute and hour respectively.
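A minimal sketch of the idea (assuming a simple millisecond clock and a
single epoll loop; the real worker code is more involved):

    #include <stdint.h>
    #include <sys/epoll.h>
    #include <time.h>

    #define PERIOD_MS 10000   /* nominal length of one measurement period */

    static uint64_t now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    void poll_loop(int epoll_fd)
    {
        struct epoll_event events[64];
        uint64_t period_start = now_ms();
        uint64_t waited_ms = 0;   /* time spent inside epoll_wait() this period */

        for (;;)
        {
            /* Adjust the timeout so that we return roughly at the period boundary. */
            int64_t timeout = PERIOD_MS - (int64_t)(now_ms() - period_start);
            if (timeout < 0)
            {
                timeout = 0;
            }

            uint64_t before = now_ms();
            int n = epoll_wait(epoll_fd, events, 64, (int)timeout);
            waited_ms += now_ms() - before;

            /* ...handle the n returned events... */
            (void)n;

            uint64_t end = now_ms();
            if (end - period_start >= PERIOD_MS)
            {
                uint64_t T = end - period_start;   /* true length of the period */
                int load = (int)(100 * (T - waited_ms) / T);
                /* ...record load; the 1 minute and 1 hour loads are averages of
                   6 ten-second values and 60 one-minute values respectively... */
                (void)load;
                period_start = end;
                waited_ms = 0;
            }
        }
    }
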
When the parsing of a query failed, the message treated the length argument
as a field width instead of a precision, as the printf format was `%*s`
instead of `%.*s`.
The manpage of printf states the following about the precision specifier:
... or the maximum number of characters to be printed from a string
for `s` and `S` conversions.
This means that the field width specifier is somewhat meaningless for
strings.
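A small example of the difference (illustrative only, not the actual error
message):

    #include <stdio.h>

    int main(void)
    {
        const char* sql = "SELECT 1 FROM t1";
        int len = 6;

        printf("[%*s]\n", len, sql);    /* width: prints "[SELECT 1 FROM t1]", len only pads */
        printf("[%.*s]\n", len, sql);   /* precision: prints "[SELECT]", at most len chars */
        return 0;
    }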