In this case, the server was already a slave and is not being demoted. Also, the file may
contain queries which cannot be run while a slave connection is running.
Fixed string truncation warnings by reducing max parameter lengths by one
where applicable. The binlogrouter filename lengths are slightly different,
so using memcpy to work around the warnings is an adequate "solution"
until the root of the problem is solved.
Removed unnecessary CMake policy settings from qc_sqlite. Adding a
self-dependency on the source file of an external project had no effect
and only caused warnings to be logged.
The documentation stated that all CPUs would be used when threads=auto was
used. In reality the behavior was the same as it was with 2.0 (number of CPUs
minus one).
The SQL queries are given in two text files, defined by the options promotion_sql_file
and demotion_sql_file. The files must exist when the monitor starts. The files are read
line by line, ignoring empty lines and lines starting with '#'. All other lines
are sent to the server being promoted, demoted or rejoined. Any error in opening
a file, reading it or executing the contents will cause the entire operation to
fail.
The file defined in demotion_sql_file is also run when rejoining a server. This
is to ensure a previously failed master is "demoted" properly when it joins the
cluster.
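As an illustration, the configuration and file contents could look like this
(paths and queries are examples only, not taken from the patch):
[MariaDB-Monitor]
type=monitor
module=mariadbmon
promotion_sql_file=/path/to/promotion.sql
demotion_sql_file=/path/to/demotion.sql
# Example demotion.sql: empty lines and lines starting with '#' are
# skipped, everything else is sent to the server being demoted or
# rejoined.
SET GLOBAL read_only=ON;
FLUSH TABLES;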
With the changes to the DCB handling, the service pointer of a client DCB
must always be assigned.
Also removed the unnecessary parentheses around the comparison.
Large session commands weren't properly handled which caused the router to
think that the trailing end of a multi-packet query was actually a new
query.
This cannot be confidently solved in 2.2, which is why the router session
is now closed the moment a large session command is noticed.
Only commands that can contain an SQL statement should be stored for
retrying (COM_QUERY and COM_EXECUTE). Other commands are either session
commands or do not work with query retrying.
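A minimal sketch of such a check; the helper name is hypothetical and the
command byte values are written out from the wire protocol:
#include <cstdint>
// MySQL protocol command bytes (values from the wire protocol).
constexpr uint8_t COM_QUERY = 0x03;
constexpr uint8_t COM_STMT_EXECUTE = 0x17;
// Only commands that can carry an SQL statement are stored for retrying.
bool command_is_retryable(uint8_t command)
{
    return command == COM_QUERY || command == COM_STMT_EXECUTE;
}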
If maxadmin connections are handled by different workers, then
there may be a deadlock if some maxadmin command requires
communication with all workers.
Namely, in that case a message will be sent to every worker except
the current one, but that message will not be handled if the
receiving worker is at that point spinning on the debugcmd_lock
spinlock in debugcmd.c:execute_cmd().
We can prevent that deadlock from happening simply by ensuring
that all maxadmin connections are handled by one thread.
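A sketch of the idea with hypothetical names; the only point is that
maxadmin clients are always assigned to the same worker:
// Hypothetical worker-selection routine run when a client connection
// is accepted: maxadmin sessions are pinned to one worker so that two
// maxadmin commands can never run on different workers at once.
class Worker
{
public:
    static Worker* get(int id); // returns the worker with the given id
};
Worker* next_worker_round_robin(); // normal distribution for other clients
Worker* choose_worker(bool is_maxadmin_client)
{
    return is_maxadmin_client ? Worker::get(0) : next_worker_round_robin();
}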
The commands need to be handled separately from the rest of the result
types.
Added a test case that reproduces the problem and verifies that the change
in code fixes it.
When a LOAD DATA LOCAL INFILE finishes, the client sends an empty
packet. The second case where the client sends an empty packet is when
the previous packet was exactly 0xffffff bytes long. These two packets were
confused which caused the internal state to temporarily flip from inactive
to ending and back to inactive.
The aforementioned flip-flopping didn't have any practical differences but
it was caught by a debug assertion.
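A minimal sketch of the distinction, with hypothetical function and
parameter names:
#include <cstdint>
// An empty payload ends the LOAD DATA LOCAL INFILE only when it is not
// the terminator of a multi-packet sequence, i.e. when the previous
// packet was not a full 0xffffff-byte packet.
bool ends_load_data(uint32_t prev_payload_len, uint32_t payload_len)
{
    return payload_len == 0 && prev_payload_len != 0xffffff;
}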
The COM_STMT_FETCH command will create a response. Readwritesplit had
its own interpretation of the command, and it was wrong.
Also record the currently executed command event for session commands.
Readwritesplit would not handle multiple overlapping COM_STMT_EXECUTE
commands properly if they opened cursors. This was due to the fact that
the result would not be marked as complete and COM_STMT_FETCH commands
were executed as if they did not return results.
The correct implementation is to consider a COM_STMT_EXECUTE that opens a
cursor complete only when the first EOF packet is read (that is, when the
resultset header is read). This allows subsequent COM_STMT_FETCH commands
to be handled separately.
The separate COM_STMT_FETCH handling must count the number of packets that
are being fetched. This allows correct tracking of the state of a
COM_STMT_FETCH by checking that the number of packets is correct or the
second EOF/ERR packet is read.
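A sketch of the bookkeeping this implies, with hypothetical type and
member names:
#include <cstdint>
// Tracks one in-flight COM_STMT_FETCH: the fetch is complete once the
// requested number of row packets has been seen or an EOF/ERR packet
// arrives.
struct FetchTracker
{
    uint32_t requested; // rows asked for in the COM_STMT_FETCH
    uint32_t seen = 0;  // row packets read so far
    bool on_packet(uint8_t first_payload_byte)
    {
        if (first_payload_byte == 0xfe || first_payload_byte == 0xff)
        {
            return true; // EOF or ERR terminates the fetch
        }
        return ++seen == requested;
    }
};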
When a LOAD DATA LOCAL INFILE is actively rejected by the server, the
server sends an error to the client. This error was not detected and the
router was stuck in the special mode that handles LOAD DATA LOCAL INFILE.
The current command needs to be updated before the queries are actually
routed. This allows the KILL command detection and processing to work
correctly.
When the connection pool is inspected, both the client username and IP
must match. This causes the pool to be partitioned by username and IP,
preventing unintentional sharing of connections between different users.
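A minimal sketch of the matching rule, with hypothetical types and names:
#include <string>
struct PooledConnection
{
    std::string user; // client username the connection was created for
    std::string ip;   // client address the connection was created for
};
// A pooled connection may be reused only when both the username and
// the client IP match, partitioning the pool by (user, IP).
bool can_reuse(const PooledConnection& conn,
               const std::string& user, const std::string& ip)
{
    return conn.user == user && conn.ip == ip;
}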
The use of `router_options=master,slave` was not working as expected. This
was mostly caused by the master bit checks using a bitwise AND instead of
comparing equality. In addition to this, the master would not be
considered a valid candidate if both slaves and masters were available.
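To illustrate the difference between the two checks (the bit values here
are made up for the example):
#include <cstdint>
constexpr uint32_t SERVER_RUNNING = 0x01;
constexpr uint32_t SERVER_MASTER  = 0x02;
bool example()
{
    uint32_t status = SERVER_RUNNING | SERVER_MASTER;
    // Bitwise AND only asks whether the master bit is set...
    bool has_master_bit = (status & SERVER_MASTER) != 0; // true
    // ...while an equality comparison asks whether *only* that bit is
    // set, which is a much stricter question.
    bool is_exactly_master = (status == SERVER_MASTER);  // false
    return has_master_bit && !is_exactly_master;
}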
The Backend::dcb() method gives the raw pointer to the internal DCB. This
pointer is used by at least readwritesplit to map raw DCB pointers to
backends. To prevent stale pointers from being returned, m_dcb needs to be
set to NULL after it has been closed.
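A sketch of the invariant, assuming a trimmed-down Backend along the
lines described above:
struct DCB;
void dcb_close(DCB* dcb); // MaxScale's close routine (declaration only)
class Backend
{
public:
    DCB* dcb() const { return m_dcb; }
    void close()
    {
        dcb_close(m_dcb);
        m_dcb = nullptr; // dcb() must never return a stale pointer
    }
private:
    DCB* m_dcb = nullptr;
};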
The `MYSQL_ROW row` variable was being overwritten by the extra query done
by the SST method detection code. Moving it into its own function prevents
this and makes the code significantly easier to comprehend.
Added a test case that reproduces the problem (MaxScale crashed) and
verifies that the patch fixes the problem.
The monitor queried the session-specific domain id, which does not follow the global
value while the session is alive. This caused the monitor to follow the wrong gtid
domain if the domain was changed after MaxScale was started. This patch modifies the
query to read the global value instead. Even this is not fool-proof, as existing
sessions can issue writes with the old domain, confusing the gtid parsing.
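As a sketch, the difference between the session and global queries (the
exact query text in the patch may differ):
-- Session value: initialized from the global at connect time and not
-- affected by later changes to the global value.
SELECT @@gtid_domain_id;
-- Global value: what the monitor should follow.
SELECT @@global.gtid_domain_id;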
From a practical perspective it makes no relevant difference
whether you have to add an entry to the config file and restart
maxscale, or restart maxscale with a specific command line option,
so it is better to provide just one of the two possibilities.
More important would be to provide a way for turning this feature
on and off at runtime.
With the configuration entry
dump_last_statements=[never|on_close|on_error]
you can now specify when and if to dump the last statements
of a session.
With the configuration entry
retain_last_statements=<unsigned>
or the debug flag '--debug=retain-last-statements=<unsigned>',
MaxScale will store the specified number of last statements
for each session. By calling
session_dump_statements(session);
MaxScale will dump the last statements as NOTICE messages.
For debugging purposes.
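A minimal configuration sketch with example values:
[maxscale]
# Keep the 20 most recent statements of each session...
retain_last_statements=20
# ...and dump them as NOTICE messages when a session ends in an error.
dump_last_statements=on_error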
If a MaxScale-generated configuration defines an empty value, it is
ignored with the assumption that the next modification will cause the
problem to correct itself.
Disabling the session cache prevents errors from being generated as the
default OpenSSL configuration is to enable session caching but with an
uninitialized context ID. In addition to preventing the errors, it
avoids the possible security problems implied by defining a
"static" context ID.
We need to copy some data from an AF_UNIX based listener dcb
to the accepted client dcb, to prevent assertion violation in
dcb_get_port(). Further, to be able to log the path in the case
of an authentication error we need to copy that as well.
If a table/database rule has been provided and the resultset
does not contain table/database names, then we consider it a match
(subject, obviously, to the column matching).
Otherwise a rule like
{
    "replace": {
        "table": "info",
        "column": "email"
    },
    "with": {
        "fill": "*"
    }
}
could be bypassed with a statement like
SELECT * FROM info UNION SELECT * from info
as the resultset in that case will not indicate that the column email
is from info, which it will if the statement is
SELECT * FROM info;