The current command needs to be updated before the queries are actually
routed. This allows KILL command detection and processing to work
correctly.
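As a minimal sketch of what this tracking is based on (standard MySQL
framing; the helper name is ours, not MaxScale's): the command byte sits
after the 3-byte payload length and the 1-byte sequence number, and it
must be read into the session state before routeQuery() is invoked.
    #include <cstdint>
    // Byte 4 of a MySQL packet is the command byte (e.g. COM_QUERY).
    inline uint8_t mysql_command_byte(const uint8_t* packet)
    {
        return packet[4];
    }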
The use of `router_options=master,slave` was not working as expected. This
was mostly caused by the master bit checks using a bitwise AND instead of
comparing equality. In addition to this, the master would not be
considered a valid candidate if both slaves and masters were available.
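The essence of the bit-check part of the fix, sketched with illustrative
status values (not MaxScale's actual constants):
    #include <cstdint>
    constexpr uint64_t SERVER_MASTER = 1 << 0; // illustrative values
    constexpr uint64_t SERVER_SLAVE  = 1 << 1;
    // Buggy: true for any server whose status merely includes the master bit.
    bool is_master_buggy(uint64_t status)
    {
        return (status & SERVER_MASTER) != 0;
    }
    // Fixed: the role bits must compare equal to exactly the master role.
    bool is_master_fixed(uint64_t status)
    {
        return (status & (SERVER_MASTER | SERVER_SLAVE)) == SERVER_MASTER;
    }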
The Backend::dcb() method gives the raw pointer to the internal DCB. This
pointer is used by at least readwritesplit to map raw DCB pointers to
backends. To prevent stale pointers from being returned, m_dcb needs to be
set to NULL after it has been closed.
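A simplified sketch of the idea (the declarations stand in for MaxScale's
own types and functions):
    #include <cstddef>
    struct DCB;
    void dcb_close(DCB* dcb);
    class Backend
    {
    public:
        DCB* dcb() const
        {
            return m_dcb; // raw pointer, used e.g. by readwritesplit as a map key
        }
        void close()
        {
            if (m_dcb)
            {
                dcb_close(m_dcb);
                m_dcb = NULL; // without this, dcb() would return a stale pointer
            }
        }
    private:
        DCB* m_dcb;
    };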
The `MYSQL_ROW row` variable was being overwritten by the extra query done
by the SST method detection code. Moving that code into its own function
prevents this and makes the code significantly easier to comprehend.
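A sketch of the extraction, with an illustrative query; the point is that
the MYSQL_ROW is now local to the helper and cannot clobber the row the
caller is still iterating over:
    #include <mysql.h>
    #include <string.h>
    static char* get_sst_method(MYSQL* conn)
    {
        char* method = NULL;
        if (mysql_query(conn, "SHOW VARIABLES LIKE 'wsrep_sst_method'") == 0)
        {
            MYSQL_RES* result = mysql_store_result(conn);
            if (result)
            {
                MYSQL_ROW row = mysql_fetch_row(result); // local to this function
                if (row && row[1])
                {
                    method = strdup(row[1]);
                }
                mysql_free_result(result);
            }
        }
        return method;
    }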
Added a test case that reproduces the problem (a MaxScale crash) and
verifies that the patch fixes it.
The monitor queried the session-specific domain id, which does not follow the global
value while the session is alive. This caused the monitor to follow the wrong gtid
domain if the domain was changed after MaxScale was started. This patch modifies the
query to read the global value instead. Even this is not fool-proof, as existing
sessions can issue writes with the old domain, confusing the gtid parsing.
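In MariaDB terms the change boils down to querying
SELECT @@global.gtid_domain_id
instead of
SELECT @@gtid_domain_id
since the session-scoped variable retains the value it had when the
monitor connection was created.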
From a practical perspective it makes no relevant difference
whether you have to add an entry to the config file and restart
MaxScale, or restart MaxScale with a specific command line option,
so it is better to provide just one of the two possibilities.
More important would be to provide a way for turning this feature
on and off at runtime.
With the configuration entry
dump_last_statements=[never|on_close|on_error]
you can now specify when and if to dump the last statements
of a session.
With the configuration entry
retain_last_statements=<unsigned>
or the debug flag '--debug=retain-last-statements=<unsigned>',
MaxScale will store the specified number of last statements
for each session. By calling
session_dump_statements(session);
MaxScale will dump the last statements as NOTICE messages.
This is intended for debugging purposes.
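For example, with a (hypothetical) configuration like
retain_last_statements=20
dump_last_statements=on_error
each session keeps its 20 most recent statements and, if the session
closes due to an error, dumps them as NOTICE messages.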
If a MaxScale-generated configuration defines an empty value, it is
ignored with the assumption that the next modification will cause the
problem to correct itself.
Disabling the session cache prevents errors from being generated as the
default OpenSSL configuration is to enable session caching but with an
uninitialized context ID. In addition to preventing the errors, it avoids
the possible security problems implied by defining a "static" context ID.
We need to copy some data from an AF_UNIX based listener dcb
to the accepted client dcb, to prevent assertion violation in
dcb_get_port(). Further, to be able to log the path in the case
of an authentication error we need to copy that as well.
If a table/database rule has been provided and the resultset does not
contain table/database names, then we consider it a match (subject to
the column, obviously).
Otherwise a rule like
{
    "replace": {
        "table": "info",
        "column": "email"
    },
    "with": {
        "fill": "*"
    }
}
could be bypassed with a statement like
SELECT * FROM info UNION SELECT * FROM info
as the resultset in that case will not indicate that the column email
is from the table info, which it will if the statement is
SELECT * FROM info;
When a multi-statement query consisting completely of UPDATE statements is
received, the packets can be received in two separate buffers. To cope
with this situation, the state change into REPLY_STATE_RSET_COLDEF must
only be done if the buffer contains more than a single packet.
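A sketch of the packet counting this relies on; the framing is standard
MySQL (a 3-byte little-endian payload length plus a sequence byte per
packet), the function itself is ours:
    #include <cstddef>
    #include <cstdint>
    static size_t count_mysql_packets(const uint8_t* data, size_t len)
    {
        size_t count = 0;
        size_t offset = 0;
        while (offset + 4 <= len)
        {
            size_t payload = data[offset]
                | (data[offset + 1] << 8)
                | (data[offset + 2] << 16);
            if (offset + 4 + payload > len)
            {
                break; // trailing partial packet, the rest arrives later
            }
            offset += 4 + payload;
            ++count;
        }
        return count;
    }
The transition to REPLY_STATE_RSET_COLDEF would then be made only when
the count is greater than one.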
Add missing listener JSON diagnostics call. Check that the
diagnostics_json function exists before calling it.
As the protocol modules don't have diagnostics functions, they aren't
called.
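The guard is essentially the following (json_t comes from Jansson, which
MaxScale uses; the struct is a stand-in for the real module API):
    #include <jansson.h>
    struct ModuleApi
    {
        json_t* (*diagnostics_json)(void* instance); // may be NULL
    };
    static json_t* module_diagnostics(const ModuleApi* api, void* instance)
    {
        // Only call the entry point if the module implements it.
        return api->diagnostics_json ? api->diagnostics_json(instance) : NULL;
    }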
Replace hard-coded strings with constant parameters. This makes the code
slightly cleaner.
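For instance, instead of repeating a literal like "socket" in several
places, a single named constant is used (the exact set shown here is
illustrative):
    static const char CN_ADDRESS[] = "address";
    static const char CN_PORT[]    = "port";
    static const char CN_SOCKET[]  = "socket";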
The database-level query now only takes rows with either global
select privileges or non-null database privileges. The table-level
query only accepts non-null databases and no global privileges,
as users with global select are added by the previous section.
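Roughly, the conditions amount to the following (heavily simplified,
not the actual queries):
WHERE u.Select_priv = 'Y' OR d.Db IS NOT NULL  -- database-level query
WHERE u.Select_priv = 'N' AND t.Db IS NOT NULL -- table-level query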
If two services referred to the same filter instance, it would
cause the filter to be deleted twice at MaxScale shutdown, with a
crash as the result.
Now when the services are deleted we just collect the unique
filter instances and then delete them after all services have
been deleted.
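A sketch of the two-pass teardown (the types and the free function stand
in for the real ones):
    #include <set>
    #include <vector>
    struct Filter;
    void filter_free(Filter* filter);
    void free_all_filters(const std::vector<std::vector<Filter*>>& per_service)
    {
        std::set<Filter*> unique; // duplicates collapse here
        for (const auto& filters : per_service)
        {
            unique.insert(filters.begin(), filters.end());
        }
        for (Filter* filter : unique)
        {
            filter_free(filter); // each instance is deleted exactly once
        }
    }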
Earlier, if a service had multiple listeners you would have had
MaxScale> show dbusers MyService
User names: alice@% ...
User names: bob@% ...
That is, no indication of which listener is reporting what. With
this commit the result will be
User names (MyListener1): alice@% ...
User names (MyListener2): bob@% ...
Further, the diagnostics function of an authenticator is now expected
to write the list of users to the provided DCB, without performing any
other formatting. The formatting (printing "User names" and appending
a line-feed) is now handled by the handler for the MaxAdmin command
"show dbusers".
This commit introduces changes that fix the relay master detection that
was broken by the merge from 2.1 into 2.2 by commit
1ecd791887994209eb29e56e1271f8c407cd0cdf.
In 2.2, the master server ID is used to detect whether a slave is actually
replicating from a master. The value is still displayed even if the slave
is not actively replicating from a master. The commit in 2.1 causes this
value to be stored unconditionally if it is available. By checking the
value of Last_IO_Errno and comparing it to a list of known error codes, we
know whether the slave is replicating properly.
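A sketch of the check; the error codes below are illustrative client-side
network errors, not the actual list:
    static bool slave_io_ok(int last_io_errno)
    {
        switch (last_io_errno)
        {
        case 0:    // no error
        case 2003: // CR_CONN_HOST_ERROR: connection refused, will be retried
        case 2013: // CR_SERVER_LOST: connection lost, will be retried
            return true;
        default:
            return false;
        }
    }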
The slave detection in 2.2 correctly identifies a broken slave with a
stopped IO thread. Due to this, the test case must be modified to check
that the relay master is not a slave if the IO thread is stopped.
If a local address has been specified, then all connections created
using mxs_mysql_real_connect() will use that same local address as
well.
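With the MySQL C API this boils down to setting MYSQL_OPT_BIND before
connecting; a sketch (names and argument handling simplified):
    #include <mysql.h>
    static MYSQL* connect_from(MYSQL* conn, const char* local_address,
                               const char* host, const char* user,
                               const char* password, unsigned int port)
    {
        if (local_address)
        {
            // Bind the outgoing connection to the configured local address.
            mysql_options(conn, MYSQL_OPT_BIND, local_address);
        }
        return mysql_real_connect(conn, host, user, password, NULL, port, NULL, 0);
    }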
A system test has not been created as our VMs do not have more than
one usable IP address. Locally it has been verified to work as
expected.
When MaxScale thinks that the master has failed, it tries to verify it by
seeing if the slave server is receiving events. There was a missing IO
thread status check in the slave_receiving_events function which caused
the failover to wait until the verification timed out.
The relay master detection logic also lacked a check for the slave SQL
thread status. The code should check the state of the SQL thread to
determine whether the server is actually a functional slave to a master.
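In terms of the SHOW SLAVE STATUS fields, the sketch below captures the
intended condition (field names abbreviated):
    struct SlaveStatus
    {
        bool io_running;  // Slave_IO_Running: Yes
        bool sql_running; // Slave_SQL_Running: Yes
    };
    static bool is_functional_slave(const SlaveStatus& status)
    {
        // Both replication threads must run for the server to count as a slave.
        return status.io_running && status.sql_running;
    }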
The fix causes a regression in the failover functionality as there is a
dependency between the slave's master ID and how the failover
performs. This dependency should not exist but fixing it causes a problem
with the mysqlmon_rejoin_bad2 test.
The variable containing the number of workers must be updated
only after the workers have been successfully created.
Failure to do this led to a crash in Worker::shutdown_all() if a
terminating signal was received after the worker initialization
had failed.
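A sketch of the ordering (create_worker() stands in for the real worker
initialization):
    bool create_worker(int index); // stand-in for the real worker creation
    static int n_workers = 0;      // read by Worker::shutdown_all()
    bool init_workers(int requested)
    {
        int created = 0;
        while (created < requested && create_worker(created))
        {
            ++created;
        }
        n_workers = created; // published only after successful creation
        return created == requested;
    }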
Instead of trying to figure out whether the kernel supports O_DIRECT
in conjunction with pipes, let's just use it and if it fails, try
without O_DIRECT.
Case in point: based on circumstantial evidence it seems that in a
container context, it may appear as if the kernel supports O_DIRECT
when in reality it does not. So it is better to use brute force.
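The brute-force approach looks roughly like this:
    #define _GNU_SOURCE // for O_DIRECT and pipe2()
    #include <fcntl.h>
    #include <unistd.h>
    static int create_pipe(int fds[2])
    {
        // Try packet mode first; fall back to a plain pipe if the kernel
        // (or the container runtime) rejects O_DIRECT.
        int rc = pipe2(fds, O_NONBLOCK | O_CLOEXEC | O_DIRECT);
        if (rc != 0)
        {
            rc = pipe2(fds, O_NONBLOCK | O_CLOEXEC);
        }
        return rc;
    }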
If a listener section specifies both a 'socket' and a 'port' the
creation will fail with a clear error message.
If both 'address' and 'socket' are specified, there will be a warning
that the address is meaningless.
It is now impossible to create two listeners for a service that
would listen on the same port/socket (as before), but the error
message is now sensible and provides detailed information to the
user.
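For example, a (hypothetical) listener like
[MyListener]
type=listener
service=MyService
socket=/tmp/maxscale.sock
port=4006
is now rejected outright, as 'socket' and 'port' are mutually exclusive.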
When MaxScale is starting, the loading of the listeners can take a while
if there are a large number of services and users to load. To signal this
to the user, progress messages should be logged after every service is
started.
When backend SSL connections were created, the connection creation was
done twice. This was due to missing detection of an already established
SSL connection.