If the service user does not have adequate grants on the mysql tables, the
legacy query is used. This prevents an upgrade failure when the user lacks
the new privileges.
The user now requires SELECT privileges on the mysql.roles_mapping
table.
Currently this is a mandatory grant, but it needs to be made optional. This
allows upgrades from 2.2.9 to 2.2.10 without needing new grants.
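A rough sketch of the intended fallback, assuming a Connector-C style
mysql_query() call; the query strings and error check are illustrative, not
the actual user-loading code:

    /* Placeholder query strings; the real queries select the user and
     * role data needed for authentication. */
    const char *ROLES_QUERY  = "SELECT ... FROM mysql.user "
                               "LEFT JOIN mysql.roles_mapping ON ...";
    const char *LEGACY_QUERY = "SELECT ... FROM mysql.user ...";

    if (mysql_query(con, ROLES_QUERY) != 0)
    {
        /* The service user lacks SELECT on mysql.roles_mapping: fall back
         * to the legacy query instead of failing after an upgrade. */
        mysql_query(con, LEGACY_QUERY);
    }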
The cache filter walks through the resultset in order to detect when the
resultset ends; that is, it reads each packet header as it arrives.
If the resultset is large, the cache has to read several packet headers,
which it does using gwbuf_copy_data(). However, since the copying always
used the first received GWBUF as the starting point, gwbuf_copy_data()
walked the buffer chain over and over again, with a significant performance
hit as the result.
Now we separately store the last buffer received and its starting offset.
That way there is no buffer chain walking.
As this is a common problem, GWBUF could cache the offset of the tail,
thus removing the performance penalty if you read from an offset that
happens to be in the tail. However, it's better to do that as a part
of a general overhaul of GWBUF.
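A minimal sketch of the idea, assuming the gwbuf_copy_data() signature from
the MaxScale buffer API; the state struct and field names are illustrative:

    typedef struct
    {
        GWBUF  *last_buffer;  /* last buffer received so far */
        size_t  last_offset;  /* offset of last_buffer within the resultset */
    } RES_STATE;

    static size_t copy_header(RES_STATE *state, size_t packet_start,
                              uint8_t *header)
    {
        /* Start the copy from the remembered tail buffer instead of the
         * head of the chain, so the earlier buffers are not walked again.
         * MYSQL_HEADER_LEN is the 4-byte MySQL packet header length. */
        return gwbuf_copy_data(state->last_buffer,
                               packet_start - state->last_offset,
                               MYSQL_HEADER_LEN,
                               header);
    }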
If the server sends a server shutdown error, it is safe for readwritesplit
to ignore it. When the TCP connection is closed, the router error handling
will discard the connection, optionally replacing it.
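An illustrative sketch of the check; the error-code extraction helper is
hypothetical, and 1053 is the MySQL ER_SERVER_SHUTDOWN error code:

    uint16_t code = get_mysql_error_code(errbuf);   /* hypothetical helper */

    if (code == 1053)   /* ER_SERVER_SHUTDOWN */
    {
        /* Safe to ignore: when the TCP connection closes, the router's
         * error handling discards the connection and optionally
         * replaces it. */
        return;
    }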
The function implemented redundant functionality and replacement with
modutil_get_next_MySQL_packet was planned.
When faced with a packet header spread over multiple buffers, the packet
length calculation would read past the buffer end. This is fixed by using
modutil_get_next_MySQL_packet instead.
Identical behavior to the old function is achieved by calling
gwbuf_make_contiguous for each packet, so that it is stored in a contiguous
area of memory. This should either be removed and only done when
RCAP_TYPE_CONTIGUOUS_INPUT is requested, or be made an innate feature of
statement-based routing.
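A hedged sketch of the packet extraction loop, assuming the
modutil_get_next_MySQL_packet() and gwbuf_make_contiguous() signatures from
the MaxScale headers; the routing call is a placeholder:

    GWBUF *packet;

    while ((packet = modutil_get_next_MySQL_packet(&readbuf)) != NULL)
    {
        /* Mimics the old behavior: each packet ends up in one contiguous
         * area of memory before it is routed onwards. */
        packet = gwbuf_make_contiguous(packet);
        route_one_packet(inst, session, packet);   /* placeholder call */
    }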
This also applies to autoselect switchover. The disk space warning has the
lowest priority, as the other criteria could lead to replication failures.
Also, print the reason the new master was selected over the second-best
candidate.
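An illustrative comparison between two candidates, with the criteria ordered
so that the disk space warning is checked last; the struct and field names
are placeholders, not the actual monitor code:

    static bool is_better_candidate(const CANDIDATE *a, const CANDIDATE *b)
    {
        /* Criteria that could lead to replication failures come first. */
        if (a->slave_connections != b->slave_connections)
        {
            return a->slave_connections > b->slave_connections;
        }
        if (a->events_behind != b->events_behind)
        {
            return a->events_behind < b->events_behind;
        }
        /* Lowest priority: only as a tie-breaker is the candidate without
         * a low disk space warning preferred. */
        return !a->disk_space_low && b->disk_space_low;
    }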
The server information in galeramon is gathered every monitoring
interval. To prevent stale information from being used, the server
information needs to be cleared at the start of each monitoring interval.
At the start of the monitor tick, the monitor pending status is set to the
value of the current server status. This allows the informative and
history bits to survive even if a server goes down.
To make sure that no stale state bits are in effect when the post_tick
method is called, the pending status must be cleared of all status bits.
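A minimal sketch of the sequence, assuming a pending_status field on the
monitored server; the bit mask name is illustrative:

    /* At the start of the tick: carry over the current status so that the
     * informative and history bits survive even if the server goes down. */
    mon_srv->pending_status = mon_srv->server->status;

    /* Before post_tick: clear all state bits so that no stale state is in
     * effect when the monitor recalculates them. */
    mon_srv->pending_status &= ~ALL_STATE_BITS;   /* illustrative mask */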
Readconnroute with router_options=slave will use a master if one is
available. The test still expected the old behavior where masters were
never used with router_options=slave.
The state of the backend needs to be checked before any pending session
commands are executed on it.
Added debug assertions to catch invalid use of the status functions of
closed backends.
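A hedged sketch of the checks; ss_dassert is the 2.2-era MaxScale debug
assertion macro, and the backend method names are assumptions:

    /* Status functions must not be used on a closed backend. */
    ss_dassert(backend->in_use());

    /* Only execute pending session commands on a backend that is open. */
    if (backend->in_use() && backend->session_command_count() > 0)
    {
        backend->execute_session_command();
    }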
After a session command fails on all connected slaves but succeeds on the
master, the servers that failed are discarded from the pool of valid
servers. If there are available servers, they will be taken into use when
the next read query is performed.
When a query is routed to an unconnected slave, it is taken into use and
the session command history is replayed. If the execution of the history
fails and there are more session commands to be executed, any queued queries
would get lost. This was due to a missing check that would have detected
that the server had been discarded.
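An illustrative sketch of the missing check; in_use() and the re-routing
call are placeholder names, not necessarily the actual readwritesplit code:

    /* After replaying the session command history on a freshly connected
     * slave, make sure the backend was not discarded before routing the
     * queued queries to it. */
    if (!backend->in_use())
    {
        /* The history replay failed and the server was discarded; route
         * the queued queries to another candidate instead. */
        retry_stored_queries();   /* placeholder for the re-routing step */
    }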
The core library now contains the maxscale_shutdown() command. This makes
it possible to resolve all symbols at link time even for administrative
modules.
MaxScale now defines events for which the syslog
facility and level can be explicitly defined by the
administrator. Currently there is only one such
event, namely AUTHENTICATION_FAILURE.
In a subsequent commit, config.cc will be modified so
that event-related configuration parameters are passed
to event::configure() and in another subsequent commit
the authenticators will be modified to use this mechanism.
In practice a line like:
MXS_WARNING("%s: login attempt for user '%s'@[%s]:%s, "
"authentication failed.",
dcb->service->name, client_data->user,
dcb->remote, dcb->path);
will be changed to
MXS_LOG_EVENT(event::AUTHENTICATION_FAILURE,
"%s: login attempt for user '%s'@[%s]:%s, "
"authentication failed.",
dcb->service->name, client_data->user,
dcb->remote, dcb->path);
The test unnecessarily did an unconditional drop of the user, resulting in
errors. Also prevented stopping MaxScale if it wasn't started in the
first place.
The debug assertion introduced by commit 3d1c2b421a failed when a
COM_CHANGE_USER was executed. This was caused by the authentication data
being interpreted as a command when it should have been ignored.
Added a debug assertion into the reauthentication code to make sure the
current command remains the same.
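A minimal sketch of the added assertion, with assumed member names from the
protocol module; ss_dassert is the 2.2-era debug assertion macro:

    uint8_t cmd_before = proto->current_command;

    /* ... reauthentication handling of the COM_CHANGE_USER payload ... */

    /* The authentication data must not be interpreted as a new command. */
    ss_dassert(proto->current_command == cmd_before);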