The cache filter walks through the resultset in order to detect
when the resultset ends. That is, it reads each packet header as
it arrives.
If the resultset is large, the cache has to read several packet
headers, which it does using gwbuf_copy_data(). However, as the copy
was always started from the first received GWBUF, the buffer chain
was walked in gwbuf_copy_data() over and over again, with a
significant performance hit as the result.
Now we separately store the last buffer received and its starting
offset within the resultset. That way there will be no buffer chain walking.
As this is a common problem, GWBUF could cache the offset of the tail,
thus removing the performance penalty if you read from an offset that
happens to be in the tail. However, it's better to do that as a part
of a general overhaul of GWBUF.
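A minimal sketch of the approach, assuming the gwbuf_copy_data() call
mentioned above (whose offset is relative to the buffer it is given); the
struct and helper names are illustrative, not the actual cache filter code:

```
#include <stdint.h>
#include <maxscale/buffer.h>

/* Illustrative session state, not the actual cache filter struct. */
typedef struct
{
    GWBUF* last_buffer; /* last buffer appended to the resultset */
    size_t last_offset; /* offset of that buffer within the whole resultset */
} ScanState;

/* Read 'nbytes' starting at resultset offset 'offset'. When the offset
 * falls inside the tail, the copy starts from the remembered buffer, so
 * gwbuf_copy_data() no longer walks the whole chain for every header. */
static size_t read_at(const ScanState* state, GWBUF* head,
                      size_t offset, size_t nbytes, uint8_t* dest)
{
    if (offset >= state->last_offset)
    {
        return gwbuf_copy_data(state->last_buffer,
                               offset - state->last_offset, nbytes, dest);
    }

    return gwbuf_copy_data(head, offset, nbytes, dest);
}
```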
If the server sends a server shutdown error, it is safe for readwritesplit
to ignore it. When the TCP connection is closed, the router error handling
will discard the connection, optionally replacing it.
The function implemented redundant functionality and its replacement with
modutil_get_next_MySQL_packet was already planned.
When faced with a packet header spread over multiple buffers, the packet
length calculation would read past the end of the buffer. This is fixed by
switching to modutil_get_next_MySQL_packet.
Identical behavior to the old function is achieved by calling
gwbuf_make_contiguous for each packet so that it is stored in a contiguous
area of memory. This should either be removed and done only when
RCAP_TYPE_CONTIGUOUS_INPUT is requested, or be made an innate feature of
statement-based routing.
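A sketch of the resulting packet-splitting loop, assuming the usual
signatures of modutil_get_next_MySQL_packet() and gwbuf_make_contiguous();
the routing call is a placeholder:

```
#include <maxscale/buffer.h>
#include <maxscale/modutil.h>

/* Placeholder for the actual routing call. */
extern void route_packet(GWBUF* packet);

static void route_complete_packets(GWBUF** readbuf)
{
    GWBUF* packet;

    /* modutil_get_next_MySQL_packet() returns NULL once the remaining data
     * no longer contains a complete packet, even when the header itself is
     * split across buffers. */
    while ((packet = modutil_get_next_MySQL_packet(readbuf)) != NULL)
    {
        /* Matches the old behavior: every packet is stored contiguously.
         * Ideally this would be done only when RCAP_TYPE_CONTIGUOUS_INPUT
         * is requested. */
        packet = gwbuf_make_contiguous(packet);
        route_packet(packet);
    }
}
```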
The debug assertion introduced by commit 3d1c2b421a fails when a
COM_CHANGE_USER is executed. This was caused by the fact that the
authentication data was being interpreted as a command when it should have
been ignored.
Added a debug assertion to the reauthentication code to make sure the
current command remains the same.
Auto-rejoin now explains more accurately if a server cannot be joined due
to conflicting GTIDs.
Also, auto-rejoin is no longer disabled if a join fails. Usually the
failure is caused by the server not replying to the query quickly enough,
even though the query often completes anyway. This can lead to some log spam.
Auto-failover is no longer considered to have failed if the preconditions
are not met. An error message with the failed checks is printed once, but
the checks are repeated every loop as long as the master is down.
The mysqlauth SQLite database is now opened in WAL mode if possible. This
should prevent lockups of the database when the list of users is updated.
Also, the start of the SQLite transaction was moved one level up so that it
also covers the delete part. This should further reduce the effects of
updating users.
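A rough sketch of the idea, not the actual mysqlauth code; the table name
and error handling are illustrative:

```
#include <sqlite3.h>

static void refresh_users(sqlite3* db)
{
    /* WAL mode lets readers keep using the database while the user list
     * is being rewritten. If the PRAGMA fails, the default journal mode
     * is simply kept. */
    sqlite3_exec(db, "PRAGMA journal_mode=WAL", NULL, NULL, NULL);

    /* One transaction around both the delete and the inserts keeps the
     * window during which other threads see a partial user list as short
     * as possible. */
    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    sqlite3_exec(db, "DELETE FROM users", NULL, NULL, NULL);
    /* ... INSERT the refreshed user rows here ... */
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
}
```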
When a client connection is closed by MaxScale before the client initiates
a controlled closing of the connection, an error message is sent. This
error message now also explains why the connection was closed to make
problem resolution easier.
The number of arguments to createListener was incremented but the maximum
count was not. Also fixed the parameter types for createListener and
alterServer.
The server runtime alteration was broken by commit
c850336199c3c19508a3d280fb7000291d66b80c when it increased the maximum
argument count of the `alter server` command to 14.
Servers in MaxScale can encrypt the connections without client keys and
certificates. As keys and certificates are no longer required, the CA
certificate must always be initialized.
When a listener is created at runtime or SSL is being enabled for an
already created listener, the ssl_verify_peer_certificate parameter can
now be defined.
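For example, a listener section along these lines can now carry the
parameter; the names, paths, and other SSL options below are illustrative:

```
[RW-Listener]
type=listener
service=RW-Service
protocol=MySQLClient
port=4006
ssl=required
ssl_cert=/etc/maxscale/ssl/server-cert.pem
ssl_key=/etc/maxscale/ssl/server-key.pem
ssl_ca_cert=/etc/maxscale/ssl/ca-cert.pem
ssl_verify_peer_certificate=true
```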
If a server responds when no response was expected, dump stored
statements. This should help deduce root causes of problems relating to
unexpected responses.
Tracking how many times the monitor has performed its monitoring allows
the test framework to consistently wait for an event instead of waiting
for a hard-coded time period. The MaxCtrl `api get` command can be used to
easily extract the numeric value.
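For example, assuming the counter is exposed in the monitor resource of the
REST API (the monitor name and JSON path below are illustrative), something
along these lines:

```
maxctrl api get monitors/MariaDB-Monitor data.attributes.ticks
```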
The code that selects the candidate backend always returned the root
master if the server bitmask contained the master bit. This should only be
done if the master bit is the only bit in the bitmask; when there are other
bits, the normal candidate selection code should be used.
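A minimal sketch of the corrected check, as described above; the types, bit
values, and helper are illustrative and do not match the readwritesplit
sources:

```
#include <stdint.h>

typedef struct backend Backend;                           /* illustrative */
extern Backend* select_by_criteria(Backend**, uint32_t);  /* placeholder  */

#define MASTER_BIT 0x01 /* illustrative value */

static Backend* get_candidate_backend(Backend* root_master, Backend** all,
                                       uint32_t bitmask)
{
    if (bitmask == MASTER_BIT)
    {
        /* The master bit is the only bit set: the root master is the
         * candidate. */
        return root_master;
    }

    /* Other bits are present: run the normal candidate selection even
     * though the master bit may also be set. */
    return select_by_criteria(all, bitmask);
}
```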
Also added a query to the expanded test case to make sure the connection
actually works.
The two operations return different types of results and need to be
treated differently in order for them to be handled correctly in 2.2.
This fixes the unexpected internal state errors that happened in all 2.2
versions due to a wrong assumption made by readwritesplit. This fix is not
necessary for newer versions as the LOAD DATA LOCAL INFILE processing is
done with a simpler, and more robust, method.
Commit 67386980e327ad063b24cb55971cf44f4930e241 caused the actual events
to be ignored. This meant that the larger event size was assumed for all
events. In most cases this works but it is not the correct way to do it.
Up until 2.1.12, if the configuration file said 'router_options=slave',
the master was used if there were no slaves at session creation time.
That broke in 2.1.13 as a side-effect of MXS-1516, which checks at routing
time whether the server initially selected as the master still is the
master.
Now the required server status is stored separately for each session, so
that if the master was chosen even though 'router_options=slave' is set,
we can turn on the SERVER_MASTER bit. That allows the case to be handled
correctly in connection_is_valid().
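A sketch of the per-session adjustment; the struct and the bit values are
illustrative stand-ins for the real definitions in the server headers and
the router session:

```
#include <stdint.h>

/* Stand-ins for the real SERVER_MASTER/SERVER_SLAVE status bits. */
#define SERVER_MASTER 0x01
#define SERVER_SLAVE  0x02

typedef struct
{
    uint64_t bitvalue; /* status bits a candidate server must have */
} ClientSession;       /* illustrative, not the real session struct */

static void init_session_bits(ClientSession* ses, uint64_t configured_bits,
                              uint64_t chosen_server_status)
{
    ses->bitvalue = configured_bits;

    /* router_options=slave, but the master was chosen because no slaves
     * existed at session creation: also accept SERVER_MASTER so that
     * connection_is_valid() keeps accepting this connection. */
    if ((configured_bits & SERVER_SLAVE) && (chosen_server_status & SERVER_MASTER))
    {
        ses->bitvalue |= SERVER_MASTER;
    }
}
```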
Fixed a single spot where an existing hint pointer was overwritten. Removed
gwbuf_add_hint() because it was adding hints at the opposite end compared to
the functions in hint.h. Added hint_splice() as a replacement.
Defining the [maxscale] section in a configuration file that is not the
root configuration file is now treated as an error instead of silently
ignored.
- If a client DCB should be moved to some other worker than
the current one (cli and maxinfo), and the move fails, the
thread id must be reset to that of the calling thread as
otherwise asserts will be triggered.
- If the creation of the first DCB fails, then the dcb list
for that thread will be NULL and thus must be accessed
with some caution.
When the pipe buffer size is maximized, the message queue can hold more
messages. This will mitigate the problem of too many messages being placed
in the queue.
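A sketch of the Linux-specific resizing, not the actual queue code; the
fallback size is illustrative:

```
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>

static void maximize_pipe_buffer(int pipe_fd)
{
    long max_size = 65536; /* fallback if the limit cannot be read */
    FILE* f = fopen("/proc/sys/fs/pipe-max-size", "r");

    if (f)
    {
        if (fscanf(f, "%ld", &max_size) != 1)
        {
            max_size = 65536;
        }
        fclose(f);
    }

    /* F_SETPIPE_SZ grows the pipe so that it can hold more queued
     * messages before writers start to block. */
    if (fcntl(pipe_fd, F_SETPIPE_SZ, (int)max_size) == -1)
    {
        /* Not fatal: the pipe keeps its default size. */
    }
}
```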
When DCBs are hung up in dcb_hangup_foreach, the hangup event can be
processed directly. This prevents excessive use of the worker message
queue pipe, thus reducing the possibility of it becoming full.
A client dcb was immediately added to the epoll-instance of the relevant
worker (possible, since that is thread-safe), but was added to the
book-keeping via the message mechanism (necessary, since that is not
thread-safe). Consequently, if the connection was closed before the
message was delivered, handling the message caused an access error.
Now the fd is also added to the epoll-instance via the messaging
mechanism, so the problem can no longer occur. The only fds this
affects are connections made to maxadmin or maxinfo as they are
always handled by the main thread due to deadlock issues.
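A rough sketch of the new ordering, using plain epoll calls and placeholder
types rather than the actual worker API:

```
#include <sys/epoll.h>

typedef struct { int epoll_fd; } Worker; /* illustrative */
typedef struct { int fd; } DCB;          /* illustrative */

extern void worker_register_dcb(Worker* worker, DCB* dcb); /* book-keeping placeholder */

/* Runs on the owning worker when the 'add dcb' message is handled. Both
 * the epoll registration and the book-keeping now happen here, so neither
 * can race against a close that happens before the message is delivered. */
static void handle_add_dcb_message(Worker* worker, DCB* dcb)
{
    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.ptr = dcb;

    epoll_ctl(worker->epoll_fd, EPOLL_CTL_ADD, dcb->fd, &ev);
    worker_register_dcb(worker, dcb);
}
```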
If the feature is enabled (default off), at the end of a monitor loop
(once server states are known), read_only is enabled on slave servers
that do not have it set.
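A sketch of the enforcement step, assuming a connected MYSQL handle for the
monitored server; the surrounding checks of the feature flag and the server
state are omitted:

```
#include <mysql.h>

static void enforce_read_only(MYSQL* con)
{
    if (mysql_query(con, "SET GLOBAL read_only=ON") != 0)
    {
        /* Log mysql_error(con); the slave keeps its current setting and
         * the attempt is repeated on a later monitor loop. */
    }
}
```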