Previously, schemarouter only mapped databases to the servers they resided
on. Now all tables are also mapped, which allows the router to route queries
to the right server based on the tables used in the query.
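The mapping could be sketched roughly as follows; the names ShardMap,
databases and tables are illustrative, not the actual schemarouter types:

    #include <map>
    #include <string>

    struct ServerRef {};  // stand-in for the real backend server handle

    struct ShardMap
    {
        // "db"       -> server that holds the database
        std::map<std::string, ServerRef*> databases;
        // "db.table" -> server that holds the table
        std::map<std::string, ServerRef*> tables;

        ServerRef* server_for_table(const std::string& db,
                                    const std::string& table) const
        {
            auto it = tables.find(db + "." + table);
            return it != tables.end() ? it->second : nullptr;
        }
    };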
The only way to cleanly separate the maxutils library from the MaxScale
CMake project is to make it a standalone CMake project. With the help of
ExternalProject, it should be relatively easy to use.
Backend::execute_session_command would use the overridden write method
instead of the Backend::write method that it intended to use. This caused
session commands that did not expect a response to be in a state that
expected a result.
Also fixed RWBackend::write to pass the response_type value to
Backend::write.
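A minimal sketch of the fix, with simplified signatures and names that are
assumptions rather than the real MaxScale interfaces: qualifying the call
with Backend:: bypasses the virtual override, and the override forwards the
response type instead of dropping it.

    struct GWBUF;

    class Backend
    {
    public:
        enum response_type {EXPECT_RESPONSE, NO_RESPONSE};

        virtual ~Backend() {}

        virtual bool write(GWBUF* buffer, response_type type = EXPECT_RESPONSE)
        {
            m_expects_reply = (type == EXPECT_RESPONSE);
            return true;
        }

        bool execute_session_command(GWBUF* buffer, response_type type)
        {
            // Call the base class method explicitly so the caller's
            // response_type is honoured, not the override's behaviour.
            return Backend::write(buffer, type);
        }

    protected:
        bool m_expects_reply = false;
    };

    class RWBackend : public Backend
    {
    public:
        bool write(GWBUF* buffer, response_type type = EXPECT_RESPONSE) override
        {
            // The override now forwards the response_type instead of
            // silently dropping it.
            return Backend::write(buffer, type);
        }
    };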
If the service user does not have adequate grants on the mysql tables, the
legacy query is used. This prevents an upgrade failure when the user lacks
the new privileges.
The cache filter walks through the resultset in order to detect when the
resultset ends. That is, it reads each packet header as it arrives.
If the resultset is large, the cache has to read several packet headers,
which it does using gwbuf_copy_data(). However, since the copy always used
the first received GWBUF as the starting point, gwbuf_copy_data() walked
the buffer chain over and over again, with a significant performance hit as
the result.
Now the last received buffer and its starting offset are stored separately,
so no buffer chain walking is needed.
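A rough sketch of the new bookkeeping; the member names are assumptions and
the chain handling is simplified:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    struct GWBUF
    {
        GWBUF*   next;
        uint8_t* data;
        size_t   length;
    };

    class ResultTracker
    {
    public:
        // Called whenever a new buffer is appended to the resultset.
        void buffer_added(GWBUF* pBuffer, size_t offset_in_result)
        {
            m_pLast = pBuffer;                 // last received buffer
            m_last_offset = offset_in_result;  // its starting offset
        }

        // Copy 'len' bytes starting at resultset offset 'offset' into
        // 'dest', starting the walk from the cached tail instead of the
        // head of the chain.
        void copy_from(size_t offset, uint8_t* dest, size_t len) const
        {
            size_t local = offset - m_last_offset;

            for (GWBUF* pBuf = m_pLast; pBuf && len > 0; pBuf = pBuf->next)
            {
                if (local >= pBuf->length)
                {
                    local -= pBuf->length;  // skip buffers before the offset
                    continue;
                }

                size_t n = pBuf->length - local;
                if (n > len)
                {
                    n = len;
                }

                memcpy(dest, pBuf->data + local, n);
                dest += n;
                len -= n;
                local = 0;
            }
        }

    private:
        GWBUF* m_pLast = nullptr;
        size_t m_last_offset = 0;
    };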
As this is a common problem, GWBUF could cache the offset of the tail,
thus removing the performance penalty if you read from an offset that
happens to be in the tail. However, it's better to do that as a part
of a general overhaul of GWBUF.
If the server sends a server shutdown error, it is safe for readwritesplit
to ignore it. When the TCP connection is closed, the router error handling
will discard the connection, optionally replacing it.
The function implemented redundant functionality and replacement with
modutil_get_next_MySQL_packet was planned.
When faced with a packet header spread over multiple buffers, the packet
length calculation would read past the buffer end. This is fixed by using
modutil_get_next_MySQL_packet.
Identical behavior to the old function is achieved by calling
gwbuf_make_contiguous for each packet to store them in a contiguous area
of memory. This should either be removed and only done when
RCAP_TYPE_CONTIGUOUS_INPUT is requested, or be made an innate feature of
statement-based routing.
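Under the assumption that modutil_get_next_MySQL_packet() returns one
complete packet at a time, gwbuf_make_contiguous() collapses a buffer chain
into a single buffer and gwbuf_append() links buffers together, the
splitting loop looks roughly like this (error handling omitted):

    #include <maxscale/buffer.h>   // assumed header locations
    #include <maxscale/modutil.h>

    GWBUF* split_packets(GWBUF** p_readbuf)
    {
        GWBUF* result = NULL;
        GWBUF* packet;

        // Unlike the old function, modutil_get_next_MySQL_packet() copes
        // with a packet header that is spread over multiple buffers.
        while ((packet = modutil_get_next_MySQL_packet(p_readbuf)) != NULL)
        {
            // Matches the old behaviour: each packet is stored in a
            // contiguous area of memory.
            packet = gwbuf_make_contiguous(packet);
            result = gwbuf_append(result, packet);
        }

        return result;
    }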
This also applies to autoselect switchover. The disk space warning has the least
priority, as the other criteria could lead to replication failures. Also, print
the reason the new master was selected over the second best candidate.
The server information in galeramon is gathered every monitoring
interval. To prevent stale information from being used, the server
information needs to be cleared at the start of each monitoring interval.
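A simplified sketch of the reset, with illustrative structures that are not
the actual galeramon types:

    #include <map>
    #include <string>

    struct GaleraNodeInfo
    {
        int  local_index = -1;
        int  cluster_size = 0;
        bool joined = false;
    };

    class GaleraMonitor
    {
    public:
        void tick()
        {
            // Clear the per-server information at the start of every
            // monitoring interval so stale values from the previous
            // interval are never reused.
            m_info.clear();

            // ... query each server and repopulate m_info ...
        }

    private:
        std::map<std::string, GaleraNodeInfo> m_info;  // keyed by server name
    };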
The state of the backend needs to be checked before any pending session
commands are executed on it.
Added debug assertions to catch invalid use of the status functions of
closed backends.
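A sketch of the kind of check and assertion described above; the names are
illustrative and the macro merely stands in for the MaxScale debug
assertion:

    #include <cassert>
    #define mxb_assert(x) assert(x)

    class Backend
    {
    public:
        bool in_use() const
        {
            return m_open;
        }

        bool execute_session_command()
        {
            // A closed backend must never execute pending session commands.
            mxb_assert(in_use());
            return in_use() && write_next_session_command();
        }

    private:
        bool write_next_session_command()
        {
            return true;  // placeholder for the real write
        }

        bool m_open = false;
    };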
The debug assertion introduced by commit 3d1c2b421a fails when a
COM_CHANGE_USER was executed. This was caused by the fact that the
authentication data was being interpreted as a command when it should've
been ignored.
Added a debug assertion into the reauthentication code to make sure the
current command remains the same.
The new method is called for each new CREATE TABLE statement that is
processed, as well as for all ALTER TABLE statements that modify the table
structure.
Right now the entry point is not in use, but it opens up the possibility of
persisting the CREATE TABLE statements at creation time. Currently the
tables are only persisted when the first actual event for the table is
received.
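One possible shape for such an entry point, with purely hypothetical names
(the actual interface may differ):

    #include <string>

    class SchemaHandler
    {
    public:
        virtual ~SchemaHandler() {}

        // Called for every CREATE TABLE statement and for every ALTER TABLE
        // statement that modifies the table structure, making it possible
        // to persist the table definition at creation time.
        virtual void table_definition_changed(const std::string& db,
                                              const std::string& table,
                                              const std::string& sql) = 0;
    };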
Added string conversion methods to the gtid_pos_t class that can be used
to store and load a GTID value.
Also added the missing rpl.cc file that previously only had the Rpl class
constructor in it.
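The conversions could look roughly like the following sketch, assuming the
MariaDB "domain-server_id-sequence" text format; the member names are
illustrative:

    #include <cstdint>
    #include <sstream>
    #include <string>

    struct gtid_pos_t
    {
        uint32_t domain = 0;
        uint32_t server_id = 0;
        uint64_t seq = 0;

        // Store the GTID as "domain-server_id-sequence".
        std::string to_string() const
        {
            std::ostringstream ss;
            ss << domain << '-' << server_id << '-' << seq;
            return ss.str();
        }

        // Load a GTID from the same format; returns an empty GTID on
        // malformed input.
        static gtid_pos_t from_string(const std::string& str)
        {
            gtid_pos_t rv;
            char d1 = 0, d2 = 0;
            std::istringstream ss(str);

            if (!(ss >> rv.domain >> d1 >> rv.server_id >> d2 >> rv.seq)
                || d1 != '-' || d2 != '-')
            {
                rv = gtid_pos_t();
            }

            return rv;
        }
    };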
The actual processing of the replicated events is now delegated to the Rpl
class. This class only deals with the raw binary format log events, which
allows it to be used both for binlogs stored on disk and for binlogs that
have just been replicated.
The code that checks for stop events is not needed as it is only used for
log messages. These aren't really useful to the end user so they can be
removed.
Moved the modification of the event size in case binlog checksums are
enabled outside of the handle_one_event function. This will make it
possible to move most of the processing done inside it into a reusable
class of its own.
Also fixed the memory leak for the event data.
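For reference, when binlog checksums are enabled each event carries a
trailing 4-byte CRC32, so the adjustment amounts to something like the
following (the function name is illustrative):

    #include <cstdint>

    const uint32_t BINLOG_CHECKSUM_LEN = 4;  // trailing CRC32

    // Length of the event payload that the event processing should see.
    inline uint32_t effective_event_length(uint32_t event_length,
                                           bool checksums_enabled)
    {
        return checksums_enabled ? event_length - BINLOG_CHECKSUM_LEN :
               event_length;
    }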
The RowEventConverter is now passed as a parameter to the Avro
instance. Wrapped the value in an std::auto_ptr to make the cleanup
automatic (when it is implemented).
Fixed a typo in the event handler member variable and removed the unused
stats member.
The file indexing provided very little benefit for the intended purpose of
the router. Removing it makes the whole system more robust and simplifies
the code by a large amount.
Allowing calls to select_connect_backend_servers even when all slaves are
connected fixes the debug assertion that was triggered when the execution of
a queued query caused a new connection to be created.
The Backend class response state tracking was not updated when a one-way
command was executed. This caused the logic in handleError to break if a
master was executing a command that wouldn't create a response.
Readwritesplit would hang when query execution was postponed because the
target server was executing a session command: the number of expected
responses was incremented even though no response was expected.
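A sketch of the corrected bookkeeping, with illustrative member names: a
response is only counted when the routed command will actually produce one.

    struct GWBUF;

    class RWSplitSession
    {
    public:
        void route_stored_query(GWBUF* querybuf, bool expecting_response)
        {
            // Incrementing the counter for a command that produces no
            // response left the router waiting for a reply that never
            // arrives.
            if (expecting_response)
            {
                m_expected_responses++;
            }

            route_to_target(querybuf);
        }

    private:
        void route_to_target(GWBUF*)
        {
            // placeholder for the actual routing
        }

        int m_expected_responses = 0;
    };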
Auto-rejoin now explains more accurately if a server cannot be joined due
to a conflicting GTID.
Also, auto-rejoin is no longer disabled if a join fails. Usually the failure
is due to the server not replying to the query quickly enough, and the query
often completes anyway. This can lead to some log spam.
Auto-failover is no longer considered to have failed if the preconditions
are not met. An error message with the failed checks is printed once, but
the checks are repeated every loop as long as the master is down.