The new method is called for each CREATE TABLE statement that is
processed as well as for all ALTER TABLE statements that modify the
table structure.
Right now the entry point is not in use, but it opens up the possibility
of persisting the CREATE TABLE statements at creation time. Currently the
tables are only persisted when the first actual event for the table is
received.
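A minimal sketch of such an entry point, with hypothetical names
(SchemaEventHandler, table_created); the real method and its signature in
the router differ:

```
#include <string>

class SchemaEventHandler
{
public:
    // Called for every CREATE TABLE statement that is processed and for every
    // ALTER TABLE statement that modifies the table structure. Not used yet,
    // but it would allow the CREATE TABLE statement to be persisted at
    // creation time instead of when the first data event for the table
    // arrives.
    virtual void table_created(const std::string& db,
                               const std::string& table,
                               const std::string& create_sql) = 0;

    virtual ~SchemaEventHandler() {}
};
```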
Added string conversion methods to the gtid_pos_t class that can be used
to store and load a GTID value.
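A minimal sketch of what the conversions could look like; the member names
and the to_string()/from_string() method names are assumptions, not the
real gtid_pos_t implementation. The text form used here is the MariaDB
"domain-server_id-sequence" GTID notation:

```
#include <cstdint>
#include <cstdio>
#include <string>

struct gtid_pos_t
{
    uint32_t domain = 0;     // Replication domain ID
    uint32_t server_id = 0;  // ID of the server that created the event
    uint64_t seq = 0;        // Sequence number within the domain

    // Serialize the GTID into the "domain-server_id-sequence" text form
    std::string to_string() const
    {
        return std::to_string(domain) + '-' + std::to_string(server_id)
               + '-' + std::to_string(seq);
    }

    // Parse a stored GTID string back into its components
    bool from_string(const std::string& str)
    {
        unsigned int d, s;
        unsigned long long q;
        if (sscanf(str.c_str(), "%u-%u-%llu", &d, &s, &q) != 3)
        {
            return false;
        }
        domain = d;
        server_id = s;
        seq = q;
        return true;
    }
};
```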
Also added the missing rpl.cc file that previously only had the Rpl class
constructor in it.
The actual processing of the replicated events is now delegated to the Rpl
class. This class only deals with the raw binary format of the log events,
which allows it to be used both for binlogs stored on disk and for binlogs
that have just been replicated.
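A rough sketch of the delegation, using hypothetical names for the header
fields and the handle_event() entry point; the real Rpl interface differs
in its details:

```
#include <cstddef>
#include <cstdint>

// Decoded replication event header (field names are illustrative)
struct REP_HEADER
{
    uint8_t  event_type;  // Type code of the replication event
    uint32_t event_size;  // Size of the event, including the header
    uint32_t next_pos;    // Position of the next event in the binlog
};

class Rpl
{
public:
    // Consumes one raw binlog event. Because only the binary event format is
    // read here, the same code path serves both events read from a binlog
    // file on disk and events that have just been replicated from the master.
    void handle_event(const REP_HEADER& hdr, const uint8_t* data, size_t len)
    {
        // ... decode the event and convert row events to the output format ...
        (void)hdr;
        (void)data;
        (void)len;
    }
};
```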
The code that checks for stop events is not needed as it is only used for
log messages. These aren't really useful to the end user so they can be
removed.
Moved the adjustment of the event size, needed when binlog checksums are
enabled, out of the handle_one_event function. This will make it possible
to move most of the processing done inside it into a reusable class of its
own.
Also fixed the memory leak for the event data.
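A minimal sketch of the idea, with hypothetical function names and assuming
a malloc()-allocated event buffer; the 4-byte CRC32 footer is what binlog
checksums append to each event:

```
#include <cstdint>
#include <cstdlib>

// When binlog checksums are enabled, every event carries a 4-byte CRC32
// footer that is not part of the event payload.
static const uint32_t BINLOG_CHECKSUM_LEN = 4;

// Stand-in for the reusable event processing
static void handle_one_event(uint8_t* data, uint32_t event_size)
{
    (void)data;
    (void)event_size;
}

void process_replication_event(uint8_t* data, uint32_t event_size, bool checksums_enabled)
{
    if (checksums_enabled)
    {
        // Trim the CRC32 footer from the size before handing the event over,
        // so the processing code never needs to know about checksums.
        event_size -= BINLOG_CHECKSUM_LEN;
    }

    handle_one_event(data, event_size);

    // The event buffer is owned by this function; releasing it here plugs the
    // leak mentioned above (assuming a malloc()-allocated buffer).
    free(data);
}
```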
The RowEventConverter is now passed as a parameter to the Avro
instance. Wrapped the value in an std::auto_ptr to make the cleanup
automatic (when it is implemented).
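A minimal sketch of the ownership arrangement, with hypothetical class and
constructor details; note that std::auto_ptr is deprecated since C++11 and
removed in C++17, so this only builds with older language standards:

```
#include <memory>

// Interface for converting replicated row events (e.g. into Avro records)
class RowEventHandler
{
public:
    virtual ~RowEventHandler() {}
};

class RowEventConverter : public RowEventHandler
{
};

class Avro
{
public:
    // The Avro instance takes ownership of the handler; wrapping it in
    // std::auto_ptr makes the cleanup automatic once the owner is destroyed.
    explicit Avro(std::auto_ptr<RowEventHandler> handler)
        : m_event_handler(handler)
    {
    }

private:
    std::auto_ptr<RowEventHandler> m_event_handler;
};

// Usage: ownership of the converter is transferred to the Avro instance.
// Avro avro(std::auto_ptr<RowEventHandler>(new RowEventConverter));
```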
Fixed a typo in the event handler member variable and removed the unused
stats member.
The file indexing provided very little benefit for the intended purpose of
the router. Removing it makes the whole system more robust and simplifies
the code by a large amount.
Allowing calls to select_connect_backend_servers even when all slaves are
connected fixes the debug assertion in that function which was triggered
when the execution of a queued query caused a new connection to be
created.
The Backend class response state tracking was not updated when a one-way
command was executed. This caused the logic in handleError to break if a
master was executing a command that wouldn't create a response.
Readwritesplit would hang when query execution was postponed because the
target server was executing a session command: the number of expected
responses was incremented even though no response was expected.
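Both of the above fixes come down to the same bookkeeping rule: a pending
response is only counted for commands that actually produce one. A minimal
sketch with hypothetical member names (the real Backend class is more
involved):

```
#include <cstdint>

// MySQL protocol commands that never generate a response from the server
static bool command_has_response(uint8_t cmd)
{
    const uint8_t COM_QUIT = 0x01;
    const uint8_t COM_STMT_SEND_LONG_DATA = 0x18;
    const uint8_t COM_STMT_CLOSE = 0x19;
    return cmd != COM_QUIT && cmd != COM_STMT_SEND_LONG_DATA && cmd != COM_STMT_CLOSE;
}

class Backend
{
public:
    void write_command(uint8_t cmd)
    {
        // ... forward the command to the backend server ...

        // Only count a pending response for commands that actually produce
        // one; one-way commands leave the response state untouched so that
        // handleError and the response bookkeeping stay consistent.
        if (command_has_response(cmd))
        {
            m_expected_responses++;
        }
    }

private:
    int m_expected_responses = 0;
};
```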
The test now sets a two-minute timeout for all larger operations. This
prevents excessive waiting when the test is executed.
Removed the row count output to stdout to prevent excessive logging when
the terminal contents are written to a file.
Also renamed the file to match the test case name. This should remove the
need to look up the source file to test name mapping in the CMakeLists.txt.
Auto-rejoin now explains more accurately if a server cannot be joined due
to a conflicting GTID.
Also, auto-rejoin is no longer disabled if a join fails. Usually the
failure is caused by the server not replying to the query quickly enough,
even though the query often completes anyway. This can lead to some log
spam.
Auto-failover is no longer considered to have failed if the preconditions
are not met. An error message with the failed checks is printed once, but
the checks are repeated every loop as long as the master is down.
In principle a syslog priority consists of a syslog level
bitwise OR'd with a syslog facility. That's clear from the syslog
man page, but not so clear from the code in syslog.h.
Anyway, to make it possible to log using a specific facility
(instead of the default LOG_USER), we must allow priorities
that include a specified facility.
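For reference, a minimal example using the standard syslog(3) interface;
only the identifier string passed to openlog() is made up:

```
#include <syslog.h>

int main()
{
    // The priority argument of syslog() is a facility OR'd with a level.
    // When the facility bits are zero, the default facility from openlog()
    // (or LOG_USER) is used, so logging to e.g. LOG_LOCAL0 requires that
    // priorities which already include a facility are allowed through.
    openlog("maxscale", LOG_PID, LOG_USER);
    syslog(LOG_LOCAL0 | LOG_ERR, "An error logged with the local0 facility");
    closelog();
    return 0;
}
```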
The purpose of this library is to create a utility library that is not
dependent on maxscale, for use in both maxscale and the system tests, and
possibly other apps. As time permits, general purpose utilities from
maxscale-common can be moved to the new library.
Here are answers to questions you may have:
- A top level directory "maxutils" contains the libraries. The current
structure is simply maxutils/maxbase. Each library has an 'include' and
a 'src' directory, with public headers in 'include'.
- Code is in a namespace with the same name as the directory.
- Headers are included like this: `#include <maxbase/stopwatch.hh>` (see
the usage sketch below)
- In case the library is published on its own, the include directives stay
the same (headers would be in /usr/include/maxutil, for example).
- I am not advocating many small libraries. But if some larger library
is written, say a general purpose state machine, it would not pollute
util/maxutil but go to util/maxsm.
Another example: Worker. It is a larger concept, but used so widely in
code that it could very well live in maxutil.
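A short usage sketch of the conventions listed above; the StopWatch class
and its lap() method are assumptions based on the stopwatch.hh header name:

```
// NOTE: StopWatch and lap() are guesses based on the stopwatch.hh header
// mentioned above; the actual names in the library may differ.
#include <maxbase/stopwatch.hh>

void time_something()
{
    // Code lives in a namespace named after the library directory
    maxbase::StopWatch timer;

    // ... perform the work being measured ...

    auto elapsed = timer.lap();
    (void)elapsed;
}
```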
NOTE: this was previously Review Request #6245.
The mysqlauth SQLite database is now opened in WAL mode if possible. This
should prevent lockups of the database when the list of users is updated.
Also moved the start of the SQLite transaction one level up so that it
also covers the delete part. This should further reduce the effects of
updating users.
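A minimal sketch of both changes using the public SQLite C API; the
database path, table name and schema are placeholders rather than the real
mysqlauth layout:

```
#include <sqlite3.h>

// Open the user database and switch it to write-ahead logging so that
// readers are not blocked while the user list is being rewritten. If WAL
// cannot be enabled, SQLite simply keeps the current journal mode.
bool open_users_db(const char* path, sqlite3** db)
{
    if (sqlite3_open(path, db) != SQLITE_OK)
    {
        return false;
    }

    sqlite3_exec(*db, "PRAGMA journal_mode=WAL;", nullptr, nullptr, nullptr);
    return true;
}

// Replace the stored users inside a single transaction: other connections
// never see a half-updated user list and the database is locked only once
// for both the delete and the inserts.
void replace_users(sqlite3* db)
{
    sqlite3_exec(db, "BEGIN;", nullptr, nullptr, nullptr);
    sqlite3_exec(db, "DELETE FROM users;", nullptr, nullptr, nullptr);
    // ... INSERT the refreshed user entries here ...
    sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr);
}
```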
The purpose of this library is to create a utility library that is not
dependent on maxscale, for use in both maxscale and the system tests, and
possibly other apps. As time permits, general purpose utilities from
maxscale-common can be moved to the new library.
Here are answers to questions you may have:
- Headers and sources are separated to allow public/private headers.
- A top level directory "util" contains the libraries. The current
structure is simply util/maxbase. The name of the top level
directory is not important.
- Code is in a namespace with the same name as the directory.
- Headers are included like this: `#include <maxbase/stopwatch.hh>`
- In case the library is published on its own, the include directives stay
the same (headers would be in /usr/include/maxbase, for example).
- I am not advocating many small libraries. But if some larger library
is written, say a general purpose state machine, it would not pollute
util/maxutil but go to util/maxsm.
Another example: Worker. It is a larger concept, but used so widely in
code that it could very well live in maxutil.
NOTE: this was previously Review Request #6245.
When a client connection is closed by MaxScale before the client initiates
a controlled closing of the connection, an error message is sent. This
error message now also explains why the connection was closed to make
problem resolution easier.
The number of arguments to createListener was incremented but the maximum
count was not. Also fixed the parameter types for createListener and
alterServer.
Not yet used, as more is needed to replace the old code. The
algorithm is based on counting the total number of slave nodes
a server has, possibly in multiple layers and/or cycles.
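A rough sketch of the counting idea with hypothetical types: every server
that directly or indirectly replicates from the candidate is counted once,
and a visited set keeps replication cycles from looping forever:

```
#include <set>
#include <vector>

struct ServerNode
{
    std::vector<ServerNode*> slaves;  // Servers replicating from this one
};

int count_slaves(ServerNode* node, std::set<ServerNode*>& visited)
{
    int total = 0;

    for (ServerNode* slave : node->slaves)
    {
        // Skip nodes that were already counted; this handles cycles in the
        // replication topology as well as multiple layers of slaves.
        if (visited.insert(slave).second)
        {
            total += 1 + count_slaves(slave, visited);
        }
    }

    return total;
}

// Usage: seed the visited set with the candidate itself so a cycle back to
// it is not counted, then pick the server with the highest total.
// std::set<ServerNode*> visited = {candidate};
// int n = count_slaves(candidate, visited);
```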
The server runtime alteration was broken by commit
c850336199c3c19508a3d280fb7000291d66b80c when it increased the maximum
argument count of the `alter server` command to 14.