The new method is called for each new CREATE TABLE statement that is
processed as well as for all ALTER TABLE statements that modify the table
structure.
Right now the entry point is not in use, but it opens up the possibility
of persisting the CREATE TABLE statements at creation time. Currently the
tables are only persisted when the first actual event for the table is
received.
Added string conversion methods to the gtid_pos_t class that can be used
to store and load a GTID value.
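A rough sketch of what such conversion methods can look like; the member
and method names here are assumptions rather than the actual gtid_pos_t
API. A MariaDB GTID is written as domain-server_id-sequence:

    #include <cstdint>
    #include <cstdio>
    #include <string>

    // Hypothetical stand-in for the real gtid_pos_t.
    struct gtid_pos_t
    {
        uint32_t domain = 0;
        uint32_t server_id = 0;
        uint64_t seq = 0;

        // Serialize the GTID into the textual form used for storage.
        std::string to_string() const
        {
            char buf[64];
            std::snprintf(buf, sizeof(buf), "%u-%u-%llu",
                          (unsigned int)domain, (unsigned int)server_id,
                          (unsigned long long)seq);
            return buf;
        }

        // Parse the textual form back into a GTID value.
        static gtid_pos_t from_string(const std::string& str)
        {
            gtid_pos_t gtid;
            unsigned int domain = 0;
            unsigned int server_id = 0;
            unsigned long long seq = 0;

            if (std::sscanf(str.c_str(), "%u-%u-%llu",
                            &domain, &server_id, &seq) == 3)
            {
                gtid.domain = domain;
                gtid.server_id = server_id;
                gtid.seq = seq;
            }

            return gtid;
        }
    };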
Also added the missing rpl.cc file, which previously only contained the
Rpl class constructor.
The actual processing of the replicated events is now delegated to the Rpl
class. This class only deals with the raw binary format of the log events,
which allows it to be used both for binlogs stored on disk and for binlogs
that have just been replicated.
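A minimal sketch of the kind of interface this delegation implies; the
type and method names are assumptions, not the actual MaxScale classes:

    #include <cstdint>

    // Assumed, simplified common event header; the real replication
    // header carries more fields than this.
    struct EventHeader
    {
        uint32_t timestamp;
        uint8_t  event_type;
        uint32_t event_size;
        uint32_t next_pos;
    };

    class Rpl
    {
    public:
        // `data` points to the raw event body that follows the header.
        // Because only the raw binary format is consumed here, the same
        // call works for events read from a binlog file on disk and for
        // events that were just received from the master.
        void handle_event(const EventHeader& hdr, const uint8_t* data);
    };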
The code that checks for stop events is not needed, as it is only used for
log messages. These aren't really useful to the end user, so they can be
removed.
Moved the modification of the event size, needed when binlog checksums are
enabled, outside of the handle_one_event function. This will make it
possible to move most of the processing done inside it into a reusable
class of its own.
Also fixed a memory leak in the handling of the event data.
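As a hedged illustration of the size adjustment in question: binlog
checksums append a four-byte CRC32 to every event, so the size seen by the
rest of the processing has to be reduced accordingly. The function name is
made up for the example:

    #include <cstdint>

    // CRC32 appended to the end of each event when checksums are on.
    static const uint32_t BINLOG_CHECKSUM_LEN = 4;

    // Returns the event size without the trailing checksum when
    // checksums are enabled; otherwise the size is used as-is.
    uint32_t effective_event_size(uint32_t event_size, bool checksums_enabled)
    {
        return checksums_enabled ? event_size - BINLOG_CHECKSUM_LEN
                                 : event_size;
    }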
The RowEventConverter is now passed as a parameter to the Avro
instance. Wrapped the value in an std::auto_ptr to make the cleanup
automatic (when it is implemented).
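Roughly, the ownership transfer looks like the following; the constructor
signature is an assumption. Note that std::auto_ptr is deprecated since
C++11 and removed in C++17, where std::unique_ptr is the replacement:

    #include <memory>

    class RowEventConverter
    {
        // Stub for the example.
    };

    class Avro
    {
    public:
        // Copying an auto_ptr transfers ownership, so the Avro instance
        // ends up owning the converter and deletes it on destruction.
        Avro(std::auto_ptr<RowEventConverter> handler)
            : m_event_handler(handler)
        {
        }

    private:
        std::auto_ptr<RowEventConverter> m_event_handler;
    };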
Fixed a typo in the event handler member variable and removed the unused
stats member.
The file indexing provided very little benefit for the intended purpose of
the router. Removing it makes the whole system more robust and simplifies
the code by a large amount.
Allowing calls to select_connect_backend_servers even when all slaves are
connected fixes the debug assertion in select_connect_backend_servers that
is triggered when the execution of a queued query causes a new connection
to be created.
The Backend class response state tracking was not updated when a one-way
command was executed. This caused the logic in handleError to break if a
master was executing a command that would not generate a response.
Readwritesplit would hang when query execution was postponed because the
target server was executing a session command. The number of expected
responses was incremented even though no response was expected.
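The fix amounts to bookkeeping along these lines; the names are
assumptions, not the actual readwritesplit code:

    // Only count a pending response when the routed command will
    // actually produce one; postponed queries and one-way commands must
    // not bump the counter, otherwise the router waits for a reply that
    // never arrives.
    class ResponseTracker
    {
    public:
        void on_command_routed(bool expects_response)
        {
            if (expects_response)
            {
                ++m_expected_responses;
            }
        }

        void on_response_received()
        {
            --m_expected_responses;
        }

        bool waiting_for_response() const
        {
            return m_expected_responses > 0;
        }

    private:
        int m_expected_responses = 0;
    };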
When a client connection is closed by MaxScale before the client initiates
a controlled closing of the connection, an error message is sent. This
error message now also explains why the connection was closed to make
problem resolution easier.
The number of arguments to createListener was incremented but the maximum
count was not. Also fixed the parameter types for createListener and
alterServer.
Not yet used, as more work is needed to replace the old code. The
algorithm is based on counting the total number of slave nodes a server
has, possibly across multiple layers and/or cycles.
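A sketch of the counting idea, not the actual monitor code: walk the
replication topology downwards from a server and count every distinct
slave, following slaves of slaves and using a visited set so that cycles
terminate. The Server type here is a stand-in:

    #include <set>
    #include <stack>
    #include <vector>

    struct Server
    {
        std::vector<Server*> slaves;  // direct slaves of this server
    };

    // Counts all slaves reachable from `root`, across multiple layers,
    // without getting stuck in replication cycles.
    int count_all_slaves(Server* root)
    {
        std::set<Server*> visited;
        std::stack<Server*> work;
        work.push(root);
        visited.insert(root);

        while (!work.empty())
        {
            Server* srv = work.top();
            work.pop();

            for (Server* slave : srv->slaves)
            {
                if (visited.insert(slave).second)
                {
                    work.push(slave);
                }
            }
        }

        // The starting server is not counted as its own slave.
        return static_cast<int>(visited.size()) - 1;
    }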
The server runtime alteration was broken by commit
c850336199c3c19508a3d280fb7000291d66b80c when it increased the maximum
argument count of the `alter server` command to 14.
Replaced the HASHTABLE in galeramon with an std::unordered_map. This
simplifies the code considerably and makes it more readable. Removed the
extraneous functions that mostly logged debug information and simplified
the logic by removing redundant checks.
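For illustration, the kind of replacement this describes; the key and
value types here are assumptions (e.g. mapping a cluster UUID to the
number of monitored nodes reporting it):

    #include <string>
    #include <unordered_map>

    std::unordered_map<std::string, int> cluster_members;

    void count_node(const std::string& cluster_uuid)
    {
        // operator[] value-initializes missing entries to zero, so no
        // explicit existence check or manual hashing code is needed.
        ++cluster_members[cluster_uuid];
    }

The container handles allocation, hashing and iteration on its own, which
is what lets the manual HASHTABLE management code go away.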
Servers in MaxScale can encrypt the connections without client keys and
certificates. As keys and certificates are no longer required, the CA
certificate must always be initialized.
The code in avrorouter that returned the current transaction was not very
useful, as the same information can be acquired via the REST API in a more
convenient format.
The number of created sessions is tracked on the service level so there is
no need to track it in the avrorouter.
Removed declarations for functions that do not exist and moved code around
to reduce the scope.
The code that handles the Avro files is now fully abstracted behind the
AvroConverter class, which implements the RowEventHandler interface.
The code still has some Avro-specific behavior in a few places (parsing of
JSON files into TableCreate objects). This can be replaced, if needed, by
querying the master server for the CREATE TABLE statements.
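As a sketch of the kind of interface such an abstraction implies (the
method names and parameter types are assumptions, not the actual
RowEventHandler definition):

    #include <string>
    #include <vector>

    class RowEventHandler
    {
    public:
        virtual ~RowEventHandler() {}

        // A new table definition was seen (CREATE TABLE or ALTER TABLE).
        virtual void create_table(const std::string& table,
                                  const std::vector<std::string>& columns) = 0;

        // One decoded row from a row-based replication event.
        virtual void process_row(const std::string& table,
                                 const std::vector<std::string>& values) = 0;

        // The enclosing transaction committed.
        virtual void commit() = 0;
    };

The AvroConverter is then one implementation of this interface that writes
the decoded rows into Avro files.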
The various binlog events related to file operations are now processed at
the upper level. This makes the actual data event processing simpler and
easier to comprehend.