The id has now been moved from mxs::Worker to mxs::RoutingWorker
and the implications are felt in many places.
The primary need for the id was to be able to access worker specific
data, maintained outside of a routing worker, when given a worker
(the id is used to index into an array). Slightly related to that
was the need to be able to iterate over all workers. That obviously
implies some kind of collection.
That causes all sorts of issues if workers need to be created and
destroyed at runtime. With the id removed from mxs::Worker those
issues are gone, and it is perfectly ok to create and destroy
mxs::Workers as needed.
Further, while there is a need to broadcast a particular message to
all _routing_ workers, it hardly makes sense to broadcast a particular
message to _all_ workers. Consequently, only routing workers are kept
in a collection and all static member functions dealing with all
workers (e.g. broadcast) have now been moved to mxs::RoutingWorker.
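
In rough outline (names and signatures are illustrative, not the
actual MaxScale declarations):

    #include <cstddef>
    #include <vector>

    class Worker
    {
    public:
        virtual ~Worker() {}
        // Plain workers are not registered anywhere; they can be
        // created and destroyed freely.
    };

    struct Task
    {
        virtual ~Task() {}
        virtual void execute() = 0;
    };

    class RoutingWorker : public Worker
    {
    public:
        // Broadcasting is meaningful only for routing workers, so it
        // lives here together with the collection it iterates over.
        static std::size_t broadcast(Task* pTask)
        {
            for (RoutingWorker* pWorker : s_workers)
            {
                pWorker->post(pTask);
            }
            return s_workers.size();
        }

        void post(Task* pTask)
        {
            pTask->execute(); // stand-in for the real message passing
        }

    private:
        static std::vector<RoutingWorker*> s_workers;
    };

    std::vector<RoutingWorker*> RoutingWorker::s_workers;
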
Now, instead of passing the id around, we deal directly
with the worker pointer. Later the data in all those external arrays
will be moved into mxs::[Worker|RoutingWorker] so that worker related
data is maintained in exactly one place.
To remove the requirement that a Worker must have an id, we store
in the MXS_POLL_DATA structure a pointer to the owning worker
instead of the id of the owning worker. This also allows some
further cleanup as the need for switching back and forth between
the id and the worker disappears.
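
A minimal sketch of the change, assuming a simplified version of the
structure (the real MXS_POLL_DATA and handler signature contain more
detail):

    #include <cstdint>

    struct MXS_POLL_DATA;
    typedef struct mxs_worker MXS_WORKER;

    typedef uint32_t (*mxs_poll_handler_t)(MXS_POLL_DATA* data,
                                           MXS_WORKER* worker,
                                           uint32_t events);

    struct MXS_POLL_DATA
    {
        mxs_poll_handler_t handler; // called when the fd has events
        MXS_WORKER*        owner;   // was: the id of the owning worker
    };
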
The id will be moved from Worker to RoutingWorker as there
currently is a fair amount of code that assumes that the ids of
routing workers start from 0.
If a session command produces a different result on the slave than it did
on the master, a warning is logged. This warning now also logs the query
that was being executed to make investigation of the problem easier.
Previously, schemarouter only mapped databases to the servers they
resided on. Now all the tables are also mapped, allowing the router
to route a query to the right server based on the tables used in
that query.
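
A sketch of the kind of lookup this enables; the container and names
are illustrative, not the actual schemarouter code:

    #include <map>
    #include <string>

    struct SERVER; // stand-in for the MaxScale server type

    // Both "db" and "db.table" now map to the server holding the data.
    std::map<std::string, SERVER*> shard_map;

    SERVER* get_location(const std::string& db, const std::string& table)
    {
        // Try the exact table first, then fall back to the database.
        auto it = shard_map.find(db + "." + table);

        if (it == shard_map.end())
        {
            it = shard_map.find(db);
        }

        return it != shard_map.end() ? it->second : nullptr;
    }
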
Backend::execute_session_command would use the overridden write method
instead of the Backend::write method that it intended to use. This caused
session commands that did not expect a response to be left in a
state where a result was expected.
Also fixed RWBackend::write to pass the response_type value to
Backend::write.
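
The essence of both fixes, sketched with simplified classes (the
real signatures differ):

    #include <string>

    enum response_type
    {
        EXPECT_RESPONSE,
        NO_RESPONSE
    };

    class Backend
    {
    public:
        virtual ~Backend() {}

        // Overridable entry point used for normal queries.
        virtual bool write(const std::string& query,
                           response_type type = EXPECT_RESPONSE)
        {
            m_expecting_response = (type == EXPECT_RESPONSE);
            return true;
        }

        bool execute_session_command(const std::string& cmd,
                                     response_type type)
        {
            // The qualified call bypasses virtual dispatch, so the
            // derived write() cannot disturb the response bookkeeping
            // of session commands.
            return Backend::write(cmd, type);
        }

    protected:
        bool m_expecting_response = false;
    };

    class RWBackend : public Backend
    {
    public:
        bool write(const std::string& query,
                   response_type type = EXPECT_RESPONSE) override
        {
            // The fix: forward 'type' instead of dropping it.
            return Backend::write(query, type);
        }
    };
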
If the server sends a server shutdown error, it is safe for readwritesplit
to ignore it. When the TCP connection is closed, the router error handling
will discard the connection, optionally replacing it.
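
A sketch of the check, assuming the MySQL error code is read out of
the error packet (1053 is ER_SERVER_SHUTDOWN):

    #include <cstdint>

    // 1053 is ER_SERVER_SHUTDOWN, "Server shutdown in progress".
    const uint16_t ER_SERVER_SHUTDOWN = 1053;

    // An error packet with this code can be swallowed: the TCP close
    // that follows triggers the normal error handling, which discards
    // the connection and optionally replaces it.
    bool can_ignore_error(uint16_t mysql_errno)
    {
        return mysql_errno == ER_SERVER_SHUTDOWN;
    }
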
The state of the backend needs to be checked before any pending session
commands are executed on it.
Added debug assertions to catch invalid use of the status functions of
closed backends.
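
Sketched with a simplified Backend (the real class has more state):

    #include <cassert>
    #include <cstdint>

    class Backend
    {
    public:
        bool in_use() const
        {
            return m_in_use;
        }

        bool execute_pending_session_commands()
        {
            // Check the state first: a backend that error handling
            // has already closed must not get new commands written
            // to it.
            if (!in_use())
            {
                return false;
            }

            // ... write the queued session commands ...
            return true;
        }

        uint64_t status() const
        {
            assert(in_use()); // catch status queries on closed backends
            return m_status;
        }

    private:
        bool     m_in_use = true;
        uint64_t m_status = 0;
    };
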
The new method is called for each new CREATE TABLE statement that is
processed as well as all ALTER TABLE statements that modify the table
structure.
Right now the entry point is not in use, but it opens up the possibility of
persisting the CREATE TABLE statements at creation time. Currently the
tables are only persisted when the first actual event for the table is
received.
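
A sketch of what such an entry point could look like; the name and
signature of the callback are illustrative, not the actual interface:

    #include <string>

    class RowEventHandler
    {
    public:
        virtual ~RowEventHandler() {}

        // Called for every processed CREATE TABLE statement and for
        // every ALTER TABLE statement that changes the table layout.
        // An implementation could persist 'create_stmt' right here,
        // at creation time.
        virtual void table_changed(const std::string& db,
                                   const std::string& table,
                                   const std::string& create_stmt) = 0;
    };
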
Added string conversion methods to the gtid_pos_t class that can be used
to store and load a GTID value.
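
A sketch of such conversions, assuming the usual MariaDB
domain-server_id-sequence text form (member layout illustrative):

    #include <cstdint>
    #include <cstdio>
    #include <string>

    struct gtid_pos_t
    {
        uint32_t domain = 0;
        uint32_t server_id = 0;
        uint64_t seq = 0;

        // Format as the familiar "0-3000-1234" style triplet.
        std::string to_string() const
        {
            char buf[64];
            snprintf(buf, sizeof(buf), "%u-%u-%llu",
                     (unsigned)domain, (unsigned)server_id,
                     (unsigned long long)seq);
            return buf;
        }

        // Parse the triplet back; the stored value round-trips.
        static gtid_pos_t from_string(const std::string& str)
        {
            unsigned d = 0, s = 0;
            unsigned long long q = 0;
            sscanf(str.c_str(), "%u-%u-%llu", &d, &s, &q);

            gtid_pos_t rval;
            rval.domain = d;
            rval.server_id = s;
            rval.seq = q;
            return rval;
        }
    };
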
Also added the missing rpl.cc file that previously only had the Rpl class
constructor in it.
The actual processing of the replicated events is now delegated to the Rpl
class. This class only deals with the raw binary format log events which
allows it to be used for both binlogs stored on disk as well as binlogs
that have just been replicated.
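
Conceptually (names illustrative):

    #include <cstddef>
    #include <cstdint>

    // Rpl consumes one raw binlog event at a time; it does not care
    // whether the bytes came from a binlog file on disk or from the
    // replication stream.
    class Rpl
    {
    public:
        void handle_event(const uint8_t* ptr, std::size_t len)
        {
            // ... decode the common event header, dispatch on the
            // event type ...
        }
    };
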
The code that checks for stop events is not needed as it is only used for
log messages. These aren't really useful to the end user so they can be
removed.
Moved the modification of the event size in case binlog checksums are
enabled outside of the handle_one_event function. This will make it
possible to move most of the processing done inside it into a reusable
class of its own.
Also fixed a memory leak in the handling of the event data.
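
The checksum adjustment itself is small; sketched here with
illustrative names (the trailing CRC32 of a checksummed event is
4 bytes and is included in the length stored in the event header):

    #include <cstdint>

    // A checksummed event ends in a 4-byte CRC32 that is included in
    // the event size stored in the header; the processing code wants
    // the size without it.
    const uint32_t BINLOG_CHECKSUM_LEN = 4;

    uint32_t payload_size(uint32_t event_size, bool checksums_enabled)
    {
        return checksums_enabled ? event_size - BINLOG_CHECKSUM_LEN
                                 : event_size;
    }
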
The RowEventConverter is now passed as a parameter to the Avro
instance. Wrapped the value in an std::auto_ptr to make the cleanup
automatic (when it is implemented).
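
Roughly (simplified, with illustrative names):

    #include <memory>

    struct RowEventHandler
    {
        virtual ~RowEventHandler() {}
    };

    class Avro
    {
    public:
        // The instance takes ownership of the handler; std::auto_ptr
        // deletes it automatically when the instance is destroyed.
        explicit Avro(RowEventHandler* pHandler)
            : m_sHandler(pHandler)
        {
        }

    private:
        std::auto_ptr<RowEventHandler> m_sHandler;
    };
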
Fixed a typo in the event handler member variable and removed the unused
stats member.
The file indexing provided very little benefit for the intended purpose of
the router. Removing it makes the whole system more robust and simplifies
the code by a large amount.
Allowing calls to select_connect_backend_servers even when all slaves
are connected resolves the debug assertion in
select_connect_backend_servers that was triggered when the execution
of a queued query caused a new connection to be created.
The Backend class response state tracking was not updated when a one-way
command was executed. This caused the logic in handleError to break if a
master was executing a command that wouldn't create a response.
Readwritesplit would hang when query execution was postponed because
the target server was executing a session command: the number of
expected responses was incremented even when no response was
expected.
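
A sketch of the expected-response bookkeeping behind both of these
fixes (simplified; the real router tracks more state):

    class ResponseTracker
    {
    public:
        // Only commands that will actually produce a result may bump
        // the counter; otherwise the router ends up waiting for a
        // response that never arrives.
        void command_sent(bool expects_response)
        {
            if (expects_response)
            {
                ++m_expected_responses;
            }
        }

        void response_received()
        {
            --m_expected_responses;
        }

        bool waiting_for_response() const
        {
            return m_expected_responses > 0;
        }

    private:
        int m_expected_responses = 0;
    };
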
The number of arguments to createListener was incremented but the maximum
count was not. Also fixed the parameter types for createListener and
alterServer.
The server runtime alteration was broken by commit
c850336199c3c19508a3d280fb7000291d66b80c when it increased the maximum
argument count of the `alter server` command to 14.
Servers in MaxScale can encrypt connections without client keys and
certificates. As keys and certificates are no longer required, the CA
certificate must always be initialized.
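
Sketched in terms of plain OpenSSL calls (the MaxScale code differs;
error handling omitted):

    #include <openssl/ssl.h>

    SSL_CTX* create_client_ctx(const char* ca, const char* cert,
                               const char* key)
    {
        SSL_CTX* ctx = SSL_CTX_new(SSLv23_client_method());

        if (ctx)
        {
            // Always required: the CA used to verify the server.
            SSL_CTX_load_verify_locations(ctx, ca, NULL);

            if (cert && key)
            {
                // Only when the client authenticates with a cert.
                SSL_CTX_use_certificate_file(ctx, cert, SSL_FILETYPE_PEM);
                SSL_CTX_use_PrivateKey_file(ctx, key, SSL_FILETYPE_PEM);
            }
        }

        return ctx;
    }
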
The code in avrorouter that returned the current transaction was not
very useful; the same information can be acquired via the REST API
in a more convenient format.
The number of created sessions is tracked on the service level so there is
no need to track it in the avrorouter.
Removed declarations for functions that do not exist and moved code around
to reduce the scope.
The code that handles the Avro files is now fully abstracted behind the
AvroConverter class that implements the RowEventHandler interface.
The code still has some avro specific behavior in a few places (parsing of
JSON files into TableCreate objects). This can be replaced, if needed, by
querying the master server for the CREATE TABLE statements.
The various file operation related binlog events are now processed on the
upper level. This makes the actual data event processing simpler and
easier to comprehend.