The command is saved in a function object which is read by the monitor
thread. This way, manual and automatic cluster modification commands are
run in the same step of a monitor cycle.
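Roughly, the pattern looks like the sketch below (the names are
hypothetical, not the actual MariaDB-Monitor code): the admin-facing side
stores the command, and the monitor thread picks it up at the same point
in the cycle where automatic operations are considered.

    #include <functional>
    #include <mutex>
    #include <utility>

    class Monitor
    {
    public:
        // Called from the admin thread: store the manual command for later.
        void schedule_manual_command(std::function<void()> cmd)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_manual_command = std::move(cmd);
        }

    private:
        // Called once per monitor cycle, in the same step where automatic
        // cluster operations (e.g. auto-failover) are considered.
        void run_cluster_operations()
        {
            std::function<void()> cmd;
            {
                std::lock_guard<std::mutex> guard(m_lock);
                cmd.swap(m_manual_command);
            }

            if (cmd)
            {
                cmd();  // Manual command, e.g. a user-triggered switchover.
            }

            // ... automatic operations would be evaluated here ...
        }

        std::mutex            m_lock;
        std::function<void()> m_manual_command;
    };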
This update required several modifications in related code.
The monitor now detects when a server has changed in a way that requires
a replication graph rebuild and only then rebuilds the graph and detects
cycles and the master.
Also, some old code is no longer called in the monitor cycle. It will be
removed in later commits. Refactored some of the related functions.
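The change detection itself can be as simple as comparing the replication
configuration seen on the current tick with the previous one; a minimal
sketch with hypothetical types, not the actual monitor code:

    #include <vector>

    // Hypothetical per-server state; the real monitor tracks more fields.
    struct ServerInfo
    {
        int master_id = -1;       // id of the server this one replicates from
        int seen_master_id = -1;  // value observed on the previous tick
    };

    // Returns true if any server's replication source changed since the
    // last tick, i.e. the replication graph needs to be rebuilt.
    bool topology_changed(std::vector<ServerInfo>& servers)
    {
        bool changed = false;

        for (auto& srv : servers)
        {
            if (srv.master_id != srv.seen_master_id)
            {
                srv.seen_master_id = srv.master_id;
                changed = true;
            }
        }

        return changed;
    }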
Don't test failover functionality when it is not needed. The bug is only
about the extra events that appear when a master is demoted and a slave is
promoted.
Prepared statements via readwritesplit need to have their IDs mapped from
the internal representation to the backend-specific one. The RWBackend
class does this in its write method but the fix in commit
e561c3995c7396cf3749ccdf6a3357d7dd32c856 caused this to be bypassed and
the base version was always used.
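A simplified sketch of the dispatch problem (the real classes carry much
more state; to_backend_id and the buffer layout are placeholders): the PS
id mapping lives in the override, so any code path that ends up calling
the base write directly skips it.

    #include <cstdio>

    struct Buffer { unsigned int ps_id; };

    class Backend
    {
    public:
        virtual ~Backend() = default;

        virtual bool write(Buffer& buf)
        {
            // Base version: sends the buffer as-is.
            std::printf("writing PS id %u\n", buf.ps_id);
            return true;
        }
    };

    class RWBackend : public Backend
    {
    public:
        bool write(Buffer& buf) override
        {
            // Map the router's internal PS id to the backend-specific one
            // before handing the buffer to the base class.
            buf.ps_id = to_backend_id(buf.ps_id);
            return Backend::write(buf);
        }

    private:
        unsigned int to_backend_id(unsigned int internal_id)
        {
            return internal_id + 1000;  // placeholder mapping
        }
    };

    int main()
    {
        RWBackend backend;
        Buffer buf{42};

        backend.write(buf);           // virtual dispatch: id gets mapped
        backend.Backend::write(buf);  // explicit base call: mapping skipped
    }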
The id has now been moved from mxs::Worker to mxs::RoutingWorker
and the implications are felt in many places.
The primary need for the id was to be able to access worker-specific
data, maintained outside of a routing worker, when given a worker
(the id is used to index into an array). Slightly related to that
was the need to be able to iterate over all workers. That obviously
implies some kind of collection.
That causes all sorts of issues if workers need to be created and
destroyed at runtime. With the id removed from mxs::Worker all those
issues are gone, and it is perfectly ok to create and destroy
mxs::Workers as needed.
Further, while there is a need to broadcast a particular message to
all _routing_ workers, it hardly makes sense to broadcast a particular
message to _all_ workers. Consequently, only routing workers are kept
in a collection and all static member functions dealing with all
workers (e.g. broadcast) have now been moved to mxs::RoutingWorker.
Now, instead of passing the id around, we deal directly
with the worker pointer. Later the data in all those external arrays
will be moved into mxs::[Worker|RoutingWorker] so that worker related
data is maintained in exactly one place.
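As a rough illustration of why the id was needed and why a pointer works
just as well (hypothetical names, not the actual MaxScale code):

    #include <unordered_map>

    class Worker {};

    struct SessionStats
    {
        long sessions = 0;
    };

    // Before: worker-specific data kept in an external array, indexed by
    // the worker id, so every worker had to carry an id:
    //
    //     SessionStats stats[MAX_WORKERS];
    //     stats[worker->id()].sessions++;
    //
    // After: the data can be keyed by the worker pointer itself, so a
    // generic Worker no longer needs an id at all.
    std::unordered_map<const Worker*, SessionStats> stats;

    void bump(const Worker* worker)
    {
        stats[worker].sessions++;
    }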
To get rid of the need for a Worker to have an id, we store
in the MXS_POLL_DATA structure a pointer to the owning worker
instead of the id of the owning worker. This also allows some
further cleanup as the need for switching back and forth between
the id and the worker disappears.
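A minimal sketch of the difference, with hypothetical field and type
names rather than the real MXS_POLL_DATA layout:

    struct Worker
    {
        void post() {}  // placeholder member
    };

    // Before: only the owner's id was stored, so using the owner required
    // a lookup, and getting the id back required the reverse mapping:
    //
    //     struct MXS_POLL_DATA { int owner_id; /* ... */ };
    //     Worker* w = worker_from_id(data->owner_id);
    //     w->post();
    //
    // After: the owning worker is stored directly and no mapping is needed.
    struct POLL_DATA
    {
        Worker* owner;
    };

    void notify(POLL_DATA* data)
    {
        data->owner->post();
    }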
The id will be moved from Worker to RoutingWorker as there
currently is a fair amount of code that assumes that the ids of
routing workers start from 0.
It's no longer necessary to inherit from Worker in order to use
it; it can now be used in a stand-alone fashion. This fits
the MonitorInstance use case better.
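Assuming composition is used instead of inheritance, the usage could look
roughly like this (hypothetical, heavily simplified interfaces):

    class Worker
    {
    public:
        void start() {}
        void stop() {}
    };

    // Before: using a worker required inheriting from it:
    //
    //     class MonitorInstance : public Worker { /* ... */ };
    //
    // After: a worker can be owned and used as a plain member.
    class MonitorInstance
    {
    public:
        void run()
        {
            m_worker.start();
        }

    private:
        Worker m_worker;
    };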
In case an array of cache rules is provided, we will only store
references to the objects in the array. Consequently, the reference
counts of the borrowed objects must be increased, and the reference
count of the array itself decreased.
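Assuming the rules are jansson json_t objects, the borrowing pattern
looks roughly like this (store_rule_objects is a hypothetical helper,
not the actual cache code); json_array_get() returns a borrowed
reference, so each stored element is incref'd and the array itself
released:

    #include <jansson.h>
    #include <vector>

    std::vector<json_t*> store_rule_objects(json_t* array)
    {
        std::vector<json_t*> rules;

        for (size_t i = 0; i < json_array_size(array); ++i)
        {
            json_t* rule = json_array_get(array, i);  // borrowed reference
            json_incref(rule);                        // now owned by us
            rules.push_back(rule);
        }

        json_decref(array);  // the array itself is no longer needed
        return rules;
    }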
If a session command produces a different result on the slave than it did
on the master, a warning is logged. This warning now also logs the query
that was being executed to make investigation of the problem easier.
Previously schemarouter only mapped databases to the servers
they resided on. Now all the tables are also mapped, allowing the
router to route queries to the right server based on the tables used in
that query.
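A minimal sketch of the idea, with hypothetical names rather than the
actual schemarouter data structures:

    #include <map>
    #include <string>

    struct Server;

    // In addition to mapping databases to servers, map fully qualified
    // table names as well, so a query such as "SELECT ... FROM db1.t1"
    // can be routed by the table it touches.
    std::map<std::string, Server*> db_to_server;     // "db1"    -> server
    std::map<std::string, Server*> table_to_server;  // "db1.t1" -> server

    Server* route_by_table(const std::string& qualified_table)
    {
        auto it = table_to_server.find(qualified_table);
        return it != table_to_server.end() ? it->second : nullptr;
    }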
The stacktrace generation is now a part of the maxbase library. It is
the same code that was previously defined in gateway.cc as a part of
MaxScale.
The only way to cleanly separate the maxutils library from the MaxScale
CMake project is to make it a standalone CMake project. With the help of
ExternalProject, it should be relatively easy to use.
Backend::execute_session_command would use the overridden write method
instead of the Backend::write method that it intended to use. This caused
session commands that did not expect a response to end up in a state
where a result was expected.
Also fixed RWBackend::write to pass the response_type value to
Backend::write.
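A simplified sketch of the fix for the dispatch part (placeholder types,
not the actual Backend interface): qualifying the call forces the base
version regardless of any override.

    #include <cstdio>

    class Backend
    {
    public:
        virtual ~Backend() = default;

        virtual bool write(int cmd)
        {
            std::printf("Backend::write(%d)\n", cmd);
            return true;
        }

        bool execute_session_command(int cmd)
        {
            // An unqualified call would dispatch to the most derived
            // override, which is not what is wanted here; qualifying it
            // forces the base version.
            return Backend::write(cmd);
        }
    };

    class RWBackend : public Backend
    {
    public:
        bool write(int cmd) override
        {
            std::printf("RWBackend::write(%d)\n", cmd);
            return Backend::write(cmd);
        }
    };

    int main()
    {
        RWBackend b;
        b.execute_session_command(1);  // uses Backend::write as intended
    }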
Look for the expected message several times, with short sleeps in
between. That way we will not sleep more than necessary, yet will
not immediately give up either.
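A hypothetical helper illustrating the approach (not the actual test
framework code):

    #include <chrono>
    #include <functional>
    #include <thread>

    // Poll for a condition with short sleeps instead of one long sleep,
    // so the wait ends as soon as the expected log message appears.
    bool wait_for(const std::function<bool()>& found,
                  std::chrono::seconds timeout = std::chrono::seconds(10),
                  std::chrono::milliseconds interval = std::chrono::milliseconds(500))
    {
        auto deadline = std::chrono::steady_clock::now() + timeout;

        while (std::chrono::steady_clock::now() < deadline)
        {
            if (found())
            {
                return true;
            }

            std::this_thread::sleep_for(interval);
        }

        return found();  // one final check before giving up
    }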
Going the belt-and-suspenders way of both sleeping and waiting for the
monitor should make sure MaxScale has at least some time to start up, query
the servers and do a single iteration of monitoring.