StartMonitor() now takes a MXS_MONITOR_INSTANCE and returns
true if the monitor could be started, and false otherwise.
The setup is thus: in createInstance() the instance data is
created, startMonitor() and stopMonitor() start and stop the
monitor, and finally destroyInstance() deletes the actual
instance data.
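A minimal sketch of the intended call sequence, assuming the entry
points above and the 'api'/'instance' members mentioned below; the
surrounding code is illustrative:

    // createInstance() creates the instance data...
    MXS_MONITOR_INSTANCE* instance = monitor->api->createInstance(monitor);

    // ...startMonitor() and stopMonitor() start and stop the monitor...
    if (monitor->api->startMonitor(instance)) // true if it could be started
    {
        ...
        monitor->api->stopMonitor(instance);
    }

    // ...and destroyInstance() finally deletes the instance data.
    monitor->api->destroyInstance(instance);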
The following type names have changed:

    MXS_MONITOR_OBJECT   -> MXS_MONITOR_API
    MXS_SPECIFIC_MONITOR -> MXS_MONITOR_INSTANCE
Further, in MXS_MONITOR the 'module' member (of what is now the
MXS_MONITOR_API type) has been renamed to 'api' and the 'handle'
member to 'instance'.
As an example, what used to look like

    mon->module->stopMonitor(mon->handle);

now looks like

    mon->api->stopMonitor(mon->instance);

which makes it more obvious what is going on.
MonitorDestroy() (renamed to monitor_destroy()) will be used for
actually destroying the monitor instance, that is, for executing
destroyInstance() on the loaded module instance and freeing the
monitor instance.
TODO: monitor_deactivate() could do all the stuff which is currently
done to the monitor in config_runtime(), instead of just
turning off the flag.
Now, all monitor functions but startMonitor take a
MXS_SPECIFIC_MONITOR instead of a MXS_MONITOR. That is, startMonitor
is now like a static factory member returning a new specific
monitor instance, and the other functions are like member functions
of that instance.
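A sketch of the shape of this arrangement, with illustrative
signatures (only the names come from the text above):

    // startMonitor acts as a static factory: it takes the generic
    // monitor and returns a new module-specific instance.
    MXS_SPECIFIC_MONITOR* (*startMonitor)(MXS_MONITOR* monitor);

    // The other functions behave like member functions of that instance.
    void (*stopMonitor)(MXS_SPECIFIC_MONITOR* instance);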
Returning the length of the value instead of a boolean allows the caller
to detect when the parameter value exceeds the size of the buffer that
was passed in.
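The convention is the same one snprintf(3) uses; a sketch with a
hypothetical copy_parameter_value() function:

    char buf[64];

    // Assumption: the return value is the full length of the value, so
    // a result >= sizeof(buf) means the copy was truncated.
    size_t len = copy_parameter_value(param, buf, sizeof(buf));

    if (len >= sizeof(buf))
    {
        // The value did not fit; retry with a buffer of len + 1 bytes.
    }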
The individual servers were missing a statistic that would give an
estimated query count. As there is no simple way to count queries for all
modules, counting the number of routed protocol packets is a suitable
substitute.
The same problem that caused maxadmin to lock up was also what caused
maxinfo to lock up. The concurrent access to the legacy administrative
functions caused deadlocks.
Parameter deprecation on the module level means that the parameter should
no longer be used, but using it will not cause an error. If a deprecated
parameter is used, it is removed from the configuration.
The two-part shutdown procedure for the housekeeper was not needed and
caused problems if SIGINT wasn't raised. Since the main thread returns to
the main function, a single shutdown function is all that the housekeeper
needs.
Moved all the shutdown-related code into Housekeeper::stop so that the
destructor no longer waits for the thread.
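A minimal sketch of the arrangement, assuming a std::thread-based
housekeeper; everything except the Housekeeper::stop name is
illustrative:

    #include <atomic>
    #include <thread>

    class Housekeeper
    {
    public:
        void stop()
        {
            m_running.store(false); // tell the housekeeper thread to exit

            if (m_thread.joinable())
            {
                m_thread.join();    // wait here instead of in the destructor
            }
        }

    private:
        std::atomic<bool> m_running{true};
        std::thread       m_thread;
    };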
The resultset processing for MySQL requires some extra work as it lacks
the proper SERVER_MORE_RESULTS_EXIST flag in the last EOF packet. Instead,
the first EOF packet has the SERVER_PS_OUT_PARAMS flag which needs to be
interpreted as a SERVER_MORE_RESULTS_EXIST flag for the second EOF packet.
Also corrected the EOF packet handling to do the flag checks in the code
that deals with the EOF packets.
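A sketch of the flag check, assuming the standard MySQL EOF packet
layout (4-byte header, 0xfe marker, 2-byte warning count, 2-byte status
bitmask); the constant values are those of the MySQL protocol:

    #include <cstdint>

    constexpr uint16_t SERVER_MORE_RESULTS_EXIST = 0x0008;
    constexpr uint16_t SERVER_PS_OUT_PARAMS      = 0x1000;

    // True if this EOF packet promises another resultset. The status
    // flags follow the 4-byte header, the 0xfe marker and the 2-byte
    // warning count.
    bool promises_more_results(const uint8_t* eof_packet)
    {
        uint16_t status = eof_packet[7] | (eof_packet[8] << 8);

        // In the first EOF of an OUT parameter resultset,
        // SERVER_PS_OUT_PARAMS stands in for SERVER_MORE_RESULTS_EXIST.
        return status & (SERVER_MORE_RESULTS_EXIST | SERVER_PS_OUT_PARAMS);
    }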
As the modutil_state parameter is now used for more than large packet
tracking, the correct solution is to store this state object in the
readwritesplit session instead of reducing it to a boolean value.
The Checksum class defines an interface which the SHA1Checksum and
CRC32Checksum implement.
Added unit test cases to verify that the checksums work and perform
as expected.
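A minimal sketch of the interface's shape; the method names are
assumptions, not the actual definitions:

    #include <string>

    struct GWBUF; // from maxscale/buffer.h

    // The common interface: feed in data, finalize, read the value out.
    class Checksum
    {
    public:
        virtual ~Checksum() {}

        virtual void update(GWBUF* pBuffer) = 0;  // add data to the checksum
        virtual void finalize() = 0;              // compute the final value
        virtual std::string hex() const = 0;      // the value as a hex string
    };

    class SHA1Checksum : public Checksum { /* ... */ };
    class CRC32Checksum : public Checksum { /* ... */ };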
Fixed string truncation warnings by reducing max parameter lengths by one
where applicable. The binlogrouter filename lengths are slightly different
so using memcpy to work around the warnings is an adequate "solution"
until the root of the problem is solved.
Removed unnecessary CMake policy settings from qc_sqlite. Adding a
self-dependency on the source file of an external project has no effect
and only caused warnings to be logged.
When providing a pointer to an instance and a pointer to a member
function of the instance's class, the pointer to the member function
should come first and the pointer to the instance second.
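Applied to the delayed_call example further below, the call would now
be written along these lines (a sketch of the new argument order only;
the other parameters are assumed unchanged):

    pWorker->delayed_call(5000, pTag,
                          &MyFilterSession::delayed_routeQuery, this,
                          pPacket);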
There's now double bookkeeping:
- All delayed calls are in a map whose key is the next
invocation time. Since it's a map (and not an unordered_map)
it's sorted just the way we want to have it.
- In addition, there's an unordered set for each tag.
With this arrangement we can easily invoke the delayed calls
in the right order and efficiently remove all delayed calls
related to a particular tag.
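A sketch of the two containers, assuming DelayedCall is the per-call
bookkeeping object:

    #include <cstdint>
    #include <map>
    #include <unordered_map>
    #include <unordered_set>

    class DelayedCall;

    // Sorted by the next invocation time; a multimap, since two calls
    // may well be due at the same moment.
    std::multimap<int64_t, DelayedCall*> calls_by_time;

    // One set per tag, so that cancel_delayed_calls(pTag) can drop all
    // of a session's calls without scanning the whole map.
    std::unordered_map<void*, std::unordered_set<DelayedCall*>> calls_by_tag;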
When canceling, a DelayedCall instance must be removed from the
collection holding all delayed calls. Consequently, priority_queue
cannot be used, as it 1) does not provide access to the underlying
collection and 2) the underlying collection (vector or deque)
is a bad choice if items in the middle need to be removed.
The interface for canceling calls is now geared towards the needs
of sessions. Basically the idea is as follows:
class MyFilterSession : public maxscale::FilterSession
{
    ...

    int routeQuery(GWBUF* pPacket)
    {
        ...
        if (needs_to_be_delayed())
        {
            Worker* pWorker = Worker::current();
            void* pTag = this;

            pWorker->delayed_call(5000, pTag, this,
                                  &MyFilterSession::delayed_routeQuery,
                                  pPacket);
            return 1;
        }
        ...
    }

    bool delayed_routeQuery(Worker::Call::action_t action, GWBUF* pPacket)
    {
        if (action == Worker::Call::EXECUTE)
        {
            routeQuery(pPacket);
        }
        else
        {
            ss_dassert(action == Worker::Call::CANCEL);
            gwbuf_free(pPacket);
        }

        return false;
    }

    ~MyFilterSession()
    {
        void* pTag = this;
        Worker::current()->cancel_delayed_calls(pTag);
    }
};
The alternative, returning some key that the caller must keep
around, seems more cumbersome for the general case.
It's now possible to provide Worker with a function to call
at a later time. It's possible to provide a function or a
member function (with the object), taking zero or one argument
of any kind. The argument must be copyable.
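A minimal sketch of the free-function form; the delayed_call signature
and the void return of the callback are assumptions, and log_stats is
hypothetical:

    #include <string>

    // A free function taking one copyable argument.
    void log_stats(std::string what)
    {
        // ... write the statistics out ...
    }

    ...
    // Call log_stats("hourly") roughly one second from now.
    Worker::current()->delayed_call(1000, log_stats, std::string("hourly"));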
There's currently no way to cancel a call, which must be added
as typically the delayed calling is associated with a session
and if the session is closed before the delayed call is made,
bad things are likely to happen.
The Worker::Timer class and the Worker::DelegatingTimer template are
timers built on top of timerfd_create(2). As such, each consumes a
descriptor, and hence they cannot be created independently for each
timer need.
Each Worker now has a private timer member variable on top of
which a general timer mechanism will be provided.
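The underlying mechanism in a nutshell; this is plain
timerfd_create(2) usage, not MaxScale code:

    #include <sys/timerfd.h>
    #include <unistd.h>
    #include <cstdint>

    int main()
    {
        // One descriptor per timer; pollable with epoll like any other fd.
        int fd = timerfd_create(CLOCK_MONOTONIC, 0);

        itimerspec spec = {};
        spec.it_value.tv_sec    = 5; // first expiration after 5 seconds
        spec.it_interval.tv_sec = 5; // and then every 5 seconds
        timerfd_settime(fd, 0, &spec, nullptr);

        // A blocking read returns the number of expirations since the
        // last read.
        uint64_t expirations;
        read(fd, &expirations, sizeof(expirations));

        close(fd);
        return 0;
    }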
The maximum number of workers and routing workers are now
hardwired to 128 and 100, respectively. It is still so that
all workers must be created at startup and destroyed at
shutdown; creating/destroying workers at runtime is not
possible.
The documentation stated that all CPUs would be used when threads=auto was
used. In reality the behavior was the same as in 2.0 (the number of CPUs
minus one).