The interface for canceling calls is now geared towards the needs
of sessions. Basically the idea is as follows:
    class MyFilterSession : public maxscale::FilterSession
    {
        ...
        int routeQuery(GWBUF* pPacket)
        {
            ...
            if (needs_to_be_delayed())
            {
                Worker* pWorker = Worker::current();
                void* pTag = this;
                pWorker->delayed_call(5000, pTag, this,
                                      &MyFilterSession::delayed_routeQuery,
                                      pPacket);
                return 1;
            }
            ...
        }

        bool delayed_routeQuery(Worker::Call::action_t action, GWBUF* pPacket)
        {
            if (action == Worker::Call::EXECUTE)
            {
                routeQuery(pPacket);
            }
            else
            {
                ss_dassert(action == Worker::Call::CANCEL);
                gwbuf_free(pPacket);
            }

            return false;
        }

        ~MyFilterSession()
        {
            void* pTag = this;
            Worker::current()->cancel_delayed_calls(pTag);
        }
    };
The alternative, returning some key that the caller must keep
around, seems more cumbersome for the general case.
It is now possible to provide a Worker with a function to be called
at a later time. Either a free function or a member function
(together with its object) can be provided, taking zero or one
argument of any kind. The argument must be copyable.
There is currently no way to cancel a call; that must be added,
because a delayed call is typically associated with a session,
and if the session is closed before the delayed call is made,
bad things are likely to happen.
The Worker::Timer class and the Worker::DelegatingTimer template are
timers built on top of timerfd_create(2). As such, each one consumes a
file descriptor and hence they cannot be created independently for
every timer need.
Each Worker now has a private timer member variable on top of
which a general timer mechanism will be provided.
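For context, the timerfd_create(2) pattern these classes are built on
looks roughly like this; a generic Linux sketch, not MaxScale code:

    #include <sys/timerfd.h>

    // One timer consumes one file descriptor: create it, arm it for a
    // periodic tick, and add it to the worker's epoll set; a read() on the
    // descriptor then yields the number of ticks that have elapsed.
    int make_periodic_timer(int interval_ms)
    {
        int fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK | TFD_CLOEXEC);

        itimerspec spec = {};
        spec.it_value.tv_sec = interval_ms / 1000;
        spec.it_value.tv_nsec = (interval_ms % 1000) * 1000000;
        spec.it_interval = spec.it_value;   // fire repeatedly

        timerfd_settime(fd, 0, &spec, nullptr);
        return fd;
    }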
The maximum numbers of workers and routing workers are now
hardwired to 128 and 100, respectively. It is still the case that
all workers must be created at startup and destroyed at
shutdown; creating or destroying workers at runtime is not
possible.
The documentation stated that all CPUs would be used when threads=auto was
specified. In reality the behavior was the same as in 2.0 (the number of
CPUs minus one).
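For reference, the setting in question is the threads parameter in the
[maxscale] section of the configuration file:

    [maxscale]
    threads=auto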
The SESSION_TRACK_SCHEMA tracking capability handling assumed that the data
contained an encoding integer. This value does not exist for the data
returned by schema change or session state tracking.
The SQL queries are given in two text files, defined by the options
promotion_sql_file and demotion_sql_file. The files must exist when the
monitor starts. The files are read line by line, ignoring empty lines and
lines starting with '#'. All other lines are sent to the server being
promoted, demoted or rejoined. Any error in opening a file, reading it or
executing its contents will cause the entire operation to fail.
The file defined in demotion_sql_file is also run when rejoining a server.
This ensures that a previously failed master is "demoted" properly when it
joins the cluster.
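Purely to illustrate the file format, a demotion_sql_file could look like
the following; the statements themselves are examples, not a recommendation:

    # Demote the old master: make it read-only and flush its logs.
    SET GLOBAL read_only=ON;
    FLUSH LOGS;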
With the changes to the DCB handling, the service pointer of a client DCB
must always be assigned.
Also removed the unnecessary parentheses around the comparison.
Large session commands weren't handled properly, which caused the router to
think that the trailing end of a multi-packet query was actually a new
query.
This cannot be confidently solved in 2.2, which is why the router session
is now closed the moment a large session command is noticed.
Only commands that can contain an SQL statement should be stored for
retrying (COM_QUERY and COM_EXECUTE). Other commands are either session
commands or do not work with query retrying.
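A rough sketch of the check this implies, assuming COM_EXECUTE above refers
to COM_STMT_EXECUTE; the function name is illustrative, not the router's
actual code:

    #include <cstdint>

    // Only statements that can carry SQL are candidates for query retrying.
    bool can_be_stored_for_retry(uint8_t command)
    {
        return command == 0x03      // COM_QUERY
               || command == 0x17;  // COM_STMT_EXECUTE
    }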
The state of each individual listener is now displayed in the REST
API. Created common functions for printing the listener state and took
them into use. Added the new state into MaxCtrl output.
Worker is now the base class of all workers. It has a message
queue and can be run in a thread of its own or in the calling
thread. Worker cannot be used as such; a concrete worker
class must be derived from it. Currently there is only one
concrete class, RoutingWorker.
There is some overlap in functionality between Worker and
RoutingWorker, as there is, for instance, a need for broadcasting a
message to all routing workers but not to other workers.
Currently other workers cannot be created, as the array holding
the pointers to the workers is exactly as large as there will be
RoutingWorkers. That will be changed so that the maximum number of
threads is hardwired to some ridiculous value such as 128. That is
the first step on the path towards a situation where the number of
worker threads can be changed at runtime.
A new class mxs::Worker will be introduced and mxs::RoutingWorker
will be derived from it. mxs::Worker will basically only be a
thread with a message loop.
Once available, all current non-worker threads (except the one
implicitly created by microhttpd) can be created by inheriting
from it; in practice that means the housekeeping thread, all
monitor threads and possibly the logging thread.
The benefit of this arrangement is that there will then be a general
mechanism for cross-thread communication without having to use any
shared data structures.
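The "thread with a message loop" pattern described above can be sketched in
plain C++ roughly as follows; this is a generic illustration of the idea,
not the actual mxs::Worker implementation:

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // Work is handed to the thread by posting tasks to its queue instead of
    // touching shared data structures directly.
    class MessageLoopThread
    {
    public:
        MessageLoopThread()
            : m_thread(&MessageLoopThread::run, this)
        {
        }

        ~MessageLoopThread()
        {
            post([this] { m_running = false; });
            m_thread.join();
        }

        void post(std::function<void()> task)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_tasks.push(std::move(task));
            m_signal.notify_one();
        }

    private:
        void run()
        {
            while (m_running)
            {
                std::unique_lock<std::mutex> guard(m_lock);
                m_signal.wait(guard, [this] { return !m_tasks.empty(); });
                auto task = std::move(m_tasks.front());
                m_tasks.pop();
                guard.unlock();
                task();
            }
        }

        bool m_running = true;
        std::mutex m_lock;
        std::condition_variable m_signal;
        std::queue<std::function<void()>> m_tasks;
        std::thread m_thread;   // started last, after the other members
    };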
If maxadmin connections are handled by different workers, there
may be a deadlock if some maxadmin command requires
communication with all workers.
Namely, in that case a message is sent to every worker except
the current one, but that message will not be handled if the
receiving worker is at that point sitting in the debugcmd_lock
spinlock in debugcmd.c:execute_cmd().
We can prevent that deadlock from happening simply by ensuring
that all maxadmin connections are handled by one thread.
The commands need to be handled separately from the rest of the result
types.
Added a test case that reproduces the problem and verifies that the change
in code fixes it.
When a LOAD DATA LOCAL INFILE finishes, the client sends an empty
packet. The second case is when the client sends an empty packet because
the previous packet was exactly 0xffffff bytes long. These two packets were
confused, which caused the internal state to temporarily flip from inactive
to ending and back to inactive.
The aforementioned flip-flopping made no practical difference, but it was
caught by a debug assertion.
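A sketch of the distinction; the names are illustrative, not the protocol
module's actual code:

    #include <cstdint>

    // An empty packet only ends a LOAD DATA LOCAL INFILE when it does not
    // merely terminate a chain of 0xffffff-byte packets.
    bool ends_load_data(uint32_t previous_payload_len, uint32_t payload_len)
    {
        return payload_len == 0 && previous_payload_len != 0xffffff;
    }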
The functions do not set errno on all invalid input, so it's best to check
endptr.
Also, strtoll is now used for server id scanning through QueryResult.
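As an illustration of the endptr check (a generic sketch, not the exact
code):

    #include <cstdlib>

    // strtoll does not set errno for every kind of invalid input, so the
    // string is validated through endptr instead.
    bool parse_server_id(const char* zValue, long long* pOut)
    {
        char* zEnd = nullptr;
        long long result = strtoll(zValue, &zEnd, 10);

        if (zEnd == zValue || *zEnd != '\0')
        {
            return false;   // nothing was parsed, or there is trailing garbage
        }

        *pOut = result;
        return true;
    }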
The worker task should never be executed immediately; it should be executed
on the next "tick" of the worker. This prevents recursive calls to
e.g. routeQuery in readwritesplit when errors are handled.
Only servers that qualify to be connected to should be considered as
candidate servers. Previously, a debug assertion was triggered when a slave
server failed to execute a session command but was later chosen as a
candidate server.