Add the Worker header.
The epoll instance is not used yet, but the common creation of epoll
instances in poll.cc will be removed and the epoll instance of each
worker used instead.
This is the first step in turning the worker mechanism and everything
around it into a set of C++ classes. In this change the original C
API is still present, but it will be removed in subsequent changes.
This is not globally safe yet, but all other access is directly or
indirectly related to maxadmin, which is irrelevant as far as
performance testing is concerned.
Now possible to send a function and arguments to a specific worker
thread for execution.
In particular, this will be used for transferring the injection of
fake hangup events into DCBs, related to a particular server, from
the monitor thread to the worker threads, thus removing the need
for locks.
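As a sketch of the idea (illustrative code only, not the actual
MaxScale API), a task consisting of a function pointer and an argument
can be written to the worker's pipe as one fixed-size message and then
executed on the worker's own thread:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef struct task
    {
        void (*fn)(void *arg); /* function to execute on the worker thread */
        void *arg;             /* argument passed to the function */
    } task_t;

    typedef struct worker
    {
        int fds[2]; /* fds[0]: read end, fds[1]: write end of the pipe */
    } worker_t;

    /* Post a task to the worker; callable from any thread, because a
     * write of at most PIPE_BUF bytes to a pipe is atomic. */
    static int worker_post(worker_t *w, void (*fn)(void *), void *arg)
    {
        task_t task = { fn, arg };
        return write(w->fds[1], &task, sizeof(task)) == (ssize_t)sizeof(task) ? 0 : -1;
    }

    /* The worker thread reads tasks from the pipe and executes them. */
    static void *worker_main(void *data)
    {
        worker_t *w = data;
        task_t task;
        while (read(w->fds[0], &task, sizeof(task)) == (ssize_t)sizeof(task))
        {
            task.fn(task.arg); /* runs on the worker's thread */
        }
        return NULL;
    }

    static void say_hello(void *arg)
    {
        printf("hello from the worker: %s\n", (const char *)arg);
    }

    int main(void)
    {
        worker_t w;
        pthread_t thr;
        pipe(w.fds);
        pthread_create(&thr, NULL, worker_main, &w);
        worker_post(&w, say_hello, "posted from main");
        close(w.fds[1]); /* EOF makes the worker exit */
        pthread_join(thr, NULL);
        return 0;
    }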
The shutdown is now performed so that a shutdown message is
sent to all workers. When the workers receive that message, they
turn on a shutdown flag, which is subsequently checked in the poll
loop.
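A minimal sketch of the broadcast, assuming a message-posting
primitive along the lines of mxs_worker_post_message (the signature
here is an assumption, not the real one):

    #include <stddef.h>

    typedef struct worker WORKER; /* opaque worker handle */

    /* Assumed primitive; delivers the message on the worker's thread. */
    int worker_post_message(WORKER *worker, int msg_id, long arg1, long arg2);

    enum { WORKER_MSG_SHUTDOWN = 1 };

    /* Shut down by messaging every worker; each worker turns on its
     * shutdown flag on its own thread when the message arrives. */
    void shutdown_all_workers(WORKER **workers, size_t n)
    {
        for (size_t i = 0; i < n; i++)
        {
            worker_post_message(workers[i], WORKER_MSG_SHUTDOWN, 0, 0);
        }
    }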
MXS_WORKER is an abstraction of a worker, i.e. a worker thread.
It has a pipe whose read descriptor is added to the worker-specific
poll set and whose write descriptor is used for sending messages
to the worker.
The worker exposes a function mxs_worker_post_message with which
messages can be sent to the worker. These messages can be sent from
any thread but will be delivered on the thread dedicated to the
worker.
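Roughly, the arrangement looks as follows (the names are illustrative);
the essential point is that the read end of the pipe is just another
descriptor in the worker's own epoll set, so messages are always
delivered on the worker's thread:

    #define _GNU_SOURCE /* for pipe2() */
    #include <fcntl.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    typedef struct worker
    {
        int epoll_fd; /* the worker's own poll set */
        int read_fd;  /* read end of the message pipe, polled by the worker */
        int write_fd; /* write end, used when posting messages */
    } worker_t;

    static int worker_init_pipe(worker_t *w)
    {
        int fds[2];

        /* Non-blocking, since the poll set is used with EPOLLET. */
        if (pipe2(fds, O_NONBLOCK) != 0)
        {
            return -1;
        }
        w->read_fd = fds[0];
        w->write_fd = fds[1];

        struct epoll_event ev;
        ev.events = EPOLLIN | EPOLLET;
        ev.data.fd = w->read_fd;
        return epoll_ctl(w->epoll_fd, EPOLL_CTL_ADD, w->read_fd, &ev);
    }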
To illustrate how it works, maxadmin has been provided with a new
command "ping workers" that sends a message to every worker, each
of which then writes a message to the log.
Additional refactoring is needed, since there are currently overlaps
and undesirable interactions between the poll mechanism, the thread
mechanism and the worker mechanism.
This is currently visible, for instance, in the fact that it is not
possible to shut down MaxScale. The reason is that the workers should
be shut down first, then the poll mechanism and finally the threads.
The shutdown needs to be arranged so that a shutdown message is sent
to the workers, which then cause the polling loop to exit, which in
turn causes the threads to exit.
That can be arranged cleanly by making poll_waitevents() a "method"
of the worker, which implies that the poll set becomes a "member
variable" of the worker.
To be continued.
The whole worker thread mechanism assumes EPOLLET and non-blocking
descriptors, so that should be the default.
TODO: In debug mode, check that the provided file descriptor indeed
is non-blocking.
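The debug-mode check could be as simple as this sketch:

    #include <assert.h>
    #include <fcntl.h>
    #include <stdbool.h>

    static bool fd_is_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        return flags != -1 && (flags & O_NONBLOCK);
    }

    /* At the point where a descriptor is added to the poll set:
     *     assert(fd_is_nonblocking(fd));
     */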
The handler callback should now return a bitmask with bits set
according to what it did when it was called. That way the actual
statistics gathering can be done in poll_waitevents() and the
handler need not be aware of any thread structs.
Actually, the only thing that needs any assistance is accept handling,
because in poll_waitevents() we do not know whether a READ event
relates to a listening or a normal socket, that is, whether the
event should be counted as an accept or as a read.
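The bitmask could look like the following sketch; the bit names are
illustrative, the essential part being a dedicated accept bit so that
poll_waitevents() can count accepts and reads separately:

    #include <stdint.h>

    enum
    {
        POLL_RESULT_READ   = 1 << 0,
        POLL_RESULT_WRITE  = 1 << 1,
        POLL_RESULT_ACCEPT = 1 << 2, /* a READ event on a listening socket */
        POLL_RESULT_HUP    = 1 << 3,
        POLL_RESULT_ERROR  = 1 << 4
    };

    /* In poll_waitevents(), after the handler has returned: */
    void update_stats(uint32_t actions, uint64_t *n_reads, uint64_t *n_accepts)
    {
        if (actions & POLL_RESULT_READ)
        {
            ++*n_reads;
        }
        if (actions & POLL_RESULT_ACCEPT)
        {
            ++*n_accepts; /* only the handler knew the socket was listening */
        }
    }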
This is just a first step in a trial that will allow the addition
of any file descriptor to the general poll mechanism and hence
allow any i/o to be handled by the worker threads.
There is a structure
    typedef struct mxs_poll_data
    {
        void (*handler)(struct mxs_poll_data *data, int wid, uint32_t events);
        struct
        {
            int id;
        } thread;
    } MXS_POLL_DATA;
that any other structure (e.g. a DCB) encapsulating a file descriptor must
have as its first member (a C++ struct could basically derive from it).
That structure contains two members: 'handler' and 'thread.id'. 'handler' is a
pointer to a function taking a pointer to a struct mxs_poll_data, a worker thread
id and an epoll event mask as arguments.
So, DCB is modified to have MXS_POLL_DATA as its first member and 'handler'
is initialized with a function that *knows* the passed MXS_POLL_DATA can
be downcast to a DCB.
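The downcast is well defined because the first member of a struct has
the same address as the struct itself. A sketch, with dcb_poll_handler
as an illustrative name:

    #include <stdint.h>

    typedef struct mxs_poll_data
    {
        void (*handler)(struct mxs_poll_data *data, int wid, uint32_t events);
        struct
        {
            int id;
        } thread;
    } MXS_POLL_DATA;

    typedef struct dcb
    {
        MXS_POLL_DATA poll; /* MUST be the first member */
        int fd;
        /* ... the rest of the DCB ... */
    } DCB;

    static void dcb_poll_handler(MXS_POLL_DATA *data, int wid, uint32_t events)
    {
        DCB *dcb = (DCB *)data; /* valid precisely because poll is first */
        /* ... handle the events for this dcb ... */
        (void)dcb;
        (void)wid;
        (void)events;
    }

    /* Eventually, in dcb_alloc():
     *     dcb->poll.handler = dcb_poll_handler;
     */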
process_pollq no longer exists; it is now called process_pollq_dcb. The
general stuff related to statistics etc. will be moved into poll_waitevents
itself, after which the whole function will be moved to dcb.c. At that point,
the handler pointer will be set in dcb_alloc().
Effectively poll.[h|c] will provide a generic mechanism for listening on
whatever descriptors and the dcb stuff will be part of dcb.[h|c].
A module can now declare a path parameter for a directory that does not
yet exist. If the directory does not exist, MaxScale will create the
directory with the requested permissions.
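The creation itself amounts to something like the following sketch (an
illustrative helper, not the actual implementation, which would also
have to handle nested paths and error reporting):

    #include <errno.h>
    #include <stdbool.h>
    #include <sys/stat.h>

    static bool ensure_dir_exists(const char *path, mode_t mode)
    {
        /* Create the directory with the requested permissions; an
         * already existing directory is fine. */
        return mkdir(path, mode) == 0 || errno == EEXIST;
    }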
If a complete response is delivered in many buffers, then calling
gwbuf_length() whenever the complete size is needed starts to hurt.
By caching the length of the data received so far and by updating
the length in clientReply(), gwbuf_length() will be called exactly
once for each buffer (chain) delivered to routeQuery().
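A sketch of the caching, with hypothetical names:

    #include <stddef.h>

    typedef struct response_state
    {
        size_t n_received; /* length of the data received so far */
    } response_state_t;

    /* In clientReply(): gwbuf_length() is called once per delivered
     * buffer and the result is added to the running total. */
    void on_client_reply(response_state_t *state, size_t buffer_length)
    {
        state->n_received += buffer_length; /* buffer_length = gwbuf_length(buf) */
    }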
When the databases are mapped, it is desirable to get the complete
response in one contiguous buffer. This removes the need to manually
process the partial packets in the router code.
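What coalescing means, as a sketch with illustrative types (the actual
code would work on GWBUFs):

    #include <stdlib.h>
    #include <string.h>

    typedef struct buf
    {
        struct buf *next; /* next link of the buffer chain */
        size_t      len;
        char       *data;
    } buf_t;

    /* Copy every link of the chain into one contiguous allocation, so
     * the router never has to piece together partial packets. */
    static char *coalesce(const buf_t *chain, size_t total_len)
    {
        char *out = malloc(total_len);

        if (out != NULL)
        {
            size_t offset = 0;

            for (const buf_t *b = chain; b != NULL; b = b->next)
            {
                memcpy(out + offset, b->data, b->len);
                offset += b->len;
            }
        }
        return out;
    }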
test_poll was calling poll_init() twice, since it is already called in
init_test_env().
test_queuemanager was missing a bunch of frees. This doesn't fix it completely,
but removes most of the leaks and valgrind errors.