With the changes to the DCB handling, the service pointer of a client DCB
must always be assigned.
Also removed the unnecessary parentheses around the comparison.
Worker is now the base class of all workers. It has a message
queue and can be run in a thread of its own, or in the calling
thread. Worker cannot be used as such; a concrete worker class
must be derived from it. Currently the only concrete class is
RoutingWorker.
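
A minimal sketch of the arrangement described above; the member
names and signatures are illustrative, not the actual MaxScale
declarations:

    #include <cstdint>
    #include <deque>
    #include <mutex>
    #include <thread>

    class Worker
    {
    public:
        virtual ~Worker() = default;

        // Post a message to this worker; it is handled in the worker's
        // own message loop.
        void post_message(int id, intptr_t arg1, intptr_t arg2)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_messages.push_back(Message{id, arg1, arg2});
        }

        // Run the message loop in the calling thread.
        void run();

        // Run the message loop in a thread of its own.
        void start()
        {
            m_thread = std::thread([this] { run(); });
        }

    protected:
        Worker() = default; // cannot be used as such

        // A concrete worker decides what a message means.
        virtual void handle_message(int id, intptr_t arg1, intptr_t arg2) = 0;

    private:
        struct Message
        {
            int      id;
            intptr_t arg1;
            intptr_t arg2;
        };

        std::mutex          m_lock;
        std::deque<Message> m_messages;
        std::thread         m_thread;
    };

    // Currently the only concrete worker; in addition to the message
    // loop it owns an epoll instance and routes client traffic.
    class RoutingWorker : public Worker
    {
    protected:
        void handle_message(int id, intptr_t arg1, intptr_t arg2) override;
    };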
There is some overlap in functionality between Worker and
RoutingWorker; for instance, there is a need to broadcast a
message to all routing workers, but not to other workers.
Currently no other workers can be created, because the array
holding the worker pointers is exactly as large as the number of
RoutingWorkers. That will be changed so that the maximum number
of threads is hardwired to some ridiculous value such as 128.
That is the first step on the path towards being able to change
the number of worker threads at runtime.
A new class mxs::Worker will be introduced and mxs::RoutingWorker
will be derived from it. mxs::Worker will basically be just a
thread with a message loop.
Once it is available, all current non-worker threads (except the
one implicitly created by microhttpd) can be created by inheriting
from it; in practice that means the housekeeping thread, all
monitor threads and possibly the logging thread.
The benefit of this arrangement is that there will then be a
general mechanism for cross-thread communication without having
to use any shared data structures.
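
A hypothetical usage sketch of that mechanism, building on the
Worker sketch above; all_routing_workers() and the message id are
made up for illustration only:

    #include <cstdint>
    #include <vector>

    std::vector<RoutingWorker*>& all_routing_workers(); // assumed accessor

    const int MSG_DO_SOMETHING = 1; // illustrative message id

    void tell_all_routing_workers(intptr_t arg)
    {
        for (RoutingWorker* worker : all_routing_workers())
        {
            // Each worker handles the message in its own thread, so no
            // shared data structures need to be locked.
            worker->post_message(MSG_DO_SOMETHING, arg, 0);
        }
    }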
If maxadmin connections are handled by different workers, then
there may be a deadlock if some maxadmin command requires
communication with all workers.
Namely, in that case a message is sent to every worker except the
current one, but such a message will not be handled if the
receiving worker is at that point spinning on the debugcmd_lock
spinlock in debugcmd.c:execute_cmd().
We can prevent that deadlock from happening simply by ensuring
that all maxadmin connections are handled by one thread.
The old hkheartbeat variable was replaced with the mxs_clock()
function, which simply wraps an atomic load of the variable. This
allows MaxScale to read the value correctly and opens up the
possibility of converting the load to a relaxed memory order read.
Renamed the header and associated macros. Removed inclusion of the
heartbeat header from the housekeeper header and added it to the files
that were missing it.
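
A minimal sketch of the mxs_clock() idea; the name and type of the
variable holding the tick count are illustrative here:

    #include <atomic>
    #include <cstdint>

    static std::atomic<int64_t> hk_ticks{0}; // advanced by the housekeeper

    int64_t mxs_clock()
    {
        // A plain atomic load is correct from any thread. Switching to
        // std::memory_order_relaxed later is safe, because callers only
        // rely on the value increasing monotonically.
        return hk_ticks.load();
    }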
We need to copy some data from an AF_UNIX based listener dcb
to the accepted client dcb, to prevent an assertion violation in
dcb_get_port(). Further, to be able to log the path in the case
of an authentication error, we need to copy that as well.
When a double close is detected in a debug build, a debug assertion is
triggered. This will generate a core dump which should help investigate
the double close.
Since there is a concept called "listener", it is confusing that
the dcb "this" argument in some dcb functions is called 'listener'
instead of 'dcb', as it is called everywhere else.
The internal header directory conflicted with in-source builds,
causing a build failure. This is fixed by renaming the internal
header directory to something other than maxscale.
The renaming pointed out a few problems in a couple of source files that
appeared to include internal headers when the headers were in fact public
headers.
Fixed maxctrl in-source builds by making the copying of the sources
optional.
Only used in conjunction with queued connections, which are not
enabled anyway. Once that comes back on the table, it would be
better to use some standard data structures.
Apart from listeners, all DCBs will be assigned to the current
thread. This simplifies the addition of DCBs to worker threads.
Also performed a small cleanup of poll_add_dcb to make it more readable.
Now it is also possible to ensure that a DCB stays alive while
a task referring to it is posted from one worker to another.
That will be implemented in a subsequent commit.
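
A conceptual sketch of the ownership rule, using simplified
stand-in types rather than the real DCB and worker interfaces:

    // Simplified stand-ins; the real DCB, roles and worker API differ.
    enum dcb_role_t
    {
        DCB_ROLE_SERVICE_LISTENER,
        DCB_ROLE_CLIENT_HANDLER,
        DCB_ROLE_BACKEND_HANDLER
    };

    struct DCB
    {
        dcb_role_t role;
        int        owner; // id of the worker thread handling this DCB
    };

    int  current_worker_id();             // assumed: id of the calling worker
    void add_to_all_workers(DCB* dcb);    // assumed: listeners stay shared
    void add_to_worker(int id, DCB* dcb); // assumed: add to one worker's poll set

    void poll_add_dcb_sketch(DCB* dcb)
    {
        if (dcb->role == DCB_ROLE_SERVICE_LISTENER)
        {
            // Listeners remain shared between all workers.
            add_to_all_workers(dcb);
        }
        else
        {
            // Every other DCB is owned by the worker that created it,
            // i.e. the one executing right now.
            dcb->owner = current_worker_id();
            add_to_worker(dcb->owner, dcb);
        }
    }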
The polling mechanism can now optionally be used for managing
the lifetime of an object placed into the poll set.
If an MXS_POLL_DATA has a non-null 'free', then the reference count
of the data will be increased before calling the handler and
decreased after. In that case, if the reference count reaches 0,
the free function will be called.
Note that the reference counts of *all* MXS_POLL_DATAs returned
by 'epoll_wait' will be increased before the events are delivered
to the handlers individually for each MXS_POLL_DATA, and then once
all events have been delivered, the reference count of each
MXS_POLL_DATA will be decreased.
This ensures that, if there are interdependencies between different
MXS_POLL_DATAs returned by one call to 'epoll_wait', the case where
an MXS_POLL_DATA is deleted before its events have been delivered
can be avoided by using the reference count for lifetime management.
In subsequent commits, the reference count will be taken into use
in the lifetime management of DCBs.
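
The dispatch logic can be sketched roughly as follows; the struct
layout and names are simplified, not the actual declarations:

    #include <sys/epoll.h>
    #include <cstdint>

    struct MXS_POLL_DATA
    {
        uint32_t (*handler)(MXS_POLL_DATA* data, int thread_id, uint32_t events);
        void     (*free)(MXS_POLL_DATA* data); // non-null => refcounted lifetime
        int        refcount;
    };

    static void inc_ref(MXS_POLL_DATA* d)
    {
        if (d->free)
        {
            ++d->refcount;
        }
    }

    static void dec_ref(MXS_POLL_DATA* d)
    {
        if (d->free && --d->refcount == 0)
        {
            d->free(d);
        }
    }

    void deliver_events(epoll_event* events, int n, int thread_id)
    {
        // Pin every object returned by this epoll_wait call...
        for (int i = 0; i < n; ++i)
        {
            inc_ref(static_cast<MXS_POLL_DATA*>(events[i].data.ptr));
        }

        // ...deliver the events, pinning each object once more around
        // its own handler call...
        for (int i = 0; i < n; ++i)
        {
            MXS_POLL_DATA* d = static_cast<MXS_POLL_DATA*>(events[i].data.ptr);
            inc_ref(d);
            d->handler(d, thread_id, events[i].events);
            dec_ref(d);
        }

        // ...and finally drop the references taken up front. Only now
        // can an object whose count reached zero actually be freed.
        for (int i = 0; i < n; ++i)
        {
            dec_ref(static_cast<MXS_POLL_DATA*>(events[i].data.ptr));
        }
    }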
When something fails inside dcb_connect, we rewind the situation
properly without calling any of the close functions intended for
shutting down a properly created DCB. That way they can be
simplified, and once the reference counting is taken into use, it
is sufficient to call dcb_dec_ref(dcb) instead of
dcb_free_all_memory().
If a fake event is added to the current dcb, we arrange things so
that it is delivered immediately after the handling of the event(s)
during which the fake event was added has completed. Otherwise the
event is delivered via the event loop.
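
Roughly, the rule is as follows; the helper names below are made
up, the actual implementation differs:

    #include <cstdint>

    struct DCB;

    DCB* current_dcb_being_processed();                 // assumed
    void append_to_pending_fake_events(DCB*, uint32_t); // assumed
    void post_via_event_loop(DCB*, uint32_t);           // assumed

    void add_fake_event_sketch(DCB* dcb, uint32_t event)
    {
        if (dcb == current_dcb_being_processed())
        {
            // Delivered on this thread, right after the event(s) that
            // were being handled when the fake event was added.
            append_to_pending_fake_events(dcb, event);
        }
        else
        {
            // Delivered through the owning worker's event loop.
            post_via_event_loop(dcb, event);
        }
    }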
The adminusers test did not properly initialize all subsystems in
MaxScale. The polling and DCB tests weren't updated with the changes to
the DCB closing.
The following functions were added:

dcb_readq_append()
dcb_readq_prepend()
dcb_readq_set()
dcb_readq_has()
dcb_readq_release()
dcb_readq_get()
dcb_readq_length()
No code other than the DCB code itself should directly manipulate
the internals of a DCB. These functions will be taken into use in
protocol modules.
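
A hedged usage sketch from a protocol module's point of view,
assuming the functions behave as their names suggest;
complete_packet() and route_packet() are placeholders for
protocol-specific logic and the exact signatures may differ:

    static int handle_read_event(DCB* dcb)
    {
        GWBUF* buffer = NULL;

        // dcb_read() is assumed to return whatever is available,
        // including any data stashed in the read queue earlier.
        dcb_read(dcb, &buffer, 0);

        if (!complete_packet(buffer))
        {
            // Not a full packet yet: put the data back into the DCB's
            // read queue instead of touching its internals directly.
            dcb_readq_prepend(dcb, buffer);
            return 0;
        }

        return route_packet(dcb, buffer);
    }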
Before each event handler is called, it is checked whether the
dcb has been closed. If it has been, then the event handler is
not called.
The check has to be made before each event handler, because any
event handler can close the dcb.
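
A sketch of the guard, with minimal stand-in types; dcb_is_closed()
stands in for however the real code detects a closed dcb:

    #include <sys/epoll.h>
    #include <cstdint>

    struct DCB;

    struct ProtocolFuncs
    {
        int (*read)(DCB*);        // EPOLLIN handler
        int (*write_ready)(DCB*); // EPOLLOUT handler
    };

    struct DCB
    {
        ProtocolFuncs func;
    };

    bool dcb_is_closed(DCB* dcb); // assumed helper

    void process_events_sketch(DCB* dcb, uint32_t events)
    {
        if ((events & EPOLLIN) && !dcb_is_closed(dcb))
        {
            dcb->func.read(dcb); // may close the dcb
        }

        // Checked again before the next handler: the read handler above
        // may have closed the dcb.
        if ((events & EPOLLOUT) && !dcb_is_closed(dcb))
        {
            dcb->func.write_ready(dcb);
        }
    }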
If the current worker id is -1, we do not insist that the dcb
is closed by the owning thread. That will happen only for dcbs
that are created before the workers have been started, and hence
it is also ok to close them after the workers have exited.
If a dcb being closed is the dcb for which events are currently being
processed, the dcb is not closed immediately but only after all events
have been delivered.
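
Conceptually the close then looks roughly like this; the helper
names are made up for illustration:

    struct DCB;

    DCB* current_dcb_being_processed(); // assumed
    void mark_for_deferred_close(DCB*); // assumed: closed after delivery
    void dcb_close_now(DCB*);           // assumed

    void dcb_close_sketch(DCB* dcb)
    {
        if (dcb == current_dcb_being_processed())
        {
            // Closing now would pull the DCB out from under handlers
            // still to be called for it; defer until all events from
            // the current round have been delivered.
            mark_for_deferred_close(dcb);
        }
        else
        {
            dcb_close_now(dcb);
        }
    }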