The connections that relate to a particular session are now a part of the
sessions resource. Currently, only the generic information is stored for
each connection (id and server name).
A client DCB was immediately added to the epoll instance of the
relevant worker (possible, since that is thread-safe), but it was
added to the book-keeping via the message mechanism (necessary,
since that is not thread-safe). Consequently, if the connection was
closed before the message was delivered, handling the message caused
an access error.
Now the fd is also added to the epoll instance via the messaging
mechanism, so the problem can no longer occur. The only fds this
affects are connections made to maxadmin or maxinfo, as those are
always handled by the main thread due to deadlock issues.
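As a rough sketch of the arrangement, assuming a hypothetical
worker_post_message() that delivers a callback to the owning worker;
all identifiers below are illustrative, not the actual API:

#include <sys/epoll.h>

typedef struct dcb { int fd; } DCB;    /* heavily reduced stand-in */

void worker_book_keeping_add(DCB *dcb);                           /* hypothetical */
void worker_post_message(int wid, void (*fn)(void *), void *arg); /* hypothetical */

/* Runs in the owning worker: the fd now enters the book-keeping and
 * the epoll instance in one step, so it can no longer be polled
 * before its book-keeping entry exists. */
static void add_dcb_in_owner(void *arg)
{
    DCB *dcb = (DCB *)arg;
    worker_book_keeping_add(dcb);
    struct epoll_event ev = { .events = EPOLLIN, .data.ptr = dcb };
    epoll_ctl(0 /* the worker's epoll fd */, EPOLL_CTL_ADD, dcb->fd, &ev);
}

/* Elsewhere, instead of calling epoll_ctl() directly: */
void dcb_add_to_worker(int wid, DCB *dcb)
{
    worker_post_message(wid, add_dcb_in_owner, dcb);
}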
Now it is also possible to ensure that a DCB stays alive while
a task referring to it is posted from one worker to another.
That will be implemented in a subsequent commit.
If a fake event is added to the current DCB, it is delivered
immediately once the handling of the event(s) during which the fake
event was added has been completed. Otherwise the event is delivered
via the event loop.
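A sketch of how the two delivery paths could be arranged; apart from
the fake-event concept itself, every identifier here is an assumption:

#include <stdint.h>

typedef struct dcb DCB;

static DCB      *current_dcb;      /* hypothetical: DCB whose events are being handled */
static uint32_t  pending_events;   /* drained as soon as the handler returns */

void poll_add_event_to_loop(DCB *dcb, uint32_t events);   /* hypothetical */

void dcb_fake_event(DCB *dcb, uint32_t events)
{
    if (dcb == current_dcb)
    {
        pending_events |= events;              /* delivered right after handling */
    }
    else
    {
        poll_add_event_to_loop(dcb, events);   /* normal event-loop delivery */
    }
}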
The adminusers test did not properly initialize all subsystems in
MaxScale. The polling and DCB tests had not been updated to reflect
the changes in DCB closing.
dcb_readq_append()
dcb_readq_prepend()
dcb_readq_set()
dcb_readq_has()
dcb_readq_release()
dcb_readq_get()
dcb_readq_length()
No code other than the DCB code itself should directly manipulate the
internals of a DCB. These functions will be taken into use in the
protocol modules.
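The names above come from this change; the signatures and the usage
below are merely an assumed illustration of how a protocol module
would go through the accessors instead of touching dcb->readq
directly (DCB and GWBUF as in the MaxScale headers):

#include <stdbool.h>

/* Assumed signatures, for illustration only. */
GWBUF *dcb_readq_get(DCB *dcb);                 /* take ownership of the readq */
void   dcb_readq_prepend(DCB *dcb, GWBUF *buf); /* push data back for later   */
bool   dcb_readq_has(DCB *dcb);                 /* is there stashed data?     */

/* A protocol module stashing an incomplete packet until more data arrives. */
static void stash_partial_packet(DCB *dcb, GWBUF *partial)
{
    dcb_readq_prepend(dcb, partial);   /* not: dcb->readq = partial */
}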
This will be used by a subsequent `session_get_current()` and
`session_get_current_id()` for obtaining the current SESSION and
session id, respectively. The latter of those will be used by the
logging mechanism for logging the session id in conjunction with
messages.
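As a sketch of the intended use; the return type and the logging
helper are assumptions:

#include <inttypes.h>
#include <stdio.h>

uint64_t session_get_current_id(void);   /* assumed return type */

/* Illustrative only: prefix every message with the current session id. */
static void log_with_session(const char *msg)
{
    printf("(%" PRIu64 ") %s\n", session_get_current_id(), msg);
}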
When dcb_close() is called, the DCB is only marked for closing
and the actual closing takes place only after all event handlers
have been called. That way, the state of the DCB will not change
during event processing but only after.
From a handler perspective this should now be just like it was
when the zombie queue was present.
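A minimal sketch of this deferred-close pattern; apart from
dcb_close() and the n_close counter (mentioned in the TODO below),
the names are illustrative:

#include <stdint.h>

typedef struct dcb { int n_close; } DCB;              /* reduced stand-in */

void call_event_handlers(DCB *dcb, uint32_t events);  /* hypothetical */
void dcb_final_close(DCB *dcb);                       /* hypothetical */

/* Marking only: the DCB stays intact while handlers run. */
void dcb_close(DCB *dcb)
{
    dcb->n_close++;
}

/* The worker completes the close only after all handlers have returned. */
static void process_dcb_events(DCB *dcb, uint32_t events)
{
    call_event_handlers(dcb, events);   /* DCB state is stable in here */
    if (dcb->n_close > 0)
    {
        dcb_final_close(dcb);
    }
}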
TODO: There are far too many state variables or variables akin to
state variables - dcb_role, state, persistentstart, n_close -
in DCB. A cleanup is warranted.
The server's internal session id may be larger than 4 bytes (MariaDB
uses 8), but only 4 are sent in the handshake. The full value can be
queried from the server, but MaxScale does not yet support that
query. In any case, both the protocol and MXS_SESSION now have 64 bit
counters. Only the low 32 bits are sent in the handshake, just as the
server does.
Preparation for adding KILL syntax support.
Session id changed to uint32 everywhere. Added atomic op.
Session id can be acquired before session_alloc().
Added session_alloc_with_id(), which is given a session id number.
Worker object has a session_id->SESSION* mapping, not used yet.
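A sketch of the id handling described above, using a GCC atomic
builtin; the helper names are assumptions:

#include <stdint.h>

static uint64_t next_session_id = 1;   /* process-wide counter */

/* An id can be acquired before session_alloc(); the increment is atomic. */
uint64_t session_id_acquire(void)
{
    return __atomic_fetch_add(&next_session_id, 1, __ATOMIC_RELAXED);
}

/* Only the low 32 bits go into the handshake, just as with the server. */
uint32_t session_id_for_handshake(uint64_t id)
{
    return (uint32_t)(id & 0xFFFFFFFF);
}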
The function was no longer thread-safe as it used the obsolete per-thread
spinlocks to iterate over the DCBs. Now the function uses the newly added
WorkerTask class to iterate over them.
Since the new WorkerTask mechanism is far superior to dcb_foreach, the
latter is now deprecated.
This is just a first step in a trial that will allow the addition
of any file descriptor to the general poll mechanism and hence
allow any i/o to be handled by the worker threads.
There is a structure
typedef struct mxs_poll_data
{
    void (*handler)(struct mxs_poll_data *data, int wid, uint32_t events);
    struct
    {
        int id;
    } thread;
} MXS_POLL_DATA;
that any other structure (e.g. a DCB) encapsulating a file descriptor must
have as its first member (a C++ struct could basically derive from it).
That structure contains two members: 'handler' and 'thread.id'. Handler is a
pointer to a function taking a pointer to a struct mxs_poll_data, a worker thread
id and an epoll event mask as arguments.
So, DCB is modified to have MXS_POLL_DATA as its first member and 'handler'
is initialized with a function that *knows* the passed MXS_POLL_DATA can
be downcast to a DCB.
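For illustration, this is the layout being described and the downcast
it makes possible; the DCB fields are placeholders:

typedef struct dcb
{
    MXS_POLL_DATA poll;   /* must be the first member */
    int           fd;
    /* ... the rest of the DCB ... */
} DCB;

/* The handler installed for DCBs: it *knows* the poll data is a DCB. */
static void dcb_poll_handler(struct mxs_poll_data *data, int wid, uint32_t events)
{
    DCB *dcb = (DCB *)data;   /* valid because poll is the first member */
    /* ... process 'events' for this dcb on worker 'wid' ... */
}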
process_pollq no longer exists; it has been renamed process_pollq_dcb. The
general code related to statistics etc. will be moved into poll_waitevents
itself, after which the whole function will be moved to dcb.c. At that point,
the handler pointer will be set in dcb_alloc().
Effectively poll.[h|c] will provide a generic mechanism for listening on
arbitrary descriptors, and the DCB-specific parts will become part of
dcb.[h|c].
Both the listeners and servers now support IPv6 addresses.
The namedserverfilter does not yet use the new structures and needs to be
fixed in a following commit.
Due to the changes in the threading model, the DCB write code can be
simplified considerably.
Since only one thread can write to a DCB, it's safe to assume that no new
data is added to the write queue of a DCB while it is being drained. This
removes the need for the code that tracks whether a concurrent DCB write
attempt was made.
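A sketch of the simplified drain loop this allows, assuming the GWBUF
helpers from the MaxScale buffer API and a writeq member in the DCB;
the function itself is illustrative:

#include <unistd.h>

/* With exactly one writer thread per DCB, nothing can append to the
 * write queue while it is being drained, so no concurrent-write
 * tracking is needed. */
static void dcb_drain_writeq(DCB *dcb)
{
    while (dcb->writeq != NULL)
    {
        GWBUF *head = dcb->writeq;
        ssize_t n = write(dcb->fd, GWBUF_DATA(head), GWBUF_LENGTH(head));
        if (n <= 0)
        {
            break;                              /* retry on the next EPOLLOUT */
        }
        dcb->writeq = gwbuf_consume(head, n);   /* drop the written bytes */
    }
}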
Because the high and low water callbacks weren't used by any module, it is
safe to remove them. They offer no real benefits over the drain callback.
Removed unused spinlocks from DCBs, sessions and the MySQL protocol
structs. They were used in a context where only one thread has access to
the structure.
Removed unused member variables from DCBs.
This allows modules to expose only one entry point with a consistent
signature. In the future, this could be used to implement declarations of
module parameters.
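A hedged sketch of such an entry point; MXS_CREATE_MODULE is the
conventional MaxScale symbol, but the body here is only illustrative:

/* The single entry point a module exposes; everything else, including
 * any future parameter declarations, hangs off the returned MXS_MODULE. */
MXS_MODULE *MXS_CREATE_MODULE()
{
    static MXS_MODULE info;   /* api type, version, description, module
                                 object and, eventually, the parameter
                                 declarations would be filled in here */
    return &info;
}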
Making the lists of persistent DCBs thread specific is both a bug fix and
a performance enhancement. There was a small window where a non-owner
thread could receive events for a DCB. By partitioning the DCBs into
thread specific lists, this is avoided by removing the possibility of DCBs
moving between threads.
The code prevented scaling by imposing global spinlocks on the DCBs and
SESSIONs. Removing this list means that a thread-local list must be taken
into use to replace it.
dcb_foreach allows a function to be mapped over all DCBs in
MaxScale. This allows the list of DCBs to be iterated in a safe manner
without having to worry about the internal locking of the DCB mechanism.
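As an illustration; the callback contract is assumed (returning true
continues the iteration):

#include <stdbool.h>

/* Count every DCB; an illustrative callback for dcb_foreach. */
static bool count_dcbs(DCB *dcb, void *data)
{
    int *count = (int *)data;
    (*count)++;
    return true;   /* keep iterating */
}

static int count_all_dcbs(void)
{
    int count = 0;
    dcb_foreach(count_dcbs, &count);
    return count;
}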
Each DCB needs to be added to the owning thread's list so that they can be
iterated through. As sessions always have a client DCB, the sessions don't
need to be added to a similar per thread list.
This change fixes a problem with dcb_hangup_foreach that the removal of
the list manager introduced. Now the hangup events are properly injected
for the DCBs that connect to the server in question.
Because each thread has its own epoll file descriptor and only one
thread can process a DCB, it makes sense to move to a per-thread zombie
queue. This removes one of the last restrictions on scalability.
Having a unique epoll instance for each thread allows a lot of the locking
in poll.c to be removed. The downside is that each session can have only
one thread processing its events, which might reduce performance with very
low client counts.