See the script directory for the method. The script to run in the top-level
MaxScale directory is called maxscale-uncrustify.sh, which uses
another script, list-src, from the same directory (so you need to have that
directory in your PATH). The uncrustify version used was 0.66.
- The ones that were not used were removed.
- The ones that were used were moved close to the actual type.
In most cases some values were missing, and if the definition is
close to the type there is at least a remote chance that they will stay
in sync. If detached, they surely will not.
The LocalClient micro-client required a reference to the session that was
valid at construction time. This is why the previous implementation used
dcb_foreach to first gather the targets and then execute queries on them.
By replacing this reference with pointers to the raw data it requires, we
lift the requirement that the originating session be alive at construction
time.
Now that the LocalClient no longer holds a reference to the session, the
killing of the connection does not have to be done on the same thread that
started the process. This prevents the deadlock that occurred when
concurrent dcb_foreach calls were made.
Replaced the unused dcb_foreach_parallel with a version of dcb_foreach
that allows iteration of the DCBs local to this worker. This
dcb_foreach_local is the basis on which all DCB access outside of
administrative tasks should be built.
This change will introduce a regression in functionality: The client will
no longer receive an error if no connections match the KILL query
criteria. This is done to avoid having to synchronize the workers after
they have performed the killing of their own connections.
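A minimal sketch of how the worker-local iteration might be used for the
KILL case, assuming dcb_foreach_local takes the same callback shape as
dcb_foreach (bool (*)(DCB *, void *)); the payload struct and the session
id field name are illustrative, not the actual implementation:

#include <stdbool.h>
#include <stdint.h>
#include <maxscale/dcb.h>

/* Illustrative payload for the callback; not the actual KILL code. */
typedef struct
{
    uint64_t target_id;    /* session id to kill */
} kill_data_t;

/* Called once for every DCB owned by the current worker. */
static bool kill_cb(DCB *dcb, void *data)
{
    kill_data_t *kill = (kill_data_t*)data;

    if (dcb->session && dcb->session->ses_id == kill->target_id)
    {
        dcb_close(dcb);    /* safe: the DCB is local to this worker */
    }

    return true;    /* keep iterating */
}

static void kill_local_connections(uint64_t id)
{
    kill_data_t data = {id};
    dcb_foreach_local(kill_cb, &data);
}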
The dcb_foreach function is not safe to use from multiple threads at the
same time. This should be asserted by checking that the function is called
only from the main worker.
The addition of this assertion also implies that only administrative
operations should use the dcb_foreach function. To accommodate this
change, the KILL command iteration needs to be adjusted.
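The assertion itself could look roughly like this sketch; the helper and
macro names, and the convention that the main worker has id 0, are
assumptions:

#include <maxscale/debug.h>
#include <maxscale/worker.h>

/* Sketch only: check that the calling thread is the main worker before
 * iterating over DCBs owned by other workers. Assumes worker 0 is the
 * main worker and that mxs_worker_get_current_id() returns the id of
 * the worker running on the calling thread. */
static void assert_called_from_main_worker(void)
{
    ss_dassert(mxs_worker_get_current_id() == 0);
}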
The connections that relate to a particular session are now a part of the
session's resource. Currently, only generic information (the id and the
server name) is stored for each connection.
A client DCB was immediately added to the epoll instance of the relevant
worker (possible, since that is thread-safe), but was added to the
book-keeping via the message mechanism (necessary, since that is not
thread-safe). Consequently, if the connection was closed before the
message was delivered, handling the message caused an access error.
Now the fd is also added to the epoll instance via the messaging
mechanism, so the problem can no longer occur. The only fds this
affects are connections made to maxadmin or maxinfo, as those are
always handled by the main thread due to deadlock issues.
Now it is also possible to ensure that a DCB stays alive while
a task referring to it is posted from one worker to another.
That will be implemented in a subsequent commit.
If a fake event is added to the current DCB, we arrange things so
that it is delivered immediately after the handling of the event(s)
during which the fake event was added has been completed.
Otherwise the event is delivered via the event loop.
The adminusers test did not properly initialize all subsystems in
MaxScale. The polling and DCB tests weren't updated with the changes to
the DCB closing.
dcb_readq_append()
dcb_readq_prepend()
dcb_readq_set()
dcb_readq_has()
dcb_readq_release()
dcb_readq_get()
dcb_readq_length()
No code other than the DCB code itself should directly manipulate the
internals of a DCB. These functions will be taken into use in the
protocol modules.
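A sketch of how a protocol module might use them instead of poking at the
DCB internals directly; the exact signatures (GWBUF ownership in
particular) are assumptions:

#include <maxscale/buffer.h>
#include <maxscale/dcb.h>

/* Sketch: stash an incomplete packet for the next read and later combine
 * it with freshly read data. Assumes dcb_readq_append() stores a GWBUF,
 * dcb_readq_has() tells whether one is stored and dcb_readq_release()
 * hands the stored buffer back to the caller. */
static void stash_partial_packet(DCB *dcb, GWBUF *partial)
{
    dcb_readq_append(dcb, partial);
}

static GWBUF* combine_with_stashed(DCB *dcb, GWBUF *fresh)
{
    if (dcb_readq_has(dcb))
    {
        GWBUF *stored = dcb_readq_release(dcb);
        fresh = gwbuf_append(stored, fresh);
    }

    return fresh;
}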
This will be used by a subsequent `session_get_current()` and
`session_get_current_id()` for obtaining the current SESSION and
session id, respectively. The latter of those will be used by the
logging mechanism for logging the session id in conjunction with
messages.
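For instance, a log call could then include the id of the session whose
event is being processed, roughly as in this sketch (treating a return
value of 0 as "no current session" is an assumption):

#include <inttypes.h>
#include <maxscale/log_manager.h>
#include <maxscale/session.h>

/* Sketch: prefix a log message with the current session id. */
static void log_with_session_id(const char *msg)
{
    uint64_t id = session_get_current_id();    /* assumed 0 outside a session */

    MXS_NOTICE("(%" PRIu64 ") %s", id, msg);
}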
When dcb_close() is called, the DCB is only marked for closing
and the actual closing takes place only after all event handlers
have been called. That way, the state of the DCB will not change
during event processing but only after.
From a handler perspective this should now be just like it was
when the zombie queue was present.
TODO: There are far too many state variables or variables akin to
state variables - dcb_role, state, persistentstart, n_close -
in DCB. A cleanup is warranted.
The server's internal session id may be larger than 4 bytes (MariaDB uses
8), but only 4 are sent in the handshake. The full value can be queried
from the server, but MaxScale does not yet support that query. In any
case, both the protocol and MXS_SESSION now use 64-bit counters. Only the
low 32 bits are sent in the handshake, just as the server does.
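In other words, the value placed in the handshake is simply the low half
of the 64-bit id, as in this sketch:

#include <stdint.h>

/* Sketch: only the low 32 bits of the 64-bit session id fit into the
 * thread id field of the MySQL handshake packet. */
static uint32_t handshake_thread_id(uint64_t session_id)
{
    return (uint32_t)(session_id & 0xFFFFFFFF);
}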
Preparation for adding KILL syntax support.
- Session id changed to uint32 everywhere. Added atomic op.
- Session id can be acquired before session_alloc().
- Added session_alloc_with_id(), which is given a session id number.
- Worker object has a session_id->SESSION* mapping, not used yet.
The function was no longer thread-safe as it used the obsolete per-thread
spinlocks to iterate over the DCBs. Now the function uses the newly added
WorkerTask class to iterate over them.
Since the new WorkerTask mechanism is far superior to dcb_foreach, the
latter is now deprecated.
This is just a first step in a trial that will allow the addition
of any file descriptor to the general poll mechanism and hence
allow any i/o to be handled by the worker threads.
There is a structure
typedef struct mxs_poll_data
{
    void (*handler)(struct mxs_poll_data *data, int wid, uint32_t events);
    struct
    {
        int id;
    } thread;
} MXS_POLL_DATA;
that any other structure (e.g. a DCB) encapsulating a file descriptor must
have as its first member (a C++ struct could basically derive from it).
That structure contains two members: 'handler' and 'thread.id'. Handler is
a pointer to a function taking a pointer to a struct mxs_poll_data, a
worker thread id and an epoll event mask as arguments.
So, DCB is modified to have MXS_POLL_DATA as its first member and 'handler'
is initialized with a function that *knows* the passed MXS_POLL_DATA can
be downcast to a DCB.
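In sketch form (everything other than the MXS_POLL_DATA member is
illustrative, not the actual DCB definition):

typedef struct dcb
{
    MXS_POLL_DATA poll;    /* must be the first member */
    int           fd;
    /* ... the rest of the DCB ... */
} DCB;

/* The function stored in poll.handler. Because MXS_POLL_DATA is the first
 * member, a pointer to it is also a valid pointer to the enclosing DCB. */
static void dcb_poll_handler(struct mxs_poll_data *data, int wid, uint32_t events)
{
    DCB *dcb = (DCB*)data;

    /* dispatch 'events' to the read/write/error handling of this DCB */
    (void)dcb;
    (void)wid;
}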
process_pollq no longer exists; it is now called process_pollq_dcb. The
general work related to statistics etc. will be moved to poll_waitevents
itself, after which the whole function will be moved to dcb.c. At that
point, the handler pointer will be set in dcb_alloc().
Effectively poll.[h|c] will provide a generic mechanism for listening on
whatever descriptors and the dcb stuff will be part of dcb.[h|c].
Both the listeners and servers now support IPv6 addresses.
The namedserverfilter does not yet use the new structures and needs to be
fixed in a following commit.
Due to the changes in the threading model, the DCB write code can be
simplified considerably.
Since only one thread can write to a DCB, it's safe to assume that no new
data is added to the write queue of a DCB while it is being drained. This
removes the need for the code that tracks whether a concurrent DCB write
attempt was made.
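Conceptually, the drain can now be a simple loop, as in this sketch
(write_some() is a hypothetical helper, not MaxScale code):

#include <stdbool.h>
#include <maxscale/buffer.h>
#include <maxscale/dcb.h>

/* Hypothetical helper: writes what the socket accepts and returns
 * whatever could not be written yet. */
static GWBUF* write_some(int fd, GWBUF *queue, bool *would_block);

/* Sketch: only the owning worker writes to this DCB, so nothing can be
 * appended to dcb->writeq while it is being drained and no flags for
 * tracking concurrent writers are needed. */
static void drain_writeq(DCB *dcb)
{
    GWBUF *queue = dcb->writeq;
    dcb->writeq = NULL;

    while (queue)
    {
        bool would_block = false;
        queue = write_some(dcb->fd, queue, &would_block);

        if (would_block)
        {
            break;    /* socket full: keep the rest and wait for EPOLLOUT */
        }
    }

    dcb->writeq = queue;    /* NULL if everything was written */
}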
Because the high and low water callbacks weren't used by any module, it is
safe to remove them. They offer no real benefits over the drain callback.
Removed unused spinlocks from DCBs, sessions and the MySQL protocol
structs. They were used in a context where only one thread has access to
the structure.
Removed unused member variables from DCBs.