Now it is also possible to ensure that a DCB stays alive while
a task referring to it is posted from one worker to another.
That will be implemented in a subsequent commit.
The polling mechanism can now optionally be used for managing
the lifetime of an object placed into the poll set.
If an MXS_POLL_DATA has a non-NULL 'free' function, the reference
count of the data will be increased before the handler is called and
decreased afterwards. If the reference count then reaches 0, the free
function will be called.
Note that the reference counts of *all* MXS_POLL_DATAs returned
by 'epoll_wait' will be increased before the events are delivered
to the handlers individually for each MXS_POLL_DATA, and then once
all events have been delivered, the reference count of each
MXS_POLL_DATA will be decreased.
This ensures that if there are interdependencies between different
MXS_POLL_DATAs returned by one call to 'epoll_wait', the reference
count can be used for lifetime management so that no MXS_POLL_DATA is
deleted before its events have been delivered.
In subsequent commits, the reference count will be taken into use
in the lifetime management of DCBs.
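
For illustration, the dispatching could use the reference count
roughly as below. This is only a sketch: apart from 'free', the
struct layout, field names and deliver_events() are assumptions,
not the actual implementation.

    #include <stdint.h>
    #include <sys/epoll.h>

    typedef struct mxs_poll_data
    {
        uint32_t (*handler)(struct mxs_poll_data *data, int thread_id, uint32_t events);
        void     (*free)(struct mxs_poll_data *data);  /* optional, may be NULL */
        int        refcount;                           /* managed by the poll loop */
    } MXS_POLL_DATA;

    static void deliver_events(struct epoll_event *events, int nfds, int thread_id)
    {
        /* Pin every object returned by epoll_wait before anything is delivered. */
        for (int i = 0; i < nfds; i++)
        {
            MXS_POLL_DATA *data = events[i].data.ptr;

            if (data->free)
            {
                ++data->refcount;
            }
        }

        /* Deliver the events individually for each MXS_POLL_DATA. */
        for (int i = 0; i < nfds; i++)
        {
            MXS_POLL_DATA *data = events[i].data.ptr;
            data->handler(data, thread_id, events[i].events);
        }

        /* Unpin; an object is freed only when its last reference is gone. */
        for (int i = 0; i < nfds; i++)
        {
            MXS_POLL_DATA *data = events[i].data.ptr;

            if (data->free && --data->refcount == 0)
            {
                data->free(data);
            }
        }
    }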
When something fails inside dcb_connect, the situation is now rewound
properly without calling any of the close functions intended for
shutting down a properly created DCB. That way those functions can be
simplified, and once reference counting is taken into use it is
sufficient to call dcb_dec_ref(dcb) instead of dcb_free_all_memory().
If a fake event is added to the current dcb, it is delivered
immediately after the event(s) during whose handling the fake event
was added have been processed. Otherwise the event is delivered via
the event loop.
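
A minimal sketch of that branch; all names other than 'dcb' are
assumptions and poll_post_event() is a hypothetical helper:

    #include <stdint.h>

    typedef struct dcb DCB;                 /* stand-in for the real DCB type */

    typedef struct
    {
        DCB      *current_dcb;              /* dcb whose events are being handled */
        uint32_t  fake_events;              /* fake events queued for it */
    } thread_state_t;

    static __thread thread_state_t this_thread;

    extern void poll_post_event(DCB *dcb, uint32_t events);   /* hypothetical */

    void dcb_add_fake_event(DCB *dcb, uint32_t events)        /* hypothetical name */
    {
        if (dcb == this_thread.current_dcb)
        {
            /* Added from within this dcb's own event handler: remember it and
             * deliver it right after the current events have been handled. */
            this_thread.fake_events |= events;
        }
        else
        {
            /* Otherwise route the event through the event loop. */
            poll_post_event(dcb, events);
        }
    }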
The adminusers test did not properly initialize all subsystems in
MaxScale. The polling and DCB tests had not been updated to reflect
the changes in DCB closing.
The following functions were added for managing the read queue of a
DCB:

dcb_readq_append()
dcb_readq_prepend()
dcb_readq_set()
dcb_readq_has()
dcb_readq_release()
dcb_readq_get()
dcb_readq_length()
No code other than the DCB code itself should directly manipulate the
internals of a DCB. These functions will be taken into use in the
protocol modules.
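
For example, a protocol module could park a partially read packet
along these lines. The exact semantics of the readq functions are
assumptions based on their names, complete_packet_length() is a
hypothetical helper, and DCB/GWBUF come from the MaxScale headers;
gwbuf_append() and gwbuf_length() are the existing buffer functions.

    static GWBUF *assemble_packet(DCB *dcb, GWBUF *buffer)
    {
        if (dcb_readq_has(dcb))
        {
            /* Combine data left over from the previous read with the new data. */
            buffer = gwbuf_append(dcb_readq_release(dcb), buffer);
        }

        if (gwbuf_length(buffer) < complete_packet_length(buffer))
        {
            /* Still incomplete: store it in the dcb and wait for more data,
             * instead of touching dcb->readq directly. */
            dcb_readq_set(dcb, buffer);
            return NULL;
        }

        return buffer;
    }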
Before each event handler is called, it is checked whether the
dcb has been closed. If it has been, then the event handler is
not called.
The check has to be made before each event handler, because any
event handler can close the dcb.
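
Sketched in terms of the n_close counter mentioned further below;
handle_read() and handle_write() are stand-ins for the real protocol
entry points, not actual function names.

    static uint32_t deliver_dcb_events(DCB *dcb, uint32_t events)
    {
        uint32_t rc = 0;

        /* Checked anew before every handler, because any handler may close the dcb. */
        if ((events & EPOLLIN) && dcb->n_close == 0)
        {
            rc |= handle_read(dcb);      /* stand-in for the read handler */
        }

        if ((events & EPOLLOUT) && dcb->n_close == 0)
        {
            /* handle_read() above may have called dcb_close(). */
            rc |= handle_write(dcb);     /* stand-in for the write-ready handler */
        }

        return rc;
    }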
The original process should use _exit if the forking of the child process
is successful. This makes sure that the exit handlers are called only when
the daemon process exits.
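
A sketch of the relevant part of the daemonizing step:

    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    static void detach_from_terminal(void)
    {
        pid_t pid = fork();

        if (pid < 0)
        {
            exit(EXIT_FAILURE);        /* fork failed in the original process */
        }
        else if (pid > 0)
        {
            /* Original process: _exit() skips the atexit() handlers so that
             * they are run only when the daemon process itself exits. */
            _exit(EXIT_SUCCESS);
        }

        /* pid == 0: the child continues as the daemon. */
    }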
The workers need to be destroyed only after the services have been
destroyed, to ensure that the workers are still around in case the
destruction of the services involves the closing of dcbs.
If the current worker id is -1, we do not insist that the dcb is
closed by the owning thread. That is the case only for dcbs that are
created before the workers have been started, and hence it is also ok
to close them after the workers have exited.
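
Roughly, the ownership check then amounts to the following sketch;
current_worker_id(), the owner field and check_close_allowed() are
assumed names, only the rule itself comes from the description above.

    #include <assert.h>

    typedef struct dcb
    {
        int owner;                       /* id of the owning worker (assumed field) */
    } DCB;

    extern int current_worker_id(void);  /* hypothetical: -1 when not on a worker */

    static void check_close_allowed(DCB *dcb)
    {
        int wid = current_worker_id();

        /* Dcbs created before the workers were started may also be closed
         * after the workers have exited (wid == -1); otherwise only the
         * owning worker thread may close the dcb. */
        assert(wid == -1 || wid == dcb->owner);
    }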
The automatic reconnection feature of the Connector-C was not
enabled. Enabling it should reduce the number of false positives that
the monitors pick up.
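
For reference, enabling it amounts to setting MYSQL_OPT_RECONNECT on
the connection handle before connecting; this is a sketch, not the
actual monitor code.

    #include <mysql.h>

    static void enable_auto_reconnect(MYSQL *con)
    {
        my_bool reconnect = 1;
        mysql_options(con, MYSQL_OPT_RECONNECT, &reconnect);
    }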
If the session id is known, it will be logged together with all
messages. If present, the session id appears, enclosed in parentheses,
right after the message category. E.g.
2017-08-30 12:20:49 warning: (4711) [masking] The rule ...
Conceptually this is a cherry-pick of commit
67efd1daeabbc398b8a8fbc0cd02c2af26e4cb6c (2.2), but too much has
changed to actually be able to cherry-pick that commit.
If a dcb being closed is the dcb for which events are currently being
processed, the dcb is not closed immediately but only after all events
have been delivered.
This will be used by a subsequent `session_get_current()` and
`session_get_current_id()` for obtaining the current SESSION and
session id, respectively. The latter of those will be used by the
logging mechanism for logging the session id in conjunction with
messages.
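
A minimal sketch of what the accessors could look like, assuming the
current session is kept in thread-specific storage; the setter name
and the ses_id field name are assumptions, and MXS_SESSION is the
session type from the MaxScale headers.

    #include <stddef.h>
    #include <stdint.h>

    static __thread MXS_SESSION *current_session = NULL;

    void session_set_current(MXS_SESSION *session)   /* hypothetical setter */
    {
        current_session = session;
    }

    MXS_SESSION *session_get_current(void)
    {
        return current_session;
    }

    uint64_t session_get_current_id(void)
    {
        /* 0 when no session is associated with the current thread. */
        return current_session ? current_session->ses_id : 0;
    }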
When dcb_close() is called, the DCB is only marked for closing
and the actual closing takes place only after all event handlers
have been called. That way, the state of the DCB will not change
during event processing but only after.
From a handler perspective this should now be just like it was
when the zombie queue was present.
TODO: There are far too many state variables or variables akin to
state variables - dcb_role, state, persistentstart, n_close -
in DCB. A cleanup is warranted.
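
A rough sketch of the close path, reusing the per-thread current_dcb
from the earlier sketch; apart from n_close, the names are assumptions
and dcb_final_close() is a hypothetical name for the actual closing
work.

    void dcb_close(DCB *dcb)
    {
        if (dcb == this_thread.current_dcb)
        {
            /* Called from one of this dcb's own event handlers: only mark it.
             * The actual closing is done after all handlers have returned, so
             * the state does not change in the middle of event processing. */
            dcb->n_close++;
        }
        else
        {
            /* Safe to do the real closing work immediately. */
            dcb_final_close(dcb);
        }
    }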
As COM_QUIT would terminate the connection, there's no need to initiate
the session reset process. Also make sure all buffers are empty before
putting the DCB into the pool.
Added extra debug assertions for parts of the code that are related to the
COM_CHANGE_USER processing.
The function that printed all sessions assumed that all client DCBs
had valid, non-dummy sessions. It is possible that a client with a
dummy session is in the list. These sessions should be ignored.
The EOF packet calculation function in modutil.cc didn't handle the
case where the payload exceeded the maximum packet size and could
mistake binary data for an ERR packet.
The state of a multi-packet payload is now exposed by the
modutil_count_signal_packets function. This allows proper handling of
large multi-packet payloads.
Added minor improvements to mxs1110_16mb to handle testing of this change.
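
The protocol detail the fix relies on is that a payload of exactly
0xffffff bytes continues in the next packet; while such a multi-packet
payload is being collected, a leading 0xff byte is payload data and
not an ERR header. A sketch of that check, not the actual
modutil_count_signal_packets() code:

    #include <stdbool.h>
    #include <stdint.h>

    #define MYSQL_MAX_PAYLOAD 0xffffff

    /* Returns true if the packet whose 4-byte header starts at 'header' is
     * part of a payload that continues in the next packet. */
    static bool payload_continues(const uint8_t *header)
    {
        uint32_t payload_len = header[0] | (header[1] << 8) | (header[2] << 16);
        return payload_len == MYSQL_MAX_PAYLOAD;
    }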
As each client/server connection will be handled by a specific
thread, all closing activity can take place directly when the
connection is closed and not later when the zombie queue is
processed.
In a subsequent commit the zombie queue will be removed.
The default users are now inserted into the admin users files if no
existing files are found. This removes the hard-coded checks for admin
user names and simplifies the admin user logic.
The default address of the admin interface is the IPv6 address '::',
which corresponds to the IPv4 address '0.0.0.0'. If the system doesn't
support IPv6, then an attempt to bind on the IPv4 address should be
made.
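
A sketch of that fallback using plain sockets; the actual admin
interface code differs, this only illustrates the intended behaviour.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int open_admin_socket(uint16_t port)
    {
        int fd = socket(AF_INET6, SOCK_STREAM, 0);

        if (fd != -1)
        {
            struct sockaddr_in6 addr6;
            memset(&addr6, 0, sizeof(addr6));
            addr6.sin6_family = AF_INET6;
            addr6.sin6_addr = in6addr_any;               /* '::' */
            addr6.sin6_port = htons(port);

            if (bind(fd, (struct sockaddr *)&addr6, sizeof(addr6)) == 0)
            {
                return fd;
            }

            close(fd);
        }

        /* No IPv6 support (or binding '::' failed): fall back to '0.0.0.0'. */
        fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd != -1)
        {
            struct sockaddr_in addr4;
            memset(&addr4, 0, sizeof(addr4));
            addr4.sin_family = AF_INET;
            addr4.sin_addr.s_addr = htonl(INADDR_ANY);   /* '0.0.0.0' */
            addr4.sin_port = htons(port);

            if (bind(fd, (struct sockaddr *)&addr4, sizeof(addr4)) != 0)
            {
                close(fd);
                fd = -1;
            }
        }

        return fd;
    }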
When a session is being closed in a controlled manner, i.e. a COM_QUIT is
received from the client, it is possible to deduce from this fact that the
backend connections are very likely to be idle. This can be used as an
additional qualification that must be met by all connections before they
can be candidates for connection pooling.
This assumption will not hold with batched and asynchronous queries. In
this case it is possible that the COM_QUIT is received from the client
before even the first result from the backend is read. For this to work,
the protocol module would need to track the number and state of expected
responses.
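
Combined with the earlier requirement that all buffers are empty, the
qualification could look roughly like this; the writeq and delayq
field names and pool_candidate() are assumptions, dcb_readq_has() is
from the list above.

    #include <stdbool.h>
    #include <stddef.h>

    static bool pool_candidate(DCB *backend_dcb, bool client_sent_quit)
    {
        /* A controlled close (COM_QUIT from the client) suggests the backend
         * connection is idle; additionally require that nothing is buffered
         * before the dcb may be placed into the connection pool. */
        return client_sent_quit
            && backend_dcb->writeq == NULL
            && backend_dcb->delayq == NULL
            && !dcb_readq_has(backend_dcb);
    }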