More of the DCB initialization is now done in the DCB constructor. This
makes the creation of new DCBs simpler, but it could be simplified
further: by passing the file descriptor that the DCB should use into the
constructor, almost all of the initialization could be done inside it.
Also removed the unused path member variable.
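A rough sketch of the intended shape (the member names and the exact
constructor arguments here are illustrative, not the actual MaxScale API):

    // Sketch only: a DCB constructor that receives the file descriptor up
    // front, so callers no longer fill in fields after allocation.
    struct SERVICE;    // stand-in forward declaration

    class DCB
    {
    public:
        DCB(int fd, SERVICE* service)
            : m_fd(fd)
            , m_service(service)
        {
            // Poll registration, callback assignment and protocol data setup
            // could all live here instead of at every call site.
        }

    private:
        int      m_fd;
        SERVICE* m_service;
    };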
Moved the code into listener.cc as it's the only place where it is
used. Placed the DCB callback assignment into the DCB constructor as it
depended on static functions that were in dcb.cc.
Replaced the DCB with a single file descriptor that the listener listens
on and which is added to all of the workers. The Listener also extends
MXB_POLL_DATA, which allows it to handle epoll events.
Moved the code that creates the listening socket into listener.cc where it
belongs and did a minor cleanup of it.
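A sketch of the general shape, assuming a simplified stand-in for
MXB_POLL_DATA and hypothetical handler and member names:

    #include <cstdint>
    #include <sys/epoll.h>
    #include <sys/socket.h>

    struct MXB_WORKER;

    // Simplified stand-in for the real maxbase struct.
    struct MXB_POLL_DATA
    {
        uint32_t (*handler)(MXB_POLL_DATA* data, MXB_WORKER* worker, uint32_t events);
        MXB_WORKER* owner;
    };

    class Listener : public MXB_POLL_DATA
    {
    public:
        explicit Listener(int fd)
            : m_fd(fd)
        {
            // Every worker adds m_fd to its epoll instance with this object
            // as the event data, so all workers share one listening socket.
            this->handler = &Listener::poll_handler;
        }

    private:
        static uint32_t poll_handler(MXB_POLL_DATA* data, MXB_WORKER*, uint32_t events)
        {
            Listener* listener = static_cast<Listener*>(data);

            if (events & EPOLLIN)
            {
                // Accept in the core; the protocol module is later handed a
                // ready file descriptor and only does protocol-level work.
                int client_fd = accept(listener->m_fd, nullptr, nullptr);
                (void)client_fd;    // ... create the client DCB from this ...
            }

            return events;
        }

        int m_fd;
    };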
By doing the actual accepting of the new DCB in the core, the protocol
modules need only do the actual protocol-level work. This removes some of
the redundant code that was in the protocol modules.
By storing the reference in the DCB, the two-way dependency between the
listeners and services is severed. Now the services have no direct link to
the listeners, and after a listener is destroyed it will be freed once all
connections through it have closed.
Because a listener itself has a DCB that must point to a valid listener, a
self-reference is stored in the listener DCB. This is extremely confusing
and is only here to keep the code functional until the DCB part of the
listener can be factored out.
Allocating the DCB with new allows the use of C++ objects in the DCB
struct. The explicit poll field can also be replaced by inheriting from
MXB_POLL_DATA.
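As a rough illustration (the member names are hypothetical and the
MXB_POLL_DATA struct is the same simplified stand-in as in the Listener
sketch above):

    #include <cstdint>
    #include <string>

    // Simplified stand-in, as in the Listener sketch above.
    struct MXB_WORKER;
    struct MXB_POLL_DATA
    {
        uint32_t (*handler)(MXB_POLL_DATA*, MXB_WORKER*, uint32_t);
        MXB_WORKER* owner;
    };

    class DCB : public MXB_POLL_DATA
    {
    public:
        // Non-trivial C++ members are now safe, since new/delete run their
        // constructors and destructors; with calloc()/free() they were not.
        std::string m_remote;
        std::string m_user;

        // No explicit 'MXB_POLL_DATA poll' field: the poll data and its
        // handler come from the base class.
    };

    // Allocation site:
    //     DCB* dcb = new(std::nothrow) DCB;
    //     ...
    //     delete dcb;    // runs member destructors, unlike free()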
Modified the functions to use a listener instead of a DCB in the accepting
process. This removes some of the dependencies that the listeners have on
the DCB system.
By loading the entry points required by a DCB when the Listener is
created, the extra cost of finding the module is removed. It also
simplifies DCB creation by removing the possibility of module loading
failures at DCB creation time.
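Roughly along these lines (load_protocol_module and the member names are
hypothetical placeholders for the real module loader calls):

    #include <string>

    struct MXS_PROTOCOL;    // protocol module entry points (stand-in)

    // Hypothetical wrapper around the module loader.
    MXS_PROTOCOL* load_protocol_module(const std::string& name);

    class Listener
    {
    public:
        static Listener* create(const std::string& protocol_name)
        {
            MXS_PROTOCOL* api = load_protocol_module(protocol_name);

            if (api == nullptr)
            {
                return nullptr;    // fail once here, not per accepted DCB
            }

            return new Listener(api);
        }

    private:
        explicit Listener(MXS_PROTOCOL* api)
            : m_proto_api(api)
        {
        }

        // Shared by every DCB accepted through this listener, so DCB
        // creation never needs to look the module up again.
        MXS_PROTOCOL* m_proto_api;
    };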
The value would otherwise be assigned outside of the constructor and, in
some cases, not at all. Now all DCBs (apart from internal ones) have a
valid SERVICE pointer.
The SERV_LISTENER pointer should not be in the DCBs but in the
session. This way the listener is an attribute of a session instead of a
connection. If this is implemented, the authenticator data can be more
easily shared.
Replaced raw pointers in function parameters with const SListener
references. This removes the need to pass raw pointers as arguments and
all access is done via smart pointers.
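For example (the function name here is purely illustrative):

    #include <memory>
    #include <string>

    class Listener;
    using SListener = std::shared_ptr<Listener>;

    // Before: void configure_listener(Listener* listener, const std::string& param);
    // After: the caller keeps the listener alive via the shared_ptr and no
    // raw pointer ever changes hands.
    void configure_listener(const SListener& listener, const std::string& param);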
The iteration of listeners is now done via the global list of
listeners. This removes the need to have a service before a listener is
accessed which also reflects how the actual configuration is laid out. It
also guarantees that any results returned by the find functions will be
valid as long as the results are used.
The listeners are now stored in their own list which allows them to be a
component separate from the service. The next step is to remove the
listener iterator functionality and replace it with its STL counterpart.
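A minimal sketch of the arrangement, assuming hypothetical names (SListener
is the same shared_ptr alias as in the sketch above):

    #include <algorithm>
    #include <list>
    #include <memory>
    #include <mutex>
    #include <string>

    class Listener
    {
    public:
        explicit Listener(std::string name)
            : m_name(std::move(name))
        {
        }

        const std::string& name() const
        {
            return m_name;
        }

    private:
        std::string m_name;
    };

    using SListener = std::shared_ptr<Listener>;

    static std::mutex           listener_lock;
    static std::list<SListener> all_listeners;    // the global list

    // The returned shared_ptr keeps the listener alive even if it is removed
    // from the global list while the caller is still using it.
    SListener listener_find(const std::string& name)
    {
        std::lock_guard<std::mutex> guard(listener_lock);
        auto it = std::find_if(all_listeners.begin(), all_listeners.end(),
                               [&name](const SListener& l) {
                                   return l->name() == name;
                               });
        return it != all_listeners.end() ? *it : SListener();
    }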
The class is still mostly the same as the old C version but it now uses
std::string instead of char pointers. Changed configuration default values
so that the parameters passed to the listener allocation are always valid.
Some rearrangements to ensure that what should be private
can be kept private.
- WatchdogNotifier made a friend.
- WatchdogWorkaround defined in RoutingWorker and made a friend.
- mxs::WatchdogWorker defined with 'using'.
The systemd watchdog mechanism requires notifications at
regular intervals. If a synchronous operation of some kind
is performed by a worker, then those notifications will not
be generated.
This change provides each worker with a secondary thread that
can be used for triggering those notifications when the worker
itself is busy doing other stuff. The effect is that there will
be an additional thread for each worker, but most of the time
that thread will be idle.
So far only the mechanism is added; subsequent changes will take it
into use.
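A simplified sketch of the idea, with hypothetical names; the actual
notification call is omitted here and would go through the systemd
watchdog described below:

    #include <atomic>
    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <thread>

    class WatchdogNotifier
    {
    public:
        explicit WatchdogNotifier(std::chrono::seconds interval)
            : m_interval(interval)
            , m_thread(&WatchdogNotifier::run, this)
        {
        }

        ~WatchdogNotifier()
        {
            {
                std::lock_guard<std::mutex> guard(m_lock);
                m_running = false;
            }
            m_cond.notify_one();
            m_thread.join();
        }

        // Called by the worker around a potentially long synchronous call.
        void arm()    { m_armed = true; }
        void disarm() { m_armed = false; }

    private:
        void run()
        {
            std::unique_lock<std::mutex> guard(m_lock);

            while (m_running)
            {
                m_cond.wait_for(guard, m_interval);

                if (m_running && m_armed)
                {
                    // Tick the watchdog on behalf of the blocked worker. In
                    // the real code this ends up in a systemd notification;
                    // left empty in this sketch.
                }
            }
        }

        std::chrono::seconds    m_interval;
        std::atomic<bool>       m_armed{false};
        bool                    m_running{true};
        std::mutex              m_lock;
        std::condition_variable m_cond;
        std::thread             m_thread;    // mostly idle secondary thread
    };

The worker would arm() the notifier before entering a blocking operation
and disarm() it once the operation has completed.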
If the server where a query is being executed is shutting down,
readwritesplit should treat it as an error to make retrying of the query
possible.
By treating server shutdowns as network errors, the same code path that is
used for actual network errors can be taken. This removes the need for any
extra retrying logic for this particular case.
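Conceptually something like this, with a hypothetical helper name (1053 is
the standard ER_SERVER_SHUTDOWN error code):

    #include <cstdint>

    constexpr uint16_t ER_SERVER_SHUTDOWN = 1053;

    // A server that is shutting down is classified like a broken connection,
    // so the same retry path handles both cases.
    bool is_retryable_failure(uint16_t error_code, bool network_error)
    {
        return network_error || error_code == ER_SERVER_SHUTDOWN;
    }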
The transaction replay could get mixed up with new queries if the client
managed to perform one while the delayed routing was taking place. A
proper way to solve this would be to cork the client DCB until the
transaction is fully replayed. As that change would be more complex than
simply labeling the queries that are being retried, the corking
implementation is left for later when a more complete solution can be
designed.
This commit also adds some of the missing info logging for the transaction
replaying which makes analysis of failures easier.
Systemd watchdog notification is sent at a little more than 2/3 of the
systemd-configured time. In the service config (maxscale.service)
add e.g. WatchdogSec=30s to set and enable the watchdog.
For building: install libsystemd-dev.
The next commit will modify the cmake configuration and code to
conditionally compile the new code based on the existence of libsystemd-dev.
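A minimal standalone illustration of the systemd calls involved (the real
code drives this from MaxScale's own main loop rather than a dedicated
loop like the one below):

    #include <systemd/sd-daemon.h>    // from libsystemd-dev, link with -lsystemd
    #include <chrono>
    #include <cstdint>
    #include <thread>

    int main()
    {
        uint64_t usec = 0;

        // Returns > 0 when WatchdogSec is set for the service.
        if (sd_watchdog_enabled(0, &usec) > 0)
        {
            // Notify at roughly 2/3 of the configured timeout.
            auto interval = std::chrono::microseconds(usec * 2 / 3);

            for (;;)
            {
                sd_notify(0, "WATCHDOG=1");
                std::this_thread::sleep_for(interval);
            }
        }

        return 0;
    }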
By exposing a (currently undocumented) debug endpoint that waits for one
monitor interval to pass, the monitor waiting functionality becomes much
easier to reuse. With it, when MaxScale is started by the test framework,
it knows that at least one monitor interval will have passed for all
monitors and that the system is ready to accept queries.
This will simply cause a task to be posted to each worker.
If the workers are running normally, the task will reach the
workers, the associated semaphore will be posted, and the REST-API
call will return. If any worker is not running normally, the
task will not be processed and the REST-API call will hang.
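The shape of the check, sketched with hypothetical names;
broadcast_to_workers stands in for however the tasks are actually posted:

    #include <condition_variable>
    #include <cstddef>
    #include <functional>
    #include <mutex>

    class Semaphore
    {
    public:
        void post()
        {
            std::lock_guard<std::mutex> guard(m_lock);
            ++m_count;
            m_cond.notify_all();
        }

        void wait_n(size_t n)
        {
            std::unique_lock<std::mutex> guard(m_lock);
            m_cond.wait(guard, [&]() { return m_count >= n; });
        }

    private:
        size_t                  m_count = 0;
        std::mutex              m_lock;
        std::condition_variable m_cond;
    };

    // Assumed to post the task to every worker's event loop and return the
    // number of workers it was posted to.
    size_t broadcast_to_workers(const std::function<void()>& task);

    // Blocks until every worker has executed the task; hangs if any worker's
    // event loop is stuck, which is what the REST-API caller wants to detect.
    void wait_for_workers()
    {
        Semaphore sem;

        size_t n = broadcast_to_workers([&sem]() {
            sem.post();    // runs on the worker thread
        });

        sem.wait_n(n);
    }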