Modified the functions to use a listener instead of a DCB in the accepting
process. This removes some of the dependencies that the listeners have on
the DCB system.
By loading the entry points required by a DCB when the Listener is
created, the extra cost of finding the module is removed. It also
simplifies DCB creation by eliminating the possibility of module loading
failures at DCB creation time.
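A minimal sketch of the idea, with stand-in names for the module loader
and entry point types (none of these are the actual API):

    #include <string>

    // Stand-ins for the real module loader and entry point types.
    struct ProtocolApi;
    struct AuthenticatorApi;
    ProtocolApi*      load_protocol_module(const std::string& name);
    AuthenticatorApi* load_authenticator_module(const std::string& name);

    class Listener
    {
    public:
        Listener(const std::string& protocol, const std::string& authenticator)
            : m_proto_api(load_protocol_module(protocol))           // can fail here...
            , m_auth_api(load_authenticator_module(authenticator))  // ...but never later
        {
        }

    private:
        ProtocolApi*      m_proto_api;  // handed to each DCB on accept
        AuthenticatorApi* m_auth_api;
    };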
The SERVICE pointer is now assigned when the DCB is created; it would
otherwise be assigned outside of DCB creation and in some cases not at
all. Now all DCBs (apart from internal ones) have a valid SERVICE
pointer.
The SERV_LISTENER pointer should not be in the DCBs but in the
session. This way the listener is an attribute of a session instead of a
connection. If this is implemented, the authenticator data can be more
easily shared.
Replaced raw pointers in function parameters with const SListener
references. This removes the need to pass raw pointers as arguments;
all access is done via smart pointers.
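For illustration, assuming SListener is a shared-pointer alias (the
function name below is made up):

    #include <memory>

    class Listener;
    using SListener = std::shared_ptr<Listener>;

    // Before: void listener_do_work(Listener* listener);
    // After: the caller's shared_ptr is borrowed by const reference, so no
    // raw pointer changes hands and the lifetime stays managed.
    void listener_do_work(const SListener& listener);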
The iteration of listeners is now done via the global list of
listeners. This removes the need to have a service before a listener is
accessed which also reflects how the actual configuration is laid out. It
also guarantees that any results returned by the find functions will be
valid as long as the results are used.
The listeners are now stored in their own list which allows them to be a
component separate from the service. The next step is to remove the
listener iterator functionality and replace it with its STL counterpart.
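A sketch of the global list and a find function (names are illustrative
and reuse the SListener alias from above):

    #include <list>
    #include <mutex>
    #include <string>

    static std::mutex listener_lock;
    static std::list<SListener> all_listeners;   // the global list

    SListener listener_find(const std::string& name)
    {
        std::lock_guard<std::mutex> guard(listener_lock);

        for (const auto& listener : all_listeners)
        {
            if (listener->name() == name)
            {
                // Shared ownership keeps the result valid for as long
                // as the caller holds on to it.
                return listener;
            }
        }

        return SListener();
    }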
The class is still mostly the same as the old C version but it now uses
std::string instead of char pointers. Changed configuration default values
so that the parameters passed to the listener allocation are always valid.
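For example (simplified):

    #include <string>

    class Listener
    {
    private:
        // Previously raw char pointers in the C struct.
        std::string m_name;       // listener name
        std::string m_protocol;   // protocol module
        std::string m_address;    // address to listen on
    };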
Some rearrangements to ensure that what should be private
can be kept private; a simplified sketch follows the list.
- WatchdogNotifier made a friend.
- WatchdogWorkaround defined in RoutingWorker and made a friend.
- mxs::WatchdogWorker defined with 'using'.
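A simplified sketch of the friend arrangement (the base class is a
stand-in):

    class Worker {};   // stand-in for the real worker base class

    class RoutingWorker : public Worker
    {
    public:
        // Nested helper, defined inside RoutingWorker and befriended so
        // that it can manipulate private worker state.
        class WatchdogWorkaround;
        friend class WatchdogWorkaround;

    private:
        // The notifier also needs access to private members.
        friend class WatchdogNotifier;
    };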
The systemd watchdog mechanism requires notifications at
regular intervals. If a synchronous operation of some kind
is performed by a worker, then those notifications will not
be generated.
This change provides each worker with a secondary thread that
can be used for triggering those notifications when the worker
itself is busy with other work. The effect is that there will
be an additional thread for each worker, but most of the time
that thread will be idle.
So far this adds only the mechanism; subsequent changes will take
it into use.
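A condensed sketch of the mechanism (class and helper names are
illustrative, not the actual implementation):

    #include <atomic>
    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <thread>

    // Stand-in for the call that actually triggers the notification.
    static void notify_watchdog()
    {
    }

    class WorkerWatchdog
    {
    public:
        WorkerWatchdog()
            : m_thread(&WorkerWatchdog::run, this)
        {
        }

        ~WorkerWatchdog()
        {
            {
                std::lock_guard<std::mutex> guard(m_lock);
                m_running = false;
            }
            m_cv.notify_one();
            m_thread.join();
        }

        // Called by the worker around long synchronous operations.
        void arm()    { m_armed = true; }
        void disarm() { m_armed = false; }

    private:
        void run()
        {
            std::unique_lock<std::mutex> lock(m_lock);

            while (m_running)
            {
                // Mostly idle; wakes up periodically.
                m_cv.wait_for(lock, std::chrono::seconds(5));

                if (m_running && m_armed)
                {
                    notify_watchdog();   // on behalf of the busy worker
                }
            }
        }

        std::mutex              m_lock;
        std::condition_variable m_cv;
        std::atomic<bool>       m_armed{false};
        bool                    m_running = true;
        std::thread             m_thread;   // declared last: started last
    };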
If the server where a query is being executed is shutting down,
readwritesplit should treat it as an error to make retrying of the query
possible.
By treating server shutdowns as network errors, the same code path that is
used for actual network errors can be taken. This removes the need for any
extra retrying logic for this particular case.
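Sketched in the reply path (the helper names are assumptions; 1053 is
MySQL's ER_SERVER_SHUTDOWN error code):

    #include <cstdint>

    struct Backend;
    void handle_network_error(Backend* backend);                 // assumed helper
    void handle_server_error(Backend* backend, uint16_t code);   // assumed helper

    void handle_reply_error(Backend* backend, uint16_t error_code)
    {
        const uint16_t ER_SERVER_SHUTDOWN = 1053;

        if (error_code == ER_SERVER_SHUTDOWN)
        {
            // Same path as a lost connection, so the normal query retry
            // logic applies.
            handle_network_error(backend);
        }
        else
        {
            handle_server_error(backend, error_code);
        }
    }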
The transaction replay could get mixed up with new queries if the client
managed to perform one while the delayed routing was taking place. A
proper way to solve this would be to cork the client DCB until the
transaction is fully replayed. As that would be relatively complex
compared to simply labeling the queries that are being retried, the
corking implementation is left for later, when a more complete solution
can be designed.
This commit also adds some of the missing info logging for the transaction
replaying which makes analysis of failures easier.
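A sketch of the labeling (the type flag is a hypothetical name):
statements re-routed as part of the replay are tagged, so a query
without the tag is known to be a genuinely new one from the client.

    // When re-routing a stored statement during replay, mark the copy so
    // that it is not confused with new client queries.
    GWBUF* copy = gwbuf_clone(stored_stmt);
    gwbuf_set_type(copy, GWBUF_TYPE_REPLAYED);   // hypothetical type flag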
Systemd watchdog notifications are now sent at a little more than 2/3
of the systemd-configured time. In the service config (maxscale.service)
add e.g. WatchdogSec=30s to set and enable the watchdog.
For building: install libsystemd-dev.
The next commit will modify the cmake configuration and code to
conditionally compile the new code based on the presence of libsystemd-dev.
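The libsystemd side looks roughly like this (sd_watchdog_enabled and
sd_notify are the actual libsystemd calls; the scheduling around them
is omitted):

    #include <stdint.h>
    #include <systemd/sd-daemon.h>

    void setup_watchdog_notifications()
    {
        uint64_t usec = 0;

        // Returns > 0 when WatchdogSec is set in the unit file; 'usec'
        // then holds the configured timeout.
        if (sd_watchdog_enabled(0, &usec) > 0)
        {
            uint64_t interval = usec * 2 / 3;   // notify within the limit
            (void)interval;   // used to schedule the periodic call below

            // Sent periodically, e.g. from the main loop:
            sd_notify(0, "WATCHDOG=1");
        }
    }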
By exposing a (currently undocumented) debug endpoint that lets one
monitor interval pass, we make the monitor waiting functionality much
easier to reuse. With it, when MaxScale is started by the test
framework, it knows that at least one monitor interval will have passed
for all monitors and that the system is ready to accept queries.
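The exact path is not spelled out here; as an illustration, the test
framework could issue a request along these lines and proceed once it
returns:

    POST /v1/maxscale/debug/monitor_wait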
This will simply cause a task to be posted to each worker.
If the workers are running normally, the task will reach the
workers, the associated semaphore will be posted, and the REST-API
call will return. If any worker is not running normally, the
task will not be processed and the REST-API call will hang.
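In outline (the semaphore and task helpers are illustrative):

    // Post a no-op task to every worker and wait until each one has
    // executed it; a stuck worker means the wait never completes.
    mxs::Semaphore sem;
    size_t n = 0;

    for (Worker* worker : all_workers())
    {
        // The task posts 'sem' when the worker processes it.
        if (worker->post(make_ping_task(&sem)))
        {
            ++n;
        }
    }

    sem.wait_n(n);   // the REST-API call returns only after this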
As the router is the only one that knows what backends a particular
statement has been sent to, it is the responsibility of the router
to keep the session bookkeeping up to date. If it doesn't, we will
know what statements a session has received (provided at least some
component in the routing chain has RCAP_TYPE_STMT_INPUT capability),
but not how long their processing took. Currently only readwritesplit
does that.
All queries are stored, not just COM_QUERY, as that makes the
overall bookkeeping simpler; at clientReply() time we do not need to
know whether or not to record the information, we can simply always do
it. When session information is queried for, we report as much
information as we have available.
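Sketched at the router session level (the bookkeeping helper names are
assumptions):

    int32_t RWSplitSession::routeQuery(GWBUF* querybuf)
    {
        // Record the statement and its start time in the session.
        session_begin_stmt(m_session, querybuf);   // assumed helper

        // ... choose a backend and route the statement ...
        return 1;
    }

    void RWSplitSession::clientReply(GWBUF* reply, DCB* backend_dcb)
    {
        // The reply completes the entry, so the processing time is
        // known as well.
        session_end_stmt(m_session, reply);        // assumed helper

        // ... forward the reply to the client ...
    }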
This commit introduces the plumbing support for obtaining
classification information of a statement using the REST-API.
It introduces a URL like
/v1/maxscale/query_classifier/classify?sql=SELECT+1
that in the response will return a JSON object with the
information. Subsequent commits will provide the actual
information.