Storing all the runtime errors makes it possible to return all of them
via the REST API. MaxAdmin will still only show the latest error but
MaxCtrl will now show all errors if more than one error occurs.
Added an overload to execute_concurrently that takes an std::function as a
parameter and added a const version of operator* for rworker_local. Also
removed the std::move of the return value in rworker_local::values as it
can prevent RVO from taking place.
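As an illustration of the RVO point (the container types below are arbitrary
stand-ins, not the actual rworker_local internals), returning a named local
directly allows the compiler to construct it in the caller's storage, while
wrapping it in std::move forces a move construction:

    #include <map>
    #include <vector>

    std::vector<int> values_with_move(const std::map<int, int>& data)
    {
        std::vector<int> rval;
        for (const auto& kv : data)
        {
            rval.push_back(kv.second);
        }
        return std::move(rval);    // forces a move, NRVO is inhibited
    }

    std::vector<int> values_with_nrvo(const std::map<int, int>& data)
    {
        std::vector<int> rval;
        for (const auto& kv : data)
        {
            rval.push_back(kv.second);
        }
        return rval;               // rval can be constructed directly in the caller's storage
    }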
If the startup of the listeners requires communication with all of the
workers, the workers must be up and running for that to happen.
Because the main thread is itself a worker thread, the
initialization code is not entirely straightforward. By queuing an event to
the main worker, the startup of all listeners happens only when the system
is fully operational and all workers are fully functional.
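A rough sketch of the idea; Worker, main_worker() and start_all_listeners()
are stand-ins for the actual types and functions:

    #include <functional>

    class Worker
    {
    public:
        // Queues a function for execution on this worker's event loop.
        virtual void post(std::function<void()> func) = 0;
        virtual ~Worker() = default;
    };

    Worker* main_worker();        // assumed accessor for the main routing worker
    void start_all_listeners();   // assumed listener startup routine

    void queue_listener_startup()
    {
        // By the time the queued event is processed, every worker event loop
        // is up and running, so the listener startup can safely talk to them.
        main_worker()->post([]() {
            start_all_listeners();
        });
    }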
The service initialization code was also flawed in the sense that it would
cause a deadlock if any of the threads had to check the user
permissions. This is mainly a problem for the authenticator modules, but
the benefits of per-service pre-loading of users are most likely
marginal. In theory startup will be faster as each thread now queries
the users in parallel.
Allocating DCB with new allows the use of C++ objects in the DCB
struct. Also the explicit poll field can be replaced by inheriting from
MXB_POLL_DATA.
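Roughly the following shape, where the members and names are invented for
illustration rather than taken from the real DCB definition:

    #include <string>

    struct MXB_POLL_DATA_SKETCH                       // stand-in for MXB_POLL_DATA
    {
        void* handler = nullptr;                      // poll handler, owning worker, etc.
    };

    struct DCB_SKETCH : public MXB_POLL_DATA_SKETCH   // poll data inherited, not an explicit field
    {
        std::string remote;                           // C++ members are now possible
    };

    DCB_SKETCH* dcb_alloc()
    {
        // new runs the constructors of the C++ members, which a plain
        // malloc/calloc-style allocation would not do.
        return new DCB_SKETCH;
    }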
Some rearrangements to ensure that what should be private
can be kept private.
- WatchdogNotifier made a friend.
- WatchdogWorkaround defined in RoutingWorker and made a friend.
- mxs::WatchdogWorker defined with 'using'.
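A simplified sketch of the arrangement; the class bodies and the mark_alive()
member are invented for illustration:

    namespace maxscale
    {

    class RoutingWorker
    {
    public:
        class WatchdogWorkaround;            // defined inside RoutingWorker
        friend class WatchdogWorkaround;     // may access the private parts
        friend class WatchdogNotifier;       // so may the notifier

    private:
        void mark_alive() {}                 // hypothetical private member used by the friends
    };

    class RoutingWorker::WatchdogWorkaround
    {
    public:
        explicit WatchdogWorkaround(RoutingWorker* pWorker)
            : m_pWorker(pWorker)
        {
            m_pWorker->mark_alive();         // allowed thanks to the friendship
        }

    private:
        RoutingWorker* m_pWorker;
    };

    // The nested class is then exposed in the namespace with a using declaration.
    using WatchdogWorkaround = RoutingWorker::WatchdogWorkaround;

    }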
The systemd watchdog mechanism requires notifications at
regular intervals. If a synchronous operation of some kind
is performed by a worker, then those notifications will not
be generated.
This change provides each worker with a secondary thread that
can be used for triggering those notifications when the worker
itself is busy with other work. The effect is that there will
be an additional thread for each worker, but most of the time
that thread will be idle.
So far this adds only the mechanism; subsequent changes will take it
into use.
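A minimal sketch of such a secondary thread (not the actual implementation,
which ties into the worker and systemd):

    #include <condition_variable>
    #include <mutex>
    #include <thread>

    class WatchdogNotifierSketch
    {
    public:
        WatchdogNotifierSketch()
            : m_thread([this]() { run(); })
        {
        }

        ~WatchdogNotifierSketch()
        {
            {
                std::lock_guard<std::mutex> guard(m_lock);
                m_stop = true;
            }
            m_cond.notify_one();
            m_thread.join();
        }

        // Called on behalf of a worker that is about to perform a long
        // synchronous operation; the secondary thread then emits the
        // notification so the worker itself does not have to.
        void trigger()
        {
            {
                std::lock_guard<std::mutex> guard(m_lock);
                m_notify = true;
            }
            m_cond.notify_one();
        }

    private:
        void run()
        {
            std::unique_lock<std::mutex> guard(m_lock);
            while (!m_stop)
            {
                m_cond.wait(guard, [this]() { return m_notify || m_stop; });
                if (m_notify)
                {
                    m_notify = false;
                    // The systemd notification (sd_notify(0, "WATCHDOG=1"))
                    // would be sent from here.
                }
            }
        }

        std::mutex              m_lock;
        std::condition_variable m_cond;
        bool                    m_notify = false;
        bool                    m_stop = false;
        std::thread             m_thread;   // idle most of the time
    };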
Exclude systemd usage if the library is not installed.
Only what is necessary is excluded. This keeps the object size the
same and still compiles most of the code.
The systemd watchdog notification is sent at a little more than 2/3 of the
systemd-configured time. In the service config (maxscale.service)
add e.g. WatchdogSec=30s to set and enable the watchdog.
For building: install libsystemd-dev.
The next commit will modify the cmake configuration and the code to
conditionally compile the new code based on the presence of libsystemd-dev.
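For reference, sd_watchdog_enabled() and sd_notify() are the libsystemd calls
involved; the HAVE_SYSTEMD guard and the helper functions below are only an
illustration of the 2/3 timing:

    #ifdef HAVE_SYSTEMD
    #include <systemd/sd-daemon.h>
    #include <cstdint>

    // Returns the interval (in microseconds) at which "WATCHDOG=1" should be
    // sent, or 0 if the watchdog is not enabled (no WatchdogSec= in the unit).
    uint64_t watchdog_interval()
    {
        uint64_t usec = 0;
        if (sd_watchdog_enabled(0, &usec) > 0)
        {
            // Notify at roughly 2/3 of the configured WatchdogSec so that one
            // delayed notification does not immediately trip the watchdog.
            return usec * 2 / 3;
        }
        return 0;
    }

    void notify_watchdog()
    {
        sd_notify(0, "WATCHDOG=1");
    }
    #endif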
This will simply cause a task to be posted to each worker.
If the workers are running normally, the task will reach the
workers, the associated semaphore will be posted and the REST-API
call will return. If any worker is not running normally, the
task will not be processed and the REST-API call will hang.
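In outline the check looks something like the following; the Worker type and
the workers() accessor are stand-ins for the actual API:

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <vector>

    class Worker
    {
    public:
        // Queues a function for execution on this worker's event loop.
        virtual void post(std::function<void()> func) = 0;
        virtual ~Worker() = default;
    };

    std::vector<Worker*>& workers();   // assumed accessor for all routing workers

    void wait_for_all_workers()
    {
        std::mutex lock;
        std::condition_variable cond;
        size_t pending = workers().size();

        for (Worker* pWorker : workers())
        {
            pWorker->post([&]() {
                std::lock_guard<std::mutex> guard(lock);
                if (--pending == 0)
                {
                    cond.notify_one();
                }
            });
        }

        // Returns once every worker has run the task; if any worker is stuck,
        // this blocks indefinitely, just as the REST-API call does.
        std::unique_lock<std::mutex> guard(lock);
        cond.wait(guard, [&]() { return pending == 0; });
    }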
The removed statistics variables have no meaning anymore and
were not updated.
Decided to simply drop the variables from the JSON output. It
gets far too rigid if fields of objects cannot be changed without
bumping the REST-API version.
See the script directory for the method. The script to run in the top level
MaxScale directory is called maxscale-uncrustify.sh, which uses
another script, list-src, from the same directory (so you need to set
your PATH). The uncrustify version was 0.66.
Given that worker.hh was public, it made sense to make routingworker.hh
public as well. This removes the need to include private headers in
modules and allows C++ constructs to be used in C++ code when previously
only the C API was available.
This is the first step in some cleanup of the Worker interface.
The execution mode must now be explicitly specified, but that is
just a temporary step. Further down the road, _posting_ will
*always* mean via the message loop while _executing_ will optionally
and by default mean direct execution if the calling thread is that
of the worker.
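The intended distinction looks roughly like this; the enum and method names
are illustrative, not the final interface:

    #include <functional>
    #include <thread>

    class WorkerSketch
    {
    public:
        enum class Mode
        {
            QUEUED,   // always go through the message loop
            AUTO      // run directly if already on the worker's own thread
        };

        // "post": always delivered via the message loop.
        void post(std::function<void()> func)
        {
            enqueue(std::move(func));
        }

        // "execute": by default runs immediately when called from the worker's
        // own thread, otherwise falls back to the message loop.
        void execute(std::function<void()> func, Mode mode = Mode::AUTO)
        {
            if (mode == Mode::AUTO && std::this_thread::get_id() == m_thread_id)
            {
                func();
            }
            else
            {
                enqueue(std::move(func));
            }
        }

    private:
        void enqueue(std::function<void()> func)
        {
            // Would push func onto the worker's event queue; omitted in this sketch.
            (void)func;
        }

        std::thread::id m_thread_id;   // id of the thread running this worker
    };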
In principle it would be better if the qc information were
obtained via a specific query_classifier resource. However,
there are multiple problems with that (e.g. the qc has no way
of safely accessing information of another thread) and hence
the worker-specific qc cache statistics are reported as part of
the worker statistics.
Replaced the previous RESULTSET with the new implementation. As the new
ResultSet doesn't have a JSON streaming capability, the MaxInfo JSON
interface has been removed. This should not be a big problem as the REST
API offers the same information in a more secure and structured way.
Data can now be stored in the thread-local storage of the worker. By acquiring
a unique handle from the worker, a module can store a thread-local
value.
This functionality will be used to store configurations that are sometimes
updated at runtime but are largely read-only. Because shared data is avoided
altogether, performance is not affected. The only synchronization that is
done is on update.
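Conceptually the usage looks like this; create_local_key(), set_local_data()
and get_local_data() are made-up names, not the actual functions:

    #include <cstdint>

    using LocalKey = uint64_t;

    LocalKey create_local_key();                         // assumed: returns a unique handle
    void     set_local_data(LocalKey key, void* data);   // assumed: stores data on the current worker
    void*    get_local_data(LocalKey key);               // assumed: reads it back on the same worker

    struct MyConfig
    {
        int timeout = 10;
    };

    // Each worker keeps its own copy of the configuration, so reads never need
    // locking; only an update has to be pushed to every worker.
    void store_config_on_this_worker(LocalKey key)
    {
        set_local_data(key, new MyConfig);
    }

    const MyConfig* read_config_on_this_worker(LocalKey key)
    {
        return static_cast<const MyConfig*>(get_local_data(key));
    }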
Also added a helper function for broadcasting tasks to all routing
workers. With the old mxs_rworker_broadcast_message function, a broadcast
function call was always queued for execution. The new
mxs_rworker_broadcast function immediately executes the task on the local
worker and queues it for execution on the other routing workers.
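The difference in behaviour, sketched with stand-in types (Worker,
current_worker() and all_workers() are not the actual API):

    #include <functional>
    #include <vector>

    class Worker
    {
    public:
        virtual void post(std::function<void()> func) = 0;   // queue on the worker's message loop
        virtual ~Worker() = default;
    };

    Worker*               current_worker();   // assumed: the worker of the calling thread
    std::vector<Worker*>& all_workers();      // assumed: every routing worker

    // Old behaviour: the call is queued everywhere, including on the local worker.
    void broadcast_queued(std::function<void()> func)
    {
        for (Worker* pWorker : all_workers())
        {
            pWorker->post(func);
        }
    }

    // New behaviour: execute immediately on the local worker, queue on the rest.
    void broadcast_direct(std::function<void()> func)
    {
        for (Worker* pWorker : all_workers())
        {
            if (pWorker == current_worker())
            {
                func();
            }
            else
            {
                pWorker->post(func);
            }
        }
    }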
The evq_length field held the number of descriptors returned by the
last epoll_wait() call. As such it is highly transient and not
particularly meaningful.
That has now been removed and instead the average number of
returned descriptors is maintained. That information changes slowly
and thus carries some meaning.
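One possible way of maintaining such an average (the exact formula in MaxScale
may differ) is an exponentially decaying average updated after every
epoll_wait() call:

    class EventQueueStats
    {
    public:
        // Called with the number of descriptors epoll_wait() just returned.
        void update(int n_returned)
        {
            // Weight each new sample at 1/8; old samples decay away gradually,
            // so the value changes slowly and reflects the typical load rather
            // than the momentary result of the latest call.
            m_avg_evq_length += (n_returned - m_avg_evq_length) / 8.0;
        }

        double average_events() const
        {
            return m_avg_evq_length;
        }

    private:
        double m_avg_evq_length = 0;
    };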