A new class mxs::Worker will be introduced, and mxs::RoutingWorker
will inherit from it. mxs::Worker will essentially be just a
thread with a message loop.
Once it is available, all current non-worker threads (except the one
implicitly created by microhttpd) can be created by inheriting
from it; in practice that means the housekeeping thread, all
monitor threads and possibly the logging thread.
The benefit of this arrangement is that there will then be a general
mechanism for cross-thread communication without having to use any
shared data structures.
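As a rough sketch of the idea only (the Worker class and its post() method
below are illustrative placeholders, not the actual mxs::Worker API): a worker
is a thread running a message loop, and other threads interact with it by
posting messages instead of accessing shared data structures.

```
#include <condition_variable>
#include <deque>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>

class Worker
{
public:
    using Message = std::function<void()>;

    Worker()
        : m_thread(&Worker::run, this)
    {
    }

    ~Worker()
    {
        post([this] { m_running = false; });   // Ask the message loop to stop.
        m_thread.join();
    }

    // May be called from any thread; the message is executed on the worker thread.
    void post(Message message)
    {
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_messages.push_back(std::move(message));
        }
        m_wakeup.notify_one();
    }

private:
    void run()
    {
        while (m_running)
        {
            std::unique_lock<std::mutex> guard(m_lock);
            m_wakeup.wait(guard, [this] { return !m_messages.empty(); });

            Message message = std::move(m_messages.front());
            m_messages.pop_front();
            guard.unlock();

            message();   // Handled on the worker thread; no shared state is touched.
        }
    }

    std::deque<Message>     m_messages;
    std::mutex              m_lock;
    std::condition_variable m_wakeup;
    bool                    m_running {true};
    std::thread             m_thread;
};

int main()
{
    Worker worker;
    worker.post([] { std::cout << "executed on the worker thread\n"; });
}
```

A housekeeping or monitor thread built on such a class would then receive its
work, including shutdown requests, as messages rather than by polling shared
flags.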
With a granularity of 1 second, the load will, from a human
perspective, reflect the current situation. It also means
that the maxadmin output shows "natural" steps: 1s, 1m and 1h.
By definition, the load is calculated using the following formula:
L = 100 * ((T - t) / T)
where T is a time period and t is the part of that period that the worker
spends in epoll_wait(). So, if there is so much work that epoll_wait()
always returns immediately, the load is 100, and if the thread
spends the entire period in epoll_wait(), the load is 0.
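For example, if during a 10-second period the worker spends 7.5 seconds in
epoll_wait(), the load is 100 * ((10 - 7.5) / 10) = 25. A minimal helper
expressing the formula (the function name and the millisecond units are
assumptions made for the sake of the example):

```
#include <cstdint>

// L = 100 * ((T - t) / T), where T is the length of the period and
// t the part of it spent in epoll_wait(). Both are given in milliseconds.
int calculate_load(int64_t period_ms, int64_t waited_ms)
{
    return static_cast<int>(100 * (period_ms - waited_ms) / period_ms);
}
```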
The basic idea is that the timeout given to epoll_wait() is adjusted
so that epoll_wait() always returns roughly at 10-second intervals.
By noting when we are about to enter epoll_wait() and when we
return from it, we have all the information we need for calculating the
load.
In practice we will not be able to calculate the load at exact
10-second boundaries, but it will be pretty close, and the load
is always calculated using the true length of the period.
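A sketch of the bookkeeping, assuming a plain epoll_wait() loop and
std::chrono time keeping (none of the names below are the actual MaxScale
ones, and error handling and event dispatching are omitted): the timeout is
capped at the time remaining in the current period, the time spent inside
epoll_wait() is accumulated, and the load is computed from the true length of
the period once the boundary has been passed.

```
#include <sys/epoll.h>
#include <chrono>

using Clock = std::chrono::steady_clock;

void poll_loop(int epoll_fd)
{
    const auto GRANULARITY = std::chrono::seconds(10);

    auto period_start = Clock::now();
    auto period_end = period_start + GRANULARITY;
    Clock::duration waited {0};

    epoll_event events[1024];

    while (true)
    {
        auto now = Clock::now();

        // Wait at most until the end of the current measurement period.
        auto left = std::chrono::duration_cast<std::chrono::milliseconds>(period_end - now);
        int timeout_ms = left.count() > 0 ? static_cast<int>(left.count()) : 0;

        auto before = Clock::now();
        int n = epoll_wait(epoll_fd, events, 1024, timeout_ms);
        auto after = Clock::now();

        waited += after - before;   // Time spent idle in epoll_wait().

        if (after >= period_end)
        {
            // Use the true length of the period; not exactly 10 s, but close.
            auto period = after - period_start;
            int load = static_cast<int>(100 * (period - waited) / period);
            (void)load;   // Record the 10-second load here.

            period_start = after;
            period_end = period_start + GRANULARITY;
            waited = Clock::duration {0};
        }

        for (int i = 0; i < n; ++i)
        {
            // Handle events[i] ...
        }
    }
}
```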
We then calculate the 1-minute load by averaging the load of 6
consecutive 10-second periods, and the 1-hour load by averaging
60 consecutive 1-minute loads.
So, while the 10-second load represents the load of the most recently
measured 10-second period (and not the load of the most recent 10
seconds), the 1-minute load and the 1-hour load represent the load of
the most recent minute and hour respectively.
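One way the averaging could be arranged (illustrative only, not the actual
MaxScale data structures): a 6-slot buffer of 10-second loads feeds the
1-minute load, and a 60-slot buffer of 1-minute loads feeds the 1-hour load.

```
#include <array>
#include <cstddef>
#include <numeric>

class LoadAverages
{
public:
    // Called once per completed 10-second period.
    void add_ten_second_load(int load)
    {
        m_load_10s = load;
        m_minute[m_minute_pos++] = load;

        if (m_minute_pos == m_minute.size())
        {
            // Average of 6 consecutive 10-second loads == the 1-minute load.
            m_load_1m = std::accumulate(m_minute.begin(), m_minute.end(), 0)
                / static_cast<int>(m_minute.size());
            m_minute_pos = 0;

            m_hour[m_hour_pos++] = m_load_1m;

            if (m_hour_pos == m_hour.size())
            {
                // Average of 60 consecutive 1-minute loads == the 1-hour load.
                m_load_1h = std::accumulate(m_hour.begin(), m_hour.end(), 0)
                    / static_cast<int>(m_hour.size());
                m_hour_pos = 0;
            }
        }
    }

    int load_10s() const { return m_load_10s; }
    int load_1m() const  { return m_load_1m; }
    int load_1h() const  { return m_load_1h; }

private:
    std::array<int, 6>  m_minute {};
    std::array<int, 60> m_hour {};
    std::size_t m_minute_pos {0};
    std::size_t m_hour_pos {0};
    int m_load_10s {0};
    int m_load_1m {0};
    int m_load_1h {0};
};
```

With this arrangement the 1-minute and 1-hour values change only when a full
minute or hour has been measured, which matches the description above.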
Pre-loading users for all threads at startup significantly reduces the
chance of failures caused by the lazy initialization of the user database
performed by the authenticators.
If users are not loaded at startup and the connection limit for all
servers is reached, authentication in MaxScale will fail not because of
too many connections but because of the lack of authentication data. This
causes repeated reloading of users, which floods the log with messages
and puts unnecessary stress on the cluster itself.
The internal header directory conflicted with in-source builds, causing a
build failure. This was fixed by renaming the internal header directory to
something other than maxscale.
The renaming revealed a few problems in a couple of source files that
appeared to include internal headers when the headers were in fact public
headers.
Fixed maxctrl in-source builds by making the copying of the sources
optional.