The configuration system that modules use allows the SSL parameter
validation to be simplified. It should also provide more consistent error
messages for similar types of errors.
The SSL_LISTENER initialization is now done in one step. There was no good
reason to do it in two separate steps for listeners but in one step for
servers.
The `ssl` parameter now also accepts boolean values. As the parameter
behaves like a boolean and looks like a boolean, it ought to be a
boolean. It still accepts the custom `required` and `disabled` values
simply for backwards compatibility.
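For example, a listener section like the following is now valid (a hypothetical configuration; presumably `ssl=true` corresponds to the legacy `required` value and `ssl=false` to `disabled`):

    [MyListener]
    type=listener
    service=MyService
    protocol=MariaDBClient
    port=4006
    ssl=true
    ssl_cert=/path/to/server-cert.pem
    ssl_key=/path/to/server-key.pem
    ssl_ca_cert=/path/to/ca.pem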
Also added the missing freeing functions for the SSL_LISTENER type. This
prevents failed SSL_LISTENER creations from leaking memory.
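A minimal sketch of the kind of freeing function this refers to, assuming illustrative field names rather than the exact SSL_LISTENER layout:

    void ssl_listener_free(SSL_LISTENER* ssl)
    {
        if (ssl)
        {
            SSL_CTX_free(ssl->ctx); // the OpenSSL context, if one was created
            MXS_FREE(ssl->ssl_cert);
            MXS_FREE(ssl->ssl_key);
            MXS_FREE(ssl->ssl_ca_cert);
            MXS_FREE(ssl);
        }
    }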
The same mechanism that is used for modules can be used for the
configuration of the core objects. This removes the redundant
value-validation code, as the module configuration system already
performs the same checks.
Replaced router_options with configuration parameters in the createInstance
router entry point. The same needs to be done for the filter API, as barely
any filters use the feature.
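The shape of the change, sketched with assumed type names; reading a parameter inside createInstance then goes through the config API instead of ad hoc option parsing:

    // Before: free-form router_options that each router parsed itself
    MXS_ROUTER* createInstance(SERVICE* service, char** options);

    // After: declared, validated configuration parameters
    MXS_ROUTER* createInstance(SERVICE* service, MXS_CONFIG_PARAMETER* params);

    // Inside createInstance, e.g.
    // const char* mode = config_get_string(params, "mode");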
Some routers (binlogrouter) still support router_options, but using them is
deprecated. This had to be done because their use wasn't deprecated in 2.2.
If a router parameter has no default value, the previous value would be
returned as an empty string, and a debug assertion would be triggered when
a parameter of this type was altered.
When a new router parameter is encountered and the alteration fails, the
modified value in the service needs to be removed. Previously, the new
value would have been stored in the service with an empty value, which
would have caused problems.
The state of each individual listener is now displayed in the REST
API. Created common functions for printing the listener state and took
them into use. Added the new state into MaxCtrl output.
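An illustrative fragment of what the listener resource might now contain; the exact field layout of the REST API may differ:

    {
        "id": "MyListener1",
        "type": "listeners",
        "attributes": {
            "state": "Running"
        }
    }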
Worker is now the base class of all workers. It has a message
queue and can be run in a thread of its own, or in the calling
thread. Worker cannot be used as such; a concrete worker
class must be derived from it. Currently there is only one
concrete class, RoutingWorker.
There is some overlap in functionality between Worker and
RoutingWorker, as there is e.g. a need for broadcasting a
message to all routing workers, but not to other workers.
Currently, other workers cannot be created, as the array
holding the pointers to the workers is exactly as large as
the number of RoutingWorkers there will be. That will be
changed so that the maximum number of threads is hardwired
to some ridiculous value such as 128. That's the first step
on the path towards a situation where the number of worker
threads can be changed at runtime.
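A minimal sketch of the class relationship described above; the method names are illustrative, not the exact Worker interface:

    #include <cstddef>
    #include <cstdint>

    class Worker
    {
    public:
        virtual ~Worker() = default;

        bool post_message(int id, intptr_t arg1, intptr_t arg2); // message queue
        bool start(); // run in a thread of its own
        void run();   // or run in the calling thread

    protected:
        Worker() = default; // cannot be used as such, must be derived from

        virtual void handle_message(int id, intptr_t arg1, intptr_t arg2) = 0;
    };

    class RoutingWorker : public Worker
    {
    public:
        // Broadcasts to all routing workers, but not to other workers.
        static size_t broadcast_message(int id, intptr_t arg1, intptr_t arg2);

    private:
        void handle_message(int id, intptr_t arg1, intptr_t arg2) override;
    };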
The tasks themselves now control whether they are executed again. To
compare it to the old system, oneshot tasks now return `false` and
repeating tasks return `true`.
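Under the new scheme a task callback might look like this; the exact callback signature is an assumption:

    // Returning true keeps the task scheduled; false removes it after this run.
    bool users_refresh_task(void* data)
    {
        refresh_users(static_cast<SERVICE*>(data)); // hypothetical helper
        return true; // a one-shot task would return false instead
    }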
Letting the housekeeper remove the tasks makes the code simpler and
removes the possibility of the task being removed while it is being
executed. It does introduce a deadlock possibility if a housekeeper
function is called inside a housekeeper task.
Add missing listener JSON diagnostics call. Check that the
diagnostics_json function exists before calling it.
As the protocol modules don't have diagnostics functions, they aren't
called.
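A sketch of the added guard, with illustrative struct and member names:

    json_t* listener_diagnostics(const SERV_LISTENER* listener)
    {
        json_t* rval = NULL;

        // Only call diagnostics_json if the authenticator implements it.
        if (listener->authenticator->diagnostics_json)
        {
            rval = listener->authenticator->diagnostics_json(listener);
        }

        return rval;
    }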
Replace hard-coded strings with constant parameters. This makes it
slightly cleaner.
If two services referred to the same filter instance, the filter
would be deleted twice at MaxScale shutdown, with a crash as the
result.
Now when the services are deleted we just collect the unique
filter instances and then delete them after all services have
been deleted.
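Roughly, the shutdown path now does the following; the function and member names here are illustrative:

    #include <set>

    std::set<MXS_FILTER_DEF*> filters; // deduplicates shared instances

    for (SERVICE* service : all_services())
    {
        for (MXS_FILTER_DEF* filter : service->filters)
        {
            filters.insert(filter);
        }
        service_free(service);
    }

    // Each unique filter is deleted exactly once, after all services are gone.
    for (MXS_FILTER_DEF* filter : filters)
    {
        filter_free(filter);
    }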
Earlier, if a service had multiple listeners, you would have had

    MaxScale> show dbusers MyService
    User names: alice@% ...
    User names: bob@% ...

That is, no indication of which listener is reporting what. With
this commit the result will be

    User names (MyListener1): alice@% ...
    User names (MyListener2): bob@% ...
Further, the diagnostics function of an authenticator is now expected
to write the list of users to the provided DCB, without performing any
other formatting. The formatting (printing "User names" and appending
a line-feed) is now handled by the handler for the MaxAdmin command
"show dbusers".
It is now impossible to create two listeners for a service that
would listen on the same port/socket (as before), but the error
message is now sensible and provides detailed information to the
user.
When MaxScale is starting, the loading of the listeners can take a while
if there are a large number of services and users to load. To signal this
to the user, progress messages should be logged after every service is
started.
Only asynchronous authenticators require the thread-specific loading of
users as the synchronous ones all share the same data. If the service does
not declare asynchronous capabilities at startup, the users are not
seeded. This prevents unnecessary loading of users at startup.
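Roughly, the seeding logic is gated like this; the capability constant and helper names are assumptions:

    // Pre-load per-thread users only for authenticators that declared
    // asynchronous capabilities; synchronous ones share the same data.
    if (service_capabilities(service) & ACAP_TYPE_ASYNC)
    {
        service_refresh_users(service);
    }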
All modules now have an 8-bit range for capability flags. Currently only
the client-side authenticator and protocol capability bits are loaded,
because the backend versions of these modules don't relate to a particular
service.
Pre-loading users for all threads at startup significantly reduces the
chance for failures caused by the lazy initialization of the user database
done by the authenticators.
If users are not loaded at startup and the connection limit for all
servers is reached, authentication in MaxScale will fail not because of
too many connections but because of the lack of authentication data. This
causes repeated reloading of users, which floods the log with messages and
puts unnecessary stress on the cluster itself.
The memory for the rate limit struct was allocated but it was not assigned
to the service. Also corrected a false debug assertion in
service_refresh_users.
The internal header directory conflicted with in-source builds, causing a
build failure. This is fixed by renaming the internal header directory to
something other than maxscale.
The renaming pointed out a few problems in a couple of source files that
appeared to include internal headers when the headers were in fact public
headers.
Fixed maxctrl in-source builds by making the copying of the sources
optional.
This was only used in conjunction with queued connections, which are not
enabled anyway. Once that comes back on the table, it is better to use
some standard data structures.
Only the protocol, port and address of the listener were used to check if
a listener exists. The check should also use the name of the listener to
be sure that each name is unique.
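The extended check, roughly (field names illustrative):

    // A listener is now a duplicate if its name matches an existing one,
    // or if its protocol, address and port all match.
    bool is_duplicate = strcmp(existing->name, name) == 0
                        || (strcmp(existing->protocol, protocol) == 0
                            && strcmp(existing->address, address) == 0
                            && existing->port == port);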
Expanded tests to check that the creation of duplicate listeners is
detected. Made minor improvements to related test code.
The listeners are now a proper sub-resource of the service resource. This
means that they act like normal resources and can be queried both as a
collection of resources and as individual resources.
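For example, with a service MyService and a listener MyListener1, both of these now behave like ordinary resource queries:

    GET /v1/services/MyService/listeners
    GET /v1/services/MyService/listeners/MyListener1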
The feedback system wasn't used and was starting to cause problems on
Debian 9, where libcurl required a different version of OpenSSL than the
one MaxScale was linked against.
That allows the version to be updated and read atomically. If
major/minor/patch are stored as separate variables, you can get an
inconsistent set. Now it may be out of date by the time it is used,
but it will never be internally inconsistent.
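A sketch of the idea, using the conventional major * 10000 + minor * 100 + patch encoding; the variable and function names are assumptions:

    #include <atomic>
    #include <cstdint>

    static std::atomic<uint64_t> version{0};

    // Writers publish the whole version in one atomic store.
    void set_version(uint64_t major, uint64_t minor, uint64_t patch)
    {
        version.store(major * 10000 + minor * 100 + patch);
    }

    // Readers may see a slightly stale value, but never a mix of an old
    // major and a new minor/patch.
    uint64_t get_version()
    {
        return version.load();
    }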
The behaviour of the query classifier needs to change depending
on the version of the servers of a service. With this function
"some" version of the service can be obtained.