Add missing listener JSON diagnostics call. Check that the
diagnostics_json function exists before calling it.
As the protocol modules don't have diagnostics functions, they aren't
called.
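A minimal sketch of the guard, using illustrative types rather than the
actual MaxScale structs (modules without JSON diagnostics, such as the
protocol modules, simply leave the hook NULL):

#include <stddef.h>
#include <jansson.h>

/* Illustrative listener type; the real MaxScale struct differs. */
typedef struct listener
{
    /* Optional hook; NULL for modules without JSON diagnostics. */
    json_t* (*diagnostics_json)(const struct listener *listener);
} LISTENER;

json_t* listener_json_diagnostics(const LISTENER *listener)
{
    return listener->diagnostics_json
        ? listener->diagnostics_json(listener)
        : NULL;
}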
Replace hard-coded strings with constant parameters. This makes it
slightly cleaner.
If two services referred to the same filter instance, the filter
would be deleted twice at MaxScale shutdown, with a crash as the
result.
Now when the services are deleted we just collect the unique
filter instances and then delete them after all services have
been deleted.
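A sketch of the approach, with stand-in types and a stand-in destructor
in place of the real MaxScale code:

#include <stdbool.h>
#include <stdlib.h>

typedef struct filter_def { int id; } FILTER_DEF;

static void filter_free(FILTER_DEF *filter) { free(filter); }

void destroy_unique_filters(FILTER_DEF **filters, size_t n)
{
    FILTER_DEF **unique = calloc(n, sizeof(*unique));
    size_t n_unique = 0;

    /* First pass: collect each distinct instance exactly once. */
    for (size_t i = 0; i < n; i++)
    {
        bool seen = false;
        for (size_t j = 0; j < n_unique && !seen; j++)
        {
            seen = (unique[j] == filters[i]);
        }
        if (!seen)
        {
            unique[n_unique++] = filters[i];
        }
    }

    /* A filter shared by two services appears once here, so it is
     * freed once instead of twice. */
    for (size_t i = 0; i < n_unique; i++)
    {
        filter_free(unique[i]);
    }
    free(unique);
}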
Earlier, if a service had multiple listeners, you would have had
MaxScale> show dbusers MyService
User names: alice@% ...
User names: bob@% ...
That is, no indication of which listener is reporting what. With
this commit the result will be
User names (MyListener1): alice@% ...
User names (MyListener2): bob@% ...
Further, the diagnostics function of an authenticator is now expected
to write the list of users to the provided DCB, without performing any
other formatting. The formatting (printing "User names" and appending
a line-feed) is now handled by the handler for the MaxAdmin command
"show dbusers".
As before, it is impossible to create two listeners for a service
that would listen on the same port/socket, but the error message is
now sensible and provides detailed information to the user.
When MaxScale is starting, the loading of the listeners can take a while
if there are a large number of services and users to load. To signal this
to the user, progress messages should be logged after every service is
started.
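A sketch of the idea; the names are modelled on MaxScale but the code
is illustrative, with the logging reduced to a printf:

#include <stdio.h>

typedef struct { const char *name; } SERVICE;

static void serviceStart(SERVICE *service)
{
    (void)service; /* stub: the real function starts the listeners */
}

void start_all_services(SERVICE **services, int n_services)
{
    for (int i = 0; i < n_services; i++)
    {
        serviceStart(services[i]);

        /* Progress message after every service so the user can tell
         * a slow user load apart from a hang. */
        printf("Service '%s' started (%d/%d)\n",
               services[i]->name, i + 1, n_services);
    }
}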
Only asynchronous authenticators require the thread-specific loading of
users as the synchronous ones all share the same data. If the service does
not declare asynchronous capabilities at startup, the users are not
seeded. This prevents unnecessary loading of users at startup.
All modules now have an 8-bit range for capability flags. Currently only
the client side authenticator and protocol capability bits are loaded,
because the backend versions of these modules don't relate to a
particular service.
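The layout could look something like the following; the shift values
and names are assumptions, not the actual MaxScale definitions:

#include <stdint.h>

/* Hypothetical 8-bit slices of a shared capability word. */
#define CAP_PROTOCOL_SHIFT       0   /* bits 0..7  */
#define CAP_AUTHENTICATOR_SHIFT  8   /* bits 8..15 */
#define CAP_ROUTER_SHIFT         16  /* bits 16..23 */
#define CAP_FILTER_SHIFT         24  /* bits 24..31 */

/* Only the client side protocol and authenticator bits are merged
 * into the service capabilities; backend modules aren't tied to a
 * particular service. */
static inline uint64_t service_capabilities(uint8_t protocol_caps,
                                            uint8_t auth_caps)
{
    return ((uint64_t)protocol_caps << CAP_PROTOCOL_SHIFT)
         | ((uint64_t)auth_caps << CAP_AUTHENTICATOR_SHIFT);
}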
Pre-loading users for all threads at startup significantly reduces the
chance for failures caused by the lazy initialization of the user database
done by the authenticators.
If users are not loaded at startup and the connection limit for all
servers is reached, authentication in MaxScale will fail not due to too
many connections but due to the lack of authentication data. This causes
repeated reloading of users, which floods the log with messages and
places unnecessary stress on the cluster itself.
The memory for the rate limit struct was allocated but it was not
assigned to the service. Also corrected a false debug assertion in
service_refresh_users.
The internal header directory conflicted with in-source builds causing a
build failure. This is fixed by renaming the internal header directory to
something other than maxscale.
The renaming pointed out a few problems in a couple of source files that
appeared to include internal headers when the headers were in fact public
headers.
Fixed maxctrl in-source builds by making the copying of the sources
optional.
This was only used in conjunction with queued connections, which are not
enabled anyway. Once that comes back on the table, it would be better to
use some standard data structures.
Only the protocol, port and address of the listener were used to check if
a listener exists. The check should also use the name of the listener to
be sure that each name is unique.
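A sketch of the extended check, with assumed field names:

#include <stdbool.h>
#include <string.h>

/* Illustrative listener fields; the real struct differs. */
typedef struct
{
    const char *name;
    const char *protocol;
    const char *address;
    int port;
} LISTENER_DEF;

/* A new listener conflicts if its name is already taken, or if
 * another listener binds the same protocol, address and port. */
bool listener_conflicts(const LISTENER_DEF *existing,
                        const LISTENER_DEF *candidate)
{
    return strcmp(existing->name, candidate->name) == 0
        || (strcmp(existing->protocol, candidate->protocol) == 0
            && strcmp(existing->address, candidate->address) == 0
            && existing->port == candidate->port);
}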
Expanded tests to check that the creation of duplicate listeners is
detected. Did minor improvements to related test code.
The listeners are now a proper sub-resource of the service resource. This
means that they act like normal resources and can be queried both as a
collection of resources and as individual resources.
The feedback system wasn't used and was starting to cause problems on
Debian 9, where libcurl required a different version of OpenSSL than the
one MaxScale was linked against.
Storing the version as a single value allows it to be updated and read
atomically. If major/minor/patch are stored as separate variables, you can
get an inconsistent set. Now the value may be out of date by the time it
is used, but it will never be internally inconsistent.
The behaviour of the query classifier needs to change depending
on the version of the servers of a service. With this function
"some" version of the service can be obtained.
The top level resource self links pointed to the collection instead of the
resource itself. The individual resources now also have a links field that
contains the self link to the resource. This should make navigation of the
API easier, as all objects have valid links in them.
The listener iterator hides the details of the listener iteration behind a
small set of functions.
The following for-loop demonstrates the main use-case for the iterator:
LISTENER_ITERATOR iter;

for (SERV_LISTENER *listener = listener_iterator_init(service, &iter);
     listener;
     listener = listener_iterator_next(&iter))
{
    /** Do something with the listener */
}
As the listeners are mostly iterated to either check a fact about them or
print their information, the functions cater to that use-case. For this
reason, the iterators should always be allocated on the stack.
The service port list is now iterated in a safe and lock-free manner. This
makes the handling of service ports somewhat simpler since once an item
has been added to the list it will never be removed. It also removes the
need to lock the service when checking whether it listens on a port, which
previously caused potential deadlocks.
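A sketch of the pattern: an append-only list where writers publish new
nodes with atomic stores and readers traverse without locks. Nodes are
never unlinked, which is what makes this safe; the types are
illustrative:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct port_node
{
    int port;
    struct port_node *_Atomic next;
} PORT_NODE;

static PORT_NODE *_Atomic port_list = NULL;

/* Writer: push a new node onto the head of the list. */
void port_list_add(int port)
{
    PORT_NODE *node = malloc(sizeof(*node));
    node->port = port;

    PORT_NODE *head = atomic_load(&port_list);
    do
    {
        atomic_store(&node->next, head);
    } while (!atomic_compare_exchange_weak(&port_list, &head, node));
}

/* Reader: no locking needed, since nodes are never removed. */
bool port_list_contains(int port)
{
    for (PORT_NODE *node = atomic_load(&port_list); node;
         node = atomic_load(&node->next))
    {
        if (node->port == port)
        {
            return true;
        }
    }
    return false;
}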
The listeners under the /services/:service/listeners collection are now
fully JSON API compliant resources.
The listeners could also be exposed as a /listeners collection to easily
group all listener type resources in one place. This approach does have
some semantic and practical problems, namely the fact that each listener
has a many-to-one relationship with its service and listeners by
themselves can't exist alone.
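An illustrative resource document built with jansson (which the REST
API uses); the exact fields MaxScale emits may differ:

#include <jansson.h>

json_t* listener_to_resource(const char *name, const char *service)
{
    json_t *rsrc = json_object();
    json_object_set_new(rsrc, "id", json_string(name));
    json_object_set_new(rsrc, "type", json_string("listeners"));

    /* The many-to-one relationship back to the owning service. */
    json_t *rel = json_pack("{s: {s: {s: s, s: s}}}",
                            "service", "data",
                            "id", service,
                            "type", "services");
    json_object_set_new(rsrc, "relationships", rel);
    return rsrc;
}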
The JSON objects that are created from the various core MaxScale objects
share a lot of common code. Moving this into a separate file removes the
redundant code.
The filter names for the service parameter `filters` weren't converted
into the new format. This caused a configuration error when a filter with
significant whitespace in its name was used in a service.
When a persisted configuration file is read, the values in it are
considered to be more up-to-date than the ones in the main configuration
file. This allows all objects to be persisted in a more complete form
making it easier to change configuration values at runtime.
This change is intended to help make runtime alterations to services
possible.
The `user`, `password`, `version_string` and `weightby` values should be
allocated as a part of the service structure. This allows them to be
modified at runtime without having to worry about memory allocation
problems.
Although this removes the problem of reallocation, it still does not make
the updating of the strings thread-safe. This can cause invalid values to
be read from the service strings.
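A sketch of the fixed-buffer approach; the field names and sizes are
assumptions, not the real MaxScale layout:

#include <stdio.h>

#define SERVICE_STR_LEN 256

typedef struct service
{
    char user[SERVICE_STR_LEN];
    char password[SERVICE_STR_LEN];
    char version_string[SERVICE_STR_LEN];
    char weightby[SERVICE_STR_LEN];
} SERVICE;

/* No allocation can fail here, but a concurrent reader may still see
 * a partially written value: the update is not yet thread-safe. */
void service_set_user(SERVICE *service, const char *user)
{
    snprintf(service->user, sizeof(service->user), "%s", user);
}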
The service, filter and monitor resources now have a "parameters" value
which contains a set of all configuration parameters for that object. This
set contains both standard and non-standard parameters.
Also fixed a mistake in the constant name definitions for the monitor
parameters "events" and "script".
The service resource now exposes the parameters that were given to
it. This allows general service parameters to be changed at runtime
through the REST API.
A self link to the resource itself provides a convenient way for the
client to request a resource, modify it and call the self link to update
it. This removes some of the burden on the client to keep track of the
resource links by placing these in the resource itself.