The service would not be in the list if it failed before it was placed
there. Moving the actual freeing of memory into the Service destructor
allows the destructor to be called directly when we know the service is
not in the list. This also ensures that only valid services are placed
into the global list of services.
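A minimal sketch of the pattern, with illustrative names rather than the
actual MaxScale code:

    #include <list>
    #include <mutex>

    struct Service
    {
        ~Service() { /* release router instance, listeners, parameters, ... */ }
    };

    static std::list<Service*> all_services;   // only valid services are added
    static std::mutex list_lock;

    // For a service that is in the global list.
    void service_free(Service* svc)
    {
        {
            std::lock_guard<std::mutex> guard(list_lock);
            all_services.remove(svc);
        }
        delete svc;        // the destructor does the actual freeing
    }

    // A service whose creation failed was never added to the list,
    // so it can simply be deleted directly.
    void abort_service_creation(Service* svc)
    {
        delete svc;
    }
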
To prevent freeing a partially constructed service, the memory allocation
checks were replaced with a runtime assertion. This can be changed when
the creation of the service is done only at a point where we know it can't
fail. Currently, the createInstance call expects the service as a
parameter, which prevents this.
The signal handler no longer acquires the service list lock which removes
a number of deadlock possibilities from the shutdown process. Instead, a
global shutdown flag is set that serves the same purpose as the individual
service shutdown flags did.
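Sketched with std::atomic (the names are hypothetical, not the actual
handler):

    #include <atomic>
    #include <csignal>

    static std::atomic<bool> shutting_down{false};

    extern "C" void shutdown_signal_handler(int /* signum */)
    {
        // A lock-free store is async-signal-safe; no service list lock is
        // taken inside the handler.
        shutting_down.store(true);
    }

    bool maxscale_is_shutting_down()
    {
        // Checked where the per-service shutdown flags used to be checked.
        return shutting_down.load();
    }
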
Replaced the previous RESULTSET with the new implementation. As the new
ResultSet doesn't have a JSON streaming capability, the MaxInfo JSON
interface has been removed. This should not be a big problem as the REST
API offers the same information in a more secure and structured way.
The debug assertion that services are destroyed only on the main worker
would be triggered on shutdown, as there is no current worker
at that point in time. In addition to this, it is wrong to call
service_destroy at shutdown as that will remove persisted
configurations. The service_free function can be called directly as we
know that no other threads are running when the services are being torn down.
Also added the missing check that the destroyInstance function is
implemented before calling it.
Two std::strings that had been fixed would compare equal as C strings but
unequal when compared with operator==. This was a result of the string
modification done by fix_object_name.
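One way such a mismatch can arise (an illustrative standalone example, not
necessarily the exact code path here):

    #include <cassert>
    #include <cstring>
    #include <string>

    int main()
    {
        std::string a = "server1";
        std::string b = "server1 ";   // trailing space before being "fixed"

        // Patching the character data in place without adjusting the
        // std::string's size leaves the length out of sync with the C string.
        b[7] = '\0';

        assert(strcmp(a.c_str(), b.c_str()) == 0);   // C strings compare equal
        assert(a != b);                              // operator== also checks size
        return 0;
    }
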
Converted the internal header into a C++ header, added a std::string
overload and fixed the uses of the function.
The filter implementation is now fully hidden. Also converted it to a C++
struct allocated with new and stored the filters in a global list instead
of embedding the list in the object itself.
When a session is closed, it releases a reference on the service and
checks if it was the last session for a destroyed service. The state of
the service was loaded after the reference count was decremented. This
behavior introduced a race condition where it was possible for a service
to be freed twice, first by the thread that marked the service as
destroyed and again by the last session for that service. By always
loading the service state before decrementing the reference count, we
avoid this race condition.
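A compressed sketch of the ordering (illustrative names; the real code has
more context around who holds which reference):

    #include <atomic>

    struct Service
    {
        std::atomic<int>  refcount{1};
        std::atomic<bool> destroyed{false};
    };

    void service_free(Service* svc)
    {
        delete svc;
    }

    void session_release(Service* svc)
    {
        // Load the state BEFORE decrementing. If it were loaded after the
        // decrement, another thread could mark the service destroyed and free
        // it inside that window, and this thread would then free it again.
        bool was_destroyed = svc->destroyed.load();

        if (svc->refcount.fetch_sub(1) == 1 && was_destroyed)
        {
            service_free(svc);   // we released the last reference
        }
    }
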
Currently, the memory ordering used for the reference counting is too
strict and could be relaxed. By default, all atomic operations use
sequentially consistent memory ordering. This guarantees correct behavior
but imposes a performance penalty. Incrementing the reference counts could
be done with a relaxed memory order as long as we know the reference
we're incrementing is valid. Releasing a reference must use an
acquire-release order so that the final decrement synchronizes with the
freeing of the object.
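In std::atomic terms, that relaxation would look roughly like this
(illustrative only):

    #include <atomic>

    static std::atomic<int> refcount{1};

    void service_ref()
    {
        // Incrementing a reference we already know to be valid needs no
        // ordering of its own.
        refcount.fetch_add(1, std::memory_order_relaxed);
    }

    bool service_unref()
    {
        // Release makes this thread's writes visible to whoever drops the
        // last reference; acquire makes those writes visible to the thread
        // that ends up freeing the object. Returns true for the last release.
        return refcount.fetch_sub(1, std::memory_order_acq_rel) == 1;
    }
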
The previous implementation did not destroy filters that were not used by
services. With the full initialization of filters in filter_alloc, we can
simply traverse the list of created filters and destroy them knowing that
they are all valid.
Changed the filter_alloc function to fully initialize the filter. This
means that if filter_alloc returns a non-NULL pointer, the filter was
successfully loaded and an instance was successfully created.
MaxScale can now be started with an empty configuration file and services
can be created at runtime. Filters cannot yet be created at runtime so
complete runtime creation of configurations is not yet possible.
The test failed because router instances are now created when the service
is allocated. In addition to this, a debug assertion was hit when a
service was freed if the router instance creation failed.
Services can now be destroyed if they have no active listeners and they
are not linked to servers. When these conditions are met, the service will
be destroyed when the last session for the service is closed.
The closing of a service will close all listeners that were once assigned
to the service. This allows closing of the ports at runtime which
previously was done only on shutdown.
Exposed the command through the REST API but not through MaxAdmin as it is
deprecated.
When a service is freed, it will free all of its listeners causing their
respective DCBs to be closed. This requires that listeners can be removed
from the worker DCB list.
When a listener is removed from a service, it should also be removed from
any workers it has been added to. This guarantees that if the opening of
the listener was successful, no requests will be accepted on it after the
removal of the listener.
As all connections should be accepted via dcb_accept, it is the optimal
place to calculate how many open client connections per service there
are. The decrement should be done when the session is closed instead of
when dcb_close is called for the client DCB. This allows the
client count to be the absolute reference count that sessions have to a
service.
The current client count is a duplicate counter that should match the
n_current value in SERVICE_STATS. The former differs from the latter in
that it is incremented when the client DCB is accepted instead of when the
session is created.
By creating the router instance as a part of the service allocation
process, we are guaranteed that either the creation of the service is
completely successful or it fails. This should make runtime creation of
services easier.
Spaces must be considered a part of the object name in tokenization. This
ensures that the name normalization process generates correct names and
that tokens are split at correct places.
The configuration system that modules use allows the SSL parameter
validation to be simplified. It should also provide more consistent error
messages for similar types of errors.
The SSL_LISTENER initialization is now done in one step. There was no good
reason to do it in two separate steps for listeners but in one step for
servers.
The `ssl` parameter now also accepts boolean values. As the parameter
behaves like a boolean and looks like a boolean, it ought to be a
boolean. It still accepts the custom `required` and `disabled` values
simply for backwards compatibility.
Also added the missing freeing functions for the SSL_LISTENER type. This
prevents failed SSL_LISTENER creations from leaking memory.
The same mechanism that is used for modules can be used for the
configuration of the core objects. This removes the need for redundant
value validation code, as the same validation is already done by the code
that modules use.
Replaced router_options with configuration parameters in the createInstance
router entry point. The same needs to be done for the filter API as barely
any filters use the feature.
Some routers (binlogrouter) still support router_options but using it is
deprecated. This had to be done as their use wasn't deprecated in 2.2.
If a router parameter had no default value, the previous value would be
returned as an empty string. A debug assertion would be triggered when a
parameter of this type was altered.
When a new router parameter is encountered and the alteration fails, the
modified value in the service needs to be removed. Previously, the new
value would have been stored in the service with an empty value which
would have caused problems.
The state of each individual listener is now displayed in the REST
API. Created common functions for printing the listener state and took
them into use. Added the new state into MaxCtrl output.
Worker is now the base class of all workers. It has a message
queue and can be run in a thread of its own, or in the calling
thread. Worker cannot be used as such, but a concrete worker
class must be derived from it. Currently there is only one
concrete class, RoutingWorker.
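A simplified outline of the hierarchy (not the real interface):

    #include <thread>

    class Worker
    {
    public:
        virtual ~Worker() {}

        void run()   { main_loop(); }   // run in the calling thread
        void start() { m_thread = std::thread(&Worker::main_loop, this); }
        void join()  { if (m_thread.joinable()) m_thread.join(); }

        // message queue and post()/broadcast() omitted for brevity

    protected:
        Worker() {}                     // only usable through a derived class
        virtual void main_loop() = 0;

    private:
        std::thread m_thread;
    };

    // Currently the only concrete worker class.
    class RoutingWorker : public Worker
    {
    private:
        void main_loop() override
        {
            // poll loop that handles client sessions
        }
    };
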
There is some overlap in functionality between Worker and
RoutingWorker, as there is e.g. a need for broadcasting a
message to all routing workers, but not to other workers.
Currently other workers cannot be created, as the array
holding the worker pointers is exactly as large as the
number of RoutingWorkers there will be. That will be changed so that
the maximum number of threads is hardwired to some ridiculous
value such as 128. That's the first step in the path towards
a situation where the number of worker threads can be changed
at runtime.
The tasks themselves now control whether they are executed again. To
compare it to the old system, oneshot tasks now return `false` and
repeating tasks return `true`.
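Assuming a task callback along the lines of `bool (*)(void*)` (a
hypothetical signature), the two kinds of task look like this:

    // The housekeeper calls the function at the configured interval and
    // removes the task once the function returns false.
    bool log_statistics(void* /* data */)
    {
        // ... periodic work ...
        return true;    // repeating task: execute again
    }

    bool delayed_cleanup(void* /* data */)
    {
        // ... one-off work ...
        return false;   // oneshot task: do not execute again
    }
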
Letting the housekeeper remove the tasks makes the code simpler and
removes the possibility of the task being removed while it is being
executed. It does introduce a deadlock possibility if a housekeeper
function is called inside a housekeeper task.
Add missing listener JSON diagnostics call. Check that the
diagnostics_json function exists before calling it.
As the protocol modules don't have diagnostics functions, they aren't
called.
Replace hard-coded strings with constant parameters. This makes it
slightly cleaner.
If two services referred to the same filter instance, it would
cause the filter to be deleted twice at MaxScale shutdown with a
crash as the result.
Now when the services are deleted we just collect the unique
filter instances and then delete them after all services have
been deleted.
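Sketched with standard containers (illustrative, not the actual shutdown
code):

    #include <set>
    #include <vector>

    struct FilterDef { /* filter instance */ };

    struct Service
    {
        std::vector<FilterDef*> filters;   // services may share instances
    };

    void delete_all_services(std::vector<Service*>& services)
    {
        std::set<FilterDef*> unique_filters;

        for (Service* s : services)
        {
            unique_filters.insert(s->filters.begin(), s->filters.end());
            delete s;                      // services no longer delete filters
        }

        for (FilterDef* f : unique_filters)
        {
            delete f;                      // each instance deleted exactly once
        }
    }
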
Earlier, if a service had multiple listeners you would have had
MaxScale> show dbusers MyService
User names: alice@% ...
User names: bob@% ...
That is, no indication of which listener is reporting what. With
this commit the result will be
User names (MyListener1): alice@% ...
User names (MyListener2): bob@% ...
Further, the diagnostics function of an authenticator is now expected
to write the list of users to the provided DCB, without performing any
other formatting. The formatting (printing "User names" and appending
a line-feed) is now handled by the handler for the MaxAdmin command
"show dbusers".
It is now impossible to create two listeners for a service that
would listen on the same port/socket (as before), but the error
message is now sensible and provides detailed information to the
user.
When MaxScale is starting, the loading of the listeners can take a while
if there are a large number of services and users to load. To signal this
to the user, progress messages should be logged after every service is
started.
Only asynchronous authenticators require the thread-specific loading of
users as the synchronous ones all share the same data. If the service does
not declare asynchronous capabilities at startup, the users are not
seeded. This prevents unnecessary loading of users at startup.