Converted the internal service header to a C++ header and moved all
functions that are for internal use only into it.
Added the new Service type that inherits the SERVICE struct. This
distinguishes the opaque external C interface from the C++ internals.
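A rough sketch of the split; the members shown are illustrative, not the
actual contents of the headers:

    // Public C header: only the opaque, externally visible part.
    struct SERVICE
    {
        const char* name;       // fields visible to C callers and modules
    };

    // Internal C++ header: the type the core actually allocates and uses.
    class Service : public SERVICE
    {
    public:
        // internal-only state and helper functions live here, invisible
        // to code that only includes the public C header
    };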
The filter implementation is now fully hidden. It was also converted to a
C++ struct that is allocated with new, and the filters are now stored in a
global list instead of embedding the list in the object itself.
When a session is closed, it releases a reference on the service and
checks whether it was the last session for a destroyed service. The state
of the service was loaded after the reference count was decremented, which
introduced a race condition where it was possible for a service to be
freed twice: first by the thread that marked the service as destroyed and
again by the last session for that service. By always loading the service
state before decrementing the reference count, we avoid this race
condition.
Currently, the memory ordering used for the reference counting is too
strict and could be relaxed. By default, all atomic operations use
sequentially consistent memory ordering. This guarantees correct behavior
but imposes a performance penalty. Incrementing the reference count could
be done with a relaxed memory order as long as we know the reference
we're incrementing is valid. Releasing a reference must use
acquire-release ordering so that all writes made while the reference was
held are visible to the thread that releases the last reference and frees
the object.
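A minimal sketch of the intended scheme, combining the two points above
(load the state before the decrement, relaxed increments, acquire-release
on the release); the Service here is simplified and the thread that marks
the service as destroyed is omitted:

    #include <atomic>

    struct Service
    {
        std::atomic<int>  refcount{1};
        std::atomic<bool> destroyed{false};
    };

    void service_get(Service* s)
    {
        // The caller already holds a valid reference, so the increment
        // only needs to be atomic, not ordered against other operations.
        s->refcount.fetch_add(1, std::memory_order_relaxed);
    }

    void service_put(Service* s)
    {
        // Load the state before the decrement: after the decrement
        // another thread may already have freed the object, so reading
        // it afterwards is what made the double free possible.
        bool destroy = s->destroyed.load(std::memory_order_acquire);

        if (s->refcount.fetch_sub(1, std::memory_order_acq_rel) == 1 && destroy)
        {
            delete s;   // the last reference to a destroyed service frees it
        }
    }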
The previous implementation did not destroy filters that were not used by
services. With the full initialization of filters in filter_alloc, we can
simply traverse the list of created filters and destroy them knowing that
they are all valid.
Changed the filter_alloc function to fully initialize the filter. This
means that if filter_alloc returns a non-NULL pointer, the filter was
successfully loaded and an instance was successfully created.
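A sketch of the idea, with made-up helper names and types standing in for
the real module loading and instance creation calls:

    #include <list>
    #include <mutex>
    #include <string>

    struct FilterModule;      // stand-in for the loaded filter module
    struct FilterInstance;    // stand-in for the created filter instance

    FilterModule*   load_filter_module(const std::string& module);                      // hypothetical
    FilterInstance* create_filter_instance(FilterModule* mod, const std::string& name); // hypothetical

    struct FilterDef
    {
        std::string     name;
        FilterModule*   module;
        FilterInstance* instance;
    };

    static std::mutex            filter_lock;
    static std::list<FilterDef*> all_filters;   // every fully created filter

    // A non-NULL return means both the module load and the instance
    // creation succeeded, so every entry in all_filters is valid.
    FilterDef* filter_alloc(const std::string& name, const std::string& module)
    {
        FilterModule* mod = load_filter_module(module);

        if (mod == nullptr)
        {
            return nullptr;   // module could not be loaded, nothing registered
        }

        FilterInstance* instance = create_filter_instance(mod, name);

        if (instance == nullptr)
        {
            return nullptr;   // instance creation failed, nothing registered
        }

        FilterDef* filter = new FilterDef{name, mod, instance};
        std::lock_guard<std::mutex> guard(filter_lock);
        all_filters.push_back(filter);
        return filter;
    }

    // At shutdown the list can be traversed and every filter destroyed,
    // knowing that each one was fully initialized.
    void filter_destroy_all()
    {
        std::lock_guard<std::mutex> guard(filter_lock);

        for (FilterDef* filter : all_filters)
        {
            delete filter;
        }

        all_filters.clear();
    }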
The relationships of a service are handled by the service alteration
code. Currently, only server relationships are handled by the code in
question and filter relationships are ignored.
If an invalid value or type is given to the REST API, reporting both the
expected type and the given type makes problem resolution easier.
Also added a value check to MaxCtrl for listener ports.
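For illustration, a hypothetical helper showing the kind of message this
produces; the function name and wording are made up:

    #include <sstream>
    #include <string>

    // Naming both the expected and the received type in the error makes
    // it obvious what needs to change in the request.
    std::string param_type_error(const std::string& param,
                                 const std::string& expected,
                                 const std::string& received)
    {
        std::ostringstream ss;
        ss << "Invalid type for parameter '" << param << "': expected "
           << expected << ", got " << received;
        return ss.str();
    }

    // e.g. param_type_error("port", "a number", "a string") produces
    // "Invalid type for parameter 'port': expected a number, got a string"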
MaxScale can now be started with an empty configuration file and services
can be created at runtime. Filters cannot yet be created at runtime, so
complete runtime creation of configurations is not yet possible.
The causal read queries were also performed when the target server was the
master. The extra functionality of causal reads is only needed on
slaves.
Adjusted the test case to require GTID replication.
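A hedged sketch of the routing decision with made-up types; the actual
readwritesplit code differs:

    // The causal read handling (waiting for the last observed GTID
    // before executing the query) is only applied when the statement is
    // routed to a slave, since the master is where the GTID originates.
    enum class TargetRole { MASTER, SLAVE };

    struct RoutingTarget
    {
        TargetRole role;
    };

    bool needs_causal_read(const RoutingTarget& target, bool causal_reads_enabled)
    {
        return causal_reads_enabled && target.role == TargetRole::SLAVE;
    }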
The configuration used the wrong parameter name. The test also did not
explicitly enable tracking of the last_gtid variable, which caused it to
fail if tracking wasn't already enabled.
By reading the configuration as late as possible, we allow the objects to
be created in an environment that is nearly identical to the one present
at runtime.
This change makes it possible to execute worker tasks in the instance
creation functions of various modules. The avrorouter in particular
depended on being able to queue worker tasks on startup.
The initialization and starting of the housekeeper are now done
separately. This allows housekeeper tasks to be created when the services
are being created, while still preventing the tasks from executing before
the startup is complete.
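A minimal sketch of the two-phase pattern, using a hypothetical
Housekeeper class rather than the actual implementation:

    #include <functional>
    #include <string>
    #include <thread>
    #include <utility>
    #include <vector>

    class Housekeeper
    {
    public:
        // init() only sets up internal state; nothing runs yet.
        void init()
        {
            m_tasks.clear();
        }

        // Tasks can be registered between init() and start(), for
        // example while services are being created.
        void add_task(std::string name, std::function<void()> func)
        {
            m_tasks.push_back(Task{std::move(name), std::move(func)});
        }

        // Only after start() is called does anything execute.
        void start()
        {
            m_thread = std::thread([this]() { run(); });
        }

        ~Housekeeper()
        {
            if (m_thread.joinable())
            {
                m_thread.join();
            }
        }

    private:
        struct Task
        {
            std::string           name;
            std::function<void()> func;
        };

        void run()
        {
            // The real housekeeper runs tasks periodically; one pass is
            // enough to show the init/start split.
            for (Task& task : m_tasks)
            {
                task.func();
            }
        }

        std::vector<Task> m_tasks;
        std::thread       m_thread;
    };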
The MariaDB 10.3.8 release fixes the use of last_gtid as a value for
session_track_system_variables. This means that the test can be enabled if
the backends are new enough.
The test failed because router instances are now created when the service
is allocated. In addition to this, a debug assertion was hit when a
service was freed if the router instance creation failed.
Added commands for creating and destroying services. The create command
allows server and filter relationships to be defined but they are not yet
processed by MaxScale. This will be done once the use of filters is made
dynamic.
Services can now be destroyed if they have no active listeners and they
are not linked to servers. When these conditions are met, the service will
be destroyed when the last session for the service is closed.
Closing a service will close all listeners that were once assigned to the
service. This allows the ports to be closed at runtime, which previously
was done only on shutdown.
Exposed the command through the REST API but not through MaxAdmin as it is
deprecated.
When a service is freed, it will free all of its listeners causing their
respective DCBs to be closed. This requires that listeners can be removed
from the worker DCB list.
The runtime configuration JSON validation now allows multiple
relationships to be verified at one time. This makes it easier to validate
all objects using the same framework.
When a listener is removed from a service, it should also be removed from
any workers it has been added to. This guarantees that if the opening of
the listener was successful, no requests will be accepted on it after the
removal of the listener.
As all connections should be accepted via dcb_accept, it is the optimal
place to count how many open client connections there are per service.
The decrement should be done when the session is closed instead of when
dcb_close is called for the client DCB. This allows the client count to
serve as the absolute reference count that sessions have on a service.
The current client count is a duplicate counter that should match the
n_current value in SERVICE_STATS. The former differs from the latter in
that it is incremented when the client DCB is accepted instead of when
the session is created.
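A sketch of where the count changes, with hypothetical hook names:

    #include <atomic>

    struct ServiceStats
    {
        std::atomic<int> n_clients{0};   // open client connections per service
    };

    // Incremented in dcb_accept when the client connection is accepted.
    void client_accepted(ServiceStats& stats)
    {
        stats.n_clients.fetch_add(1, std::memory_order_relaxed);
    }

    // Decremented when the session is closed, not when dcb_close is
    // called for the client DCB, so the count tracks the sessions that
    // still hold a reference to the service.
    void session_closed(ServiceStats& stats)
    {
        stats.n_clients.fetch_sub(1, std::memory_order_relaxed);
    }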
By creating the router instance as a part of the service allocation
process, we are guaranteed that either the creation of the service is
completely successful or it fails. This should make runtime creation of
services easier.
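A sketch of the all-or-nothing allocation, with made-up types and helper
names:

    #include <string>

    struct RouterInstance;   // stand-in for the module-specific router data

    // Hypothetical helper wrapping the router module's instance creation.
    RouterInstance* router_create_instance(const std::string& router,
                                           const std::string& service_name);

    struct Service
    {
        std::string     name;
        RouterInstance* router;
    };

    // Either the whole service is created, including its router
    // instance, or nothing is left behind and NULL is returned.
    Service* service_alloc(const std::string& name, const std::string& router)
    {
        Service* service = new Service{name, nullptr};
        service->router = router_create_instance(router, name);

        if (service->router == nullptr)
        {
            delete service;
            return nullptr;
        }

        return service;
    }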
As we know that the test doesn't rely on absolute binlog positions, we can
force synchronization by creating a table on the master and waiting until
that table is replicated to all slaves.
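A sketch of such a test helper using the MariaDB/MySQL C API; the marker
table name is made up and error handling is omitted:

    #include <mysql.h>
    #include <chrono>
    #include <thread>
    #include <vector>

    void sync_slaves(MYSQL* master, const std::vector<MYSQL*>& slaves)
    {
        // Create a marker table on the master...
        mysql_query(master, "CREATE OR REPLACE TABLE test.sync_marker (id INT)");

        for (MYSQL* slave : slaves)
        {
            // ...and poll each slave until the table has replicated:
            // the query fails until the CREATE TABLE has been applied.
            while (mysql_query(slave, "SELECT 1 FROM test.sync_marker LIMIT 1") != 0)
            {
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            }

            mysql_free_result(mysql_store_result(slave));
        }
    }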
Add support for binary protocol prepared statements for the schemarouter.
This implementation doesn't yet attempt to handle all the edge cases.
Prepared statements are routed to the server that contains the affected
tables, and the internal id returned by that server is mapped to a session
command id that is incremented for each prepared statement. This unique
session command id is returned to the client because the internal id
given by a server might be the same on different servers; this way it is
possible to keep track of the statements and route them to the right
servers when they are executed.
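A sketch of the mapping, with illustrative names rather than the actual
schemarouter types:

    #include <cstdint>
    #include <map>
    #include <string>

    struct PreparedStmt
    {
        std::string target_server;   // server that contains the affected tables
        uint32_t    backend_id;      // statement id returned by that server
    };

    class StmtIdMap
    {
    public:
        // Called when the backend responds to a prepare: returns the
        // unique session command id that is sent back to the client.
        uint32_t add(const std::string& server, uint32_t backend_id)
        {
            uint32_t session_id = ++m_next_id;   // incremented per statement
            m_stmts[session_id] = PreparedStmt{server, backend_id};
            return session_id;
        }

        // Called when the client executes the statement: looks up the
        // real backend id and the server it belongs to.
        const PreparedStmt* find(uint32_t session_id) const
        {
            auto it = m_stmts.find(session_id);
            return it != m_stmts.end() ? &it->second : nullptr;
        }

    private:
        uint32_t m_next_id = 0;
        std::map<uint32_t, PreparedStmt> m_stmts;
    };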