The cache now enforces the configured maximum size by evicting entries
when the insertion of a new entry would cause the maximum size to be
exceeded. Currently the eviction algorithm simply removes a random
element.
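A minimal sketch of the idea, assuming the cache entries live in a hash
map; the container and function name are illustrative, not the cache's
actual API:

    #include <cstdlib>
    #include <iterator>
    #include <unordered_map>

    // Evict one randomly chosen entry to make room for a new one.
    template <class K, class V>
    void evict_random(std::unordered_map<K, V>& entries)
    {
        if (!entries.empty())
        {
            auto it = entries.begin();
            std::advance(it, rand() % entries.size());
            entries.erase(it);
        }
    }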
Replaced the previous RESULTSET with the new implementation. As the new
ResultSet doesn't have a JSON streaming capability, the MaxInfo JSON
interface has been removed. This should not be a big problem as the REST
API offers the same information in a more secure and structured way.
The result set mechanism was ill-suited for iteration over
lists. Converting it into a class and inverting it, with rows pushed into
the result set instead of the result set asking for rows, makes it very easy
to use with lists. It also solves some of the consistency problems that
existed with the previous implementation.
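A sketch of the inverted, push-based interface; the class is simplified
and the names are illustrative:

    #include <initializer_list>
    #include <memory>
    #include <string>
    #include <vector>

    class ResultSet
    {
    public:
        static std::unique_ptr<ResultSet> create(std::initializer_list<std::string> columns)
        {
            return std::unique_ptr<ResultSet>(new ResultSet(columns));
        }

        // Rows are pushed into the result set by the caller...
        void add_row(std::initializer_list<std::string> values)
        {
            m_rows.emplace_back(values);
        }

    private:
        ResultSet(std::initializer_list<std::string> columns)
            : m_columns(columns)
        {
        }

        std::vector<std::string>              m_columns;
        std::vector<std::vector<std::string>> m_rows;
    };

    // ...which makes iteration over a list trivial:
    //     auto set = ResultSet::create({"Name", "Status"});
    //     for (const auto& s : services)
    //         set->add_row({s.name, s.status});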
The debug assertion verifying that services are destroyed only on the
main worker would be triggered on shutdown as there is no current worker
at that point in time. In addition to this, it is wrong to call
service_destroy at shutdown as that will remove persisted
configurations. The service_free function can be called directly as we
know no other threads are running when the services are being torn down.
Also added the missing check that the destroyInstance function is
implemented before calling it.
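A sketch of the added guard, with a simplified stand-in for the module's
entry point table:

    struct RouterApi
    {
        void* (*createInstance)(void* service);
        void  (*destroyInstance)(void* instance);   // optional entry point
    };

    void destroy_router(RouterApi* api, void* instance)
    {
        // A module may leave destroyInstance unimplemented, so check the
        // function pointer before calling it.
        if (api->destroyInstance && instance)
        {
            api->destroyInstance(instance);
        }
    }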
When the configuration was loaded, the dependencies were resolved at the
time the objects were constructed. To remove the need to check whether a
dependency exists at object creation time, the dependencies can be
resolved by the core as a part of the configuration processing.
The circular dependency resolution uses a template function to solve the
problem in a generic fashion. This might be slightly overkill but it is a
good way to test the waters and see whether the function would be usable
as a general utility function.
Only explicit object dependencies are resolved. If a module declares a
parameter as a string but uses it as an object name, the dependency is
not resolved and object creation can fail.
As the dependency resolution uses Tarjan's algorithm, it has the
side-effect of producing an ordering of the objects that satisfies their
dependencies.
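A self-contained sketch of the property being relied on, with node
indices standing in for configuration objects and edges pointing from an
object to its dependencies (the real implementation is the templated
function mentioned above):

    #include <algorithm>
    #include <functional>
    #include <stack>
    #include <vector>

    // Tarjan's algorithm: strongly connected components are emitted only
    // after every component they depend on, so the emission order is a
    // valid construction order.
    std::vector<std::vector<int>> tarjan(const std::vector<std::vector<int>>& deps)
    {
        int n = static_cast<int>(deps.size());
        int counter = 0;
        std::vector<int> index(n, -1), lowlink(n, 0);
        std::vector<bool> on_stack(n, false);
        std::stack<int> stack;
        std::vector<std::vector<int>> components;

        std::function<void(int)> visit = [&](int v) {
            index[v] = lowlink[v] = counter++;
            stack.push(v);
            on_stack[v] = true;

            for (int w : deps[v])
            {
                if (index[w] == -1)
                {
                    visit(w);
                    lowlink[v] = std::min(lowlink[v], lowlink[w]);
                }
                else if (on_stack[w])
                {
                    lowlink[v] = std::min(lowlink[v], index[w]);
                }
            }

            if (lowlink[v] == index[v])     // v is the root of a component
            {
                std::vector<int> component;
                int w;
                do
                {
                    w = stack.top();
                    stack.pop();
                    on_stack[w] = false;
                    component.push_back(w);
                } while (w != v);
                components.push_back(std::move(component));
            }
        };

        for (int v = 0; v < n; v++)
        {
            if (index[v] == -1)
            {
                visit(v);
            }
        }

        return components;
    }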
Two std::strings that had been fixed could have equal C strings while
comparing as unequal with operator==. This was a result of the in-place
string modification done by fix_object_name.
Converted the internal header into a C++ header, added std::string
overload and fixed use of the function.
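A small demonstration of the symptom (not of fix_object_name itself):
modifying the buffer in place leaves the size untouched, so the C strings
agree while operator== does not:

    #include <cassert>
    #include <cstring>
    #include <string>

    int main()
    {
        std::string a = "my";
        std::string b = "my-service";

        b[2] = '\0';    // truncate in place without updating the size

        assert(strcmp(a.c_str(), b.c_str()) == 0);  // C strings are equal
        assert(a != b);                             // but the sizes differ
        return 0;
    }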
Data can now be stored in the thread-local storage of a worker. By
acquiring a unique handle from the worker, a module can store a
thread-local value.
This functionality will be used to store configurations that are sometimes
updated at runtime but are largely read-only. By avoiding shared data
altogether, performance is not affected. The only synchronization that is
done is on update.
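An illustrative sketch of the handle-based storage; the class and method
names are hypothetical, not the worker's exact API:

    #include <atomic>
    #include <cstdint>
    #include <unordered_map>

    class Worker
    {
    public:
        // A module acquires a process-wide unique handle once...
        static uint64_t create_local_data_handle()
        {
            static std::atomic<uint64_t> next {0};
            return next.fetch_add(1);
        }

        // ...and uses it to store and read a value local to this worker.
        // No locking is needed: each worker only touches its own map.
        void set_local_data(uint64_t handle, void* data)
        {
            m_local_data[handle] = data;
        }

        void* get_local_data(uint64_t handle) const
        {
            auto it = m_local_data.find(handle);
            return it != m_local_data.end() ? it->second : nullptr;
        }

    private:
        std::unordered_map<uint64_t, void*> m_local_data;
    };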
Also added a helper function for broadcasting tasks to all routing
workers. With the old mxs_rworker_broadcast_message function, a
broadcasted function call was always queued for execution. The new
mxs_rworker_broadcast executes the task immediately on the local worker
and queues it for execution on the other routing workers.
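A sketch of the difference in semantics; the types and helpers below are
stand-ins, not the real worker interface:

    #include <functional>
    #include <vector>

    struct Worker
    {
        void post(const std::function<void()>& task);   // queue on that worker
        static Worker* current();                       // the calling worker
        static std::vector<Worker*>& all_routing();     // every routing worker
    };

    void broadcast(const std::function<void()>& task)
    {
        for (Worker* w : Worker::all_routing())
        {
            if (w == Worker::current())
            {
                task();         // executed immediately on the local worker
            }
            else
            {
                w->post(task);  // queued for execution on the other workers
            }
        }
    }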
There exists a dependency on the configuration for the workers: the total
number of worker threads is defined by the `threads` parameter. This means
that the global configuration section must be read before workers are
started.
Commit 411b70e25656317909e54f748f8012593120041f broke MaxScale and turned
it into a single-threaded application, as the default configuration value
of one worker was used.
Converted the internal service header to a C++ header and moved all
functions there that are for internal use only.
Added the new Service type that inherits the SERVICE struct. This is to
distinguish the opaque external C interface from the C++ internals.
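The pattern in a nutshell (member names are illustrative): C code sees
only the opaque base struct while the C++ internals use the derived type:

    #include <string>
    #include <vector>

    struct SERVICE
    {
        const char* name;   // fields visible through the C interface
    };

    struct Service : public SERVICE
    {
        // C++-only state, invisible to C callers.
        std::vector<std::string> filters;
    };

    // Internal code can downcast safely because every SERVICE is
    // allocated as a Service:
    //     Service* service = static_cast<Service*>(svc);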
The filter implementation is now fully hidden. Also converted it to a C++
struct allocated with new and stored the filters in a global list instead
of embedding the list in the object itself.
When a session is closed, it releases a reference on the service and
checks if it was the last session for a destroyed service. The state of
the service was loaded after the reference count was decremented. This
behavior introduced a race condition where it was possible for a service
to be freed twice, first by the thread that marked the service as
destroyed and again by the last session for that service. By always
loading the service state before decrementing the reference count, we
avoid this race condition.
Currently, the memory ordering used for the reference counting is too
strict and could be relaxed. By default, all atomic operations use
sequentially consistent memory ordering. This guarantees correct behavior
but imposes a performance penalty. Incrementing the reference counts could
be done with a relaxed memory order as long as we know the reference
we're incrementing is valid. Releasing a reference must use an
acquire-release order to guarantee the read-modify-write operation is
successful.
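A sketch of the pattern described above, not the exact internal logic:

    #include <atomic>

    struct Service
    {
        std::atomic<bool> active {true};    // false once destroyed
        std::atomic<int>  refcount {1};
    };

    void service_inc_ref(Service* s)
    {
        // The caller already holds a valid reference, so relaxed suffices.
        s->refcount.fetch_add(1, std::memory_order_relaxed);
    }

    void service_dec_ref(Service* s)
    {
        // Load the state *before* the decrement: once the count hits
        // zero, another thread may already have freed the service.
        bool was_active = s->active.load();

        if (s->refcount.fetch_sub(1, std::memory_order_acq_rel) == 1
            && !was_active)
        {
            delete s;   // last reference to a destroyed service
        }
    }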
The previous implementation did not destroy filters that were not used by
services. With the full initialization of filters in filter_alloc, we can
simply traverse the list of created filters and destroy them knowing that
they are all valid.
Changed the filter_alloc function to fully initialize the filter. This
means that if filter_alloc returns a non-NULL pointer, the filter was
successfully loaded and an instance was successfully created.
The relationships of a service are handled by the service alteration
code. Currently, only server relationships are handled by the code in
question and filter relationships are ignored.
If an invalid value or type is given to the REST API, having the expected
type as well as the given type in the error makes problem resolution
easier.
Also added a value check into MaxCtrl for listener ports.
MaxScale can now be started with an empty configuration file and services
can be created at runtime. Filters cannot yet be created at runtime so
complete runtime creation of configurations is not yet possible.
By reading the configuration as late as possible, we allow the objects to
be created in an environment which is nearly identical to the environment
that is present at runtime.
This change makes it possible to execute worker tasks in the instance
creation functions of various modules. The avrorouter in particular
depended on being able to queue worker tasks on startup.
The initialization and starting of the housekeeper is now done
separately. This allows housekeeper tasks to be created when the services
are being created while still preventing the execution of the task before
the startup is complete.
The test failed because router instances are now created when the service
is allocated. In addition to this, a debug assertion was hit when a
service was freed if the router instance creation failed.
Services can now be destroyed if they have no active listeners and they
are not linked to servers. When these conditions are met, the service will
be destroyed when the last session for the service is closed.
The closing of a service will close all listeners that were once assigned
to the service. This allows closing of the ports at runtime which
previously was done only on shutdown.
Exposed the command through the REST API but not through MaxAdmin as it is
deprecated.
When a service is freed, it will free all of its listeners causing their
respective DCBs to be closed. This requires that listeners can be removed
from the worker DCB list.
The runtime configuration JSON validation now allows multiple
relationships to be verified at one time. This makes it easier to validate
all objects using the same framework.
When a listener is removed from a service, it should also be removed from
any workers it has been added to. This guarantees that if the opening of
the listener was successful, no requests will be accepted on it after the
removal of the listener.
As all connections should be accepted via dcb_accept, it is the optimal
place to calculate how many open client connections per service there
are. The decrement is done when the session is closed instead of when
dcb_close is called for the client DCB. This allows the client count to
be the absolute reference count that sessions have on a service.
The current client count is a duplicate counter that should match the
n_current value in SERVICE_STATS. The former differs from the latter in
that it is incremented when the client DCB is accepted instead of when
the session is created.
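A self-contained sketch of the accounting with simplified stand-in types;
the real code paths are dcb_accept and the session teardown:

    #include <atomic>

    struct Service
    {
        std::atomic<int> client_count {0};
    };

    struct Session
    {
        Service* service;
    };

    Session* accept_client(Service* service)    // models dcb_accept
    {
        // Every client connection is accepted here, so this is the one
        // place where the count is incremented.
        service->client_count.fetch_add(1);
        return new Session {service};
    }

    void close_session(Session* session)        // models session closure
    {
        // Decrementing on session close, not on client DCB close, keeps
        // the count equal to the number of sessions using the service.
        session->service->client_count.fetch_sub(1);
        delete session;
    }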
By creating the router instance as a part of the service allocation
process, we are guaranteed that either the creation of the service is
completely successful or it fails. This should make runtime creation of
services easier.
A -Wunused-result warning in test_logthrottling.cc was causing an error
when building MaxScale from source. The warning can be silenced by
putting the call that triggers it inside an if clause.
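The technique, with an illustrative call in place of the one in the test:

    #include <unistd.h>

    void write_log(int fd, const char* msg, size_t len)
    {
        // Checking the result inside an if "uses" it, which silences
        // -Wunused-result; the result itself is intentionally ignored.
        if (write(fd, msg, len) == -1)
        {
        }
    }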
When the query queue does not contain a complete packet
(i.e. modutil_get_next_MySQL_packet returns NULL), an informative dump of
how many bytes are stored and what they contain is logged.
By aborting the process if memory runs out when a buffer needs to be made
contiguous, we rule out other, more subtle, errors. Failing as soon as
possible when memory allocation fails gives better error messages.
The LocalClient micro-client required a reference to the session that was
valid at construction time. This is the reason why the previous
implementation used dcb_foreach to first gather the targets and then
execute queries on them. By replacing this reference with pointers to the
raw data it requires, we lift the requirement that the originating
session be alive at construction time.
Now that the LocalClient no longer holds a reference to the session, the
killing of the connection does not have to be done on the same thread that
started the process. This prevents the deadlock that occurred when
concurrent dcb_foreach calls were made.
Replaced the unused dcb_foreach_parallel with a version of dcb_foreach
that allows iteration of the DCBs local to this worker. The
dcb_foreach_local function is the basis on which all DCB access outside
of administrative tasks should be built.
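A sketch of the shape of the call; the DCB type, the list accessor and
the callback signature below are simplified stand-ins:

    struct DCB
    {
        DCB* next;
    };

    DCB* this_worker_dcb_list();    // assumed accessor for the worker's list

    void dcb_foreach_local(bool (*func)(DCB* dcb, void* data), void* data)
    {
        // Walk only the DCBs owned by the calling worker; returning false
        // from the callback stops the iteration.
        for (DCB* dcb = this_worker_dcb_list(); dcb; dcb = dcb->next)
        {
            if (!func(dcb, data))
            {
                break;
            }
        }
    }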
This change will introduce a regression in functionality: The client will
no longer receive an error if no connections match the KILL query
criteria. This is done to avoid having to synchronize the workers after
they have performed the killing of their own connections.