The service would not be in the list if it failed before it was placed
there. Moving the actual freeing of memory into the Service destructor
allows it to be called directly when we know the service is not in the
list. This also ensures that only valid services are placed into the
global list of services.
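
A minimal sketch of the resulting allocation pattern, with illustrative
names; the real Service type naturally has far more state:

    #include <string>
    #include <vector>

    // Stand-in for the real Service type (illustrative only)
    class Service
    {
    public:
        Service(std::string name)
            : m_name(std::move(name))
        {
        }

        ~Service()
        {
            // All memory owned by the service is freed here, so a direct
            // delete and the list teardown release it the same way.
        }

        bool is_valid() const
        {
            return !m_name.empty();
        }

    private:
        std::string m_name;
    };

    static std::vector<Service*> all_services;

    Service* service_alloc(std::string name)
    {
        Service* service = new Service(std::move(name));

        if (!service->is_valid())
        {
            delete service;     // Not in the list yet: safe to delete directly
            return nullptr;
        }

        all_services.push_back(service);    // Only valid services are listed
        return service;
    }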
To prevent freeing a partially constructed service, the memory allocation
checks were replaced with a runtime assertion. This can be changed when
the creation of the service is done only at a point where we know it can't
fail. Currently, the createInstance call expects the service as a
parameter, which prevents this.
The function is useful outside of the monitors, as it makes the execution
of worker tasks much more convenient. Currently, this change only moves
the code and takes it into use; there should be no functional changes.
Mostly uses the status functions for reading the flags. Strictly
speaking, this breaks the REST API, since in some cases (status
combinations) the printed string differs from what was printed before.
The signal handler no longer acquires the service list lock which removes
a number of deadlock possibilities from the shutdown process. Instead, a
global shutdown flag is set that serves the same purpose as the individual
service shutdown flags did.
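
The idea in a nutshell, assuming the flag is a lock-free atomic; the
names here are illustrative, not the actual ones:

    #include <atomic>

    // One global flag replaces the per-service shutdown flags. The
    // handler only stores to a lock-free atomic, which is safe in
    // signal context; no locks are taken.
    static std::atomic<bool> shutdown_requested{false};

    extern "C" void shutdown_signal_handler(int)
    {
        shutdown_requested.store(true, std::memory_order_relaxed);
    }

    // Polled by the workers instead of each service checking its own flag
    bool maxscale_is_shutting_down()
    {
        return shutdown_requested.load(std::memory_order_relaxed);
    }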
get_shard_target had become a little bloated with the recent changes, so
some routing cases were moved into their own functions. Also removed
some code that was not needed.
The cache now enforces the defined maximum size by evicting entries
when the insertion of a new entry would cause the maximum size to be
exceeded. Currently, the eviction algorithm simply removes a random
element.
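
A sketch of the insertion path under these rules; the container and the
key/value types are illustrative:

    #include <cstddef>
    #include <cstdlib>
    #include <iterator>
    #include <string>
    #include <unordered_map>

    using Cache = std::unordered_map<std::string, std::string>;

    // Insert a value, evicting random entries first if the insertion
    // would otherwise push the cache past its defined maximum size.
    void cache_insert(Cache& cache, std::size_t max_size,
                      const std::string& key, const std::string& value)
    {
        while (cache.size() >= max_size && !cache.empty())
        {
            // Pick a pseudo-random victim. Advancing the iterator is
            // O(n), which is fine for a sketch of the policy.
            auto victim = std::next(cache.begin(), std::rand() % cache.size());
            cache.erase(victim);
        }

        cache[key] = value;
    }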
Replaced the previous RESULTSET with the new implementation. As the new
ResultSet doesn't have a JSON streaming capability, the MaxInfo JSON
interface has been removed. This should not be a big problem as the REST
API offers the same information in a more secure and structured way.
The result set mechanism was ill-suited for iteration over
lists. Converting it into a class and inverting it by pushing rows into
the result set, instead of the result set asking for rows, makes it very easy
to use with lists. It also solves some of the consistency problems that
existed with the previous implementation.
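
A sketch of the inverted interface; the method names are assumptions
based on the description above, not the verified API:

    #include <memory>
    #include <string>
    #include <vector>

    // Rows are pushed into the result set instead of the result set
    // pulling them through callbacks.
    class ResultSet
    {
    public:
        static std::unique_ptr<ResultSet> create(std::vector<std::string> columns)
        {
            return std::unique_ptr<ResultSet>(new ResultSet(std::move(columns)));
        }

        void add_row(std::vector<std::string> values)
        {
            m_rows.push_back(std::move(values));
        }

    private:
        ResultSet(std::vector<std::string> columns)
            : m_columns(std::move(columns))
        {
        }

        std::vector<std::string>              m_columns;
        std::vector<std::vector<std::string>> m_rows;
    };

Iterating over a list then becomes trivial: create the result set with
its column names and call add_row once for each element of the list.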
The debug assertion which checks that services are destroyed only on the
main worker would be triggered on shutdown, as there is no current worker
at that point in time. In addition to this, it is wrong to call
service_destroy at shutdown as that will remove persisted
configurations. The service_free function can be called directly as we
know that no other threads are running when the services are being torn
down.
Also added the missing check that the destroyInstance function is
implemented before calling it.
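
The added check, in essence; the surrounding names are illustrative:

    // destroyInstance is an optional entry point in the module API,
    // so it must be checked before it is called.
    if (api->destroyInstance)
    {
        api->destroyInstance(instance);
    }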
When the configuration was loaded, the dependencies were resolved at the
time the objects were constructed. To remove the need to check whether a
dependency exists at object creation time, the dependencies can be
resolved by the core as a part of the configuration processing.
The circular dependency resolution uses a template function to solve the
problem in a generic fashion. This might be slightly overkill but it is a
good way to test the waters and see whether the function would be usable
as a utility function.
Only explicit object dependencies are resolved. If a module declares a
parameter as a string but uses it like an object name, the dependency is
not resolved and object creation can fail.
As the dependency resolution uses Tarjan's algorithm, it has the
side-effect of producing the objects in an order that satisfies their
dependencies.
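
A generic sketch of how Tarjan's algorithm yields that ordering as a
by-product, assuming a dependency graph keyed by object name; all names
here are illustrative:

    #include <algorithm>
    #include <functional>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    using Graph = std::map<std::string, std::set<std::string>>;
    using Components = std::vector<std::vector<std::string>>;

    // Tarjan's strongly connected components. With edges pointing from
    // an object to its dependencies, the components come out in reverse
    // topological order: every dependency appears before its dependents.
    Components strongly_connected_components(const Graph& graph)
    {
        Components result;
        std::map<std::string, int> index;
        std::map<std::string, int> lowlink;
        std::set<std::string> on_stack;
        std::vector<std::string> stack;
        int next_index = 0;

        std::function<void(const std::string&)> visit =
            [&](const std::string& node) {
                index[node] = lowlink[node] = next_index++;
                stack.push_back(node);
                on_stack.insert(node);

                auto it = graph.find(node);
                if (it != graph.end())
                {
                    for (const auto& dep : it->second)
                    {
                        if (index.count(dep) == 0)
                        {
                            visit(dep);
                            lowlink[node] = std::min(lowlink[node], lowlink[dep]);
                        }
                        else if (on_stack.count(dep))
                        {
                            lowlink[node] = std::min(lowlink[node], index[dep]);
                        }
                    }
                }

                if (lowlink[node] == index[node])   // Root of a component
                {
                    std::vector<std::string> component;
                    std::string member;
                    do
                    {
                        member = stack.back();
                        stack.pop_back();
                        on_stack.erase(member);
                        component.push_back(member);
                    }
                    while (member != node);
                    result.push_back(std::move(component));
                }
            };

        for (const auto& kv : graph)
        {
            if (index.count(kv.first) == 0)
            {
                visit(kv.first);
            }
        }

        return result;
    }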
The monitor can now differentiate slaves that have a running series of
slave connections to the master from slaves with broken links. Both still
get the SERVER_SLAVE flag if 'detect_stale_slave' is on.
Also, relay servers must be running.
Two fixed std::strings would compare equal as C strings but different
when compared with operator==. This was a result of the string
modification done by fix_object_name.
Converted the internal header into a C++ header, added a std::string
overload and fixed the uses of the function.
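
A minimal reproduction of the symptom; the values are illustrative:

    #include <cassert>
    #include <cstring>
    #include <string>

    int main()
    {
        std::string a = "server1";
        std::string b = "server1 ";

        // Overwriting a character with '\0' does not change the
        // std::string's recorded length, only its C string view.
        b[7] = '\0';

        assert(strcmp(a.c_str(), b.c_str()) == 0);  // Equal as C strings
        assert(a != b);                             // Different as std::strings
        return 0;
    }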
By using the worker local data mechanism, data can be efficiently cached
on the local worker. This avoids all synchronization on reads and only
requires synchronization on a configuration update.
As an additional observation, the testing of std::mutex and SPINLOCK shows
that std::mutex far outperforms the MaxScale SPINLOCK even on
non-conflicting workloads.
Data can now be stored in the thread-local storage of the worker. By acquiring
a unique handle from the worker, a module can store a thread-local
value.
This functionality will be used to store configurations that are sometimes
updated at runtime but are largely read-only. By avoiding shared data
altogether, performance is not affected. The only synchronization that is
done is on update.
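
A rough sketch of the handle mechanism; the class shape and the method
names are assumptions for illustration:

    #include <atomic>
    #include <cstdint>
    #include <unordered_map>

    class Worker
    {
    public:
        using Handle = std::uint64_t;

        // Process-wide unique handle identifying one logical value
        static Handle acquire_handle()
        {
            static std::atomic<Handle> next{0};
            return next.fetch_add(1, std::memory_order_relaxed);
        }

        // Called only from the owning worker's thread: no locking needed
        void set_local(Handle key, void* value)
        {
            m_local_data[key] = value;
        }

        void* get_local(Handle key) const
        {
            auto it = m_local_data.find(key);
            return it != m_local_data.end() ? it->second : nullptr;
        }

    private:
        std::unordered_map<Handle, void*> m_local_data;
    };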
Also added a helper function for broadcasting tasks to all routing
workers. With the old mxs_rworker_broadcast_message function, a
broadcasted function call was always queued for execution. The new
mxs_rworker_broadcast immediately executes the task on the local worker
and queues it for execution on the other routing workers.
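
The difference between the two, sketched; the RoutingWorker methods used
here (get_current, all, post) are assumptions, not the verified API:

    void mxs_rworker_broadcast(void (*task)(void* data), void* data)
    {
        RoutingWorker* current = RoutingWorker::get_current();

        for (RoutingWorker* worker : RoutingWorker::all())
        {
            if (worker == current)
            {
                task(data);                 // Run immediately on this worker
            }
            else
            {
                worker->post(task, data);   // Queued on the other workers
            }
        }
    }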
There exists a dependency on the configuration for the workers: the total
number of worker threads is defined by the `threads` parameter. This means
that the global configuration section must be read before workers are
started.
Commit 411b70e25656317909e54f748f8012593120041f broke MaxScale and turned
it into a single threaded application as the default configuration value
of one worker was used.
Converted the internal service header to a C++ header and moved all
functions there that are for internal use only.
Added the new Service type that inherits from the SERVICE struct. This
distinguishes the opaque external C interface from the C++ internals.
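
In outline, assuming simplified members:

    // The C modules see only the opaque struct...
    struct SERVICE
    {
        const char* name;
        // ...members that are part of the public C interface
    };

    // ...while the core uses the C++ type layered on top of it.
    class Service : public SERVICE
    {
    public:
        // Internal C++-only state and methods live here
    };

    // A SERVICE* handed out through the C API can be cast back inside
    // the core, because every SERVICE is in fact a Service:
    //   Service* service = static_cast<Service*>(external_handle);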
The filter implementation is now fully hidden. Also converted it to a C++
struct allocated with new and stored the filters in a global list instead
of embedding the list in the object itself.
When a session is closed, it releases a reference on the service and
checks if it was the last session for a destroyed service. The state of
the service was loaded after the reference count was decremented. This
behavior introduced a race condition where it was possible for a service
to be freed twice, first by the thread that marked the service as
destroyed and again by the last session for that service. By always
loading the service state before decrementing the reference count, we
avoid this race condition.
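
A simplified sketch of the corrected ordering; the member names and the
exact bookkeeping are illustrative:

    #include <atomic>

    struct Service
    {
        std::atomic<bool> destroyed{false};   // Set when the service is destroyed
        std::atomic<int>  refcount{1};
    };

    void service_free(Service* service);      // Releases all service memory

    void session_release_service(Service* service)
    {
        // Load the state first: once our reference is released, another
        // thread may free the service at any moment.
        bool destroyed = service->destroyed.load(std::memory_order_acquire);

        if (service->refcount.fetch_sub(1, std::memory_order_acq_rel) == 1
            && destroyed)
        {
            service_free(service);  // This was the last reference
        }
    }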
Currently, the memory ordering used for the reference counting is too
strict and could be relaxed. By default, all atomic operations use
sequentially consistent memory ordering. This guarantees correct behavior
but imposes a performance penalty. Incrementing the reference count could
be done with a relaxed memory order, as long as we know the reference
we're incrementing is valid. Releasing a reference must use
acquire-release ordering so that all writes made while the reference was
held are visible to the thread that performs the final decrement.
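
The classic form of that pattern, as a sketch:

    #include <atomic>

    struct RefCounted
    {
        std::atomic<int> refcount{1};
    };

    void add_ref(RefCounted* obj)
    {
        // Relaxed is enough for the increment: the caller already holds
        // a valid reference, so the object cannot vanish mid-operation.
        obj->refcount.fetch_add(1, std::memory_order_relaxed);
    }

    void release(RefCounted* obj)
    {
        // Acquire-release on the decrement: the release half publishes
        // the writes made through this reference, the acquire half makes
        // them visible to the thread performing the final decrement.
        if (obj->refcount.fetch_sub(1, std::memory_order_acq_rel) == 1)
        {
            delete obj;
        }
    }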