This will simply cause a task to be posted to each worker.
If the workers are running normally, the task will reach the
workers, the associated semaphore will be posted, and the REST-API
call will return. If any worker is not running normally, the
task will not be processed and the REST-API call will hang.
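Roughly, the mechanism amounts to the following sketch; the names and
signatures here are assumptions for illustration, not the actual
interface:

// Post a no-op task to every worker and wait until each one has
// acknowledged it by posting a shared semaphore.
mxs::Semaphore sem;
size_t n = mxs::RoutingWorker::broadcast([&sem]() {
    sem.post();    // Runs in each worker's message loop.
});

sem.wait_n(n);     // Blocks indefinitely if some worker is stuck.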
This commit introduces the plumbing support for obtaining
classification information of a statement using the REST-API.
It introduces a URL like
/v1/maxscale/query_classifier/classify?sql=SELECT+1
that will return the information as a JSON object in the response.
Subsequent commits will provide the actual information.
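For example, a GET request to the URL above might return something along
these lines once the information is in place (the attribute names below
are hypothetical placeholders, as the actual content is defined by the
subsequent commits):

{
    "data": {
        "attributes": {
            "parse_result": "...",
            "type_mask": "..."
        }
    }
}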
See the script directory for the method. The script to run in the top-level
MaxScale directory is called maxscale-uncrustify.sh, which uses
another script, list-src, from the same directory (so you need to have
it in your PATH). The uncrustify version used was 0.66.
Given that worker.hh was public, it made sense to make routingworker.hh
public as well. This removes the need to include private headers in
modules and allows C++ constructs to be used in C++ code when previously
only the C API was available.
The log manager now uses the logger type for logging. Removed some of the
code that depended on the renamed functions to make it compile. The next
step is to remove all unused code in the log manager.
This is the first step in some cleanup of the Worker interface.
The execution mode must now be explicitly specified, but that is
just a temporary step. Further down the road, _posting_ will
*always* mean via the message loop while _executing_ will optionally
and by default mean direct execution if the calling thread is that
of the worker.
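As a sketch, the explicit mode could look like the following; the enum
and function names are assumptions, not the actual interface:

#include <functional>

class Worker
{
public:
    enum execute_mode_t
    {
        EXECUTE_QUEUED, // Always execute via the message loop.
        EXECUTE_AUTO    // Execute directly if the calling thread
                        // is the thread of this worker.
    };

    bool execute(std::function<void()> func, execute_mode_t mode);
};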
The service now uses a std::vector<SFilterDef> to store the filters it
uses. Most internal parts deal with the SFilterDef but debugcmd.cc still
moves raw pointers around (needs to be changed).
The filter implementation is now fully hidden. Also converted it to a C++
struct allocated with new and stored the filters in a global list instead
of embedding the list in the object itself.
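Assuming SFilterDef is a shared pointer to the filter definition (a
guess based on the naming convention), the arrangement is roughly:

#include <memory>
#include <vector>

struct FilterDef;  // Implementation hidden from users of the filter.

using SFilterDef = std::shared_ptr<FilterDef>;

class Service
{
private:
    std::vector<SFilterDef> m_filters;  // The filters the service uses.
};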
Services can now be destroyed if they have no active listeners and they
are not linked to servers. When these conditions are met, the service will
be destroyed when the last session for the service is closed.
The closing of a service will close all listeners that were once assigned
to the service. This allows closing of the ports at runtime which
previously was done only on shutdown.
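In outline, the destruction logic amounts to something like this; the
class and function names are illustrative, not the actual implementation:

class Service
{
public:
    bool has_active_listeners() const;
    bool is_linked_to_servers() const;
    int  session_count() const;
};

// Checked e.g. when a listener is removed, a server is unlinked or a
// session is closed.
void check_service_destruction(Service* pService)
{
    if (!pService->has_active_listeners() && !pService->is_linked_to_servers())
    {
        if (pService->session_count() == 0)
        {
            delete pService;  // No sessions left, destroy immediately.
        }
        // Otherwise the service is destroyed when its last session closes.
    }
}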
Exposed the command through the REST API but not through MaxAdmin as it is
deprecated.
The id has now been moved from mxs::Worker to mxs::RoutingWorker
and the implications are felt in many places.
The primary need for the id was to be able to access worker-specific
data, maintained outside of a routing worker, when given a worker
(the id is used to index into an array). Slightly related to that
was the need to be able to iterate over all workers. That obviously
implies some kind of collection.
That causes all sorts of issues if there is a need to be able
to create and destroy a worker at runtime. With the id removed from
mxs::Worker, all those issues are gone, and it is perfectly OK to create
and destroy mxs::Workers as needed.
Further, while there is a need to broadcast a particular message to
all _routing_ workers, it hardly makes sense to broadcast a particular
message to _all_ workers. Consequently, only routing workers are kept
in a collection and all static member functions dealing with all
workers (e.g. broadcast) have now been moved to mxs::RoutingWorker.
Now, instead of passing the id around we instead deal directly
with the worker pointer. Later the data in all those external arrays
will be moved into mxs::[Worker|RoutingWorker] so that worker related
data is maintained in exactly one place.
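Schematically, the change is along these lines (the accessor and
signatures are assumptions based on the description above):

// Before: worker-specific data indexed by id in an external array.
SomeData data_array[MAX_WORKERS];
SomeData& data = data_array[worker_id];

// After: deal directly with the worker pointer, and keep only
// routing workers in a collection that can be broadcast to.
mxs::RoutingWorker* pWorker = mxs::RoutingWorker::get_current();
mxs::RoutingWorker::broadcast(some_task);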
Worker is now the base class of all workers. It has a message
queue and can be run in a thread of its own, or in the calling
thread. Worker cannot be used as such; a concrete worker
class must be derived from it. Currently there is only one
concrete class RoutingWorker.
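In outline, the hierarchy now looks roughly as follows (simplified; the
actual classes have considerably more members):

namespace maxscale
{

class Worker
{
public:
    virtual ~Worker() {}

    bool start();  // Run the message loop in a thread of its own.
    void run();    // Run the message loop in the calling thread.

protected:
    Worker() {}    // Cannot be used as such; must be derived from.
};

// Currently the only concrete worker class.
class RoutingWorker : public Worker
{
};

}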
There is some overlap in functionality between Worker and
RoutingWorker, as there is e.g. a need for broadcasting a
message to all routing workers, but not to other workers.
Currently, other workers cannot be created, as the array holding
the pointers to the workers is exactly as large as the number of
RoutingWorkers there will be. That will be changed so that
the maximum number of threads is hardwired to some ridiculous
value such as 128. That's the first step in the path towards
a situation where the number of worker threads can be changed
at runtime.
A new class mxs::Worker will be introduced and mxs::RoutingWorker
will be inherited from that. mxs::Worker will basically only be a
thread with a message-loop.
Once available, all current non-worker threads (except the one
implicitly created by microhttpd) can be created by inheriting
from it; in practice that means the housekeeping thread, all
monitor threads and possibly the logging thread.
The benefit of this arrangement is that there will then be a general
mechanism for cross thread communication without having to use any
shared data structures.
The output generated by a failed call to a module command was previously
overwritten with the error messages stored in the module command
subsystem.
In the case of a failure, the proper procedure is to check if the output
generated by the module command conforms to the JSON API error
specification and if it does, combine it with any system generated error
messages. If the output does not conform, it is stored in the "meta" field
of the returned object. This allows all of the generated output to be
saved.
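For reference, the JSON API error specification expects errors to be
reported in a top-level "errors" array of error objects, for example:

{
    "errors": [
        {
            "detail": "Human-readable description of the error"
        }
    ]
}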
All internal code is now inside an anonymous namespace to prevent its
use outside of the compilation unit.
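That is, the standard C++ idiom:

namespace
{

// Visible only within this compilation unit.
void some_internal_helper()
{
}

}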
Also fixed the wrong return type of ResourceWatcher::etag.
Executing the commands inside a worker thread allows further improvements
to job queuing but mainly it fixes the problem of loading users when
listeners are allocated at runtime.
When a runtime listener was being created, it was allocated in the admin
thread whereas the listeners created at startup were allocated in the
"main" thread. This caused a minor difference in how administrative
functions were handled by the REST API and MaxAdmin. The only real problem
is that listener allocation depends on being done inside a worker thread
as it lazily initializes some resources.
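The fix, as a sketch; the accessor and function names here are
assumptions based on the worker interface described in other commits:

// Instead of allocating the listener directly in the admin thread,
// execute the allocation on the main worker and wait for completion.
mxs::Worker* pMain = mxs::RoutingWorker::get_main();  // hypothetical accessor
bool rv = false;
pMain->call([&rv]() {
    rv = create_listener();  // Now runs inside a worker thread.
});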
The internal header directory conflicted with in-source builds, causing a
build failure. This is fixed by renaming the internal header directory to
something other than maxscale.
The renaming pointed out a few problems in a couple of source files that
appeared to include internal headers when the headers were in fact public
headers.
Fixed maxctrl in-source builds by making the copying of the sources
optional.
The JSON API specification states that all resources must support direct
modification of resource relationships by providing only the definition
for a particular relationship type to a /:type/:id/relationships/:type
endpoint.
The relevant part of the JSON API specification:
http://jsonapi.org/format/#crud-updating-to-many-relationships
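For example, the servers of a service could then be replaced with a
single request of the following form (the resource names are
illustrative):

PATCH /v1/services/my-service/relationships/servers

{
    "data": [
        { "id": "server1", "type": "servers" },
        { "id": "server2", "type": "servers" }
    ]
}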
Since the module command interface was expanded to include a JSON output
parameter, there is no longer a need for an output DCB. As the JSON can be
printed by both maxadmin and the REST API, this allows the removal of
explicit output formatting in module commands.
Cleaned up the documentation and reformatted the parameter list for
listener creation to consist of links.
Added some comments to the resource declarations in resource.cc to prevent
their order from being altered.
The modification times of resource collections weren't updated when an
individual resource was updated. By walking back the path of a resource
and updating the modification times along the way, all nodes in the path
now get the correct modification time.
The checks whether a request body is present are now done at a higher
level. This removes the need to do the checks at the resource handler
callback, removing duplicated code.
The checks are done by adding constraints to resources that must be
fulfilled by each request.
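A sketch of what such a constraint could look like (the names are
assumptions):

#include <cstdint>

class Resource
{
public:
    enum resource_constraint
    {
        NONE         = 0,
        REQUIRE_BODY = (1 << 0)  // The request must have a JSON body.
    };

    bool requires_body() const
    {
        return m_constraints & REQUIRE_BODY;
    }

private:
    uint32_t m_constraints = NONE;
};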
Added debug assertions to make sure that the core logic of the REST API
resource system works.
The listeners are now a proper sub-resource of the service resource. This
means that it acts like a normal resource and can be queried both as a
collection of resources and as an individual resource.
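For example, with hypothetical names:

GET /v1/services/my-service/listeners             (as a collection)
GET /v1/services/my-service/listeners/my-listener (as an individual resource)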
All resources now use the `state` member to describe their internal
state. This includes servers, services and monitors. This means that the
`status` keyword can be reserved for something else and it can be removed
until it is needed again.
Changed the module maturity field to `maturity` to better describe its
purpose.
The module commands can now produce JSON formatted output which is passed
to the caller. The output should conform to the JSON API as closely as
possible.
Currently, the REST API wraps all JSON produced by module commands inside
a meta-object of the following type:
{
    "meta": <output of module command>
}
This allows the output to be JSON API conformant without modifying the
modules and allows incremental updates to code.
The PATCH method should now be used instead of the PUT method to update
resources. As PUT request bodies should represent complete resources, the
use of PUT to update resources is no longer supported.
Altered tests to use PATCH instead of PUT for updating resources.
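For example, changing a single parameter now requires only the changed
part of the resource (the payload below is illustrative):

PATCH /v1/servers/server1

{
    "data": {
        "attributes": {
            "parameters": {
                "port": 3307
            }
        }
    }
}

With PUT, the body would have had to represent the complete resource.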
The module command self links now point to an endpoint that executes the
module command. Depending on the type of the module command, either a GET
or a POST request must be made.
The /maxscale/ resource now supports PUT requests which modify core
parameters. As not all parameters can be changed at runtime, only
modifications to parameters that support runtime configuration are
allowed.