The resources now properly process the parts of the URI. This allows, for
example, certain sessions to be inspected. The current functionality is
intended only for testing and does not yet do anything useful.
The actions taken by the resource manager are not yet done via the
inter-thread messaging system. Once the messages and the JSON
representation of the resources are implemented, the REST API resource
can actually be used.
The resource handler system is now usable, but it does not yet do anything
useful. It can, however, be tested for correctness.
Minor fixes to HttpResponse output and renaming of functions.
The Resource class is intended to be an abstraction of a resource
tree. Each node in the tree can perform actions. The tree is traversed
depth first so that deeper command paths resolve to the correct nodes.
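As a rough illustration of the idea, a node of such a tree might look like the
sketch below. The class layout, names and use of std::unique_ptr are assumptions
made for the example, not the actual implementation.

    // Hedged sketch of a resource tree node with depth-first path resolution.
    // Names and structure are illustrative, not the actual MaxScale code.
    #include <memory>
    #include <string>
    #include <vector>

    class Resource
    {
    public:
        explicit Resource(std::string name)
            : m_name(std::move(name))
        {
        }

        void add_child(std::unique_ptr<Resource> child)
        {
            m_children.push_back(std::move(child));
        }

        // Resolve a path such as {"servers", "server1"} by descending depth
        // first, so the deepest matching node handles the request.
        Resource* resolve(const std::vector<std::string>& path, size_t depth = 0)
        {
            if (depth == path.size())
            {
                return this; // Whole path consumed: this node handles it.
            }

            for (const auto& child : m_children)
            {
                if (child->m_name == path[depth])
                {
                    return child->resolve(path, depth + 1);
                }
            }

            return nullptr; // No deeper match: unknown resource.
        }

    private:
        std::string m_name;
        std::vector<std::unique_ptr<Resource>> m_children;
    };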
Currently, all the base resources defined in the REST API documents are
implemented so that they return a 200 OK response to all requests. Once the
internal data can be represented as JSON, the resources can be hooked up to
functions that generate it.
When a client requests a resource, the HttpRequest class now splits the
requested resource into its parts. This should help with resource
validation and navigation.
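For illustration, the splitting could work roughly as in the following sketch;
the function name and details are assumptions, not the actual HttpRequest code.

    // Hedged sketch: split "/v1/servers/server1" into {"v1", "servers", "server1"}.
    #include <sstream>
    #include <string>
    #include <vector>

    std::vector<std::string> split_resource(const std::string& uri)
    {
        std::vector<std::string> parts;
        std::istringstream ss(uri);
        std::string part;

        while (std::getline(ss, part, '/'))
        {
            if (!part.empty()) // Skip the empty token caused by the leading '/'
            {
                parts.push_back(part);
            }
        }

        return parts;
    }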
Added a test that checks that requested resources are split into the right
number of arguments and that the contents of each argument are correct.
The options of a request are now parsed and exposed by the HttpRequest
class.
Added tests for the request options.
Also added missing error handling of invalid requests.
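As a rough idea of what exposing the options involves, a sketch along these
lines would turn the query string into key-value pairs; the function name and
details are assumptions rather than the actual parser.

    // Hedged sketch: parse a query string such as "pretty=true&fields=name"
    // into key-value pairs. The real HttpRequest option handling may differ.
    #include <map>
    #include <sstream>
    #include <string>

    std::map<std::string, std::string> parse_options(const std::string& query)
    {
        std::map<std::string, std::string> options;
        std::istringstream ss(query);
        std::string token;

        while (std::getline(ss, token, '&'))
        {
            size_t eq = token.find('=');

            if (eq != std::string::npos)
            {
                options[token.substr(0, eq)] = token.substr(eq + 1);
            }
        }

        return options;
    }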
The class now generates default headers. The ETag and Last-Modified headers
do not yet represent an actual resource hash or modification time.
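A minimal sketch of the kind of defaults meant here, with placeholder values
standing in for the not-yet-real ETag and Last-Modified data:

    // Hedged sketch only; header names follow HTTP, the values are placeholders.
    #include <map>
    #include <string>

    std::map<std::string, std::string> default_headers()
    {
        std::map<std::string, std::string> headers;
        headers["Last-Modified"] = "Thu, 01 Jan 1970 00:00:00 GMT"; // not a real mtime yet
        headers["ETag"] = "\"0\"";                                  // not a real resource hash yet
        return headers;
    }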
The basic functionality of the HTTP responses is tested by the core test
suite. More advanced testing of the whole REST API is still required.
Removed the static `create` functions, as only the JSON parsing version
could generate errors, and even then errors were unlikely. By replacing the
static creator function with a normal constructor, the HttpResponse class
can now also be created on the stack, which makes it easier to use.
The HTTP response class simplifies the response creation. The next step is
to add generation of all the default headers that are needed by the REST
API.
The HTTP request body is expected to be a valid JSON object. All other
requests are considered malformed and result in an HTTP 400 error.
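The check itself can be done with Jansson roughly as follows; the helper name
and the surrounding request handling are assumptions, only the Jansson calls
are the library's real API.

    // Hedged sketch of checking that the body is a JSON object with Jansson.
    #include <cstddef>
    #include <jansson.h>

    bool body_is_valid_json_object(const char* body, size_t len)
    {
        json_error_t error;
        json_t* json = json_loadb(body, len, 0, &error);

        if (json == NULL)
        {
            return false; // Not valid JSON at all -> respond with HTTP 400
        }

        bool ok = json_is_object(json); // Arrays, strings etc. are also rejected
        json_decref(json);
        return ok;
    }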
Added the Jansson license to the LICENSE-THIRDPARTY.TXT file. Imported
some of the tests from the Jansson test suite to the HttpParser test.
The HttpParser class was renamed to HttpRequest, as it parses and processes
only HTTP requests. A second class needs to be added to handle HTTP response
generation.
Moved some of the HTTP constants and helper functions to a separate
http.hh header.
The admin requests are now processed in blocking mode. The timing out of
connections is handled by a dedicated timeout thread that checks the state
of each admin request.
The simplification will help with JSON parsing of PUT/POST commands. If
non-blocking IO were used, the network reading code and JSON parsing would
need a lot more work to handle partial reads.
If the administrative interface requires higher performance and
concurrency, a multi-threaded solution could be created.
The HTTP parser parses HTTP/1.1 messages into easily manageable data
structures. This should make it easier to map the HTTP requests into
actual commands in MaxScale.
When MaxScale is started, a separate thread for the administrative
interface is started. This allows the worker threads to handle client
requests while the administrative thread handles the lower priority
administrative requests.
The administrative interface responds to all requests with a 200 OK HTTP
response. This allows the administrative interface itself to be tested.
The old polling message system is obsolete now that the worker messages
are implemented. The old system was only used to clean up the persistent
connection pool of a server.
It is now possible to define whether tasks are executed immediately or put
on the event queue of the worker thread. If task execution is in automatic
mode and the currently executing thread is a worker thread, the
Task->execute method is called immediately.
This allows tasks to be posted from within worker threads. This is
intended to be used when purging stale persistent connections and when
printing diagnostic output via MaxAdmin, both of which are done from
within a worker thread.
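In rough terms the automatic-mode decision described above amounts to the
sketch below. The classes are simplified stand-ins, not the actual MaxScale
Worker and Task interfaces.

    // Hedged sketch of the "automatic" execution mode decision.
    #include <deque>

    class Worker;

    class Task
    {
    public:
        virtual ~Task() {}
        virtual void execute(Worker& worker) = 0;
    };

    class Worker
    {
    public:
        enum execute_mode_t
        {
            EXECUTE_AUTO,   // Run in place if the caller is this worker's thread
            EXECUTE_QUEUED  // Always go through the worker's event queue
        };

        void post(Task* task, execute_mode_t mode)
        {
            if (mode == EXECUTE_AUTO && current() == this)
            {
                task->execute(*this); // Called immediately, no queuing
            }
            else
            {
                m_queue.push_back(task); // Picked up later by the worker's event loop
            }
        }

        // Returns the Worker running on the calling thread, or NULL if the
        // calling thread is not a worker thread (definition omitted here).
        static Worker* current();

    private:
        std::deque<Task*> m_queue;
    };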
- Posting a task to a worker for execution (without implicit wait)
is called "post".
- Posting a task to every worker for execution (without implicit wait)
is called "broadcast".
In these cases the task must be provided as a pointer or auto_ptr, to
indicate that the pointed-to task must remain alive for longer than
the duration of the function call.
- Posting a task to every worker for execution *and* waiting for all workers
to have executed the task is called "execute", and the two variants are
now called "execute_concurrently" and "execute_serially".
In these cases the task is provided as a reference, since the functions
will return only when all workers have (in concurrent or serial fashion)
executed the task. That is, it need not remain alive for longer than the
duration of the function call.
Concurrently executing a task on all workers *and* waiting until
all workers have executed the task seems to be common enough to
warrant a helper function for that purpose.
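Put together, the naming and ownership conventions described above can be
summarized roughly as in the sketch below; the exact signatures are
assumptions, only the pointer-versus-reference distinction follows the text.

    // Hedged summary sketch of the conventions; not the exact declarations.
    #include <cstddef>
    #include <memory>

    class Task { };
    class DisposableTask : public Task { };

    class Worker
    {
    public:
        // "post": fire-and-forget on one worker. The caller must keep the
        // pointed-to task alive until the worker has executed it.
        bool post(Task* pTask);
        bool post(std::auto_ptr<DisposableTask> sTask); // ownership handed over

        // "broadcast": fire-and-forget on every worker, same lifetime rules.
        static size_t broadcast(Task* pTask);

        // "execute": run on every worker *and* wait until all have finished.
        // The task is passed by reference and may live on the caller's stack.
        static size_t execute_concurrently(Task& task);
        static size_t execute_serially(Task& task);
    };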
Preparation for adding KILL syntax support.
Session id changed to uint32 everywhere. Added atomic op.
Session id can be acquired before session_alloc().
Added session_alloc_with_id(), which is given a session id number.
Worker object has a session_id->SESSION* mapping, not used yet.
Adds a server-specific parameter, "use_proxy_protocol". If enabled,
a header string is sent to the backend when a routing session connection
changes state to MXS_AUTH_STATE_CONNECTED. The string contains the real
client IP and port.
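For reference, a proxy protocol version 1 header is a single text line of the
form "PROXY TCP4 <client IP> <proxy IP> <client port> <proxy port>\r\n". The
helper below is only an illustrative sketch of producing such a line, not the
actual MaxScale code.

    // Hedged sketch of building a proxy protocol v1 header line; the actual
    // MaxScale header generation may differ.
    #include <cstdio>
    #include <string>

    std::string proxy_protocol_header(const std::string& client_ip, int client_port,
                                      const std::string& proxy_ip, int proxy_port)
    {
        char buf[108]; // A v1 header is at most 107 bytes, CRLF included
        std::snprintf(buf, sizeof(buf), "PROXY TCP4 %s %s %d %d\r\n",
                      client_ip.c_str(), proxy_ip.c_str(), client_port, proxy_port);
        return std::string(buf);
    }

    // Example result: "PROXY TCP4 192.168.0.1 10.0.0.2 56324 3306\r\n"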
As 'client' is the fake DCB that emulates a client session, the
poll.thread.id of the "dummy client" must be set to the current
thread_id that calls blr_start_master().
This affects both startup and START SLAVE (via the mysql client).
When the Worker mechanism has been initialized, the current_worker_id
of the calling thread is set to 0. That way, connections can be created
after Worker::init() has been called, but before the workers have been
started. Such connections will be handled by the worker that is running
in the main thread.
The function was no longer thread-safe as it used the obsolete per-thread
spinlocks to iterate over the DCBs. Now the function uses the newly added
WorkerTask class to iterate over them.
Since the new WorkerTask mechanism is far superior to dcb_foreach, the
latter is now deprecated.