The options of a request are now parsed and exposed by the HttpRequest
class.
Added tests for the request options.
Also added missing error handling of invalid requests.
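A minimal sketch of how the option parsing could work, assuming the
options arrive as a standard query string; the parse_options helper is
illustrative and not part of the actual HttpRequest API:

    #include <map>
    #include <sstream>
    #include <string>

    static std::map<std::string, std::string> parse_options(const std::string& query)
    {
        std::map<std::string, std::string> options;
        std::istringstream ss(query);
        std::string token;

        // Split "key1=value1&key2=value2" into individual options.
        while (std::getline(ss, token, '&'))
        {
            std::string::size_type eq = token.find('=');

            if (eq != std::string::npos)
            {
                options[token.substr(0, eq)] = token.substr(eq + 1);
            }
        }

        return options;
    }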
The class now generates default headers. The ETag and Last-Modified
headers do not yet represent an actual resource hash or modification
time.
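As a rough illustration of the placeholder header generation (the helper
name and the header values below are assumptions, not the actual
implementation):

    #include <map>
    #include <string>

    static std::map<std::string, std::string> default_headers()
    {
        std::map<std::string, std::string> headers;
        headers["Content-Type"]  = "application/json";
        // Placeholder values: they do not reflect a real resource hash
        // or modification time yet.
        headers["ETag"]          = "\"0\"";
        headers["Last-Modified"] = "Thu, 01 Jan 1970 00:00:00 GMT";
        return headers;
    }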
The basic functionality of the HTTP responses is tested by the core test
suite. More advanced testing of the whole REST API is still required.
Removed the static `create` functions, as only the JSON parsing version
could generate errors and even then the errors were unlikely. By
replacing the static creator function with a normal constructor, the
HttpResponse class can now also be created on the stack, which makes it
easier to use.
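A sketch of the constructor-based usage; the constructor signature shown
here is an assumption made for illustration:

    #include <string>

    class HttpResponse
    {
    public:
        explicit HttpResponse(const std::string& body, int code = 200)
            : m_body(body)
            , m_code(code)
        {
        }

    private:
        std::string m_body;
        int         m_code;
    };

    void handle_request()
    {
        // The response can live on the stack; no factory call or heap
        // allocation is needed.
        HttpResponse response("{}", 200);
        (void)response;
    }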
The HTTP response class simplifies response creation. The next step is
to add generation of all the default headers that the REST API needs.
The HTTP request body is expected to be a valid JSON object. All other
requests are considered malformed and result in an HTTP 400 error.
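A sketch of how the body validation could look with Jansson; the status
constants and the validate_body helper are illustrative, only the Jansson
calls are from the real library:

    #include <jansson.h>

    static const int HTTP_200_OK          = 200;
    static const int HTTP_400_BAD_REQUEST = 400;

    int validate_body(const char* body, size_t len)
    {
        json_error_t error;
        json_t* json = json_loadb(body, len, 0, &error);

        if (json == NULL)
        {
            // Not valid JSON at all: the request is malformed.
            return HTTP_400_BAD_REQUEST;
        }

        bool is_object = json_is_object(json);
        json_decref(json);

        // Anything other than a JSON object is also treated as malformed.
        return is_object ? HTTP_200_OK : HTTP_400_BAD_REQUEST;
    }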
Added the Jansson license to the LICENSE-THIRDPARTY.TXT file. Imported
some of the tests from the Jansson test suite to the HttpParser test.
The HttpParser class was renamed to HttpRequest as it only parses and
processes HTTP requests. A second class that generates HTTP responses
still needs to be created.
Moved some of the HTTP constants and helper functions to a separate
http.hh header.
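As an illustration, the shared http.hh header could collect constants
and helpers along these lines (the enum and helper names are assumptions,
not the actual contents of the header):

    // http.hh
    #pragma once

    #include <string>

    enum http_verb
    {
        HTTP_GET,
        HTTP_PUT,
        HTTP_POST,
        HTTP_DELETE,
        HTTP_UNKNOWN
    };

    static inline http_verb string_to_verb(const std::string& str)
    {
        if (str == "GET")    return HTTP_GET;
        if (str == "PUT")    return HTTP_PUT;
        if (str == "POST")   return HTTP_POST;
        if (str == "DELETE") return HTTP_DELETE;
        return HTTP_UNKNOWN;
    }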
The admin requests are now processed in blocking mode. The timing out of
connections is handled by a dedicated timeout thread that checks the
state of each admin request.
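A sketch of what such a timeout thread could look like; the data
structures, field names and the time limit below are assumptions made for
illustration:

    #include <chrono>
    #include <mutex>
    #include <thread>
    #include <vector>

    struct AdminRequest
    {
        std::chrono::steady_clock::time_point start;
        bool timed_out = false;
    };

    std::mutex                 request_lock;
    std::vector<AdminRequest*> active_requests;

    void timeout_thread()
    {
        const std::chrono::seconds limit(10);

        for (;;)
        {
            std::this_thread::sleep_for(std::chrono::seconds(1));
            std::lock_guard<std::mutex> guard(request_lock);

            // Mark every admin request that has been running for too long.
            for (AdminRequest* request : active_requests)
            {
                if (std::chrono::steady_clock::now() - request->start > limit)
                {
                    request->timed_out = true;
                }
            }
        }
    }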
The simplification will help with JSON parsing of PUT/POST commands. If
non-blocking IO were used, the network reading code and JSON parsing
would need a lot more work to handle partial reads.
If the administrative interface requires higher performance and
concurrency, a multi-threaded solution could be created.
The HTTP parser parses HTTP/1.1 messages into easily manageable data
structures. This should make it easier to map the HTTP requests into
actual commands in MaxScale.
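For illustration, mapping a parsed request to a command could look
roughly like this; the resource path and the handler are hypothetical:

    #include <string>

    static std::string list_servers()
    {
        return "{\"servers\": []}";   // stub handler
    }

    std::string handle(const std::string& verb, const std::string& resource)
    {
        if (verb == "GET" && resource == "/servers")
        {
            return list_servers();
        }

        return "";   // unknown resource, handled elsewhere
    }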
When MaxScale is started, a separate thread is created for the
administrative interface. This allows the worker threads to handle client
requests while the administrative thread handles the lower priority
administrative requests.
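A minimal sketch of that startup step, assuming a std::thread based
implementation; the function names are illustrative:

    #include <thread>

    // Hypothetical entry point that accepts and serves REST API requests.
    static void admin_main()
    {
        // ... accept connections and serve administrative requests ...
    }

    void start_admin_interface()
    {
        // The admin thread runs independently of the worker threads that
        // serve normal client traffic.
        std::thread admin(admin_main);
        admin.detach();
    }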
The administrative interface responds to all requests with a 200 OK HTTP
response. This allows the administrative interface itself to be tested.
The old polling message system is obsolete now that the worker messages
are implemented. The old system was only used to clean up the persistent
connection pool of a server.
It is now possible to define whether tasks are executed immediately or
put on the event queue of the worker thread. If task execution is in
automatic mode and the currently executing thread is a worker thread, the
Task->execute method is called immediately.
This allows tasks to be posted from within worker threads. This is
intended to be used when purging stale persistent connections and when
printing diagnostic output via MaxAdmin. Both of these actions are done
from within a worker thread.
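A simplified sketch of the automatic mode decision; the Worker and Task
classes below are stand-ins, not the real interfaces, and the enum and
method names are assumptions:

    class Task
    {
    public:
        virtual ~Task() {}
        virtual void execute() = 0;
    };

    class Worker
    {
    public:
        enum execute_mode_t
        {
            EXECUTE_AUTO,    // run at once if already on this worker
            EXECUTE_QUEUED   // always deliver through the event queue
        };

        void post(Task* task, execute_mode_t mode)
        {
            if (mode == EXECUTE_AUTO && is_current_worker())
            {
                // The calling thread is this worker: run the task directly.
                task->execute();
            }
            else
            {
                enqueue(task);
            }
        }

    private:
        bool is_current_worker() const { return false; }   // stub
        void enqueue(Task*) { /* push to the worker's event queue */ }
    };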
- Posting a task to a worker for execution (without implicit wait)
is called "post".
- Posting a task to every worker for execution (without implicit wait)
is called "broadcast".
In these cases the task must be provided as a pointer or auto_ptr, to
indicate that the pointed-to task must remain alive for longer than the
duration of the function call.
- Posting a task to every worker for execution *and* waiting for all
workers to have executed the task is called "execute" and the two
variants are now called "execute_concurrently" and "execute_serially".
In these cases the task is provided as a reference, since the functions
will return only when all workers have (in concurrent or serial fashion)
executed the task. That is, the task need not remain alive for longer
than the duration of the function call. (See the sketch after this list.)
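A rough sketch of the resulting interface, with simplified parameter
types and assumed signatures based on the naming above:

    #include <cstddef>
    #include <memory>

    class Task;

    class Worker
    {
    public:
        // No implicit wait: the task object must outlive the call, which
        // the pointer/auto_ptr parameter is meant to signal.
        bool post(std::auto_ptr<Task> task);
        static std::size_t broadcast(std::auto_ptr<Task> task);

        // These wait until every worker has executed the task; a reference
        // is enough because the task only needs to live for the call.
        static std::size_t execute_concurrently(Task& task);
        static std::size_t execute_serially(Task& task);
    };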
Concurrently executing a task on all workers *and* waiting until
all workers have executed the task seems to be common enough to
warrant a helper function for that purpose.
Preparation for adding KILL syntax support.
Session ids were changed to uint32 everywhere and an atomic op was
added. A session id can be acquired before session_alloc(). Added
session_alloc_with_id(), which is given a session id number. The Worker
object has a session_id->SESSION* mapping, which is not used yet.
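A rough sketch of the id generation and the per-worker mapping; apart
from SESSION, the names below are illustrative:

    #include <stdint.h>
    #include <atomic>
    #include <map>

    // The id is generated with an atomic increment, so it can be acquired
    // before session_alloc() is called.
    static std::atomic<uint32_t> next_session_id(0);

    uint32_t session_get_next_id()
    {
        return ++next_session_id;
    }

    struct SESSION;

    // Per-worker mapping from session id to the session object
    // (not used yet).
    typedef std::map<uint32_t, SESSION*> SessionsById;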
Adds a server-specific parameter, "use_proxy_protocol". If enabled,
a header string is sent to the backend when a routing session connection
changes state to MXS_AUTH_STATE_CONNECTED. The string contains the real
client IP and port.
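For illustration, a proxy protocol v1 text header could be built along
these lines; the function name and parameters are assumptions:

    #include <cstdio>
    #include <string>

    std::string proxy_protocol_header(const std::string& client_ip, int client_port,
                                      const std::string& server_ip, int server_port)
    {
        char header[108];   // maximum length of a v1 header line
        std::snprintf(header, sizeof(header), "PROXY TCP4 %s %s %d %d\r\n",
                      client_ip.c_str(), server_ip.c_str(), client_port, server_port);
        return header;
    }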
As 'client' is the fake DCB that emulates a client session, the
poll.thread.id of the "dummy client" must be set to the id of the thread
that calls blr_start_master(). This affects both startup and START SLAVE
(issued via the mysql client).