The HTTP request body is expected to be a valid JSON object. All other
requests are considered malformed and result in an HTTP 400 error.
Added the Jansson license to the LICENSE-THIRDPARTY.TXT file. Imported
some of the tests from the Jansson test suite to the HttpParser test.
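For illustration, a minimal sketch of the kind of check Jansson makes
possible (the function name is hypothetical; the actual validation in the
parser may differ):

    #include <jansson.h>

    // Returns true if the body parses as a JSON object. Anything else
    // should be answered with an HTTP 400 error.
    bool is_valid_json_object(const char* body, size_t len)
    {
        json_error_t error;
        json_t* json = json_loadb(body, len, 0, &error);
        bool rv = json_is_object(json);

        if (json)
        {
            json_decref(json);
        }

        return rv;
    }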
The HttpParser class was renamed to HttpRequest, as it parses and processes
only HTTP requests. A second class that generates HTTP responses still
needs to be created.
Moved some of the HTTP constants and helper functions to a separate
http.hh header.
The admin requests are now processed in blocking mode. Connection timeouts
are handled by a dedicated timeout thread that checks the state of each
admin request.
This simplification will help with JSON parsing for PUT/POST commands. If
non-blocking IO were used, the network reading code and the JSON parsing
would need a lot more work to handle partial reads.
If the administrative interface requires higher performance and
concurrency, a multi-threaded solution could be created.
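As a rough sketch of the blocking approach (all names here are
hypothetical; the real code also synchronizes access to the request list):

    #include <ctime>
    #include <unistd.h>
    #include <vector>

    struct AdminRequest
    {
        int    fd;     // Connection handled in blocking mode
        time_t start;  // When processing of the request started
    };

    // Timeout thread: closes connections whose requests have taken too
    // long, which causes the blocked read or write to fail.
    void timeout_thread_main(std::vector<AdminRequest>& requests, time_t timeout)
    {
        for (;;)
        {
            time_t now = time(NULL);

            for (size_t i = 0; i < requests.size(); i++)
            {
                if (now - requests[i].start > timeout)
                {
                    close(requests[i].fd);
                }
            }

            sleep(1);
        }
    }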
The HTTP parser parses HTTP/1.1 messages into easily manageable data
structures. This should make it easier to map the HTTP requests into
actual commands in MaxScale.
When MaxScale is started, a separate thread for the administrative
interface is started. This allows the worker threads to handle client
requests while the administrative thread handles the lower priority
administrative requests.
The administrative interface responds to all requests with a 200 OK HTTP
response. This allows the administrative interface itself to be tested.
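In practice the response is a fixed string along these lines (a sketch of
the idea, not necessarily the exact bytes sent):

    // Fixed response used while only the request side is implemented.
    static const char ok_response[] =
        "HTTP/1.1 200 OK\r\n"
        "Connection: close\r\n"
        "Content-Length: 0\r\n"
        "\r\n";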
The old polling message system is obsolete now that the worker messages
are implemented. The old system was only used to clean up the persistent
connection pool of a server.
It is now possible to define whether tasks are executed immediately or put
on the event queue of the worker thread. If task execution is in automatic
mode and the currently executing thread is a worker thread, the
Task->execute method is called immediately.
This allows tasks to be posted from within worker threads. This is
intended to be used for purging stale persistent connections and for
printing diagnostic output via MaxAdmin. All of these actions are done
from within a worker thread.
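A sketch of the dispatch logic (method and enum names are illustrative,
not the exact MaxScale API):

    // In automatic mode a task posted from within the target worker's own
    // thread is executed immediately; otherwise it is queued.
    void Worker::post(Task* pTask, execute_mode_t mode)
    {
        if (mode == EXECUTE_AUTO && Worker::current() == this)
        {
            pTask->execute(*this);  // Called directly in the worker's thread
        }
        else
        {
            enqueue(pTask);         // Hypothetical helper: add to event queue
        }
    }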
- Posting a task to a worker for execution (without implicit wait)
is called "post".
- Posting a task to every worker for execution (without implicit wait)
is called "broadcast".
In these cases the task must be provided as a pointer or auto_ptr, to
indicate that the pointed-to task must remain alive for longer than
the duration of the function call.
- Posting a task to a worker for execution *and* waiting for all workers
to have executed the task is called "execute" and the two variants are
now called "execute_concurrently" and "execute_serially".
In these cases the task is provided as a reference, since the functions
will return only when all workers have (in concurrent or serial fashion)
executed the task. That is, the task need not remain alive for longer than
the duration of the function call.
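The resulting interface could be sketched as follows (signatures are
approximate):

    #include <memory>

    class Worker
    {
    public:
        // Post a task to this worker; returns without waiting. The caller
        // must keep the task alive until it has been executed.
        bool post(Task* pTask, Semaphore* pSem = NULL);

        // Post a task and transfer ownership; the worker deletes the task
        // once it has been executed.
        bool post(std::auto_ptr<DisposableTask> sTask);

        // Post a task to every worker; returns without waiting.
        static size_t broadcast(Task* pTask, Semaphore* pSem = NULL);

        // Post a disposable task to every worker; it is deleted once all
        // workers have executed it.
        static size_t broadcast(std::auto_ptr<DisposableTask> sTask);

        // Execute a task on every worker, returning only when all of
        // them have executed it; the task is passed by reference.
        static size_t execute_concurrently(Task& task);
        static size_t execute_serially(Task& task);
    };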
Concurrently executing a task on all workers *and* waiting until
all workers have executed the task seems to be common enough to
warrant a helper function for that purpose.
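With the helper, the caller no longer needs to manage the semaphore
itself; usage is reduced to something like:

    MyTask task;

    // Returns only when every worker has executed the task.
    Worker::execute_concurrently(task);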
Preparation for adding KILL syntax support.
Session id changed to uint32 everywhere. Added atomic op.
Session id can be acquired before session_alloc().
Added session_alloc_with_id(), which is given a session id number.
Worker object has a session_id->SESSION* mapping, not used yet.
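A sketch of acquiring an id up front with an atomic increment (using a
GCC builtin here; MaxScale's own atomic op may differ):

    #include <stdint.h>

    static uint32_t next_session_id = 0;

    // Returns a unique session id; safe to call from any thread. The id
    // can later be handed to session_alloc_with_id().
    uint32_t session_get_next_id()
    {
        return __sync_add_and_fetch(&next_session_id, 1);
    }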
Adds a server-specific parameter, "use_proxy_protocol". If enabled,
a header string is sent to the backend when a routing session connection
changes state to MXS_AUTH_STATE_CONNECTED. The string contains the real
client IP and port.
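The header follows the proxy protocol version 1 text format; a sketch of
building it (IPv4 only, helper name hypothetical):

    #include <cstdio>

    // Produces e.g. "PROXY TCP4 192.168.0.1 192.168.0.11 56324 3306\r\n".
    // The real code must also handle TCP6 and unknown address families.
    int build_proxy_header(char* buf, size_t len,
                           const char* client_ip, int client_port,
                           const char* server_ip, int server_port)
    {
        return snprintf(buf, len, "PROXY TCP4 %s %s %d %d\r\n",
                        client_ip, server_ip, client_port, server_port);
    }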
When the Worker mechanism has been initialized the current_worker_id
of the calling thread is set to 0. That way, connections can be created
after Worker::init() has been called, but before the workers have been
started. Such connections will be handled by the worker that is running
in the main thread.
The function was no longer thread-safe as it used the obsolete per-thread
spinlocks to iterate over the DCBs. Now the function uses the newly added
WorkerTask class to iterate over them.
Since the new WorkerTask mechanism is far superior to dcb_foreach, the
latter is now deprecated.
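A sketch of the idea behind the replacement: the callback and its data
are packed into a task that each worker runs against only the DCBs it
owns (class and accessor names are approximate):

    // No cross-thread locking is needed, since each worker touches only
    // its own DCBs.
    class DCBForeachTask : public Worker::Task
    {
    public:
        DCBForeachTask(bool (*func)(DCB*, void*), void* data)
            : m_func(func)
            , m_data(data)
        {
        }

        void execute(Worker& worker)
        {
            // Hypothetical accessor for the DCBs owned by this worker.
            for (DCB* dcb = worker.first_dcb(); dcb; dcb = dcb->next)
            {
                if (!m_func(dcb, m_data))
                {
                    break;
                }
            }
        }

    private:
        bool (*m_func)(DCB*, void*);
        void* m_data;
    };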
A Worker::Task is an object that can be sent to a worker for
execution. The task is sent to the worker using the messaging
mechanism where the `execute` function of the task will be
called in the thread context of the worker.
There are two kinds of tasks: regular tasks and disposable tasks.
The former are just sent to the worker for execution while the
latter are sent and subsequently disposed of, once the task has
been executed.
A disposable task can be sent to either one worker or to all
workers. In the latter case, the task will be deleted once it
has been executed by all workers.
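For a fire-and-forget job, ownership is handed over together with the
task, roughly like this (CleanupTask is a made-up example):

    // Deleted automatically once every worker has executed it.
    std::auto_ptr<Worker::DisposableTask> sTask(new CleanupTask);
    Worker::broadcast(sTask);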
A semaphore can be associated with a regular task. Once the task
has been executed by the worker, the semaphore will automatically
be posted. That way, it is trivial to send a task for execution
to a worker and wait until the task has been executed. For instance:
    Semaphore sem;
    MyTask task;

    // The semaphore is posted once the worker has executed the task.
    pWorker->execute(&task, &sem);

    // Wait until the task has been executed.
    sem.wait();

    const MyResult& result = task.result();
The low level mechanism for posting and broadcasting messages will
be removed.
All workers share an epoll instance that is added level-triggered
to the epoll instance of each Worker. This is intended to be used
together with listening sockets.
When a listening socket is added to the shared epoll instance the
effect is that EPOLLIN will be active for it whenever there is a
connection pending on a listening socket added to that epoll
instance.
When that occurs, all workers will return from their epoll_wait() calls.
When the workers subsequently call epoll_wait() on the shared epoll
instance, the call will return with an event, provided some other worker
has not already accepted the pending connection.
As each worker extracts just one event at a time and calls accept() just
once before calling epoll_wait() again, client connections will be
distributed evenly across all workers, provided the load on the workers is
roughly the same. If it is not, a worker with less load will get more
connections to handle (which will even out the load).
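A sketch of the setup (error handling omitted):

    #include <sys/epoll.h>

    // Adds a listening socket to the shared epoll instance, and the shared
    // instance, level-triggered, to one worker's own epoll instance.
    void setup_shared_epoll(int shared_fd, int worker_fd, int listen_fd)
    {
        struct epoll_event ev = {};
        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(shared_fd, EPOLL_CTL_ADD, listen_fd, &ev);

        struct epoll_event wev = {};
        wev.events = EPOLLIN;  // No EPOLLET: level-triggered on purpose, so
                               // the event stays active while a connection
                               // is pending and every worker wakes up.
        wev.data.fd = shared_fd;
        epoll_ctl(worker_fd, EPOLL_CTL_ADD, shared_fd, &wev);
    }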
All debug messages from dcb.cc were prefixed with the pthread ID of the
current thread. If the thread ID is needed, it should be logged by the log
manager.
The functions used to track the resultset EOF packets now expose the
position of the end of the result set. This allows the modules that use
them to check if more results exist in the same buffer.
Added the status bits for OK and EOF packets to the mysql.h protocol
header. This can be used to check for various state changes that happen in
the session. Currently the status bits are only used to detect if more
results are expected.
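For example, a module can check the status bits of an EOF packet for
SERVER_MORE_RESULTS_EXIST (0x0008 in the MySQL protocol); a sketch, with
the payload pointing past the 4-byte packet header:

    #include <stdint.h>

    #define SERVER_MORE_RESULTS_EXIST 0x0008

    // EOF packet payload: 0xfe, a 2-byte warning count and a 2-byte
    // status bitmask, both little-endian.
    static inline uint16_t mysql_eof_status(const uint8_t* payload)
    {
        return payload[3] | (payload[4] << 8);
    }

    // True if the server will send more result sets after this one.
    bool more_results_exist(const uint8_t* payload)
    {
        return mysql_eof_status(payload) & SERVER_MORE_RESULTS_EXIST;
    }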
The statistics are now kept in a single structure that is a property of
the Worker instance in question. Methods are provided for obtaining the
statistics of all workers in one go.
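The shape of the API could be sketched as follows (field and method names
approximate):

    #include <stdint.h>

    class Worker
    {
    public:
        struct STATISTICS
        {
            int64_t n_reads;    // Number of read events
            int64_t n_writes;   // Number of write events
            int64_t n_accepts;  // Number of accepted connections
        };

        // Statistics of this worker only.
        const STATISTICS& statistics() const
        {
            return m_statistics;
        }

        // Summed statistics of all workers, obtained in one go.
        static STATISTICS get_statistics();

    private:
        STATISTICS m_statistics;
    };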
The existing load calculation does not fit the 2.0 thread approach that
well, so it is removed entirely for now, to be replaced with a new
approach later.
Showing dcb addresses, the number of fds and the events is meaningless,
as the information is completely transient and is likely to have changed
the moment it was displayed.