- Posting a task to a worker for execution (without implicit wait)
is called "post".
- Posting a task to every worker for execution (without implicit wait)
is called "broadcast".
In these cases the task must be provided as a pointer or auto_ptr, to
indicate that the task must remain alive for longer than the duration
of the function call.
- Posting a task to a worker for execution *and* waiting for all workers
to have executed the task is called "execute" and the two variants are
now called "execute_concurrently" and "execute_serially".
In these cases the task is provided as a reference, since the functions
will return only when all workers have (in concurrent or serial fashion)
executed the task. That is, it need not remain alive for longer than the
duration of the function call.
Concurrently executing a task on all workers *and* waiting until
all workers have executed the task seems to be common enough to
warrant a helper function for that purpose.
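To make the naming concrete, here is a minimal sketch of what the
resulting interface could look like (the names post, broadcast,
execute_concurrently and execute_serially come from the text above;
the exact signatures and return types are assumptions):

    #include <memory>

    class Worker
    {
    public:
        class Task
        {
        public:
            virtual ~Task() {}
            // Called in the thread context of the worker.
            virtual void execute(Worker& worker) = 0;
        };

        // Post for execution without waiting; pointer semantics indicate
        // that the task must stay alive until it has been executed.
        bool post(Task* pTask);
        bool post(std::auto_ptr<Task> sTask);

        // Post to every worker for execution, without waiting.
        static size_t broadcast(std::auto_ptr<Task> sTask);

        // Post to all workers and return only when every worker has
        // executed the task; reference semantics indicate that the task
        // need not outlive the call.
        static size_t execute_concurrently(Task& task);
        static size_t execute_serially(Task& task);
    };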
Preparation for adding KILL syntax support.
Session id changed to uint32 everywhere. Added atomic op.
Session id can be acquired before session_alloc().
Added session_alloc_with_id(), which is given a session id number.
Worker object has a session_id->SESSION* mapping, not used yet.
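A rough sketch of how the pieces could fit together (only
session_alloc_with_id() appears above; the counter and the other
names are assumptions made for illustration):

    #include <atomic>
    #include <cstdint>

    struct MXS_SESSION;

    // Monotonically increasing id; fetch_add is the atomic op.
    static std::atomic<uint32_t> next_session_id(1);

    // The id can be acquired before the session is allocated...
    uint32_t session_get_next_id()
    {
        return next_session_id.fetch_add(1);
    }

    // ...and handed to the allocation function later. The full
    // parameter list is elided here.
    MXS_SESSION* session_alloc_with_id(uint32_t id /*, ... */);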
Adds a server-specific parameter, "use_proxy_protocol". If enabled,
a header string is sent to the backend when a routing session connection
changes state to MXS_AUTH_STATE_CONNECTED. The string contains the real
client IP and port.
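The header follows the well-known PROXY protocol v1 text format; a
sketch of composing it (the function itself is hypothetical):

    #include <cstdio>
    #include <string>

    // Produces e.g. "PROXY TCP4 192.168.0.1 192.168.0.11 56324 3306\r\n".
    std::string compose_proxy_header(const std::string& client_ip, int client_port,
                                     const std::string& server_ip, int server_port)
    {
        char buf[108]; // a v1 header is at most 107 bytes
        snprintf(buf, sizeof(buf), "PROXY TCP4 %s %s %d %d\r\n",
                 client_ip.c_str(), server_ip.c_str(), client_port, server_port);
        return buf;
    }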
When the Worker mechanism has been initialized the current_worker_id
of the calling thread is set to 0. That way, connections can be created
after Worker::init() has been called, but before the workers have been
started. Such connections will be handled by the worker that is running
in the main thread.
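A sketch of the idea, assuming a thread-local variable (only
current_worker_id and Worker::init() appear in the text; the rest is
illustrative):

    class Worker
    {
    public:
        static bool init();
    };

    // Thread-local id of the worker the calling thread runs; -1 means
    // the thread is not (yet) a worker thread.
    static thread_local int current_worker_id = -1;

    bool Worker::init()
    {
        // ... create the workers, without starting their threads ...

        // The main thread acts as worker 0 until the workers have been
        // started, so connections created in between get a valid owner.
        current_worker_id = 0;
        return true;
    }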
The function was no longer thread-safe as it used the obsolete per-thread
spinlocks to iterate over the DCBs. Now the function uses the newly added
WorkerTask class to iterate over them.
Since the new WorkerTask mechanism is far superior to dcb_foreach, the
latter is now deprecated.
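A sketch of how dcb_foreach can be expressed on top of the Task
interface sketched earlier (DcbTask and the per-worker dcbs() list
are assumptions made for illustration):

    struct DCB;

    // Visits the DCBs of a worker in that worker's own thread, so no
    // locking is needed.
    class DcbTask : public Worker::Task
    {
    public:
        DcbTask(bool (*func)(DCB*, void*), void* data)
            : m_func(func)
            , m_data(data)
        {
        }

        void execute(Worker& worker)
        {
            for (DCB* dcb : worker.dcbs())
            {
                if (!m_func(dcb, m_data))
                {
                    break;
                }
            }
        }

    private:
        bool (*m_func)(DCB*, void*);
        void* m_data;
    };

    bool dcb_foreach(bool (*func)(DCB*, void*), void* data)
    {
        DcbTask task(func, data);
        Worker::execute_serially(task); // returns when all workers are done
        return true;
    }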
A Worker::Task is an object that can be sent to a worker for
execution. The task is sent to the worker using the messaging
mechanism where the `execute` function of the task will be
called in the thread context of the worker.
There are two kinds of tasks: regular tasks and disposable tasks.
The former are just sent to the worker for execution while the
latter are sent and subsequently disposed of, once the task has
been executed.
A disposable task can be sent to either one worker or to all
workers. In the latter case, the task will be deleted once it
has been executed by all workers.
A semaphore can be associated with a regular task. Once the task
has been executed by the worker, the semaphore will automatically
be posted. That way, it is trivial to send a task for execution
to a worker and wait until the task has been executed. For instance:
    Semaphore sem;
    MyTask task;
    pWorker->execute(&task, &sem);
    sem.wait();
    const MyResult& result = task.result();
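For context, MyTask above is hypothetical; it could be defined along
these lines:

    struct MyResult
    {
        int value;
    };

    class MyTask : public Worker::Task
    {
    public:
        // Runs in the worker's thread; once it has returned, the
        // semaphore given to execute() is posted automatically.
        void execute(Worker& worker)
        {
            m_result.value = 42; // compute something worker-local
        }

        const MyResult& result() const
        {
            return m_result;
        }

    private:
        MyResult m_result;
    };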
The low level mechanism for posting and broadcasting messages will
be removed.
All workers share an epoll instance that is added level-triggered
to the epoll instance of each Worker. This is intended to be used
together with listening sockets.
When a listening socket is added to the shared epoll instance the
effect is that EPOLLIN will be active for it whenever there is a
connection pending on a listening socket added to that epoll
instance.
When that occurs, all workers return from their epoll_wait() calls.
When a worker subsequently calls epoll_wait() on the shared epoll
instance, the call returns with an event, provided no other worker
has already called accept() on the listening socket.
As each worker extracts just one event at a time and calls accept just
once before calling epoll_wait(), it means that the client connections
will be distributed evenly across all workers, provided the load on
the workers is roughly the same. If it isn't then a worker with less
load will get more connections to handle (which will even out the load).
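A sketch of the mechanism, using only the plain epoll API (the
function names are illustrative):

    #include <sys/epoll.h>
    #include <sys/socket.h>

    // Done once per worker: add the shared epoll instance, level-
    // triggered, to the worker's own epoll instance.
    void add_shared_fd_to_worker(int worker_epoll_fd, int shared_epoll_fd)
    {
        epoll_event ev = {};
        ev.events = EPOLLIN; // level-triggered by default
        ev.data.fd = shared_epoll_fd;
        epoll_ctl(worker_epoll_fd, EPOLL_CTL_ADD, shared_epoll_fd, &ev);
    }

    // In the worker's event loop, when the shared fd is reported readable.
    void handle_shared_fd(int shared_epoll_fd)
    {
        epoll_event ev;

        // Extract at most one event; with a zero timeout this returns
        // empty-handed if another worker already accepted the connection.
        if (epoll_wait(shared_epoll_fd, &ev, 1, 0) == 1)
        {
            int listen_fd = ev.data.fd;
            int client_fd = accept(listen_fd, NULL, NULL);
            // ... create a session for client_fd on this worker ...
            (void)client_fd;
        }
    }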
All debug messages from dcb.cc were prefixed with the pthread ID of the
current thread. If the thread ID is needed, it should be logged by the log
manager.
The functions used to track the resultset EOF packets now expose the
position of the end of the result set. This allows the modules that use
them to check if more results exist in the same buffer.
Added the status bits for OK and EOF packets to the mysql.h protocol
header. This can be used to check for various state changes that happen in
the session. Currently the status bits are only used to detect if more
results are expected.
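For instance, the SERVER_MORE_RESULTS_EXISTS bit (0x0008) of the
status field tells whether another result set follows. A sketch of
such a check (the helper function is hypothetical):

    #include <cstdint>

    #define SERVER_MORE_RESULTS_EXISTS 0x0008

    // The payload of an EOF packet is: 0xfe, warning count (2 bytes),
    // status flags (2 bytes, little-endian).
    static inline bool more_results_exist(const uint8_t* eof_payload)
    {
        uint16_t status = eof_payload[3] | (eof_payload[4] << 8);
        return (status & SERVER_MORE_RESULTS_EXISTS) != 0;
    }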
The statistics are now in a single structure that is the property of
the Worker instance in question. Methods are provided for obtaining
the statistics of all workers in one go.
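A sketch of the combining step (the structure members and the
function are illustrative, not the actual code):

    #include <cstdint>
    #include <vector>

    struct WORKER_STATISTICS
    {
        int64_t n_reads = 0;
        int64_t n_writes = 0;
        int64_t n_accepts = 0;
    };

    // Combine the per-worker statistics into a single summary.
    WORKER_STATISTICS get_statistics(const std::vector<const WORKER_STATISTICS*>& workers)
    {
        WORKER_STATISTICS total;

        for (const WORKER_STATISTICS* s : workers)
        {
            total.n_reads += s->n_reads;
            total.n_writes += s->n_writes;
            total.n_accepts += s->n_accepts;
        }

        return total;
    }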
The existing load calculation does not fit the 2.0 thread approach
that well. So it is removed entirely now, to be replaced with some
new approach later.
Showing dcb addresses, the number of fds and the events is
meaningless, as the information is completely transient and
is likely to have changed the moment it was displayed.
Just like the thread stats and poll stats earlier, the queue stats
are now moved to worker.
A little refactoring still remains, after which the polling will only
work on local data.
Each worker now has a separate structure for collecting the
polling statistics that is passed to epoll_waitevents(). When
the stats are asked for, we loop over all separate stats and
combine them. So, instead of every statistic of a thread being one
cacheline apart from the next, each thread now has all of its
statistics in one lump, and the lumps of different threads are,
for obvious reasons, apart from each other.
The primary purpose of this exercise is to remove the hardwired
nature of the statistics collection. For instance, the admin
thread will be doing I/O but that I/O should not be included
in the statistics of the workers.
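To illustrate the layout change (both structures are illustrative):

    #include <cstdint>

    static const int MAX_THREADS = 128; // illustrative

    // Before: one global array per statistic, indexed by thread id. The
    // counters of a single thread are scattered across the arrays,
    // roughly a cacheline apart from each other.
    struct POLL_STATS_OLD
    {
        int64_t n_reads[MAX_THREADS];
        int64_t n_writes[MAX_THREADS];
        int64_t n_errors[MAX_THREADS];
    };

    // After: each worker owns one lump containing all of its counters;
    // the lumps of different threads are naturally apart.
    struct POLL_STATS
    {
        int64_t n_reads;
        int64_t n_writes;
        int64_t n_errors;
    };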
The Worker no longer creates a pipe and implements the cross
worker/thread message mechanism itself. Instead it has a
MessageQueue instance variable for that purpose.
MessageQueue encapsulates a message queue built on top of a
pipe. The message queue needs a handler for receiving messages
and must be added to a worker for pumping messages through the
pipe.
Each Worker will have an instance of MessageQueue.
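A sketch of the encapsulation (the method names follow the
description above; the exact signatures are assumptions):

    #include <cstdint>

    class Worker;

    class MessageQueue
    {
    public:
        class Message
        {
        public:
            Message(uint32_t id = 0, intptr_t arg1 = 0, intptr_t arg2 = 0)
                : m_id(id)
                , m_arg1(arg1)
                , m_arg2(arg2)
            {
            }

            uint32_t id() const { return m_id; }
            intptr_t arg1() const { return m_arg1; }
            intptr_t arg2() const { return m_arg2; }

        private:
            uint32_t m_id;
            intptr_t m_arg1;
            intptr_t m_arg2;
        };

        // Implemented by the owner of the queue; called in the context
        // of the worker the queue has been added to.
        class Handler
        {
        public:
            virtual ~Handler() {}
            virtual void handle_message(MessageQueue& queue, const Message& message) = 0;
        };

        // Creates the underlying pipe; returns NULL on failure.
        static MessageQueue* create(Handler* pHandler);

        // Writes the message to the pipe.
        bool post(const Message& message);

        // Adds the read end of the pipe to the worker's epoll instance,
        // so that posted messages are pumped to the handler.
        bool add_to_worker(Worker* pWorker);
    };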
Now the epoll instance of the Worker is used when polling. The
work is still done in poll.cc and the worker provides the descriptor
and thread id.
The comments of poll_waitevents() have been removed; they were no
longer accurate, so it is better to let the code speak for itself.
And add the Worker header...
The epoll instance is not used yet, but the common creation of epoll
instances in poll.cc will be removed and the epoll instances of each
worker used instead.