By moving the initialization into Worker::run, all threads, including the
main thread, are properly initialized. This was not noticed before, as
qc_sqlite initialized the main thread in the process initialization
callback.
The polling mechanism can now optionally be used for managing
the lifetime of an object placed into the poll set.
If an MXS_POLL_DATA has a non-null 'free', then the reference count
of the data will be increased before calling the handler and
decreased afterwards. In that case, if the reference count reaches 0,
the free function will be called.
Note that the reference counts of *all* MXS_POLL_DATAs returned
by 'epoll_wait' will be increased before the events are delivered
to the handlers individually for each MXS_POLL_DATA, and then once
all events have been delivered, the reference count of each
MXS_POLL_DATA will be decreased.
This ensures that if there are interdependencies between different
MXS_POLL_DATAs returned by one call to 'epoll_wait', the reference
count can be used to guarantee that no MXS_POLL_DATA is deleted
before its events have been delivered.
In subsequent commits, the reference count will be taken into use
in the lifetime management of DCBs.
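A minimal sketch of the intended usage; the field names and the
handler signature are assumptions for illustration, not the final API:

    #include <stdint.h>
    #include <stdlib.h>

    static uint32_t my_handler(MXS_POLL_DATA* pData, int thread_id, uint32_t events)
    {
        /* The reference count was increased before this call, so pData
           cannot be freed while its events are being handled. */
        return 0;
    }

    static void my_free(MXS_POLL_DATA* pData)
    {
        /* Called once the reference count has reached 0. */
        free(pData);
    }

    static MXS_POLL_DATA* create_poll_data()
    {
        MXS_POLL_DATA* pData = (MXS_POLL_DATA*)calloc(1, sizeof(MXS_POLL_DATA));
        pData->handler = my_handler;
        pData->free = my_free; /* Non-null 'free' => reference counted. */
        return pData;
    }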
It is now possible to specify the thread stack size to be used
when a new thread is created. This will subsequently be used
to allow the stack size to be specified for worker threads.
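The new function's exact signature is not stated here, but underneath
it boils down to the standard pthread attribute mechanism, roughly:

    #include <pthread.h>
    #include <stddef.h>

    static void* thread_main(void* pArg)
    {
        /* The thread routine; pArg is whatever was given to pthread_create. */
        return NULL;
    }

    bool start_thread(pthread_t* pThread, size_t stack_size, void* pArg)
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);

        if (stack_size != 0)
        {
            /* 0 means "use the platform default". */
            pthread_attr_setstacksize(&attr, stack_size);
        }

        bool rv = (pthread_create(pThread, &attr, thread_main, pArg) == 0);
        pthread_attr_destroy(&attr);
        return rv;
    }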
The template class wraps a HashMap such that only a few operations
are allowed. Usage requires specializing a RegistryTraits class
template for each entry type.
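For illustration, a specialization might look as follows; the members
that RegistryTraits is expected to provide are assumptions:

    #include <stdint.h>

    template<class EntryType>
    struct RegistryTraits
    {
        /* The primary template is empty on purpose; each entry type
           must provide its own specialization. */
    };

    class Session; /* Stand-in for a real entry type. */

    template<>
    struct RegistryTraits<Session>
    {
        typedef uint64_t id_type;
        typedef Session* entry_type;

        static id_type get_id(Session* pSession)
        {
            return 0; /* Would return e.g. the session id. */
        }

        static entry_type null_entry()
        {
            return NULL;
        }
    };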
The top level resource self links pointed to the collection instead of the
resource itself. The individual resources now also have a links field that
contains the self link to the resource. This should make navigation of the
API easier, as all objects have valid links in them.
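For instance, fetching an individual resource now yields output along
these lines (illustrative, not an exact dump):

    GET /v1/servers/server1

    {
        "links": {
            "self": "http://localhost:8989/v1/servers/server1"
        },
        "data": { ... }
    }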
Various small changes to part2, as suggested by comments and otherwise.
Mostly renaming, working logic should not change.
Exception: session id changed to 64bit in the container and associated
functions. Another commit will change it to 64bit in the session itself.
MySQL sessions are added to a hashmap when created and removed when closed.
MYSQL_COM_PROCESS_KILL is now detected, the thread_id is read and the kill
command is sent to all worker threads to find the correct session. If found,
a fake hangup event is created for the client dcb.
As is, this function is of little use since the client could just disconnect
itself instead. Later on, additional commands of this nature will be added.
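In outline the flow is as follows; apart from MYSQL_COM_PROCESS_KILL
itself, the function names are placeholders:

    /* In the client protocol module, when a command packet arrives: */
    if (command == MYSQL_COM_PROCESS_KILL)
    {
        /* The packet payload carries the thread id to kill. */
        uint64_t target_id = extract_thread_id(packet);

        /* Message every worker; each one checks its own session map. */
        broadcast_kill_to_workers(target_id);
    }

    /* In each worker, on receiving the kill message: */
    MXS_SESSION* pSession = find_local_session(target_id);

    if (pSession)
    {
        /* Fake a hangup event so that the client DCB is closed in an
           orderly fashion by its owning worker. */
        fake_hangup(pSession->client_dcb);
    }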
The old polling message system is obsolete now that the worker messages
are implemented. The old system was only used to clean up the persistent
connection pool of a server.
It is now possible to define whether tasks are executed immediately or put
on the event queue of the worker thread. If task execution is in automatic
mode and the currently executing thread is a worker thread, the
Task->execute method is called immediately.
This allows tasks to be posted from within worker threads. This is
intended to be used when purging stale persistent connections and when
printing diagnostic output via MaxAdmin. All of these actions are done
from within a worker thread.
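The decision logic amounts to something like the following sketch; the
mode names, get_current() and post_via_queue() are assumptions:

    bool Worker::post(Task* pTask, Semaphore* pSem, execute_mode_t mode)
    {
        if ((mode == EXECUTE_AUTO) && (Worker::get_current() == this))
        {
            /* Already running on this worker's thread; execute directly. */
            pTask->execute(*this);

            if (pSem)
            {
                pSem->post();
            }

            return true;
        }

        /* Otherwise deliver the task via the worker's message queue. */
        return post_via_queue(pTask, pSem);
    }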
- Posting a task to a worker for execution (without implicit wait)
is called "post".
- Posting a task to every worker for execution (without implicit wait)
is called "broadcast".
In these cases the task must be provided as a pointer or auto_ptr, to
indicate that the pointed-to task must remain alive for longer than
the duration of the function call.
- Posting a task to every worker for execution *and* waiting for all
  workers to have executed the task is called "execute" and the two
  variants are now called "execute_concurrently" and "execute_serially".
In these cases the task is provided as a reference, since the functions
will return only when all workers have (in concurrent or serial fashion)
executed the task. That is, the task need not remain alive for longer
than the duration of the function call; the resulting API surface is
sketched below.
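A rough sketch of the resulting declarations; the exact signatures are
an approximation, not copied from the code:

    class Worker
    {
    public:
        /* Post to this worker; the caller keeps the task alive. */
        bool post(Task* pTask, Semaphore* pSem = NULL);
        /* Post to this worker; the worker deletes the task afterwards. */
        bool post(std::auto_ptr<DisposableTask> sTask);

        /* Post to every worker; returns the number of workers posted to. */
        static size_t broadcast(Task* pTask, Semaphore* pSem = NULL);
        static size_t broadcast(std::auto_ptr<DisposableTask> sTask);

        /* Post to every worker and return only when all have executed
           the task, either concurrently or one worker at a time. */
        static size_t execute_concurrently(Task& task);
        static size_t execute_serially(Task& task);
    };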
Concurrently executing a task on all workers *and* waiting until
all workers have executed the task seems to be common enough to
warrant a helper function for that purpose.
When the Worker mechanism has been initialized, the current_worker_id
of the calling thread is set to 0. That way, connections can be created
after Worker::init() has been called, but before the workers have been
started. Such connections will be handled by the worker that is running
in the main thread.
A Worker::Task is an object that can be sent to a worker for
execution. The task is sent to the worker using the messaging
mechanism where the `execute` function of the task will be
called in the thread context of the worker.
There are two kinds of tasks: regular tasks and disposable tasks.
The former are just sent to the worker for execution while the
latter are sent and subsequently disposed of, once the task has
been executed.
A disposable task can be sent to either one worker or to all
workers. In the latter case, the task will be deleted once it
has been executed by all workers.
A semaphore can be associated with a regular task. Once the task
has been executed by the worker, the semaphore will automatically
be posted. That way, it is trivial to send a task for execution
to a worker and wait until the task has been executed. For instance:
    Semaphore sem;
    MyTask task;
    pWorker->execute(&task, &sem);          // Send the task to the worker.
    sem.wait();                             // Wait until it has been executed.
    const MyResult& result = task.result(); // Safe to read the result now.
The low level mechanism for posting and broadcasting messages will
be removed.
All workers share an epoll instance that is added level-triggered
to the epoll instance of each Worker. This is intended to be used
together with listening sockets.
When a listening socket is added to the shared epoll instance, the
effect is that EPOLLIN will be active for the shared instance whenever
a connection is pending on any listening socket added to it.
When that occurs, all workers will return from their epoll_wait() calls.
When the workers subsequently call epoll_wait() on the shared epoll
instance, the call will return with an event, provided some other thread
has not already called accept() on the listening socket.
As each worker extracts just one event at a time and calls accept() just
once before calling epoll_wait() again, the client connections will be
distributed evenly across all workers, provided the load on the workers
is roughly the same. If it isn't, then a worker with less load will get
more connections to handle (which will even out the load).
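In code, the arrangement is roughly the following; epoll_create1,
epoll_ctl, epoll_wait and accept are the real system calls, while the
surrounding functions are illustrative:

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <stddef.h>

    /* At startup: create the shared instance and add it, level-triggered
       (i.e. without EPOLLET), to a worker's own epoll instance. */
    int create_shared_instance(int worker_epoll_fd)
    {
        int shared_fd = epoll_create1(0);

        struct epoll_event ev;
        ev.events = EPOLLIN; /* Level-triggered by default. */
        ev.data.fd = shared_fd;
        epoll_ctl(worker_epoll_fd, EPOLL_CTL_ADD, shared_fd, &ev);

        return shared_fd;
    }

    /* When a listener starts: add its socket to the shared instance so
       that every worker sees pending connections. */
    void add_listener(int shared_fd, int listen_fd)
    {
        struct epoll_event ev;
        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(shared_fd, EPOLL_CTL_ADD, listen_fd, &ev);
    }

    /* In a worker, when the shared fd becomes readable: extract at most
       one event and call accept() at most once. */
    int accept_one(int shared_fd)
    {
        struct epoll_event ev;
        int client_fd = -1;

        if (epoll_wait(shared_fd, &ev, 1, 0) == 1)
        {
            client_fd = accept(ev.data.fd, NULL, NULL);
        }

        return client_fd;
    }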
Now the statistics are in a single structure that is a property of the
Worker instance in question. Methods are provided for obtaining the
statistics of all workers in one go.
Just like the thread stats and poll stats earlier, the queue stats
are now moved to the Worker.
A little refactoring still remains, after which the polling will work
only on local data.
Each worker now has a separate structure for collecting the
polling statistics that is passed to epoll_waitevents(). When
the stats are asked for, we loop over all the separate stats and
combine them. So, instead of having each statistic of every
thread one cacheline apart, each thread now has all of its
statistics in one lump that, for obvious reasons, is going to
be apart from the lumps of the other threads.
The primary purpose of this exercise is to remove the hardwired
nature of the statistics collection. For instance, the admin
thread will be doing I/O, but that I/O should not be included
in the statistics of the workers.
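The difference in layout can be illustrated roughly as follows; the
counter names and MAX_THREADS are placeholders:

    #include <stdint.h>

    #define MAX_THREADS 64 /* Hypothetical upper bound. */

    /* Before: one slot per thread for each statistic, so the values
       belonging to a single thread are scattered across the arrays,
       one cacheline apart from each other. */
    static int64_t n_read[MAX_THREADS];
    static int64_t n_write[MAX_THREADS];

    /* After: each Worker owns one lump containing all of its own
       statistics; the lumps of different workers are naturally apart. */
    struct STATISTICS
    {
        int64_t n_read;
        int64_t n_write;
        /* ...the remaining counters... */
    };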
The Worker no longer creates a pipe and implements the cross
worker/thread message mechanism itself. Instead it has a
MessageQueue instance variable for that purpose.
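Schematically, the relationship is now along these lines (the names of
the handler interface and the member are assumptions):

    class Worker : private MessageQueue::Handler
    {
    private:
        /* Invoked by m_pQueue in this worker's thread context whenever
           a message has been sent to this worker. */
        void handle_message(MessageQueue& queue,
                            const MessageQueue::Message& msg);

        MessageQueue* m_pQueue; /* Encapsulates the pipe and the messaging. */
    };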
Now the epoll instance of the Worker is used when polling. The
work is still done in poll.cc and the worker provides the descriptor
and thread id.
The comments of poll_waitevents() have been removed; they were no
longer accurate, so it is better to let the code speak for itself.
And add the Worker header...
The epoll instance is not used yet, but the common creation of epoll
instances in poll.cc will be removed and the epoll instances of each
worker used instead.
This is the first step in turning the worker mechanism and everything
around it into a set of C++ classes. In this change, the original C
API is still present, but in subsequent changes that will be removed.