Maxadmin can now create and destroy monitors. The created monitors are not
started, as they would be of no use without servers and the required
configuration parameters.
When the monitor credentials were written with snprintf, the source and
destination buffers overlapped, which is undefined behavior.
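As a minimal sketch (the buffer names are illustrative, not the actual monitor code), the overlapping call and one safe way to rewrite it look like this:

```c
#include <stdio.h>
#include <string.h>

#define MAX_LEN 256

int main(void)
{
    char params[MAX_LEN] = "user=maxuser";
    const char *passwd = "maxpwd";

    /* Broken: 'params' is both the destination and one of the sources,
     * so the memory regions overlap and the behavior is undefined:
     *
     *     snprintf(params, sizeof(params), "%s passwd=%s", params, passwd);
     */

    /* Fix: format into a separate buffer, then copy the result back. */
    char tmp[MAX_LEN];
    snprintf(tmp, sizeof(tmp), "%s passwd=%s", params, passwd);
    strcpy(params, tmp);

    printf("%s\n", params);
    return 0;
}
```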
The serialization didn't add a 'type=monitor' line into the configuration.
The statistics of the polling system no longer matched the implementation
they measure. The statistics were modified to better represent the new
system by counting the number of epoll events each thread receives.
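A rough sketch of the idea, with hypothetical counter names rather than the actual fields in poll.c:

```c
#include <sys/epoll.h>

#define MAX_EVENTS 1000

/* Hypothetical per-thread statistics structure. */
typedef struct
{
    long n_polls;  /* epoll_wait() calls made by this thread         */
    long n_events; /* epoll events this thread has received in total */
} THREAD_STATS;

static void poll_events(int epoll_fd, THREAD_STATS *stats)
{
    struct epoll_event events[MAX_EVENTS];
    int nfds = epoll_wait(epoll_fd, events, MAX_EVENTS, 0);

    stats->n_polls++;

    if (nfds > 0)
    {
        /* Count the events delivered to this thread instead of the
         * statistics that were tied to the old shared event queue. */
        stats->n_events += nfds;
    }

    /* ... dispatch events[0..nfds-1] ... */
}
```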
The polling system now has a concept of messages. This can be used to send
a message synchronously to the polling system: the call waits for all
threads to process the message before returning.
Currently this is used to flush unused DCBs when server persistent
statistics are reported.
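A simplified sketch of such a synchronous broadcast, with hypothetical names and a plain busy-wait standing in for whatever synchronization the real implementation uses:

```c
#include <stdatomic.h>
#include <unistd.h>

/* Hypothetical message structure. */
typedef struct poll_msg
{
    void (*func)(void *data); /* work each polling thread should perform */
    void *data;
    atomic_int pending;       /* threads that have not processed it yet  */
} POLL_MSG;

/* Called by each polling thread when it picks the message up. */
void poll_msg_process(POLL_MSG *msg)
{
    msg->func(msg->data);
    atomic_fetch_sub(&msg->pending, 1);
}

/* Called by the sender: returns only after every thread has processed
 * the message, which makes the call synchronous. */
void poll_msg_broadcast(POLL_MSG *msg, int n_threads)
{
    atomic_store(&msg->pending, n_threads);

    /* ... queue 'msg' to every polling thread here ... */

    while (atomic_load(&msg->pending) > 0)
    {
        usleep(1000);
    }
}
```

The persistent DCB flush above can be sent as such a message, so each polling thread cleans up only the connections it owns.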
Making the lists of persistent DCBs thread specific is both a bug fix and
a performance enhancement. There was a small window during which a
non-owner thread could receive events for a DCB. Partitioning the DCBs
into thread-specific lists closes this window by removing the possibility
of DCBs moving between threads.
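A reduced sketch of the partitioning; the real SERVER and DCB definitions contain far more members and the field names here are only indicative:

```c
/* Reduced sketch: one persistent connection pool per polling thread. */
typedef struct dcb DCB;

struct dcb
{
    int  owner;          /* id of the polling thread that owns this DCB */
    DCB *nextpersistent; /* next DCB in the owning thread's pool        */
};

typedef struct server
{
    DCB **persistent;    /* pool heads, one slot per polling thread     */
} SERVER;

/* Only the owning thread touches its own slot, so no global lock is
 * needed and a pooled DCB can never migrate to another thread. */
static void server_pool_add(SERVER *server, DCB *dcb)
{
    dcb->nextpersistent = server->persistent[dcb->owner];
    server->persistent[dcb->owner] = dcb;
}
```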
When an epoll error occurs for a listener, the value of errno must be
saved into another variable before the listener is removed from all of the
epoll instances, because the removal calls may overwrite it.
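The pattern, sketched with illustrative names:

```c
#include <errno.h>
#include <stddef.h>
#include <sys/epoll.h>

/* Save errno before the cleanup calls, because epoll_ctl() may
 * overwrite it. */
void handle_listener_error(int listener_fd, int epoll_fds[], int n_threads)
{
    int eno = errno; /* preserve the original error */

    for (int i = 0; i < n_threads; i++)
    {
        /* Removing the listener from every epoll instance can itself
         * fail and change errno. */
        epoll_ctl(epoll_fds[i], EPOLL_CTL_DEL, listener_fd, NULL);
    }

    errno = eno; /* report the original error, not the cleanup's */
}
```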
The `thread.next` pointer refers to active DCBs in the current thread's
list and the `memdata.next` pointer refers to DCBs about to be freed. The
latter was mixed up with the former due to some changes in the naming.
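For reference, a reduced sketch of the two links (not the full DCB definition):

```c
/* Reduced sketch of the two list links; the actual DCB structure has
 * many more members. */
typedef struct dcb DCB;

struct dcb
{
    struct
    {
        DCB *next; /* next active DCB in the owning thread's list */
    } thread;

    struct
    {
        DCB *next; /* next DCB that is queued to be freed         */
    } memdata;
};
```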
The routing capabilities now define the type of output the reply
processing chain expects. Currently, there are only two such capabilities:
complete packet output and contiguous buffer output. The latter implies
the former.
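Conceptually the capabilities are bit flags; the names and values below are assumptions used only to illustrate the implication:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flag values; the real names and values live in the
 * routing headers. */
#define RCAP_TYPE_PACKET_OUTPUT     (1 << 0) /* complete packets expected */
#define RCAP_TYPE_CONTIGUOUS_OUTPUT (1 << 1) /* one contiguous buffer     */

/* Contiguous buffer output implies complete packet output, so packet
 * gathering is required if either bit is set. */
static bool packet_output_required(uint64_t capabilities)
{
    return (capabilities & (RCAP_TYPE_PACKET_OUTPUT |
                            RCAP_TYPE_CONTIGUOUS_OUTPUT)) != 0;
}
```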
The code prevented scaling by imposing global spinlocks for the DCBs and
SESSIONs. Removing the global list means that a thread-local list must be
used in its place.
The dcb_foreach function allows a function to be mapped over all DCBs in
MaxScale. This allows the list of DCBs to be iterated in a safe manner
without having to worry about the internal locking of the DCB mechanism.
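A typical use might look like the following; the exact dcb_foreach signature is an assumption and should be checked against dcb.h, the sketch only assumes it takes a callback and an opaque data pointer:

```c
#include <stdbool.h>

/* Assumed prototype; the exact signature is defined in dcb.h. */
typedef struct dcb DCB;
bool dcb_foreach(bool (*func)(DCB *dcb, void *data), void *data);

/* Callback that counts every DCB; returning true continues iteration. */
static bool count_cb(DCB *dcb, void *data)
{
    (void)dcb;
    int *count = (int *)data;
    (*count)++;
    return true;
}

int count_all_dcbs(void)
{
    int count = 0;
    dcb_foreach(count_cb, &count);
    return count;
}
```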
Using internal DCBs for query routing wasn't needed, as the client DCB
could be used instead. The same could also be achieved by simply routing
the query again with routeQuery.
Each DCB needs to be added to the owning thread's list so that the DCBs
can be iterated through. As sessions always have a client DCB, sessions
don't need to be added to a similar per-thread list.
This change fixes a problem with dcb_hangup_foreach that the removal of
the list manager introduced. Now the hangup events are properly injected
for the DCBs that connect to the server in question.
The MySQLBackend protocol now only checks for complete packets if the
service requires statement based routing. This should remove unnecessary
processing when data is only streamed from the backend to the client.
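Sketched with placeholder types and helpers (not the actual MySQLBackend code), the decision looks roughly like this:

```c
#include <stdint.h>

/* Placeholder types and helpers standing in for the MaxScale ones. */
typedef struct gwbuf GWBUF;
GWBUF *gather_complete_packets(GWBUF **read_buffer);

/* Only pay the cost of splitting the stream into complete packets when
 * the service routes on statement boundaries; otherwise pass the data
 * straight through. */
GWBUF *prepare_reply(uint64_t capabilities, uint64_t stmt_routing_bit,
                     GWBUF **read_buffer)
{
    if (capabilities & stmt_routing_bit)
    {
        return gather_complete_packets(read_buffer);
    }

    GWBUF *reply = *read_buffer;
    *read_buffer = NULL;
    return reply;
}
```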
Since the listmanager code isn't used, the debug assertions will always
fail. They should be disabled until the listmanager code can be converted
to the per-thread model.
Because each thread has its own epoll file descriptor and only one thread
can process a DCB, it makes sense to move to a per-thread zombie queue.
This removes one of the last restrictions on scalability.
The thread-specific spinlock needs to be acquired before a fake event is
inserted from a non-polling thread. The usual situation is when a monitor
thread inserts a hangup event for a DCB.
Other, less common cases are when session timeouts have been enabled and
the DCB needs to be closed. Here it is better to insert a fake hangup
event instead of directly closing the DCB from an external thread.
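A sketch of the insertion path, using a pthread mutex and hypothetical structures in place of the thread-specific spinlock and event list in the real code:

```c
#include <pthread.h>

/* Hypothetical structures; the real code uses MaxScale's spinlock and
 * its own event list. */
typedef struct dcb DCB;

typedef struct fake_event
{
    DCB               *dcb;
    int                event; /* e.g. a hangup */
    struct fake_event *next;
} FAKE_EVENT;

typedef struct
{
    pthread_mutex_t lock;        /* stands in for the thread's spinlock */
    FAKE_EVENT     *fake_events; /* events injected into this thread    */
} POLL_THREAD;

/* The caller may be a monitor or housekeeping thread, never the owning
 * polling thread, so the owner's lock must be held while inserting. */
void insert_fake_event(POLL_THREAD *owner, FAKE_EVENT *ev)
{
    pthread_mutex_lock(&owner->lock);
    ev->next = owner->fake_events;
    owner->fake_events = ev;
    pthread_mutex_unlock(&owner->lock);
}
```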
Having a unique epoll instance for each thread allows a lot of the locking
from poll.c to be removed. The downside to this is that each session can
have only one thread processing events for it which might reduce
performance with very low client counts.
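The per-thread arrangement can be sketched as follows; the names are illustrative and the real code naturally does much more:

```c
#include <stdint.h>
#include <sys/epoll.h>

#define N_THREADS 4

/* One epoll instance per polling thread. A file descriptor is added to
 * exactly one instance, so only that thread ever sees its events and
 * no cross-thread locking is needed in the event loop. */
static int epoll_fds[N_THREADS];

void init_thread_epolls(void)
{
    for (int i = 0; i < N_THREADS; i++)
    {
        epoll_fds[i] = epoll_create1(0);
    }
}

/* Assign a new connection to a single thread's epoll instance. */
int add_fd_to_thread(int thread_id, int fd, uint32_t events)
{
    struct epoll_event ev = {.events = events, .data.fd = fd};
    return epoll_ctl(epoll_fds[thread_id], EPOLL_CTL_ADD, fd, &ev);
}
```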
Unnecessary methods were also removed from CachePT and CacheMT, as it does
not make sense to create more than a single instance of those per filter
instance. Consequently, there is no need for them to be able to use an
existing StorageFactory (and CacheRules).
The maximum count and maximum size of the cache can now be
specified and a storage can declare what capabilities it has.
If a storage module cannot enforce the maximum count or maximum size
limits, the storage is decorated with an LRU storage that can.