Allocating the session before a DCB guarantees that at no point will a DCB
have a null session. This further clarifies the concept of the session and
also allows the listener reference to be moved there.
Ideally, the session itself would allocate and assign the client DCB, but
since the Listener is the only component that creates client DCBs, the
current arrangement is acceptable for now.
As each connection now immediately gets a session, the dummy session is no
longer required. The next step would be to combine parts of the session
and the client DCB into one entity. This would prevent the possibility of
a client DCB with no associated session. Backend DCBs are different as
they can move from one session to another when the persistent connection
pool is in use.
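
A minimal sketch of the accept path this describes, using hypothetical type
and function names (the real MaxScale structs carry far more state): the
session is created first, so the DCB constructor can require a non-null
session and also hold the listener reference via the session.

    struct Listener;

    struct Session
    {
        Listener* listener;     // the listener reference now lives in the session
        explicit Session(Listener* l) : listener(l) {}
    };

    struct ClientDCB
    {
        Session* session;       // never null: the session is created first
        int      fd;
        ClientDCB(Session* s, int fd) : session(s), fd(fd) {}
    };

    // Hypothetical accept handler: allocating the session before the DCB
    // makes a session-less client DCB impossible by construction.
    ClientDCB* accept_client(Listener* listener, int client_fd)
    {
        Session* session = new Session(listener);
        return new ClientDCB(session, client_fd);
    }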
Whenever a client DCB is accepted, a session for it is allocated. This
simplifies the handling of shared data between DCBs by allowing it to be
placed inside the session object. Currently, the data is stashed away in
the client DCB.
The session allocation now has two distinct parts: the initialization of
the session itself and the creation of the router and filter
sessions. This allows sessions to be allocated the moment a client DCB is
created instead of only after the authentication is complete. With this
change, the need for the dummy session is removed.
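
A sketch of the two-phase allocation, with illustrative names
(session_alloc/session_start are assumptions, not the actual API): phase one
runs when the client DCB is created, phase two once authentication has
completed and the router and filter sessions can be built.

    #include <vector>

    struct RouterSession {};
    struct FilterSession {};

    struct Session
    {
        bool                        started = false;
        RouterSession*              router = nullptr;
        std::vector<FilterSession*> filters;
    };

    // Phase 1: a bare session, available the moment the client connects.
    Session* session_alloc()
    {
        return new Session();
    }

    // Phase 2: after authentication, attach the router and filter sessions.
    bool session_start(Session* session)
    {
        session->router = new RouterSession();
        session->filters.push_back(new FilterSession());
        session->started = true;
        return true;
    }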
If the startup of the listeners requires communication with all of the
workers, the workers must be up and running for that to happen.
Because the main thread is itself a worker thread, the initialization code
is not entirely straightforward. By queuing an event to the main worker, the
startup of all listeners is deferred until the system is fully operational,
with all workers up and running.
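
A simplified illustration of the idea, where Worker is a single-threaded
stand-in rather than the real mxb::Worker interface: the startup work is
queued, not called directly, so by the time the task executes every worker
(including the main thread, which is itself a worker) has an event loop
running.

    #include <functional>
    #include <iostream>
    #include <queue>

    class Worker
    {
    public:
        void post(std::function<void()> task)
        {
            m_tasks.push(std::move(task));
        }

        void run()      // event loop: all workers are running by this point
        {
            while (!m_tasks.empty())
            {
                m_tasks.front()();
                m_tasks.pop();
            }
        }

    private:
        std::queue<std::function<void()>> m_tasks;
    };

    void start_all_listeners()      // hypothetical startup task
    {
        std::cout << "starting listeners with all workers operational\n";
    }

    void init(Worker& main_worker)
    {
        main_worker.post(start_all_listeners);
        main_worker.run();
    }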
The service initialization code was also flawed in that it would deadlock if
any of the threads had to check the user permissions. This is mainly a
problem with the authenticator modules, but the benefits of the per-service
pre-loading of users are most likely superficial. In theory, startup will be
faster as each thread now queries the users in parallel.
The code that waited for some time when a system resource limit was reached
made the lower-level code more complex than it needed to be. If such a
situation were encountered, it is unlikely that the waiting would make a
noticeable difference.
Also renamed the functions to better represent what they do.
More of the DCB initialization is now done in the DCB constructor. This
makes the creation of new DCBs simpler, and it could be simplified further:
by passing the file descriptor that the DCB should use into the constructor,
almost all of the initialization could be done inside it.
Also removed the unused path member variable.
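
As a sketch of that direction, with an illustrative DCB shape rather than
the actual struct: the constructor receives the file descriptor and can then
finish the setup itself.

    struct SERVICE;

    struct DCB
    {
        int      fd;
        SERVICE* service;
        bool     open;

        DCB(int fd, SERVICE* service)
            : fd(fd)
            , service(service)
            , open(true)
        {
            // callback registration, buffer setup and the like would
            // also happen here once the fd is known at construction time
        }
    };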
Moved the code into listener.cc as it's the only place where it is
used. Placed the DCB callback assignment into the DCB constructor as it
depended on static functions that were in dcb.cc.
Replaced the DCB with a single file descriptor that the listener listens on
and that is added to all of the workers. The Listener also extends
MXB_POLL_DATA, which allows it to handle epoll events itself.
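
A simplified sketch of a poll-data-derived listener; the MXB_POLL_DATA shown
here is reduced to its essence (a handler invoked with the epoll event mask),
while the real struct carries more information.

    #include <cstdint>
    #include <sys/epoll.h>

    struct MXB_POLL_DATA
    {
        // Called by a worker's event loop when the fd has events pending.
        uint32_t (* handler)(MXB_POLL_DATA* data, uint32_t events);
    };

    class Listener : public MXB_POLL_DATA
    {
    public:
        explicit Listener(int fd)
            : m_fd(fd)
        {
            handler = &Listener::poll_handler;
        }

        int fd() const
        {
            return m_fd;
        }

    private:
        static uint32_t poll_handler(MXB_POLL_DATA* data, uint32_t events)
        {
            auto* listener = static_cast<Listener*>(data);
            if (events & EPOLLIN)
            {
                // accept new clients on listener->fd()
            }
            return events;
        }

        int m_fd;   // the single file descriptor shared by all workers
    };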
Moved the code that creates the listening socket into listener.cc where it
belongs and did a minor cleanup of it.
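
For reference, a minimal version of what such a listening-socket helper
might look like (error handling reduced to returning -1; the function name
is illustrative):

    #include <cstdint>
    #include <cstring>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int create_listening_socket(uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1)
        {
            return -1;
        }

        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == -1
            || listen(fd, SOMAXCONN) == -1)
        {
            close(fd);
            return -1;
        }

        return fd;
    }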
By doing the actual accepting of the new DCB in the core, the protocol
modules only have to do the protocol-level work. This removes some of the
redundant code that was in the protocol modules.
The epoll event flags are now fully controlled by the caller of the
Worker::add_fd function. This makes the mechanism more generic and allows
both edge-triggered and level-triggered behavior.
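
A sketch of the mechanism with a minimal stand-in Worker (the real signature
may differ): the events mask is passed through untouched, so the caller
decides between level-triggered and edge-triggered notification.

    #include <cstdint>
    #include <sys/epoll.h>

    struct MXB_POLL_DATA;   // as in the earlier sketch

    class Worker
    {
    public:
        explicit Worker(int epoll_fd)
            : m_epoll_fd(epoll_fd)
        {
        }

        // The caller controls the epoll flags completely.
        bool add_fd(int fd, uint32_t events, MXB_POLL_DATA* data)
        {
            epoll_event ev{};
            ev.events = events;
            ev.data.ptr = data;
            return epoll_ctl(m_epoll_fd, EPOLL_CTL_ADD, fd, &ev) == 0;
        }

    private:
        int m_epoll_fd;
    };

    // Usage: a listener might ask for level-triggered reads...
    //     worker.add_fd(listener_fd, EPOLLIN, &listener);
    // ...while a connection uses edge-triggered notification:
    //     worker.add_fd(client_fd, EPOLLIN | EPOLLET, &dcb);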
By storing the reference in the DCB, the two-way dependency between the
listeners and services is severed. The services now have no direct link to
the listeners, and after a listener is destroyed it is freed only once all
connections through it have closed.
Because a listener itself has a DCB that must point to a valid listener, a
self-reference is stored in the listener DCB. This is extremely confusing
and is only there to keep the code functional until the DCB part of the
listener can be factored out.
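
The ownership model, sketched with hypothetical types: each DCB keeps a
shared_ptr to the listener that accepted it, so the listener object outlives
its removal until the last connection closes.

    #include <memory>

    class Listener
    {
    };

    using SListener = std::shared_ptr<Listener>;

    struct DCB
    {
        SListener listener;   // each connection keeps its listener alive
    };

    // "Destroying" the listener only drops the service's reference; the
    // Listener object is deleted when the last DCB releases its copy.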
Allocating DCBs with new allows the use of C++ objects in the DCB
struct. The explicit poll field can also be replaced by inheriting from
MXB_POLL_DATA.
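
A small sketch of what this enables (DCB members are illustrative): with
new, members that have constructors are safe to use, and the base class
replaces the separate poll field.

    #include <memory>
    #include <string>

    struct MXB_POLL_DATA { /* handler, as in the earlier sketch */ };

    class Listener;

    struct DCB : public MXB_POLL_DATA
    {
        std::string               remote;     // C++ members are now safe
        std::shared_ptr<Listener> listener;
    };

    DCB* dcb_alloc()
    {
        return new DCB();   // calloc() would skip the member constructors
    }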
Modified the functions to use a listener instead of a DCB in the accepting
process. This removes some of the dependencies that the listeners have on
the DCB system.
By loading the entry points required by a DCB when the Listener is
created, the extra cost of finding the module is removed. It also
simplifies DCB creation by removing the possibility of module loading
failures at DCB creation time.
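
The idea in sketch form, with a simplified entry-point table and a stubbed
loader (both hypothetical): the lookup happens once, at listener creation,
and every accepted DCB reuses the cached pointers.

    struct MXS_PROTOCOL          // simplified table of protocol entry points
    {
        int (* read)(struct DCB*);
        int (* write)(struct DCB*);
    };

    MXS_PROTOCOL* load_module_entry_points(const char* name)
    {
        static MXS_PROTOCOL entry_points{};   // stub: the real loader resolves symbols
        return &entry_points;
    }

    class Listener
    {
    public:
        explicit Listener(const char* protocol_name)
            : m_protocol(load_module_entry_points(protocol_name))
        {
            // failure is handled here, once, at listener creation time
        }

        const MXS_PROTOCOL* protocol() const
        {
            return m_protocol;   // handed to every DCB this listener accepts
        }

    private:
        MXS_PROTOCOL* m_protocol;
    };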
By storing a shared pointer to the listeners in the services, they will be
available as long as the service using them exists. This enables clean
destruction of listeners that still have open sessions.
The listener creation code now separately creates the listener and links
it to the service. Also replaced the relevant parts of the related code
with the versions implemented by the listener.
The value would otherwise be assigned outside of the constructor, and in
some cases not at all. Now all DCBs (apart from internal ones) have a valid
SERVICE pointer.
The SERV_LISTENER pointer should not be in the DCBs but in the
session. This way the listener is an attribute of a session instead of a
connection. If this were implemented, the authenticator data could be
shared more easily.
Replaced raw pointers in function parameters with const SListener
references. This removes the need to pass raw pointers as arguments, and
all access is done via smart pointers.
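
The shape of the change, assuming SListener is a shared_ptr alias and with
an illustrative function name:

    #include <memory>

    class Listener;
    using SListener = std::shared_ptr<Listener>;

    // Before: bool listener_stop(Listener* listener);
    // After: the const reference avoids copying the shared_ptr on every
    // call while ensuring all access goes through the smart pointer.
    bool listener_stop(const SListener& listener)
    {
        return listener != nullptr;   // stub body for illustration
    }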
The iteration of listeners is now done via the global list of
listeners. This removes the need to have a service before a listener is
accessed, which also reflects how the actual configuration is laid out. It
also guarantees that any results returned by the find functions remain
valid for as long as they are used.
The listeners are now stored in their own list which allows them to be a
component separate from the service. The next step is to remove the
listener iterator functionality and replace it with its STL counterpart.
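
A sketch of such a global, lock-protected list with hypothetical names:
because the find function returns a shared_ptr, the result stays valid for
as long as the caller holds it, even if the listener is removed from the
list in the meantime.

    #include <algorithm>
    #include <list>
    #include <memory>
    #include <mutex>
    #include <string>

    class Listener
    {
    public:
        explicit Listener(std::string name)
            : m_name(std::move(name))
        {
        }

        const std::string& name() const
        {
            return m_name;
        }

    private:
        std::string m_name;
    };

    using SListener = std::shared_ptr<Listener>;

    static std::mutex           this_unit_lock;
    static std::list<SListener> all_listeners;   // the global list

    SListener listener_find(const std::string& name)
    {
        std::lock_guard<std::mutex> guard(this_unit_lock);
        auto it = std::find_if(all_listeners.begin(), all_listeners.end(),
                               [&](const SListener& l) {
                                   return l->name() == name;
                               });
        return it != all_listeners.end() ? *it : SListener();
    }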