The old behavior is to not start MaxScale if any listener fails to bind.
This behavior is convenient when there are conflicts with other
applications, so it should remain. This change prevents the internal
service restarts from functioning, although they may already have been
broken. The "restarting" of services after a failure to bind to an
interface is of questionable value: almost no transient errors are
expected at startup, with the exception of running out of sockets. That
should probably be the only case in which the internal service restart is
attempted; in other cases it causes more harm than good.
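A minimal sketch of the distinction, assuming a hypothetical helper that
the listener code could consult after a failed bind; only an exhausted
file descriptor limit is treated as transient:

    #include <cerrno>

    // Hypothetical helper: after bind() fails, decide whether retrying
    // later makes sense. Running out of file descriptors (EMFILE/ENFILE)
    // is the only error expected to clear up on its own; anything else
    // (EADDRINUSE, EACCES, ...) will fail again and should abort startup.
    static bool bind_error_is_transient(int error)
    {
        return error == EMFILE || error == ENFILE;
    }
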
Since we know the worst-case size of a canonical statement is the size of
the query string, we can reduce the number of memory allocations to one in
the get_canonical function.
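The allocation pattern can be sketched as follows; this is a simplified
stand-in for the real function (it only replaces numeric literals and
skips comment and string handling), but it shows how a single up-front
reservation covers the worst case:

    #include <cctype>
    #include <string>

    std::string get_canonical_sketch(const std::string& query)
    {
        std::string canonical;
        canonical.reserve(query.size());    // reserve the worst case up front

        for (size_t i = 0; i < query.size(); i++)
        {
            if (std::isdigit(static_cast<unsigned char>(query[i])))
            {
                canonical += '?';           // replace the literal
                while (i + 1 < query.size()
                       && std::isdigit(static_cast<unsigned char>(query[i + 1])))
                {
                    i++;                    // consume the rest of the number
                }
            }
            else
            {
                canonical += query[i];
            }
        }

        return canonical;
    }
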
Minor renaming of the session state enum values. Also exposed the session
state stringification function in the public header and removed the
stringification macro.
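In spirit the public interface now looks something like the following;
the value and function names here are illustrative rather than the exact
identifiers:

    // Illustrative only: a function in the public header replaces the
    // old stringification macro.
    enum session_state_t
    {
        SESSION_STATE_CREATED,
        SESSION_STATE_STARTED,
        SESSION_STATE_STOPPING,
        SESSION_STATE_FREE
    };

    const char* session_state_to_string(session_state_t state)
    {
        switch (state)
        {
        case SESSION_STATE_CREATED:
            return "Session created";
        case SESSION_STATE_STARTED:
            return "Session started";
        case SESSION_STATE_STOPPING:
            return "Stopping session";
        case SESSION_STATE_FREE:
            return "Freed session";
        default:
            return "Unknown state";
        }
    }
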
The error flag was set before the function was called, which meant the
function was never actually used. As the core should handle the filtering
of multiple errors on the same DCB, the protocol modules should not check
the flag.
When an attempt to accept a client DCB fails, the session should only be
deleted directly if the allocation of the client DCB fails. Otherwise the
closing of the DCB triggers the session deletion.
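The intended flow, sketched with simplified placeholder declarations
rather than the actual MaxScale signatures:

    struct Listener;
    struct Session;
    struct DCB;

    DCB* accept_client_dcb(Listener* listener, Session* session);
    bool start_protocol(DCB* dcb);
    void session_free(Session* session);
    void dcb_close(DCB* dcb);

    void handle_new_client(Listener* listener, Session* session)
    {
        DCB* client_dcb = accept_client_dcb(listener, session);

        if (client_dcb == nullptr)
        {
            // No DCB was ever allocated: this is the only case where the
            // session is deleted directly.
            session_free(session);
        }
        else if (!start_protocol(client_dcb))
        {
            // The DCB exists: closing it triggers the session deletion.
            dcb_close(client_dcb);
        }
    }
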
When iterating over all DCBs, don't process DCBs that have no session.
For now this is the correct behavior, as only DCBs in the persistent
connection pool have no session.
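As a self-contained sketch (the types and names are illustrative):

    #include <vector>

    struct Session;

    struct DCB
    {
        Session* session;   // null only for DCBs parked in the persistent pool
    };

    // Illustrative iteration over all DCBs: connections in the persistent
    // pool have no session and are skipped.
    void for_each_active_dcb(const std::vector<DCB*>& all_dcbs, void (*func)(DCB*))
    {
        for (DCB* dcb : all_dcbs)
        {
            if (dcb->session == nullptr)
            {
                continue;   // persistent pool connection, nothing to do
            }

            func(dcb);
        }
    }
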
When a listener was created at runtime but failed to start, it was not
automatically removed from the system. This caused the MaxCtrl cluster
sync test to fail.
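The fix amounts to cleaning up the half-created object; roughly, and with
placeholder names:

    struct Listener
    {
        bool start();
    };

    Listener* create_listener(const char* name);
    void      destroy_listener(Listener* listener);

    bool runtime_create_listener(const char* name)
    {
        Listener* listener = create_listener(name);
        bool ok = listener && listener->start();

        if (listener && !ok)
        {
            // Don't leave behind a listener that was created but never started.
            destroy_listener(listener);
        }

        return ok;
    }
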
Fixed the use of DCBs and sessions in the mock testing framework and
adapted them to the changes done to the objects in question. Extended the
testing utility functions to allow modules to be preloaded and to make it
possible to only partially initialize the query classifier.
The REST API and MaxCtrl tests relied upon the implicit sessions that were
created by the listeners. This can be corrected and improved by creating
an actual connection to MaxScale to check that a client session is indeed
created.
Allocating the session before a DCB guarantees that at no point will a
DCB have a null session. This further clarifies the concept of the
session and also allows the listener reference to be moved there.
Ideally, the session itself would allocate and assign the client DCB but,
since the Listener is the only component that does it, the current
arrangement is acceptable for now.
As each connection now immediately gets a session, the dummy session is
no longer required. The next step would be to combine parts of the
session and the client DCB into one entity. This would prevent the
possibility of a client DCB with no associated session. Backend DCBs are
different, as they can move from one session to another when the
persistent connection pool is in use.
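A sketch of the new ordering on the client side, with illustrative types
rather than the real declarations:

    struct Listener;
    struct DCB;

    // The session now carries the reference to the listener it was
    // accepted on, which used to live in the client DCB.
    struct Session
    {
        Listener* listener;
    };

    Session* session_create(Listener* listener);
    DCB*     dcb_create_client(int fd, Session* session);

    DCB* accept_connection(Listener* listener, int fd)
    {
        Session* session = session_create(listener);   // session exists first
        return dcb_create_client(fd, session);         // DCB never sees a null session
    }
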
Whenever a client DCB is accepted, a session for it is allocated. This
simplifies the handling of shared data between DCBs by allowing it to be
placed inside the session object. Currently, the data is stashed away in
the client DCB.
The session allocation now has two distinct parts: the initialization of
the session itself and the creation of the router and filter
sessions. This allows sessions to be allocated the moment a client DCB is
created instead of only after the authentication is complete. With this
change, the need for the dummy session is removed.
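The split could be summarized with two functions along these lines; the
names and signatures are assumptions rather than the exact ones in the
source tree:

    struct Service;
    struct ClientDCB;
    struct Session;

    // Phase 1: runs the moment the client DCB is created. Only the session
    // object itself is set up (id, service and client references), so there
    // is no need for a dummy session before authentication.
    Session* session_alloc(Service* service, ClientDCB* client);

    // Phase 2: runs once authentication has completed. Builds the router
    // session and the filter sessions on top of the existing session.
    bool session_start(Session* session);
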
If the startup of the listeners requires communication with all of the
workers, the workers must be up and running for that to happen. Because
the main thread is itself a worker thread, the initialization code is not
entirely straightforward. By queuing an event to the main worker, the
startup of all listeners takes place in a fully operational state with
all workers running.
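A sketch of the idea with a placeholder worker interface; the real code
would go through MaxScale's worker framework:

    #include <functional>

    struct Worker
    {
        // Queue a task to be run later inside the worker's event loop.
        void post(std::function<void()> task);
    };

    Worker* main_worker();
    bool    start_all_listeners();

    void queue_listener_startup()
    {
        main_worker()->post([]() {
            // Runs inside the main worker's event loop, i.e. at a point
            // where every worker is up and able to take part in the
            // cross-worker communication that the listener startup needs.
            start_all_listeners();
        });
    }
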
The service initialization code was also flawed in the sense that it
would cause a deadlock if any of the threads had to check the user
permissions. This is mainly a problem with the authenticator modules, but
the benefits of the per-service pre-loading of users are most likely
superficial. In theory, startup will be faster, as each thread now
queries the users in parallel.