The class MonitorManager contains monitor-related functions that should not
be called from modules. MonitorManager can access private fields and methods
of the monitor.
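A minimal sketch of the pattern, using illustrative fields and functions
(only the MonitorManager name comes from the actual change):

```cpp
#include <string>

// Sketch only: the monitor grants friendship to MonitorManager so that
// administrative functions can touch private state without exposing it
// to ordinary modules.
class Monitor
{
public:
    explicit Monitor(std::string name) : m_name(std::move(name)) {}

private:
    friend class MonitorManager;    // only the manager may reach inside
    std::string m_name;
    bool        m_running = false;
};

class MonitorManager
{
public:
    // Functions like these are meant to be called from the core only,
    // never from modules.
    static void start(Monitor& m) { m.m_running = true; }
    static void stop(Monitor& m)  { m.m_running = false; }
    static bool is_running(const Monitor& m) { return m.m_running; }
};

int main()
{
    Monitor monitor("example");
    MonitorManager::start(monitor);
    return MonitorManager::is_running(monitor) ? 0 : 1;
}
```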
MaxScale server objects are now created for all Clustrix nodes.
Currently the name is "Clustrix-Server-N" where N is the number
of the node.
The server is created using runtime_create_server(), which has been
modified so that it can optionally skip persisting the created server.
That is probably just a temporary solution as a monitor should not
need to include .../core/internal-stuff.
Most of the ones still remaining outside are special cases.
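A rough sketch of the node-to-server mapping, with a hypothetical persist
flag; the actual signature of runtime_create_server() may differ:

```cpp
#include <cstdio>
#include <string>

// Hypothetical stand-in for runtime_create_server(): the point is the
// extra flag that lets the caller skip persisting the server to disk.
bool runtime_create_server(const std::string& name, const std::string& address,
                           int port, bool persist = true)
{
    std::printf("creating server %s at %s:%d (persist: %s)\n",
                name.c_str(), address.c_str(), port, persist ? "yes" : "no");
    return true;    // real code would allocate the server object here
}

int main()
{
    // One server object per discovered Clustrix node, named
    // "Clustrix-Server-N" where N is the node number.
    int node_number = 3;
    std::string name = "Clustrix-Server-" + std::to_string(node_number);

    // Dynamically discovered nodes are not persisted to the configuration.
    return runtime_create_server(name, "192.168.0.103", 3306, false) ? 0 : 1;
}
```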
Also, removed locking from status manipulation functions as it
has not been required for quite some time.
All global parameters are now handled by the runtime configuration
modification code. The parameters that are trivial to update can now be
updated at runtime. Attempts to modify any other global parameter now
return an error message stating that the parameter in question cannot be
modified at runtime.
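An illustrative sketch of the distinction, with made-up parameter names
and a made-up handler; it is not the actual runtime configuration code:

```cpp
#include <cstdio>
#include <set>
#include <string>

// Parameters that are trivial to update are whitelisted; everything else
// is rejected with an error. The names below are examples only.
static const std::set<std::string> modifiable_at_runtime = {
    "auth_connect_timeout",
    "auth_read_timeout",
    "auth_write_timeout",
};

bool runtime_alter_global_param(const std::string& key, const std::string& value)
{
    if (modifiable_at_runtime.count(key) == 0)
    {
        std::fprintf(stderr,
                     "Error: Global parameter '%s' cannot be modified at runtime\n",
                     key.c_str());
        return false;
    }

    std::printf("updated %s=%s\n", key.c_str(), value.c_str());
    return true;
}

int main()
{
    runtime_alter_global_param("auth_connect_timeout", "10");  // accepted
    runtime_alter_global_param("threads", "8");                // rejected
    return 0;
}
```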
Also updated the list of modifiable parameters in MaxCtrl. This list
should not be stored in MaxCtrl and should be created by MaxScale at
runtime.
Minor renaming of the session state enum values. Also exposed the session
state stringification function in the public header and removed the
stringification macro.
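A sketch of the pattern with illustrative value names; the function
replaces what used to be a stringification macro:

```cpp
#include <cstdio>

// Example enum and stringification function; the actual names in the
// public header may differ.
enum session_state_t
{
    SESSION_STATE_CREATED,
    SESSION_STATE_STARTED,
    SESSION_STATE_STOPPING,
    SESSION_STATE_FREE,
};

const char* session_state_to_string(session_state_t state)
{
    switch (state)
    {
    case SESSION_STATE_CREATED:
        return "Session created";
    case SESSION_STATE_STARTED:
        return "Session started";
    case SESSION_STATE_STOPPING:
        return "Stopping session";
    case SESSION_STATE_FREE:
        return "Freed session";
    default:
        return "Invalid state";
    }
}

int main()
{
    std::printf("%s\n", session_state_to_string(SESSION_STATE_STARTED));
    return 0;
}
```

Compared to a macro, an ordinary function is type-checked and can also be
called from a debugger.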
Allocating the session before a DCB guarantees that at no point will a DCB
have a null session. This further clarifies the concept of the session and
also allows the listener reference to be moved there.
Ideally, the session itself would allocate and assign the client DCB but
since the Listener is the only one that does it, this is acceptable for now.
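A simplified sketch of the ordering with stand-in types, not the real
session and DCB structures:

```cpp
#include <cassert>
#include <memory>

struct Session
{
    int id;
};

struct ClientDCB
{
    explicit ClientDCB(std::shared_ptr<Session> ses) : session(std::move(ses))
    {
        assert(session);    // invariant: a DCB always has a session
    }

    std::shared_ptr<Session> session;
};

int main()
{
    // The listener allocates the session first ...
    auto session = std::make_shared<Session>(Session{1});
    // ... and only then creates the client DCB, so the DCB can never
    // exist with a null session.
    ClientDCB dcb(session);
    return dcb.session ? 0 : 1;
}
```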
As each connection now immediately gets a session the dummy session is no
longer required. The next step would be to combine parts of the session
and the client DCB into one entity. This would prevent the possibility of
a client DCB with no associated session. Backend DCBs are different as
they can move from one session to another when the persistent connection
pool is in use.
If the startup of the listeners requires communication with all of the
workers, the workers must be up and running for that to happen.
Because the main thread is still a worker thread, the initialization code
is not entirely straightforward. By queuing an event to the main worker,
the listeners are started only once all workers are up and fully
functional.
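A rough illustration of the ordering in generic C++, not the MaxScale
worker API: the listener startup is packaged as a task that runs only once
the worker executing it is operational:

```cpp
#include <cstdio>
#include <future>
#include <string>
#include <vector>

int main()
{
    // Example listener names; in reality these come from the configuration.
    std::vector<std::string> listeners = {"listener-1", "listener-2"};

    // Stand-in for "queue an event to the main worker": the task runs on
    // another thread once that thread is up, mirroring how the real
    // startup waits for all workers to be running.
    auto started = std::async(std::launch::async, [&listeners]() {
        int count = 0;
        for (const auto& name : listeners)
        {
            std::printf("starting listener %s\n", name.c_str());
            ++count;
        }
        return count;
    });

    return started.get() == static_cast<int>(listeners.size()) ? 0 : 1;
}
```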
The service initialization code was also flawed in the sense that it
would cause a deadlock if any of the threads had to check the user
permissions. This is mainly a problem with the authenticator modules, but
the benefits of the per-service pre-loading of users are most likely
superficial. In theory, startup will be faster as each thread now queries
the users in parallel.
By storing the reference in the DCB, the two-way dependency between the
listeners and services is severed. Now the services have no direct link
to the listeners, and once a listener has been destroyed it is freed as
soon as all connections through it have closed.
Due to the fact that a listener itself has a DCB that must point to a
valid listener, a self-reference is stored in the listener DCB. This is
extremely confusing and is only here to keep the code functional until the
DCB part of the listener can be factored out.
By storing a shared pointer to the listeners in the services, they will be
available as long as the service using them exists. This enables clean
destruction of listeners that still have open sessions.
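A simplified sketch of the ownership model with std::shared_ptr and
stand-in types: the service holds one reference and every connection
accepted through the listener holds another, so the listener object
outlives its removal from the service until the last connection closes:

```cpp
#include <cstdio>
#include <memory>
#include <vector>

struct Listener
{
    ~Listener() { std::puts("listener destroyed"); }
};

struct ClientDCB
{
    std::shared_ptr<Listener> listener;     // reference stored in the DCB
};

struct Service
{
    std::vector<std::shared_ptr<Listener>> listeners;
};

int main()
{
    Service service;
    service.listeners.push_back(std::make_shared<Listener>());

    ClientDCB dcb{service.listeners.front()};   // an open connection

    service.listeners.clear();                  // listener removed/destroyed
    std::puts("listener removed from service");

    dcb.listener.reset();                       // last connection closes
    std::puts("connection closed");
    return 0;
}
```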
The listener creation code now creates the listener and links it to the
service as two separate steps. Also replaced the relevant parts of the
related code with the versions implemented in the listener.
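A hypothetical sketch of the split flow, with illustrative names and
signatures:

```cpp
#include <memory>
#include <string>
#include <vector>

struct Listener
{
    std::string name;
    int         port;

    static std::shared_ptr<Listener> create(const std::string& name, int port)
    {
        return std::make_shared<Listener>(Listener{name, port});
    }
};

struct Service
{
    std::vector<std::shared_ptr<Listener>> listeners;

    void link_listener(std::shared_ptr<Listener> l)
    {
        listeners.push_back(std::move(l));
    }
};

int main()
{
    Service service;

    // Step 1: create the listener on its own ...
    auto listener = Listener::create("example-listener", 4006);

    // Step 2: ... then link it to the service as a separate operation.
    service.link_listener(listener);

    return service.listeners.size() == 1 ? 0 : 1;
}
```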