Allocating the session before a DCB guarantees that at no point will a DCB
have a null session. This further clarifies the concept of the session and
also allows the listener reference to be moved there.
Ideally, the session itself would allocate and assign the client DCB, but
since the Listener is the only one that does it, this is acceptable for now.
As each connection now immediately gets a session, the dummy session is no
longer required. The next step would be to combine parts of the session
and the client DCB into one entity. This would prevent the possibility of
a client DCB with no associated session. Backend DCBs are different as
they can move from one session to another when the persistent connection
pool is in use.
Whenever a client DCB is accepted, a session for it is allocated. This
simplifies the handling of shared data between DCBs by allowing it to be
placed inside the session object. Currently, the data is stashed away in
the client DCB.
The session allocation now has two distinct parts: the initialization of
the session itself and the creation of the router and filter
sessions. This allows sessions to be allocated the moment a client DCB is
created instead of only after the authentication is complete. With this
change, the need for the dummy session is removed.
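A rough sketch of how the two phases could fit together is below; the
type and function names are illustrative stand-ins, not the actual
MaxScale API.

    // Stand-ins for the real MaxScale types.
    struct DCB;
    struct SERVICE;
    struct MXS_ROUTER_SESSION;

    struct Session
    {
        SERVICE* service = nullptr;
        DCB*     client_dcb = nullptr;
        MXS_ROUTER_SESSION* router_session = nullptr;   // created in phase two
        // ... filter sessions, shared client data, etc.
    };

    // Phase one: runs as soon as the client DCB is accepted, so a DCB never
    // exists without a session.
    Session* session_alloc(SERVICE* service, DCB* client_dcb)
    {
        Session* session = new Session;
        session->service = service;
        session->client_dcb = client_dcb;
        return session;
    }

    // Phase two: runs once authentication has completed and the router and
    // filter sessions can actually be created.
    bool session_start(Session* session)
    {
        // session->router_session = router->newSession(session);
        // ... create the filter sessions ...
        return session != nullptr;
    }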
Allocating the DCB with new allows the use of C++ objects in the DCB
struct. It also allows the explicit poll field to be replaced by inheriting
from MXB_POLL_DATA.
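Roughly what this could look like; the MXB_POLL_DATA definition here is
a simplified stand-in for the real maxbase header and the DCB members
are only examples.

    #include <cstdint>
    #include <string>

    // Simplified stand-in for the real maxbase definition.
    struct MXB_POLL_DATA
    {
        uint32_t (*handler)(MXB_POLL_DATA* data, void* worker,
                            uint32_t events) = nullptr;
        void* owner = nullptr;
    };

    // With the DCB allocated with new, constructors run and C++ members
    // work; the poll data comes from the base class instead of an
    // explicit field.
    struct DCB : public MXB_POLL_DATA
    {
        int         fd = -1;
        std::string remote;   // a C++ object member is now possible
    };

    DCB* create_dcb()
    {
        return new DCB;       // unlike calloc(), this runs the constructors
    }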
Whether or not a session should retain its statements is now
a property of the session. This is in preparation for making the
whole functionality a property of the service that can be enabled
and disabled at runtime.
As the router is the only one that knows what backends a particular
statement has been sent to, it is the responsibility of the router
to keep the session bookkeeping up to date. If it doesn't, we will
know what statements a session has received (provided at least some
component in the routing chain has RCAP_TYPE_STMT_INPUT capability),
but not how long their processing took. Currently only readwritesplit
does that.
All queries are stored, not just COM_QUERY, as that makes the
overall bookkeeping simpler; at clientReply() time we do not need to
know whether or not to bookkeep information, we can just do it.
When session information is queried for, we report as much information
as we have available.
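A minimal sketch of this division of labour, assuming hypothetical
bookkeeping types (the real session API differs): every command is
retained when it arrives and the router closes the entry at
clientReply() time.

    #include <chrono>
    #include <deque>
    #include <string>
    #include <utility>

    using Clock = std::chrono::steady_clock;

    // One entry per received command, not just COM_QUERY.
    struct StatementInfo
    {
        std::string       statement;
        Clock::time_point received;
        Clock::duration   duration{0};    // filled in at clientReply() time
        bool              completed = false;
    };

    struct SessionStats
    {
        std::deque<StatementInfo> statements;

        // Called for every incoming command when some component in the
        // routing chain has RCAP_TYPE_STMT_INPUT.
        void retain(std::string stmt)
        {
            statements.push_back({std::move(stmt), Clock::now(), {}, false});
        }

        // Only the router knows which backend the statement went to, so it
        // is responsible for closing the oldest open entry when the reply
        // arrives.
        void book_reply()
        {
            for (auto& info : statements)
            {
                if (!info.completed)
                {
                    info.duration = Clock::now() - info.received;
                    info.completed = true;
                    break;
                }
            }
        }
    };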
See the script directory for the method. The script to run in the top-level
MaxScale directory is called maxscale-uncrustify.sh, which uses
another script, list-src, from the same directory (so you need to set
your PATH). The uncrustify version was 0.66.
Given that worker.hh was public, it made sense to make routingworker.hh
public as well. This removes the need to include private headers in
modules and allows C++ constructs to be used in C++ code when previously
only the C API was available.
By storing the data gathered by readwritesplit inside the session, the
protocol will be aware of the state of the LOAD DATA LOCAL INFILE
execution. This prevents misinterpretation of the data, which previously
led to closed connections, effectively rendering LOAD DATA LOCAL INFILE
unusable.
This change is a temporary solution to a problem that needs to be solved
at the protocol level. The changes required to implement this are too big
to add into a bug fix release.
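For illustration only, the kind of state the session could carry might
look like the following; the enum and function are hypothetical, not
the actual protocol code.

    // Once LOAD DATA LOCAL INFILE is in progress, the client streams raw
    // file contents that must not be interpreted as SQL packets. Keeping
    // this state in the session lets the protocol module tell the two
    // apart instead of guessing from the data itself.
    enum class LoadDataState
    {
        INACTIVE,   // normal query processing
        ACTIVE,     // the client is streaming the local file contents
        END         // the terminating empty packet was seen, awaiting reply
    };

    struct SessionClientData
    {
        LoadDataState load_data_state = LoadDataState::INACTIVE;
    };

    bool interpret_as_sql(const SessionClientData& data)
    {
        return data.load_data_state == LoadDataState::INACTIVE;
    }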
The dummy session was unnecessarily modified by multiple threads at the
same time. The state of the dummy session must not change once it's been
set at startup.
This is the first step in some cleanup of the Worker interface.
The execution mode must now be explicitly specified, but that is
just a temporary step. Further down the road, _posting_ will
*always* mean via the message loop while _executing_ will optionally
and by default mean direct execution if the calling thread is that
of the worker.
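A sketch of the intended semantics, with illustrative names that do not
match the actual Worker API:

    #include <functional>
    #include <thread>
    #include <utility>

    class Worker
    {
    public:
        enum class Mode
        {
            QUEUED, // always deliver through the message loop
            AUTO    // run directly if already on the worker's own thread
        };

        void execute(std::function<void()> task, Mode mode)
        {
            if (mode == Mode::AUTO
                && std::this_thread::get_id() == m_thread_id)
            {
                task();                 // direct execution, no queuing
            }
            else
            {
                post(std::move(task));  // delivered via the message loop
            }
        }

        // Posting always means delivery through the message loop.
        void post(std::function<void()> task)
        {
            (void)task;                 // enqueue for the event loop (omitted)
        }

    private:
        std::thread::id m_thread_id = std::this_thread::get_id();
    };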
The Session class now contains all of the C++ objects that were previously
in the MXS_SESSION struct. It is also allocated with new, but all
initialization is still done outside of the Session in session_alloc_body.
This commit will not compile as it is a part of a set of commits that make
parts of the session private.
The filters can now be altered at runtime. Currently, the filter usage
uses a service-level lock. The next step is to allocate a per-service key
in the worker-local data and use that to orchestrate the storage of the
filter lists.
The service now uses a std::vector<SFilterDef> to store the filters it
uses. Most internal parts deal with the SFilterDef but debugcmd.cc still
moves raw pointers around (needs to be changed).
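Roughly how the current service-level locking looks, with illustrative
names; the planned worker-local storage would remove this lock from the
hot path.

    #include <memory>
    #include <mutex>
    #include <utility>
    #include <vector>

    struct FilterDef;                            // stand-in for the real type
    using SFilterDef = std::shared_ptr<FilterDef>;

    class Service
    {
    public:
        // Writers replace the whole list under the service lock.
        void set_filters(std::vector<SFilterDef> filters)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_filters = std::move(filters);
        }

        // Readers take a copy under the same lock.
        std::vector<SFilterDef> get_filters() const
        {
            std::lock_guard<std::mutex> guard(m_lock);
            return m_filters;
        }

    private:
        mutable std::mutex      m_lock;
        std::vector<SFilterDef> m_filters;
    };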
Replaced the previous RESULTSET with the new implementation. As the new
ResultSet doesn't have a JSON streaming capability, the MaxInfo JSON
interface has been removed. This should not be a big problem as the REST
API offers the same information in a more secure and structured way.
The filter implementation is now fully hidden. Also converted it to a C++
struct allocated with new and stored the filters in a global list instead
of embedding the list in the object itself.
When a session is closed, it releases a reference on the service and
checks if it was the last session for a destroyed service. The state of
the service was loaded after the reference count was decremented. This
behavior introduced a race condition where it was possible for a service
to be freed twice, first by the thread that marked the service as
destroyed and again by the last session for that service. By always
loading the service state before decrementing the reference count, we
avoid this race condition.
Currently, the memory ordering used for the reference counting is too
strict and could be relaxed. By default, all atomic operations use
sequentially consistent memory ordering. This guarantees correct behavior
but imposes a performance penalty. Incrementing the reference counts could
be done with a relaxed memory order as long as we know the reference
we're incrementing is valid. Releasing a reference must use
acquire-release ordering so that all writes made while the reference was
held are visible to the thread that performs the final release and frees
the object.
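A minimal sketch of these orderings (not the actual MaxScale code): the
increment can be relaxed because the caller already holds a valid
reference, while the decrement uses acquire-release so that everything
done while holding the reference happens-before the final deletion.

    #include <atomic>

    class Service
    {
    public:
        void incref()
        {
            // Relaxed is enough: the caller already holds a valid
            // reference, so the object cannot disappear underneath us.
            m_refcount.fetch_add(1, std::memory_order_relaxed);
        }

        void decref()
        {
            // Acquire-release: all writes made while holding the reference
            // become visible to the thread that frees the object.
            if (m_refcount.fetch_sub(1, std::memory_order_acq_rel) == 1)
            {
                delete this;   // we released the last reference
            }
        }

    private:
        std::atomic<int> m_refcount{1};
    };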