By storing the data gathered by readwritesplit inside the session, the
protocol will be aware of the state of the LOAD DATA LOCAL INFILE
execution. This prevents misinterpretation of the data which previously
led to closed connections, effectively rendering LOAD DATA LOCAL INFILE
unusable.
This change is a temporary solution to a problem that needs to be solved
at the protocol level. The changes required to implement this are too large
to include in a bug fix release.
The dummy session was unnecessarily modified by multiple threads at the
same time. The state of the dummy session must not change once it's been
set at startup.
This is the first step in some cleanup of the Worker interface.
The execution mode must now be explicitly specified, but that is
just a temporary step. Further down the road, _posting_ will
*always* mean via the message loop while _executing_ will optionally
and by default mean direct execution if the calling thread is that
of the worker.
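Roughly, the direction could be sketched as follows (this is an illustration,
not the final interface; the enum and member names are assumptions):

    #include <functional>
    #include <thread>

    class Worker
    {
    public:
        enum class Execute
        {
            QUEUED,  // always post: run on the next iteration of the message loop
            AUTO     // run directly if called on the worker thread, else post
        };

        void execute(std::function<void()> task, Execute mode)
        {
            if (mode == Execute::AUTO && std::this_thread::get_id() == m_thread_id)
            {
                task();                 // direct execution on the worker thread
            }
            else
            {
                post(std::move(task));  // delivered via the message loop
            }
        }

        void post(std::function<void()> task);  // enqueue into the message loop

    private:
        std::thread::id m_thread_id;
    };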
The Session class now contains all of the C++ objects that were previously
in the MXS_SESSION struct. It is also allocated with new but all
initialization is still done outside of the Session in session_alloc_body.
This commit will not compile as it is a part of a set of commits that make
parts of the session private.
The filters can now be altered at runtime. Currently, the filter usage
uses a service-level lock. The next step is to allocate a per-service key to
the worker local data and use that to orchestrate the storage of the
filter lists.
The service now uses a std::vector<SFilterDef> to store the filters it
uses. Most internal parts deal with the SFilterDef but debugcmd.cc still
moves raw pointers around (needs to be changed).
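A rough sketch of the current arrangement (the members shown here are
assumptions):

    #include <memory>
    #include <mutex>
    #include <vector>

    struct FilterDef;
    using SFilterDef = std::shared_ptr<FilterDef>;

    class Service
    {
    public:
        // Replace the whole filter chain under the service-level lock.
        void set_filters(std::vector<SFilterDef> filters)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_filters = std::move(filters);
        }

        // Sessions take a copy so that a runtime alteration does not
        // invalidate the chain they are currently using.
        std::vector<SFilterDef> get_filters() const
        {
            std::lock_guard<std::mutex> guard(m_lock);
            return m_filters;
        }

    private:
        mutable std::mutex m_lock;
        std::vector<SFilterDef> m_filters;
    };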
Replaced the previous RESULTSET with the new implementation. As the new
ResultSet doesn't have a JSON streaming capability, the MaxInfo JSON
interface has been removed. This should not be a big problem as the REST
API offers the same information in a more secure and structured way.
The filter implementation is now fully hidden. Also converted it to a C++
struct allocated with new and stored the filters in a global list instead
of embedding the list in the object itself.
When a session is closed, it releases a reference on the service and
checks if it was the last session for a destroyed service. The state of
the service was loaded after the reference count was decremented. This
behavior introduced a race condition where it was possible for a service
to be freed twice, first by the thread that marked the service as
destroyed and again by the last session for that service. By always
loading the service state before decrementing the reference count, we
avoid this race condition.
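The corrected ordering, roughly (the member names are illustrative):

    #include <atomic>

    struct Service
    {
        std::atomic<int>  refcount{1};
        std::atomic<bool> destroyed{false};
    };

    void release_service(Service* service)
    {
        // Read the state *before* giving up our reference. If the state were
        // read after the decrement, another thread could mark the service as
        // destroyed and free it in between, and both threads would then
        // believe they should free it.
        bool was_destroyed = service->destroyed.load();

        if (service->refcount.fetch_sub(1) == 1 && was_destroyed)
        {
            delete service;  // we held the last reference to a destroyed service
        }
    }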
Currently, the memory ordering used for the reference counting is too
strict and could be relaxed. By default, all atomic operations use
sequentially consistent memory ordering. This guarantees correct behavior
but imposes a performance penalty. Incrementing the reference counts could
be done with a relaxed memory order as long as we know the reference
we're incrementing is valid. Releasing a reference must use an
acquire-release order to guarantee the read-modify-write operation is
successful.
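Sketched with std::atomic, the intended relaxation would look roughly like
this (not what the code currently does):

    #include <atomic>

    std::atomic<int> refcount{1};

    void add_ref()
    {
        // The increment only needs the reference we copy from to be valid;
        // no ordering with surrounding memory operations is required.
        refcount.fetch_add(1, std::memory_order_relaxed);
    }

    bool release_ref()
    {
        // Releasing uses acquire-release so the thread that drops the last
        // reference sees all writes made while references were still held.
        return refcount.fetch_sub(1, std::memory_order_acq_rel) == 1;
    }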
Services can now be destroyed if they have no active listeners and they
are not linked to servers. When these conditions are met, the service will
be destroyed when the last session for the service is closed.
The closing of a service closes all listeners that were once assigned to
it. This allows the ports to be closed at runtime, which previously happened
only on shutdown.
Exposed the command through the REST API but not through MaxAdmin as it is
deprecated.
As all connections should be accepted via dcb_accept, it is the optimal
place to count how many open client connections there are per service. The
decrement should be done when the session is closed instead of when dcb_close
is called for the client DCB. This allows the client count to be the absolute
reference count that sessions have to a service.
The current client count is a duplicate counter that should match the
n_current value in SERVICE_STATS. The former differs from the latter in that
it is incremented when the client DCB is accepted instead of when the session
is created.
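Illustratively (the counter member and helper names are assumptions; the real
code lives in dcb_accept and the session closing path):

    #include <atomic>

    struct Service
    {
        std::atomic<int> client_count{0};
    };

    void on_client_accepted(Service* service)  // called from dcb_accept
    {
        service->client_count.fetch_add(1);
    }

    void on_session_closed(Service* service)   // called when the session is
    {                                          // closed, not on dcb_close
        service->client_count.fetch_sub(1);
    }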
To remove the requirement that a Worker must have an id, the MXS_POLL_DATA
structure now stores a pointer to the owning worker instead of the owner's
id. This also allows some further cleanup, as the need for switching back and
forth between the id and the worker disappears.
The id will be moved from Worker to RoutingWorker, as there is currently a
fair amount of code that assumes that the ids of routing workers start from 0.
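Illustratively, reduced to the relevant field (not the full definition):

    class Worker;

    struct MXS_POLL_DATA
    {
        // int  owner_id;  // before: the id of the owning worker, which
        //                 // required id <-> worker lookups
        Worker* owner;     // after: the owning worker itself
    };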
When a client connection is closed by MaxScale before the client initiates
a controlled closing of the connection, an error message is sent. This
error message now also explains why the connection was closed to make
problem resolution easier.
The connections that relate to a particular session are now a part of the
sessions resource. Currently, only the generic information is stored for
each connection (id and server name).
By storing a link to the backend DCBs in the session object itself, we can
reach all related objects from the session. This removes the need to
iterate over all DCBs to find the set of related DCBs.
A new class mxs::Worker will be introduced and mxs::RoutingWorker
will be inherited from that. mxs::Worker will basically only be a
thread with a message-loop.
Once available, all current non-worker threads (except the one
implicitly created by microhttpd) can be created by inheriting
from that; in practice that means the housekeeping thread, all
monitor threads and possibly the logging thread.
The benefit of this arrangement is that there will then be a general
mechanism for cross-thread communication without having to use any
shared data structures.
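The intended hierarchy, roughly (only mxs::Worker and mxs::RoutingWorker are
actual names; everything else is illustrative):

    #include <functional>

    namespace mxs
    {
    // Basically just a thread with a message loop.
    class Worker
    {
    public:
        virtual ~Worker() = default;
        void post(std::function<void()> task);  // cross-thread communication
        void run();                             // the message loop
    };

    // Handles client traffic; the 0-based worker id will live here.
    class RoutingWorker : public Worker
    {
    public:
        int id() const;
    };

    // E.g. the housekeeping and monitor threads could later be workers too.
    class HousekeeperWorker : public Worker
    {
    };
    }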
The worker task should never be immediately executed to allow the task to
be executed on the next "tick" of the worker. This prevents recursive
calls to e.g. routeQuery in readwritesplit when errors are handled.
Now that the readwritesplit uses the same mechanism for both
retry_failed_reads and delayed query retries, the re-routing function
should accept a delay of 0 seconds. This makes the mechanism more suitable
for other uses, e.g. delaying queries in filters.
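A sketch of the idea, with an illustrative delayed_call interface:

    #include <chrono>
    #include <functional>

    class Worker
    {
    public:
        // Schedules fn to run on the worker after the given delay.
        void delayed_call(std::chrono::milliseconds delay, std::function<void()> fn);
    };

    void retry_query(Worker& worker, std::function<void()> reroute, std::chrono::seconds delay)
    {
        // The delay may be 0: the retry is still posted and runs on the next
        // tick of the worker, never recursively on the current stack.
        worker.delayed_call(std::chrono::duration_cast<std::chrono::milliseconds>(delay),
                            std::move(reroute));
    }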
The `error` variable was never used. Also added a more convenient typedef
for both the downstream and upstream functions and updated filter API
version.
The tasks themselves now control whether they are executed again. To
compare it to the old system, oneshot tasks now return `false` and
repeating tasks return `true`.
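For example (the registration side is omitted and the names are illustrative):

    // A task now returns true to be called again on its next interval and
    // false to be removed after this call.
    bool rotate_logs_task()
    {
        // ... periodic work ...
        return true;   // repeating task
    }

    bool initial_cleanup_task()
    {
        // ... one-off work ...
        return false;  // oneshot task
    }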
Letting the housekeeper remove the tasks makes the code simpler and
removes the possibility of the task being removed while it is being
executed. It does introduce a deadlock possibility if a housekeeper
function is called inside a housekeeper task.
The class now does all of the work and the API wraps the calls to the
member methods. Using an STL container makes the list management a lot
more convenient.
The old hkheartbeat variable was replaced with the mxs_clock() function, which
simply wraps an atomic load of the variable. This allows it to be read
correctly throughout MaxScale and opens up the possibility of converting the
load to a relaxed memory order read.
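Roughly (the variable shown here is illustrative):

    #include <atomic>
    #include <cstdint>

    namespace
    {
    std::atomic<int64_t> current_tick{0};  // advanced by the housekeeper thread
    }

    int64_t mxs_clock()
    {
        // Currently a sequentially consistent load; this could later be
        // relaxed, since readers only need a reasonably fresh value.
        return current_tick.load();
    }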
Renamed the header and associated macros. Removed inclusion of the
heartbeat header from the housekeeper header and added it to the files
that were missing it.
This is a proof-of-concept that validates the query retrying method. The
actual implementation of the query retrying mechanism needs more thought
as using the housekeeper is not very efficient.
With the configuration entry
dump_last_statements=[never|on_close|on_error]
you can now specify when and if to dump the last statements
of a session.
With the configuration entry
retain_last_statements=<unsigned>
or the debug flag '--debug=retain-last-statements=<unsigned>',
MaxScale will store the specified number of last statements
for each session. By calling
session_dump_statements(session);
MaxScale will dump the last statements as NOTICE messages, for debugging
purposes.
- session_set_response() made const correct
- set_response() function added to mxs::FilterSession; calls
session_set_response().
- Cache uses set_response() for delivering the cache result
to the client.
Filters that terminate the response processing now have a mechanism by which
a response can be provided in such a way that the preceding filters will see
the response.
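A minimal sketch of such a filter (only set_response() and mxs::FilterSession
come from this change; the class name and the lookup() helper are
illustrative):

    class CacheSession : public mxs::FilterSession
    {
    public:
        int routeQuery(GWBUF* packet)
        {
            if (GWBUF* cached = lookup(packet))  // illustrative cache lookup
            {
                // Deliver the cached result as the response; filters earlier
                // in the chain will see it as if it came from the server.
                set_response(cached);
                return 1;
            }

            return mxs::FilterSession::routeQuery(packet);  // miss: route on
        }

    private:
        GWBUF* lookup(GWBUF* packet);
    };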