The intention was to send the lowest backend version string automatically
to the client instead of the default handshake version. This did not work
as the service version string was used instead of the server version.
The processing and the state querying have been split into two separate
functions, so the result can be inspected multiple times without
triggering the result processing again.
The collection of resultsets needs to be disabled by default when a
response is received, to cover the cases where an error is returned.
Result collection should also not be enabled for queries that do not
generate any responses.
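
As a rough illustration of splitting the processing from the state
querying (the names and types below are hypothetical, not the actual
MaxScale backend protocol functions), the processing consumes each buffer
exactly once while the state query has no side effects:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative sketch only: the names are hypothetical, not the actual
    // MaxScale backend protocol functions.
    class ReplyState
    {
    public:
        // Consumes one buffer exactly once and updates the internal state.
        void process_reply(const std::vector<uint8_t>& buffer, bool last_packet)
        {
            m_bytes_received += buffer.size();

            if (last_packet)
            {
                m_complete = true;
            }
        }

        // Pure state query: safe to call any number of times without
        // triggering any further processing of the result.
        bool reply_is_complete() const
        {
            return m_complete;
        }

    private:
        std::size_t m_bytes_received = 0;
        bool        m_complete = false;
    };
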
See the script directory for the method. The script to run in the top-level
MaxScale directory is called maxscale-uncrustify.sh, which uses
another script, list-src, from the same directory (so you need to have it
on your PATH). The uncrustify version was 0.66.
Given that worker.hh was public, it made sense to make routingworker.hh
public as well. This removes the need to include private headers in
modules and allows C++ constructs to be used in C++ code when previously
only the C API was available.
By storing the data gathered by readwritesplit inside the session, the
protocol will be aware of the state of the LOAD DATA LOCAL INFILE
execution. This prevents misinterpretation of the data, which previously
led to closed connections, effectively rendering LOAD DATA LOCAL INFILE
unusable.
This change is a temporary solution to a problem that needs to be solved
at the protocol level. The changes required to implement this are too big
to include in a bug fix release.
If a large packet was received, the stack would overflow when the username
size was determined from the packet size. The code must not assume
anything about the size of the packet being read.
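
A defensive sketch of that rule (the constant and function names here are
assumptions, not the actual code): the username length is bounded
explicitly instead of being derived from the packet size.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <string>

    // Assumed upper bound for a username; defined locally for illustration.
    constexpr std::size_t MYSQL_USER_MAXLEN = 128;

    // Reads a NUL-terminated username from the payload without trusting the
    // packet-derived length or sizing any stack buffer from it.
    bool read_username(const uint8_t* payload, std::size_t payload_len, std::string* out)
    {
        std::size_t limit = std::min(payload_len, MYSQL_USER_MAXLEN + 1);
        const void* nul = std::memchr(payload, '\0', limit);

        if (nul == nullptr)
        {
            return false; // Too long or malformed: reject instead of overflowing.
        }

        out->assign(reinterpret_cast<const char*>(payload));
        return true;
    }
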
The ssl parameters were defined as strings even though they were actually
enums. The events parameter was also a string even though it was an enum.
Also added the missing "all" value to the events enum. This fixes the
regression of scripts not being launched on all events by default.
Moved the definition of the default version string to where it should be and
removed the empty value check.
As all connections should be accepted via dcb_accept, it is the optimal
place to calculate how many open client connections each service has. The
decrement should be done when the session is closed instead of when
dcb_close is called for the client DCB. This allows the client count to be
the absolute reference count that sessions have to a service.
The current client count is a duplicate counter that should match the
n_current value in SERVICE_STATS. The former differs from the latter in
that it is incremented when the client DCB is accepted instead of when the
session is created.
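
A simplified sketch of this counting rule (the structure and function names
are illustrative; only the field names mirror the text):

    #include <atomic>

    struct ServiceCounts
    {
        std::atomic<int> client_count{0}; // incremented in dcb_accept, one per client DCB
        std::atomic<int> n_current{0};    // incremented when the session is created
    };

    // Called from dcb_accept(): every accepted client connection bumps the count.
    void on_client_accepted(ServiceCounts& svc)
    {
        ++svc.client_count;
    }

    // Called when the session is closed, not when dcb_close() is called on the
    // client DCB: this keeps client_count equal to the number of sessions that
    // still reference the service.
    void on_session_closed(ServiceCounts& svc)
    {
        --svc.client_count;
        --svc.n_current;
    }
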
The LocalClient micro-client required a reference to the session that was
valid at construction time. This is the reason why the previous
implementation used dcb_foreach to first gather the targets and then
execute queries on them. By replacing this reference with pointers to the
raw data it requires, we lift the requirement that the originating session
be alive at construction time.
Now that the LocalClient no longer holds a reference to the session, the
killing of the connection does not have to be done on the same thread that
started the process. This prevents the deadlock that occurred when
concurrent dcb_foreach calls were made.
Replaced the unused dcb_foreach_parallel with a version of dcb_foreach
that allows iteration of the DCBs local to this worker. This
dcb_foreach_local is the basis upon which all DCB access outside of
administrative tasks should be built.
This change will introduce a regression in functionality: the client will
no longer receive an error if no connections match the KILL query
criteria. This is done to avoid having to synchronize the workers after
they have killed their own connections.
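
A small sketch of how iteration over worker-local DCBs could be used; it
assumes dcb_foreach_local shares dcb_foreach's callback signature
(bool (*)(DCB*, void*)), and the counting task itself is only an example:

    #include <maxscale/dcb.h>

    // Visits only the DCBs owned by the calling worker, so the callback needs
    // no cross-thread synchronization.
    static bool count_cb(DCB* dcb, void* data)
    {
        int* count = static_cast<int*>(data);
        ++(*count);
        return true; // keep iterating
    }

    int count_local_dcbs()
    {
        int count = 0;
        dcb_foreach_local(count_cb, &count);
        return count;
    }
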
If a client executed a COM_CHANGE_USER command and the reauthentication
of the client failed, no error message was logged about the failure of the
reauthentication process; only a routing failure message was logged.
The result collection now covers more cases, including the use of
COM_CHANGE_USER. The addition of COM_STMT_EXECUTE to the list of commands
that generate result set responses is needed in order for the code to take
the correct branch.
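
For illustration, a check along these lines (the helper name is
hypothetical; the command byte values are the standard MySQL protocol
constants) is what needs to include COM_STMT_EXECUTE:

    #include <cstdint>

    // Hypothetical helper: returns true for commands whose response can be a
    // result set. COM_STMT_EXECUTE must be part of this set for the result
    // collection code to take the correct branch.
    bool command_returns_resultset(uint8_t cmd)
    {
        switch (cmd)
        {
        case 0x03: // COM_QUERY
        case 0x17: // COM_STMT_EXECUTE
            return true;

        default:
            return false;
        }
    }
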
With the removal of the old session command implementation, the code that
used it can be removed or replaced with newer constructs. As a result, the
backend protocol no longer does any session command processing.
The three buffer types GWBUF_TYPE_SESCMD_RESPONSE,
GWBUF_TYPE_RESPONSE_END and GWBUF_TYPE_SESCMD, as well as their related
macros, are no longer used and can be removed.
The protocol could leak memory in rare cases where several commands were
queued at the same time. Readwritesplit also didn't free the memory it
acquired via qc_get_table_names.
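
The ownership rule for that memory, sketched under the assumption of the
2.x-era interface where qc_get_table_names() returns a heap-allocated array
of heap-allocated strings plus an out-parameter count (details may differ
between versions):

    #include <maxscale/alloc.h>
    #include <maxscale/query_classifier.h>

    // The caller owns both the strings and the array and must free them.
    void consume_table_names(GWBUF* querybuf)
    {
        int n_tables = 0;
        char** tables = qc_get_table_names(querybuf, &n_tables, true);

        for (int i = 0; i < n_tables; i++)
        {
            MXS_FREE(tables[i]);
        }

        MXS_FREE(tables);
    }
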
The id has now been moved from mxs::Worker to mxs::RoutingWorker
and the implications are felt in many places.
The primary need for the id was to be able to access worker-specific
data, maintained outside of a routing worker, when given a worker
(the id is used to index into an array). Slightly related to that
was the need to be able to iterate over all workers. That obviously
implies some kind of collection.
That causes all sorts of issues if there is a need to be able
to create and destroy a worker at runtime. With the id removed from
mxs::Worker all those issues are gone, and it is perfectly ok to create
and destroy mxs::Workers as needed.
Further, while there is a need to broadcast a particular message to
all _routing_ workers, it hardly makes sense to broadcast a particular
message to _all_ workers. Consequently, only routing workers are kept
in a collection and all static member functions dealing with all
workers (e.g. broadcast) have now been moved to mxs::RoutingWorker.
Now, instead of passing the id around, we deal directly
with the worker pointer. Later the data in all those external arrays
will be moved into mxs::[Worker|RoutingWorker] so that worker-related
data is maintained in exactly one place.
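
As a hypothetical illustration of the transition (the types here are made
up), data that used to live in an id-indexed array can be keyed off the
worker pointer until it is moved into the worker itself:

    #include <unordered_map>

    class RoutingWorker; // stand-in for mxs::RoutingWorker

    struct ModuleData
    {
        int counter = 0;
    };

    // Previously: ModuleData external_data[MAX_WORKERS]; indexed by worker id.
    // Now: keyed directly by the worker pointer.
    std::unordered_map<const RoutingWorker*, ModuleData> external_data;

    ModuleData& module_data_for(const RoutingWorker* worker)
    {
        return external_data[worker];
    }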