When backend SSL connections were created, the connection creation was
done twice. This was caused by a missing check for an already
established SSL connection.
If a DCB is closed before a response to the handshake packet is received,
the DCB's session will point to the dummy session. In this case no error
should be written to the DCB.
This is a cherry-pick of commit f53e112bf49766f1cc55516c2d7ee571461d483f
from the 2.2 branch.
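A minimal sketch of the guard this implies; session_is_dummy() is a
hypothetical stand-in for however the core identifies the placeholder
session, and the write path is illustrative:

    /* Drop the error if the DCB no longer belongs to a real session. */
    static void write_error(DCB *dcb, GWBUF *errmsg)
    {
        if (session_is_dummy(dcb->session)) /* hypothetical helper */
        {
            /* Closed before the handshake response arrived: there is
             * no real session to report to, so discard the error. */
            gwbuf_free(errmsg);
        }
        else
        {
            dcb->func.write(dcb, errmsg);
        }
    }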
The tests are now built by default. This should make it easier for users
to verify that they have a working MaxScale.
Also made the building of test_parse_kill conditional like the rest of the
tests.
By always starting the session shutdown process by stopping the client
DCB, the manipulation of the session state can be removed from the backend
protocol modules and replaced with a fake hangup event.
Delivering this event via the core allows the actual dcb_close call on the
client DCB to be done only when the client DCB is being handled by a
worker.
Directly closing the client DCB in the backend protocol modules is not
correct anymore as the state of the session doesn't change when the client
DCB is closed. By propagating the shutdown of the session with a fake
hangup to the client DCB, the closing of the DCB is done only once.
Added debug assertions that make sure all DCBs are closed only
once. Removed redundant code in the backend protocol error handling code.
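A rough sketch of the shutdown path this describes, assuming the core's
poll_fake_hangup_event() helper; the surrounding function is
illustrative:

    /* Backend error handling: close only the backend DCB here and let
     * the core deliver a hangup to the client DCB. The worker that
     * owns the client DCB then calls dcb_close() on it exactly once. */
    static void handle_backend_error(DCB *backend_dcb)
    {
        MXS_SESSION *session = backend_dcb->session;

        dcb_close(backend_dcb);
        poll_fake_hangup_event(session->client_dcb);
    }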
Make all modules lowercase and make module loading case
insensitive. Further, make command invocation case insensitive as far
as the module name is concerned.
If a shallow copy of the buffer is made, any modifications that are made
to the data after it has been queued will affect the queued query of the
LocalClient.
A copy-on-write mechanism would avoid the relatively expensive copying
of the data, but since the LocalClient is rarely used, this is not the
most critical performance problem.
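A sketch of the difference, using the gwbuf_clone()/gwbuf_deep_clone()
pair; the queueing function and field names are illustrative:

    /* gwbuf_clone() shares the underlying data, so later writes to the
     * original would corrupt the queued query. A deep clone keeps the
     * queued copy independent at the cost of copying the data. */
    void queue_query(LocalClient *client, GWBUF *query)
    {
        GWBUF *copy = gwbuf_deep_clone(query);
        client->queue = gwbuf_append(client->queue, copy);
    }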
The service for a dummy session will be NULL. If authentication fails for
a dummy session, then no service level actions should be taken.
Only the binlogrouter can trigger authentication failure with a dummy
session as it creates connections before the service itself has started.
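A minimal sketch of the resulting guard; the function and field names
are illustrative:

    static void on_auth_failure(DCB *dcb)
    {
        SERVICE *service = dcb->session->service;

        if (service == NULL)
        {
            /* Dummy session, e.g. a binlogrouter connection created
             * before the service has started: no service level
             * actions to take. */
            return;
        }

        /* ... service level handling, e.g. scheduling a reload of
         * the user database ... */
    }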
The readqueue should never be explicitly assigned and should only ever be
appended to. This guarantees that the packets are read and processed in
the correct order.
Also removed an unused function that dealt with readqueue
manipulation.
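A sketch of the invariant (the readqueue field name varies between
versions and is illustrative here):

    /* Partial data is always appended to the existing readqueue ... */
    dcb->readq = gwbuf_append(dcb->readq, incoming);

    /* ... never assigned over it, which could drop or reorder
     * previously buffered partial packets:
     *
     *     dcb->readq = incoming;   // wrong
     */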
When packets were routed individually, the qualification for the
persistent pool was done before the current command was updated. In
addition, the previous commit does not appear to even build.
The statement tracking was given the same buffer multiple times when a
large packet was spread across multiple buffers and a router with
RCAP_TYPE_STMT_INPUT was present.
The command tracking packet processing is redundant for routers that
require RCAP_TYPE_STMT as the same processing is done later when the
buffer is split into packets and routed. By using the split packets,
the protocol module can simplify the command tracking considerably for
most routers.
When backend authentication failed due to errors other than wrong
credentials, the users were unconditionally reloaded. This caused a spike
of activity whenever authentication failed for other reasons.
Also fixed the test that checks for this to look for the correct error
message.
The internal header directory conflicted with in-source builds, causing
a build failure. This is fixed by renaming the internal header
directory to something other than maxscale.
The renaming pointed out a few problems in a couple of source files that
appeared to include internal headers when the headers were in fact public
headers.
Fixed maxctrl in-source builds by making the copying of the sources
optional.
Sending an error to the client allows the connector to show more
information to the user when the DCB is closed due to a reason internal to
MaxScale.
The error message states that the connection was killed by MaxScale to
distinguish it from the error sent by the server. The error number and
SQL state are still the same, as both errors should be treated the same
way.
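A sketch of what such an error could look like, built with the
modutil_create_mysql_err_msg() helper; the error number, SQLSTATE and
wording below are illustrative, not the exact values used:

    GWBUF *err = modutil_create_mysql_err_msg(
        1,        /* packet number */
        0,        /* affected rows */
        1927,     /* illustrative: ER_CONNECTION_KILLED */
        "70100",  /* illustrative SQLSTATE */
        "Connection killed by MaxScale");
    dcb->func.write(dcb, err);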
This builds on commit 1287b0e595a5f99026f66df7eeaef091b8ffc774, fixing
a bug introduced in that commit and cleaning up the original code.
The authentication code assumed that the initial request contained only
authentication-related data. This is not true if the client library
predicts that authentication will succeed and sends a query right after
the authentication data.
The authentication phase expects full packets. If the packets aren't
complete, a debug assertion is hit. To detect this, the result of the
buffer extraction needs to be checked.
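A minimal sketch of the check, using the
modutil_get_next_MySQL_packet() extraction helper; the readqueue field
name is illustrative:

    /* Only act on complete packets during authentication. A NULL
     * result means more data is still needed. */
    GWBUF *packet = modutil_get_next_MySQL_packet(&readbuf);

    if (packet == NULL)
    {
        /* Partial packet: buffer it and wait for more data instead
         * of proceeding (and hitting the debug assertion). */
        dcb->readq = gwbuf_append(dcb->readq, readbuf);
        return 0;
    }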
Returning the results of a query as a set of packets is currently more
efficient. This is mainly because each individual packet in single
packet routing is allocated from the heap, which causes a significant
performance loss.
Took the new capability into use in readwritesplit and modified the
reply_is_complete function to work with non-contiguous results.
When the router requires statement-based output, the gathering of
complete packets can be skipped, as the process of splitting the data
into individual packets implies that only complete packets are handled.
Also added a quicker check for stored protocol commands than a call to
protocol_get_srv_command.
The backend protocol command tracking didn't check whether the session was
the dummy session. The DCB's session is always set to this value when it
is put into the persistent pool.
If a query was processed in the client protocol module when a prepared
statement was being executed by the backend module, the current command
would get overwritten. This caused a debug assertion in readwritesplit to
trigger as the result was neither a single packet nor a collected result.
The RCAP_TYPE_STMT_INPUT capability guarantees that a buffer contains a
complete packet. This information can be used to track the currently
executed command based on the buffer contents, which allows
asynchronicity between the client and backend protocols. In practice
this only comes into play when routers queue queries for later
execution.
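A sketch of what complete-packet tracking allows: with
RCAP_TYPE_STMT_INPUT the command byte can be read straight from the
buffer, since it always follows the 4-byte MySQL packet header (3 bytes
payload length, 1 byte sequence id):

    static uint8_t packet_command(GWBUF *buf)
    {
        uint8_t cmd = 0;
        /* Offset 4 is the first payload byte, i.e. the command. */
        gwbuf_copy_data(buf, 4, 1, &cmd);
        return cmd;
    }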
The result collection did not reset properly when a non-resultset was
returned for a request. As collected results need to be distinguishable
from single packet responses, a new buffer type was added.
The new buffer type is used by readwritesplit which uses result collection
for preparation of prepared statements.
Moved the current command tracking to the RWBackend class, as the
command tracked by the protocol can change before a response to the
executed command is received.
Removed a false debug assertion in the mxs_mysql_extract_ps_response
function that was triggered when a very large prepared statement response
was processed in multiple parts.
KILL commands are now sent to the backends in an asynchronous manner. As
the LocalClient class is used to connect to the servers, this will cause
an extra connection to be created on top of the original connections
created by the session.
If the user does not have the permissions to execute the KILL, the error
message is currently lost. This could be solved by adding a "result
handler" into the LocalClient class which is called with the result.
When the LocalClient is constructed, it is possible to extract all the
needed information at that time. The only obstacle is the fact that the
LocalClient is constructed at the same time the session is. Since the
client DCB is created before the session, it is safe to extract the shared
data directly from it.
If the client sends data before authentication is complete, it must not
be discarded; it needs to be processed as if it had been sent in a
separate network packet.
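A sketch of the handling, again using modutil_get_next_MySQL_packet();
the buffering of the remainder is illustrative:

    /* Take exactly one packet for authentication and keep whatever
     * follows it, as if it had arrived separately. */
    GWBUF *auth_packet = modutil_get_next_MySQL_packet(&readbuf);

    if (auth_packet && readbuf)
    {
        /* The client pipelined a query after the auth data: buffer
         * it for processing once authentication completes. */
        dcb->readq = gwbuf_append(dcb->readq, readbuf);
    }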
When readwritesplit is routing any queued queries, the currently
executed command of the protocol modules needs to be adjusted by
readwritesplit. This is not a true fix but rather a workaround for the
problems of queued query execution.
The correct solution would be to move the queued query handling into the
client protocol module so that all components see the same state.
Asserting that only a complete COM_STMT_PREPARE is returned when the
prepared statement preparation is extracted will guarantee that the
protocol works as expected.
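A sketch of such an assertion with ss_dassert(); the length math
follows the MySQL packet layout (3-byte payload length, 1-byte sequence
id, then the 0x16 COM_STMT_PREPARE command byte), and the constant name
is the one used in recent headers:

    uint8_t header[5];
    gwbuf_copy_data(buf, 0, sizeof(header), header);

    size_t payload = header[0] | (header[1] << 8) | (header[2] << 16);

    ss_dassert(header[4] == MXS_COM_STMT_PREPARE); /* 0x16 */
    ss_dassert(gwbuf_length(buf) == payload + 4);  /* one whole packet */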