The backend DCBs do not need to send hangups in the close protocol API
method. If the conditions for the hangup were true, the session was
already stopping, meaning that the client DCB was already closed.
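A minimal sketch of the idea; BackendDCB and release_resources() are assumed
names, not the actual MaxScale API:

    // Illustrative only: the shape of the close path, not the real code.
    struct BackendDCB
    {
        void release_resources();   // assumed teardown helper

        void close()
        {
            // No hangup is sent from here: whenever the old condition for
            // sending one held, the session was already stopping and the
            // client DCB had already been closed.
            release_resources();
        }
    };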
All COM_STMT_SEND_LONG_DATA commands and the COM_STMT_EXECUTE that follows
them must be sent to the same server. This implicitly works for masters,
but with multiple slave servers the data could be sent to the wrong server.
By using the code added for MXS-2521, this problem can now be easily
solved by checking what the previous command was.
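A sketch of the rule using the real binary protocol command bytes; Target,
the prev_cmd bookkeeping and pick_by_load_balancing() are assumed names, not
the actual readwritesplit code:

    #include <cstdint>

    struct Target;                        // stand-in for a backend server
    Target* pick_by_load_balancing();     // assumed routing helper

    constexpr uint8_t COM_STMT_EXECUTE        = 0x17;  // real protocol byte
    constexpr uint8_t COM_STMT_SEND_LONG_DATA = 0x18;  // real protocol byte

    // COM_STMT_SEND_LONG_DATA produces no response, so the only way to keep
    // the COM_STMT_EXECUTE that follows it on the same server is to remember
    // what the previous command was.
    Target* pick_target(uint8_t cmd, uint8_t prev_cmd, Target* prev_target)
    {
        if (cmd == COM_STMT_SEND_LONG_DATA
            || (cmd == COM_STMT_EXECUTE && prev_cmd == COM_STMT_SEND_LONG_DATA))
        {
            return prev_target;           // stick to the previous server
        }
        return pick_by_load_balancing();  // otherwise any candidate will do
    }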
If a transaction replay fails, queries must not be routed before the
connection is closed. This could happen if the client received the error
from the replay failure and closed the connection before the fake hangup
generated by the failure was processed.
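A sketch of the guard with assumed names (Buffer, m_replay_failed,
route_to_backend); the real routing code differs:

    #include <utility>

    struct Buffer {};                     // stand-in for a query buffer

    struct Session
    {
        bool m_replay_failed = false;     // set when the transaction replay fails

        bool route_to_backend(Buffer&&);  // assumed routing function

        bool route_query(Buffer&& query)
        {
            if (m_replay_failed)
            {
                // The client can still send queries before the fake hangup
                // is processed; discard them instead of routing them.
                return true;
            }
            return route_to_backend(std::move(query));
        }
    };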
When fake hangup events were delivered via DCBs, the current DCB was not
updated. This caused error messages without a session ID, which makes
failure analysis harder.
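A sketch of the fix, assuming the log layer resolves the session ID through
a thread-local current DCB; the names are illustrative:

    struct DCB
    {
        void handle_hangup();                  // assumed event handler
    };

    thread_local DCB* current_dcb = nullptr;   // read by the logging code

    void deliver_fake_hangup(DCB* dcb)
    {
        DCB* old = current_dcb;
        current_dcb = dcb;     // log messages now carry the session ID
        dcb->handle_hangup();
        current_dcb = old;
    }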
If the code that may remove items from m_nodes_by_id (Clustrix nodes
keyed by id) succeeds, the vector of health check URLs must be updated
even if the code that may add items to m_nodes_by_id subsequently fails.
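A sketch of the update rule; m_health_urls, urls_of() and the two helper
functions are assumed names, and the element type of m_nodes_by_id is only
illustrative:

    #include <map>
    #include <string>
    #include <vector>

    std::map<int, std::string> m_nodes_by_id;  // Clustrix nodes keyed by id
    std::vector<std::string>   m_health_urls;  // health check URLs

    std::vector<std::string> urls_of(const std::map<int, std::string>&);
    bool remove_stale_nodes(std::map<int, std::string>&);  // may remove items
    bool add_new_nodes(std::map<int, std::string>&);       // may add items

    void refresh_nodes()
    {
        bool removed = remove_stale_nodes(m_nodes_by_id);
        bool added   = add_new_nodes(m_nodes_by_id);

        // A successful removal has already changed m_nodes_by_id, so the
        // URL vector must be rebuilt even when the add step fails.
        if (removed || added)
        {
            m_health_urls = urls_of(m_nodes_by_id);
        }
    }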
By injecting messages into the maxscale.log from the test, the reader of
the log can easily see the "synchronization" points with the test case.
Because this affects the test timing, it can only be used to check whether
non-timing-related functionality is correct.
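As an illustration, the test would bracket each phase with a marker;
inject_marker() and run_query_phase() are assumed helpers, with
inject_marker() appending a line to maxscale.log:

    inject_marker("TEST: starting query phase");   // visible in maxscale.log
    run_query_phase();
    inject_marker("TEST: query phase done");       // brackets the phase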
Systemd provides facilities for running commands before startup, which can
be used to prevent the problem that the fix for MXS-2578 caused: upon
upgrading from 2.3.8 to 2.3.9, the /var/lib/maxscale directory would be
removed if it was empty.
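As a sketch, an ExecStartPre directive can recreate the directory before the
daemon starts; the exact directives in the shipped maxscale.service may
differ:

    [Service]
    ExecStartPre=/usr/bin/install -d -o maxscale -g maxscale /var/lib/maxscale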
The test appears to fail when the throttling is unable to keep the QPS
high enough. To reduce the likelihood of this, lower the limit to 500 QPS.
In theory, the minimum delay of one millisecond in the delayed_call limits
the filter to a maximum of 1000 QPS, as each query waits for at least a
millisecond before being routed. This is yet to be proven, but it would
explain why the tests have a hard time approaching that level of QPS.
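To make the ceiling concrete: with a minimum wait of one millisecond per
query, a single serialized stream can route at most 1 query / 0.001 s =
1000 QPS, so the 500 QPS limit chosen above leaves a factor-of-two margin
below this theoretical maximum.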
The logic of the MDBCI 'install_product' command was changed: it now works
in the same way as product installation during the initial VM start (using
Chef). Everything was moved to a Chef recipe, and the 'setup_repo' command
is no longer needed.
The error was only generated for COM_STMT_EXECUTE commands when all PS
commands should trigger it. In addition, large packets would be sent two
errors when the trailing part of the packet arrived.
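A sketch of the broader check using the real binary protocol command bytes;
the function itself is illustrative:

    #include <cstdint>

    // Real MySQL/MariaDB binary protocol command bytes.
    constexpr uint8_t COM_STMT_PREPARE        = 0x16;
    constexpr uint8_t COM_STMT_EXECUTE        = 0x17;
    constexpr uint8_t COM_STMT_SEND_LONG_DATA = 0x18;
    constexpr uint8_t COM_STMT_CLOSE          = 0x19;
    constexpr uint8_t COM_STMT_RESET          = 0x1a;
    constexpr uint8_t COM_STMT_FETCH          = 0x1c;

    // Every PS command should trigger the error, not only COM_STMT_EXECUTE.
    bool is_ps_command(uint8_t cmd)
    {
        switch (cmd)
        {
        case COM_STMT_PREPARE:
        case COM_STMT_EXECUTE:
        case COM_STMT_SEND_LONG_DATA:
        case COM_STMT_CLOSE:
        case COM_STMT_RESET:
        case COM_STMT_FETCH:
            return true;
        default:
            return false;
        }
    }

For a large packet split over multiple buffers, the error should then be
generated only once, when the first part is seen, so that the trailing end
does not produce a second one.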
Syncing the slaves should prevent replication lag from affecting the
test. The added logging will help determine which error caused the failure.
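One way to sync the slaves, sketched with MariaDB's real MASTER_GTID_WAIT()
function; Connection, query_one() and execute() are assumed test helpers:

    #include <string>
    #include <vector>

    struct Connection;                                       // assumed
    std::string query_one(Connection&, const std::string&);  // assumed helper
    void execute(Connection&, const std::string&);           // assumed helper

    void sync_slaves(Connection& master, std::vector<Connection*>& slaves)
    {
        std::string gtid = query_one(master, "SELECT @@gtid_current_pos");

        for (auto* slave : slaves)
        {
            // Blocks until the slave has applied the GTID or 300s pass.
            execute(*slave, "SELECT MASTER_GTID_WAIT('" + gtid + "', 300)");
        }
    }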
A very simple test: it creates 10 threads that concurrently issue simple
INSERTs and SELECTs. The purpose is to test the basic router-to-router
mechanism of smartrouter.
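A minimal sketch of such a test; execute_query() stands in for whatever
query helper the test framework provides:

    #include <string>
    #include <thread>
    #include <vector>

    // Stand-in for the test framework's query helper.
    void execute_query(int /*thread_id*/, const std::string& /*sql*/) {}

    int main()
    {
        std::vector<std::thread> threads;

        // 10 threads, each interleaving INSERTs and SELECTs.
        for (int i = 0; i < 10; ++i)
        {
            threads.emplace_back([i]
            {
                for (int j = 0; j < 100; ++j)
                {
                    execute_query(i, "INSERT INTO test.t1 VALUES ("
                                     + std::to_string(i) + ")");
                    execute_query(i, "SELECT * FROM test.t1");
                }
            });
        }

        for (auto& t : threads)
        {
            t.join();
        }
    }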
If an error is generated while a COM_CHANGE_USER is in progress, it would
always use sequence number 1. To properly handle this case and send
the correct sequence number, the COM_CHANGE_USER progress needs to be
tracked at the session level.
The information needs to be shared between the backend and client
protocols as the final OK to the COM_CHANGE_USER, with the sequence number
3, is the one that the backend server returns. Only after this response
has been received and routed to the client can the COM_CHANGE_USER
processing stop.
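A sketch of the session-level state with illustrative names; the key point
is that the sequence number comes from the tracked progress rather than a
hard-coded 1:

    #include <cstdint>

    // Shared between the client and backend protocols via the session.
    struct ChangeUserState
    {
        bool    in_progress = false;   // a COM_CHANGE_USER is being processed
        uint8_t next_seq    = 1;       // sequence number for the next packet
    };

    // An error generated mid-COM_CHANGE_USER must carry the tracked
    // sequence number instead of the hard-coded 1.
    uint8_t error_seq(const ChangeUserState& s)
    {
        return s.in_progress ? s.next_seq : 1;
    }

    // The final OK from the backend carries sequence number 3; only once it
    // has been routed to the client does the processing stop.
    void on_backend_reply(ChangeUserState& s, uint8_t seq, bool is_ok)
    {
        if (s.in_progress && is_ok && seq == 3)
        {
            s.in_progress = false;
            s.next_seq    = 1;
        }
    }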