The test appears to fail when the throttling is unable to keep the QPS
high enough for the test to pass. To reduce the likelihood of this, lower
the limit to 500 QPS.
In theory, the minimum delay of one millisecond in the delayed_call limits
the filter to a maximum QPS of 1000 as each query would wait for at least
a millisecond before being routed. This is yet to be proven but it would
explain why the tests are having a hard time approaching that level of
QPS.
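As a rough illustration of the arithmetic (not the actual filter code), a loop that sleeps one millisecond per simulated query cannot exceed roughly 1000 QPS, and in practice falls short of it:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using namespace std::chrono;
    int queries = 0;
    auto start = steady_clock::now();

    // Simulate the delayed_call behaviour: every query waits at least 1ms.
    while (steady_clock::now() - start < seconds(1))
    {
        std::this_thread::sleep_for(milliseconds(1));
        ++queries;
    }

    // With a 1ms minimum delay per query this prints at most ~1000,
    // usually noticeably less due to timer and scheduling overhead.
    std::cout << queries << " queries in one second\n";
}
```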
The logic of the MDBCI 'install_product' command was changed: it now works in the same way as product installation during the initial VM start (using Chef). Everything was moved to a Chef recipe and there is no need for the 'setup_repo' command.
The error was only generated for COM_STMT_EXECUTE commands even though all PS commands should trigger it. In addition, large packets would be sent two errors when their trailing end arrived.
Syncing the slaves should prevent replication lag from affecting the
test. The added logging will help determine which error caused the failure.
A very simple test: it creates 10 threads that concurrently make
simple INSERTs and SELECTs. The purpose is to test that the basic
router-to-router mechanism of smartrouter works.
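A minimal sketch of what such a test could look like, using the MySQL C API directly and std::thread; the connection parameters, port and table name are made up for illustration and are not the actual test framework values:

```cpp
#include <mysql.h>
#include <string>
#include <thread>
#include <vector>

// One worker: connect through MaxScale and alternate INSERTs and SELECTs.
static void worker(int id)
{
    MYSQL* conn = mysql_init(nullptr);
    if (!mysql_real_connect(conn, "127.0.0.1", "maxuser", "maxpwd",
                            "test", 4006, nullptr, 0))
    {
        mysql_close(conn);
        return;
    }

    for (int i = 0; i < 100; i++)
    {
        std::string insert = "INSERT INTO t1 VALUES (" + std::to_string(id)
            + ", " + std::to_string(i) + ")";
        mysql_query(conn, insert.c_str());

        if (mysql_query(conn, "SELECT * FROM t1") == 0)
        {
            mysql_free_result(mysql_store_result(conn));
        }
    }

    mysql_close(conn);
}

int main()
{
    std::vector<std::thread> threads;

    for (int i = 0; i < 10; i++)
    {
        threads.emplace_back(worker, i);
    }

    for (auto& t : threads)
    {
        t.join();
    }
}
```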
If an error is generated while a COM_CHANGE_USER is in progress, it would
always use sequence number 1. To handle this case properly and send
the correct sequence number, the COM_CHANGE_USER progress needs to be
tracked at the session level.
The information needs to be shared between the backend and client
protocols as the final OK to the COM_CHANGE_USER, with the sequence number
3, is the one that the backend server returns. Only after this response
has been received and routed to the client can the COM_CHANGE_USER
processing stop.
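A simplified sketch of the kind of session-level bookkeeping this implies; the type and function names are illustrative, not the actual MaxScale protocol code:

```cpp
#include <cstdint>

// Illustrative session state shared by the client and backend protocol
// handlers.
struct SessionState
{
    bool changing_user = false;    // COM_CHANGE_USER in progress
    uint8_t next_seq = 1;          // sequence number for generated errors
};

// Called when the client sends COM_CHANGE_USER.
void on_change_user_start(SessionState& s)
{
    s.changing_user = true;
    // The client's COM_CHANGE_USER packet uses seqno 0, so any error
    // generated on its behalf must use seqno 1 or higher.
    s.next_seq = 1;
}

// Called when the backend responds during the COM_CHANGE_USER exchange.
void on_backend_packet(SessionState& s, uint8_t seqno, bool is_final_ok)
{
    s.next_seq = seqno + 1;    // keep generated sequence numbers in sync

    if (is_final_ok)
    {
        // The final OK (seqno 3) has been routed to the client; only now
        // can the COM_CHANGE_USER processing stop.
        s.changing_user = false;
    }
}
```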
If a server fails mid-resultset, there's not a lot we can do to recover
the situation. A few cases could be handled (e.g. generate an ERR if the
resultset has proceeded to the row processing stage) but these fall
outside the scope of the original issue.
As is explained in MDEV-19893:
Client reads from socket, gets the packet from 1. with seqno=0, which
it does not expect, since seqno is supposed to be incremented. Client
complains, throws tantrums and exceptions.
To cater for clients that do not expect out-of-band messages
(i.e. server-initiated packets with seqno 0), all messages generated by
MaxScale should use at least sequence number 1.
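For reference, a MySQL protocol packet carries a 3-byte little-endian payload length and a 1-byte sequence number before the payload. A sketch of building a MaxScale-generated ERR packet with sequence number 1 (this is not the actual MaxScale helper, and the example error values in the comment are only illustrative):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Build a MySQL ERR packet: 3-byte payload length, 1-byte sequence number,
// then 0xFF, a 2-byte error code, '#' + 5-byte SQL state and the message.
std::vector<uint8_t> make_err_packet(uint8_t seqno, uint16_t code,
                                     const std::string& sqlstate,
                                     const std::string& msg)
{
    std::vector<uint8_t> payload;
    payload.push_back(0xFF);
    payload.push_back(code & 0xFF);
    payload.push_back(code >> 8);
    payload.push_back('#');
    payload.insert(payload.end(), sqlstate.begin(), sqlstate.end());
    payload.insert(payload.end(), msg.begin(), msg.end());

    std::vector<uint8_t> packet;
    uint32_t len = payload.size();
    packet.push_back(len & 0xFF);
    packet.push_back((len >> 8) & 0xFF);
    packet.push_back((len >> 16) & 0xFF);
    packet.push_back(seqno);    // never 0 for MaxScale-generated messages
    packet.insert(packet.end(), payload.begin(), payload.end());
    return packet;
}

// Example: auto err = make_err_packet(1, 1045, "28000", "Access denied");
```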
Deep-copying prevents subsequent modifications done by the caller from
affecting the data that may end up stored in the write queue of the
backend's DCB.
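A minimal illustration of the difference, with a plain byte vector standing in for the real buffer type and a deque standing in for the DCB's write queue:

```cpp
#include <cstdint>
#include <deque>
#include <vector>

using Buffer = std::vector<uint8_t>;

std::deque<Buffer> write_queue;    // stands in for the backend DCB's write queue

// Sharing the caller's data (e.g. keeping a pointer to it) would let later
// modifications change what eventually gets written. Deep-copying the buffer
// snapshots its contents at enqueue time.
void enqueue(const Buffer& data)
{
    write_queue.push_back(data);    // copies the bytes
}

int main()
{
    Buffer query = {0x01, 0x02, 0x03};
    enqueue(query);

    query[0] = 0xFF;    // the caller keeps modifying its copy...

    // ...but the queued buffer still holds {0x01, 0x02, 0x03}.
}
```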
In Ubuntu Bionic, libssl needs to be updated, which restarts system services;
apt asks the user to allow this restart. This causes the build script to hang and the build to fail due to a timeout.
Additional dpkg options and DEBIAN_FRONTEND=noninteractive solve the problem.
It was possible for a one-second outage that caused immediate rejection
of network connections to make all of the query retry attempts fail
within a very short period of time. By preventing rapid reconnections,
query_retries is more effective as an error filtering mechanism.
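A rough sketch of the idea, with made-up function names; the point is simply that a minimum wait between reconnection attempts spreads the retries past a short outage instead of burning them all at once:

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Sketch only: 'reconnect' and 'query' stand in for the real connection
// handling. With no delay between attempts, all retries could fall inside
// a one-second outage; a minimum wait makes each retry meaningful.
bool run_with_retries(int query_retries,
                      const std::function<bool()>& reconnect,
                      const std::function<bool()>& query)
{
    using namespace std::chrono_literals;

    for (int attempt = 0; attempt <= query_retries; attempt++)
    {
        if (attempt > 0)
        {
            std::this_thread::sleep_for(1s);    // prevent rapid reconnection
            if (!reconnect())
            {
                continue;
            }
        }

        if (query())
        {
            return true;
        }
    }

    return false;
}
```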
In Ubuntu Bionic, libssl needs to be updated, which restarts system services;
apt asks the user to allow this restart. This causes the build script to hang and the build to fail due to a timeout.
Additional dpkg options and DEBIAN_FRONTEND=noninteractive solve the problem.
CentOS 6 uses a very old version of SQLite without support for URI filenames,
so the PAM authenticator must use a file-based database.
Commit cherry-picked to 2.4.0 from 2.3.
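For context, URI filenames were added in SQLite 3.7.7 and the version bundled with CentOS 6 predates them. The snippet below is only an illustration of the difference, not the actual authenticator code, and the paths are hypothetical:

```cpp
#include <sqlite3.h>
#include <cstdio>

int main()
{
    sqlite3* db = nullptr;

    // Requires URI filename support (SQLite >= 3.7.7); on CentOS 6 the
    // SQLITE_OPEN_URI flag does not even exist in the bundled headers.
#ifdef SQLITE_OPEN_URI
    sqlite3_open_v2("file:auth.db?cache=shared", &db,
                    SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_URI,
                    nullptr);
    sqlite3_close(db);
#endif

    // Portable fallback: a plain file-based database works on old SQLite too.
    if (sqlite3_open("/tmp/auth.db", &db) == SQLITE_OK)
    {
        printf("opened file-based database\n");
    }
    sqlite3_close(db);
}
```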
If a COM_STMT_EXECUTE has no metadata in it and it has more than one
parameter, it must be routed to the same backend as the previous
COM_STMT_EXECUTE with the same ID. This prevents MDEV-19811, which
is triggered by MaxScale routing the queries to different backends.
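A simplified sketch of the bookkeeping this requires in the router; the types and function are illustrative stand-ins, not the actual readwritesplit code:

```cpp
#include <cstdint>
#include <unordered_map>

struct Backend;    // stands in for the router's backend handle

// Map each prepared statement ID to the backend that last executed it.
std::unordered_map<uint32_t, Backend*> stmt_targets;

// Pick the target for a COM_STMT_EXECUTE.
Backend* route_stmt_execute(uint32_t stmt_id, bool has_metadata,
                            Backend* preferred_target)
{
    auto it = stmt_targets.find(stmt_id);

    // Without parameter metadata, only the backend that saw the previous
    // COM_STMT_EXECUTE for this ID knows the parameter types, so the
    // execute must go there (avoids MDEV-19811).
    if (!has_metadata && it != stmt_targets.end())
    {
        return it->second;
    }

    stmt_targets[stmt_id] = preferred_target;
    return preferred_target;
}
```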
The directory was installed as the root user but later on in the
installation process the owner would be changed to the maxscale user. This
caused some validation programs to fail, as they expect installed files to
retain their original ownership.