Basic tests can now be executed against a 15-machine master/slave backend. These tests carry the label BIG_REPL_BACKEND, and the default template has been modified to support the big backend. The tests are temporarily labeled UNSTABLE to prevent them from running nightly.
For the big tests, maxscale.cnf is generated automatically for any number of nodes.
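For illustration, a hypothetical excerpt of the kind of configuration the generator produces; the section names, addresses, and credentials below are placeholders, not the actual template output:

    [server1]
    type=server
    address=node_001
    port=3306

    [server2]
    type=server
    address=node_002
    port=3306

    # ... [server3] through [server15] are generated the same way ...

    [MySQL-Monitor]
    type=monitor
    module=mysqlmon
    servers=server1,server2
    user=maxskysql
    password=skysql

The point is that the server sections and the comma-separated server lists are expanded to match however many nodes the backend has.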
The masking_user test creates a database over a masked connection.
As 'CREATE DATABASE DB' is not fully parsed, the masking filter would
reject the statement and the test would fail.
To allow the test to pass, we turn off the strict requirement that
all statements must be fully parsed.
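For example, a masking filter section with the strict requirement disabled might look as follows (a sketch assuming the filter's require_fully_parsed parameter; the rules path is a placeholder):

    [Masking]
    type=filter
    module=masking
    rules=/path/to/masking_rules.json
    # Do not reject statements that the query classifier cannot fully parse
    require_fully_parsed=false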
MXS-2236 Add a dedicated long test and the possibility to run tests under Valgrind
The long test executes INSERT queries, transactions, and prepared statements in parallel to create an unusual load on MaxScale in order to catch crashes and leaks.
The test is not included in the ctest scope and should be executed manually. For BuildBot (and also for run_test.sh), 'test_set' should be set to 'NAME# ./long_test'.
The test duration is defined by the 'long_test_time' variable (in seconds).
The possibility to run MaxScale under Valgrind has also been added. To run MaxScale under Valgrind, the 'use_valgrind=yes' variable has to be defined.
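A hypothetical invocation, assuming run_test.sh picks these variables up from the environment (the exact mechanism may differ):

    export test_set="NAME# ./long_test"   # run only the long test
    export long_test_time=3600            # run the load for one hour
    export use_valgrind=yes               # start MaxScale under Valgrind
    ./run_test.sh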
By performing reads repeatedly instead of once per second, MXS-2311 is
more likely to be reproduced. The process is still not deterministic,
but in theory it can reproduce the problem.
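A minimal sketch of the tightened loop, written against the plain MySQL C API instead of the actual test framework helpers; the query, iteration count, and connection handling are assumptions:

    #include <mysql.h>

    // Issue reads back to back instead of sleeping between them, widening
    // the window in which the MXS-2311 race can occur.
    void hammer_reads(MYSQL* conn, int iterations)
    {
        for (int i = 0; i < iterations; i++)
        {
            if (mysql_query(conn, "SELECT 1") == 0)
            {
                MYSQL_RES* res = mysql_store_result(conn);
                mysql_free_result(res);
            }
        }
    }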
Added a test case that does a set of sanity checks on the monitor. As the
monitor is very simple, there is not much to test without access to the
actual instances (e.g., ExeMgr failures would need to be tested against
real instances).
Currently the test always passes as ColumnStore clusters aren't
implemented for the test framework.
Removed the tests obsoleted by the sanity_check test case. This shortens
the test time by about a minute and a half and removes about 2500 lines of code.
This should help prevent network disconnections and make the test more
stable. If the connection is lost, the automatic failover is disabled and
the test will fail.
The test doesn't work when ASAN is used, as ASAN increases the memory use
of the process. With the addition of more caches in 2.3, the test is also
more likely to fail. Since the test is of little use with ASAN, it is
better to remove it.
Now the test program will
1) Write to each node in a Galera cluster and verify that the data
ends up in the slave.
2) At the end of 1) execute STOP SLAVE and START SLAVE to check that
   replication can be stopped and started again (won't work unless
   each node has the same server_id and value for @@log_bin_basename);
   a minimal sketch of this check follows the list.
3) Block the node BLR is replicating from and expect it to connect
to the next configured master and that replication continues to
work. Do that for all nodes.
4) Stop MaxScale and restart it and expect 3) to work. That checks
that BLR saves all necessary information in master.ini and is
capable of reading it.
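The sketch for step 2, assuming a connected MYSQL* handle to the BLR instance; it looks Slave_IO_Running up by name because the column order of SHOW SLAVE STATUS varies between server versions:

    #include <mysql.h>
    #include <cstring>

    bool restart_replication(MYSQL* blr)
    {
        if (mysql_query(blr, "STOP SLAVE") || mysql_query(blr, "START SLAVE")
            || mysql_query(blr, "SHOW SLAVE STATUS"))
        {
            return false;
        }

        bool running = false;
        MYSQL_RES* res = mysql_store_result(blr);

        if (res)
        {
            unsigned int ncols = mysql_num_fields(res);
            MYSQL_FIELD* fields = mysql_fetch_fields(res);
            MYSQL_ROW row = mysql_fetch_row(res);

            for (unsigned int i = 0; row && i < ncols; i++)
            {
                if (strcmp(fields[i].name, "Slave_IO_Running") == 0)
                {
                    // Right after START SLAVE the IO thread may still be
                    // "Connecting"; a real test would poll for "Yes".
                    running = row[i] && strcmp(row[i], "Yes") == 0;
                }
            }

            mysql_free_result(res);
        }

        return running;
    }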
It should be possible to START SLAVE and STOP SLAVE irrespective
of which Galera node updates are made to.
That will be the case if @@log_slave_updates is on and each node
in the Galera cluster has the same server_id. Otherwise it will
fail with the current incarnation of BLR.
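In my.cnf terms, that would mean each Galera node carrying something like the following (a hedged fragment; the binlog basename is a placeholder):

    [mysqld]
    server_id=1                # identical on every Galera node
    log_slave_updates=1
    log_bin=mariadb-bin        # same @@log_bin_basename on every node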