Now that the connector resides in the same repository, it can be built as
a library for the tests. Installing the development package would be one
option, but it would unnecessarily complicate the build process.
As the function documentation states, the expected value must be read
again after a call to atomic_cas_ptr. This is because, if the values are
not equal, the __atomic builtin version stores the current value into
the expected value.
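A minimal sketch of that relationship, assuming atomic_cas_ptr is a thin
wrapper over the builtin (the wrapper itself is illustrative; the
builtin's behavior is as documented by GCC):

    #include <stdbool.h>

    /* On failure, __atomic_compare_exchange_n writes the current value
     * of *variable into *expected, which is why the caller must re-read
     * the expected value before retrying. */
    static bool atomic_cas_ptr(void** variable, void** expected, void* new_value)
    {
        return __atomic_compare_exchange_n(variable, expected, new_value,
                                           false, __ATOMIC_SEQ_CST,
                                           __ATOMIC_SEQ_CST);
    }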
The new value given to the atomic_cas_ptr function was the address of the
new value, not the new value itself. This behavior of atomic_cas_ptr is
what caused the test to pass on systems that implement the __atomic
builtins. On older systems that do not implement them, the expected value
was never modified, which caused the test to hang.
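A hedged sketch of the corrected call site, reusing the atomic_cas_ptr
sketch above; the shared pointer and function name are illustrative
stand-ins for the test code:

    static void* shared_ptr;

    void update_shared(void* new_value)
    {
        void* expected;

        do
        {
            /* Re-read the expected value on every iteration; the
             * portable fallback does not update it on a failed CAS. */
            expected = shared_ptr;
        }
        /* Pass the new value itself, not its address. */
        while (!atomic_cas_ptr(&shared_ptr, &expected, new_value));
    }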
The monitor will now also create the database if it is missing. Since it
already creates the table, creating the database as well is only a small
addition.
Cleaned up some of the related checking code and combined it into a
simple utility function.
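A sketch of what the combined utility might look like, assuming the
monitor talks to the server through the MySQL C API; the function name
and the schema are illustrative:

    #include <stdbool.h>
    #include <mysql.h>

    static bool ensure_schema_exists(MYSQL* conn)
    {
        /* IF NOT EXISTS makes both statements no-ops when the objects
         * are already present. */
        return mysql_query(conn,
                           "CREATE DATABASE IF NOT EXISTS maxscale_schema") == 0
            && mysql_query(conn,
                           "CREATE TABLE IF NOT EXISTS "
                           "maxscale_schema.replication_heartbeat "
                           "(maxscale_id INT, master_server_id INT, "
                           "master_timestamp INT)") == 0;
    }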
As you can create regular expressions that match strings of a fixed
length, e.g. "....$", it makes perfect sense to replace the match with
'value' if the length of the string matches exactly.
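For example, "....$" always matches exactly four characters, so a
replacement value of the same length can be copied over the match in
place. A minimal sketch of the idea (names are illustrative):

    #include <string.h>

    /* Overwrite the matched region with 'value' only when the lengths
     * match exactly, so the buffer does not need to be resized. */
    static void replace_if_exact_fit(char* match, size_t match_len,
                                     const char* value, size_t value_len)
    {
        if (match_len == value_len)
        {
            memcpy(match, value, value_len);
        }
    }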
The sprintf calls failed to build due to a warning about a possible
buffer overflow. Curiously enough, the same warnings also appear on
Fedora 26, but only when the calls are changed to snprintf.
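An illustrative before/after of the change; the buffer size and format
string are hypothetical:

    #include <stdio.h>

    static void format_name(char* buffer, size_t size, int id)
    {
        /* sprintf(buffer, "server-%d", id); -- unbounded write, may
         * trigger a buffer overflow warning */
        snprintf(buffer, size, "server-%d", id); /* bounded write */
    }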
The pre-existing users in the MaxScale test environment have
the SUPER privilege, so they are not affected by the database
being set in read-only mode.
Consequently, the client threads must use a custom user without
the SUPER privilege.
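A sketch of the kind of user the test could create; the user name,
password and grants are illustrative:

    #include <stdbool.h>
    #include <mysql.h>

    static bool create_unprivileged_user(MYSQL* conn)
    {
        /* No SUPER privilege, so writes from this user are rejected
         * once the database is set to read-only. */
        return mysql_query(conn,
                           "CREATE USER 'test_user'@'%' "
                           "IDENTIFIED BY 'test_password'") == 0
            && mysql_query(conn,
                           "GRANT SELECT, INSERT, UPDATE, DELETE "
                           "ON test.* TO 'test_user'@'%'") == 0;
    }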
The config parameter 'newline_replacement' (defaults to a single space " ")
now defines what to write to the log file when the SQL query contains a
newline sequence (\n, \r or \r\n). If 'newline_replacement' is the empty
string "", no replacement is done and the newlines are printed to the file
as-is.
Also adds the config parameter 'separator', which defines the string
printed between log entry elements. The default value is ",".
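A hypothetical configuration section using both parameters; the section
name and the assumption that they belong to a query logging filter such
as qlafilter are not confirmed by the text above:

    [QueryLog]
    type=filter
    module=qlafilter
    # " " keeps each logged query on a single line;
    # "" would preserve the original newlines.
    newline_replacement=" "
    separator=","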
- Start 4 threads where each thread sits in a loop and performs
  20% updates and 80% selects. Each thread has a table of its own.
- The main thread executes the following in a loop (see the sketch
  below):
  - Perform a switchover from the current master to the next node,
    i.e. (current node index + 1) modulo the number of nodes.
  - Keep doing that for 1.5 minutes.
The expectation is that the switchover will succeed, that is, after the
operation there will be a new master.
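A condensed sketch of the test flow under stated assumptions: the query
helpers and the switchover call are hypothetical stand-ins for the real
test framework, and the timings follow the description above:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    #define N_THREADS 4
    #define N_NODES   4
    #define DURATION  90 /* seconds, i.e. 1.5 minutes */

    /* Hypothetical helpers standing in for the test framework. */
    extern void run_update(int thread_id);
    extern void run_select(int thread_id);
    extern bool switchover_to(int node);

    static volatile bool running = true;

    static void* client_thread(void* arg)
    {
        int id = (int)(intptr_t)arg;
        unsigned int seed = (unsigned int)id;

        while (running)
        {
            /* Roughly 20% updates and 80% selects, each thread
             * operating on its own table. */
            if (rand_r(&seed) % 5 == 0)
            {
                run_update(id);
            }
            else
            {
                run_select(id);
            }
        }

        return NULL;
    }

    int main(void)
    {
        pthread_t threads[N_THREADS];

        for (int i = 0; i < N_THREADS; i++)
        {
            pthread_create(&threads[i], NULL, client_thread,
                           (void*)(intptr_t)i);
        }

        time_t end = time(NULL) + DURATION;
        int master = 0;

        while (time(NULL) < end)
        {
            /* The next master is simply the next node, wrapping
             * around at the last one. */
            int next = (master + 1) % N_NODES;

            if (switchover_to(next))
            {
                master = next;
            }
        }

        running = false;

        for (int i = 0; i < N_THREADS; i++)
        {
            pthread_join(threads[i], NULL);
        }

        return 0;
    }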