The configuration processing requires that all parameters for monitors
exist before they are used. This is wrong if we are aiming for a modular
system but is a necessary evil for the time being.
When a Galera cluster loses a member, it will recalculate the
wsrep_local_index values. As the index is zero-based, we can be certain
that in a valid cluster there will always be a node with an index of 0.
If the galeramon can't find a node with an index of 0, it means that
either the cluster hasn't stabilized and a recalculation of the index
is pending, or that there's no connectivity between MaxScale and the
node with the index value 0.
With this change and default settings, active-active MaxScale setups with
Galera clusters should always choose the same node as the master.
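A minimal sketch of the selection rule, with hypothetical types rather
than the actual galeramon code:

    #include <vector>

    // Hypothetical types, not the actual galeramon code.
    struct GaleraNode
    {
        int wsrep_local_index; // value of the wsrep_local_index variable
    };

    const GaleraNode* choose_master(const std::vector<GaleraNode>& nodes)
    {
        for (const auto& node : nodes)
        {
            if (node.wsrep_local_index == 0)
            {
                return &node; // a stable cluster always has an index 0
            }
        }

        // No index 0: the cluster is still recalculating the indexes,
        // or the node holding index 0 is unreachable. Choose no master
        // until the situation resolves.
        return nullptr;
    }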
The source and destination strings for snprintf must not overlap. A
simple comparison of the source and destination addresses handles the
case where they are identical. Behavior is still undefined if the
pointers differ but the memory overlaps.
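A minimal sketch of the guard, assuming the only overlapping case that
needs handling is the pointers being identical:

    #include <stdio.h>

    /* Copy src into dest unless they are the same pointer, in which
     * case there is nothing to do. If the pointers differ but the
     * buffers still overlap, the behavior remains undefined. */
    void copy_string(char* dest, size_t size, const char* src)
    {
        if (dest != src)
        {
            snprintf(dest, size, "%s", src);
        }
    }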
Now that the cache key can be generated using the StorageFactory
there is no need for calling back into the derived class from Tester
to get hold of one. Instead the preparatory work is performed by
the abstract base classes, after which control is handed back to the
concrete derived class, which decides what to actually do.
A Tester class that simplifies the creation of various cache tests.
It provides the basic mechanisms for reading statements from MySQL/
MariaDB test files, for converting statements into equivalent
cache items (key + statement) and for running various tasks in
separate threads for a specific amount of time.
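As a rough illustration of the division of labor; all names below are
hypothetical, not the actual test classes:

    #include <chrono>
    #include <string>
    #include <vector>

    // Hypothetical sketch of the division of labor; these are not the
    // actual test classes.
    struct CacheItem
    {
        std::string key;       // the generated cache key
        std::string statement; // the statement it was generated from
    };

    class Tester
    {
    public:
        virtual ~Tester() {}

        // Provided by the base classes: read statements from a .test
        // file, convert them to cache items, then hand control over
        // to the concrete test.
        bool prepare_and_run(const std::string& test_file,
                             int n_threads,
                             std::chrono::seconds duration);

    private:
        // The concrete derived class decides what to actually do.
        virtual int execute(const std::vector<CacheItem>& items) = 0;
    };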
RocksDB returns success when deleting a non-existing key. To deal
with that, the book-keeping of LRUStorage is used and the real
underlying storage is touched only if LRUStorage thinks the particular
key exists.
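A sketch of the idea with simplified, hypothetical types standing in
for LRUStorage and RocksDB:

    #include <set>
    #include <string>

    // Hypothetical types. Only forward a delete to the real storage
    // when our own book-keeping says the key exists, because RocksDB
    // reports success even for keys that were never stored.
    class LruBooks
    {
    public:
        bool del(const std::string& key)
        {
            if (m_keys.erase(key) == 0)
            {
                return false;        // not in the books: "not found"
            }

            return real_delete(key); // now the result can be trusted
        }

    private:
        bool real_delete(const std::string&)
        {
            return true;             // stand-in for the RocksDB delete
        }

        std::set<std::string> m_keys; // the LRU book-keeping
    };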
If no message is logged, it will be very hard to figure out where some
configurations are coming from. For this reason, it's good to log a
message whenever a persistent configuration change is taken into use.
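For instance, something along these lines (a sketch; `section` and
`fname` are hypothetical variables and the actual message wording may
differ):

    /* Sketch: MXS_NOTICE is MaxScale's notice-level logging macro;
     * section and fname are hypothetical variables. */
    MXS_NOTICE("Loaded persisted configuration change for '%s' from: %s",
               section, fname);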
Readwritesplit should route all queries from cloned sessions to the
master. This allows batch statements to be safely routed.
Native readwritesplit sessions only support batched writes as batched
reads aren't very common. Once readwritesplit supports batched reads, the
special handling for cloned DCBs can be removed.
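Conceptually the rule is a one-line special case before the normal
classification; a self-contained sketch, not the actual readwritesplit
code:

    // Conceptual sketch: cloned sessions bypass the usual read/write
    // classification and always target the master.
    struct Session
    {
        bool is_cloned;
    };

    enum class Target { MASTER, SLAVE };

    Target route_target(const Session& session, bool is_read)
    {
        if (session.is_cloned)
        {
            return Target::MASTER; // keeps batched statements safe
        }

        return is_read ? Target::SLAVE : Target::MASTER;
    }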
The memory barrier is not needed here. Declaring poll_msg as volatile
should help with preventing some unwanted optimizations. Most likely
even that is unnecessary.
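A simplified sketch of the mechanism volatile provides here:

    /* Simplified sketch: without volatile the compiler may hoist the
     * load of poll_msg out of the loop and spin on a stale value;
     * volatile forces a fresh read on every iteration. */
    static volatile int poll_msg = 0;

    static void wait_for_message(void)
    {
        while (poll_msg == 0)
        {
            /* busy-wait */
        }
    }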
This came up when testing maxadmin "show servers", which is
surprisingly slow compared to the other commands. This change does
not help because the slowness is caused by the polling loop sleeping.
In a busier environment the command would probably complete faster.
This should be looked at later.
The test now also rudimentarily exercises the LRU mechanism, which at
the same time makes the name a misnomer. These tests will be split
into separate programs to allow them to be run individually.
With the addition of filter capabilities, the tee filter should work with
all sorts of routers that require at most the RCAP_TYPE_CONTIGUOUS_INPUT
capability.
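In code terms this boils down to the filter advertising the weaker
requirement; a minimal sketch (the exact getCapabilities signature
depends on the filter API version in use):

    #include <stdint.h>

    /* RCAP_TYPE_CONTIGUOUS_INPUT comes from the MaxScale headers; the
     * surrounding signature is a simplified sketch. */
    static uint64_t getCapabilities(void)
    {
        return RCAP_TYPE_CONTIGUOUS_INPUT;
    }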
Due to a recent discovery of the server's capability to process multiple
requests, the filter can safely send data from one service to another
without waiting for the earlier replies.
This also fixes a minor problem with the cloning of DCBs where the backend
DCBs could end up in the wrong thread's pool.
The detailed output of the new help command was a bit too densely packed
for some commands.
Added missing values for the `alter server` command help output.
The document wraps paragraphs at 80 characters and uses triple backtick
blocks for code sections instead of indentation.
Also fixed a few typos and reworded some parts.
MXS-1060. In MaxAdmin, running "list services" will now list the
backends of each service. When running "show services", the backend
names are now printed (previously just the addresses and protocols).
Binlog server option `encryption_key_file=` can now use the same key
file the MariaDB 10.1 server might have in my.cnf:
`file_key_management_filename=`
Note: the file content must be in clear text, i.e. the keys themselves
must not be encrypted.
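An illustrative sketch (paths are made up); the key file follows the
file_key_management plugin's clear-text format of
`<key id>;<hex-encoded key>`:

    # my.cnf on the MariaDB 10.1 server (path is illustrative)
    [mysqld]
    file_key_management_filename = /etc/mysql/encryption/keyfile

    # maxscale.cnf, binlog service section
    [Binlog_Service]
    # ... other service parameters ...
    encryption_key_file=/etc/mysql/encryption/keyfile

    # /etc/mysql/encryption/keyfile: one clear-text key per line
    1;a1b2c3d4e5f60718293a4b5c6d7e8f90a1b2c3d4e5f60718293a4b5c6d7e8f90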
In order to test the LRU mechanism properly, the head and tail must be
accessible from the outside. The same goes for the size of, and the
items in, the cache: to verify that the guarantees are upheld, those
values must also be accessible externally.
It would seem that the likelihood of different threads accessing
the same items at the same time is greater if each thread continuously
loops across all items from beginning to end. That will also ensure
that the head and tail are surely accessed. In addition, some function
names have been disambiguated.
If the underlying storage does not support max_count and/or max_size
and accordingly is decorated with LRUStorage, then the real storage is
created as single-threaded. Since LRUStorage does the locking, there is
no point in locking in the real storage as well.
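The decision could be sketched like this; Storage, LRUStorage and the
helper names below are stand-ins, not the real classes:

    // Stand-in types, not the real classes.
    struct Storage
    {
        virtual ~Storage() {}
    };

    struct LRUStorage : Storage
    {
        explicit LRUStorage(Storage* pReal) : m_pReal(pReal) {}
        Storage* m_pReal; // LRUStorage itself does the locking
    };

    enum ThreadModel { SINGLE_THREADED, MULTI_THREADED };

    Storage* create_real_storage(ThreadModel)
    {
        return new Storage; // stand-in for the actual creation
    }

    Storage* create_storage(bool storage_supports_limits)
    {
        if (!storage_supports_limits)
        {
            // LRUStorage serializes access, so the real storage can
            // skip its own locking.
            return new LRUStorage(create_real_storage(SINGLE_THREADED));
        }

        return create_real_storage(MULTI_THREADED);
    }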
Here we create a number of threads and then randomly start getting,
putting and deleting values. The intent is to test that the locking
behaviour of the storage modules is correctly implemented.
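The shape of such a test in a simplified, self-contained form, with a
std::map plus a mutex standing in for a storage module:

    #include <map>
    #include <mutex>
    #include <random>
    #include <string>
    #include <thread>
    #include <vector>

    std::mutex lock;                    // the locking under test
    std::map<int, std::string> storage; // stand-in for a storage module

    void hammer(int rounds)
    {
        thread_local std::mt19937 rng(std::random_device{}());
        std::uniform_int_distribution<int> keys(0, 99);
        std::uniform_int_distribution<int> ops(0, 2);

        for (int i = 0; i < rounds; ++i)
        {
            int key = keys(rng);
            int op = ops(rng);

            std::lock_guard<std::mutex> guard(lock);

            switch (op)
            {
            case 0:
                storage[key] = "value";   // put
                break;

            case 1:
                (void)storage.count(key); // get
                break;

            default:
                storage.erase(key);       // delete
                break;
            }
        }
    }

    int main()
    {
        std::vector<std::thread> threads;

        for (int i = 0; i < 8; ++i)
        {
            threads.emplace_back(hammer, 10000);
        }

        for (auto& t : threads)
        {
            t.join();
        }

        return 0;
    }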
0 is now the default of all cache configuration parameters and in
all cases the meaning is the same; that is, no limit. Internally,
for the sake of consistency, all limits but ttl are now 64-bit.
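In configuration terms (a sketch; other required cache parameters are
omitted):

    # Sketch of a cache filter section. With these (default) values
    # nothing is limited and items never expire.
    [Cache]
    type=filter
    module=cache
    max_count=0
    max_size=0
    ttl=0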
Smoke test for detecting errors in key generation. The input file
is a number of .test-files from the server combined into a single
one. We simply check that a unique key is generated for each
statement.
Diagnostics can be required at both the instance and the session level,
so both cases need to be handled explicitly.
In addition, some reinterpret_casts were changed into static_casts.
A reinterpret_cast is needed for the instance, which is a void**, but
a static_cast is sufficient for the session, which is a void*.
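A simplified illustration of the distinction:

    struct Instance {};
    struct Session {};

    // Simplified illustration: static_cast covers void* -> T*, but
    // void** -> T** is not a standard conversion, so the instance
    // needs a reinterpret_cast.
    void diagnostics(void** ppInstance, void* pSession)
    {
        Instance** ppInst = reinterpret_cast<Instance**>(ppInstance);
        Session* pSes = static_cast<Session*>(pSession);

        (void)ppInst;
        (void)pSes;
    }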
The capability for reading MySQL/MariaDB .test-files has now been
factored out from the compare.cc test program. That way, the
functionality can be used from other test programs as well.
When the listeners were created twice, MaxScale would crash when the
users were loaded. This was caused by the fact that the DCB of a failed
listener is NULL.
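The fix amounts to a guard of this shape (field names are
hypothetical):

    /* Sketch of the guard with hypothetical field names: a failed
     * listener never got a DCB, so skip it when loading users instead
     * of dereferencing NULL. */
    if (listener->listener_dcb == NULL)
    {
        continue;
    }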
If a slave fails while a non-critical read is being executed, the read is
retried on a different server. This is controlled by the new
`retry_failed_reads` option.
Only SELECTs that are executed outside of a transaction and with
autocommit enabled are retried.
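A sketch of enabling the option in a readwritesplit service section:

    # Sketch: enabling the new option in a readwritesplit service.
    [RW-Split-Router]
    type=service
    router=readwritesplit
    retry_failed_reads=true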
It is possible that a session is in the dummy state (a transient state)
when a backend write occurs. The NULL check for the client protocol
should be extended to cover the client DCB itself.
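The extended check would look roughly like this (a sketch; field names
are illustrative):

    /* Sketch with illustrative field names: check the client DCB
     * itself, not just its protocol, before touching either. */
    if (dcb->session->client_dcb != NULL &&
        dcb->session->client_dcb->protocol != NULL)
    {
        /* safe to use the client protocol */
    }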