After an upgrade, it is often useful to see which version of MaxScale is
running. A new command shows the actual version of the running daemon
process instead of the version of the binary currently on disk.
routeQuery() in readconnroute now checks for maintenance mode. If the
server is in maintenance, the session is closed, since this router has no
backend swapping capability.
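A minimal sketch of the idea; the status bit, types and names are illustrative, not MaxScale's exact API:

```c++
#include <cstdint>

// Assumed status bit and simplified types; illustrative only.
constexpr uint32_t SERVER_MAINT = 0x1000;

struct Server  { uint32_t status; };
struct Session { Server* backend; bool open; };

// Sketch of the check added to routeQuery(): a readconnroute session is
// pinned to one backend, so a server in maintenance means the session
// must be closed rather than rerouted.
int route_query(Session* session)
{
    if (session->backend->status & SERVER_MAINT)
    {
        session->open = false;  // close the session; no backend swapping
        return 0;               // query was not routed
    }
    // ... otherwise forward the query to session->backend ...
    return 1;
}
```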
gwbuf_clone cloned only the first buffer of a chain of buffers,
which can never be the desired outcome, while gwbuf_clone_all
cloned all buffers.
Now gwbuf_clone behaves the way gwbuf_clone_all used to behave,
and gwbuf_clone_all has been removed.
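A sketch of the new semantics using a simplified buffer chain (the real GWBUF has more fields; this only illustrates the chain walk):

```c++
#include <cstddef>
#include <cstring>

// Simplified stand-in for a GWBUF chain.
struct Buf
{
    char*  data;
    size_t len;
    Buf*   next;
};

// What gwbuf_clone now does: walk and copy the whole chain. The old
// implementation returned after copying only the head buffer.
Buf* clone_chain(const Buf* src)
{
    Buf*  head = nullptr;
    Buf** tail = &head;

    for (; src != nullptr; src = src->next)
    {
        Buf* b = new Buf{new char[src->len], src->len, nullptr};
        std::memcpy(b->data, src->data, src->len);
        *tail = b;
        tail  = &b->next;
    }

    return head;
}
```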
The luafilter exposes two of the main functions provided by the query
classifier API: the type and operation classification.
The functions can be used by the Lua script with minimal overhead, as the
query currently being executed is stored only as a pointer. The functions
should only be called inside the `routeQuery` entry point of a Lua script.
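A rough sketch of the low-overhead arrangement, assuming the standard Lua C API; the classifier call and the name exposed to the script are hypothetical:

```c++
#include <lua.hpp>

// The filter keeps only a pointer to the query currently being routed;
// nothing is copied, which keeps the per-query overhead minimal.
static const char* current_query = nullptr;

// Stand-in for the real query classifier call; illustrative only.
static const char* classify_operation(const char*)
{
    return "QUERY_OP_SELECT";
}

// Lua-callable wrapper. The pointer is valid only while routeQuery() is
// executing, which is why the script must call this inside routeQuery.
static int lua_get_operation(lua_State* L)
{
    lua_pushstring(L, current_query ? classify_operation(current_query)
                                    : "QUERY_OP_UNDEFINED");
    return 1;  // one value returned to the script
}

// Expose the function under a hypothetical name when the script is loaded.
void expose_classifier(lua_State* L)
{
    lua_register(L, "get_operation", lua_get_operation);
}
```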
The functionality to disable setting master and slave status values on
Galera nodes was broken by the change to the monitoring algorithm.
The disable_master_role_setting option removed the master node, which
triggered the clearing of the replication status values from it. A
more direct, and arguably better, way is to check that this option is
disabled before processing the status values for the nodes.
The DECIMAL value type is now properly handled in Avrorouter. It is
converted into an Avro double value, whereas before it was ignored and
replaced with a zero integer.
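A deliberately simplified sketch of the conversion; sign handling and the packed on-disk DECIMAL format are omitted, and the names are illustrative:

```c++
#include <cmath>
#include <cstdint>

// A DECIMAL(p, s) split into its integer and fractional parts becomes
// an Avro double instead of being replaced with a zero integer.
double decimal_to_double(int64_t integer_part, int64_t fractional_part, int scale)
{
    return static_cast<double>(integer_part)
           + static_cast<double>(fractional_part) / std::pow(10.0, scale);
}

// For example, DECIMAL 123.45 (scale 2): decimal_to_double(123, 45, 2) == 123.45
```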
Previously, server status changes from MaxAdmin were applied immediately,
as long as the server lock could be acquired, but it could still take
several seconds until the next monitor pass was executed. Usually this was
fine, but in some situations the monitor should run immediately
after the change (MXS-740 and Galera). This patch changes the logic of
setting and clearing status bits to a delayed mode: changes are first applied
to a "status_pending" variable, and only once the monitor runs is the
setting applied. To reduce the delay, the monitor now has a flag
which is checked during sleep (between short 0.1s naps). If set, the
sleep is cut short.
If a server is not monitored, the status bits are set directly.
There is a small possibility of a race condition: if a monitor is stopped or
destroyed before the pending change is applied, the change is forgotten.
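In condensed sketch form; "status_pending" matches the description above, everything else is illustrative:

```c++
#include <atomic>
#include <chrono>
#include <cstdint>
#include <thread>

// Simplified types; a real monitor carries much more state.
struct MonitoredServer
{
    std::atomic<uint32_t> status{0};          // bits committed by the monitor
    std::atomic<uint32_t> status_pending{0};  // bits requested by MaxAdmin
};

struct Monitor
{
    std::atomic<bool> check_now{false};

    // Called from MaxAdmin: only the pending value changes here.
    void request_status(MonitoredServer& srv, uint32_t bits)
    {
        srv.status_pending.store(bits);
        check_now.store(true);  // ask the monitor to cut its sleep short
    }

    // The monitor sleeps in short 0.1s naps and checks the flag between
    // them, so a pending change is picked up well before the full interval.
    void sleep_interval(std::chrono::milliseconds interval)
    {
        auto const deadline = std::chrono::steady_clock::now() + interval;
        while (std::chrono::steady_clock::now() < deadline && !check_now.load())
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        check_now.store(false);
    }

    // At the start of a pass, the pending bits are committed.
    void monitor_pass(MonitoredServer& srv)
    {
        srv.status.store(srv.status_pending.load());
        // ... probe the server and update the status further ...
    }
};
```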
When a Galera cluster loses a member, it recalculates the
wsrep_local_index values. As the index is zero-based, we can be certain
that a valid cluster will always have a node with an index of 0.
If galeramon can't find a node with an index of 0, it means that
either the cluster hasn't stabilized and a recalculation of the index is
pending, or there is no connectivity between MaxScale and the node with
index 0.
With this change and default settings, active-active MaxScale setups with
Galera clusters should always choose the same node as the master.
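The selection rule in sketch form; the types and names are illustrative:

```c++
#include <vector>

// Simplified node representation; illustrative only.
struct GaleraNode
{
    int  wsrep_local_index;
    bool reachable;
};

// With default settings, every MaxScale picks the node whose zero-based
// wsrep_local_index is 0, so cooperating instances agree on the master.
const GaleraNode* choose_master(const std::vector<GaleraNode>& nodes)
{
    for (const GaleraNode& node : nodes)
    {
        if (node.reachable && node.wsrep_local_index == 0)
        {
            return &node;
        }
    }
    // No node with index 0: the cluster is still recalculating the index
    // or node 0 is unreachable; no master is chosen.
    return nullptr;
}
```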
Now that the cache key can be generated using the StorageFactory,
there is no need to call back into the derived class from Tester
to get hold of one. Instead, the preparatory work is performed by
the abstract base classes, after which control moves back to the
concrete derived class, which decides what to actually do.
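The shape of that control flow, sketched with illustrative types:

```c++
#include <string>
#include <vector>

struct CacheItem
{
    std::string key;        // generated via the StorageFactory
    std::string statement;
};

// The abstract base does the preparatory work, then hands the prepared
// items to the concrete class, which decides what to do with them.
class Tester
{
public:
    int run()
    {
        std::vector<CacheItem> items = prepare();  // base class work
        return execute(items);                     // derived class decides
    }

protected:
    virtual int execute(const std::vector<CacheItem>& items) = 0;

private:
    std::vector<CacheItem> prepare()
    {
        std::vector<CacheItem> items;
        // ... read statements from test files and generate a key for each ...
        return items;
    }
};
```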
Added a Tester class that simplifies the creation of various cache tests.
It provides the basic mechanisms for reading statements from MySQL/
MariaDB test files, for converting statements into equivalent
cache items (key + statement), and for running various tasks in
separate threads for a specific amount of time.
RocksDB returns success when deleting a non-existing key. To deal
with that, the book-keeping of LRUStorage is consulted first, and the
real underlying storage is touched only if LRUStorage thinks the
particular key exists.
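In sketch form; the types are illustrative, and the real LRUStorage also maintains a linked list, omitted here:

```c++
#include <string>
#include <unordered_map>

class LruLayer
{
public:
    bool del(const std::string& key)
    {
        auto it = items_.find(key);
        if (it == items_.end())
        {
            return false;    // LRUStorage says the key does not exist,
                             // so the real storage is never consulted
        }
        items_.erase(it);    // update the book-keeping
        real_delete(key);    // only now delete from the underlying store
        return true;
    }

private:
    // Stand-in for rocksdb::DB::Delete(), which returns OK even when the
    // key is missing and therefore cannot act as an existence check.
    void real_delete(const std::string& /*key*/) {}

    std::unordered_map<std::string, std::string> items_;
};
```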
Readwritesplit should route all queries from cloned sessions to the
master. This allows batch statements to be safely routed.
Native readwritesplit sessions only support batched writes as batched
reads aren't very common. Once readwritesplit supports batched reads, the
special handling for cloned DCBs can be removed.
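The routing rule in sketch form; the names are illustrative:

```c++
enum class Target { MASTER, SLAVE };

// A cloned session bypasses the usual read/write split so that batch
// statements stay on a single server.
Target pick_target(bool session_is_cloned, bool query_is_read)
{
    if (session_is_cloned)
    {
        return Target::MASTER;
    }
    return query_is_read ? Target::SLAVE : Target::MASTER;
}
```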
The test now also rudimentarily exercises the LRU mechanism, which at the
same time makes the name a misnomer. These tests will be split into
separate programs to allow them to be run individually.
With the addition of filter capabilities, the tee filter should work with
all sorts of routers that require at most the RCAP_TYPE_CONTIGUOUS_INPUT
capability.
Since it was recently discovered that the server can process multiple
requests at a time, the filter can safely send data from one service to
another without waiting for the earlier replies.
This also fixes a minor problem with the cloning of DCBs where the backend
DCBs could end up in the wrong thread's pool.
The detailed output of the new help command was a bit too densely packed
for some commands.
Added missing values for the `alter server` command help output.
The binlog server option `encryption_key_file` can now use the same key
file that the MariaDB 10.1 server may have in its my.cnf:
`file_key_management_filename`.
Note: the file content must be in clear text; key encryption is not supported.
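For example, with a hypothetical key file path (the exact section placement is illustrative), both sides can point at the same file:

```ini
# my.cnf on the MariaDB 10.1 server
[mariadb]
file_key_management_filename=/etc/mysql/encryption/keyfile

# maxscale.cnf, binlog router service
[Binlog-Service]
encryption_key_file=/etc/mysql/encryption/keyfile
```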
In order to test the LRU mechanism properly, you need to be able to
access the head and tail from the outside. The same applies to the size
of, and the number of items in, the cache. To test that the guarantees
are upheld, those values must be accessible from the outside.
It would seem that the likelihood of different threads accessing
the same items at the same time is greater if each thread continuously
loops across all items from beginning to end. That also ensures
that the head and tail are surely accessed. In addition, some function
names have been disambiguated.
If the underlying storage does not support max_count and/or max_size
and is accordingly decorated with LRUStorage, then the real storage is
created as single-threaded. Since LRUStorage does the locking, there is
no use in doing locking in the real storage as well.
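A sketch of the decision; the factory functions and types are illustrative stubs:

```c++
enum class ThreadModel { SINGLE_THREAD, MULTI_THREAD };

struct Storage { /* get/put/del interface elided */ };

// Stand-ins for the real factory functions; illustrative only.
Storage* create_raw_storage(ThreadModel) { return new Storage; }
Storage* create_lru_storage(Storage* raw) { return raw; /* wrap in LRUStorage */ }

Storage* create_storage(bool raw_supports_limits, bool limits_requested)
{
    bool decorate = limits_requested && !raw_supports_limits;

    // If LRUStorage will wrap the raw storage, its locking makes locking
    // inside the raw storage redundant, so create it single-threaded.
    Storage* raw = create_raw_storage(decorate ? ThreadModel::SINGLE_THREAD
                                               : ThreadModel::MULTI_THREAD);

    return decorate ? create_lru_storage(raw) : raw;
}
```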
Here we create a number of threads and then randomly start getting,
putting and deleting values. The intent is to test that the locking
behaviour of the storage modules is correctly implemented.
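A condensed sketch of such a test; the storage stand-in is illustrative, while the real test exercises the cache storage modules:

```c++
#include <mutex>
#include <random>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

// Minimal storage stand-in with internal locking.
class Storage
{
public:
    void put(const std::string& k, const std::string& v)
    {
        std::lock_guard<std::mutex> guard(lock_);
        map_[k] = v;
    }
    bool get(const std::string& k, std::string* v)
    {
        std::lock_guard<std::mutex> guard(lock_);
        auto it = map_.find(k);
        if (it == map_.end()) return false;
        *v = it->second;
        return true;
    }
    bool del(const std::string& k)
    {
        std::lock_guard<std::mutex> guard(lock_);
        return map_.erase(k) != 0;
    }

private:
    std::mutex lock_;
    std::unordered_map<std::string, std::string> map_;
};

// Each thread performs random get/put/del operations on a shared key
// space; under a correct locking scheme this must not corrupt data.
void stress(Storage& storage, int nthreads, int iterations)
{
    std::vector<std::thread> threads;
    for (int t = 0; t < nthreads; ++t)
    {
        threads.emplace_back([&storage, iterations] {
            std::mt19937 rng(std::random_device{}());
            std::uniform_int_distribution<int> key_dist(0, 99);
            std::uniform_int_distribution<int> op_dist(0, 2);
            std::string value;

            for (int i = 0; i < iterations; ++i)
            {
                std::string key = "key-" + std::to_string(key_dist(rng));
                switch (op_dist(rng))
                {
                case 0: storage.put(key, "value"); break;
                case 1: storage.get(key, &value);  break;
                case 2: storage.del(key);          break;
                }
            }
        });
    }
    for (std::thread& t : threads)
    {
        t.join();
    }
}
```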