With this change, the cache will be aware of which default database
is being used. That will remove the need for the cache parameter
'allowed_references' and thus make the cache easier to configure
and manage.
Encryption Context and Encryption Setup structures have been added to
ROUTER_INSTANCE.
Replication doesn't start if the binlog file has a START_ENCRYPTION_EVENT but
the router_option 'encrypt_binlog' is Off.
The return values of pcre2_match are now properly handled. A successful
match corresponds to a return value that is greater than or equal to zero.
This fix should give a small performance boost, as memory is no longer
needlessly allocated.
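A minimal sketch of the corrected handling, written against the plain PCRE2
8-bit API rather than MaxScale's actual wrapper code; the function shown is
illustrative:

```cpp
#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <cstddef>

/* Returns true if 'subject' matches the compiled pattern 're'. */
bool pattern_matches(pcre2_code* re, const char* subject, size_t len)
{
    pcre2_match_data* mdata = pcre2_match_data_create_from_pattern(re, NULL);
    int rc = pcre2_match(re, (PCRE2_SPTR)subject, len, 0, 0, mdata, NULL);
    pcre2_match_data_free(mdata);

    /* rc >= 0 means the subject matched: a positive value is the number of
     * captured pairs and 0 means the ovector was too small to hold them all.
     * A negative value is PCRE2_ERROR_NOMATCH or an actual error, so nothing
     * needs to be allocated for match results in that case. */
    return rc >= 0;
}
```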
The backend reference states should be cleared when a reconnection attempt
is made. Should the creation of a new DCB succeed, the backend should no
longer be closed.
The MYSQL_* authentication return codes are now in gw_authenticator.h so
that all authenticators can use them. Also dropped the MYSQL_ prefix from
the return codes and added AUTH_INCOMPLETE for a generic
authentication-in-progress return code.
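As a hedged sketch, the shared return codes in gw_authenticator.h might look
roughly like the following; only AUTH_INCOMPLETE is named above, the other
constants are assumed names used purely for illustration:

```cpp
/* Illustrative only: AUTH_INCOMPLETE comes from the description above,
 * the other names are assumptions made for the sake of the example. */
enum authentication_result
{
    AUTH_SUCCEEDED,   /* assumed name: authentication finished successfully */
    AUTH_FAILED,      /* assumed name: authentication failed */
    AUTH_INCOMPLETE   /* generic authentication-in-progress return code */
};
```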
The authenticators can now declare the authentication plugin name. Right
now this is only relevant for MySQL authentication but, for example, the
HTTP module could implement both Basic and Digest authentication.
When a query has been sent to a backend, the response is now
processed to the extent that the cache is capable of figuring
out how many rows are being returned, so that the cache setting
`max_resultset_rows` can be processed.
The code is now also written in such a manner that it should be
insensitive to how a packet has been split up into a chain of
GWBUFs.
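The sketch below uses a simplified, hypothetical buffer struct standing in
for GWBUF to illustrate the boundary-insensitive parsing; it only counts
complete MySQL packet headers, whereas the cache additionally classifies the
packets so that result set rows can be counted:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

struct Buffer              // hypothetical stand-in for one GWBUF in a chain
{
    const uint8_t* data;
    size_t         length;
    Buffer*        next;
};

// Counts how many MySQL packet headers appear in the chain. Each packet
// starts with a 4-byte header: 3 bytes of payload length (little-endian)
// followed by a 1-byte sequence id. Both the header and the payload may be
// split across buffer boundaries.
size_t count_packets(const Buffer* buf)
{
    size_t packets = 0;
    uint8_t header[4];
    size_t header_bytes = 0;   // bytes of the current header seen so far
    size_t payload_left = 0;   // payload bytes still to be skipped

    for (; buf != nullptr; buf = buf->next)
    {
        size_t i = 0;

        while (i < buf->length)
        {
            if (payload_left > 0)
            {
                size_t n = std::min(payload_left, buf->length - i);
                payload_left -= n;
                i += n;
            }
            else
            {
                header[header_bytes++] = buf->data[i++];

                if (header_bytes == sizeof(header))
                {
                    payload_left = header[0] | (header[1] << 8) | (header[2] << 16);
                    header_bytes = 0;
                    packets++;
                }
            }
        }
    }

    return packets;
}
```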
A START_ENCRYPTION_EVENT is added when the Binlog Server option
encrypt_binlog=1 is set.
The event is not sent to any slave.
MaxBinlogCheck understands the new event added in MariaDB 10.1.7; the
number of event types reported by the FormatDescriptionEvent is now 164.
Session command responses with multiple packets could be spread across
multiple, non-contiguous buffers. If a buffer contained a complete packet
and some extra data but it wasn't contiguous, the debug assertion in
gwbuf_clone_portion would fail. With release builds, it would eventually
cause an out-of-bounds memory access when the response was sent to
the client.
An IGNORABLE event is added to the binlog when a gap between two events
is detected.
New routines create and write special events.
Special events are not sent to slaves.
When a backend caused an error that should be sent to the client, the
backend reference was closed but the waiting results state was not
cleared. This caused a debug assertion to be hit.
The active operation counters are now cleared every time a backend reference
is taken out of use. This should fix a few debug assertions that were hit
in tests.
The RocksDB TTL database only honours the TTL when the database
is compacted. If the database is not compacted, stale values will
be returned until the end of time.
Here we utilize the knowledge that the TTL is stored after the
actual value and use the root database for getting the value,
thereby gaining access to the timestamp.
It is still worthwhile to use the TTL database, as it gives us
compaction and the removal of stale items.
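A sketch of that approach, assuming RocksDB's DBWithTTL utility is used and
that the TTL layer appends a 4-byte Unix timestamp to each stored value; the
function name and error handling are simplified:

```cpp
#include <rocksdb/db.h>
#include <rocksdb/utilities/db_ttl.h>

#include <cstdint>
#include <cstring>
#include <string>

// Reads a value through the base database so that the trailing timestamp,
// which DBWithTTL would normally strip off, remains visible to the caller.
bool get_with_timestamp(rocksdb::DBWithTTL* db,
                        const std::string& key,
                        std::string* value,
                        int32_t* timestamp)
{
    std::string raw;
    rocksdb::Status s = db->GetBaseDB()->Get(rocksdb::ReadOptions(), key, &raw);

    if (!s.ok() || raw.size() < sizeof(int32_t))
    {
        return false;
    }

    // Assumed layout: the TTL layer stores the write time as a 32-bit Unix
    // timestamp at the end of the value.
    std::memcpy(timestamp, raw.data() + raw.size() - sizeof(int32_t), sizeof(int32_t));
    value->assign(raw.data(), raw.size() - sizeof(int32_t));

    // The caller can now compare 'timestamp' against its own TTL and treat
    // the value as stale even if RocksDB has not yet compacted it away.
    return true;
}
```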
RocksDB is cloned from GitHub and version v4.9 (the latest at the time of
writing) is checked out.
RocksDB can only be compiled as C++11, which means that RocksDB and hence
storage_rocksdb can be built only if the GCC version is >= 4.7.
The actual storage implementation is quite straightforward.
- The key is a SHA512 of the entire query. That will be changed so that
the database/table name is stored unhashed at the beginning of the key,
as that will cause cached items from the same table to be stored
together. The assumption is that if you access something from a particular
table, chances are you will access something else from it as well.
- When the SO is loaded, the initialization function will create a
subdirectory storage_rocksdb under the MaxScale cache directory.
- For each instance, the RocksDB cache is created under that directory,
in a directory whose name is the same as the cache filter name in the
configuration file.
- The storage API's get and put functions are then mapped directly on
top of RocksDB's equivalent functions (see the sketch after this list).
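A simplified sketch of that mapping, using a SHA512 of the query text as the
key; the function names and signatures are illustrative rather than the
actual storage API:

```cpp
#include <rocksdb/db.h>
#include <rocksdb/utilities/db_ttl.h>
#include <openssl/sha.h>

#include <string>

// Hash the full query text into a fixed-size key.
static std::string make_key(const std::string& query)
{
    unsigned char digest[SHA512_DIGEST_LENGTH];
    SHA512(reinterpret_cast<const unsigned char*>(query.data()), query.size(), digest);
    return std::string(reinterpret_cast<const char*>(digest), sizeof(digest));
}

// put: store the result set under the hashed query.
bool storage_put(rocksdb::DBWithTTL* db, const std::string& query,
                 const std::string& result)
{
    return db->Put(rocksdb::WriteOptions(), make_key(query), result).ok();
}

// get: look the result set up by the hashed query.
bool storage_get(rocksdb::DBWithTTL* db, const std::string& query,
                 std::string* result)
{
    return db->Get(rocksdb::ReadOptions(), make_key(query), result).ok();
}
```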
The `detect_stale_slave` functionality used to only work when MaxScale had
the knowledge that a master server had existed and that replication was
working at some point in time. This might be a "safe" way to do it with
regard to staleness of the data, but in practice it is preferable to
always allow slaves to be used for reads.
This change adds the missing functionality to the monitor by assigning
slave status to all servers which are configured as replication slaves
when no master can be found.
The new member variable that was added to the SERVER should be removed in
2.1 where the server_info offers the same functionality without "polluting"
the SERVER type.
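A minimal, hypothetical illustration of the fallback described above; the
types and field names are stand-ins rather than the actual monitor code:

```cpp
struct ServerInfo              // hypothetical stand-in for monitored server data
{
    bool configured_as_slave;  // replication is configured on this server
    bool is_master;            // the monitor has identified this server as master
    bool is_slave;             // slave status assigned by the monitor
};

// If no master can be found, assign slave status to every server that is
// configured as a replication slave.
void assign_slave_status(ServerInfo* servers, int n_servers)
{
    bool master_found = false;

    for (int i = 0; i < n_servers; i++)
    {
        if (servers[i].is_master)
        {
            master_found = true;
        }
    }

    if (!master_found)
    {
        for (int i = 0; i < n_servers; i++)
        {
            if (servers[i].configured_as_slave)
            {
                servers[i].is_slave = true;
            }
        }
    }
}
```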