The concept of 'allowed_references' was removed from the
documentation and the code. Now that COM_INIT_DB is tracked,
we will always know what the default database is and hence
we can create a cache key that distinguishes between identical
queries targeting different default databases (that is not yet
implemented in this change).
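As a purely hypothetical sketch of the idea (the message notes the
key change itself is not yet implemented), the tracked default
database could be folded into the key along these lines; the names
here are illustrative, not MaxScale's actual code:

    #include <functional>
    #include <string>

    // Fold the default database into the cache key so that identical
    // queries issued against different default databases key
    // differently. A '\0' separator avoids ambiguous concatenations.
    std::size_t cache_key(const std::string& default_db,
                          const std::string& canonical_query)
    {
        std::hash<std::string> h;
        return h(default_db + '\0' + canonical_query);
    }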
The rules for the cache are expressed using a JSON object.
There are two decisions to be made: when to store data to the
cache and when to use data from the cache. The latter is
obviously dependent on the former.
In this change, the 'store' handling is implemented; the 'use'
handling will follow in a subsequent change.
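To illustrate the shape such a rules object could take (the
attribute names below are assumptions, not the definitive syntax):

    {
        "store": [
            {
                "attribute": "table",
                "op": "=",
                "value": "customers"
            }
        ],
        "use": [
            {
                "attribute": "user",
                "op": "=",
                "value": "'maxuser'@'%'"
            }
        ]
    }

A 'store' rule decides whether the result of a query is cached at
all; a 'use' rule decides whether a particular session may be served
from the cache.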
The mysqlmon simple failover mode allows MaxScale to direct write
traffic to a secondary node. This enables a very simple failover
setup when MaxScale is used in a two node master-slave configuration.
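A sketch of what such a monitor section might look like in
maxscale.cnf; the failover-related parameter names are assumptions
beyond what the message itself states:

    [MySQL-Monitor]
    type=monitor
    module=mysqlmon
    servers=server1,server2
    user=maxuser
    passwd=maxpwd
    failover=true
    failover_timeout=60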
With this change, the cache will be aware of which default database
is being used. That will remove the need for the cache parameter
'allowed_references' and thus make the cache easier to configure
and manage.
The return values of pcre2_match are now properly handled. A positive
match is a return value which is greater than or equal to zero. This
fix should give a small performance boost, as memory is no longer
needlessly allocated.
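For reference, a minimal sketch of the corrected handling (the
surrounding compile/match-data setup is elided; pcre2_match and
PCRE2_ERROR_NOMATCH are the real PCRE2 API):

    int rc = pcre2_match(re, subject, subject_len, 0, 0, match_data, NULL);

    if (rc >= 0)
    {
        /* Match: rc is one more than the highest capture pair set;
         * rc == 0 means the ovector was too small for all captures. */
    }
    else if (rc == PCRE2_ERROR_NOMATCH)
    {
        /* No match. */
    }
    else
    {
        /* An actual error. */
    }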
The backend reference states should be cleared when a reconnection
attempt is made. Should the creation of a new DCB succeed, the
backend reference should no longer be marked as closed.
The canonical form of the query should ignore changes in whitespace,
as the semantics of the query remain the same regardless of the
amount of whitespace.
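For example, assuming the usual canonicalization where whitespace is
collapsed (and literals are replaced with placeholders), these two
statements should produce the same canonical form and thus hit the
same cache entry:

    SELECT name FROM customers    WHERE id = 5

    SELECT name
      FROM customers
     WHERE id = 5

    -- Both reduce to something like:
    SELECT name FROM customers WHERE id = ?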
The MYSQL_* authentication return codes are now in gw_authenticator.h so
that all authenticators can use them. Also dropped the MYSQL_ prefix from
the return codes and added AUTH_INCOMPLETE for a generic
authentication-in-progress return code.
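A sketch of what the shared codes might look like in
gw_authenticator.h; only AUTH_INCOMPLETE is named in the message, so
the other names and the values are assumptions:

    /* Generic authenticator return codes, usable by all
     * authenticator modules. */
    #define AUTH_SUCCEEDED   0  /* Authentication was successful */
    #define AUTH_FAILED      1  /* Authentication failed */
    #define AUTH_INCOMPLETE  2  /* Authentication is still ongoing */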
The authenticators can now declare the authentication plugin name.
Right now this is only relevant for MySQL authentication, but, for
example, the HTTP module could implement both Basic and Digest
authentication.
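Purely as an illustration of the idea (the struct and field names
here are hypothetical, not MaxScale's actual API):

    /* An authenticator module advertises which authentication
     * plugin it implements. */
    typedef struct authenticator_api
    {
        /* ... authentication entry points ... */
        const char *plugin_name; /* e.g. "mysql_native_password" */
    } AUTHENTICATOR_API;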
When a query has been sent to a backend, the response is now
processed to the extent that the cache is capable of figuring
out how many rows are being returned, so that the cache setting
`max_resultset_rows` can be enforced.
The code is now also written in such a manner that it should be
insensitive to how a packet has been split up into a chain of
GWBUFs.
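A minimal sketch of the buffer-boundary-insensitive approach, using
MaxScale's GWBUF_LENGTH and GWBUF_DATA accessors and the 'next'
chain pointer; the helper itself is illustrative, not the actual
cache code:

    // Read one byte at a logical offset within a GWBUF chain,
    // regardless of how the packet is split across the buffers.
    static bool chain_byte(const GWBUF *buf, size_t offset, uint8_t *value)
    {
        while (buf && offset >= GWBUF_LENGTH(buf))
        {
            offset -= GWBUF_LENGTH(buf);
            buf = buf->next;
        }

        if (!buf)
        {
            return false; // Offset beyond the end of the chain.
        }

        *value = ((const uint8_t*)GWBUF_DATA(buf))[offset];
        return true;
    }

With such a primitive, the 3-byte packet length headers and the row
count can be parsed without ever assuming that a packet starts or
ends on a buffer boundary.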
MaxScale shouldn't require the service and monitor user checks to
succeed. It makes sense to be able to disable the checks to speed up
the startup process when the user knows that the permissions are OK.
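For instance, a global setting along these lines could be used to
skip the checks (the parameter name is an assumption, shown only to
illustrate the intent):

    [maxscale]
    skip_permission_checks=true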
Session command responses with multiple packets could be spread
across multiple, non-contiguous buffers. If a buffer contained a
complete packet and some extra data but wasn't contiguous, the debug
assertion in gwbuf_clone_portion would fail. With release builds, it
would cause eventual out-of-bounds memory access when the response
was sent to the client.
When a backend caused an error that should be sent to the client, the
backend reference was closed but the waiting results state was not
cleared. This caused a debug assertion to be hit.
The active operation counters are now cleared every time a backend
reference is taken out of use. This should fix a few debug assertions
that were hit in tests.
- Single entry/single exit.
- Variables declared as they are needed.
- The GWBUF_EMPTY check removed, as it only looks at the first buffer
  in a chain. That is, if there had been a non-empty chain where the
  first buffer is empty, the function would incorrectly have reported
  that the buffer contains no packet (see the sketch after this list).
- Documentation updated.
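A chain-aware emptiness check, sketched against MaxScale's
GWBUF_LENGTH accessor and 'next' chain pointer (illustrative, not the
actual replacement code):

    // True only if no buffer in the chain holds any data; unlike
    // GWBUF_EMPTY, this does not stop at the first buffer.
    static bool gwbuf_chain_is_empty(const GWBUF *buf)
    {
        while (buf)
        {
            if (GWBUF_LENGTH(buf) > 0)
            {
                return false;
            }
            buf = buf->next;
        }
        return true;
    }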
The RocksDB TTL database only honours the TTL when the database
is compacted. If the database is not compacted, stale values will
be returned until the end of time.
Here we utilize the knowledge that the timestamp is stored after the
actual value and use the root database for getting the value, thereby
getting access to the timestamp.
It's still worthwhile using the TTL database as that'll give
us compaction and the removal of stale items.
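A sketch of the idea, assuming the RocksDB TTL format where a 4-byte
timestamp is appended to the stored value (GetBaseDB is the real
RocksDB StackableDB accessor; the helper itself is illustrative):

    #include <rocksdb/utilities/db_ttl.h>
    #include <cstdint>
    #include <cstring>
    #include <string>

    // Read through the root database so the timestamp suffix is not
    // stripped by the TTL layer, then split value and timestamp.
    bool get_with_timestamp(rocksdb::DBWithTTL* ttl_db,
                            const rocksdb::Slice& key,
                            std::string* value,
                            int32_t* timestamp)
    {
        std::string raw;
        rocksdb::Status s =
            ttl_db->GetBaseDB()->Get(rocksdb::ReadOptions(), key, &raw);

        if (!s.ok() || raw.size() < sizeof(int32_t))
        {
            return false;
        }

        memcpy(timestamp,
               raw.data() + raw.size() - sizeof(int32_t),
               sizeof(int32_t));
        value->assign(raw.data(), raw.size() - sizeof(int32_t));
        return true;
    }

The value can then be rejected manually when the timestamp shows it
to be older than the configured TTL, while the TTL database still
takes care of removing stale items during compaction.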