The concept of 'allowed_references' was removed from the
documentation and the code. Now that COM_INIT_DB is tracked,
we will always know what the default database is and hence
we can create a cache key that distinguishes between identical
queries targeting different default databases (this is not yet
implemented in this change).
The rules for the cache are expressed using a JSON object.
There are two decisions to be made: when to store data to the
cache and when to use data from the cache. The latter is
obviously dependent on the former.
In this change, the 'store' handling is implemented; 'use'
handling will be in a subsequent change.
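Purely as an illustration (the attribute names and operators are
assumptions, not a final schema), the rules object could look along
these lines, with the 'store' part deciding what is cached and the
'use' part deciding when cached data may be returned:

    {
        "store": [
            { "attribute": "table", "op": "=", "value": "db1.customers" }
        ],
        "use": [
            { "attribute": "user", "op": "=", "value": "'app'@'%'" }
        ]
    }

Only the 'store' part is acted upon by this change; the 'use' part is
shown merely to indicate the intended shape.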
The mysqlmon simple failover mode allows write traffic to be directed to a
secondary node. This enables very simple failover when MaxScale is used in
a two-node master-slave setup.
With this change, the cache will be aware of which default database
is being used. That will remove the need for the cache parameter
'allowed_references' and thus make the cache easier to configure
and manage.
The backend reference states should be cleared when a reconnection attempt
is made. Should the creation of a new DCB succeed, the backend should no
longer be closed.
The canonical form of the query should ignore changes in whitespace as the
semantics of the query stay the same regardless of the amount of
whitespace.
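A minimal sketch of such whitespace-insensitive canonicalization,
assuming runs of whitespace can simply be collapsed into single spaces
(string literals and comments are ignored for brevity):

    #include <cctype>
    #include <string>

    // Collapse every run of whitespace into a single space so that
    // queries differing only in whitespace map to the same canonical form.
    std::string squash_whitespace(const std::string& sql)
    {
        std::string canonical;
        bool in_space = false;

        for (unsigned char c : sql)
        {
            if (std::isspace(c))
            {
                in_space = true;
            }
            else
            {
                if (in_space && !canonical.empty())
                {
                    canonical += ' ';
                }
                in_space = false;
                canonical += static_cast<char>(c);
            }
        }
        return canonical;
    }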
The MYSQL_* authentication return codes are now in gw_authenticator.h so
that all authenticators can use them. Also dropped the MYSQL_ prefix from
the return codes and added AUTH_INCOMPLETE for a generic
authentication-in-progress return code.
The authenticators can now declare the authentication plugin name. Right
now this is only relevant for MySQL authentication but for example the
HTTP module could implement both Basic and Digest authentication.
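A hedged sketch of what the shared declarations could look like; apart
from AUTH_INCOMPLETE, which is named above, the constant values, the
other return codes and the member name are illustrative assumptions:

    /* Hypothetical excerpt in the spirit of gw_authenticator.h */
    #define AUTH_SUCCEEDED   0  /* Authentication was successful           */
    #define AUTH_FAILED      1  /* Authentication failed                   */
    #define AUTH_INCOMPLETE  2  /* Generic authentication-in-progress code */

    typedef struct gw_authenticator
    {
        /* ... entry points for extracting and validating credentials ... */

        /* Name of the authentication plugin this module implements, e.g.
         * "mysql_native_password"; an HTTP module could expose "Basic"
         * or "Digest" instead. */
        const char* plugin_name;
    } GWAUTHENTICATOR;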
When a query has been sent to a backend, the response is now
processed to the extent that the cache is capable of figuring
out how many rows are being returned, so that the cache setting
`max_resultset_rows` can be processed.
The code is now also written in such a manner that it should be
insensitive to how a packet has been split up into a chain of
GWBUFs.
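A rough sketch of the idea, with a simplified stand-in for the buffer
chain (the real GWBUF API is not reproduced here); the MySQL packet
header is read byte by byte so that it does not matter where the chain
happens to be split:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Simplified stand-in for a GWBUF chain: each element is one buffer's
    // payload. A MySQL packet header (3-byte length, 1-byte sequence id)
    // may straddle two buffers, so every byte is read through at(), which
    // walks the chain instead of assuming contiguous memory.
    struct Chain
    {
        std::vector<std::vector<uint8_t>> parts;

        bool at(size_t offset, uint8_t* value) const
        {
            for (const auto& p : parts)
            {
                if (offset < p.size())
                {
                    *value = p[offset];
                    return true;
                }
                offset -= p.size();
            }
            return false;
        }
    };

    // Count the complete MySQL packets available so far; the cache counts
    // the resultset rows on the same principle.
    size_t count_complete_packets(const Chain& chain)
    {
        size_t offset = 0;
        size_t packets = 0;
        uint8_t b0, b1, b2, last;

        while (chain.at(offset, &b0) && chain.at(offset + 1, &b1) &&
               chain.at(offset + 2, &b2))
        {
            size_t payload = b0 | (b1 << 8) | (b2 << 16);

            if (!chain.at(offset + 3 + payload, &last))
            {
                break;  // the packet has not arrived in full yet
            }

            ++packets;
            offset += 4 + payload;
        }

        return packets;
    }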
MaxScale shouldn't require the service and monitor user checks. It makes
sense to disable the checks to speed up the startup process when the user
knows that the permissions are OK.
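Assuming the option is exposed as a global parameter named
skip_permission_checks (the name is an assumption here), disabling the
checks would look like:

    [maxscale]
    # Parameter name assumed for illustration: skip the startup checks of
    # the service and monitor user permissions when they are known to be OK.
    skip_permission_checks=true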
Session command responses with multiple packets could be spread across
multiple, non-contiguous buffers. If a buffer contained a complete packet
and some extra data but it wasn't contiguous, the debug assertion in
gwbuf_clone_portion would fail. With release builds, it would cause an
eventual out-of-bounds memory access when the response was sent to the
client.
When a backend caused an error that should be sent to the client, the
backend reference was closed but the waiting results state was not
cleared. This caused a debug assertion to be hit.
The active operation counters are now cleared every time a backend reference
is taken out of use. This should fix a few debug assertions that were hit
in tests.
- Single entry/single exit.
- Variables declared as they are needed.
- The GWBUF_EMPTY check removed as it only looks at the first buffer
in a chain. That is, if there had been a non-empty chain whose first
buffer is empty, the function would incorrectly have reported that the
buffer contains no packet (see the sketch after this list).
- Documentation updated.
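A minimal sketch of the chain-wide check that replaces it, using a
simplified stand-in for the buffer type:

    #include <cstddef>

    // Minimal stand-in, not the real definition: 'start'/'end' delimit
    // the payload of one link and 'next' points to the following link.
    struct GWBUF
    {
        unsigned char* start;
        unsigned char* end;
        GWBUF* next;
    };

    // The chain is empty only if every link in it is empty, not merely
    // the first one.
    bool chain_is_empty(const GWBUF* buf)
    {
        for (; buf; buf = buf->next)
        {
            if (buf->end > buf->start)
            {
                return false;
            }
        }
        return true;
    }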
The RocksDB TTL database only honours the TTL when the database
is compacted. If the database is not compacted, stale values will
be returned until the end of time.
Here we utilize the knowledge that the timestamp is stored after the
actual value and use the root database for getting the value,
thereby getting access to the timestamp.
It's still worthwhile using the TTL database as that'll give
us compaction and the removal of stale items.
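A sketch of that approach, assuming the layout the RocksDB TTL wrapper
documents (a 32-bit write timestamp appended to each stored value); the
function itself and the freshness policy are illustrative:

    #include <rocksdb/utilities/db_ttl.h>
    #include <cstring>
    #include <ctime>
    #include <string>

    // Fetch through the root database so the raw value, including the
    // trailing 32-bit timestamp the TTL wrapper appends, is visible, and
    // reject entries that are older than the configured TTL even if the
    // database has not been compacted yet.
    bool get_fresh(rocksdb::DBWithTTL* db, const std::string& key,
                   int32_t ttl_seconds, std::string* value)
    {
        std::string raw;
        rocksdb::Status s = db->GetBaseDB()->Get(rocksdb::ReadOptions(),
                                                 key, &raw);

        if (!s.ok() || raw.size() < sizeof(int32_t))
        {
            return false;
        }

        int32_t written;
        memcpy(&written, raw.data() + raw.size() - sizeof(int32_t),
               sizeof(written));

        if (time(nullptr) - written > ttl_seconds)
        {
            return false;  // stale; compaction just has not removed it yet
        }

        value->assign(raw, 0, raw.size() - sizeof(int32_t));
        return true;
    }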
RocksDB is cloned from GitHub and version v4.9 (the latest at the time of
this writing) is checked out.
RocksDB can only be compiled as C++11, which means that RocksDB and hence
storage_rocksdb can be built only if the GCC version is >= 4.7.
The actual storage implementation is quite straightforward.
- The key is a SHA512 of the entire query. That will be changed so that
the database/table name is stored unhashed at the beginning of the key,
as that will cause cached items from the same table to be stored
together. The assumption is that if you access something from a
particular table, chances are you will access something else from it
as well (see the sketch after this list).
- When the SO is loaded, the initialization function will create a
subdirectory storage_rocksdb under the MaxScale cache directory.
- For each instance, the RocksDB cache is created under that directory,
in a subdirectory whose name is the same as the cache filter name in
the configuration file.
- The storage API's get and put functions are then mapped directly on
top of RocksDB's equivalent functions.
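A sketch of the planned key layout under those assumptions; the fixed
prefix width and helper name are illustrative choices, not the actual
implementation:

    #include <openssl/sha.h>
    #include <algorithm>
    #include <string>

    // Hypothetical key layout: a fixed-width, unhashed "db.table" prefix
    // keeps entries from the same table adjacent in RocksDB's sorted key
    // space, and the SHA512 digest of the full query makes the key unique.
    std::string make_storage_key(const std::string& db_table,
                                 const std::string& query)
    {
        const size_t PREFIX_LEN = 64;  // illustrative fixed prefix width

        std::string key(PREFIX_LEN, ' ');
        std::copy_n(db_table.begin(),
                    std::min(db_table.size(), PREFIX_LEN), key.begin());

        unsigned char digest[SHA512_DIGEST_LENGTH];
        SHA512(reinterpret_cast<const unsigned char*>(query.data()),
               query.size(), digest);

        key.append(reinterpret_cast<const char*>(digest), sizeof(digest));
        return key;
    }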
The `detect_stale_slave` functionality used to work only when MaxScale knew
that a master server had existed and that replication was working at some
point in time. This might be a "safe" way to do it with regard to the
staleness of the data, but in practice it is preferable to always allow
slaves to be used for reads.
This change adds the missing functionality to the monitor by assigning
slave status to all servers which are configured as replication slaves
when no master can be found.
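In outline, the assignment amounts to something like the following
sketch, where the server structure and status flag are simplified
placeholders rather than MaxScale's real types:

    #include <vector>

    // Simplified placeholder for the monitor's view of a server.
    struct MonitoredServer
    {
        bool configured_as_slave;  // replication configured towards a master
        bool is_running;
        int  status;               // bitmask of SERVER_* style flags
    };

    const int STATUS_SLAVE = 0x01;

    // If no master could be found in this monitoring pass, still mark the
    // servers configured as replication slaves with the slave status so
    // that they remain usable for reads.
    void assign_stale_slaves(std::vector<MonitoredServer>& servers,
                             bool master_found)
    {
        if (!master_found)
        {
            for (auto& srv : servers)
            {
                if (srv.is_running && srv.configured_as_slave)
                {
                    srv.status |= STATUS_SLAVE;
                }
            }
        }
    }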
The new member variable that was added to the SERVER should be removed in
2.1, where the server_info offers the same functionality without "polluting"
the SERVER type.
The different server monitoring functions all did similar work and
combining them into one function makes the whole process of monitoring a
server simpler.
If a relay master server is found in the replication tree, it should not
get the master status. Previously all master servers were assigned the
master status regardless of their depth in the replication tree.
By comparing the depth value of each potential master, the monitor can
find the right master at the root of the replication tree.
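A sketch of that comparison, assuming each candidate master carries its
depth in the replication tree; the names are illustrative:

    #include <cstddef>
    #include <vector>

    // Illustrative candidate: a server that looks like a master together
    // with its depth in the replication tree (0 = root, 1 = relay, ...).
    struct MasterCandidate
    {
        int server_id;
        int depth;
    };

    // Pick the candidate closest to the root of the replication tree;
    // relay masters, having a greater depth, never win over the real master.
    const MasterCandidate* find_root_master(
        const std::vector<MasterCandidate>& candidates)
    {
        const MasterCandidate* best = nullptr;

        for (const auto& c : candidates)
        {
            if (!best || c.depth < best->depth)
            {
                best = &c;
            }
        }
        return best;
    }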