The result set of SELECTs that use functions whose result will
always vary, or whose result depends upon the user executing the
query, should not be cached. The list of functions is the same
as that specified for the query cache of MariaDB:
https://mariadb.com/kb/en/mariadb/query-cache/
For instance, a SELECT using NOW() or CURRENT_USER() is never cached.
If user or system variables are used in a SELECT statement, the
result will not be cached either. That ensures that a wrong result
is never returned.
As before, the cache will be used if there is no ongoing transaction
(which includes autocommit being enabled), or if the ongoing
transaction is explicitly read-only.
In addition, the cache will be used and populated during any other
transaction as long as only pure read statements are executed. After
the first non-read statement, the use of the cache is disabled.
- Add note about filters having become available in 2.1.
- Add insertstream to release notes and change log.
- Add complete example to cache documentation.
The firewall filter should allow COM_PING and other similar commands to
pass through, as they are mainly used to check the status of the backend
server or to display statistics. COM_PROCESS_KILL is the exception, as
it affects the state of the backend server; this is better controlled
with permissions in the server than in the firewall filter.
Commands that require special grants are not allowed to pass, as they
are mainly intended for maintenance purposes and should not be performed
through the firewall.
The masking filter assumes payloads smaller than 2^24 - 1 bytes. The
behaviour when larger payloads are encountered can be configured.
The actual implementation follows in a subsequent change.
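
A minimal configuration sketch; the large_payload parameter and its
value are assumed names for the configurable behaviour, not confirmed
by this change:

```
[MaskCustomerData]
type=filter
module=masking
rules=/etc/maxscale/masking_rules.json
# Assumed option: what to do when a payload of 2^24 - 1 bytes or more is
# seen, e.g. abort the client connection rather than risk leaking data.
large_payload=abort
```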
Added documentation for dbfwfilter and mentioned the new directory in the
release notes. The configuration guide also gives an example of how the
path parameters are resolved.
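
As an illustration of the path resolution (the resolved directory shown
here is an assumption; the actual default depends on the installation
layout), a relative rules file could be configured as follows:

```
[DatabaseFirewall]
type=filter
module=dbfwfilter
# A relative path is resolved against the module data directory,
# e.g. rules.txt -> /var/lib/maxscale/rules.txt (directory illustrative).
rules=rules.txt
```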
Moved the qlafilter parameters to module options. This removes the need to
parse the options in the filter.
Split the options into separate parameters. This allows common options to
be combined as enumerations under common parameters.
- Hard TTL; the maximum time a value will be served from the cache.
- Soft TTL; the time after which the cached value should be updated
  from the server.
So as not to fetch the same value from the server multiple times
unnecessarily, when the soft TTL has been reached the value is updated
for the first client only, while all other clients are served the stale
value until it has been updated.
With different soft and hard TTLs there is still a definite upper bound
on how old a served value can be.
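
As an illustration, assuming the two limits are exposed as soft_ttl and
hard_ttl parameters of the cache filter (in seconds), a configuration
could look like this:

```
[Cache]
type=filter
module=cache
storage=storage_inmemory
# After 30 seconds the first client to ask triggers a refresh from the
# server; after 60 seconds a cached value is never served anymore.
soft_ttl=30
hard_ttl=60
```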
The luafilter exposes two of the main functions provided by the query
classifier API: the type and the operation classification.
The functions can be used by the Lua script with minimal overhead, as the
query currently being executed is stored only as a pointer. The functions
should only be called inside the `routeQuery` entry point of a Lua script.
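
A minimal sketch of such a script; `routeQuery` is the entry point named
above, while the classifier function names and the argument are assumed
for illustration:

```
-- Sketch only: lua_qc_get_type and lua_qc_get_operation are assumed names
-- for the exposed type and operation classification functions.
function routeQuery(query)
    -- Only valid here, while the current query is held by the filter.
    local query_type = lua_qc_get_type()
    local operation  = lua_qc_get_operation()
    -- e.g. log the classification of the statement being routed
    print(tostring(operation) .. ": " .. tostring(query))
end
```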
0 is now the default of all cache configuration parameters and in
all cases the meaning is the same; that is, no limit. Internally,
for the sake of consistency, all limits but ttl are now 64-bit.
The documentation listed the rules as a comma-separated list when they
were actually parsed as a whitespace-separated list. The match specifiers
were also documented as optional when in fact they are mandatory.
MXS-848 (partially). The QLA-filter now has additional options
to control the printing.
1. "append"
This toggles append-mode, where the filter opens the log files in
update mode (if file already existed) and only adds text to the end.
2. "print_service"
This toggles writing the service name onto each row. Mostly useful
with the unified_file-setting.
3. "print_session"
This toggles writing the session number onto each row. Mostly useful
with the unified_file-setting.
Also, the filter now writes a header to the beginning of the file
when creating it.
The printing has been separated to its own helper-function, in case
more accurate control will be added in the future.
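
A configuration sketch using these options; passing them via the
filter's options line is an assumption about how they are applied:

```
[QueryLog]
type=filter
module=qlafilter
filebase=/var/log/maxscale/qla/query
# Assumed usage of the three options described above.
options=append,print_service,print_session
```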
The maximum count and maximum size of the cache can now be
specified, and a storage can declare what capabilities it has.
If a storage module cannot enforce the maximum count or maximum
size limits, the storage is decorated with an LRU storage that
can.
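
For illustration, assuming the limits are exposed as max_count and
max_size cache parameters (names assumed, size in bytes):

```
[Cache]
type=filter
module=cache
storage=storage_inmemory
# At most 1000 cached entries and roughly 10 MiB of data; if the storage
# module cannot enforce this itself, the decorating LRU storage will.
max_count=1000
max_size=10000000
```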
Added a document that describes the module command system and added the
necessary information in the dbfwfilter documentation.
The release notes also point to the newly created document.
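
As a usage illustration only (the exact command name and arguments are
assumptions; the dbfwfilter documentation is the authoritative source),
a module command could be invoked through maxadmin roughly like this:

```
maxadmin call command dbfwfilter rules/reload MyFirewall /etc/maxscale/rules.txt
```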
Since it's a cache, we do not need to retain the data from one
MaxScale invocation to the next. Consequently, we can turn off
write ahead logging completely if we simply wipe the RocksDB
database at each startup. In addition, the location of the
cache directory can now be specified explicitly, so it can be
placed, for instance, on a RAM disk.
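
A sketch of placing the RocksDB storage on a RAM disk; the
storage_options key used for the directory is a hypothetical name, not
taken from this change:

```
[Cache]
type=filter
module=cache
storage=storage_rocksdb
# 'cache_directory' is an assumed option name; since the database is
# wiped at startup anyway, it can safely live on a RAM disk.
storage_options=cache_directory=/mnt/ramdisk/maxscale-cache
```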
With the advent of qc_get_field_info, columns can now be matched.
However, there is still some non-determinism caused by the table
information not containing contextual information (exactly where
the table is used).
Further, suppose table X contains the column a and table Z contains
the column b; then, given a statement like
SELECT a, b FROM X, Z;
we cannot know whether a is in X or Z, or b in X or Z, without being
aware of the schema, which we currently are not.
Consequently, as long as MaxScale is not aware of the schema, some
heuristics must be applied. For instance, if exactly one table is
referred to, then we can assume that columns that are not explicitly
qualified are from that table.
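
For example, a column-matching rule in the dbfwfilter rule file might
look roughly like this (rule name, columns and user pattern are
illustrative):

```
rule protect_ssn deny columns ssn salary
users %@% match any rules protect_ssn
```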
The rule tests are currently rather rudimentary and need to be
expanded.