- Hard TTL; the maximum time a value will be served from the cache.
- Soft TTL; the time after which the value should be refreshed from
the server.
To avoid fetching the same value multiple times unnecessarily, when
the soft TTL has been reached the value is refreshed for the first
client, while all other clients continue to receive the stale value
until the refresh has completed.
With distinct soft and hard TTLs there is thus a definite upper bound
on how old a served value can be.
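As an illustrative sketch, a cache filter section using both TTLs
might look as follows; the parameter names follow the cache filter
documentation, but the exact value syntax accepted may vary between
versions:

```
[Cache]
type=filter
module=cache
storage=storage_inmemory
# After 30 seconds the value is refreshed from the server for the
# first client that asks for it; until the refresh completes, other
# clients are served the stale value.
soft_ttl=30
# After 60 seconds the cached value is never used, regardless of the
# state of the refresh.
hard_ttl=60
```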
The luafilter exposes two of the main functions provided by the query
classifier API: the type and operation classification.
The functions can be used by the Lua script with minimal overhead, as
the query currently being executed is stored only as a pointer. The
functions should only be called inside the `routeQuery` entry point of
a Lua script.
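A minimal sketch of such a script follows; the exposed function names
(`lua_qc_get_type`, `lua_qc_get_operation`) and the `routeQuery`
signature are assumptions here, so check the luafilter documentation
of your version for the exact names:

```
-- Sketch only: the function names below are assumptions.
function routeQuery(query)
    -- Both calls classify the query currently being executed; the
    -- filter stores it only as a pointer, so no copy of the
    -- statement is made.
    local qtype = lua_qc_get_type()      -- type classification
    local op    = lua_qc_get_operation() -- operation classification
end
```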
When a Galera cluster loses a member, it will recalculate the
wsrep_local_index values. As the index is zero-based, we can be certain
that in a valid cluster there will always be a node with an index of 0.
If the galeramon can't find a node with an index of 0, it means either
that the cluster hasn't stabilized and a recalculation of the index is
pending, or that there is no connectivity between MaxScale and the node
with the index value 0.
With this change and default settings, active-active MaxScale setups with
Galera clusters should always choose the same node as the master.
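For illustration, a plain galeramon section such as the sketch below
(server names and credentials are placeholders) is enough for this
behaviour; with default settings, each MaxScale monitoring the cluster
will deterministically pick the node with wsrep_local_index 0 as the
master:

```
[Galera-Monitor]
type=monitor
module=galeramon
servers=node1,node2,node3
user=maxuser
password=maxpwd
monitor_interval=2000
```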
The document wraps paragraphs at 80 characters and uses triple
backtick blocks for code sections instead of indentation.
Also fixed a few typos and reworded some parts.
0 is now the default for all cache configuration parameters and in
all cases the meaning is the same: no limit. Internally, for the sake
of consistency, all limits except ttl are now 64-bit.
If a slave fails while a non-critical read is being executed, the read is
retried on a different server. This is controlled by the new
`retry_failed_reads` option.
Only SELECTs that are executed outside of a transaction and with
autocommit enabled are retried.
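A sketch of enabling this on a readwritesplit service (the service
name, server list and credentials are placeholders):

```
[RW-Split-Router]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxuser
password=maxpwd
# Retry non-critical reads on another server if a slave fails mid-read.
retry_failed_reads=true
```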
The documentation listed the rules as a comma-separated list when
they were actually parsed as a whitespace-separated list. The match
specifiers were also documented as optional when in fact they were
mandatory.
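For example, a rule file along the following lines reflects the
corrected documentation; the rule names and the matched column are
illustrative:

```
rule block_ssn deny columns ssn
rule no_wildcard deny wildcard
# The rules are separated by whitespace, not commas, and the match
# specifier (any, all or strict_all) is mandatory.
users %@% match any rules block_ssn no_wildcard
```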
MXS-848 (partially). The QLA filter now has additional options
to control the printing; an example configuration is shown below.
1. "append"
This toggles append mode, where the filter opens the log files in
update mode (if the file already exists) and only adds text to the end.
2. "print_service"
This toggles writing the service name onto each row. Mostly useful
with the unified_file setting.
3. "print_session"
This toggles writing the session number onto each row. Mostly useful
with the unified_file setting.
Also, the filter now writes a header to the beginning of the file
when creating it.
The printing has been separated into its own helper function, in case
more fine-grained control is added in the future.
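An illustrative sketch of a qlafilter section using the new options
(the filebase path is a placeholder):

```
[QLA]
type=filter
module=qlafilter
filebase=/var/log/maxscale/qla/query
# Open an existing log file in update mode and add text to the end
# instead of truncating it.
append=true
# Write the service name and the session number onto each row.
print_service=true
print_session=true
```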
The maximum count and maximum size of the cache can now be
specified and a storage can declare what capabilities it has.
If a storage module cannot enforce the maximum count or maximum
size limits, the storage is decorated with an LRU storage that
can.
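A sketch of declaring the limits; the parameter names follow the cache
filter documentation, with 0 (the default) meaning no limit, while the
size suffix syntax shown is an assumption:

```
[Cache]
type=filter
module=cache
storage=storage_inmemory
# Keep at most 1000 cached values using at most 10Mi of memory; a
# storage that cannot enforce this itself is wrapped in an LRU storage.
max_count=1000
max_size=10Mi
```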
Added a document that describes the module command system and added
the necessary information to the dbfwfilter documentation.
The release notes also point to the newly created document.
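As a sketch, module commands are invoked through maxadmin; the command
and instance names below are illustrative only, see the new document
and the dbfwfilter documentation for the actual commands:

```
maxadmin call command dbfwfilter rules/reload my-firewall /etc/maxscale-rules
```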
Since it's a cache, we do not need to retain the data from one
MaxScale invocation to the next. Consequently, we can turn off
write ahead logging completely if we simply wipe the RocksDB
database at each startup. In addition, the location of the
cache directory can now be specified explicitly, so it can be
placed, for instance, on a RAM disk.
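A sketch of placing the cache on a RAM disk; whether the directory is
passed through storage_options as shown here is an assumption, so
check the cache documentation for the exact mechanism:

```
[Cache]
type=filter
module=cache
storage=storage_rocksdb
# Assumed option name; since the database is wiped at each startup,
# the directory can safely live on a RAM disk.
storage_options=cache_directory=/dev/shm/maxscale-cache
```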
If a master that once had slaves is in the stale status, it will not
retain this status after a restart. Without persisting information on
disk, the stale master status cannot be deduced by looking at the
master alone. Because of this, the user should be able to enable the
stale master status manually.
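For reference, a monitor sketch with stale master detection enabled
(server names and credentials are placeholders):

```
[MySQL-Monitor]
type=monitor
module=mysqlmon
servers=server1,server2
user=maxuser
password=maxpwd
# Let a master that loses its last slave keep the master status; after
# a MaxScale restart the status must be enabled manually again.
detect_stale_master=true
```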
The listen() backlog is now set to INT_MAX which should guarantee that the
internal limit is always higher than the system limit. This means that the
length of the queue always follows /proc/sys/net/ipv4/tcp_max_syn_backlog.
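The kernel-side limit can be inspected and raised through the same
path, for example:

```
# Show the current kernel limit that now effectively caps the queue.
cat /proc/sys/net/ipv4/tcp_max_syn_backlog
# Raise it; the chosen value is only an example.
sysctl -w net.ipv4.tcp_max_syn_backlog=4096
```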
With the advent of qc_get_field_info, columns can now be matched.
However, there is still some non-determinism caused by the table
information not containing contextual information (exactly where
the table is used).
Further, suppose table X contains the column a and table Y contains
the column b; then, given a statement like
SELECT a, b FROM X, Y;
we cannot know whether a is in X or Y, or b in X or Y, without being
aware of the schema, which we currently are not.
Consequently, as long as MaxScale is not aware of the schema, some
heuristics must be applied. For instance, if exactly one table is
referred to, then we can assume that columns that are not explicitly
qualified are from that table.
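To illustrate the heuristic with the tables above:

```
-- Exactly one table is referred to, so the unqualified columns a and b
-- can be assumed to be columns of X and can be matched.
SELECT a, b FROM X;

-- Two tables are referred to; without schema information it cannot be
-- known whether a and b come from X or from Y.
SELECT a, b FROM X, Y;
```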
The rule tests are currently rather rudimentary and need to be
expanded.