Update filter documentation
- Add note about filters having become available in 2.1.
- Add insertstream to release notes and change log.
- Add complete example to cache documentation.
This commit is contained in:
parent
c82831cc10
commit
a58498c9bf
@@ -12,7 +12,7 @@

* MySQL Monitor now has a failover mode.
* Named Server Filter now supports wildcards for source option.
* Binlog Server can now be configured to encrypt binlog files.
-* New filters, _cache, _ccrfilter_, _masking_, and _maxrows_ are introduced.
+* New filters, _cache_, _ccrfilter_, _insertstream_, _masking_, and _maxrows_ are introduced.

For more details, please refer to:

* [MariaDB MaxScale 2.1.0 Release Notes](Release-Notes/MaxScale-2.1.0-Release-Notes.md)
@@ -1,5 +1,7 @@

# Consistent Critical Read Filter

This filter was introduced in MariaDB MaxScale 2.1.

## Overview

The Consistent Critical Read (CCR) filter allows consistent critical reads to be
@@ -1,5 +1,7 @@

# Cache

This filter was introduced in MariaDB MaxScale 2.1.

## Overview

The cache filter is a simple cache that is capable of caching the result of
SELECTs, so that subsequent identical SELECTs are served directly by MaxScale,
@@ -8,6 +10,11 @@ without the queries being routed to any server.

_Note that the cache is still experimental and that non-backward compatible
changes may be made._

Note that installing the cache causes all statements to be parsed. Unless
statements _already_ need to be parsed, e.g. due to the presence of another
filter or the chosen router, adding the cache will not necessarily improve
performance, but may decrease it.

## Limitations

All of these limitations may be addressed in forthcoming releases.
@@ -639,3 +646,44 @@ The value is a boolean and the default is `false`.

```
storage_options=collect_statistics=true
```

# Example

In the following we define a cache _MyCache_ that uses the cache storage module
`storage_inmemory`, whose _soft ttl_ is `30` seconds and whose _hard ttl_ is
`45` seconds. The cached data is shared between all threads and the maximum size
of the cached data is `50` mebibytes. The rules for the cache are in the file
`cache_rules.json`.

### Configuration

```
[MyCache]
type=filter
module=cache
storage=storage_inmemory
soft_ttl=30
hard_ttl=45
cached_data=shared
max_size=50Mi
rules=cache_rules.json

[MyService]
type=service
...
filters=MyCache
```

### `cache_rules.json`

The rules specify that the data of the table `sbtest` should be cached.

```
{
    "store": [
        {
            "attribute": "table",
            "op": "=",
            "value": "sbtest"
        }
    ]
}
```
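With these rules in place, identical SELECTs against `sbtest` are answered from the cache. As an illustration (the WHERE clause below is hypothetical and not part of this commit):

```
-- First execution is routed to a backend server and the result is cached.
SELECT * FROM sbtest WHERE id = 100;

-- An identical SELECT issued within soft_ttl (30 seconds) is served
-- directly by MaxScale without touching any server.
SELECT * FROM sbtest WHERE id = 100;
```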
@@ -1,5 +1,8 @@

# Insert Stream Filter

This filter was introduced in MariaDB MaxScale 2.1.

## Overview

The _insertstream_ filter converts bulk inserts into CSV data streams that are
consumed by the backend server via the LOAD DATA LOCAL INFILE mechanism. This
leverages the speed advantage of LOAD DATA LOCAL INFILE over regular inserts
@@ -1,5 +1,7 @@

# Masking

This filter was introduced in MariaDB MaxScale 2.1.

## Overview

With the _masking_ filter it is possible to obfuscate the returned
value of a particular column.
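Although this hunk ends before any configuration is shown, a minimal filter section could plausibly follow the same pattern as the cache example; the `rules` parameter and file name below are assumptions, not taken from this diff:

```
# Hypothetical sketch; the rules parameter and file name are assumptions.
[MyMasking]
type=filter
module=masking
rules=masking_rules.json
```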
@@ -1,5 +1,7 @@

# Maxrows

This filter was introduced in MariaDB MaxScale 2.1.

## Overview

The maxrows filter is capable of restricting the number of rows that a SELECT,
a prepared statement, or a stored procedure can return to the client application.
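For illustration, a corresponding filter section might look like the following; the `max_resultset_rows` parameter name and value are assumptions, as this diff shows no maxrows configuration:

```
# Hypothetical sketch; parameter name and value are assumptions.
[MyMaxrows]
type=filter
module=maxrows
max_resultset_rows=10000
```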
@@ -271,6 +271,17 @@ to large sets of data with a single query.

For more information, refer to the [Maxrows](../Filters/Maxrows.md)
documentation.

### Insert stream filter

The _insertstream_ filter converts bulk inserts into CSV data streams that are
consumed by the backend server via the LOAD DATA LOCAL INFILE mechanism. This
leverages the speed advantage of LOAD DATA LOCAL INFILE over regular inserts
while also reducing the overall network traffic by condensing the inserted
values into CSV.
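Although no configuration for this filter appears in the diff, by analogy with the other filter sections a minimal setup might look as follows; the section names are illustrative, and the assumption that the module needs no mandatory parameters is not confirmed here:

```
# Hypothetical sketch; section names are illustrative.
[MyInsertStream]
type=filter
module=insertstream

[MyService]
type=service
...
filters=MyInsertStream
```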

For more information, refer to the [Insert Stream Filter](../Filters/Insert-Stream-Filter.md)
documentation.

### Galeramon Monitor new option

The `set_donor_nodes` option allows the global variable _wsrep_sst_donor_ to be
set to a list of the preferred donor nodes (chosen among the slave nodes).
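As an illustration, the option would presumably be enabled in the Galera monitor section; the section layout and the boolean value below are assumptions, not shown in this diff:

```
# Hypothetical sketch; only set_donor_nodes itself appears in this diff.
[Galera Monitor]
type=monitor
module=galeramon
set_donor_nodes=true
```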