Merge branch '2.1' into develop

Johan Wikman
2017-04-27 09:11:02 +03:00
51 changed files with 2233 additions and 3313 deletions

@ -7,10 +7,10 @@ This filter was introduced in MariaDB MaxScale 2.1.
The Consistent Critical Read (CCR) filter allows consistent critical reads to be
done through MaxScale while still allowing scaleout of non-critical reads.
When the filter detects a statement that would modify the database, it attaches
a routing hint to all following statements. This routing hint guides the routing
module to route the statement to the master server where data is guaranteed to
be in an up-to-date state.
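For illustration, wiring this filter into a service might look roughly like the
following sketch. The module name `ccrfilter`, the section names and the
readwritesplit router choice are assumptions based on common MaxScale
conventions, not something stated on this page.

```
# Hypothetical maxscale.cnf excerpt: attach the CCR filter to a service
[CCR]
type=filter
module=ccrfilter

[RW-Service]
type=service
router=readwritesplit
# Servers and credentials below are placeholders.
servers=server1,server2
user=maxuser
passwd=maxpwd
filters=CCR
```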
## Filter Options
@ -42,8 +42,8 @@ _time_. Once the timer has elapsed, all statements are routed normally. If a new
data modifying SQL statement is processed within the time window, the timer is
reset to the value of _time_.
Enabling this parameter in combination with the _count_ parameter causes both
the time window and number of queries to be inspected. If either of the two
conditions is met, the query is re-routed to the master.
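For example, combining a time window with a statement count might look like the
sketch below; the values are purely illustrative and the `count` parameter is
described in the next section.

```
[CCR]
type=filter
module=ccrfilter
# After a write, statements keep going to the master while either the
# 60-second window or the 10-statement counter is still active.
time=60
count=10
```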
### `count`
@ -51,8 +51,8 @@ conditions are met, the query is re-routed to the master.
The number of SQL statements to route to master after detecting a data modifying
SQL statement. This feature is disabled by default.
After processing a data modifying SQL statement, a counter is set to the value
of _count_ and all statements are routed to the master. Each executed statement
after a data modifying SQL statement causes the counter to be decremented. Once
the counter reaches zero, the statements are routed normally. If a new data
modifying SQL statement is processed, the counter is reset to the value of
@ -61,8 +61,8 @@ _count_.
### `match`
An optional parameter that can be used to control which statements trigger the
statement re-routing. The parameter value is a regular expression that is used
to match against the SQL text. Only non-SELECT statements are inspected.
```
match=.*INSERT.*

@ -622,6 +622,9 @@ storage=storage_inmemory
## `storage_rocksdb`
This storage module is not built by default and is not included in the
MariaDB MaxScale packages.
This storage module uses a RocksDB database for storing the cached data. By
default, the directory for the RocksDB database is created inside the
_MaxScale cache_ directory, which usually is not on a RAM disk. For
@ -651,7 +654,8 @@ created, under which the actual instance specific cache directories are created.
Specifies whether RocksDB should collect statistics that later can be queried
using `maxadmin`. It should be noted, though, that collecting RocksDB statistics
is not without a cost.
From the [RocksDB Documentation](https://github.com/facebook/rocksdb/wiki/Statistics)
_The overhead of statistics is usually small but non-negligible. We usually
observe an overhead of 5%-10%._
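For context, a cache filter section that selects this storage module might look
roughly like the sketch below. Apart from `storage=storage_rocksdb`, the section
layout and the module name `cache` are assumptions not shown in this excerpt.

```
[Cache]
type=filter
module=cache
# Use the RocksDB-backed storage instead of storage_inmemory.
storage=storage_rocksdb
```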

@ -1,8 +1,10 @@
# RabbitMQ Consumer Client
## Overview
This utility tool is used to read messages from a RabbitMQ broker sent by the
[RabbitMQ Filter](RabbitMQ-Filter.md) and forward these messages into an
SQL database as queries.
## Command Line Arguments
@ -14,7 +16,9 @@ The **RabbitMQ Consumer Client** only has one command line argument.
## Installation
To install the RabbitMQ Consumer Client you can either use the provided packages
or compile it from source. The source code is included as part of the
MariaDB MaxScale source code and can be found in the `rabbitmq_consumer` folder.
## Building from source
@ -48,9 +52,12 @@ include and library directories 'in buildvars.inc'
## Configuration
The consumer client requires that the `consumer.cnf` configuration file is
present either in the `etc` folder of the installation directory or in the
folder specified by the `-c` argument.
The source broker, the destination database and the message log file can be
configured in the separate `consumer.cnf` file.
| Option | Description |
|-----------|---------------------------------------------|

@ -1,13 +1,26 @@
# RabbitMQ Filter
## Overview
This filter is designed to extract queries and transform them into a canonical
form, e.g. `INSERT INTO database.table VALUES ("John Doe", "Downtown",100,50.0);`
turns into `INSERT INTO database.table VALUES ("?", "?",?,?);`. The filter
pushes these canonicalized queries and their replies into a RabbitMQ broker
where they can later be retrieved. The retrieval can be done with a custom
application or the [RabbitMQ Consumer Client](RabbitMQ-Consumer-Client.md)
utility tool, which reads the messages from the broker and sends the contents
of those messages as SQL queries to a database.
## Configuration
The configuration block for the **mqfilter** requires the minimal filter options
in its section within the MaxScale configuration file. Although the filter will
start with only those, it will use default values which only work with a freshly
installed RabbitMQ server that still uses its own defaults. This setup is mostly
intended for testing the filter.
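As a sketch, a minimal section that relies entirely on those defaults could be
as small as the following; the section name is hypothetical and only the module
name `mqfilter` comes from the text above.

```
[RabbitMQ-Minimal]
type=filter
module=mqfilter
# All broker settings are left at their defaults, which assumes a freshly
# installed RabbitMQ server still using its default configuration.
```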
The following is an example of an mqfilter configuration used for actual logging
of queries to a RabbitMQ broker on a different host.
```
[RabbitMQ]
@ -41,7 +54,10 @@ The mqfilter filter does not support any filter options.
### Filter Parameters
The RabbitMQ filter has parameters to control which queries are logged based on
either the attributes of the user or the query itself. These can be combined to
only log queries targeting a certain table in a certain database from a certain
user at a certain network address.
Option | Description | Accepted Values | Default |

@ -2,11 +2,16 @@
## Overview
The tee filter is a "plumbing" fitting in the MariaDB MaxScale filter toolkit.
It can be used in a filter pipeline of a service to make copies of requests from
the client and send the copies to another service within MariaDB MaxScale.
## Configuration
The configuration block for the TEE filter requires the minimal filter
parameters in its section within the MaxScale configuration file. The service to
send the duplicates to must be defined. Currently the tee filter does not
support multi-statements.
```
[DataMartFilter]
@ -41,31 +46,45 @@ options=case,extended
## Filter Parameters
The tee filter requires a mandatory parameter to define the service to replicate
statements to and accepts a number of optional parameters.
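As an illustration, a tee filter section that replicates matching statements to
another service might look roughly like this; the parameter name `service` for
the mandatory target is an assumption based on common MaxScale conventions, and
the values are illustrative.

```
[DataMartFilter]
type=filter
module=tee
# Service that receives the duplicated statements.
service=DataMart
# Only replicate inserts into the orders table.
match=insert.*into.*order*
```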
### Match
An optional parameter used to limit the queries that will be replicated by the
tee filter. The parameter value is a regular expression that is used to match
against the SQL text. Only SQL statements that match the text passed as the
value of this parameter will be sent to the service defined in the filter
section.
```
match=insert.*into.*order*
```
All regular expressions are evaluated with the option to ignore the case of the
text, therefore a match option of select will match select, SELECT and any form
of the word with upper or lowercase characters.
### Exclude
An optional parameter used to limit the queries that will be replicated by the
tee filter. The parameter value is a regular expression that is used to match
against the SQL text. SQL statements that match the text passed as the value of
this parameter will be excluded from the replication stream.
```
exclude=select
```
All regular expressions are evaluated with the option to ignore the case of the
text, therefore an exclude option of select will exclude statements that contain
select, SELECT or any form of the word with upper or lowercase characters.
### Source
The optional source parameter defines an address that is used to match against
the address from which the client connection to MariaDB MaxScale originates.
Only sessions that originate from this address will be replicated.
```
source=127.0.0.1
@ -73,7 +92,9 @@ source=127.0.0.1
### User
The optional user parameter defines a user name that is used to match against
the user from which the client connection to MariaDB MaxScale originates. Only
sessions that are connected using this username are replicated.
```
user=john
@ -83,9 +104,17 @@ user=john
### Example 1 - Replicate all inserts into the orders table
Assume an order processing system that has a table called orders. You also have
another database server, the datamart server, that requires all inserts into
orders to be replicated to it. Deletes and updates are not, however, required.
Set up a service in MariaDB MaxScale, called Orders, to communicate with the
order processing system with the tee filter applied to it. Also set up a service
to talk to the datamart server, using the DataMart service. The tee filter would
have the DataMart service as its service entry. Adding a match parameter of
"insert into orders" would then result in all requests being sent to the order
processing system, and insert statements that include the orders table being
additionally sent to the datamart server.
```
[Orders]

@ -2,11 +2,17 @@
## Overview
The Transaction Performance Monitoring (TPM) filter is a filter module for MaxScale
that monitors every SQL statement that passes through the filter.
The filter groups a series of SQL statements into a transaction by detecting
'commit' or 'rollback' statements. It logs all committed transactions with necessary
information, such as timestamp, client, SQL statements, latency, etc., which
can be used later for transaction performance analysis.
## Configuration
The configuration block for the TPM filter requires the minimal filter
options in its section within the maxscale.cnf file, stored in /etc/maxscale.cnf.
```
[MyLogFilter]
@ -32,7 +38,8 @@ The TPM filter accepts a number of optional parameters.
### Filename
The name of the output file created for performance logging.
The default filename is **tpm.log**.
```
filename=/tmp/SqlQueryLog
@ -40,7 +47,10 @@ filename=/tmp/SqlQueryLog
### Source
The optional `source` parameter defines an address that is used
to match against the address from which the client connection
to MaxScale originates. Only sessions that originate from this
address will be logged.
```
source=127.0.0.1
@ -48,7 +58,10 @@ source=127.0.0.1
### User
The optional `user` parameter defines a user name that is used
to match against the user from which the client connection to
MaxScale originates. Only sessions that are connected using
this username are logged.
```
user=john
@ -56,7 +69,8 @@ user=john
### Delimiter
The optional `delimiter` parameter defines a delimiter that is used to
distinguish columns in the log. The default delimiter is **`:::`**.
```
delimiter=:::
@ -64,7 +78,9 @@ delimiter=:::
### Query_delimiter
The optional `query_delimiter` defines a delimiter that is used to
distinguish different SQL statements in a transaction.
The default query delimiter is **`@@@`**.
```
query_delimiter=@@@
@ -72,7 +88,11 @@ query_delimiter=@@@
### Named_pipe
**`named_pipe`** is the path to a named pipe, which the TPM filter uses to
communicate with 3rd-party applications (e.g., [DBSeer](http://dbseer.org)).
Logging is enabled when the filter receives the character '1' and logging is
disabled when the filter receives the character '0' from this named pipe.
The default named pipe is **`/tmp/tpmfilter`** and logging is **disabled** by default.
named_pipe=/tmp/tpmfilter
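Putting the parameters above together, a TPM filter section might look roughly
like the following sketch. The module name `tpmfilter` and the section name are
assumptions; the parameter names and most values come from the sections above.

```
[TransactionMonitor]
type=filter
module=tpmfilter
filename=/tmp/SqlQueryLog
delimiter=:::
query_delimiter=@@@
named_pipe=/tmp/tpmfilter
# Optional filtering by client address and user.
source=127.0.0.1
user=john
```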
@ -89,7 +109,8 @@ Similarly, the following command disables the logging:
### Example 1 - Log Transactions for Performance Analysis
You want to log every transaction with its SQL statements and latency
for future transaction performance analysis.
Add a filter with the following definition:
@ -111,7 +132,8 @@ passwd=mypasswd
filters=PerformanceLogger
```
After the filter reads the character '1' from its named pipe, the following
is an example of the log generated by the TPM filter with the above configuration:
```
@ -120,4 +142,4 @@ After the filter reads the character '1' from its named pipe, the following is a
...
```
Note that 3 and 5 are latencies of each transaction in milliseconds.