Fixed typos in documentation.

Markus Makela
2015-04-14 20:36:29 +03:00
parent 253c63e958
commit 786f34cf49
26 changed files with 60 additions and 60 deletions

View File

@@ -82,7 +82,7 @@ The first keyword is `users`, which identifies this line as a user definition li
The second component is a list of user names and network addresses in the format *`user`*`@`*`0.0.0.0`*. The first part is the user name and the second part is the network address. You can use the `%` character as the wildcard to enable user name matching from any address or network matching for all users. After the list of users and networks the keyword match is expected.
-After this either the keyword `any` `all` or `strict_all` is expected. This defined how the rules are matched. If `any` is used when the first rule is matched the query is considered blocked and the rest of the rules are skipped. If instead the `all` keyword is used all rules must match for the query to be blocked. The `strict_all` is the same as `all` but it checks the rules from left to right in the order they were listed. If one of these does not match, the rest of the rules are not checked. This could be usedful in situations where you would for example combine `limit_queries` and `regex` rules. By using `strict_all` you can have the `regex` rule first and the `limit_queries` rule second. This way the rule only matches if the `regex` rule matches enough times for the `limit_queries` rule to match.
+After this either the keyword `any` `all` or `strict_all` is expected. This defined how the rules are matched. If `any` is used when the first rule is matched the query is considered blocked and the rest of the rules are skipped. If instead the `all` keyword is used all rules must match for the query to be blocked. The `strict_all` is the same as `all` but it checks the rules from left to right in the order they were listed. If one of these does not match, the rest of the rules are not checked. This could be useful in situations where you would for example combine `limit_queries` and `regex` rules. By using `strict_all` you can have the `regex` rule first and the `limit_queries` rule second. This way the rule only matches if the `regex` rule matches enough times for the `limit_queries` rule to match.
After the matching part comes the rules keyword after which a list of rule names is expected. This allows reusing of the rules and enables varying levels of query restriction.
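
A minimal sketch of such a rule file may help; it assumes the `rule NAME deny ...` definition syntax described elsewhere in the firewall filter documentation, and the rule names, pattern, user list and limits below are invented for illustration.

```
rule match_orders deny regex '.*from.*orders.*'
rule query_limit deny limit_queries 10 5 60
users %@% match strict_all rules match_orders query_limit
```

Because `strict_all` evaluates the rules from left to right, `query_limit` is only counted for queries that already matched `match_orders`, which is the combination the paragraph above describes.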

View File

@@ -22,7 +22,7 @@ The consumer client requires that the `consumer.cnf` configuration file is eithe
The source broker, the destination database and the message log file can be configured into the separate `consumer.cnf` file.
-| Option | Desctiption |
+| Option | Description |
|-----------|---------------------------------------------|
| hostname | Hostname of the RabbitMQ server |
| port | Port of the RabbitMQ server |
@@ -34,5 +34,5 @@ The source broker, the destination database and the message log file can be conf
| dbport | Port of the SQL server |
| dbname | Name of the SQL database to use |
| dbuser | Database username |
-| dbpasswd | Database passwork |
+| dbpasswd | Database password |
| logfile | Message log filename |
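
For orientation, a hypothetical `consumer.cnf` built only from the options visible in this diff might look as follows; all values are placeholders, and the options elided by the hunks (for example the broker credentials and the SQL server address) are left out, so the exact layout of a working file may differ.

```
hostname=rabbit.example.com
port=5672
dbport=3306
dbname=mqlog
dbuser=maxuser
dbpasswd=maxpwd
logfile=/var/log/consumer.log
```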

View File

@@ -5,7 +5,7 @@ This filter is designed to extract queries and transform them into a canonical f
## Configuration
-The configuration block for the **mqfilter** filter requires the minimal filter options in it’s section within the MaxScale.cnf file, stored in $MAXSCALE_HOME/etc/MaxScale.cnf. Although the filter will start, it will use the default values which only work with a freshly installed RabbitMQ server and use its default values. This setup is mostly intednded for testing the filter.
+The configuration block for the **mqfilter** filter requires the minimal filter options in it’s section within the MaxScale.cnf file, stored in $MAXSCALE_HOME/etc/MaxScale.cnf. Although the filter will start, it will use the default values which only work with a freshly installed RabbitMQ server and use its default values. This setup is mostly intended for testing the filter.
The following is an example of a mqfilter configuration in the MaxScale.cnf file used for actual logging of queries to a RabbitMQ broker on a different host.
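
The example itself falls outside this hunk. As an illustrative sketch only: the section and parameter names below (`hostname`, `port`, `username`, `password`, `exchange`, `queue`, `key`) are assumptions based on the RabbitMQ connection settings the filter needs, and every value is a placeholder.

```
[MQ Filter]
type=filter
module=mqfilter
hostname=rabbit.example.com
port=5672
username=maxscale
password=secret
exchange=maxscale-exchange
queue=maxscale-queue
key=MaxScale
```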

View File

@@ -56,7 +56,7 @@ user=john
### Example 1 - Replace MySQL 5.1 create table syntax with that for later versions
-MySQL 5.1 used the parameter TYPE = to set the storage engine that should be used for a table. In later versions this changed to be ENGINE =. Imagine you have an application that you can not change for some reason, but you wish to migrate to a newer version of MySQL. The regexfilter can be used to transform the create table statments into the form that could be used by MySQL 5.5
+MySQL 5.1 used the parameter TYPE = to set the storage engine that should be used for a table. In later versions this changed to be ENGINE =. Imagine you have an application that you can not change for some reason, but you wish to migrate to a newer version of MySQL. The regexfilter can be used to transform the create table statements into the form that could be used by MySQL 5.5
[CreateTableFilter]
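
The rest of the `[CreateTableFilter]` definition lies outside this hunk; a sketch of how such a section might be completed, assuming the regexfilter's `match` and `replace` parameters (the pattern itself is illustrative):

```
[CreateTableFilter]
type=filter
module=regexfilter
match=TYPE[ ]*=
replace=ENGINE=
```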

View File

@@ -58,7 +58,7 @@ user=john
Assume an order processing system that has a table called orders. You also have another database server, the datamart server, that requires all inserts into orders to be replicated to it. Deletes and updates are not however required.
-Set up a service in MaxScale, called Orders, to communicate with the order processing system with the tee filter applied to it. Also set up a service to talk the datamart server, using the DataMart service. The tee filter woudl have as it’s service entry the DataMart service, by adding a match parameter of "insert into orders" would then result in all requests being sent to the order processing system, and insert statements that include the orders table being additionally sent to the datamart server.
+Set up a service in MaxScale, called Orders, to communicate with the order processing system with the tee filter applied to it. Also set up a service to talk the datamart server, using the DataMart service. The tee filter would have as it’s service entry the DataMart service, by adding a match parameter of "insert into orders" would then result in all requests being sent to the order processing system, and insert statements that include the orders table being additionally sent to the datamart server.
[Orders]
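
The `[Orders]` definition also continues outside this hunk. A sketch of the arrangement described above, assuming the tee filter's `service` and `match` parameters; the filter name, server name and credentials are invented for the example.

```
[DataMartFilter]
type=filter
module=tee
service=DataMart
match=insert.*into.*orders

[Orders]
type=service
router=readconnroute
servers=orderserver1
user=maxuser
passwd=maxpwd
filters=DataMartFilter
```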

View File

@@ -36,7 +36,7 @@ The number of SQL statements to store and report upon.
count=30
-The default vakue for the numebr of statements recorded is 10.
+The default value for the number of statements recorded is 10.
### Match
@@ -62,7 +62,7 @@ source=127.0.0.1
### User
-The optional user parameter defines a user name that is used to match against the user from which the client connection to MaxScale originates. Only sessions that are connected using this username will result in results being gebnerated.
+The optional user parameter defines a user name that is used to match against the user from which the client connection to MaxScale originates. Only sessions that are connected using this username will result in results being generated.
user=john
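
Putting the parameters from these hunks together, a hypothetical pair of topfilter definitions for the comparison described in the next hunk might look as follows; the source addresses and file paths are invented, and `filebase` is assumed to name the output files.

```
[SlowAppServer]
type=filter
module=topfilter
count=20
source=192.168.0.11
user=john
filebase=/var/logs/top/SlowAppServer

[ControlAppServer]
type=filter
module=topfilter
count=20
source=192.168.0.12
user=john
filebase=/var/logs/top/ControlAppServer
```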
@@ -126,11 +126,11 @@ In the router definition add both filters
filters=SlowAppServer | ControlAppServer
-You will then have two sets of logs files written, one which profiles the top 20 queries of the slow application server and another that gives you the top 20 queries of your control application server. These two sets of files can then be compared to determine what if anythign is different between the two.
+You will then have two sets of logs files written, one which profiles the top 20 queries of the slow application server and another that gives you the top 20 queries of your control application server. These two sets of files can then be compared to determine what if anything is different between the two.
# Output Report
-The following is an example report for a number of fictitious queries executed against the employees exaple database available for MySQL.
+The following is an example report for a number of fictitious queries executed against the employees example database available for MySQL.
-bash-4.1$ cat /var/logs/top/Employees-top-10.137