From fe32ab63f154b8bc1d0cf8ac7611a8a83459ea5a Mon Sep 17 00:00:00 2001
From: Markus Makela
Date: Fri, 30 Jan 2015 20:57:48 +0200
Subject: [PATCH] Removed documentation files from the wrong folders.

---
 rabbitmq_consumer/rabbitmq.md | 375 ------------------------------
 server/modules/filter/fwfilter.md | 108 ---------
 2 files changed, 483 deletions(-)
 delete mode 100644 rabbitmq_consumer/rabbitmq.md
 delete mode 100644 server/modules/filter/fwfilter.md

diff --git a/rabbitmq_consumer/rabbitmq.md b/rabbitmq_consumer/rabbitmq.md
deleted file mode 100644
index ec74a7f3b..000000000
--- a/rabbitmq_consumer/rabbitmq.md
+++ /dev/null
@@ -1,375 +0,0 @@
-# Rabbit MQ setup and MaxScale Integration
-## Introduction
-This step-by-step guide covers installing a RabbitMQ server and testing it before integrating it with MaxScale.
-
-A new plugin filter and a message consumer application need to be compiled and linked with an external C library, RabbitMQ-c, that provides AMQP protocol integration.
-Custom configuration, with TCP/IP and queue parameters, is also detailed here.
-The installation supports RPM and DEB packaging as well as traditional compilation.
-
-## Step 1 - Get the RabbitMQ binaries
-
-On CentOS 6.5, get the Fedora/RHEL RPM from [http://www.rabbitmq.com/](http://www.rabbitmq.com/ "RabbitMQ"):
-
-    rabbitmq-server-3.3.4-1.noarch.rpm
-
-Please note, before installing RabbitMQ, you must install Erlang.
-
-Example:
-
-    yum install erlang
-    Package erlang-R14B-04.3.el6.x86_64 already installed and latest version
-
-## Step 2 - Install and Start the Server
-
-Install the packages using your distribution's package manager and start the server:
-
-    yum install rabbitmq-server-3.3.4-1.noarch.rpm
-    systemctl start rabbitmq-server.service
-
-To configure your RabbitMQ server, please refer to the RabbitMQ website: [http://www.rabbitmq.com/](http://www.rabbitmq.com/ "RabbitMQ website").
-
-rabbitmqctl is a command line tool for managing a RabbitMQ broker.
It performs all actions by connecting to one of the broker's nodes.
-
-    rabbitmqctl list_queues
-    rabbitmqctl list_queues | list_exchanges | cluster_status | list_bindings | list_connections | list_consumers | status
-
-Example output:
-
-    [root@maxscale-02 MaxScale]# rabbitmqctl status
-    Status of node 'rabbit@maxscale-02' ...
-    [{pid,12251},
-    {running_applications,[{rabbit,"RabbitMQ","3.3.4"},
-    {os_mon,"CPO CXC 138 46","2.2.7"},
-    {xmerl,"XML parser","1.2.10"},
-    {mnesia,"MNESIA CXC 138 12","4.5"},
-    {sasl,"SASL CXC 138 11","2.1.10"},
-    {stdlib,"ERTS CXC 138 10","1.17.5"},
-    {kernel,"ERTS CXC 138 10","2.14.5"}]},
-    {os,{unix,linux}},
-    {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:2:2] [rq:2] [async-threads:30] [kernel-poll:true]\n"},
-    ...
-    {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
-    ...
-    ...done.
-
-    [root@maxscale-02 MaxScale]# rabbitmqctl list_bindings
-    Listing bindings ...
-    x1 exchange q1 queue k1 []
-    ...done.
-
-At some point you may need to stop and reset the server:
-
-    rabbitmqctl stop_app
-    rabbitmqctl reset
-    rabbitmqctl start_app
-
-## Step 3 - Install and test the client libraries
-
-The library selected for MaxScale's RabbitMQ integration is:
-[https://github.com/alanxz/rabbitmq-c](https://github.com/alanxz/rabbitmq-c "RabbitMQ-C")
-
-### Manual software compilation
-
-To compile the RabbitMQ-C libraries manually:
-
-    git clone https://github.com/alanxz/rabbitmq-c.git
-    cd rabbitmq-c
-    cmake -DCMAKE_INSTALL_PREFIX=/usr .
-    make
-    make install
-
-Please note that this installs the libraries to /usr. If you do not wish to install them to this location, provide a different value for the CMAKE_INSTALL_PREFIX variable.
-
-
-### Setup using the EPEL repository
-
-Check how to configure your distribution for the EPEL repository: [https://fedoraproject.org/wiki/EPEL](https://fedoraproject.org/wiki/EPEL "EPEL")
-
-Configure your repositories and install the software:
-
-    yum install librabbitmq.x86_64
-
-You might also want to install:
-
-    librabbitmq-tools.x86_64, librabbitmq-devel.x86_64
-
-Please note you may also install the RabbitMQ server from the EPEL repository:
-
-    yum install rabbitmq-server
-
-### Basic tests with the library
-
-With the required librabbitmq-c library now installed, we can test client-server interaction with basic operations, using the amqp_* tools located in the examples/ folder of the build directory.
-
-Please note, these example applications may not be included in the RPM library packages.
-
-#### Test 1 - Create the exchange
-
-    [root@maxscale-02 examples]# ./amqp_exchange_declare
-    Usage: amqp_exchange_declare host port exchange exchangetype
-
-Declare the exchange:
-
-    [root@maxscale-02 examples]# ./amqp_exchange_declare 127.0.0.1 5672 foo direct
-
-#### Test 2 - Listen to the exchange with a selected binding key
-
-    [root@maxscale-02 examples]# ./amqp_listen
-    Usage: amqp_listen host port exchange bindingkey
-
-Start the listener:
-
-    [root@maxscale-02 examples]# ./amqp_listen 127.0.0.1 5672 foo k1 &
-
-#### Test 3 - Send a message ...
-
-    [root@maxscale-02 examples]# ./amqp_sendstring
-    Usage: amqp_sendstring host port exchange routingkey messagebody
-
-    [root@maxscale-02 examples]# ./amqp_sendstring 127.0.0.1 5672 foo k1 "This is a new message"
-
-... and watch the listener output:
-
-    Delivery 1, exchange foo routingkey k1
-    Content-type: text/plain
-
-## Step 4 - MaxScale integration with librabbitmq-c
-
-A new filter (mqfilter.c) sends messages to the RabbitMQ server, and a message consumer program (rabbitmq_consumer/consumer.c) fetches the messages and stores them in a MySQL/MariaDB database.
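Later in this guide the filter is shown sending queries in canonical form, with literals replaced by `?`. As a rough illustrative sketch only (not the actual mqfilter implementation, which canonicalizes via MaxScale's query parser), the normalization can be pictured like this:

```python
import re

def canonicalize(query: str) -> str:
    """Sketch: replace string and numeric literals with '?'."""
    q = re.sub(r"'[^']*'|\"[^\"]*\"", "?", query)  # string literals
    q = re.sub(r"\b\d+(\.\d+)?\b", "?", q)         # numeric literals
    return q

print(canonicalize("select RAND(3), RAND(5)"))  # select RAND(?), RAND(?)
```

Because identical queries with different literals collapse to one canonical string, the consumer can count repetitions of the same normalized query.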
-A quick way to install MaxScale with the RabbitMQ filter is to go to the MaxScale source directory and run the following commands:
-
-    mkdir build
-    cd build
-    cmake .. -DBUILD_RABBITMQ=Y
-    make
-    make install
-
-To build the RabbitMQ filter, CMake needs an additional parameter:
-
-    -DBUILD_RABBITMQ=Y
-
-If the librabbitmq-c library was compiled manually, it may be necessary to pass the location of the libraries and header files to CMake.
-
-Libraries:
-
-    -DRABBITMQ_LIBRARIES=
-
-Headers:
-
-    -DRABBITMQ_HEADERS=
-
-Please note that the Message Queue Consumer (consumer.c) also needs to be compiled with the MySQL/MariaDB client libraries in addition to the RabbitMQ-C libraries. If you have your MySQL/MariaDB client libraries and headers in non-standard locations, you can pass them manually to CMake:
-
-Libraries:
-
-    -DMYSQLCLIENT_LIBRARIES=
-
-Headers:
-
-    -DMYSQLCLIENT_HEADERS=
-
-The message queue consumer must also be built as a separate task; it is not built as part of the MaxScale build system. To build it, run the following commands in the rabbitmq_consumer directory in the MaxScale source folder:
-
-    mkdir build
-    cd build
-    cmake ..
-    make
-
-To install it:
-
-    make install
-
-To build packages:
-
-    make package
-
-This generates RPM or DEB packages depending on your system. These packages can then be installed on remote systems for easy access to the data generated by the consumer client.
-
-## Step 5 - Configure the new applications
-
-The new filter needs to be configured in MaxScale.cnf.
-
-    [Test Service]
-    type=service
-    router=readconnroute
-    router_options=slave
-    servers=server1,server2,server3,server5,server4
-    user=massi
-    passwd=massi
-    filters=MQ
-
-    [MQ]
-    type=filter
-    module=mqfilter
-    exchange=x1
-    key=k1
-    queue=q1
-    hostname=127.0.0.1
-    port=5672
-    logging_trigger=all
-
-Logging triggers define whether to log all or a subset of the incoming queries, using these options:
-
-    # log only some elements or all
-    logging_trigger=[all,source,schema,object]
-
-    # Whether to log only SELECT, UPDATE, INSERT and DELETE queries or all possible queries
-    logging_log_all=true|false
-
-    # Log only when any of the trigger parameters match or only if all parameters match
-    logging_strict=true|false
-
-    # specify objects
-    logging_object=mytable,another_table
-
-    # specify logged users
-    logging_source_user=testuser,testuser
-
-    # specify source addresses
-    logging_source_host=127.0.0.1,192.168.10.14
-
-    # specify schemas
-    logging_schema=employees,orders,catalog
-
-Example:
-
-    logging_trigger=object,schema,source
-    logging_strict=false
-    logging_log_all=false
-    logging_object=my1
-    logging_schema=test
-    logging_source_user=maxtest
-
-With this example configuration, the logging result is:
-
-    if user maxtest does something, it's logged
-    and all queries in the test schema are logged
-    anything targeting the my1 table is logged
-    SELECT NOW(), SELECT MD5("xyz") are not logged
-
-Please note that if we want to log only the user 'maxtest' accessing the schema 'test' with target 'my1', the option logging_strict must be set to TRUE. If we also want to include those SELECTs that carry no schema name, the option logging_log_all must be set to TRUE.
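The logging_strict option boils down to an any-versus-all test over the configured triggers. A minimal sketch of that decision (illustrative only, not the mqfilter source):

```python
def should_log(matched, strict):
    """matched maps each configured trigger ('object', 'schema', 'source')
    to whether the incoming query satisfied it; strict mirrors logging_strict."""
    return all(matched.values()) if strict else any(matched.values())

# logging_strict=false: one matching trigger is enough to log the query
print(should_log({"object": False, "schema": True, "source": False}, strict=False))  # True
# logging_strict=true: every configured trigger must match
print(should_log({"object": False, "schema": True, "source": False}, strict=True))   # False
```

This is why, in the example above, strict mode is needed to log only 'maxtest' touching 'test.my1': with strict off, any single trigger match is enough.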
-The mqfilter writes information about the matched logging triggers and message delivery to the MaxScale TRACE log:
-
-    2014 09/03 06:22:04 Trigger is TRG_SOURCE: user: testuser = testuser
-    2014 09/03 06:22:04 Trigger is TRG_SCHEMA: test = test
-    2014 09/03 06:22:04 Trigger is TRG_OBJECT: test.t1 = t1
-    2014 09/03 06:22:04 Routing message to: 127.0.0.1:5672 / as guest/guest, exchange: x1 key:k1 queue:q1
-
-The consumer application needs to be configured as well:
-
-    #The options for the consumer are:
-    #hostname RabbitMQ hostname
-    #port RabbitMQ port
-    #vhost RabbitMQ virtual host
-    #user RabbitMQ username
-    #passwd RabbitMQ password
-    #queue Name of the queue to use
-    #dbserver SQL server name
-    #dbport SQL server port
-    #dbname Name of the database to use
-    #dbuser SQL server username
-    #dbpasswd SQL server password
-    #logfile Message log filename
-
-    [consumer]
-    hostname=127.0.0.1
-    port=5672
-    vhost=/
-    user=guest
-    passwd=guest
-    queue=q1
-    dbserver=127.0.0.1
-    dbport=3308
-    dbname=mqpairs
-    dbuser=xxx
-    dbpasswd=yyy
-
-You may need to modify LD_LIBRARY_PATH before launching 'consumer':
-
-    # export LD_LIBRARY_PATH=/packages/rabbitmq-c/rabbitmq-c/librabbitmq:/packages/mariadb_client-2.0.0-Linux/lib/mariadb:/usr/lib64
-
-and finally we can launch it:
-
-    # ./consumer
-
-If the consumer.cnf file is not in the same directory as the binary, you can pass the path of the folder containing it with the -c flag:
-
-    # ./consumer -c path/to/file
-
-Then start MaxScale as well.
-
-## Step 6 - Test the filter and check collected data
-Assuming that MaxScale and the message consumer are running, let's connect to the service with an active mqfilter:
-
-    [root@maxscale-02 MaxScale]# mysql -h 127.0.0.1 -P 4506 -uxxx -pyyy
-    ...
-    MariaDB [(none)]> select RAND(3), RAND(5);
-    +--------------------+---------------------+
-    | RAND(3) | RAND(5) |
-    +--------------------+---------------------+
-    | 0.9057697559760601 | 0.40613597483014313 |
-    +--------------------+---------------------+
-    1 row in set (0.01 sec)
-
-    ...
-    MariaDB [(none)]> select RAND(3544), RAND(11);
-
-We can check the consumer output in the terminal where it was started:
-
-    --------------------------------------------------------------
-    Received: 1409671452|select @@version_comment limit ?
-    Received: 1409671452|Columns: 1
-    ...
-    Received: 1409671477|select RAND(?), RAND(?)
-    Received: 1409671477|Columns: 2
-
-Now we query the database for the content collected so far:
-
-    MariaDB [(none)]> use mqpairs;
-    Database changed
-
-    MariaDB [mqpairs]> select * from pairs;
-
-    +-------------------------------------+----------------------------------+------------+---------------------+---------------------+---------+
-    | tag | query | reply | date_in | date_out | counter |
-    +-------------------------------------+----------------------------------+------------+---------------------+---------------------+---------+
-    | 006c006d006e006f007000710072007374 | select @@version_comment limit ? | Columns: 1 | 2014-09-02 11:14:51 | 2014-09-02 11:26:38 | 3 |
-    | 00750076007700780079007a007b007c7d | SELECT DATABASE() | Columns: 1 | 2014-09-02 11:14:56 | 2014-09-02 11:27:06 | 3 |
-    | 007e007f00800081008200830084008586 | show databases | Columns: 1 | 2014-09-02 11:14:56 | 2014-09-02 11:27:06 | 3 |
-    | 008700880089008a008b008c008d008e8f | show tables | Columns: 1 | 2014-09-02 11:14:56 | 2014-09-02 11:27:06 | 3 |
-    | 0090009100920093009400950096009798 | select * from mqpairs.pairs | Columns: 6 | 2014-09-02 11:15:00 | 2014-09-02 11:27:00 | 12 |
-    | 00fc00fd00fe00ff0100010101020103104 | select NOW() | Columns: 1 | 2014-09-02 11:24:23 | 2014-09-02 11:24:23 | 1 |
-    | 01050106010701080109010a010b010c10d | select RAND(?), RAND(?)
| Columns: 2 | 2014-09-02 11:24:37 | 2014-09-02 11:24:37 | 1 |
-    +-------------------------------------+----------------------------------+------------+---------------------+---------------------+---------+
-    7 rows in set (0.01 sec)
-
-The filter sends queries to the RabbitMQ server in canonical format, i.e. select RAND(?), RAND(?).
-The queries that the Message Queue Consumer application gets from the server are stored with a counter that shows how many times that normalized query was received:
-
-    | 01050106010701080109010a010b010c10d | select RAND(?), RAND(?) | Columns: 2 | 2014-09-02 11:24:37 | 2014-09-02 11:29:15 | 3 |
diff --git a/server/modules/filter/fwfilter.md b/server/modules/filter/fwfilter.md
deleted file mode 100644
index c376303f6..000000000
--- a/server/modules/filter/fwfilter.md
+++ /dev/null
@@ -1,108 +0,0 @@
-Firewall filter
-
-# Overview
-The firewall filter is used to block queries that match a set of rules. It can be used to prevent harmful queries from reaching the database or to limit access to the database based on a finer-grained set of rules than traditional GRANT-based rights management.
-
-# Configuration
-
-The firewall filter requires only minimal configuration in the MaxScale.cnf file. The actual rules of the firewall filter are located in a separate text file. The following is an example of a firewall filter configuration in the MaxScale.cnf file.
-
-    [Firewall]
-    type=filter
-    module=fwfilter
-    rules=/home/user/rules.txt
-
-## Filter Options
-
-The firewall filter does not support any filter options.
-
-## Filter Parameters
-
-The firewall filter has one mandatory parameter that defines the location of the rule file. This is the 'rules' parameter and it expects an absolute path to the rule file.
-
-# Rule syntax
-
-Rules are defined using the following syntax.
-
-` rule NAME deny [wildcard | columns VALUE ... |
- regex REGEX | limit_queries COUNT TIMEPERIOD HOLDOFF |
- no_where_clause] [at_times VALUE...]
[on_queries [select|update|insert|delete]]`
-
-Rules always define a blocking action, so the basic mode of the firewall filter is to allow all queries that do not match a given set of rules. Rules are identified by their name and have a mandatory part and optional parts.
-
-The first step of defining a rule is the keyword 'rule', which identifies this line of text as a rule. The second token is the name of the rule. After that, the mandatory token 'deny' marks the start of the actual rule definition.
-
-## Mandatory rule parameters
-
-The firewall filter's rules expect a single mandatory parameter per rule. You can define multiple rules to cover situations where you would like to apply multiple mandatory rules to a query.
-
-### Wildcard
-
-This rule blocks all queries that use the wildcard character *.
-
-### Columns
-
-This rule expects a list of values after the 'columns' keyword. These values are interpreted as column names and if a query targets any of these, it is blocked.
-
-### Regex
-
-This rule blocks all queries matching a regex enclosed in single or double quotes.
-
-### Limit_queries
-
-The limit_queries rule expects three parameters. The first is the number of allowed queries during the time period, the second is the time period in seconds and the third is the amount of time for which the rule stays active and blocking.
-
-### No_where_clause
-
-This rule inspects the query and blocks it if it has no WHERE clause. This prevents, for example, a DELETE FROM ... query that lacks a WHERE clause. It does not prevent wrongful usage of the WHERE clause, e.g. DELETE FROM ... WHERE 1=1.
-
-## Optional rule parameters
-
-Each mandatory rule accepts one or more optional parameters. These are defined after the mandatory part of the rule.
-
-### At_times
-
-This rule expects a list of time ranges that define the times when the rule in question is active.
The time formats are expected to be ISO-8601 compliant and separated by a single dash (the - character). For example, to define the active period of a rule as 17:00 to 19:00, you would add 'at_times 17:00:00-19:00:00' to the end of the rule.
-
-### On_queries
-
-This limits the rule to be active only on certain types of queries.
-
-## Applying rules to users
-
-To apply the defined rules to users, use the following syntax.
-
-`users NAME ... match [any|all] rules RULE ...`
-
-The first keyword is users, which identifies this line as a user definition line. After this, a list of usernames and network addresses in the format 'user@0.0.0.0' is expected. The first part is the username and the second part is the network address. You can use the '%' character as the wildcard to enable username matching from any address or network matching for all users. After the list of users and networks, the keyword match is expected, followed by either the keyword 'any' or 'all'. This defines how the rules are matched: if 'any' is used, the query is considered blocked as soon as the first rule matches and the rest of the rules are skipped; if 'all' is used, all rules must match for the query to be blocked.
-
-After the matching part comes the rules keyword, followed by a list of rule names. This allows rules to be reused and enables varying levels of query restriction.
-
-# Examples
-
-## Example rule file
-
-The following is an example of a rule file which defines six rules and applies them to three sets of users. This rule file is used in all of the examples.
-
-    rule block_wildcard deny wildcard at_times 8:00:00-17:00:00
-    rule no_personal_info deny columns phone salary address on_queries select|delete at_times 12:00:00-18:00:00
-    rule simple_regex deny regex '.*insert.*into.*select.*'
-    rule dos_block deny limit_queries 10000 1.0 500.0 at_times 12:00:00-18:00:00
-    rule safe_delete deny no_where_clause on_queries delete
-    rule managers_table deny regex '.*from.*managers.*'
-    users John@% Jane@% match any rules no_personal_info block_wildcard
-    users %@80.120.% match any rules block_wildcard dos_block
-    users %@% match all rules safe_delete managers_table
-
-## Example 1 - Deny access to personal information and prevent huge queries during peak hours
-
-Assume that a database cluster with tables that have a large number of columns is under heavy load during certain times of the day. Also assume that large selects and queries on personal information create unwanted stress on the cluster. We wouldn't want to completely prevent all users from accessing personal information or performing large select queries; we only want to block the users John and Jane.
-
-This can be achieved by creating two rules: one that blocks the usage of the wildcard and one that prevents queries targeting a set of columns. To apply these rules to the users, we define a users line in the rule file with both the rules and all the users we want to apply them to. The rules are defined in the example rule file on lines 1 and 2 and the users line is defined on line 7.
-
-## Example 2 - Only safe deletes from the managers table
-
-We want to prevent accidental deletes from the managers table where the WHERE clause is missing. This poses a problem: we don't want to require all delete queries to have a WHERE clause. We only want to prevent the data in the managers table from being deleted without one.
-
-To achieve this, we need two rules. The first rule can be seen on line 5 of the example rule file.
This defines that all delete operations must have a WHERE clause. This rule alone does us no good, so we need a second one. The second rule is defined on line 6 and it blocks all queries that match the provided regular expression. When we combine these two rules, we get the result we want. You can see the application of these rules on line 9 of the example rule file. The 'all' matching mode requires that all of the rules match for the query to be blocked, which in effect combines the two rules into a more complex one.
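The any/all matching described above can be summed up in a few lines. This is an illustrative sketch of the decision, not the fwfilter source:

```python
def query_blocked(rule_results, mode):
    """rule_results: one boolean per rule on the users line, True if that
    rule matched the query. mode is 'any' or 'all', as in the rule file."""
    return all(rule_results) if mode == "all" else any(rule_results)

# users %@% match all rules safe_delete managers_table
# A WHERE-less delete that also touches the managers table matches both rules:
print(query_blocked([True, True], "all"))   # True -> blocked
# A WHERE-less delete on another table matches only safe_delete:
print(query_blocked([True, False], "all"))  # False -> allowed
```

With 'any' mode the same partial match would block the query, which is why Example 2 relies on 'all' to combine the two rules into one stricter condition.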