Merge pull request #124 from mariadb-corporation/2.1-doc-markusjm

2.1 documentation update
This commit is contained in:
Johan Wikman 2017-04-20 12:07:31 +03:00 committed by GitHub
commit bda8c478e7
13 changed files with 780 additions and 727 deletions


@ -24,11 +24,11 @@ module=hintfilter
## Comments and comment types
The client connection will need to have comments enabled. For example the `mysql` command line client has comments disabled by default.
The client connection will need to have comments enabled. For example the `mysql` command line client has comments disabled by default and they need to be enabled by passing the `-c` option.
For comment types, use either `-- ` (notice the whitespace) or `#` after the semicolon or `/* .. */` before the semicolon. All comment types work with routing hints.
For comment types, use either `-- ` (notice the whitespace after the double hyphen) or `#` after the semicolon or `/* .. */` before the semicolon.
The MySQL manual doesn`t specify if comment blocks, i.e. `/* .. */`, should contain a
The MySQL manual doesn't specify if comment blocks, i.e. `/* .. */`, should contain a
whitespace character before or after the tags, so adding whitespace at both the start and the end is advised.
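As a sketch (the table name is a placeholder, and `route to master` is one of the routing destination hints), the same hint can be attached with any of the three comment styles:

```
SELECT id FROM t1; -- maxscale route to master
SELECT id FROM t1; # maxscale route to master
SELECT id FROM t1 /* maxscale route to master */;
```

Remember to start the `mysql` client with the `-c` option so that the comments are not stripped before they reach MariaDB MaxScale.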
## Hint body
@ -39,10 +39,10 @@ All hints must start with the `maxscale` tag.
-- maxscale <hint body>
```
The hints have two types, ones that route to a server and others that contain
The hints have two types, ones that define a server type and others that contain
name-value pairs.
###Routing destination hints
### Routing destination hints
These hints will instruct the router to route a query to a certain type of a server.
```


@ -1,56 +0,0 @@
# How errors are handled in MariaDB MaxScale
This document describes how errors are handled in MariaDB MaxScale, its protocol modules and routers.
Assume a client, MaxScale, and a master/slave replication cluster.
An "error" can be due to failed authentication, routing error (unsupported query type etc.), or backend failure.
## Authentication error
Authentication is a relatively complex phase at the beginning of session creation. Roughly speaking, the client protocol module has loaded user information from the backend so that it can authenticate the client without consulting the backend. When the client sends authentication data to MariaDB MaxScale, the data is compared against the backend's user data in the client protocol module. If authentication fails, the client protocol module refreshes the backend data in case it had become obsolete since the last refresh. If authentication still fails after the refresh, an authentication error occurs.
Close sequence starts from mysql_client.c:gw_read_client_event where
1. session state is set to SESSION_STATE_STOPPING
2. dcb_close is called for client DCB
1. client DCB is removed from epoll set and state is set to DCB_STATE_NOPOLLING
2. client protocol’s close is called (gw_client_close)
* protocol struct is done’d
* router’s closeSession is called (includes calling dcb_close for backends)
3. dcb_call_callback is called for client DCB with DCB_REASON_CLOSE
4. client DCB is set to zombies list
Each call to dcb_close in closeSession repeats steps 2a-d.
## Routing errors
### Invalid capabilities returned by router
When the client protocol module receives a query from the client, the protocol state is (typically) MYSQL_IDLE. The protocol state is checked in mysql_client.c:gw_read_client_event. The first place where a hard error may occur is when the router capabilities are read. If the router response is invalid (anything other than RCAP_TYPE_PACKET_INPUT or RCAP_TYPE_STMT_INPUT), an error is logged, followed by session closing.
### Backend failure
mysql_client.c:gw_read_client_event calls either route_by_statement or directly MXS_SESSION_ROUTE_QUERY, which calls the routeQuery function of the head session's router. routeQuery returns 1 on success and 0 on error. Success here means that the query was routed and a reply will be sent to the client, while an error means that routing failed because of a backend (server/servers/service) failure or because of a side effect of a backend failure.
In case of a backend failure, an error is replied to the client and handleError is called to resolve the backend problem. handleError is called with the action ERRACT_NEW_CONNECTION, which tells the error handler that it should try to find a replacement for the failed backend. The handler will return true if there are enough backend servers for the session's needs. If the handler returns false, the session can't continue processing further queries and will be closed. The client will be sent an error message and dcb_close is called for the client DCB.
Close sequence is similar to that described above from phase #2 onward.
Reasons for "backend failure" in rwsplit:
* router has rses_closed == true because other thread has detected failure and started to close session
* master has disappeared; demoted to slave, for example
### Router error
In cases where MXS_SESSION_ROUTE_QUERY has returned successfully (=1), the query may not be successfully processed in the backend or even sent to it. It is possible that the router fails to route the particular query even though there is no error that would prevent the session from continuing. In this case the router handles the error silently by creating a MySQL error and adding it to the first available backend's (incoming) event queue, where it is found and sent to the client (clientReply).

File diff suppressed because it is too large.


@ -262,8 +262,7 @@ For more information on how to use these scripts, see the output of `cdc.py -h`
To build the avrorouter from source, you will need the [Avro C](https://avro.apache.org/docs/current/api/c/)
library, liblzma, [the Jansson library](http://www.digip.org/jansson/) and sqlite3 development headers. When
configuring MaxScale with CMake, you will need to add `-DBUILD_CDC=Y
-DBUILD_CDC=Y` to build the avrorouter and the CDC protocol module.
configuring MaxScale with CMake, you will need to add `-DBUILD_CDC=Y` to build the CDC module set.
For more details about building MaxScale from source, please refer to the
[Building MaxScale from Source Code](../Getting-Started/Building-MaxScale-from-Source-Code.md) document.


@ -13,7 +13,7 @@ replication setup where replication is high-priority.
## Mandatory Router Parameters
The binlogrouter requires the `server`, `user` and `passwd` parameters. These
The binlogrouter requires the `server`, `user` and `password` parameters. These
should be configured according to the
[Configuration Guide](../Getting-Started/Configuration-Guide.md#service).
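As a minimal sketch of a service section using these mandatory parameters (the section name, server name and credentials are placeholders):

```
[Replication Service]
type=service
router=binlogrouter
server=masterdb
user=maxscale
password=maxpwd
```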
@ -32,18 +32,20 @@ following options should be given as a value to the `router_options` parameter.
### `binlogdir`
This parameter allows the location that MariaDB MaxScale uses to store binlog
files to be set. If this parameter is not set to a directory name then MariaDB
This parameter controls the location where MariaDB MaxScale stores the binary log
files. If this parameter is not set to a directory name then MariaDB
MaxScale will store the binlog files in the directory
/var/cache/maxscale/<Service Name>. In the binlog dir there is also the 'cache'
directory that contains data retrieved from the master during registration phase
and the master.ini file which contains the configuration of current configured
master.
`/var/cache/maxscale/<Service Name>` where `<Service Name>` is the name of the
service in the configuration file. The _binlogdir_ also contains the
_cache_ subdirectory which stores data retrieved from the master during the slave
registration phase. The master.ini file also resides in the _binlogdir_. This
file keeps track of the current master configuration and it is updated when a
`CHANGE MASTER TO` query is executed.
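For example (the directory path is an assumption), the location can be set via the router options:

```
router_options=binlogdir=/var/lib/maxscale/binlog-service
```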
From 2.1 onwards, the 'cache' directory is stored in the same location as other
user credential caches. This means that with the default options, the user
credential cache is stored in
/var/cache/maxscale/<Service Name>/<Listener Name>/cache/.
`/var/cache/maxscale/<Service Name>/<Listener Name>/cache/`.
Read the [MySQL Authenticator](../Authenticators/MySQL-Authenticator.md)
documentation for instructions on how to define a custom location for the user
@ -51,45 +53,45 @@ cache.
### `uuid`
This is used to set the unique uuid that the binlog router uses when it connects
to the master server. If no explicit value is given for the uuid in the
configuration file then a uuid will be generated.
This is used to set the unique UUID that the binlog router uses when it connects
to the master server. If no explicit value is given for the UUID in the
configuration file then a UUID will be generated.
### `server_id`
As with uuid, MariaDB MaxScale must have a unique _server id_ for the connection
it makes to the master. This parameter provides the value of the server id that
As with UUID, MariaDB MaxScale must have a unique _server_id_. This parameter
configures the value of the _server_id_ that
MariaDB MaxScale will use when connecting to the master.
The id can also be specified using `server-id` but that is deprecated
and will be removed in a future release of MariaDB MaxScale.
Older versions of MaxScale allowed the ID to be specified using `server-id`.
This has been deprecated and will be removed in a future release of MariaDB MaxScale.
### `master_id`
The _server id_ value that MariaDB MaxScale should use to report to the slaves
The _server_id_ value that MariaDB MaxScale should use to report to the slaves
that connect to MariaDB MaxScale. This may either be the same as the server id
of the real master or can be chosen to be different if the slaves need to be
aware of the proxy layer. The real master server id will be used if the option
aware of the proxy layer. The real master server ID will be used if the option
is not set.
The id can also be specified using `master-id` but that is deprecated
and will be removed in a future release of MariaDB MaxScale.
Older versions of MaxScale allowed the ID to be specified using `master-id`.
This has been deprecated and will be removed in a future release of MariaDB MaxScale.
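A sketch combining the ID-related router options (the numeric values are placeholders):

```
router_options=server_id=5001,master_id=1
```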
### `master_uuid`
It is a requirement of replication that each slave have a unique UUID value. The
MariaDB MaxScale router will identify itself to the slaves using the uuid of the
It is a requirement of replication that each slave has a unique UUID value. The
MariaDB MaxScale router will identify itself to the slaves using the UUID of the
real master if this option is not set.
### `master_version`
The MariaDB MaxScale router will identify itself to the slaves using the server
version of the real master if this option is not set.
By default, the router will identify itself to the slaves using the server
version of the real master. This option allows the router to use a custom version string.
### `master_hostname`
The MariaDB MaxScale router will identify itself to the slaves using the server
hostname of the real master if this option is not set.
By default, the router will identify itself to the slaves using the
hostname of the real master. This option allows the router to use a custom hostname.
### `user`
@ -113,13 +115,13 @@ the router options or using the username and password defined of the service
must be granted replication privileges on the database server.
```
MariaDB> CREATE USER 'repl'@'maxscalehost' IDENTIFIED by 'password';
MariaDB> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'maxscalehost';
CREATE USER 'repl'@'maxscalehost' IDENTIFIED by 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'maxscalehost';
```
### `password`
The password of the above user. If the password is not explicitly given then the
The password for the user. If the password is not explicitly given then the
password in the service entry will be used. For compatibility with other
username and password definitions within the MariaDB MaxScale configuration file
it is also possible to use the parameter passwd=.
@ -167,9 +169,9 @@ incomplete transactions detection.
### `send_slave_heartbeat`
This defines whether (on | off) MariaDB MaxScale sends to the slave the
heartbeat packet when there are no real binlog events to send. The default value
if 'off', no heartbeat event is sent to slave server. If value is 'on' the
This defines whether MariaDB MaxScale sends the heartbeat packet to the slave
when there are no real binlog events to send. The default value
is 'off' and no heartbeat events are sent to the slave servers. If the value is
'on', the interval value (requested by the slave during registration) is reported
in the diagnostic output and the packet is sent after the time interval without
any events to send.
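For example, heartbeat events can be enabled with (a sketch; other router options omitted):

```
router_options=send_slave_heartbeat=on
```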
@ -205,6 +207,7 @@ master.ini or later via CHANGE MASTER TO. This parameter cannot be modified at
runtime, default is 9.
### `encrypt_binlog`
Whether to encrypt binlog files: the default is Off.
When set to On the binlog files will be encrypted using specified AES algorithm
@ -226,11 +229,11 @@ the binlog events positions in binlog file are the same as in the master binlog
file and there is no position mismatch.
### `encryption_algorithm`
'aes_ctr' or 'aes_cbc'
The default is 'aes_cbc'
The encryption algorithm, either 'aes_ctr' or 'aes_cbc'. The default is 'aes_cbc'.
### `encryption_key_file`
The specified key file must contain lines with the following format:
`id;HEX(KEY)`
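A sketch of one key file entry (the key is a made-up placeholder, here a 256-bit key written as 64 hex characters):

```
1;d1b6f1c9e2a34455667788990011223344556677889900aabbccddeeff001122
```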
@ -277,10 +280,8 @@ values may be used for all other options.
## Examples
The [Replication
Proxy](../Tutorials/Replication-Proxy-Binlog-Router-Tutorial.md) tutorial will
The [Replication Proxy](../Tutorials/Replication-Proxy-Binlog-Router-Tutorial.md) tutorial will
show you how to configure and administrate a binlogrouter installation.
The tutorial also covers setting up SSL communication to the master server and
SSL client connections to the MaxScale Binlog Server.


@ -18,38 +18,4 @@ protocol=telnetd
port=4442
```
Connections using the telnet protocol to port 4442 of the MariaDB MaxScale host will result in a new debug CLI session. A default username and password are used for this module, new users may be created using the add user command. As soon as any users are explicitly created the default username will no longer continue to work. The default username is admin with a password of mariadb.
The debugcli supports two modes of operation, `developer` and `user`. The mode is set via the `router_options` parameter. The user mode is more suited to end-users and administrators, whilst the developer mode is targeted at software developers adding to or maintaining the MariaDB MaxScale code base. Details of the differences between the modes can be found in the debugging guide for MariaDB MaxScale. The default is `user` mode. The following service definition would enable a developer version of the debugcli.
```
[Debug Service]
type=service
router=debugcli
router_options=developer
```
It should be noted that both `user` and `developer` instances of debugcli may be defined within the same instance of MariaDB MaxScale, however they must be defined as two distinct services, each with a distinct listener.
```
[Debug Service]
type=service
router=debugcli
router_options=developer
[Debug Listener]
type=listener
service=Debug Service
protocol=telnetd
port=4442
[Admin Service]
type=service
router=debugcli
[Admin Listener]
type=listener
service=Admin Service
protocol=telnetd
port=4242
```
Connections using the telnet protocol to port 4442 of the MariaDB MaxScale host will result in a new debug CLI session. A default username and password are used for this module, new users may be created using the administrative interface. As soon as any users are explicitly created the default username will no longer continue to work. The default username is `admin` with a password of `mariadb`.


@ -8,11 +8,9 @@ The readconnroute router provides simple and lightweight load balancing across a
## Configuration
Readconnroute router-specific settings are specified in the configuration file of MariaDB MaxScale in its own section. The section can be freely named but the name is used later as a reference from the listener section.
For more details about the standard service parameters, refer to the [Configuration Guide](../Getting-Started/Configuration-Guide.md).
## Router Options
### Router Options
**`router_options`** can contain a list of valid server roles. These roles are used as the valid types of servers the router will form connections to when new sessions are created.
```


@ -1,4 +1,4 @@
#SchemaRouter Router
# SchemaRouter Router
The SchemaRouter router provides an easy and manageable sharding solution by building a single logical database server from multiple separate ones. Each database is shown to the client and queries targeting unique databases are routed to their respective servers. In addition to providing simple database-based sharding, the schemarouter router also enables cross-node session variable usage by routing all queries that modify the session to all nodes.
@ -29,7 +29,7 @@ The list of databases is built by sending a SHOW DATABASES query to all the serv
If you are connecting directly to a database or have different users on some of the servers, you need to get the authentication data from all the servers. You can control this with the `auth_all_servers` parameter. With this parameter, MariaDB MaxScale forms a union of all the users and their grants from all the servers. By default, the schemarouter will fetch the authentication data from all servers.
For example, if two servers have the database 'shard' and the following rights are granted only on one server, all queries targeting the database 'shard' would be routed to the server where the grants were given.
For example, if two servers have the database `shard` and the following rights are granted only on one server, all queries targeting the database `shard` would be routed to the server where the grants were given.
```
# Execute this on both servers
@ -68,6 +68,8 @@ refresh_interval=60
## Router Options
**Note:** Router options for the Schemarouter were deprecated in MaxScale 2.1.
The following options are options for the `router_options` parameter of the
service. Multiple router options are given as a comma separated list of key
value pairs.
@ -87,7 +89,7 @@ will not be consistent anymore.
### `refresh_databases`
Enable database map refreshing mid-session. These are triggered by a failure to
change the database i.e. `USE ...``queries.
change the database i.e. `USE ...` queries.
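A sketch enabling mid-session refreshes (the values are illustrative, and note that router options for the schemarouter are deprecated in 2.1 as stated above):

```
router_options=refresh_databases=true,refresh_interval=60
```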
### `refresh_interval`


@ -12,15 +12,15 @@ Please note the solution is a quick setup example that may not be suited for all
## Clustering Software installation
On each node in the cluster do the following steps:
On each node in the cluster do the following steps.
(1) Add clustering repos to yum
### Add clustering repos to yum
```
# vi /etc/yum.repos.d/ha-clustering.repo
```
Add the following to the file
Add the following to the file.
```
[haclustering]
@ -30,7 +30,7 @@ enabled=1
gpgcheck=0
```
(2) Install the software
### Install the software
```
# yum install pacemaker corosync crmsh
@ -44,7 +44,7 @@ Package corosync-1.4.5-2.4.x86_64
Package crmsh-2.0+git46-1.1.x86_64
```
(3) Assign hostname on each node
### Assign hostname on each node
In this example the three names used for the nodes are: node1, node2 and node3
@ -56,7 +56,7 @@ In this example the three names used for the nodes are: node1,node,node3
[root@server3 ~]# hostname node3
```
(4) For each node add server names in /etc/hosts
For each node, add all the server names into `/etc/hosts`.
```
[root@node3 ~]# vi /etc/hosts
@ -70,9 +70,9 @@ In this example the three names used for the nodes are: node1,node,node3
10.35.15.26 node3
```
**Please note**: add **current-node** as an alias for the current node in each of the /etc/hosts files.
**Note**: add _current-node_ as an alias for the current node in each of the /etc/hosts files.
(5) Prepare authkey for optional cryptographic use
### Prepare authkey for optional cryptographic use
On one of the nodes, say node2, run the corosync-keygen utility and follow
@ -84,7 +84,7 @@ Corosync Cluster Engine Authentication key generator. Gathering 1024 bits
After completion the key will be found in /etc/corosync/authkey.
```
(6) Prepare the corosync configuration file
### Prepare the corosync configuration file
Using node2 as an example:
@ -141,7 +141,7 @@ name: pacemaker
}
```
**Please note** in this example:
**Note**: in this example:
- unicast UDP is used
@ -149,7 +149,7 @@ name: pacemaker
- Pacemaker processes are started by the corosync daemon, so there is no need to launch it via /etc/init.d/pacemaker start
(7) copy configuration files and auth key on each of the other nodes
### Copy configuration files and auth key on each of the other nodes
```
[root@node2 ~]# scp /etc/corosync/* root@node1:/etc/corosync/
@ -157,11 +157,7 @@ name: pacemaker
[root@node2 ~]# scp /etc/corosync/* root@nodeN:/etc/corosync/
```
(8) Corosync needs port *5*405 to be opened:
- configure any firewall or iptables accordingly
For a quick start just disable iptables on each nodes:
Corosync needs port 5405 to be opened. Configure any firewall or iptables accordingly. For a quick start just disable iptables on each node:
```
[root@node2 ~]# service iptables stop
@ -169,7 +165,7 @@ For a quick start just disable iptables on each nodes:
[root@nodeN ~]# service iptables stop
```
(9) Start Corosyn on each node:
### Start Corosync on each node
```
[root@node2 ~] #/etc/init.d/corosync start
@ -177,14 +173,14 @@ For a quick start just disable iptables on each nodes:
[root@nodeN ~] #/etc/init.d/corosync start
```
and check the corosync daemon is successfully bound to port 5405:
Check that the corosync daemon is successfully bound to port 5405.
```
[root@node2 ~] #netstat -na | grep 5405
udp 0 0 10.228.103.72:5405 0.0.0.0:*
```
Check if other nodes are reachable with nc utility and option UDP (-u):
Check if other nodes are reachable with nc utility and option UDP (-u).
```
[root@node2 ~] #echo "check ..." | nc -u node1 5405
@ -194,19 +190,21 @@ Check if other nodes are reachable with nc utility and option UDP (-u):
[root@node1 ~] #echo "check ..." | nc -u node3 5405
```
If the following message is displayed
If the following message is displayed, there is an issue with communication between the nodes.
**nc: Write error: Connection refused**
```
nc: Write error: Connection refused
```
There is an issue with communication between the nodes, this is most likely to be an issue with the firewall configuration on your nodes. Check and resolve issues with your firewall configuration.
This is most likely to be an issue with the firewall configuration on your nodes. Check and resolve any issues with your firewall configuration.
(10) Check the cluster status, from any node
### Check the cluster status from any node
```
[root@node3 ~]# crm status
```
After a while this will be the output:
The command should produce the following.
```
[root@node3 ~]# crm status
@ -239,9 +237,9 @@ For additional information see:
[http://clusterlabs.org/doc/](http://clusterlabs.org/doc/)
The configuration is automatically updated on every node:
The configuration is automatically updated on every node.
Check it from another node, say node1
Check it from another node, say node1:
```
[root@node1 ~]# crm configure show
@ -260,9 +258,9 @@ property cib-bootstrap-options: \
The Corosync / Pacemaker cluster is ready to be configured to manage resources.
## MariaDB MaxScale init script /etc/init.d/maxscale
## MariaDB MaxScale init script
The MariaDB MaxScale /etc/init.d./maxscale script allows to start/stop/restart and monitor MariaDB MaxScale process running in the system.
The MariaDB MaxScale init script in `/etc/init.d/maxscale` allows you to start, stop, restart and monitor the MariaDB MaxScale process running on the system.
```
[root@node1 ~]# /etc/init.d/maxscale
@ -315,14 +313,11 @@ Checking MaxScale status: MaxScale (pid 25953) is running.[ OK ]
The script exit code for "status" is 0
Note: the MariaDB MaxScale script is LSB compatible and returns the proper exit code for each action:
For additional information;
Read the following for additional information about LSB init scripts:
[http://www.linux-ha.org/wiki/LSB_Resource_Agents](http://www.linux-ha.org/wiki/LSB_Resource_Agents)
After checking MariaDB MaxScale is well managed by the /etc/init.d/script is possible to configure the MariaDB MaxScale HA via Pacemaker.
After checking that the init scripts for MariaDB MaxScale work, it is possible to configure MariaDB MaxScale for HA via Pacemaker.
# Configure MariaDB MaxScale for HA with Pacemaker
@ -333,7 +328,7 @@ op start interval="0" timeout="15s" \
op stop interval="0" timeout="30s"
```
MaxScale resource will be started:
MaxScale resource will be started.
```
[root@node2 ~]# crm status
@ -351,11 +346,11 @@ Online: [ node1 node2 node3 ]
MaxScale (lsb:maxscale): Started node1
```
##Basic use cases:
## Basic use cases
### 1. Resource restarted after a failure:
### Resource restarted after a failure
In the example MariaDB MaxScale PID is 26114, kill the process immediately:
In the example the MariaDB MaxScale PID is 26114; kill the process immediately.
```
[root@node2 ~]# kill -9 26114
@ -377,9 +372,9 @@ Failed actions:
MaxScale_monitor_15000 on node1 'not running' (7): call=19, status=complete, last-rc-change='Mon Jun 30 13:16:14 2014', queued=0ms, exec=0ms
```
**Note** the **MaxScale_monitor** failed action
**Note**: the _MaxScale_monitor_ failed action
After a few seconds it will be started again:
After a few seconds it will be started again.
```
[root@node2 ~]# crm status
@ -397,9 +392,9 @@ Online: [ node1 node2 node3 ]
MaxScale (lsb:maxscale): Started node1
```
### 2. The resource cannot be migrated to node1 for a failure:
### The resource cannot be migrated to node1 for a failure
First, migrate the the resource to another node, say node3
First, migrate the resource to another node, say node3.
```
[root@node1 ~]# crm resource migrate MaxScale node3
@ -412,7 +407,7 @@ Failed actions:
MaxScale_start_0 on node1 'not running' (7): call=76, status=complete, last-rc-change='Mon Jun 30 13:31:17 2014', queued=2015ms, exec=0ms
```
Note the **MaxScale_start** failed action on node1, and after a few seconds
**Note**: the _MaxScale_start_ failed action on node1. After a few seconds the status is the following.
```
[root@node3 ~]# crm status
@ -434,7 +429,7 @@ Failed actions:
MaxScale_start_0 on node1 'not running' (7): call=76, status=complete, last-rc-change='Mon Jun 30 13:31:17 2014', queued=2015ms, exec=0ms
```
Successfully, MaxScale has been started on a new node: node2.
MaxScale has been successfully started on a new node (node2).
**Note**: Failed actions remain in the output of crm status.
@ -447,7 +442,7 @@ Cleaning up MaxScale on node2
Cleaning up MaxScale on node3
```
The cleaned status is visible from other nodes as well:
The cleaned status is visible from other nodes as well.
```
[root@node2 ~]# crm status
@ -467,25 +462,21 @@ Online: [ node1 node2 node3 ]
## Add a Virtual IP (VIP) to the cluster
It’s possible to add a virtual IP to the cluster:
It's possible to add a virtual IP to the cluster. The MariaDB MaxScale process will then only be contacted via this IP, and the virtual IP can move across nodes in case one of them fails.
MariaDB MaxScale process will be only contacted with this IP, that mat move across nodes with maxscale process as well.
Setup is very easy:
assuming an addition IP address is available and can be added to one of the nodes, this i the new configuration to add:
The setup is very easy. Assuming an additional IP address is available and can be added to one of the nodes, this is the new configuration to add.
```
[root@node2 ~]# crm configure primitive maxscale_vip ocf:heartbeat:IPaddr2 params ip=192.168.122.125 op monitor interval=10s
```
MariaDB MaxScale process and the VIP must be run in the same node, so it’s mandatory to add to the configuration the group ‘maxscale_service’.
The MariaDB MaxScale process and the VIP must run on the same node, so it is mandatory to add the group 'maxscale_service' to the configuration.
```
[root@node2 ~]# crm configure group maxscale_service maxscale_vip MaxScale
```
The final configuration is, from another node:
The following is the final configuration.
```
[root@node3 ~]# crm configure show
@ -511,7 +502,7 @@ property cib-bootstrap-options: \
last-lrm-refresh=1404125486
```
Check the resource status:
Check the resource status.
```
[root@node1 ~]# crm status
@ -533,5 +524,5 @@ Online: [ node1 node2 node3 ]
MaxScale (lsb:maxscale): Started node2
```
With both resources on node2, now MariaDB MaxScale service will be reachable via the configured VIP address 192.168.122.125
With both resources on node2, now MariaDB MaxScale service will be reachable via the configured VIP address 192.168.122.125.


@ -12,34 +12,34 @@ Once you have MariaDB MaxScale installed and the database users created, we can
## Creating Your MariaDB MaxScale Configuration
MariaDB MaxScale reads its configuration from `/etc/maxscale.cnf`. This is not created as part of the installation process and must be manually created. A template file does exist in the `/usr/share/maxscale` folder that can be use as a basis for your configuration.
MariaDB MaxScale reads its configuration from `/etc/maxscale.cnf`. A template configuration is provided with the MaxScale installation.
A global, maxscale, section is included within every MariaDB MaxScale configuration file; this is used to set the values of various MariaDB MaxScale wide parameters, perhaps the most important of these is the number of threads that MariaDB MaxScale will use to execute the code that forwards requests and handles responses for clients.
A global, `[maxscale]`, section is included within every MariaDB MaxScale configuration file; this is used to set the values of various MariaDB MaxScale wide parameters, perhaps the most important of these is the number of threads that MariaDB MaxScale will use to handle client requests.
```
[maxscale]
threads=4
```
Since we are using MySQL Replication and connection routing we want two different ports to which the client application can connect; one that will be directed to the current master within the replication cluster and another that will load balance between the slaves. To achieve this within MariaDB MaxScale we need to define two services in the ini file; one for the read/write operations that should be executed on the master server and another for connections to one of the slaves. Create a section for each in your MariaDB MaxScale configuration file and set the type to service, the section names are the names of the services themselves and should be meaningful to the administrator. Names may contain whitespace.
Since we are using MySQL Replication and connection routing we want two different ports to which the client application can connect; one that will be directed to the current master within the replication cluster and another that will load balance between the slaves. To achieve this within MariaDB MaxScale we need to define two services in the ini file; one for the read/write operations that should be executed on the master server and another for connections to one of the slaves. Create a section for each in your MariaDB MaxScale configuration file and set the type to service, the section names are the names of the services themselves and should be meaningful to the administrator. Avoid using whitespace in the section names.
```
[Write Service]
[Write-Service]
type=service
[Read Service]
[Read-Service]
type=service
```
The router for these two sections is identical, the readconnroute module, also the services should be provided with the list of servers that will be part of the cluster. The server names given here are actually the names of server sections in the configuration file and not the physical hostnames or addresses of the servers.
```
[Write Service]
[Write-Service]
type=service
router=readconnroute
servers=dbserv1, dbserv2, dbserv3
[Read Service]
[Read-Service]
type=service
router=readconnroute
servers=dbserv1, dbserv2, dbserv3
```
@ -48,13 +48,13 @@ servers=dbserv1, dbserv2, dbserv3
To instruct the router which servers it should route to, we must add router options to the service. The router options are compared against the status that the monitor collects from the servers and are used to restrict the eligible set of servers to which that service may route. In our case we use the two options master and slave for our two services.
```
[Write Service]
[Write-Service]
type=service
router=readconnroute
router_options=master
servers=dbserv1, dbserv2, dbserv3
[Read Service]
[Read-Service]
type=service
router=readconnroute
router_options=slave
```
@ -77,7 +77,7 @@ maxpasswd plainpassword
The username and password, either encrypted or plain text, are stored in the service section using the user and passwd parameters.
```
[Write Service]
[Write-Service]
type=service
router=readconnroute
router_options=master
@ -85,7 +85,7 @@ servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
[Read Service]
[Read-Service]
type=service
router=readconnroute
router_options=slave
```
@ -97,28 +97,28 @@ passwd=96F99AA1315BDC3604B006F427DD9484
This completes the definitions required by the services; however, listening ports must be associated with the services in order to allow network connections. This is done by creating a series of listener sections. These sections are again named for the convenience of the administrator and should be of type listener, with an entry labeled service that contains the name of the service to associate the listener with. Each service may have multiple listeners.
```
[Write Listener]
[Write-Listener]
type=listener
service=Write Service
service=Write-Service
[Read Listener]
[Read-Listener]
type=listener
service=Read Service
service=Read-Service
```
A listener must also define the protocol module it will use for the incoming network protocol; currently this should be the MySQLClient protocol for all database listeners. The listener may then supply a network port to listen on and/or a socket within the file system.
```
[Write Listener]
[Write-Listener]
type=listener
service=Write Service
service=Write-Service
protocol=MySQLClient
port=4306
socket=/tmp/ClusterMaster
[Read Listener]
[Read-Listener]
type=listener
service=Read Service
service=Read-Service
protocol=MySQLClient
port=4307
```
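With the listeners in place, the ports can be exercised with any MySQL client. This is a sketch only; the hostname `maxscale-host` and the user `myuser` are placeholders for your own values.

```shell
# Connect to the read/write listener; queries go to the current master:
mysql -h maxscale-host -P 4306 -u myuser -p

# Connect to the read listener; connections are balanced across the slaves:
mysql -h maxscale-host -P 4307 -u myuser -p

# The write listener also accepts local connections on its Unix socket:
mysql -S /tmp/ClusterMaster -u myuser -p
```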
@ -150,7 +150,7 @@ protocol=MySQLBackend
In order for MariaDB MaxScale to monitor the servers using the correct monitoring mechanisms, a section should be provided that defines the monitor to use and the servers to monitor. Once again a section is created with a symbolic name for the monitor, with the type set to monitor. Parameters are added for the module to use, the list of servers to monitor, and the username and password to use when connecting to the servers with the monitor.
```
[Replication Monitor]
[Replication-Monitor]
type=monitor
module=mysqlmon
servers=dbserv1, dbserv2, dbserv3
```
@ -168,7 +168,7 @@ The final stage in the configuration is to add the option service which is used
```
type=service
router=cli
[CLI Listener]
[CLI-Listener]
type=listener
service=CLI
protocol=maxscaled
```
@ -195,59 +195,38 @@ Check the error log in /var/log/maxscale/ to see if any errors are detected in t
```
% maxadmin list services
Services.
--------------------------+----------------------+--------+---------------
Service Name | Router Module | #Users | Total Sessions
--------------------------+----------------------+--------+---------------
Read Service | readconnroute | 1 | 1
Write Service | readconnroute | 1 | 1
CLI | cli | 2 | 2
--------------------------+----------------------+--------+---------------
% maxadmin list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server | Address | Port | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
dbserv1 | 192.168.2.1 | 3306 | 0 | Running, Slave
dbserv2 | 192.168.2.2 | 3306 | 0 | Running, Master
dbserv3 | 192.168.2.3 | 3306 | 0 | Running, Slave
-------------------+-----------------+-------+-------------+--------------------
% maxadmin list listeners
Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name | Protocol Module | Address | Port | State
---------------------+--------------------+-----------------+-------+--------
Read Service | MySQLClient | * | 4307 | Running
Write Service | MySQLClient | * | 4306 | Running
CLI | maxscaled | localhost | 6603 | Running
---------------------+--------------------+-----------------+-------+--------
%
```
MariaDB MaxScale is now ready to start accepting client connections and routing them to the master or slaves within your cluster. Other configuration options are available that can alter the criteria used for routing; these include monitoring the replication lag within the cluster and routing only to slaves that are within a predetermined delay of the current master, or using weights to obtain unequal load balancing. These options may be found in the MariaDB MaxScale Configuration Guide. More detail on the use of maxadmin can be found in the document [MaxAdmin - The MariaDB MaxScale Administration & Monitoring Client Application](Administration-Tutorial.md).
MariaDB MaxScale is now ready to start accepting client connections and routing them to the master or slaves within your cluster. Other configuration options are available that can alter the criteria used for routing; these include monitoring the replication lag within the cluster and routing only to slaves that are within a predetermined delay of the current master, or using weights to obtain unequal load balancing. These options may be found in the MariaDB MaxScale Configuration Guide.
More detail on the use of maxadmin can be found in the document [MaxAdmin - The MariaDB MaxScale Administration & Monitoring Client Application](Administration-Tutorial.md).
@ -1,27 +1,5 @@
# MariaDB MaxScale Notification Service and Feedback Support
Massimiliano Pinto
Last Updated: 10th March 2015
## Contents
## Document History
<table>
<tr>
<td>Date</td>
<td>Change</td>
<td>Who</td>
</tr>
<tr>
<td>10th March 2015</td>
<td>Initial version</td>
<td>Massimiliano Pinto</td>
</tr>
</table>
## Overview
The purpose of the Notification Service in MariaDB MaxScale is for a customer registered for the service to receive update notices, security bulletins, fixes and workarounds that are tailored to the database server configuration.
@ -30,9 +8,9 @@ The purpose of Notification Service in MariaDB MaxScale is for a customer regist
MariaDB MaxScale may collect the installed plugins and send the information nightly, between 2:00 AM and 4:59 AM.
It tries to send data and if there is any failure (timeout, server is down, etc), the next retry is in 1800 seconds (30 minutes)
It tries to send data and if there is any failure (timeout, server is down, etc), the next retry is in 1800 seconds (30 minutes).
This feature is not enabled by default: MariaDB MaxScale must be configured in the [feedback] section:
This feature is not enabled by default: MariaDB MaxScale must be configured in the `[feedback]` section:
```
[feedback]
@ -41,15 +19,16 @@ feedback_url=https://enterprise.mariadb.com/feedback/post
feedback_user_info=x-y-z-w
```
The activation code will be provided by MariaDB corp upon request by the customer and should be put in feedback_user_info.
The activation code will be provided by MariaDB Corporation Ab upon request by the customer and should be put in feedback_user_info.
Example:
```
feedback_user_info=0467009f-b04d-45b1-a77b-b6b2ec9c6cf4
```
MariaDB MaxScale generates the feedback report containing the following information:
-The activation code used to enable feedback
- The activation code used to enable feedback
- MariaDB MaxScale Version
- An identifier of the MariaDB MaxScale installation, i.e. the HEX encoding of SHA1 digest of the first network interface MAC address
- Operating System (e.g. Linux)
@ -57,12 +36,12 @@ MariaDB MaxScale generates the feedback report containing following information:
- All the modules in use in MariaDB MaxScale and their API and version
- MariaDB MaxScale server UNIX_TIME at generation time
MariaDB MaxScale shall send the generated feedback report to a feedback server specified in feedback_url
MariaDB MaxScale shall send the generated feedback report to a feedback server specified in _feedback_url_.
## Manual Operation
If it’s not possible to send data due to firewall or security settings, the report can be generated manually (feedback_user_info is required) via MaxAdmin
If it’s not possible to send data due to firewall or security settings, the report can be generated manually (feedback_user_info is required) via MaxAdmin.
```
MaxScale>show feedbackreport
```
@ -1,4 +1,4 @@
#Simple Sharding with Two Servers
# Simple Sharding with Two Servers
![Schema Based Sharding](images/Simple-Sharding.png)
@ -10,25 +10,13 @@ MariaDB MaxScale will appear to the client as a database server with the combina
This document is designed as a simple tutorial on schema-based sharding using MariaDB MaxScale in an environment in which you have two servers. The object of this tutorial is to have a system that, from the client side, acts like a single MySQL database but is actually sharded between the two servers.
The process of setting up and configuring MariaDB MaxScale will be covered within this document. The installation and configuration of the MySQL servers will not be covered in depth. The users should be configured according to the configuration guide.
The database users should be configured according to [the configuration guide](../Getting-Started/Configuration-Guide.md). The [MaxScale Tutorial](MaxScale-Tutorial.md) contains easy to follow instructions on how to set up MaxScale.
This tutorial will assume the user is using one of the binary distributions available and has installed this in the default location. The process of configuring MariaDB MaxScale will be covered within this document. The installation and configuration of the MySQL servers will not be covered in depth.
This tutorial will assume the user is running from one of the binary distributions available and has installed this in the default location. Building from source code in GitHub is covered in guides elsewhere as is installing to non-default locations.
## Preparing MaxScale
## Process
The steps involved in creating a system from the binary distribution of MariaDB MaxScale are:
* Install the package relevant to your distribution
* Create the required users on your MariaDB or MySQL server
* Create a MariaDB MaxScale configuration file
### Installation
The precise installation process will vary from one distribution to another; details of what to do with the RPM and DEB packages can be found on the download site when you select the distribution you are downloading from. The process involves setting up your package manager to include the MariaDB repositories and then running the package manager for your distribution, RPM or apt-get.
Upon successful completion of the installation command you will have MariaDB MaxScale installed and ready to be run but without a configuration. You must create a configuration file before you first run MariaDB MaxScale.
Follow the [MaxScale Tutorial](MaxScale-Tutorial.md) to install and prepare the required database users for MaxScale. You don't need to create the configuration file for MaxScale as it will be covered in the next section.
### Creating Your MariaDB MaxScale Configuration
@ -92,7 +80,8 @@ After this we have a fully working configuration and we can move on to starting
Upon completion of the configuration process MariaDB MaxScale is ready to be started. This may be done either manually by running the maxscale command or via the service interface. The service scripts are located in the `/etc/init.d/` folder and are accessible through both the `service` and `systemctl` commands.
After starting MariaDB MaxScale, check the error log in /var/log/maxscale to see if any errors are detected in the configuration file. The maxadmin command may also be used to confirm that MariaDB MaxScale is running and that the services, listeners etc. have been correctly configured.
MariaDB MaxScale is now ready to start accepting client connections and routing them. Queries are routed to the right servers based on the database they target and switching between the shards is seamless since MariaDB MaxScale keeps the session state intact between servers.
If MariaDB MaxScale fails to start, check the error log in `/var/log/maxscale` to see what sort of errors were detected.
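The commands below sketch this start-up sequence; the service name `maxscale` matches the init scripts shipped in the packages, but verify the name on your distribution.

```shell
# Start MariaDB MaxScale through the service interface:
sudo systemctl start maxscale
# or, with SysV init scripts:
sudo service maxscale start

# Confirm it is running and that the configured objects are present:
maxadmin list services
maxadmin list listeners
```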
**Note:** As the sharding solution in MaxScale is relatively simple, cross-database queries between two or more shards are not supported.
@ -14,6 +14,18 @@ For a complete list of changes in MaxScale 2.1.0, refer to the
Before starting the upgrade, we **strongly** recommend you back up your current
configuration file.
## IPv6 Support
MaxScale 2.1.2 added support for IPv6 addresses. The default interface that listeners bind to
was changed from the IPv4 address `0.0.0.0` to the IPv6 address `::`. To bind to the old IPv4 address,
add `address=0.0.0.0` to the listener definition.
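For example, a hypothetical listener pinned to the old IPv4 behaviour might look like this (the section and service names are placeholders):

```
[Write-Listener]
type=listener
service=Write-Service
protocol=MySQLClient
address=0.0.0.0
port=4306
```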
## Persisted Configuration Files
Starting with MaxScale 2.1, any changes made with the newly added
[runtime configuration change](../Reference/MaxAdmin.md#runtime-configuration-changes) feature
will be persisted in configuration files. These files are located in `/var/lib/maxscale/maxscale.cnf.d/`.
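As a quick sketch, the generated files can be inspected directly, assuming the default directory layout:

```shell
# List the configuration files persisted by runtime changes:
ls /var/lib/maxscale/maxscale.cnf.d/
```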
## MaxScale Log Files
The name of the log file was changed from _maxscaleN.log_ to _maxscale.log_. The