Document removal of ndbclustermon and mmmon

This commit is contained in:
Markus Mäkelä 2019-04-24 14:06:27 +03:00
parent 1e84742cbb
commit 810dc06d5c
No known key found for this signature in database
GPG Key ID: 72D48FCE664F7B19
8 changed files with 12 additions and 412 deletions

View File

@ -110,11 +110,6 @@ Module specific documentation.
- [Aurora Monitor](Monitors/Aurora-Monitor.md)
- [Clustrix Monitor](Monitors/Clustrix-Monitor.md)
Legacy monitors that have been deprecated.
- [Multi-Master Monitor](Monitors/MM-Monitor.md)
- [MySQL Cluster Monitor](Monitors/NDB-Cluster-Monitor.md)
## Protocols
Documentation for MaxScale protocol modules.

View File

@ -1933,14 +1933,6 @@ Simple sharding on database level:
Binary log server:
* [Binlogrouter](../Routers/Binlogrouter.md)
## Diagnostic modules
These modules are used for diagnostic purposes and report the status of
MariaDB MaxScale and the cluster it is monitoring.
* [MaxAdmin Module](../Routers/CLI.md)
* [Telnet Module](../Routers/Debug-CLI.md)
## Monitor Modules
Monitor modules are used by MariaDB MaxScale to internally monitor the state of
@ -1956,8 +1948,6 @@ which sets the status of each server via MaxAdmin is needed.
* [MariaDB Monitor](../Monitors/MariaDB-Monitor.md)
* [Galera Monitor](../Monitors/Galera-Monitor.md)
* [NDBCluster Monitor](../Monitors/NDB-Cluster-Monitor.md)
* [Multi-Master Monitor](../Monitors/MM-Monitor.md)
## Filter Modules
@ -2026,9 +2016,6 @@ password=61DD955512C39A4A8BC4BB1E5F116705
Read the following documents for different methods of altering the MaxScale
configuration at runtime.
* MaxAdmin
* [Runtime Configuration Changes](../Reference/MaxAdmin.md#runtime-configuration-changes)
* MaxCtrl
* [`create`](../Reference/MaxCtrl.md#create)
* [`destroy`](../Reference/MaxCtrl.md#destroy)
@ -2038,6 +2025,9 @@ configuration at runtime.
* [REST API](../REST-API/API.md) documentation
* MaxAdmin
* [Runtime Configuration Changes](../Reference/MaxAdmin.md#runtime-configuration-changes)
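As an illustrative sketch (the server name, service name, and address are placeholders, not part of the documented examples), a runtime change made with MaxCtrl could look like this:

```
[root@server1 ~]# maxctrl create server db-server-1 192.168.0.10 3306
[root@server1 ~]# maxctrl link service My-Service db-server-1
```

After such a command, the new server object is persisted under `/var/lib/maxscale/maxscale.cnf.d/` as described below.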
All changes to the configuration are persisted as individual configuration files
in `/var/lib/maxscale/maxscale.cnf.d/`. These files are applied after the main
configuration file and all auxiliary configurations have been loaded. This means

View File

@ -247,14 +247,10 @@ slave_down |A Slave server has gone down
slave_up |A Slave server has come up
server_down |A server with no assigned role has gone down
server_up |A server with no assigned role has come up
ndb_down |A MySQL Cluster node has gone down
ndb_up |A MySQL Cluster node has come up
lost_master |A server lost Master status
lost_slave |A server lost Slave status
lost_ndb |A MySQL Cluster node lost node membership
new_master |A new Master was detected
new_slave |A new Slave was detected
new_ndb |A new MySQL Cluster node was found
### `journal_max_age`

View File

@ -1,24 +0,0 @@
# NDB Cluster Monitor
**NOTE:** This module has been deprecated; do not use it.
## Overview
The MySQL Cluster Monitor is a MaxScale monitor module for MySQL Cluster. It assigns an NDB status to a server that is part of a MySQL Cluster.
## Configuration
A minimal configuration for a monitor requires a set of servers for monitoring and a username and a password to connect to these servers. The user requires the REPLICATION CLIENT privilege to successfully monitor the state of the servers.
```
[MySQL-Cluster-Monitor]
type=monitor
module=ndbclustermon
servers=server1,server2,server3
user=myuser
password=mypwd
```
### Common Monitor Parameters
For a list of optional parameters that all monitors support, read the [Monitor Common](Monitor-Common.md) document.

View File

@ -567,7 +567,6 @@ the request. The value of `state` must be one of the following values.
|maintenance| Server is put into maintenance |
|running | Server is up and running |
|synced | Server is a Galera node |
|ndb | Server is a NDBCluster node |
|stale | Server is a stale Master |
For example, to set the server _db-server-1_ into maintenance mode, a request to

View File

@ -52,6 +52,15 @@ administrative users, recreate the user.
The `debugcli` router and the `telnetd` protocol module it uses have been
removed.
### `ndbclustermon`
The `ndbclustermon` module has been removed.
### `mmmon`
The `mmmon` module has been removed, as the `mariadbmon` monitor provides
largely the same functionality.
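As a sketch of a replacement configuration (server names and credentials are placeholders), a cluster previously monitored with `mmmon` could instead be monitored with `mariadbmon`:

```
[Replication-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2
user=myuser
password=mypwd
```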
## New Features
### Servers can be drained

View File

@ -33,7 +33,6 @@ Role|Description
master|A server assigned as a master by one of MariaDB MaxScale monitors. Depending on the monitor implementation, this could be a master server of a Master-Slave replication cluster or a Write-Master of a Galera cluster.
slave|A server assigned as a slave of a master. If all slaves are down, but the master is still available, then the router will use the master.
synced| A Galera cluster node which is in a synced state with the cluster.
ndb|A MySQL Cluster (NDB) node
running|A server that is up and running. All servers that MariaDB MaxScale can connect to are labeled as running.
If no `router_options` parameter is configured in the service definition,

View File

@ -1,364 +0,0 @@
# MySQL Cluster setup and MariaDB MaxScale configuration
## Overview
This document covers the MySQL Cluster 7.2.17 setup and the MariaDB MaxScale
configuration for load balancing access to the SQL nodes.
## MySQL Cluster setup
The MySQL Cluster 7.2.17 setup is based on two virtual servers running CentOS
Linux 6.5:
- server1:
* NDB Manager process
* SQL data node1
* MySQL 5.5.38 as SQL node1
- server2:
* SQL data node2
* MySQL 5.5.38 as SQL node2
The cluster configuration file is `/var/lib/mysql-cluster/config.ini`, copied to
all servers.
```
[ndbd default]
NoOfReplicas=2
DataMemory=60M
IndexMemory=16M
[ndb_mgmd]
hostname=178.62.38.199
id=21
datadir=/var/lib/mysql-cluster
[mysqld]
hostname=178.62.38.199
[mysqld]
hostname=162.243.90.81
[ndbd]
hostname=178.62.38.199
[ndbd]
hostname=162.243.90.81
```
Note that it is also possible to specify node IDs and `datadir` for each
cluster component.
Example:
```
[ndbd]
hostname=162.243.90.81
id=43
datadir=/usr/local/mysql/data
```
The `/etc/my.cnf` file, also copied to all servers:
```
[mysqld]
ndbcluster
ndb-connectstring=178.62.38.199
innodb_buffer_pool_size=16M
[mysql_cluster]
ndb-connectstring=178.62.38.199
```
## Startup of MySQL Cluster
Each cluster node process must be started separately, and on the host where it
resides. The management node should be started first, then the data nodes, and
finally any SQL nodes:
- On the management host, server1, issue the following command from the system
shell to start the management node process:
```
[root@server1 ~]# ndb_mgmd -f /var/lib/mysql-cluster/config.ini
```
- On each of the data node hosts, run this command to start the ndbd process:
```
[root@server1 ~]# ndbd --initial --initial-start
[root@server2 ~]# ndbd --initial --initial-start
```
- On each SQL node start the MySQL server process:
```
[root@server1 ~]# /etc/init.d/mysql start
[root@server2 ~]# /etc/init.d/mysql start
```
## Check the cluster status
If all has gone well and the cluster has been set up correctly, the cluster
should now be operational.
It’s possible to test this by invoking the `ndb_mgm` management node client.
The output should look as shown here, although you might see some slight
differences in the output depending upon the exact version of MySQL in use:
```
[root@server1 ~]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: 178.62.38.199:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=24 @178.62.38.199 (mysql-5.5.38 ndb-7.2.17, Nodegroup: 0, *)
id=25 @162.243.90.81 (mysql-5.5.38 ndb-7.2.17, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=21 @178.62.38.199 (mysql-5.5.38 ndb-7.2.17)
[mysqld(API)] 2 node(s)
id=22 @178.62.38.199 (mysql-5.5.38 ndb-7.2.17)
id=23 @162.243.90.81 (mysql-5.5.38 ndb-7.2.17)
ndb_mgm>
```
The SQL node is referenced here as [mysqld(API)], which reflects the fact that
the mysqld process is acting as a MySQL Cluster API node.
## Working with NDBCLUSTER engine in MySQL
- First create a table with NDBCLUSTER engine:
```
[root@server1 ~]# mysql
mysql> CREATE TABLE `t1` ( `a` int(11) DEFAULT NULL ) ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (3.28 sec)
mysql> show create table t1;
+-------+-------------------------------------------------------------------------------------------+
| Table | Create Table |
+-------+-------------------------------------------------------------------------------------------+
| t1 | CREATE TABLE `t1` (
`a` int(11) DEFAULT NULL
) ENGINE=ndbcluster DEFAULT CHARSET=latin1 |
+-------+-------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
```
- Add a row to the table:
```
mysql> insert into test.t1 values(11);
Query OK, 1 row affected (0.15 sec)
```
- Select the current number of rows:
```
mysql> select count(1) from t1;
+----------+
| count(1) |
+----------+
| 1 |
+----------+
1 row in set (0.07 sec)
```
- Run the same query from the MySQL client pointing to the SQL node on server2:
```
[root@server2 ~]# mysql
mysql> select count(1) from test.t1;
+----------+
| count(1) |
+----------+
| 1 |
+----------+
1 row in set (0.08 sec)
```
## Configuring MariaDB MaxScale for connection load balancing of SQL nodes
Add these sections into the maxscale.cnf config file:
```
[Cluster-Service]
type=service
router=readconnroute
router_options=ndb
servers=server1,server2
user=test
password=test
version_string=5.5.37-CLUSTER
[Cluster-Listener]
type=listener
service=Cluster-Service
protocol=MariaDBClient
port=4906
[NDB-Cluster-Monitor]
type=monitor
module=ndbclustermon
servers=server1,server2
user=monitor
password=monitor
monitor_interval=8000
[server1]
#SQL node1
type=server
address=127.0.0.1
port=3306
protocol=MariaDBBackend
[server2]
#SQL node2
type=server
address=162.243.90.81
port=3306
protocol=MariaDBBackend
```
Assuming MariaDB MaxScale is installed on server1, start it.
```
[root@server1 ~]# cd /usr/bin
[root@server1 bin]# ./maxscale -c ../
```
Using the debug interface it’s possible to check the status of monitored
servers.
```
MaxScale> show monitors
Monitor: 0x387b880
Name: NDB Cluster Monitor
Monitor running
Sampling interval: 8000 milliseconds
Monitored servers: 127.0.0.1:3306, 162.243.90.81:3306
MaxScale> show servers
Server 0x3873b40 (server1)
Server: 127.0.0.1
Status: NDB, Running
Protocol: MariaDBBackend
Port: 3306
Server Version: 5.5.38-ndb-7.2.17-cluster-gpl
Node Id: 22
Master Id: -1
Repl Depth: 0
Number of connections: 0
Current no. of conns: 0
Current no. of operations: 0
Server 0x3873a40 (server2)
Server: 162.243.90.81
Status: NDB, Running
Protocol: MariaDBBackend
Port: 3306
Server Version: 5.5.38-ndb-7.2.17-cluster-gpl
Node Id: 23
Master Id: -1
Repl Depth: 0
Number of connections: 0
Current no. of conns: 0
Current no. of operations: 0
```
It’s now possible to run basic tests with the read connection load balancing
for the two configured SQL nodes.
(1) Test MaxScale load balancing by requesting the `Ndb_cluster_node_id` status variable:
```
[root@server1 ~]# mysql -h 127.0.0.1 -P 4906 -u test -ptest -e "SHOW STATUS LIKE 'Ndb_cluster_node_id'"
+---------------------+-------+
| Variable_name | Value |
+---------------------+-------+
| Ndb_cluster_node_id | 23 |
+---------------------+-------+
[root@server1 ~]# mysql -h 127.0.0.1 -P 4906 -u test -ptest -e "SHOW STATUS LIKE 'Ndb_cluster_node_id'"
+---------------------+-------+
| Variable_name | Value |
+---------------------+-------+
| Ndb_cluster_node_id | 22 |
+---------------------+-------+
```
The MariaDB MaxScale connection load balancing is working.
(2) Test a SELECT statement on the NDBCLUSTER table `test.t1` created
earlier:
```
[root@server1 ~] mysql -h 127.0.0.1 -P 4906 -utest -ptest -e "SELECT COUNT(1) FROM test.t1"
+----------+
| COUNT(1) |
+----------+
| 1 |
+----------+
```
(3) Test an INSERT statement:
```
mysql -h 127.0.0.1 -P 4906 -utest -ptest -e "INSERT INTO test.t1 VALUES (19)"
```
(4) Run the SELECT again and check the number of rows:
```
[root@server1 ~] mysql -h 127.0.0.1 -P 4906 -utest -ptest -e "SELECT COUNT(1) FROM test.t1"
+----------+
| COUNT(1) |
+----------+
| 2 |
+----------+
```