Documentation Update
@ -10,7 +10,7 @@ The document shows an example of a Pacemaker / Corosync setup with MaxScale base
Please note the solution is a quick setup example that may not be suited for all production environments.
# Clustering Software installation
## Clustering Software installation
On each node in the cluster do the following steps:
@ -32,7 +32,9 @@ gpgcheck=0
(2) Install the software
```
# yum install pacemaker corosync crmsh
```
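For reference, the `gpgcheck=0` context line above belongs to the repository definition created in step (1), which lies outside this hunk. A minimal sketch of such a file (repository name and baseurl are placeholders) could be:

```
# /etc/yum.repos.d/ha-clustering.repo -- placeholder name and baseurl
[ha-clustering]
name=HA clustering packages (pacemaker, corosync, crmsh)
baseurl=http://example.com/path/to/ha-clustering/repo/
enabled=1
gpgcheck=0
```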
Package versions used
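The versions actually pulled in on a node can be verified after the install, for example:

```
# Show the installed versions of the clustering packages on this node
rpm -q pacemaker corosync crmsh
```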
@ -258,7 +260,7 @@ property cib-bootstrap-options: \
default-resource-stickiness=infinity
```
The Corosync / Pacemaker cluster is ready to be configured to manage resources.
The Corosync / Pacemaker cluster is ready to be configured to manage resources.
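With the bootstrap options in place, MaxScale can be registered as an LSB resource (it shows up later in the status output as `lsb:maxscale`). A minimal sketch, with example operation timeouts that should be tuned for the environment:

```
# Register the MaxScale init script as an LSB cluster resource
# (monitor/start/stop timeouts are examples only)
crm configure primitive MaxScale lsb:maxscale \
    op monitor interval="10s" timeout="15s" \
    op start timeout="15s" \
    op stop timeout="30s"

# Review the resulting resource definition
crm configure show MaxScale
```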
## MaxScale init script /etc/init.d/maxscale
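The full script is not included in this hunk; the outline below is only a sketch of an LSB-style wrapper, and the MaxScale install prefix and pid file location are assumptions:

```
#!/bin/sh
# /etc/init.d/maxscale -- minimal LSB-style sketch, paths are assumptions
### BEGIN INIT INFO
# Provides:          maxscale
# Required-Start:    $network
# Required-Stop:     $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: MariaDB MaxScale
### END INIT INFO

MAXSCALE_HOME=/usr/local/mariadb-maxscale   # assumed install prefix
DAEMON=$MAXSCALE_HOME/bin/maxscale
PIDFILE=$MAXSCALE_HOME/log/maxscale.pid     # assumed pid file location

case "$1" in
  start)
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
      echo "maxscale already running"; exit 0
    fi
    $DAEMON                                  # assumes MaxScale backgrounds itself
    ;;
  stop)
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"
    ;;
  status)
    # Pacemaker's lsb resource class relies on these exit codes: 0 running, 3 stopped
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
      echo "maxscale is running"; exit 0
    else
      echo "maxscale is not running"; exit 3
    fi
    ;;
  restart)
    "$0" stop; sleep 2; "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|status|restart}"; exit 2
    ;;
esac
exit 0
```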
@ -349,9 +351,9 @@ Online: [ node1 node2 node3 ]
Online: [ node1 node2 node3 ]
MaxScale (lsb:maxscale): Started node1
```
```
#Basic use cases:
##Basic use cases:
### 1. Resource restarted after a failure:
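One way to reproduce this case is to kill the MaxScale process on the node currently running it and watch Pacemaker restart it through the monitor operation; the commands below are an illustrative sketch:

```
# On node1, simulate a crash of the managed process
pkill -9 maxscale

# Pacemaker's monitor operation notices the failure and restarts the resource;
# follow the recovery with:
crm status

# Inspect and, if desired, reset the recorded failure afterwards
crm resource failcount MaxScale show node1
crm resource cleanup MaxScale
```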
@ -395,7 +397,7 @@ Online: [ node1 node2 node3 ]
Online: [ node1 node2 node3 ]
MaxScale (lsb:maxscale): Started node1
```
```
### 2. The resource cannot be migrated to node1 for a failure:
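A sketch of how this case can be exercised: ask the cluster to move the resource to node1 while node1 still has a recorded failure, then clear the failure and the temporary migration constraint once node1 is healthy again:

```
# Request a move of the MaxScale resource to node1
crm resource migrate MaxScale node1

# With a failure recorded on node1, the resource ends up on another node instead
crm status

# After fixing node1: clear the failcount and remove the migration constraint
crm resource cleanup MaxScale node1
crm resource unmigrate MaxScale
```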
@ -463,7 +465,7 @@ Online: [ node1 node2 node3 ]
Online: [ node1 node2 node3 ]
MaxScale (lsb:maxscale): Started node2
```
```
## Add a Virtual IP (VIP) to the cluster
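A virtual IP is typically added with the standard ocf:heartbeat:IPaddr2 agent and kept together with MaxScale; the sketch below uses placeholder names and an example address:

```
# Floating IP for client connections (address and netmask are placeholders)
crm configure primitive maxscale_vip ocf:heartbeat:IPaddr2 \
    params ip="192.168.122.125" cidr_netmask="24" \
    op monitor interval="10s"

# Start the VIP and MaxScale together on the same node, VIP first
crm configure group maxscale_service maxscale_vip MaxScale
```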