# Filters

## What Are Filters?

The filter mechanism in MaxScale is a means by which processing can be inserted into the flow of requests and responses between the client connection to MaxScale and the MaxScale connection to the backend database servers. The path from the client side of MaxScale out to the actual database servers can be considered a pipeline; filters can then be placed in that pipeline to monitor, modify, copy or block the content that flows through it.

## Types Of Filter

Filters can be divided into a number of categories:

### Logging filters

Logging filters do not in any way alter the statements or the results of the statements that are passed through MaxScale. They merely log some information about some or all of the statements and/or result sets.

Two examples of logging filters are contained within the MaxScale GA: a filter that will log all statements and another that will log only a number of statements, selected by the duration of the execution of the query.

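For illustration, definitions of these two filters might look as follows. The module names qlafilter (log all statements) and topfilter (log the longest running statements) are the bundled logging modules, while the section names and file paths here are only placeholders for this sketch; each filter's full set of parameters is described in its own documentation.

# Log every statement of each session to a file per session
[LogAll]
type=filter
module=qlafilter
filebase=/var/log/AllQueries/qla

# Log only the 10 longest running statements of each session
[Top10]
type=filter
module=topfilter
count=10
filebase=/var/log/DBSessions/top10
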
### Statement rewriting filters

Statement rewriting filters modify the statements that are passed through the filter. This allows a filter to be used as a mechanism to alter the statements that are seen by the database. An example of the use of this might be to allow an application to remain unchanged when the underlying database changes, or to compensate for the migration from one database schema to another.

The MaxScale GA includes a filter that can modify statements by using regular expressions to match statements and replace the matched text.

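A minimal sketch of such a definition, assuming the bundled regexfilter module; the section name and the match and replace values are purely illustrative.

# Rewrite references to old_orders into new_orders on the way to the database
[RenameTable]
type=filter
module=regexfilter
match=old_orders
replace=new_orders
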
### Result set manipulation filters

A result set manipulation filter is very similar to a statement rewriting filter but applies to the result set returned rather than the statement executed. An example of this might be obfuscating the values in a column.

The MaxScale 1.0 GA release does not contain any result set manipulation filters.

### Routing hint filters

Routing hint filters are filters that embed hints in the request that can be used by the router onto which the query is passed. These hints include suggested destinations as well as metrics that may be used by the routing process.

The MaxScale 1.0 GA release does not contain any hint filters.

### Firewall filters

A firewall filter is a mechanism that allows queries to be blocked within MaxScale before they are sent on to the database server for execution. Firewall filters allow constructs or individual queries to be intercepted and give a level of access control that is more flexible than the traditional database grant mechanism.

The 1.0 GA release of MaxScale does not include any firewall filters.

### Pipeline control filters

A pipeline control filter is one that has an effect on how requests are routed within the internal MaxScale components. The most obvious example of this is the ability to add a "tee" connector in the pipeline, duplicating the request and sending it to a second MaxScale service for processing.

The MaxScale 1.0 GA release contains an implementation of a tee filter that allows statements to be matched using a regular expression and passed to a second service within MaxScale.

## Filter Definition

Filters are defined in the configuration file, MaxScale.ini, using a section for each filter instance. The content of the filter sections in the configuration file varies from filter to filter, however there are always two entries present for every filter: the type and the module.

[MyFilter]
type=filter
module=xxxfilter

The type is used by the configuration manager within MaxScale to determine what this section is defining and the module is the name of the plugin that implements the filter.

When a filter is used within a service in MaxScale the entry filters= is added to the service definition in the ini file section for the service. Multiple filters can be defined using a syntax akin to the Linux shell pipe syntax.

[Split Service]
type=service
router=readwritesplit
servers=dbserver1,dbserver2,dbserver3,dbserver4
user=massi
passwd=6628C50E07CCE1F0392EDEEB9D1203F3
filters=hints | top10

The names used in the filters= parameter are the names of the filter definition sections in the ini file. The same filter definition can be used in multiple services and the same filter module can have multiple instances, each with its own section in the ini file.

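For example (the service names and server lists here are illustrative), a single filter section can be referenced from the filters= entry of more than one service:

# One filter definition, shared by two services
[top10]
type=filter
module=topfilter
count=10
filebase=/var/log/DBSessions/top10

[Sales Service]
type=service
router=readwritesplit
servers=dbserver1,dbserver2
user=massi
passwd=6628C50E07CCE1F0392EDEEB9D1203F3
filters=top10

[Reporting Service]
type=service
router=readconnroute
servers=dbserver3,dbserver4
user=massi
passwd=6628C50E07CCE1F0392EDEEB9D1203F3
filters=top10
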
## Filter Examples

The filters that are bundled with the MaxScale 1.0 GA release are documented separately; this section gives a short overview of how they might be used for some simple tasks. These are just examples of how these filters might be used; other filters may also be easily added to enhance the MaxScale functionality still further.

### Log The 30 Longest Running Queries

The top filter can be used to measure the execution time of every statement within a connection and log the details of the longest running statements.

The first thing to do is to define a filter entry in the ini file for the top filter. In this case we will call it "top30". The type is filter and the module that implements the filter is called topfilter.

[top30]
type=filter
module=topfilter
count=30
filebase=/var/log/DBSessions/top30

In the definition above we have defined two filter-specific parameters: count, the number of statements to be logged, and filebase, which is used to define where to log the information. The filename given is a stem to which a session id is added for each new connection that uses the filter (the example session shown below logs to /var/log/DBSessions/top30.1).

It is possible to see what is in the current list by using the maxadmin tool to view the state of the filter by looking at the session data. First you need to find the session id for the session of interest; this can be done using commands such as list sessions. You can then use the show session command to see the details for a particular session.

MaxScale> show session 0x736680
Session 0x736680
State: Session ready for routing
Service: Split Service (0x719f60)
Client DCB: 0x7361a0
Client Address: 127.0.0.1
Connected: Thu Jun 26 10:10:44 2014
Filter: top30
Report size 30
Logging to file /var/log/DBSessions/top30.1.
Current Top 30:
1 place:
Execution time: 23.826 seconds
SQL: select sum(salary), year(from_date) from salaries s, (select distinct year(from_date) as y1 from salaries) y where (makedate(y.y1, 1) between s.from_date and s.to_date) group by y.y1
2 place:
Execution time: 5.251 seconds
SQL: select d.dept_name as "Department", y.y1 as "Year", count(*) as "Count" from departments d, dept_emp de, (select distinct year(from_date) as y1 from dept_emp order by 1) y where d.dept_no = de.dept_no and (makedate(y.y1, 1) between de.from_date and de.to_date) group by y.y1, d.dept_name order by 1, 2
3 place:
Execution time: 2.903 seconds
SQL: select year(now()) - year(birth_date) as age, gender, avg(salary) as "Average Salary" from employees e, salaries s where e.emp_no = s.emp_no and ("1988-08-01" between from_date AND to_date) group by year(now()) - year(birth_date), gender order by 1,2
...

When the session ends a report will be written for the session into the logfile defined. That report will include the top 30 longest running statements, plus summary data for the session:

* The average execution time for a statement in this connection.

### Duplicate Data From Your Application Into Cassandra

The scenario we are using in this example is one in which you have an online gaming application that is designed to work with a MariaDB/MySQL database. The database schema includes a high score table which you would like to have access to in a Cassandra cluster. The application is already using MaxScale to connect to a MariaDB Galera cluster, using a service named BubbleGame. The definition of that service is as follows:

[BubbleGame]
type=service
router=readwritesplit
servers=dbbubble1,dbbubble2,dbbubble3,dbbubble4,dbbubble5
user=maxscale
passwd=6628C50E07CCE1F0392EDEEB9D1203F3

The table you wish to store in Cassandra is called HighScore and will contain the same columns in both the MariaDB table and the Cassandra table. The first step is to install a MariaDB instance with the Cassandra storage engine to act as a bridge server between the relational database and Cassandra. In this bridge server add a table definition for the HighScore table with the engine type set to cassandra. Add this server into the MaxScale configuration and create a service that will connect to this server.

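A sketch of such a bridge table definition on the MariaDB bridge server, assuming the MariaDB Cassandra storage engine; the column layout, keyspace, column family and Thrift host shown here are purely illustrative and must match your own application schema and Cassandra cluster.

-- Bridge table: rows written here are stored in the Cassandra column family
CREATE TABLE HighScore (
    player VARCHAR(64) PRIMARY KEY,
    score INT
) ENGINE=CASSANDRA
  thrift_host='192.168.4.100' keyspace='bubblegame' column_family='HighScore';
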
[CassandraDB]
type=server
address=192.168.4.28
port=3306
protocol=MySQLBackend

[Cassandra]
type=service
router=readconnroute
router_options=running
servers=CassandraDB
user=maxscale
passwd=6628C50E07CCE1F0392EDEEB9D1203F3

Next add a filter definition for the tee filter that will duplicate insert statements that are destined for the HighScore table to this new service.

[HighScores]
type=filter
module=teefilter
match=insert.*HighScore.*values
service=Cassandra

The above filter definition will cause all statements that match the regular expression insert.*HighScore.*values to be duplicated and sent not just to the original destination, via the router, but also to the service named Cassandra.

The final step is to add the filter to the BubbleGame service to enable the use of the filter.

[BubbleGame]
type=service
router=readwritesplit
servers=dbbubble1,dbbubble2,dbbubble3,dbbubble4,dbbubble5
user=maxscale
passwd=6628C50E07CCE1F0392EDEEB9D1203F3
filters=HighScores

# Connection Routing with Galera Cluster

## Environment & Solution Space

This document is designed as a quick introduction to setting up MaxScale in an environment in which you have a Galera Cluster across which you wish to balance connections over all the database nodes that are active members of the cluster.

The process of setting up and configuring MaxScale will be covered within this document.

This tutorial will assume the user is running from one of the binary distributions available and has installed this in the default location. Building from source code in GitHub is covered in guides elsewhere, as is installing to non-default locations.

## Process

The steps involved in creating a system from the binary distribution of MaxScale are:

* Install the MaxScale package
* Create the required database users
* Create a MaxScale configuration file

### Installation

The precise installation process will vary from one distribution to another; details of what to do with the RPM and DEB packages can be found on the download site when you select the distribution you are downloading from. The process involves setting up your package manager to include the MariaDB repositories and then running the package manager for your distribution, RPM or apt-get.

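As a hedged example (the package name may differ between repositories and MaxScale versions), the installation step is typically a single package manager command:

% yum install maxscale

or

% apt-get install maxscale
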
Upon successful completion of the installation command you will have MaxScale installed and ready to be run, but without a configuration. You must create a configuration file before you first run MaxScale.

### Creating Database Users

MaxScale needs to connect to the backend databases and run queries for two reasons: one to determine the current state of the database and the other to retrieve the user information for the database cluster. This may be done either using two separate usernames or with a single user.

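As a sketch of the user that retrieves the user information, assuming the privileges MaxScale typically needs to read the MySQL authentication data; the host name and password below are placeholders.

-- User that MaxScale uses to load the database users; host and password are placeholders
CREATE USER 'maxscale'@'maxscalehost' IDENTIFIED BY 'somepassword';
GRANT SELECT ON mysql.user TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.db TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.tables_priv TO 'maxscale'@'maxscalehost';
GRANT SHOW DATABASES ON *.* TO 'maxscale'@'maxscalehost';
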
The second user is used to monitor the state of the cluster.

If you wish to use two different usernames for the two different roles of monitoring and collecting user information then create a different username using the first two steps from above.

### Creating Your MaxScale Configuration

The MaxScale configuration is held in an ini file, MaxScale.cnf, located in the directory $MAXSCALE_HOME/etc; if you have installed in the default location then this file is available as /usr/local/skysql/maxscale/etc/MaxScale.cnf. It is not created as part of the installation process and must be created manually. A template file does exist within this directory that may be used as a basis for your configuration.

A global, maxscale, section is included within every MaxScale configuration file; this is used to set the values of various MaxScale-wide parameters, perhaps the most important of these being the number of threads that MaxScale will use to execute the code that forwards requests and handles responses for clients.

[maxscale]
threads=4

Since we are using Galera Cluster and connection routing we want a single port to which the client application can connect; MaxScale will then route connections made to this port onwards to the various nodes within the Galera Cluster. To achieve this within MaxScale we need to define a service in the ini file. Create a section for the service in your MaxScale.cnf file and set the type to service; the section name is the name of the service and should be meaningful to the administrator. Names may contain whitespace.

[Galera Service]
type=service

The router for this section is the readconnroute module. The service should also be provided with the list of servers that will be part of the cluster. The server names given here are actually the names of server sections in the configuration file and not the physical hostnames or addresses of the servers.

[Galera Service]
type=service
router=readconnroute
servers=dbserv1, dbserv2, dbserv3

In order to instruct the router to which servers it should route we must add router options to the service. The router options are compared to the status that the monitor collects from the servers and used to restrict the eligible set of servers to which that service may route. In our case we use the option that restricts us to servers that are fully functional members of the Galera cluster and able to support SQL operations on the cluster. To achieve this we use the router option synced.

[Galera Service]
type=service
router=readconnroute
router_options=synced
servers=dbserv1, dbserv2, dbserv3

The final step in the service section is to add the username and password that will be used to populate the user data from the database cluster. There are two options for representing the password: either plain text or encrypted passwords may be used. In order to use encrypted passwords a set of keys must be generated that will be used by the encryption and decryption process. To generate the keys use the maxkeys command and pass the name of the secrets file in which the keys are stored.

% maxkeys /usr/local/skysql/maxscale/etc/.secrets
%

Once the keys have been created the maxpasswd command can be used to generate the encrypted password.

% maxpasswd plainpassword
96F99AA1315BDC3604B006F427DD9484
%

The username and password, either encrypted or plain text, are stored in the service section using the user and passwd parameters.

[Galera Service]
type=service
router=readconnroute
router_options=synced
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484

This completes the definitions required by the service, however listening ports must be associated with a service in order to allow network connections. This is done by creating a series of listener sections. These sections again are named for the convenience of the administrator and should be of type listener, with an entry labelled service which contains the name of the service to associate the listener with. Each service may have multiple listeners.

[Galera Listener]
type=listener
service=Galera Service

A listener must also define the protocol module it will use for the incoming network protocol; currently this should be the MySQLClient protocol for all database listeners. The listener may then supply a network port to listen on and/or a socket within the file system.

[Galera Listener]
type=listener
service=Galera Service
protocol=MySQLClient
port=4306
socket=/tmp/DB.Cluster

An address parameter may be given if the listener is required to bind to a particular network address when using hosts with multiple network addresses. The default behaviour is to listen on all network interfaces.

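For example (the address shown is illustrative), a listener bound to a single interface would simply add an address entry:

# Accept client connections only on one network interface
[Galera Listener]
type=listener
service=Galera Service
protocol=MySQLClient
address=192.168.2.100
port=4306
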
The next stage in the configuration is to define the server information. This defines how to connect to each of the servers within the cluster: again a section is created for each server, with the type set to server, the network address and port to connect to, and the protocol to use to connect to the server. Currently the protocol for all database connections is MySQLBackend.

[dbserv1]
type=server
address=192.168.2.1
port=3306
protocol=MySQLBackend

[dbserv2]
type=server
address=192.168.2.2
port=3306
protocol=MySQLBackend

[dbserv3]
type=server
address=192.168.2.3
port=3306
protocol=MySQLBackend

In order for MaxScale to monitor the servers using the correct monitoring mechanisms a section should be provided that defines the monitor to use and the servers to monitor. Once again a section is created with a symbolic name for the monitor, with the type set to monitor. Parameters are added for the module to use, the list of servers to monitor and the username and password to use when connecting to the servers with the monitor.

[Galera Monitor]
type=monitor
module=galeramon
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484

As with the password definition in the service section, either plain text or encrypted passwords may be used.

The final stage in the configuration is to add the service which is used by the maxadmin command to connect to MaxScale for monitoring and administration purposes. This creates a service section and a listener section.

[CLI]
type=service
router=cli

[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
address=localhost
port=6603

In the case of the example above it should be noted that an address parameter has been given to the listener; this limits connections to maxadmin commands that are executed on the same machine that hosts MaxScale.

## Starting MaxScale

Upon completion of the configuration process MaxScale is ready to be started for the first time. This may either be done manually by running the maxscale command or via the service interface.

% maxscale

or

% service maxscale start

Check the error log in /usr/local/skysql/maxscale/log to see if any errors are detected in the configuration file and to confirm MaxScale has been started. Also the maxadmin command may be used to confirm that MaxScale is running and that the services, listeners etc. have been correctly configured.

% maxadmin -pskysql list services

Services.
--------------------------+----------------------+--------+---------------
Service Name              | Router Module        | #Users | Total Sessions
--------------------------+----------------------+--------+---------------
Galera Service            | readconnroute        |      1 |              1
CLI                       | cli                  |      2 |              2
--------------------------+----------------------+--------+---------------

% maxadmin -pskysql list servers

Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
dbserv1            | 192.168.2.1     |  3306 |           0 | Running, Synced, Master
dbserv2            | 192.168.2.2     |  3306 |           0 | Running, Synced, Slave
dbserv3            | 192.168.2.3     |  3306 |           0 | Running, Synced, Slave
-------------------+-----------------+-------+-------------+--------------------

A Galera Cluster is a multi-master clustering technology, however the monitor is able to impose false notions of master and slave roles within a Galera Cluster in order to facilitate the use of Galera as if it were a standard MySQL Replication setup. This is merely an internal MaxScale convenience and has no impact on the behaviour of the cluster.

% maxadmin -pskysql list listeners

Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name         | Protocol Module    | Address         | Port  | State
---------------------+--------------------+-----------------+-------+--------
Galera Service       | MySQLClient        | *               |  4306 | Running
CLI                  | maxscaled          | localhost       |  6603 | Running
---------------------+--------------------+-----------------+-------+--------
%

MaxScale is now ready to start accepting client connections and routing them to the master or slaves within your cluster. Other configuration options are available that can alter the criteria used for routing, such as using weights to obtain unequal balancing operations. These options may be found in the MaxScale Configuration Guide. More detail on the use of maxadmin can be found in the document ["MaxAdmin - The MaxScale Administration & Monitoring Client Application"](../Reference/MaxAdmin.md).

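As a hedged sketch of the weighting mechanism mentioned above (parameter names follow the MaxScale Configuration Guide; the weight values and the parameter name serv_weight are illustrative), a service can name a weighting parameter with weightby and each server then carries its own value for that parameter:

# Route roughly three connections to dbserv1 for every one sent to dbserv2
[Galera Service]
type=service
router=readconnroute
router_options=synced
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
weightby=serv_weight

[dbserv1]
type=server
address=192.168.2.1
port=3306
protocol=MySQLBackend
serv_weight=3

[dbserv2]
type=server
address=192.168.2.2
port=3306
protocol=MySQLBackend
serv_weight=1
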
# Read/Write Splitting with Galera Cluster

## Environment & Solution Space

This document is designed as a quick introduction to setting up MaxScale in an environment in which you have a Galera Cluster that you wish to use as a single database node for updates and one or more read-only nodes. The object of this tutorial is to have a system that appears to the clients of MaxScale as if there were a single database behind MaxScale. MaxScale will split the statements such that write statements are sent to only one server in the cluster and read statements are balanced across the remainder of the servers.

The process of setting up and configuring MaxScale will be covered within this document.

This tutorial will assume the user is running from one of the binary distributions available and has installed this in the default location. Building from source code in GitHub is covered in guides elsewhere, as is installing to non-default locations.

## Process

The steps involved in creating a system from the binary distribution of MaxScale are:

* Install the MaxScale package
* Create the required database users
* Create a MaxScale configuration file

### Installation

The precise installation process will vary from one distribution to another; details of what to do with the RPM and DEB packages can be found on the download site when you select the distribution you are downloading from. The process involves setting up your package manager to include the MariaDB repositories and then running the package manager for your distribution, RPM or apt-get.

Upon successful completion of the installation command you will have MaxScale installed and ready to be run, but without a configuration. You must create a configuration file before you first run MaxScale.

### Creating Database Users

MaxScale needs to connect to the backend databases and run queries for two reasons: one to determine the current state of the database and the other to retrieve the user information for the database cluster. This may be done either using two separate usernames or with a single user.

The second user is used to monitor the state of the cluster.

If you wish to use two different usernames for the two different roles of monitoring and collecting user information then create a different username using the first two steps from above.

### Creating Your MaxScale Configuration

The MaxScale configuration is held in an ini file, MaxScale.cnf, located in the directory $MAXSCALE_HOME/etc; if you have installed in the default location then this file is available as /usr/local/skysql/maxscale/etc/MaxScale.cnf. It is not created as part of the installation process and must be created manually. A template file does exist within this directory that may be used as a basis for your configuration.

A global, maxscale, section is included within every MaxScale configuration file; this is used to set the values of various MaxScale-wide parameters, perhaps the most important of these being the number of threads that MaxScale will use to execute the code that forwards requests and handles responses for clients.

[maxscale]
threads=4

The first step is to create a service for our Read/Write Splitter. Create a section in your MaxScale.cnf file and set the type to service; the section names are the names of the services themselves and should be meaningful to the administrator. Names may contain whitespace.

[Splitter Service]
type=service

The router we need to use for this configuration is the readwritesplit module. The service should also be provided with the list of servers that will be part of the cluster. The server names given here are actually the names of server sections in the configuration file and not the physical hostnames or addresses of the servers.

[Splitter Service]
type=service
router=readwritesplit
servers=dbserv1, dbserv2, dbserv3

The final step in the service section is to add the username and password that will be used to populate the user data from the database cluster. There are two options for representing the password: either plain text or encrypted passwords may be used. In order to use encrypted passwords a set of keys must be generated that will be used by the encryption and decryption process. To generate the keys use the maxkeys command and pass the name of the secrets file in which the keys are stored.

% maxkeys /usr/local/skysql/maxscale/etc/.secrets
%

Once the keys have been created the maxpasswd command can be used to generate the encrypted password.

% maxpasswd plainpassword
96F99AA1315BDC3604B006F427DD9484
%

The username and password, either encrypted or plain text, are stored in the service section using the user and passwd parameters.

[Splitter Service]
type=service
router=readwritesplit
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484

This completes the definitions required by the service, however listening ports must be associated with the service in order to allow network connections. This is done by creating a series of listener sections. These sections again are named for the convenience of the administrator and should be of type listener, with an entry labelled service which contains the name of the service to associate the listener with. A service may have multiple listeners.

[Splitter Listener]
type=listener
service=Splitter Service

A listener must also define the protocol module it will use for the incoming network protocol; currently this should be the MySQLClient protocol for all database listeners. The listener may then supply a network port to listen on and/or a socket within the file system.

[Splitter Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
port=3306
socket=/tmp/ClusterMaster

An address parameter may be given if the listener is required to bind to a particular network address when using hosts with multiple network addresses. The default behaviour is to listen on all network interfaces.

The next stage in the configuration is to define the server information. This defines how to connect to each of the servers within the cluster: again a section is created for each server, with the type set to server, the network address and port to connect to, and the protocol to use to connect to the server. Currently the protocol module for all database connections is MySQLBackend.

[dbserv1]
type=server
address=192.168.2.1
port=3306
protocol=MySQLBackend

[dbserv2]
type=server
address=192.168.2.2
port=3306
protocol=MySQLBackend

[dbserv3]
type=server
address=192.168.2.3
port=3306
protocol=MySQLBackend

In order for MaxScale to monitor the servers using the correct monitoring mechanisms a section should be provided that defines the monitor to use and the servers to monitor. Once again a section is created with a symbolic name for the monitor, with the type set to monitor. Parameters are added for the module to use, the list of servers to monitor and the username and password to use when connecting to the servers with the monitor.

[Galera Monitor]
type=monitor
module=galeramon
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484

As with the password definition in the service section, either plain text or encrypted passwords may be used.

This monitor module will assign one node within the Galera Cluster as the current master and the other nodes as slaves. Only those nodes that are active members of the cluster are considered when making the choice of master node. Normally the master node will be the node with the lowest value of the status variable WSREP_LOCAL_INDEX. When the cluster membership changes a new master may be elected. In order to prevent changes of the node that is currently master, a parameter can be added to the monitor that will result in the current master remaining as master even if a node with a lower value of WSREP_LOCAL_INDEX joins the cluster. This parameter is called disable_master_failback.

[Galera Monitor]
type=monitor
module=galeramon
disable_master_failback=1
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484

Using this option the master node will only change if there is a problem with the current master and never because other nodes have joined the cluster.

The final stage in the configuration is to add the service which is used by the maxadmin command to connect to MaxScale for monitoring and administration purposes. This creates a service section and a listener section.

[CLI]
type=service
router=cli

[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
address=localhost
port=6603

In the case of the example above it should be noted that an address parameter has been given to the listener; this limits connections to maxadmin commands that are executed on the same machine that hosts MaxScale.

## Starting MaxScale

Upon completion of the configuration process MaxScale is ready to be started for the first time. This may either be done manually by running the maxscale command or via the service interface.

% maxscale

or

% service maxscale start

Check the error log in /usr/local/skysql/maxscale/log to see if any errors are detected in the configuration file and to confirm MaxScale has been started. Also the maxadmin command may be used to confirm that MaxScale is running and that the services, listeners etc. have been correctly configured.

% maxadmin -pskysql list services

Services.
--------------------------+----------------------+--------+---------------
Service Name              | Router Module        | #Users | Total Sessions
--------------------------+----------------------+--------+---------------
Splitter Service          | readwritesplit       |      1 |              1
CLI                       | cli                  |      2 |              2
--------------------------+----------------------+--------+---------------

% maxadmin -pskysql list servers

Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
dbserv1            | 192.168.2.1     |  3306 |           0 | Running, Synced, Master
dbserv2            | 192.168.2.2     |  3306 |           0 | Running, Synced, Slave
dbserv3            | 192.168.2.3     |  3306 |           0 | Running, Synced, Slave
-------------------+-----------------+-------+-------------+--------------------

A Galera Cluster is a multi-master clustering technology, however the monitor is able to impose false notions of master and slave roles within a Galera Cluster in order to facilitate the use of Galera as if it were a standard MySQL Replication setup. This is merely an internal MaxScale convenience and has no impact on the behaviour of the cluster, but it does allow the monitor to create these pseudo roles which are utilised by the Read/Write Splitter.

% maxadmin -pskysql list listeners

Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name         | Protocol Module    | Address         | Port  | State
---------------------+--------------------+-----------------+-------+--------
Splitter Service     | MySQLClient        | *               |  3306 | Running
CLI                  | maxscaled          | localhost       |  6603 | Running
---------------------+--------------------+-----------------+-------+--------
%

MaxScale is now ready to start accepting client connections and routing them to the master or slaves within your cluster. Other configuration options are available that can alter the criteria used for routing; these include monitoring the replication lag within the cluster and routing only to slaves that are within a predetermined delay from the current master, or using weights to obtain unequal balancing operations. These options may be found in the MaxScale Configuration Guide. More detail on the use of maxadmin can be found in the document "MaxAdmin - The MaxScale Administration & Monitoring Client Application".

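As a hedged sketch of the replication-lag option mentioned above (max_slave_replication_lag is a readwritesplit service parameter described in the Configuration Guide; the 30 second threshold is illustrative), the restriction is expressed as an extra entry in the service section:

# Do not route reads to slaves more than 30 seconds behind the master
[Splitter Service]
type=service
router=readwritesplit
max_slave_replication_lag=30
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
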
Each cluster node process must be started separately, and on the host where it resides.

- On the management host, server1, issue the following command from the system shell to start the management node process:

[root@server1 ~]# ndb_mgmd -f /var/lib/mysql-cluster/config.ini

- On each of the data node hosts, run this command to start the ndbd process:

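A minimal sketch of that step (the host name server2 is illustrative; on the very first start of a data node the --initial option is normally added):

[root@server2 ~]# ndbd
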
Assuming MaxScale is installed in server1, start it:

[root@server1 ~]# cd /usr/local/skysql/maxscale/bin

[root@server1 bin]# ./maxscale -c ../

Using the debug interface it's possible to check the status of the monitored servers:

MaxScale> show monitors
Monitor: 0x387b880
Name: NDB Cluster Monitor
Monitor running
Sampling interval: 8000 milliseconds
Monitored servers: 127.0.0.1:3306, 162.243.90.81:3306

MaxScale> show servers
Server 0x3873b40 (server1)
Server: 127.0.0.1
Status: NDB, Running
Current no. of conns: 0
Current no. of operations: 0

Server 0x3873a40 (server2)
Server: 162.243.90.81
Status: NDB, Running

It's now possible to run basic tests with the read connection load balancing:

(1) test MaxScale load balancing requesting the Ndb_cluster_node_id variable:

[root@server1 ~]# mysql -h 127.0.0.1 -P 4906 -u test -ptest -e "SHOW STATUS LIKE 'Ndb_cluster_node_id'"
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| Ndb_cluster_node_id | 23    |
+---------------------+-------+

[root@server1 ~]# mysql -h 127.0.0.1 -P 4906 -u test -ptest -e "SHOW STATUS LIKE 'Ndb_cluster_node_id'"
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+

The MaxScale connection load balancing is working.

(2) test a select statement on an NDBCLUSTER table, using database test and table t1 created before:

[root@server1 ~] mysql -h 127.0.0.1 -P 4906 -utest -ptest -e "SELECT COUNT(1) FROM test.t1"
+----------+
| COUNT(1) |
+----------+

(3) insert a new row into the table:

mysql -h 127.0.0.1 -P 4906 -utest -ptest -e "INSERT INTO test.t1 VALUES (19)"

(4) test again the select and check the number of rows:

[root@server1 ~] mysql -h 127.0.0.1 -P 4906 -utest -ptest -e "SELECT COUNT(1) FROM test.t1"
+----------+
| COUNT(1) |
+----------+
