Merge branch '2.2' into 2.2-mrm

This commit is contained in:
Markus Mäkelä 2017-10-30 11:06:34 +02:00
commit 3a78b716b8
49 changed files with 1458 additions and 500 deletions


@ -136,12 +136,14 @@ then
exit 1
fi
mkdir -p jansson/build
pushd jansson/build
cd jansson
git checkout v2.9
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_C_FLAGS=-fPIC -DJANSSON_INSTALL_LIB_DIR=$install_libdir
make
sudo make install
popd
cd ../../
# Avro C API
wget -r -l1 -nH --cut-dirs=2 --no-parent -A.tar.gz --no-directories http://mirror.netinch.com/pub/apache/avro/stable/c


@ -11,6 +11,8 @@
* Firewall can now prevent the use of functions in conjunction with
certain columns.
* Parser of MaxScale extended to support window functions and CTEs.
* Parser of MaxScale extended to support PL/SQL compatibility features
of the upcoming 10.3 release.
* Prepared statements are now parsed and the execution of read only
ones will be routed to slaves.
* Server states are persisted, so in case of crash and restart MaxScale
@ -20,6 +22,7 @@
* The Masking filter can now both obfuscate and partially mask columns.
* Binlog router supports MariaDB 10 GTID at both ends.
* KILL CONNECTION can now be used through MaxScale.
* Environment variables can now be used in the MaxScale configuration file.
For more details, please refer to:
* [MariaDB MaxScale 2.2.0 Release Notes](Release-Notes/MaxScale-2.2.0-Release-Notes.md)


@ -239,7 +239,7 @@ respect to `SELECT` statements. The allowed values are:
statements are cacheable, but must verify that.
```
select=assume_cacheable
selects=assume_cacheable
```
Default is `verify_cacheable`. In this case, the `SELECT` statements will be


@ -570,6 +570,32 @@ This will log all statements that cannot be parsed completely. This may be
useful if you suspect that MariaDB MaxScale routes statements to the wrong
server (e.g. to a slave instead of to a master).
#### `substitute_variables`
Enable or disable the substitution of environment variables in the MaxScale
configuration file. If the substitution of variables is enabled and a
configuration line like
```
some_parameter=$SOME_VALUE
```
is encountered, then `$SOME_VALUE` will be replaced with the actual value
of the environment variable `SOME_VALUE`. Note:
* Variable substitution will be made _only_ if '$' is the first character
of the value.
* _Everything_ following '$' is interpreted as the name of the environment
variable.
* Referring to a non-existing environment variable is a fatal error.
By default, the value of `substitute_variables` is `false`.
```
substitute_variables=true
```
The setting of `substitute_variables` will have an effect on all parameters
in all other sections, irrespective of where the `[maxscale]` section
is placed in the configuration file. However, in the `[maxscale]` section,
to ensure that substitution will take place, place the
`substitute_variables=true` line first.
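The three substitution rules above can be sketched as follows. This is an illustrative Python model of the documented behavior, not the actual MaxScale implementation:

```python
import os

def substitute(value):
    """Illustrative model of MaxScale's substitute_variables rules:
    - substitution happens only when '$' is the first character of the value,
    - everything after the '$' is taken as the environment variable name,
    - referring to a non-existing variable is a fatal error.
    """
    if not value.startswith("$"):
        return value                      # '$' not first: no substitution
    name = value[1:]                      # everything after '$' is the name
    if name not in os.environ:
        raise RuntimeError("fatal: undefined environment variable '%s'" % name)
    return os.environ[name]

os.environ["SOME_VALUE"] = "42"
print(substitute("$SOME_VALUE"))   # -> 42
print(substitute("plain$value"))   # returned unchanged, '$' is not first
```

Note in particular that a value such as `plain$value` is left untouched, because the `$` must be the very first character.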
### REST API Configuration
The MaxScale REST API is an HTTP interface that provides JSON format data
@ -581,8 +607,7 @@ configuration file.
#### `admin_host`
The network interface where the HTTP admin interface listens on. The default
value is the IPv6 address `::` which listens on all available network
interfaces.
value is the IPv4 address `127.0.0.1` which only listens for local connections.
#### `admin_port`


@ -1,4 +1,4 @@
# REST API design document
# REST API
This document describes version 1 of the MaxScale REST API.


@ -298,6 +298,54 @@ Invalid request body:
`Status: 403 Forbidden`
### Update monitor relationships
```
PATCH /v1/monitors/:name/relationships/servers
```
The _:name_ in the URI must map to a monitor name with all whitespace replaced
with hyphens.
The request body must be a JSON object that defines only the _data_ field. The
value of the _data_ field must be an array of relationship objects that define
the _id_ and _type_ fields of the relationship. This object will replace the
existing relationships of the monitor.
The following is an example request and request body that defines a single
server relationship for a monitor.
```
PATCH /v1/monitors/my-monitor/relationships/servers
{
"data": [
{ "id": "my-server", "type": "servers" }
]
}
```
All relationships for a monitor can be deleted by sending an empty array as the
_data_ field value. The following example removes all servers from a monitor.
```
PATCH /v1/monitors/my-monitor/relationships/servers
{
"data": []
}
```
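As a sketch, the request bodies above can be built and sent with Python's `requests` library. The helper name, host, port and credentials below are assumptions for illustration, which is why the actual HTTP call is shown commented out:

```python
import json

def relationship_payload(items):
    """Build the body for PATCH /v1/monitors/:name/relationships/servers.

    `items` is a list of (id, type) pairs; an empty list clears all
    relationships of the monitor, as described above.
    """
    return {"data": [{"id": i, "type": t} for (i, t) in items]}

body = relationship_payload([("my-server", "servers")])
print(json.dumps(body, indent=2))

# Sending it (hypothetical host and credentials):
# import requests
# requests.patch("http://127.0.0.1:8989/v1/monitors/my-monitor/relationships/servers",
#                json=body, auth=("admin", "mariadb"))
```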
#### Response
Monitor relationships modified:
`Status: 204 No Content`
Invalid JSON body:
`Status: 403 Forbidden`
### Destroy a monitor
Destroy a created monitor. The monitor must not have relationships to any


@ -467,6 +467,55 @@ Invalid JSON body:
`Status: 403 Forbidden`
### Update server relationships
```
PATCH /v1/servers/:name/relationships/:type
```
The _:name_ in the URI must map to a server name with all whitespace replaced
with hyphens. The _:type_ in the URI must be either _services_, for service
relationships, or _monitors_, for monitor relationships.
The request body must be a JSON object that defines only the _data_ field. The
value of the _data_ field must be an array of relationship objects that define
the _id_ and _type_ fields of the relationship. This object will replace the
existing relationships of the particular type from the server.
The following is an example request and request body that defines a single
service relationship for a server.
```
PATCH /v1/servers/my-db-server/relationships/services
{
"data": [
{ "id": "my-rwsplit-service", "type": "services" }
]
}
```
All relationships for a server can be deleted by sending an empty array as the
_data_ field value. The following example removes the server from all services.
```
PATCH /v1/servers/my-db-server/relationships/services
{
"data": []
}
```
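Unlike the monitor endpoint, the server endpoint takes the relationship type from the URI. A small sketch of building that URI, following the rules above (the helper name is hypothetical):

```python
def relationship_url(server_name, rel_type):
    """Build the URI for PATCH /v1/servers/:name/relationships/:type.

    rel_type must be 'services' or 'monitors'; whitespace in the server
    name is replaced with hyphens, as the documentation requires.
    """
    if rel_type not in ("services", "monitors"):
        raise ValueError("rel_type must be 'services' or 'monitors'")
    return "/v1/servers/%s/relationships/%s" % (
        server_name.replace(" ", "-"), rel_type)

print(relationship_url("my db server", "services"))
# -> /v1/servers/my-db-server/relationships/services
```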
#### Response
Server relationships modified:
`Status: 204 No Content`
Invalid JSON body:
`Status: 403 Forbidden`
### Destroy a server
```


@ -420,6 +420,54 @@ Service is modified:
`Status: 204 No Content`
### Update service relationships
```
PATCH /v1/services/:name/relationships/servers
```
The _:name_ in the URI must map to a service name with all whitespace replaced
with hyphens.
The request body must be a JSON object that defines only the _data_ field. The
value of the _data_ field must be an array of relationship objects that define
the _id_ and _type_ fields of the relationship. This object will replace the
existing relationships of the service.
The following is an example request and request body that defines a single
server relationship for a service.
```
PATCH /v1/services/my-rw-service/relationships/servers
{
"data": [
{ "id": "my-server", "type": "servers" }
]
}
```
All relationships for a service can be deleted by sending an empty array as the
_data_ field value. The following example removes all servers from a service.
```
PATCH /v1/services/my-rw-service/relationships/servers
{
"data": []
}
```
#### Response
Service relationships modified:
`Status: 204 No Content`
Invalid JSON body:
`Status: 403 Forbidden`
### Stop a service
Stops a started service.


@ -1,4 +1,4 @@
# MariaDB MaxScale 2.1.10 Release Notes
# MariaDB MaxScale 2.1.10 Release Notes -- 2017-10-30
Release 2.1.10 is a GA release.
@ -40,10 +40,14 @@ To enable this functionality, add `query_retries=<number-of-retries>` under the
[Here is a list of bugs fixed in MaxScale 2.1.10.](https://jira.mariadb.org/issues/?jql=project%20%3D%20MXS%20AND%20issuetype%20%3D%20Bug%20AND%20status%20%3D%20Closed%20AND%20fixVersion%20%3D%202.1.10)
* [MXS-1468](https://jira.mariadb.org/browse/MXS-1468) Using dynamic commands to create readwritesplit configs fail after restart
* [MXS-1459](https://jira.mariadb.org/browse/MXS-1459) Binlog checksum default value is wrong if a slave connects with checksum = NONE before master registration or master is not accessible at startup
* [MXS-1457](https://jira.mariadb.org/browse/MXS-1457) Deleted servers are not ignored when users are loaded
* [MXS-1456](https://jira.mariadb.org/browse/MXS-1456) OOM when script variable is empty
* [MXS-1451](https://jira.mariadb.org/browse/MXS-1451) Password is not stored with skip_authentication=true
* [MXS-1450](https://jira.mariadb.org/browse/MXS-1450) Maxadmin commands with a leading space are silently ignored
* [MXS-1449](https://jira.mariadb.org/browse/MXS-1449) Database change not allowed
* [MXS-1163](https://jira.mariadb.org/browse/MXS-1163) Log flood using binlog server on Ubuntu Yakkety Yak
## Packaging


@ -0,0 +1,73 @@
# MariaDB MaxScale 2.2.1 Release Notes
Release 2.2.1 is a Beta release.
This document describes the changes in release 2.2.1, when compared to
release 2.2.0.
For any problems you encounter, please consider submitting a bug
report at [Jira](https://jira.mariadb.org).
## Changed Features
### Binlog server
- MariaDB 10 GTID is always enabled for slave connections.
- Binlog storage is automatically set to 'tree' mode when the
_mariadb10_master_gtid_ option is on.
## Dropped Features
## New Features
### REST API Relationship Endpoints
The _servers_, _monitors_ and _services_ types now support direct updating of
relationships via the `relationships` endpoints. This conforms to the JSON API
specification on updating resource relationships.
For more information, refer to the REST API documentation. An example of this
can be found in the
[Server Resource documentation](../REST-API/Resources-Server.md#update-server-relationships).
### PL/SQL Compatibility
The parser of MaxScale has been extended to support the PL/SQL compatibility
features of the upcoming 10.3 release. For more information on how to enable
this mode, please refer to the
[configuration guide](../Getting-Started/Configuration-Guide.md#sql_mode).
This functionality was already available in MaxScale 2.2.0.
### Environment Variables in the configuration file
If the global configuration entry `substitute_variables` is set to true
and the first character of a value in the configuration file is a `$`,
then everything following the `$` is interpreted as the name of an
environment variable and the configuration value is replaced with the
value of that variable. For more information please consult the
[Configuration Guide](../Getting-Started/Configuration-Guide.md).
## Bug fixes
[Here is a list of bugs fixed in MaxScale 2.2.1.](https://jira.mariadb.org/issues/?jql=project%20%3D%20MXS%20AND%20issuetype%20%3D%20Bug%20AND%20status%20%3D%20Closed%20AND%20fixVersion%20%3D%202.2.1)
## Known Issues and Limitations
There are some limitations and known issues within this version of MaxScale.
For more information, please refer to the [Limitations](../About/Limitations.md) document.
## Packaging
RPM and Debian packages are provided for the Linux distributions supported
by MariaDB Enterprise.
Packages can be downloaded [here](https://mariadb.com/resources/downloads).
## Source Code
The source code of MaxScale is tagged at GitHub with a tag that is identical
to the version of MaxScale. For instance, the tag of version X.Y.Z of MaxScale
is X.Y.Z. Further, *master* always refers to the latest released non-beta version.
The source code is available [here](https://github.com/mariadb-corporation/MaxScale).


@ -159,15 +159,39 @@ the router options.
### `mariadb10-compatibility`
This parameter allows binlogrouter to replicate from a MariaDB 10.0 master
server. If `mariadb10_slave_gtid` is not enabled GTID will not be used in the
replication. This parameter is enabled by default since MaxScale 2.2.0. In
earlier versions the parameter was disabled by default.
server: this parameter is enabled by default since MaxScale 2.2.0.
In earlier versions the parameter was disabled by default.
```
# Example
router_options=mariadb10-compatibility=1
```
Additionally, since MaxScale 2.2.1, MariaDB 10.x slave servers can connect
to the binlog server using a GTID value instead of the binlog file name
and position.
Example of a MariaDB 10.x slave connection to MaxScale:
```
MariaDB> SET @@global.gtid_slave_pos='0-10122-230';
MariaDB> CHANGE MASTER TO
MASTER_HOST='192.168.10.8',
MASTER_PORT=5306,
MASTER_USE_GTID=Slave_pos;
MariaDB> START SLAVE;
```
**Note:**
- Slave servers can connect either with _file_ and _pos_ or GTID.
- MaxScale saves all the incoming MariaDB GTIDs (DDLs and DMLs)
in an SQLite3 database located in _binlogdir_ (`gtid_maps.db`).
When a slave server connects with a GTID request, a lookup is made for
a matching value and the subsequent binlog events are sent.
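The lookup described above can be modeled with Python's `sqlite3` module. The schema below is a deliberate simplification for illustration; the actual layout of `gtid_maps.db` is not documented here and may differ:

```python
import sqlite3

# Hypothetical, simplified schema mapping a GTID to a binlog position.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE gtid_maps (
        gtid        TEXT PRIMARY KEY,  -- e.g. '0-10122-230'
        binlog_file TEXT,              -- binlog file holding the event
        end_pos    INTEGER             -- position just after the event
    )""")
con.execute("INSERT INTO gtid_maps VALUES ('0-10122-230', 'mysql-bin.000061', 4322)")

def lookup(gtid):
    """Return (binlog_file, end_pos) for a slave's GTID request, or None."""
    return con.execute(
        "SELECT binlog_file, end_pos FROM gtid_maps WHERE gtid = ?",
        (gtid,)).fetchone()

print(lookup("0-10122-230"))  # -> ('mysql-bin.000061', 4322)
```

A successful lookup tells the router from which file and position to start streaming binlog events to the requesting slave.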
### `transaction_safety`
This parameter is used to enable/disable incomplete transactions detection in
@ -271,29 +295,6 @@ Example:
3;bbbbbbbbbaaaaaaabbbbbccccceeeddddd3333333ddddaaaaffffffeeeeecccd
```
### `mariadb10_slave_gtid`
If enabled this option allows MariaDB 10.x slave servers to connect to binlog
server using GTID value instead of binlog_file name and position.
MaxScale saves all the incoming MariaDB GTIDs (DDLs and DMLs)
in a sqlite3 database located in _binlogdir_ (`gtid_maps.db`).
When a slave server connects with a GTID request a lookup is made for
the value match and following binlog events will be sent.
Default option value is _off_.
Example of a MariaDB 10.x slave connection to MaxScale
```
MariaDB> SET @@global.gtid_slave_pos='0-10122-230';
MariaDB> CHANGE MASTER TO
MASTER_HOST='192.168.10.8',
MASTER_PORT=5306,
MASTER_USE_GTID=Slave_pos;
MariaDB> START SLAVE;
```
**Note:** Slave servers can connect either with _file_ and _pos_ or GTID.
### `mariadb10_master_gtid`
This option allows MaxScale binlog router to register
with MariaDB 10.X master using GTID instead of _binlog_file_ name
@ -336,22 +337,6 @@ in the binlog files with ignorable events.
- It's not possible to specify the GTID _domain_id_: the master's domain
is used for all operations. All slave servers must use the same replication
domain as the master server.
### `binlog_structure`
This option controls the way binlog file are saved in the _binlogdir_:
there are two possible values, `flat | tree`
The `tree` mode can only be set with `mariadb10_master_gtid=On`
- `flat` is the default value, files are saved as usual.
- `tree` enables the saving of files using this hierarchy model:
_binlogdir_/_domain_id_/_server_id_/_filename_
The _tree_ structure easily allows the changing of the master server
without caring about binlog filename and sequence:
just change _host_ and _port_, the replication will
resume from last GTID MaxScale has seen.
### `master_retry_count`
This option sets the maximum number of connection retries when the master server is disconnected or not reachable.
@ -390,9 +375,7 @@ follows.
encrypt_binlog=1,
encryption_algorithm=aes_ctr,
encryption_key_file=/var/binlogs/enc_key.txt,
mariadb10_slave_gtid=On,
mariadb10_master_gtid=Off,
binlog_structure=flat,
slave_hostname=maxscale-blr-1,
master_retry_count=1000,
connect_retry=60


@ -66,9 +66,7 @@ A **complete example** of a service entry for a binlog router service would be a
encrypt_binlog=On,
encryption_algorithm=aes_ctr,
encryption_key_file=/var/binlogs/enc_key.txt,
mariadb10_slave_gtid=On,
mariadb10_master_gtid=Off,
binlog_structure=flat,
slave_hostname=maxscale-blr-1,
master_retry_count=1000,
connect_retry=60
@ -199,7 +197,7 @@ If a slave is connected to MaxScale with SSL, an entry will be present in the Sl
Slave connected with SSL: Established
```
If option `mariadb10_slave_gtid=On` last seen GTID is shown:
If option `mariadb10-compatibility=On` last seen GTID is shown:
```
Last seen MariaDB GTID: 0-10124-282
@ -254,7 +252,7 @@ Master_SSL_Verify_Server_Cert: No
Master_Info_File: /home/maxscale/binlog/first/binlogs/master.ini
```
If the option `mariadb10_slave_gtid` is set to On, the last seen GTID is shown:
If the option `mariadb10-compatibility` is set to On, the last seen GTID is shown:
```
Using_Gtid: No
@ -278,14 +276,11 @@ slaves must not use *MASTER_AUTO_POSITION = 1* option.
It also works with a MariaDB 10.X setup (master and slaves).
Starting from MaxScale 2.2 the slave connections may include **GTID** feature
`MASTER_USE_GTID=Slave_pos` if option *mariadb10_slave_gtid* has been set.
The default is that a slave connection must not include any GTID
feature: `MASTER_USE_GTID=no`
Starting from MaxScale 2.2.1 the slave connections may optionally include the
**GTID** feature `MASTER_USE_GTID=Slave_pos`: only the *mariadb10-compatibility* option is required.
Starting from MaxScale 2.2 it's also possible to register to MariaDB 10.X master using
**GTID** using the two new options *mariadb10_master_gtid* and *binlog_structure*.
**GTID**, with the new option *mariadb10_master_gtid*.
Current GTID implementation limitations:
@ -458,19 +453,12 @@ error logs and in *SHOW SLAVE STATUS*,
##### MariaDB 10 GTID
If _mariadb10_master_gtid_ is On changing the master doesn't require the setting of a
new _file_ and _pos_, just specify new host and port with CHANGE MASTER; depending on the _binlog_structure_ values some additional steps migth be required.
new _file_ and _pos_, just specify new host and port with CHANGE MASTER.
If _binlog_structure=flat_, in order to keep previous binlog files untouched in MaxScale _binlogdir_ (no overwriting),
the next in sequence file must exist in the Master server, as per above scenario _file and pos_ (2).
It migth also happen that each server in the replication setup has its own binlog file name
convention (server1_bin, server2_bin etc) or the user doesn't want to care at all about
name and sequence. The _binlog_structure_ option set to _tree_ value simplifies the change
master process: as the binlog files are saved using a hierarchy model
As the binlog files will be automatically saved using a hierarchy model
(_binlogdir/domain_id/server_id/filename_), MaxScale can work with any filename and any
sequence and no binlog file will be overwritten by accident.
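The hierarchy model above boils down to a simple path computation. A sketch in Python (the function name is illustrative, not part of MaxScale):

```python
import posixpath

def tree_binlog_path(binlogdir, domain_id, server_id, filename):
    """Compute the storage path used by the 'tree' hierarchy model:
    binlogdir/domain_id/server_id/filename.

    Because the path embeds the master's domain and server ids, files from
    different masters can never collide, whatever their names and sequence
    numbers are.
    """
    return posixpath.join(binlogdir, str(domain_id), str(server_id), filename)

print(tree_binlog_path("/var/lib/maxscale/binlogs", 0, 10333, "mysql-bin.000001"))
# -> /var/lib/maxscale/binlogs/0/10333/mysql-bin.000001
```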
**Scenario** example:
Let's start by saying it's good practice to issue in the new Master `FLUSH TABLES` which
@ -508,38 +496,17 @@ MariaDB> SELECT @@global.gtid_current_pos;
```
Starting the replication in MaxScale, `START SLAVE`,
will result in new events being downloaded and stored.
If _binlog_structure=flat_ (default), the binlog events are saved in the new file
`mysql-bin.000061`, which should have been created in the Master before starting
replication from MaxScale, see above scenario (2)
If _binlog_structure=tree_, the binlog events are saved in the new file
`0/10333/mysql-bin.000001` (which is the current file in the new master)
The latter example clearly shows that the binlog file has a different sequence number
(1 instead of 61) and possibly a new name.
will result in new events being downloaded and stored in the new file
`0/10333/mysql-bin.000001` (which should be the current file in the new master)
As usual, check for any error in log files and with
MariaDB> SHOW SLAVE STATUS;
By issuing the admin command `SHOW BINARY LOGS` it's possible to see the list
of log files which have been downloaded:
```
MariaDB> SHOW BINARY LOGS;
+------------------+-----------+
| Log_name | File_size |
+------------------+-----------+
| mysql-bin.000113 | 2214 |
...
| mysql-bin.000117 | 535 |
+------------------+-----------+
```
It's possible to follow the _master change_ history if option `binlog_structure=tree`:
the displayed log file names have a prefix with replication domain_id and server_id.
of log files which have been downloaded and to follow the _master change_
history: the displayed log file names are prefixed with the
replication domain_id and server_id.
```
MariaDB> SHOW BINARY LOGS;
@ -574,8 +541,8 @@ be issued for the new configuration.
### Removing binary logs from binlogdir
Since version 2.2, if `mariadb10_slave_gtid` or `mariadb10_master_gtid`
are set to On, it's possible to remove the binlog files from _binlogdir_
Since version 2.2.1, if `mariadb10-compatibility` is set to On,
it's possible to remove the binlog files from _binlogdir_
and delete related entries in GTID repository using the admin
command `PURGE BINARY LOGS TO 'file'`
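A sketch of what the purge does to the file list, assuming the MariaDB semantics of `PURGE BINARY LOGS TO` (all files older than the named one are removed, the named file and newer ones are kept); the helper name is hypothetical:

```python
def purge_binary_logs_to(files, target):
    """Model of PURGE BINARY LOGS TO 'file': files *before* the target are
    removed (in MaxScale, together with their GTID-repository entries);
    the target file itself and newer files are kept.

    `files` must be in binlog sequence order.
    """
    if target not in files:
        raise ValueError("unknown binlog file: %s" % target)
    keep_from = files.index(target)
    purged, kept = files[:keep_from], files[keep_from:]
    return purged, kept

purged, kept = purge_binary_logs_to(
    ["mysql-bin.000113", "mysql-bin.000114", "mysql-bin.000115"],
    "mysql-bin.000115")
print(purged)  # -> ['mysql-bin.000113', 'mysql-bin.000114']
print(kept)    # -> ['mysql-bin.000115']
```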
@ -682,8 +649,8 @@ Example:
```
##### MariaDB 10 GTID
If connecting slaves are MariaDB 10.x it's also possible to connect with GTID,
*mariadb10_slave_gtid=On* has to be set in configuration before starting MaxScale.
Since MaxScale 2.2.1, MariaDB 10.x slaves can optionally connect with GTID;
*mariadb10-compatibility=On* has to be set in the configuration before starting MaxScale.
```
SET @@global.gtid_slave_pos='';
@ -717,7 +684,7 @@ MariaDB> CHANGE MASTER TO
MariaDB> START SLAVE;
```
Additionally, if *mariadb10_slave_gtid=On*, it's also possible to retrieve the list of binlog files downloaded from the master with the new admin command _SHOW BINARY LOGS_:
Additionally, it's possible to retrieve the list of binlog files downloaded from the master with the new admin command _SHOW BINARY LOGS_:
```
MariaDB> SHOW BINARY LOGS;


@ -1,115 +1,35 @@
.TH maxscale 1
.SH NAME
maxscale - An intelligent database proxy
.SH SYNOPSIS
.B maxscale
[\fIOPTIONS...\fR]
.SH DESCRIPTION
MariaDB MaxScale is a database proxy that forwards database statements to one or
more database servers.
MariaDB MaxScale is an intelligent database proxy that allows the forwarding
of database statements to one or more database servers using complex rules, a
semantic understanding of the database statements and the roles of the various
servers within the backend cluster of databases.
The forwarding is performed using rules based on the semantic understanding of
the database statements and on the roles of the servers within the backend
cluster of databases.
MariaDB MaxScale is designed to provide load balancing and high availability
functionality transparently to the applications. In addition it provides
a highly scalable and flexible architecture, with plugin components to
support different protocols and routing decisions.
MariaDB MaxScale is designed to provide, transparently to applications, load
balancing and high availability functionality. MariaDB MaxScale has a scalable
and flexible architecture, with plugin components to support different protocols
and routing approaches.
.SH OPTIONS
.TP
.BR "-c, --config-check"
Validate configuration file and exit.
.TP
.BR "-d, --nodaemon"
Run MaxScale in the terminal process.
.TP
.BR -f " \fIFILE\fB, --config=\fIFILE\fR"
Relative or absolute pathname of MaxScale configuration file to load.
.TP
.BR -l "[\fIfile|shm|stdout\fB], --log=[\fIfile|shm|stdout\fB]"
Log to file, shared memory or standard output. The default is to log to file.
.TP
.BR -L " \fIPATH\fB, --logdir=\fIPATH\fB"
Path to log file directory.
.TP
.BR -A " \fIPATH\fB, --cachedir=\fIPATH\fB"
Path to cache directory. This is where MaxScale stores cached authentication data.
.TP
.BR -B " \fIPATH\fB, --libdir=\fIPATH\fB"
Path to module directory. Modules are only searched from this folder.
.TP
.BR -C " \fIPATH\fB, --configdir=\fIPATH\fB"
Path to configuration file directory. MaxScale will look for the \fImaxscale.cnf\fR file from this folder.
.TP
.BR -D " \fIPATH\fB, --datadir=\fIPATH\fB"
Path to data directory. This is where the embedded mysql tables are stored in addition to other MaxScale specific data.
.TP
.BR -E " \fIPATH\fB, --execdir=\fIPATH\fB"
Location of the executable files. When internal processes are launched from within MaxScale the binaries are assumed to be in this directory. If you have a custom location for binary executable files you need to add this parameter.
.TP
.BR -F " \fIPATH\fB, --persistdir=\fIPATH\fB"
Location of persisted configuration files. These files are created when configuration is changed during runtime so that the changes may be reapplied at startup.
.TP
.BR -M " \fIPATH\fB, --module_configdir=\fIPATH\fB"
Location of module configuration files.
.TP
.BR -H " \fIPATH\fB, --connector_plugindir=\fIPATH\fB"
Location of MariaDB Connector-C plugin.
.TP
.BR -N " \fIPATH\fB, --language=\fIPATH\fB"
Location of errmsg.sys file.
.TP
.BR -P " \fIPATH\fB, --piddir=\fIPATH\fB"
Location of MaxScale's PID file.
.TP
.BR -R " \fIPATH\fB, --basedir=\fIPATH\fB"
Base path for all other paths.
.TP
.BR -U " \fIUSER\fB, --user=\fIUSER\fB"
Run MaxScale as another user. The user ID and group ID of this user are used to run MaxScale.
.TP
.BR -s " [\fIyes\fB|\fIno\fB], --syslog=[\fIyes\fB|\fIno\fB]"
Log messages to syslog.
.TP
.BR -S " [\fIyes\fB|\fIno\fB], \fB--maxlog=[\fIyes\fB|\fIno\fB]"
Log messages to MaxScale's own log files.
.TP
.BR -G " [\fI0\fB|\fI1\fB], \fB--log_augmentation=[\fI0\fB|\fI1\fB]"
Augment messages with the name of the function where the message was logged (default: 0).
.TP
.BR -g " [\fIArg1\fB,\fIArg2\fB,...], \fB--debug=[\fIArg1\fB,\fIArg2\fB,...]"
Enable or disable debug features. Supported arguments:
\fBdisable-module-unloading \fRShared libraries are not unloaded on exit, may give better Valgrind leak reports.
\fBenable-module-unloading \fREnable unloading. Default setting.
.TP
.BR "-v, --version"
Print version information and exit.
.TP
.BR "-V, --version-full"
Print full version information including the Git commit the binary was built from and exit.
.TP
.BR "-?, --help"
Show the help information for MaxScale and exit.
.SH EXAMPLES
Tutorials on GitHub:
Quickstart Guide:
.RS
.I https://github.com/mariadb-corporation/MaxScale/blob/2.1/Documentation/Documentation-Contents.md#tutorials
.I https://mariadb.com/kb/en/mariadb-enterprise/mariadb-maxscale-22-setting-up-mariadb-maxscale/
.RE
.SH SEE ALSO
The MariaDB MaxScale documentation on the MariaDB Knowledge Base:
Installation Guide:
.RS
.I https://mariadb.com/kb/en/mariadb-enterprise/mariadb-maxscale/
.I https://mariadb.com/kb/en/mariadb-enterprise/mariadb-maxscale-22-mariadb-maxscale-installation-guide/
.RE
The MariaDB MaxScale documentation on GitHub:
MaxScale Documentation:
.RS
.I https://github.com/mariadb-corporation/MaxScale/blob/2.1/Documentation/Documentation-Contents.md
.I https://mariadb.com/kb/en/mariadb-enterprise/mariadb-maxscale-22-contents/
.RE
.SH BUGS
You can see a list of known bugs and report new bugs at:


@ -32,7 +32,7 @@ MXS_BEGIN_DECLS
/** Default port where the REST API listens */
#define DEFAULT_ADMIN_HTTP_PORT 8989
#define DEFAULT_ADMIN_HOST "::"
#define DEFAULT_ADMIN_HOST "127.0.0.1"
#define RELEASE_STR_LENGTH 256
#define SYSNAME_LEN 256
@ -158,6 +158,7 @@ extern const char CN_SSL_CERT_VERIFY_DEPTH[];
extern const char CN_SSL_KEY[];
extern const char CN_SSL_VERSION[];
extern const char CN_STRIP_DB_ESC[];
extern const char CN_SUBSTITUTE_VARIABLES[];
extern const char CN_THREADS[];
extern const char CN_THREAD_STACK_SIZE[];
extern const char CN_TYPE[];
@ -228,6 +229,7 @@ typedef struct
char admin_ssl_ca_cert[PATH_MAX]; /**< Admin SSL CA cert */
int query_retries; /**< Number of times a interrupted query is retried */
time_t query_retry_timeout; /**< Timeout for query retries */
bool substitute_variables; /**< Should environment variables be substituted */
} MXS_CONFIG;
/**

View File

@ -340,7 +340,7 @@ json_t* monitor_list_to_json(const char* host);
* @param server Server to inspect
* @param host Hostname of this server
*
* @return Array of monitor links
* @return Array of monitor links or NULL if no relations exist
*/
json_t* monitor_relations_to_server(const SERVER* server, const char* host);


@ -341,7 +341,7 @@ json_t* service_listener_to_json(const SERVICE* service, const char* name, const
* @param server Server to inspect
* @param host Hostname of this server
*
* @return Array of service links
* @return Array of service links or NULL if no relations exist
*/
json_t* service_relations_to_server(const SERVER* server, const char* host);


@ -157,14 +157,14 @@ describe('Cluster Sync', function() {
before(startDoubleMaxScale)
it('sync after server creation', function() {
return doCommand('create server server5 127.0.0.1 3003 --hosts 127.0.0.1:8990')
.then(() => verifyCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989',
return doCommand('create server server5 127.0.0.1 3003 --hosts ' + secondary_host)
.then(() => verifyCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host,
'servers/server5'))
})
it('sync after server alteration', function() {
return doCommand('alter server server2 port 3000 --hosts 127.0.0.1:8990')
.then(() => verifyCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989',
return doCommand('alter server server2 port 3000 --hosts ' + secondary_host)
.then(() => verifyCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host,
'servers/server2'))
.then(function(res) {
res.data.attributes.parameters.port.should.equal(3000)
@ -172,21 +172,21 @@ describe('Cluster Sync', function() {
})
it('sync after server deletion', function() {
return doCommand('destroy server server5 --hosts 127.0.0.1:8990')
.then(() => verifyCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989',
return doCommand('destroy server server5 --hosts ' + secondary_host)
.then(() => verifyCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host,
'servers/server5'))
.should.be.rejected
})
it('sync after monitor creation', function() {
return doCommand('create monitor my-monitor-2 mysqlmon --hosts 127.0.0.1:8990')
.then(() => verifyCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989',
return doCommand('create monitor my-monitor-2 mysqlmon --hosts ' + secondary_host)
.then(() => verifyCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host,
'monitors/my-monitor-2'))
})
it('sync after monitor alteration', function() {
return doCommand('alter monitor MySQL-Monitor monitor_interval 12345 --hosts 127.0.0.1:8990')
.then(() => verifyCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989',
return doCommand('alter monitor MySQL-Monitor monitor_interval 12345 --hosts ' + secondary_host)
.then(() => verifyCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host,
'monitors/MySQL-Monitor'))
.then(function(res) {
res.data.attributes.parameters.monitor_interval.should.equal(12345)
@ -194,17 +194,17 @@ describe('Cluster Sync', function() {
})
it('sync after monitor deletion', function() {
return doCommand('destroy monitor my-monitor-2 --hosts 127.0.0.1:8990')
.then(() => doCommand('show monitor my-monitor-2 --hosts 127.0.0.1:8989'))
.then(() => doCommand('show monitor my-monitor-2 --hosts 127.0.0.1:8990').should.be.rejected)
.then(() => doCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989'))
.then(() => doCommand('show monitor my-monitor-2 --hosts 127.0.0.1:8989').should.be.rejected)
.then(() => doCommand('show monitor my-monitor-2 --hosts 127.0.0.1:8990').should.be.rejected)
return doCommand('destroy monitor my-monitor-2 --hosts ' + secondary_host)
.then(() => doCommand('show monitor my-monitor-2 --hosts ' + primary_host))
.then(() => doCommand('show monitor my-monitor-2 --hosts ' + secondary_host).should.be.rejected)
.then(() => doCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host))
.then(() => doCommand('show monitor my-monitor-2 --hosts ' + primary_host).should.be.rejected)
.then(() => doCommand('show monitor my-monitor-2 --hosts ' + secondary_host).should.be.rejected)
})
it('sync after service alteration', function() {
return doCommand('alter service RW-Split-Router enable_root_user true --hosts 127.0.0.1:8990')
.then(() => verifyCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989',
return doCommand('alter service RW-Split-Router enable_root_user true --hosts ' + secondary_host)
.then(() => verifyCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host,
'services/RW-Split-Router'))
.then(function(res) {
res.data.attributes.parameters.enable_root_user.should.be.true
@ -214,16 +214,28 @@ describe('Cluster Sync', function() {
// As the listeners cannot be truly deleted, since there's no code for actually closing a socket at runtime,
// we do the listener tests last
it('sync listener creation/deletion', function() {
return doCommand('create listener RW-Split-Router my-listener-2 5999 --hosts 127.0.0.1:8990')
// As both MaxScales are on the same machine, both can't listen on the same port. The sync should fail due to this
.then(() => doCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989').should.be.rejected)
// Create the listener on the second MaxScale to avoid it being synced later on
.then(() => doCommand('create listener RW-Split-Router my-listener-2 5998 --hosts 127.0.0.1:8989'))
// Sync after creation should succeed
.then(() => doCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989'))
// Destroy the created listener, should succeed
.then(() => doCommand('destroy listener RW-Split-Router my-listener-2'))
.then(() => doCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989'))
if (primary_host == '127.0.0.1:8989' && secondary_host == '127.0.0.1:8990') {
// Test with both MaxScales on the same machine
return doCommand('create listener RW-Split-Router my-listener-2 5999 --hosts ' + secondary_host)
// As both MaxScales are on the same machine, both can't listen on the same port. The sync should fail due to this
.then(() => doCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host).should.be.rejected)
// Create the listener on the second MaxScale to avoid it being synced later on
.then(() => doCommand('create listener RW-Split-Router my-listener-2 5998 --hosts ' + primary_host))
// Sync after creation should succeed
.then(() => doCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host))
// Destroy the created listener, should succeed
.then(() => doCommand('destroy listener RW-Split-Router my-listener-2'))
.then(() => doCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host))
} else {
// MaxScales are on different machines
return doCommand('create listener RW-Split-Router my-listener-2 5999 --hosts ' + secondary_host)
// As the MaxScales are on different machines, both can listen on the same port and the sync should succeed
.then(() => doCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host))
.then(() => doCommand('destroy listener RW-Split-Router my-listener-2'))
.then(() => doCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host))
}
})
after(stopDoubleMaxScale)
@ -273,36 +285,36 @@ describe('Cluster Diff', function() {
before(startDoubleMaxScale)
it('diff after server creation', function() {
return doCommand('create server server5 127.0.0.1 3003 --hosts 127.0.0.1:8990')
.then(() => doCommand('cluster diff 127.0.0.1:8990 --hosts 127.0.0.1:8989'))
return doCommand('create server server5 127.0.0.1 3003 --hosts ' + secondary_host)
.then(() => doCommand('cluster diff ' + secondary_host + ' --hosts ' + primary_host))
.then(function(res) {
var d = parseDiff(res)
d.removed.servers.length.should.equal(1)
d.removed.servers[0].id.should.equal('server5')
})
.then(() => doCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989'))
.then(() => doCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host))
})
it('diff after server alteration', function() {
return doCommand('alter server server2 port 3000 --hosts 127.0.0.1:8990')
.then(() => doCommand('cluster diff 127.0.0.1:8990 --hosts 127.0.0.1:8989'))
return doCommand('alter server server2 port 3000 --hosts ' + secondary_host)
.then(() => doCommand('cluster diff ' + secondary_host + ' --hosts ' + primary_host))
.then(function(res) {
var d = parseDiff(res)
d.changed.servers.length.should.equal(1)
d.changed.servers[0].id.should.equal('server2')
})
.then(() => doCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989'))
.then(() => doCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host))
})
it('diff after server deletion', function() {
return doCommand('destroy server server5 --hosts 127.0.0.1:8990')
.then(() => doCommand('cluster diff 127.0.0.1:8990 --hosts 127.0.0.1:8989'))
return doCommand('destroy server server5 --hosts ' + secondary_host)
.then(() => doCommand('cluster diff ' + secondary_host + ' --hosts ' + primary_host))
.then(function(res) {
var d = parseDiff(res)
d.added.servers.length.should.equal(1)
d.added.servers[0].id.should.equal('server5')
})
.then(() => doCommand('cluster sync 127.0.0.1:8990 --hosts 127.0.0.1:8989'))
.then(() => doCommand('cluster sync ' + secondary_host + ' --hosts ' + primary_host))
})
after(stopDoubleMaxScale)

View File

@ -15,12 +15,19 @@ module.exports = function() {
this.expect = chai.expect
this.host = 'http://localhost:8989/v1/'
this.primary_host = '127.0.0.1:8989'
this.secondary_host = '127.0.0.1:8990'
if (process.env.maxscale2_API) {
this.secondary_host = process.env.maxscale2_API
}
// Start MaxScale, this should be called in the `before` handler of each test unit
this.startMaxScale = function() {
return new Promise(function(resolve, reject) {
child_process.execFile("./start_maxscale.sh", function(err, stdout, stderr) {
if (err) {
reject()
reject(err)
} else {
resolve()
}
@ -33,7 +40,7 @@ module.exports = function() {
return new Promise(function(resolve, reject) {
child_process.execFile("./start_double_maxscale.sh", function(err, stdout, stderr) {
if (err) {
reject()
reject(err)
} else {
resolve()
}
@ -46,7 +53,7 @@ module.exports = function() {
return new Promise(function(resolve, reject) {
child_process.execFile("./stop_maxscale.sh", function(err, stdout, stderr) {
if (err) {
reject()
reject(err)
} else {
resolve()
}
@ -59,7 +66,7 @@ module.exports = function() {
return new Promise(function(resolve, reject) {
child_process.execFile("./stop_double_maxscale.sh", function(err, stdout, stderr) {
if (err) {
reject()
reject(err)
} else {
resolve()
}
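The fallback above decides which MaxScale instance the cluster tests talk to; the same selection logic in isolation (the override value is whatever `maxscale2_API` is set to, as done by test_maxctrl.cpp):

```javascript
// Mirrors the secondary-host fallback in common.js: default to the local
// instance unless the maxscale2_API environment variable overrides it.
var secondary_host = '127.0.0.1:8990'
if (process.env.maxscale2_API) {
    secondary_host = process.env.maxscale2_API
}
console.log(secondary_host)
```

Running the suite as `maxscale2_API=<host>:8989 npm test` points the cluster tests at a remote secondary MaxScale.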

View File

@ -501,6 +501,10 @@ add_test_executable(mxs1451_skip_auth.cpp mxs1451_skip_auth mxs1451_skip_auth LA
# https://jira.mariadb.org/browse/MXS-1457
add_test_executable(mxs1457_ignore_deleted.cpp mxs1457_ignore_deleted mxs1457_ignore_deleted LABELS REPL_BACKEND)
# MXS-1468: Using dynamic commands to create readwritesplit configs fail after restart
# https://jira.mariadb.org/browse/MXS-1468
add_test_executable(mxs1468.cpp mxs1468 mxs1468 LABELS REPL_BACKEND)
# MXS-1493: Use replication heartbeat in mysqlmon
# https://jira.mariadb.org/browse/MXS-1493
add_test_executable(verify_master_failure.cpp verify_master_failure verify_master_failure LABELS REPL_BACKEND)
@ -567,7 +571,7 @@ add_test_executable(rwsplit_multi_stmt.cpp rwsplit_multi_stmt rwsplit_multi_stmt
add_test_executable(rwsplit_read_only_trx.cpp rwsplit_read_only_trx rwsplit_read_only_trx LABELS readwritesplit REPL_BACKEND)
# Test replication-manager with MaxScale
#add_test_executable(replication_manager.cpp replication_manager replication_manager LABELS maxscale REPL_BACKEND)
add_test_executable_notest(replication_manager.cpp replication_manager replication_manager LABELS maxscale REPL_BACKEND)
#add_test_executable_notest(replication_manager_2nodes.cpp replication_manager_2nodes replication_manager_2nodes LABELS maxscale REPL_BACKEND)
#add_test_executable_notest(replication_manager_3nodes.cpp replication_manager_3nodes replication_manager_3nodes LABELS maxscale REPL_BACKEND)
@ -640,6 +644,9 @@ add_test_executable(temporal_tables.cpp temporal_tables replication LABELS readw
# Test routing hints
add_test_executable(test_hints.cpp test_hints hints2 LABELS hintfilter LIGHT REPL_BACKEND)
# Run MaxCtrl test suite
add_test_executable(test_maxctrl.cpp test_maxctrl maxctrl LABELS REPL_BACKEND)
# Binlogrouter tests, these heavily alter the replication so they are run last
add_test_executable(avro.cpp avro avro LABELS avrorouter binlogrouter LIGHT BREAKS_REPL)
add_test_executable(avro_alter.cpp avro_alter avro LABELS avrorouter binlogrouter LIGHT BREAKS_REPL)

View File

@ -0,0 +1,132 @@
[maxscale]
threads=4
admin_auth=false
log_info=1
admin_host=::
[MySQL Monitor]
type=monitor
module=mysqlmon
servers=server1,server2,server3,server4
user=maxskysql
password=skysql
monitor_interval=10000
[RW Split Router]
type=service
router=readwritesplit
servers=server1,server2,server3,server4
user=maxskysql
password=skysql
max_slave_connections=100%
[SchemaRouter Router]
type=service
router=schemarouter
servers=server1,server2,server3,server4
user=maxskysql
password=skysql
auth_all_servers=1
[RW Split Hint Router]
type=service
router=readwritesplit
servers=server1,server2,server3,server4
user=maxskysql
password=skysql
max_slave_connections=100%
filters=Hint
[Read Connection Router]
type=service
router=readconnroute
router_options=master
servers=server1
user=maxskysql
password=skysql
filters=QLA
[Hint]
type=filter
module=hintfilter
[recurse3]
type=filter
module=tee
service=RW Split Router
[recurse2]
type=filter
module=tee
service=Read Connection Router
[recurse1]
type=filter
module=tee
service=RW Split Hint Router
[QLA]
type=filter
module=qlafilter
log_type=unified
append=false
flush=true
filebase=/tmp/qla.log
[CLI]
type=service
router=cli
[Read Connection Listener]
type=listener
service=Read Connection Router
protocol=MySQLClient
port=4008
[RW Split Listener]
type=listener
service=RW Split Router
protocol=MySQLClient
port=4006
[SchemaRouter Listener]
type=listener
service=SchemaRouter Router
protocol=MySQLClient
port=4010
[RW Split Hint Listener]
type=listener
service=RW Split Hint Router
protocol=MySQLClient
port=4009
[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
socket=default
[server1]
type=server
address=###node_server_IP_1###
port=###node_server_port_1###
protocol=MySQLBackend
[server2]
type=server
address=###node_server_IP_2###
port=###node_server_port_2###
protocol=MySQLBackend
[server3]
type=server
address=###node_server_IP_3###
port=###node_server_port_3###
protocol=MySQLBackend
[server4]
type=server
address=###node_server_IP_4###
port=###node_server_port_4###
protocol=MySQLBackend

View File

@ -0,0 +1,18 @@
[maxscale]
threads=###threads###
[rwsplit-service]
type=service
router=readwritesplit
user=maxskysql
passwd=skysql
[CLI]
type=service
router=cli
[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
socket=default

View File

@ -12,6 +12,8 @@ monitor_interval=1000
detect_standalone_master=true
failcount=2
allow_cluster_recovery=true
events=master_down
script=/home/vagrant/replication-manager --hosts=$LIST --user=skysql:skysql --rpluser=skysql:skysql --switchover-at-sync=false --log-level=3 --logfile=/tmp/mrm.log switchover
[RW Split Router]
type=service

View File

@ -117,9 +117,6 @@ EOF
do_ssh <<EOF
sudo mkdir -p /etc/replication-manager/
sudo cp ./config.toml /etc/replication-manager/config.toml
sudo systemctl stop replication-manager
sudo replication-manager bootstrap --clean-all
sudo systemctl restart replication-manager
EOF
}

View File

@ -1270,6 +1270,7 @@ static void wait_until_pos(MYSQL *mysql, int filenum, int pos)
{
int slave_filenum = 0;
int slave_pos = 0;
bool error = false;
do
{
@ -1284,17 +1285,23 @@ static void wait_until_pos(MYSQL *mysql, int filenum, int pos)
if (res)
{
MYSQL_ROW row = mysql_fetch_row(res);
error = true;
if (row && row[6] && row[21])
if (row && row[5] && row[21])
{
char *file_suffix = strchr(row[5], '.') + 1;
slave_filenum = atoi(file_suffix);
slave_pos = atoi(row[21]);
char *file_suffix = strchr(row[5], '.');
if (file_suffix)
{
file_suffix++;
slave_filenum = atoi(file_suffix);
slave_pos = atoi(row[21]);
error = false;
}
}
mysql_free_result(res);
}
}
while (slave_filenum < filenum || slave_pos < pos);
while ((slave_filenum < filenum || slave_pos < pos) && !error);
}
void Mariadb_nodes::sync_slaves(int node)

View File

@ -0,0 +1,47 @@
#!/bin/bash
cat <<EOF > start_maxscale.sh
#!/bin/bash
sudo systemctl start maxscale
EOF
cat <<EOF >stop_maxscale.sh
#!/bin/bash
sudo systemctl stop maxscale
sudo rm -rf /var/lib/maxscale/*
sudo rm -rf /var/cache/maxscale/*
sudo rm -rf /var/run/maxscale/*
if [ -f /tmp/maxadmin.sock ]
then
sudo rm /tmp/maxadmin.sock
fi
EOF
cat <<EOF >start_double_maxscale.sh
#!/bin/bash
sudo systemctl start maxscale
ssh -i ~/maxscale_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=quiet $galera_003_whoami@$galera_003_network "sudo systemctl start maxscale"
EOF
cat <<EOF >stop_double_maxscale.sh
#!/bin/bash
sudo systemctl stop maxscale
sudo rm -rf /var/lib/maxscale/*
sudo rm -rf /var/cache/maxscale/*
sudo rm -rf /var/run/maxscale/*
test ! -f /tmp/maxadmin.sock || sudo rm /tmp/maxadmin.sock
ssh -i ~/maxscale_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=quiet $galera_003_whoami@$galera_003_network "sudo systemctl stop maxscale"
ssh -i ~/maxscale_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=quiet $galera_003_whoami@$galera_003_network "sudo rm -rf /var/lib/maxscale/*"
ssh -i ~/maxscale_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=quiet $galera_003_whoami@$galera_003_network "sudo rm -rf /var/cache/maxscale/*"
ssh -i ~/maxscale_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=quiet $galera_003_whoami@$galera_003_network "sudo rm -rf /var/run/maxscale/*"
ssh -i ~/maxscale_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=quiet $galera_003_whoami@$galera_003_network "sudo rm -rf /tmp/maxadmin.sock"
EOF
chmod +x *.sh

View File

@ -0,0 +1,37 @@
/**
* MXS-1468: Using dynamic commands to create readwritesplit configs fail after restart
*
* https://jira.mariadb.org/browse/MXS-1468
*/
#include "testconnections.h"
int main(int argc, char** argv)
{
TestConnections test(argc, argv);
test.verbose = true;
test.ssh_maxscale(true,
"maxadmin create monitor cluster-monitor mysqlmon;"
"maxadmin alter monitor cluster-monitor user=maxskysql password=skysql monitor_interval=1000;"
"maxadmin restart monitor cluster-monitor;"
"maxadmin create listener rwsplit-service rwsplit-listener 0.0.0.0 4006;"
"maxadmin create listener rwsplit-service rwsplit-listener2 0.0.0.0 4008;"
"maxadmin create listener rwsplit-service rwsplit-listener3 0.0.0.0 4009;"
"maxadmin list listeners;"
"maxadmin create server prod_mysql01 %s 3306;"
"maxadmin create server prod_mysql02 %s 3306;"
"maxadmin create server prod_mysql03 %s 3306;"
"maxadmin list servers;"
"maxadmin add server prod_mysql02 cluster-monitor rwsplit-service;"
"maxadmin add server prod_mysql01 cluster-monitor rwsplit-service;"
"maxadmin add server prod_mysql03 cluster-monitor rwsplit-service;"
"maxadmin list servers;", test.repl->IP[0], test.repl->IP[1], test.repl->IP[2]);
test.verbose = false;
test.tprintf("Restarting MaxScale");
test.add_result(test.restart_maxscale(), "Restart should succeed");
test.check_maxscale_alive();
return test.global_result;
}

View File

@ -21,9 +21,9 @@ void get_output(TestConnections& test)
test.tprintf("%s", output);
free(output);
test.tprintf("replication-manager output:");
test.tprintf("MaxScale output:");
output = test.ssh_maxscale_output(true,
"cat /var/log/replication-manager.log && sudo truncate -s 0 /var/log/replication-manager.log");
"cat /var/log/maxscale/maxscale.log && sudo truncate -s 0 /var/log/maxscale/maxscale.log");
test.tprintf("%s", output);
free(output);
}
@ -66,6 +66,21 @@ void check(TestConnections& test)
mysql_close(conn);
}
int get_server_id(TestConnections& test)
{
MYSQL *conn = test.open_rwsplit_connection();
int id = -1;
char str[1024];
if (find_field(conn, "SELECT @@server_id", "@@server_id", str) == 0)
{
id = atoi(str);
}
mysql_close(conn);
return id;
}
static bool interactive = false;
void get_input()
@ -83,8 +98,9 @@ int main(int argc, char** argv)
prepare();
TestConnections test(argc, argv);
test.tprintf("Installing replication-manager");
int rc = system("./manage_mrm.sh install > manage_mrm.log");
int rc = system("new_replication_manager=yes ./manage_mrm.sh install > manage_mrm.log");
if (!WIFEXITED(rc) || WEXITSTATUS(rc) != 0)
{
test.tprintf("Failed to install replication-manager, see manage_mrm.log for more details");
@ -98,6 +114,7 @@ int main(int argc, char** argv)
get_input();
test.connect_maxscale();
test.try_query(test.conn_rwsplit, "CREATE OR REPLACE TABLE test.t1(id INT)");
test.repl->sync_slaves();
check(test);
get_output(test);
@ -127,31 +144,34 @@ int main(int argc, char** argv)
check(test);
get_output(test);
test.tprintf("Starting all nodes and wait for replication-manager to fix the replication");
get_input();
test.repl->start_node(0, (char*)"");
sleep(5);
test.repl->start_node(1, (char*)"");
sleep(5);
test.repl->start_node(2, (char*)"");
sleep(5);
test.tprintf("Fix replication and recreate table");
test.close_maxscale_connections();
test.repl->fix_replication();
test.connect_maxscale();
test.try_query(test.conn_rwsplit, "CREATE OR REPLACE TABLE test.t1(id INT)");
test.repl->sync_slaves();
inserts = 0;
check(test);
get_output(test);
test.tprintf("Dropping tables");
get_input();
test.close_maxscale_connections();
test.connect_maxscale();
test.try_query(test.conn_rwsplit, "DROP TABLE test.t1");
test.close_maxscale_connections();
test.tprintf("Disable replication on a slave and kill master, check that it is not promoted");
execute_query(test.repl->nodes[1], "STOP SLAVE; RESET SLAVE; RESET SLAVE ALL;");
test.repl->stop_node(0);
sleep(10);
check(test);
get_output(test);
test.tprintf("Removing replication-manager");
get_input();
system("./manage_mrm.sh remove >> manage_mrm.log");
int id = get_server_id(test);
test.add_result(id == test.repl->get_server_id(1), "Invalid slave should not be used");
// TODO: Figure this also out, remove the component if it's not needed
// test.tprintf("Removing replication-manager");
// get_input();
// system("./manage_mrm.sh remove >> manage_mrm.log");
test.repl->fix_replication();
return test.global_result;
}

View File

@ -0,0 +1,37 @@
/**
* Run MaxCtrl test suite on the MaxScale machine
*/
#include "testconnections.h"
int main(int argc, char *argv[])
{
// Use galera_003 as the secondary MaxScale node
TestConnections::set_secondary_maxscale("galera_003_network", "galera_003_network6");
TestConnections test(argc, argv);
// This is not very nice as it's a bit too intrusive
system("envsubst < maxctrl_scripts.sh.in > maxctrl_scripts.sh");
system("chmod +x maxctrl_scripts.sh");
test.copy_to_maxscale("test_maxctrl.sh", "~");
test.copy_to_maxscale("maxctrl_scripts.sh", "~");
test.ssh_maxscale(true,"ssh-keygen -f maxscale_key -P \"\"");
test.copy_from_maxscale((char*)"~/maxscale_key.pub", (char*)".");
test.galera->copy_to_node("./maxscale_key.pub", "~", 3);
test.galera->ssh_node(3, false, "cat ~/maxscale_key.pub >> ~/.ssh/authorized_keys;"
"sudo iptables -I INPUT -p tcp --dport 8989 -j ACCEPT;");
// TODO: Don't handle test dependencies in tests
test.tprintf("Installing NPM");
test.ssh_maxscale(true,"yum -y install epel-release;yum -y install npm git;");
test.tprintf("Starting test");
test.verbose = true;
int rv = test.ssh_maxscale(true, "export maxscale2_API=%s:8989; ./test_maxctrl.sh", test.galera->IP[3]);
test.verbose = false;
test.tprintf("Removing NPM");
test.ssh_maxscale(true, "yum -y remove npm epel-release");
return rv;
}

View File

@ -0,0 +1,32 @@
#!/bin/bash
# Check branch name
ref=$(maxscale --version-full 2>&1|grep -o ' - .*'|sed 's/ - //')
if [ -z "$ref" ]
then
echo "Error: No commit ID in --version-full output"
exit 1
fi
if [ ! -d MaxScale ]
then
git clone https://www.github.com/mariadb-corporation/MaxScale.git
cd MaxScale
git checkout $ref
cd ..
fi
cd MaxScale/maxctrl
# Create the scripts that start and stop MaxScale
~/maxctrl_scripts.sh
chmod +x *.sh
npm i
# Export the value for --basedir where maxscale binaries are located
export MAXSCALE_DIR=/usr
./stop_maxscale.sh
npm test
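The `grep`/`sed` pipeline in the branch check above strips everything up to and including the `" - "` separator that `--version-full` prints before the commit ID; the extraction on a hypothetical version line:

```shell
# Extract the commit ID using the same pipeline as the branch check above.
echo 'MaxScale 2.2.0 - 3a78b716b8' | grep -o ' - .*' | sed 's/ - //'
# → 3a78b716b8
```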

View File

@ -68,6 +68,25 @@ void TestConnections::multiple_maxscales(bool value)
maxscale::multiple_maxscales = value;
}
void TestConnections::set_secondary_maxscale(const char* ip_var, const char* ip6_var)
{
const char* ip = getenv(ip_var);
const char* ip6 = getenv(ip6_var);
if (ip || ip6)
{
TestConnections::multiple_maxscales(true);
if (ip)
{
setenv("maxscale2_IP", ip, 1);
}
if (ip6)
{
setenv("maxscale2_network6", ip6, 1);
}
}
}
TestConnections::TestConnections(int argc, char *argv[]):
enable_timeouts(true),
global_result(0),
@ -1484,7 +1503,7 @@ int TestConnections::ssh_maxscale(bool sudo, const char* format, ...)
free(sys);
free(cmd);
return rc;
return WEXITSTATUS(rc);
}
int TestConnections::copy_to_maxscale(const char* src, const char* dest)

View File

@ -338,7 +338,10 @@ public:
static void require_galera_version(const char *version);
/** Initialize multiple MaxScale instances */
void multiple_maxscales(bool value);
static void multiple_maxscales(bool value);
/** Set secondary MaxScale address */
static void set_secondary_maxscale(const char* ip_var, const char* ip6_var);
/**
* @brief add_result adds result to global_result and prints error message if result is not 0

View File

@ -137,6 +137,7 @@ const char CN_SSL_CERT_VERIFY_DEPTH[] = "ssl_cert_verify_depth";
const char CN_SSL_KEY[] = "ssl_key";
const char CN_SSL_VERSION[] = "ssl_version";
const char CN_STRIP_DB_ESC[] = "strip_db_esc";
const char CN_SUBSTITUTE_VARIABLES[] = "substitute_variables";
const char CN_THREADS[] = "threads";
const char CN_THREAD_STACK_SIZE[] = "thread_stack_size";
const char CN_TYPE[] = "type";
@ -458,11 +459,27 @@ void fix_section_name(char *section)
* @param value The Parameter value
* @return zero on error
*/
static int
ini_handler(void *userdata, const char *section, const char *name, const char *value)
static int ini_handler(void *userdata, const char *section, const char *name, const char *value)
{
CONFIG_CONTEXT *cntxt = (CONFIG_CONTEXT *)userdata;
CONFIG_CONTEXT *ptr = cntxt;
CONFIG_CONTEXT *cntxt = (CONFIG_CONTEXT *)userdata;
CONFIG_CONTEXT *ptr = cntxt;
if (config_get_global_options()->substitute_variables)
{
if (*value == '$')
{
char* env_value = getenv(value + 1);
if (!env_value)
{
MXS_ERROR("The environment variable %s, used as the value of parameter %s "
"in section %s, does not exist.", value + 1, name, section);
return 0;
}
value = env_value;
}
}
if (strcmp(section, CN_GATEWAY) == 0 || strcasecmp(section, CN_MAXSCALE) == 0)
{

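The substitution above replaces any parameter value beginning with `$` with the contents of the matching environment variable. A minimal configuration sketch, assuming hypothetical `DB1_ADDRESS` and `DB1_PORT` variables and the `substitute_variables` global option added in this release:

```ini
[maxscale]
threads=4
substitute_variables=1

[server1]
type=server
address=$DB1_ADDRESS
port=$DB1_PORT
protocol=MySQLBackend
```

If a referenced variable is not set, parsing fails with the error logged above rather than silently using the literal `$`-prefixed value.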
View File

@ -998,51 +998,61 @@ bool runtime_destroy_monitor(MXS_MONITOR *monitor)
}
static bool extract_relations(json_t* json, StringSet& relations,
const char** relation_types,
const char* relation_type,
bool (*relation_check)(const std::string&, const std::string&))
{
bool rval = true;
json_t* arr = mxs_json_pointer(json, relation_type);
for (int i = 0; relation_types[i]; i++)
if (arr && json_is_array(arr))
{
json_t* arr = mxs_json_pointer(json, relation_types[i]);
size_t size = json_array_size(arr);
if (arr && json_is_array(arr))
for (size_t j = 0; j < size; j++)
{
size_t size = json_array_size(arr);
json_t* obj = json_array_get(arr, j);
json_t* id = json_object_get(obj, CN_ID);
json_t* type = mxs_json_pointer(obj, CN_TYPE);
for (size_t j = 0; j < size; j++)
if (id && json_is_string(id) &&
type && json_is_string(type))
{
json_t* obj = json_array_get(arr, j);
json_t* id = json_object_get(obj, CN_ID);
json_t* type = mxs_json_pointer(obj, CN_TYPE);
std::string id_value = json_string_value(id);
std::string type_value = json_string_value(type);
if (id && json_is_string(id) &&
type && json_is_string(type))
if (relation_check(type_value, id_value))
{
std::string id_value = json_string_value(id);
std::string type_value = json_string_value(type);
if (relation_check(type_value, id_value))
{
relations.insert(id_value);
}
else
{
rval = false;
}
relations.insert(id_value);
}
else
{
rval = false;
}
}
else
{
rval = false;
}
}
}
return rval;
}
static inline bool is_null_relation(json_t* json, const char* relation)
{
std::string str(relation);
size_t pos = str.rfind("/data");
ss_dassert(pos != std::string::npos);
str = str.substr(0, pos);
json_t* data = mxs_json_pointer(json, relation);
json_t* base = mxs_json_pointer(json, str.c_str());
return (data && json_is_null(data)) || (base && json_is_null(base));
}
static inline const char* get_string_or_null(json_t* json, const char* path)
{
const char* rval = NULL;
@ -1158,13 +1168,6 @@ static bool server_contains_required_fields(json_t* json)
return rval;
}
const char* server_relation_types[] =
{
MXS_JSON_PTR_RELATIONSHIPS_SERVICES,
MXS_JSON_PTR_RELATIONSHIPS_MONITORS,
NULL
};
static bool server_relation_is_valid(const std::string& type, const std::string& value)
{
return (type == CN_SERVICES && service_find(value.c_str())) ||
@ -1314,7 +1317,8 @@ SERVER* runtime_create_server_from_json(json_t* json)
StringSet relations;
if (extract_relations(json, relations, server_relation_types, server_relation_is_valid))
if (extract_relations(json, relations, MXS_JSON_PTR_RELATIONSHIPS_SERVICES, server_relation_is_valid) &&
extract_relations(json, relations, MXS_JSON_PTR_RELATIONSHIPS_MONITORS, server_relation_is_valid))
{
if (runtime_create_server(name, address, port.c_str(), protocol, authenticator, authenticator_options))
{
@ -1347,12 +1351,33 @@ bool server_to_object_relations(SERVER* server, json_t* old_json, json_t* new_js
return true;
}
bool rval = false;
const char* server_relation_types[] =
{
MXS_JSON_PTR_RELATIONSHIPS_SERVICES,
MXS_JSON_PTR_RELATIONSHIPS_MONITORS,
NULL
};
bool rval = true;
StringSet old_relations;
StringSet new_relations;
if (extract_relations(old_json, old_relations, server_relation_types, server_relation_is_valid) &&
extract_relations(new_json, new_relations, server_relation_types, server_relation_is_valid))
for (int i = 0; server_relation_types[i]; i++)
{
// Extract only changed or deleted relationships
if (is_null_relation(new_json, server_relation_types[i]) ||
mxs_json_pointer(new_json, server_relation_types[i]))
{
if (!extract_relations(new_json, new_relations, server_relation_types[i], server_relation_is_valid) ||
!extract_relations(old_json, old_relations, server_relation_types[i], server_relation_is_valid))
{
rval = false;
break;
}
}
}
if (rval)
{
StringSet removed_relations;
StringSet added_relations;
@ -1365,10 +1390,10 @@ bool server_to_object_relations(SERVER* server, json_t* old_json, json_t* new_js
old_relations.begin(), old_relations.end(),
std::inserter(added_relations, added_relations.begin()));
if (unlink_server_from_objects(server, removed_relations) &&
link_server_to_objects(server, added_relations))
if (!unlink_server_from_objects(server, removed_relations) ||
!link_server_to_objects(server, added_relations))
{
rval = true;
rval = false;
}
}
@ -1415,11 +1440,46 @@ bool runtime_alter_server_from_json(SERVER* server, json_t* new_json)
return rval;
}
const char* object_relation_types[] =
static bool is_valid_relationship_body(json_t* json)
{
MXS_JSON_PTR_RELATIONSHIPS_SERVERS,
NULL
};
bool rval = true;
json_t* obj = mxs_json_pointer(json, MXS_JSON_PTR_DATA);
if (!obj)
{
runtime_error("Field '%s' is not defined", MXS_JSON_PTR_DATA);
rval = false;
}
else if (!json_is_array(obj))
{
runtime_error("Field '%s' is not an array", MXS_JSON_PTR_DATA);
rval = false;
}
return rval;
}
bool runtime_alter_server_relationships_from_json(SERVER* server, const char* type, json_t* json)
{
bool rval = false;
mxs::Closer<json_t*> old_json(server_to_json(server, ""));
ss_dassert(old_json.get());
if (is_valid_relationship_body(json))
{
mxs::Closer<json_t*> j(json_pack("{s: {s: {s: {s: O}}}}", "data",
"relationships", type, "data",
json_object_get(json, "data")));
if (server_to_object_relations(server, old_json.get(), j.get()))
{
rval = true;
}
}
return rval;
}
static bool object_relation_is_valid(const std::string& type, const std::string& value)
{
@ -1459,7 +1519,7 @@ static bool validate_monitor_json(json_t* json)
else
{
StringSet relations;
if (extract_relations(json, relations, object_relation_types, object_relation_is_valid))
if (extract_relations(json, relations, MXS_JSON_PTR_RELATIONSHIPS_SERVERS, object_relation_is_valid))
{
rval = true;
}
@ -1542,9 +1602,10 @@ bool object_to_server_relations(const char* target, json_t* old_json, json_t* ne
bool rval = false;
StringSet old_relations;
StringSet new_relations;
const char* object_relation = MXS_JSON_PTR_RELATIONSHIPS_SERVERS;
if (extract_relations(old_json, old_relations, object_relation_types, object_relation_is_valid) &&
extract_relations(new_json, new_relations, object_relation_types, object_relation_is_valid))
if (extract_relations(old_json, old_relations, object_relation, object_relation_is_valid) &&
extract_relations(new_json, new_relations, object_relation, object_relation_is_valid))
{
StringSet removed_relations;
StringSet added_relations;
@ -1623,6 +1684,48 @@ bool runtime_alter_monitor_from_json(MXS_MONITOR* monitor, json_t* new_json)
return rval;
}
bool runtime_alter_monitor_relationships_from_json(MXS_MONITOR* monitor, json_t* json)
{
bool rval = false;
mxs::Closer<json_t*> old_json(monitor_to_json(monitor, ""));
ss_dassert(old_json.get());
if (is_valid_relationship_body(json))
{
mxs::Closer<json_t*> j(json_pack("{s: {s: {s: {s: O}}}}", "data",
"relationships", "servers", "data",
json_object_get(json, "data")));
if (object_to_server_relations(monitor->name, old_json.get(), j.get()))
{
rval = true;
}
}
return rval;
}
bool runtime_alter_service_relationships_from_json(SERVICE* service, json_t* json)
{
bool rval = false;
mxs::Closer<json_t*> old_json(service_to_json(service, ""));
ss_dassert(old_json.get());
if (is_valid_relationship_body(json))
{
mxs::Closer<json_t*> j(json_pack("{s: {s: {s: {s: O}}}}", "data",
"relationships", "servers", "data",
json_object_get(json, "data")));
if (object_to_server_relations(service->name, old_json.get(), j.get()))
{
rval = true;
}
}
return rval;
}
/**
* @brief Check if the service parameter can be altered at runtime
*
@ -1686,7 +1789,22 @@ bool runtime_alter_service_from_json(SERVICE* service, json_t* new_json)
}
else
{
runtime_error("Parameter '%s' cannot be modified", key);
const MXS_MODULE *mod = get_module(service->routerModule, MODULE_ROUTER);
std::string v = mxs::json_to_string(value);
if (config_param_is_valid(mod->parameters, key, v.c_str(), NULL))
{
runtime_error("Runtime modifications to router parameters are not supported: %s=%s", key, v.c_str());
}
else if (!is_dynamic_param(key))
{
runtime_error("Runtime modifications to static service parameters are not supported: %s=%s", key, v.c_str());
}
else
{
runtime_error("Parameter '%s' cannot be modified at runtime", key);
}
rval = false;
}
}

View File

@ -1045,7 +1045,8 @@ static void usage(void)
"if '--basedir /path/maxscale' is specified, then, for instance, the log\n"
"dir will be '/path/maxscale/var/log/maxscale', the config dir will be\n"
"'/path/maxscale/etc' and the default config file will be\n"
"'/path/maxscale/etc/maxscale.cnf'.\n",
"'/path/maxscale/etc/maxscale.cnf'.\n\n"
"MaxScale documentation: https://mariadb.com/kb/en/mariadb-enterprise/mariadb-maxscale-21/ \n",
get_configdir(), default_cnf_fname,
get_configdir(), get_logdir(), get_cachedir(), get_libdir(),
get_datadir(), get_execdir(), get_langdir(), get_piddir(),
@ -2609,20 +2610,47 @@ void set_log_augmentation(const char* value)
/**
* Pre-parse the configuration file for various directory paths.
* @param data Parameter passed by inih
* @param data Pointer to variable where custom dynamically allocated
* error message can be stored.
* @param section Section name
* @param name Parameter name
* @param value Parameter value
* @param name Parameter name
* @param value Parameter value
* @return 0 on error, 1 when successful
*/
static int cnf_preparser(void* data, const char* section, const char* name, const char* value)
{
MXS_CONFIG* cnf = config_get_global_options();
char *tmp;
/** These are read from the configuration file. These will not override
* command line parameters but will override default values. */
if (strcasecmp(section, "maxscale") == 0)
{
if (cnf->substitute_variables)
{
if (*value == '$')
{
char* env_value = getenv(value + 1);
if (!env_value)
{
char** s = (char**)data;
static const char FORMAT[] = "The environment variable %s does not exist.";
*s = (char*)MXS_MALLOC(sizeof(FORMAT) + strlen(value));
if (*s)
{
sprintf(*s, FORMAT, value + 1);
}
return 0;
}
value = env_value;
}
}
if (strcmp(name, "logdir") == 0)
{
if (strcmp(get_logdir(), default_logdir) == 0)
@ -2791,6 +2819,10 @@ static int cnf_preparser(void* data, const char* section, const char* name, cons
cnf->log_to_shm = config_truth_value((char*)value);
}
}
else if (strcmp(name, CN_SUBSTITUTE_VARIABLES) == 0)
{
cnf->substitute_variables = config_truth_value(value);
}
}
return 1;
@ -2960,23 +2992,36 @@ static bool daemonize(void)
*/
static bool sniff_configuration(const char* filepath)
{
int rv = ini_parse(filepath, cnf_preparser, NULL);
char* s = NULL;
int rv = ini_parse(filepath, cnf_preparser, &s);
if (rv != 0)
{
const char FORMAT_CUSTOM[] =
    "Failed to pre-parse configuration file %s. Error on line %d. %s";
const char FORMAT_SYNTAX[] =
    "Failed to pre-parse configuration file %s. Error on line %d.";
const char FORMAT_OPEN[] =
    "Failed to pre-parse configuration file %s. Failed to open file.";
const char FORMAT_MALLOC[] =
    "Failed to pre-parse configuration file %s. Memory allocation failed.";
size_t extra = strlen(filepath) + UINTLEN(abs(rv)) + (s ? strlen(s) : 0);
// We just use the largest one.
char errorbuffer[sizeof(FORMAT_MALLOC) + extra];
if (rv > 0)
{
if (s)
{
snprintf(errorbuffer, sizeof(errorbuffer), FORMAT_CUSTOM, filepath, rv, s);
MXS_FREE(s);
}
else
{
snprintf(errorbuffer, sizeof(errorbuffer), FORMAT_SYNTAX, filepath, rv);
}
}
else if (rv == -1)
{
View File
@ -220,6 +220,17 @@ SERVER* runtime_create_server_from_json(json_t* json);
*/
bool runtime_alter_server_from_json(SERVER* server, json_t* new_json);
/**
* @brief Alter server relationships
*
* @param server Server to alter
* @param type Type of the relation, either @c services or @c monitors
* @param json JSON that defines the relationship data
*
* @return True if the relationships were successfully modified
*/
bool runtime_alter_server_relationships_from_json(SERVER* server, const char* type, json_t* json);
/**
* @brief Create a new monitor from JSON
*
@ -239,6 +250,16 @@ MXS_MONITOR* runtime_create_monitor_from_json(json_t* json);
*/
bool runtime_alter_monitor_from_json(MXS_MONITOR* monitor, json_t* new_json);
/**
* @brief Alter monitor relationships
*
* @param monitor Monitor to alter
* @param json JSON that defines the new relationships
*
* @return True if the relationships were successfully modified
*/
bool runtime_alter_monitor_relationships_from_json(MXS_MONITOR* monitor, json_t* json);
/**
* @brief Alter a service using JSON
*
@ -249,6 +270,16 @@ bool runtime_alter_monitor_from_json(MXS_MONITOR* monitor, json_t* new_json);
*/
bool runtime_alter_service_from_json(SERVICE* service, json_t* new_json);
/**
* @brief Alter service relationships
*
* @param service Service to alter
* @param json JSON that defines the new relationships
*
* @return True if the relationships were successfully modified
*/
bool runtime_alter_service_relationships_from_json(SERVICE* service, json_t* json);
/**
* @brief Create a listener from JSON
*
View File
@ -25,6 +25,7 @@
#include <set>
#include <zlib.h>
#include <sys/stat.h>
#include <vector>
#include <maxscale/alloc.h>
#include <maxscale/hk_heartbeat.h>
@ -1929,8 +1930,7 @@ json_t* monitor_list_to_json(const char* host)
json_t* monitor_relations_to_server(const SERVER* server, const char* host)
{
std::vector<std::string> names;
spinlock_acquire(&monLock);
for (MXS_MONITOR* mon = allMonitors; mon; mon = mon->next)
@ -1943,7 +1943,7 @@ json_t* monitor_relations_to_server(const SERVER* server, const char* host)
{
if (db->server == server)
{
names.push_back(mon->name);
break;
}
}
@ -1954,6 +1954,19 @@ json_t* monitor_relations_to_server(const SERVER* server, const char* host)
spinlock_release(&monLock);
json_t* rel = NULL;
if (!names.empty())
{
rel = mxs_json_relationship(host, MXS_JSON_API_MONITORS);
for (std::vector<std::string>::iterator it = names.begin();
it != names.end(); it++)
{
mxs_json_add_relation(rel, it->c_str(), CN_MONITORS);
}
}
return rel;
}
View File
@ -285,6 +285,29 @@ HttpResponse cb_alter_server(const HttpRequest& request)
return HttpResponse(MHD_HTTP_FORBIDDEN, runtime_get_json_error());
}
HttpResponse do_alter_server_relationship(const HttpRequest& request, const char* type)
{
SERVER* server = server_find_by_unique_name(request.uri_part(1).c_str());
ss_dassert(server && request.get_json());
if (runtime_alter_server_relationships_from_json(server, type, request.get_json()))
{
return HttpResponse(MHD_HTTP_NO_CONTENT);
}
return HttpResponse(MHD_HTTP_FORBIDDEN, runtime_get_json_error());
}
HttpResponse cb_alter_server_service_relationship(const HttpRequest& request)
{
return do_alter_server_relationship(request, "services");
}
HttpResponse cb_alter_server_monitor_relationship(const HttpRequest& request)
{
return do_alter_server_relationship(request, "monitors");
}
HttpResponse cb_create_monitor(const HttpRequest& request)
{
ss_dassert(request.get_json());
@ -323,6 +346,19 @@ HttpResponse cb_alter_monitor(const HttpRequest& request)
return HttpResponse(MHD_HTTP_FORBIDDEN, runtime_get_json_error());
}
HttpResponse cb_alter_monitor_server_relationship(const HttpRequest& request)
{
MXS_MONITOR* monitor = monitor_find(request.uri_part(1).c_str());
ss_dassert(monitor && request.get_json());
if (runtime_alter_monitor_relationships_from_json(monitor, request.get_json()))
{
return HttpResponse(MHD_HTTP_NO_CONTENT);
}
return HttpResponse(MHD_HTTP_FORBIDDEN, runtime_get_json_error());
}
HttpResponse cb_alter_service(const HttpRequest& request)
{
SERVICE* service = service_find(request.uri_part(1).c_str());
@ -336,6 +372,19 @@ HttpResponse cb_alter_service(const HttpRequest& request)
return HttpResponse(MHD_HTTP_FORBIDDEN, runtime_get_json_error());
}
HttpResponse cb_alter_service_server_relationship(const HttpRequest& request)
{
SERVICE* service = service_find(request.uri_part(1).c_str());
ss_dassert(service && request.get_json());
if (runtime_alter_service_relationships_from_json(service, request.get_json()))
{
return HttpResponse(MHD_HTTP_NO_CONTENT);
}
return HttpResponse(MHD_HTTP_FORBIDDEN, runtime_get_json_error());
}
HttpResponse cb_alter_logs(const HttpRequest& request)
{
ss_dassert(request.get_json());
@ -792,6 +841,16 @@ public:
m_patch.push_back(SResource(new Resource(cb_alter_logs, 2, "maxscale", "logs")));
m_patch.push_back(SResource(new Resource(cb_alter_maxscale, 1, "maxscale")));
/** Update resource relationships directly */
m_patch.push_back(SResource(new Resource(cb_alter_server_service_relationship, 4,
"servers", ":server", "relationships", "services")));
m_patch.push_back(SResource(new Resource(cb_alter_server_monitor_relationship, 4,
"servers", ":server", "relationships", "monitors")));
m_patch.push_back(SResource(new Resource(cb_alter_monitor_server_relationship, 4,
"monitors", ":monitor", "relationships", "servers")));
m_patch.push_back(SResource(new Resource(cb_alter_service_server_relationship, 4,
"services", ":service", "relationships", "servers")));
/** All patch resources require a request body */
for (ResourceList::iterator it = m_patch.begin(); it != m_patch.end(); it++)
{
View File
@ -1519,8 +1519,19 @@ static json_t* server_to_json_data(const SERVER* server, const char* host)
/** Relationships */
json_t* rel = json_object();
json_t* service_rel = service_relations_to_server(server, host);
json_t* monitor_rel = monitor_relations_to_server(server, host);
if (service_rel)
{
json_object_set_new(rel, CN_SERVICES, service_rel);
}
if (monitor_rel)
{
json_object_set_new(rel, CN_MONITORS, monitor_rel);
}
json_object_set_new(rval, CN_RELATIONSHIPS, rel);
/** Attributes */
json_object_set_new(rval, CN_ATTRIBUTES, server_json_attributes(server));
View File
@ -28,6 +28,7 @@
#include <fcntl.h>
#include <string>
#include <set>
#include <vector>
#include <maxscale/service.h>
#include <maxscale/alloc.h>
@ -2700,8 +2701,7 @@ json_t* service_relations_to_filter(const MXS_FILTER_DEF* filter, const char* ho
json_t* service_relations_to_server(const SERVER* server, const char* host)
{
std::vector<std::string> names;
spinlock_acquire(&service_spin);
for (SERVICE *service = allServices; service; service = service->next)
@ -2712,7 +2712,7 @@ json_t* service_relations_to_server(const SERVER* server, const char* host)
{
if (ref->server == server && SERVER_REF_IS_ACTIVE(ref))
{
names.push_back(service->name);
}
}
@ -2721,6 +2721,19 @@ json_t* service_relations_to_server(const SERVER* server, const char* host)
spinlock_release(&service_spin);
json_t* rel = NULL;
if (!names.empty())
{
rel = mxs_json_relationship(host, MXS_JSON_API_SERVICES);
for (std::vector<std::string>::iterator it = names.begin();
it != names.end(); it++)
{
mxs_json_add_relation(rel, it->c_str(), CN_SERVICES);
}
}
return rel;
}
View File
@ -48,54 +48,105 @@ describe("Monitor Relationships", function() {
})
it("remove relationships from old monitor", function() {
var mon = { data: {
relationships: {
servers: null
}}}
return request.patch(base_url + "/monitors/MySQL-Monitor", {json: mon})
.then(() => request.get(base_url + "/monitors/MySQL-Monitor", { json: true }))
.then((res) => {
res.data.relationships.should.not.have.keys("servers")
})
});
it("add relationships to new monitor", function() {
var mon = { data: {
relationships: {
servers: {
data:[
{id: "server1", type: "servers"},
{id: "server2", type: "servers"},
{id: "server3", type: "servers"},
{id: "server4", type: "servers"},
]
}
}}}
return request.patch(base_url + "/monitors/" + monitor.data.id, {json: mon})
.then(() => request.get(base_url + "/monitors/" + monitor.data.id, { json: true }))
.then((res) => {
res.data.relationships.servers.data.should.have.lengthOf(4)
})
.should.be.fulfilled
});
it("move relationships back to old monitor", function() {
var mon = {data: {relationships: {servers: null}}}
return request.patch(base_url + "/monitors/" + monitor.data.id, {json: mon})
.then(() => request.get(base_url + "/monitors/" + monitor.data.id, { json: true }))
.then((res) => {
res.data.relationships.should.not.have.keys("servers")
})
.then(function() {
return request.get(base_url + "/monitors/MySQL-Monitor")
})
.then(function(resp) {
var mon = JSON.parse(resp)
mon.data.relationships.servers = {
data: [
{id: "server1", type: "servers"},
{id: "server2", type: "servers"},
{id: "server3", type: "servers"},
{id: "server4", type: "servers"},
]}
return request.patch(base_url + "/monitors/MySQL-Monitor", {json: mon})
})
.then(() => request.get(base_url + "/monitors/MySQL-Monitor", { json: true }))
.then((res) => {
res.data.relationships.servers.data.should.have.lengthOf(4)
})
});
it("add relationships via `relationships` endpoint", function() {
var old = { data: [
{ id: "server2", type: "servers" },
{ id: "server3", type: "servers" },
{ id: "server4", type: "servers" }
]}
var created = { data: [
{ id: "server1", type: "servers" }
]}
return request.patch(base_url + "/monitors/MySQL-Monitor/relationships/servers", {json: old})
.then(() => request.patch(base_url + "/monitors/" + monitor.data.id + "/relationships/servers", {json: created}))
.then(() => request.get(base_url + "/monitors/MySQL-Monitor", { json: true }))
.then((res) => {
res.data.relationships.servers.data.should.have.lengthOf(3)
})
.then(() => request.get(base_url + "/monitors/" + monitor.data.id , { json: true }))
.then((res) => {
res.data.relationships.servers.data.should.have.lengthOf(1)
.that.deep.includes({ id: "server1", type: "servers" })
})
});
it("bad request body with `relationships` endpoint should be rejected", function() {
return request.patch(base_url + "/monitors/" + monitor.data.id + "/relationships/servers", {json: {data: null}})
.should.be.rejected
})
it("remove relationships via `relationships` endpoint", function() {
var old = { data: [
{ id: "server1", type: "servers" },
{ id: "server2", type: "servers" },
{ id: "server3", type: "servers" },
{ id: "server4", type: "servers" }
]}
return request.patch(base_url + "/monitors/" + monitor.data.id + "/relationships/servers", {json: {data: []}})
.then(() => request.patch(base_url + "/monitors/MySQL-Monitor/relationships/servers", {json: old}))
.then(() => request.get(base_url + "/monitors/MySQL-Monitor", { json: true }))
.then((res) => {
res.data.relationships.servers.data.should.have.lengthOf(4)
})
.then(() => request.get(base_url + "/monitors/" + monitor.data.id , { json: true }))
.then((res) => {
res.data.relationships.should.not.have.keys("servers")
})
});
it("destroy created monitor", function() {
View File
@ -57,19 +57,48 @@ describe("Server Relationships", function() {
var rel_server = JSON.parse(JSON.stringify(server))
rel_server.data.relationships = rel
it("create new server with relationships", function() {
return request.post(base_url + "/servers/", {json: rel_server})
.should.be.fulfilled
});
it("request server", function() {
return request.get(base_url + "/servers/" + rel_server.data.id, { json: true })
.then((res) => {
res.data.relationships.services.data.should.have.lengthOf(2)
})
});
it("add relationships with `relationships` endpoint", function() {
return request.patch(base_url + "/servers/" + rel_server.data.id + "/relationships/monitors",
{ json: { data: [ { "id": "MySQL-Monitor", "type": "monitors" }]}})
.then(() => request.get(base_url + "/servers/" + rel_server.data.id, {json: true}))
.then((res) => {
res.data.relationships.monitors.data.should.have.lengthOf(1)
.that.has.deep.include({ "id": "MySQL-Monitor", "type": "monitors" })
})
});
it("bad request body with `relationships` endpoint should be rejected", function() {
var body = {data: null}
return request.patch(base_url + "/servers/" + rel_server.data.id + "/relationships/monitors", { json: body })
.should.be.rejected
});
it("remove relationships with `relationships` endpoint", function() {
var body = {data: []}
return request.patch(base_url + "/servers/" + rel_server.data.id + "/relationships/monitors", { json: body })
.then(() => request.get(base_url + "/servers/" + rel_server.data.id, {json: true}))
.then((res) => {
// Only monitor relationship should be undefined
res.data.relationships.should.not.have.keys("monitors")
res.data.relationships.should.have.keys("services")
})
});
it("remove relationships", function() {
rel_server.data.relationships["services"] = null
rel_server.data.relationships["monitors"] = null
return request.patch(base_url + "/servers/" + rel_server.data.id, {json: rel_server})
.should.be.fulfilled
});
View File
@ -63,6 +63,33 @@ describe("Service", function() {
})
});
it("bad request body with `relationships` endpoint should be rejected", function() {
return request.patch(base_url + "/services/RW-Split-Router/relationships/servers", {json: {data: null}})
.should.be.rejected
})
it("remove service relationship via `relationships` endpoint", function() {
return request.patch(base_url + "/services/RW-Split-Router/relationships/servers", { json: {data: []}})
.then(() => request.get(base_url + "/services/RW-Split-Router", { json: true }))
.then((res) => {
res.data.relationships.should.not.have.keys("servers")
})
});
it("add service relationship via `relationships` endpoint", function() {
return request.patch(base_url + "/services/RW-Split-Router/relationships/servers",
{ json: { data: [
{id: "server1", type: "servers"},
{id: "server2", type: "servers"},
{id: "server3", type: "servers"},
{id: "server4", type: "servers"},
]}})
.then(() => request.get(base_url + "/services/RW-Split-Router", { json: true}))
.then((res) => {
res.data.relationships.servers.data.should.have.lengthOf(4)
})
});
const listener = {
"links": {
"self": "http://localhost:8989/v1/services/RW-Split-Router/listeners"
View File
@ -230,7 +230,7 @@ int validate_mysql_user(MYSQL_AUTH* instance, DCB *dcb, MYSQL_session *session,
* Try authentication with the hostname instead of the IP. We do this only
* as a last resort so we avoid the high cost of the DNS lookup.
*/
char client_hostname[MYSQL_HOST_MAXLEN] = "";
get_hostname(dcb, client_hostname, sizeof(client_hostname) - 1);
sprintf(sql, mysqlauth_validate_user_query, session->user, client_hostname,
View File
@ -291,73 +291,95 @@ int CacheFilterSession::routeQuery(GWBUF* pPacket)
{
if (m_pCache->should_store(m_zDefaultDb, pPacket))
{
cache_result_t result = m_pCache->get_key(m_zDefaultDb, pPacket, &m_key);

if (CACHE_RESULT_IS_OK(result))
{
    if (m_pCache->should_use(m_pSession))
    {
        uint32_t flags = CACHE_FLAGS_INCLUDE_STALE;
        GWBUF* pResponse;
        result = m_pCache->get_value(m_key, flags, &pResponse);

        if (CACHE_RESULT_IS_OK(result))
        {
            if (CACHE_RESULT_IS_STALE(result))
            {
                // The value was found, but it was stale. Now we need to
                // figure out whether somebody else is already fetching it.
                if (m_pCache->must_refresh(m_key, this))
                {
                    // We were the first ones who hit the stale item. It's
                    // our responsibility now to fetch it.
                    if (log_decisions())
                    {
                        MXS_NOTICE("Cache data is stale, fetching fresh from server.");
                    }

                    // As we don't use the response it must be freed.
                    gwbuf_free(pResponse);

                    m_refreshing = true;
                    fetch_from_server = true;
                }
                else
                {
                    // Somebody is already fetching the new value. So, let's
                    // use the stale value. No point in hitting the server twice.
                    if (log_decisions())
                    {
                        MXS_NOTICE("Cache data is stale but returning it, fresh "
                                   "data is being fetched already.");
                    }
                    fetch_from_server = false;
                }
            }
            else
            {
                if (log_decisions())
                {
                    MXS_NOTICE("Using fresh data from cache.");
                }
                fetch_from_server = false;
            }
        }
        else
        {
            fetch_from_server = true;
        }

        if (fetch_from_server)
        {
            m_state = CACHE_EXPECTING_RESPONSE;
        }
        else
        {
            m_state = CACHE_EXPECTING_NOTHING;
            gwbuf_free(pPacket);
            DCB *dcb = m_pSession->client_dcb;
            // TODO: This is not ok. Any filters before this filter, will not
            // TODO: see this data.
            rv = dcb->func.write(dcb, pResponse);
        }
    }
    else
    {
        // We will not use any value in the cache, but we will update
        // the existing value.
        if (log_decisions())
        {
            MXS_NOTICE("Unconditionally fetching data from the server, "
                       "refreshing cache entry.");
        }
        m_state = CACHE_EXPECTING_RESPONSE;
    }
}
else
{
MXS_ERROR("Could not create cache key.");
m_state = CACHE_IGNORING_RESPONSE;
}
}
else
@ -775,31 +797,6 @@ void CacheFilterSession::reset_response_state()
m_res.offset = 0;
}
/**
* Route a query via the cache.
*
 * @param pQuery     A SELECT packet.
 * @param ppResponse The cached response.
 * @return True if the query was satisfied from the cache.
*/
cache_result_t CacheFilterSession::get_cached_response(const GWBUF *pQuery, GWBUF **ppResponse)
{
cache_result_t result = m_pCache->get_key(m_zDefaultDb, pQuery, &m_key);
if (CACHE_RESULT_IS_OK(result))
{
uint32_t flags = CACHE_FLAGS_INCLUDE_STALE;
result = m_pCache->get_value(m_key, flags, ppResponse);
}
else
{
MXS_ERROR("Could not create cache key.");
}
return result;
}
/**
* Store the data.
*
View File
@ -102,8 +102,6 @@ private:
void reset_response_state();
cache_result_t get_cached_response(const GWBUF *pQuery, GWBUF **ppResponse);
bool log_decisions() const
{
return m_pCache->config().debug & CACHE_DEBUG_DECISIONS ? true : false;
View File
@ -569,7 +569,7 @@ int extract_type_length(const char* ptr, char *dest)
/** Skip characters until we either hit a whitespace character or the start
* of the length definition. */
while (*ptr && isalpha(*ptr))
{
ptr++;
}
View File
@ -190,7 +190,6 @@ MXS_MODULE* MXS_CREATE_MODULE()
MXS_MODULE_OPT_NONE, enc_algo_values
},
{"encryption_key_file", MXS_MODULE_PARAM_PATH, NULL, MXS_MODULE_OPT_PATH_R_OK},
{"mariadb10_slave_gtid", MXS_MODULE_PARAM_BOOL, "false"},
{"mariadb10_master_gtid", MXS_MODULE_PARAM_BOOL, "false"},
{
"binlog_structure", MXS_MODULE_PARAM_ENUM, "flat",
@ -359,8 +358,8 @@ createInstance(SERVICE *service, char **options)
inst->request_semi_sync = config_get_bool(params, "semisync");
inst->master_semi_sync = 0;
/* Enable MariaDB GTID tracking for slaves if MariaDB 10 compat is set */
inst->mariadb10_gtid = inst->mariadb10_compat;
/* Enable MariaDB GTID registration to master */
inst->mariadb10_master_gtid = config_get_bool(params, "mariadb10_master_gtid");
@ -379,10 +378,8 @@ createInstance(SERVICE *service, char **options)
/* Set router uuid */
inst->uuid = config_copy_string(params, "uuid");
/* Set Flat storage of binlog files as default */
inst->storage_type = BLR_BINLOG_STORAGE_FLAT;
if (inst->uuid == NULL)
{
@ -541,21 +538,10 @@ createInstance(SERVICE *service, char **options)
{
inst->encryption.enabled = config_truth_value(value);
}
else if (strcmp(options[i], "mariadb10_slave_gtid") == 0)
{
inst->mariadb10_gtid = config_truth_value(value);
}
else if (strcmp(options[i], "mariadb10_master_gtid") == 0)
{
inst->mariadb10_master_gtid = config_truth_value(value);
}
else if (strcmp(options[i], "binlog_structure") == 0)
{
/* Enable Flat or Tree storage of binlog files */
inst->storage_type = strcasecmp(value, "tree") == 0 ?
BLR_BINLOG_STORAGE_TREE :
BLR_BINLOG_STORAGE_FLAT;
}
else if (strcmp(options[i], "encryption_algorithm") == 0)
{
int ret = blr_check_encryption_algorithm(value);
@ -780,24 +766,12 @@ createInstance(SERVICE *service, char **options)
inst->mariadb10_compat = true;
}
/**
 * Force GTID slave request handling if GTID Master registration is On
 */
if (inst->mariadb10_master_gtid)
{
    /* Force GTID slave request handling */
    inst->mariadb10_gtid = true;
    /* Force binlog storage as tree */
    inst->storage_type = BLR_BINLOG_STORAGE_TREE;
}
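With this change, turning on master GTID registration implicitly forces both GTID slave-request handling and tree-structured binlog storage, replacing the removed `mariadb10_slave_gtid` and `binlog_structure` options. An illustrative service section (the service name is an assumption; `mariadb10-compatibility` and `mariadb10_master_gtid` are the option names used in the code above):

```ini
[Replication-Proxy]
type=service
router=binlogrouter
router_options=mariadb10-compatibility=1,mariadb10_master_gtid=On
```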
/* Log binlog structure storage mode */
@ -806,6 +780,7 @@ createInstance(SERVICE *service, char **options)
inst->storage_type == BLR_BINLOG_STORAGE_FLAT ?
"'flat' mode" :
"'tree' mode using GTID domain_id and server_id");
/* Enable MariaDB the GTID maps store */
if (inst->mariadb10_compat &&
inst->mariadb10_gtid)
View File
@ -62,8 +62,7 @@
* 11/07/2016 Massimiliano Pinto Added SSL backend support
* 24/08/2016 Massimiliano Pinto Added slave notification via CS_WAIT_DATA
* 16/09/2016 Massimiliano Pinto Special events created by MaxScale are not sent to slaves:
* MARIADB10_START_ENCRYPTION_EVENT or IGNORABLE_EVENT.
*
* @endverbatim
*/
@ -89,6 +88,7 @@
#include <zlib.h>
#include <maxscale/alloc.h>
#include <inttypes.h>
#include <maxscale/utils.h>
/**
* This struct is used by sqlite3_exec callback routine
@ -1169,6 +1169,20 @@ static const char *mariadb10_gtid_status_columns[] =
NULL
};
/*
* Extra Columns to send in "SHOW ALL SLAVES STATUS" MariaDB 10 command
*/
static const char *mariadb10_extra_status_columns[] =
{
"Retried_transactions",
"Max_relay_log_size",
"Executed_log_entries",
"Slave_received_heartbeats",
"Slave_heartbeat_period",
"Gtid_Slave_Pos",
NULL
};
/**
* Send the response to the SQL command "SHOW SLAVE STATUS" or
* SHOW ALL SLAVES STATUS
@ -1193,19 +1207,13 @@ blr_slave_send_slave_status(ROUTER_INSTANCE *router,
int gtid_cols = 0;
/* Count the SHOW SLAVE STATUS columns */
ncols += MXS_ARRAY_NELEMS(slave_status_columns) - 1;
/* Add the new SHOW ALL SLAVES STATUS columns */
if (all_slaves)
{
ncols += MXS_ARRAY_NELEMS(all_slaves_status_columns) - 1;
ncols += MXS_ARRAY_NELEMS(mariadb10_extra_status_columns) - 1;
}
/* Get the right GTID columns array */
@ -1258,6 +1266,20 @@ blr_slave_send_slave_status(ROUTER_INSTANCE *router,
seqno++);
}
/* Send extra columns for SHOW ALL SLAVES STATUS */
if (all_slaves)
{
for (i = 0; mariadb10_extra_status_columns[i]; i++)
{
blr_slave_send_columndef(router,
slave,
mariadb10_extra_status_columns[i],
BLR_TYPE_STRING,
40,
seqno++);
}
}
/* Send EOF for columns def */
blr_slave_send_eof(router, slave, seqno++);
@ -1650,6 +1672,50 @@ blr_slave_send_slave_status(ROUTER_INSTANCE *router,
ptr += col_len;
}
if (all_slaves)
{
// Retried_transactions
sprintf(column, "%d", 0);
col_len = strlen(column);
*ptr++ = col_len; // Length of result string
memcpy((char *)ptr, column, col_len); // Result string
ptr += col_len;
*ptr++ = 0; // Max_relay_log_size
*ptr++ = 0; // Executed_log_entries
// Slave_received_heartbeats
sprintf(column, "%d", router->stats.n_heartbeats);
col_len = strlen(column);
*ptr++ = col_len; // Length of result string
memcpy((char *)ptr, column, col_len); // Result string
ptr += col_len;
// Slave_heartbeat_period
sprintf(column, "%lu", router->heartbeat);
col_len = strlen(column);
*ptr++ = col_len; // Length of result string
memcpy((char *)ptr, column, col_len); // Result string
ptr += col_len;
//Gtid_Slave_Pos
if (!router->mariadb10_gtid)
{
// No GTID support: send an empty value
*ptr++ = 0;
}
else
{
sprintf(column,
"%s",
router->last_mariadb_gtid);
col_len = strlen(column);
*ptr++ = col_len; // Length of result string
memcpy(ptr, column, col_len); // Result string
ptr += col_len;
}
}
*ptr++ = 0;
actual_len = ptr - (uint8_t *)GWBUF_DATA(pkt);
@ -2308,8 +2374,7 @@ blr_slave_catchup(ROUTER_INSTANCE *router, ROUTER_SLAVE *slave, bool large)
/* Don't send special events generated by MaxScale */
if (hdr.event_type == MARIADB10_START_ENCRYPTION_EVENT ||
hdr.event_type == IGNORABLE_EVENT)
{
/* In case of file rotation or pos = 4 the events
* are sent from position 4 and the new FDE at pos 4 is read.
View File
@ -2511,6 +2511,14 @@ static void enable_log_priority(DCB *dcb, char *arg1)
if (priority != -1)
{
mxs_log_set_priority_enabled(priority, true);
#if !defined(SS_DEBUG)
if (priority == LOG_DEBUG)
{
dcb_printf(dcb,
"Enabling '%s' has no effect, as MaxScale has been built in release mode.\n", arg1);
}
#endif
}
else
{