Merge branch '2.1' into 2.2

This commit is contained in:
Markus Mäkelä
2017-12-12 13:23:02 +02:00
97 changed files with 5945 additions and 361 deletions

View File

@ -49,6 +49,7 @@ For more details, please refer to:
* MaxScale now supports IPv6
For more details, please refer to:
* [MariaDB MaxScale 2.1.12 Release Notes](Release-Notes/MaxScale-2.1.12-Release-Notes.md)
* [MariaDB MaxScale 2.1.11 Release Notes](Release-Notes/MaxScale-2.1.11-Release-Notes.md)
* [MariaDB MaxScale 2.1.10 Release Notes](Release-Notes/MaxScale-2.1.10-Release-Notes.md)
* [MariaDB MaxScale 2.1.9 Release Notes](Release-Notes/MaxScale-2.1.9-Release-Notes.md)

View File

@ -120,6 +120,23 @@ This functionality is similar to the [Multi-Master Monitor](MM-Monitor.md)
functionality. The only difference is that the MySQL monitor will also detect
traditional Master-Slave topologies.
### `ignore_external_masters`

Ignore any servers that are not monitored by this monitor but are part of the
replication topology. This option was added in MaxScale 2.1.12 and is disabled
by default.

MaxScale detects when a master server replicates from an external server. When
this is detected, the server is assigned the `Slave` and `Slave of External
Server` labels and is treated as a slave server. This topology is typically
used when MaxScale provides read scale-out without master servers, a Galera
cluster with read replicas being a prime example of such a setup. Sometimes
this is not the desired behavior and the external master should be ignored,
most often because multi-source replication is in use.

When this option is enabled, servers that would otherwise get the `Master,
Slave, Slave of External Server, Running` labels get the `Master, Running`
labels instead.
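A minimal sketch of how this could look in a `mysqlmon` monitor section
(server names and credentials are placeholders taken from the test
configuration added later in this commit; only `ignore_external_masters=true`
is the option described above):

[MySQL Monitor]
type=monitor
module=mysqlmon
servers=server1,server2
user=maxuser
passwd=maxpwd
monitor_interval=1000
ignore_external_masters=true
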
### `detect_standalone_master`

Detect standalone master servers. This feature takes a boolean parameter and is

View File

@ -36,6 +36,13 @@ where only parameters are used with the binlogrouter.
[Here is a list of bugs fixed in MaxScale 2.1.12.](https://jira.mariadb.org/issues/?jql=project%20%3D%20MXS%20AND%20issuetype%20%3D%20Bug%20AND%20status%20%3D%20Closed%20AND%20fixVersion%20%3D%202.1.12)
* [MXS-1555](https://jira.mariadb.org/browse/MXS-1555) Protocol command tracking doesn't work with readwritesplit
* [MXS-1553](https://jira.mariadb.org/browse/MXS-1553) GaleraMon ignores server's SSL configuration
* [MXS-1540](https://jira.mariadb.org/browse/MXS-1540) Race conditions in Galeramon server parameter handling
* [MXS-1536](https://jira.mariadb.org/browse/MXS-1536) Fatal: MaxScale 2.1.10 received fatal signal 11. Attempting backtrace. Commit ID: 96c3f0dda3b5a9640c4995f46ac8efec77686269 System name: Linux Release string: NAME=CentOS Linux
* [MXS-1529](https://jira.mariadb.org/browse/MXS-1529) OOM: mxs_realloc can be repeated this way
* [MXS-1509](https://jira.mariadb.org/browse/MXS-1509) Show correct server state for multisource replication
## Packaging

RPM and Debian packages are provided for the Linux distributions supported by

View File

@ -362,7 +362,6 @@ follows.
[Replication]
type=service
router=binlogrouter
-servers=masterdb
user=maxscale
passwd=maxpwd
server_id=3

View File

@ -372,6 +372,11 @@ Examples with SSL options:
Binlog Router Plugin is compatible with MariaDB 5.5, 10.0, 10.1 and 10.2 as well
as MySQL 5.6 and 5.7.
Note: When using MariaDB 10.2 or MySQL 5.7, the `send_slave_heartbeat` option
must be set to On because the slave servers request a heartbeat from MaxScale.
Alternatively, execute `CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD=0` on the
slave server to disable the heartbeat request.
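As an illustration only (the service layout mirrors the binlogrouter example
earlier in this guide and the credentials are placeholders), the option could
be set as a service parameter:

[Replication]
type=service
router=binlogrouter
user=maxscale
passwd=maxpwd
server_id=3
send_slave_heartbeat=On
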
## Enabling MariaDB 10 compatibility

MariaDB 10 has a different slave registration phase, so an extra option is required:
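The option referred to here is the MariaDB 10 compatibility flag, which also
appears throughout the test configurations in this commit; a sketch of it as a
router option (all other options elided):

router_options=mariadb10-compatibility=1
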
@ -396,6 +401,11 @@ with MySQL 5.7 slaves the `send_slave_heartbeat` option must be set to on.
Binlog Router currently does not work for MySQL 5.5 due to the missing
*@@global.binlog_checksum* variable.
## MariaDB Limitations

Starting from version 10.2 there are new replication events related to binlog
event compression; these events are not yet supported. Make sure that
`log_bin_compress` is not enabled on any MariaDB 10.2 server.
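One way to make sure of this is to leave the option out of the server
configuration entirely, or to disable it explicitly in each MariaDB 10.2
server's configuration file (a sketch; `log_bin_compress` defaults to OFF):

[mysqld]
log_bin_compress=OFF
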
# MariaDB MaxScale Replication Diagnostics

The binlog router module of MariaDB MaxScale produces diagnostic output that can

View File

@ -7,6 +7,7 @@ For more information about MariaDB MaxScale 2.1, please refer to the
[ChangeLog](../Changelog.md).
For a complete list of changes in MaxScale 2.1, refer to the
* [MaxScale 2.1.12 Release Notes](../Release-Notes/MaxScale-2.1.12-Release-Notes.md)
* [MaxScale 2.1.11 Release Notes](../Release-Notes/MaxScale-2.1.11-Release-Notes.md)
* [MaxScale 2.1.10 Release Notes](../Release-Notes/MaxScale-2.1.10-Release-Notes.md)
* [MaxScale 2.1.9 Release Notes](../Release-Notes/MaxScale-2.1.9-Release-Notes.md).

View File

@ -5,7 +5,7 @@
set(MAXSCALE_VERSION_MAJOR "2" CACHE STRING "Major version")
set(MAXSCALE_VERSION_MINOR "1" CACHE STRING "Minor version")
-set(MAXSCALE_VERSION_PATCH "11" CACHE STRING "Patch version")
+set(MAXSCALE_VERSION_PATCH "12" CACHE STRING "Patch version")
# This should only be incremented if a package is rebuilt
set(MAXSCALE_BUILD_NUMBER 1 CACHE STRING "Release number")

View File

@ -138,6 +138,10 @@ typedef struct server
int last_event;       /**< The last event that occurred on this server */
bool active_event;    /**< Event observed when MaxScale was active */
int64_t triggered_at; /**< Time when the last event was triggered */
struct
{
    bool ssl_not_enabled; /**< SSL not used for an SSL enabled server */
} log_warning;            /**< Whether a specific warning was logged */
#if defined(SS_DEBUG)
skygw_chk_t server_chk_tail;
#endif
@ -234,6 +238,10 @@ enum
(((server)->status & (SERVER_RUNNING|SERVER_MASTER|SERVER_SLAVE|SERVER_MAINT)) == \
 (SERVER_RUNNING|SERVER_MASTER|SERVER_SLAVE))
#define SERVER_IS_SLAVE_OF_EXTERNAL_MASTER(s) (((s)->status & \
(SERVER_RUNNING|SERVER_SLAVE_OF_EXTERNAL_MASTER)) == \
(SERVER_RUNNING|SERVER_SLAVE_OF_EXTERNAL_MASTER))
/**
 * @brief Allocate a new server
 *

View File

@ -205,7 +205,6 @@ int main(int argc, char** argv)
test.maxscales->connect_maxscale(0);
test.repl->connect();
test.tprintf("Testing column-wise binding with a direct connection");
test.add_result(bind_by_column(test.repl->nodes[0]), "Bulk inserts with a direct connection should work");
test.tprintf("Testing column-wise binding with readwritesplit");

View File

@ -0,0 +1,45 @@
[maxscale]
threads=###threads###
[MySQL Monitor]
type=monitor
module=mysqlmon
servers=server1,server2
user=maxskysql
passwd=skysql
monitor_interval=1000
[RW Split Router]
type=service
router=readwritesplit
servers=server1,server2
user=maxskysql
passwd=skysql
[RW Split Listener]
type=listener
service=RW Split Router
protocol=MySQLClient
port=4006
[CLI]
type=service
router=cli
[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
socket=default
[server1]
type=server
address=###node_server_IP_1###
port=###node_server_port_1###
protocol=MySQLBackend
[server2]
type=server
address=###node_server_IP_2###
port=###node_server_port_2###
protocol=MySQLBackend

View File

@ -7,9 +7,9 @@ type=service
router=binlogrouter
user=skysql
passwd=skysql
-version_string=5.6.15-log
+#version_string=5.6.15-log
#router_options=server-id=3,user=repl,password=repl,master-id=1
-router_options=server-id=3,user=repl,password=repl,longburst=500,heartbeat=30,binlogdir=/var/lib/maxscale/Binlog_Service,mariadb10-compatibility=1
+router_options=server-id=9993,send_slave_heartbeat=On,user=repl,password=repl,longburst=500,heartbeat=10,binlogdir=/var/lib/maxscale/Binlog_Service,mariadb10-compatibility=1
[Binlog Listener]

View File

@ -7,9 +7,9 @@ type=service
router=binlogrouter
user=skysql
passwd=skysql
-version_string=5.6.15-log
+#version_string=5.6.15-log
#router_options=server-id=3,user=repl,password=repl,master-id=1
-router_options=server-id=3,user=repl,password=repl,longburst=500,heartbeat=30,binlogdir=/var/lib/maxscale/Binlog_Service,mariadb10-compatibility=1
+router_options=server-id=9993,send_slave_heartbeat=On,user=repl,password=repl,longburst=500,heartbeat=10,binlogdir=/var/lib/maxscale/Binlog_Service,mariadb10-compatibility=1
[Binlog Listener]

View File

@ -7,9 +7,9 @@ type=service
router=binlogrouter
user=skysql
passwd=skysql
-version_string=5.6.15-log
+#version_string=5.6.15-log
#router_options=server-id=3,user=repl,password=repl,master-id=1
-router_options=server-id=3,longburst=500,heartbeat=30,binlogdir=/var/lib/maxscale/Binlog_Service,mariadb10-compatibility=1
+router_options=server-id=9993,send_slave_heartbeat=On,longburst=500,heartbeat=10,binlogdir=/var/lib/maxscale/Binlog_Service,mariadb10-compatibility=1

View File

@ -8,9 +8,9 @@ type=service
router=binlogrouter
user=skysql
passwd=skysql
-version_string=5.6.15-log
+#version_string=5.6.15-log
#router_options=server-id=3,user=repl,password=repl,master-id=1
-router_options=server-id=3,user=repl,password=repl,longburst=500,heartbeat=30,binlogdir=/var/lib/maxscale/Binlog_Service,transaction_safety=1,mariadb10-compatibility=1
+router_options=server-id=9993,send_slave_heartbeat=On,user=repl,password=repl,longburst=500,heartbeat=10,binlogdir=/var/lib/maxscale/Binlog_Service,transaction_safety=1,mariadb10-compatibility=1

View File

@ -8,9 +8,9 @@ router=binlogrouter
#servers=master
user=skysql
passwd=skysql
-version_string=5.6.15-log
+#version_string=5.6.15-log
#router_options=server-id=3,user=repl,password=repl,master-id=1
-router_options=server-id=3,user=repl,password=repl,longburst=500,heartbeat=30,binlogdir=/var/lib/maxscale/Binlog_Service,semisync=1,transaction_safety=1,mariadb10-compatibility=1
+router_options=server-id=9993,send_slave_heartbeat=On,user=repl,password=repl,longburst=500,heartbeat=10,binlogdir=/var/lib/maxscale/Binlog_Service,semisync=1,transaction_safety=1,mariadb10-compatibility=1

View File

@ -8,9 +8,9 @@ router=binlogrouter
#servers=master
user=skysql
passwd=skysql
-version_string=5.6.15-log
+#version_string=5.6.15-log
#router_options=server-id=3,user=repl,password=repl,master-id=1
-router_options=server-id=3,user=repl,password=repl,longburst=500,heartbeat=30,binlogdir=/var/lib/maxscale/Binlog_Service,semisync=0,transaction_safety=0,mariadb10-compatibility=1
+router_options=server-id=9993,send_slave_heartbeat=On,user=repl,password=repl,longburst=500,heartbeat=10,binlogdir=/var/lib/maxscale/Binlog_Service,semisync=0,transaction_safety=0,mariadb10-compatibility=1

View File

@ -8,9 +8,9 @@ router=binlogrouter
#servers=master
user=skysql
passwd=skysql
-version_string=5.6.15-log
+#version_string=5.6.15-log
#router_options=server-id=3,user=repl,password=repl,master-id=1
-router_options=server-id=3,user=repl,password=repl,longburst=500,heartbeat=30,binlogdir=/var/lib/maxscale/Binlog_Service,semisync=1,transaction_safety=0,mariadb10-compatibility=1
+router_options=server-id=9993,send_slave_heartbeat=On,user=repl,password=repl,longburst=500,heartbeat=10,binlogdir=/var/lib/maxscale/Binlog_Service,semisync=1,transaction_safety=0,mariadb10-compatibility=1

View File

@ -8,9 +8,9 @@ router=binlogrouter
#servers=master
user=skysql
passwd=skysql
-version_string=5.6.15-log
+#version_string=5.6.15-log
#router_options=server-id=3,user=repl,password=repl,master-id=1
-router_options=server-id=3,user=repl,password=repl,longburst=500,heartbeat=30,binlogdir=/var/lib/maxscale/Binlog_Service,semisync=0,transaction_safety=1,mariadb10-compatibility=1
+router_options=server-id=9993,send_slave_heartbeat=On,user=repl,password=repl,longburst=500,heartbeat=10,binlogdir=/var/lib/maxscale/Binlog_Service,semisync=0,transaction_safety=1,mariadb10-compatibility=1

View File

@ -8,9 +8,9 @@ router=binlogrouter
#servers=master
user=skysql
passwd=skysql
-version_string=5.6.15-log
+#version_string=5.6.15-log
#router_options=server-id=3,user=repl,password=repl,master-id=1
-router_options=server-id=3,user=repl,password=repl,longburst=500,heartbeat=30,binlogdir=/var/lib/maxscale/Binlog_Service,transaction_safety=1,mariadb10-compatibility=1
+router_options=server-id=9993,send_slave_heartbeat=On,user=repl,password=repl,longburst=500,heartbeat=10,binlogdir=/var/lib/maxscale/Binlog_Service,transaction_safety=1,mariadb10-compatibility=1

View File

@ -44,7 +44,7 @@ int main(int argc, char *argv[])
Test->repl->close_connections();
Test->stop_timeout();
-Test->check_log_err(0, (char *) "refresh rate limit exceeded", false);
+Test->check_log_err(0, (char *) "Refresh rate limit exceeded", false);
Test->check_maxscale_alive(0);
int rval = Test->global_result;
delete Test;

View File

@ -293,10 +293,9 @@ int Mariadb_nodes::stop_nodes()
{
printf("Stopping node %d\n", i);
fflush(stdout);
-local_result += execute_query(nodes[i], (char *) "stop slave;");
+local_result += execute_query(nodes[i], "stop slave;");
-fflush(stdout);
local_result += stop_node(i);
-fflush(stdout);
+local_result += ssh_node_f(i, true, "rm -f /var/lib/mysql/*master*.info");
}
return local_result;
}
@ -346,10 +345,14 @@ int Mariadb_nodes::start_replication()
for (int i = 0; i < N; i++)
{
local_result += start_node(i, (char*)"");
-sprintf(str,
-"mysql -u root %s -e \"STOP SLAVE; RESET SLAVE; RESET SLAVE ALL; RESET MASTER; SET GLOBAL read_only=OFF;\"",
+ssh_node_f(i, true,
+"mysql --force -u root %s -e \"STOP SLAVE; STOP ALL SLAVES; RESET SLAVE; RESET SLAVE ALL; RESET MASTER; SET GLOBAL read_only=OFF;\"",
socket_cmd[i]);
-ssh_node(i, str, true);
+ssh_node_f(i, true, "sudo rm -f /etc/my.cnf.d/kerb.cnf");
+ssh_node_f(i, true,
+"for i in `mysql -ss --force -u root %s -e \"SHOW DATABASES\"|grep -iv 'mysql\\|information_schema\\|performance_schema'`; "
+"do mysql --force -u root %s -e \"DROP DATABASE $i\";"
+"done", socket_cmd[i], socket_cmd[i]);
}
sprintf(str, "%s/create_user.sh", test_dir);
@ -361,10 +364,9 @@ int Mariadb_nodes::start_replication()
ssh_node(0, str, false);
// Create a database dump from the master and distribute it to the slaves
-sprintf(str,
+ssh_node_f(0, true, "mysql --force -u root %s -e \"CREATE DATABASE test\"; "
"mysqldump --all-databases --add-drop-database --flush-privileges --master-data=1 --gtid %s > /tmp/master_backup.sql",
-socket_cmd[0]);
+socket_cmd[0], socket_cmd[0]);
-ssh_node(0, str, true);
sprintf(str, "%s/master_backup.sql", test_dir);
copy_from_node_legacy("/tmp/master_backup.sql", str, 0);
@ -374,16 +376,11 @@ int Mariadb_nodes::start_replication()
printf("Starting node %d\n", i); printf("Starting node %d\n", i);
fflush(stdout); fflush(stdout);
copy_to_node_legacy(str, "/tmp/master_backup.sql", i); copy_to_node_legacy(str, "/tmp/master_backup.sql", i);
sprintf(dtr, ssh_node_f(i, true, "mysql --force -u root %s < /tmp/master_backup.sql",
"mysql -u root %s < /tmp/master_backup.sql",
socket_cmd[i]); socket_cmd[i]);
ssh_node(i, dtr, true); ssh_node_f(i, true, "mysql --force -u root %s -e \"CHANGE MASTER TO MASTER_HOST=\\\"%s\\\", MASTER_PORT=%d, "
char query[512];
sprintf(query, "mysql -u root %s -e \"CHANGE MASTER TO MASTER_HOST=\\\"%s\\\", MASTER_PORT=%d, "
"MASTER_USER=\\\"repl\\\", MASTER_PASSWORD=\\\"repl\\\";" "MASTER_USER=\\\"repl\\\", MASTER_PASSWORD=\\\"repl\\\";"
"START SLAVE;\"", socket_cmd[i], IP_private[0], port[0]); "START SLAVE;\"", socket_cmd[i], IP_private[0], port[0]);
ssh_node(i, query, true);
} }
return local_result; return local_result;
@ -614,6 +611,28 @@ static bool bad_slave_thread_status(MYSQL *conn, const char *field, int node)
return rval;
}
static bool multi_source_replication(MYSQL *conn, int node)
{
    bool rval = true;
    MYSQL_RES *res;
    if (mysql_query(conn, "SHOW ALL SLAVES STATUS") == 0 &&
        (res = mysql_store_result(conn)))
    {
        if (mysql_num_rows(res) == 1)
        {
            rval = false;
        }
        else
        {
            printf("Node %d: More than one configured slave\n", node);
            fflush(stdout);
        }
    }
    return rval;
}
int Mariadb_nodes::check_replication()
{
int master = 0;
@ -643,7 +662,8 @@ int Mariadb_nodes::check_replication()
}
}
else if (bad_slave_thread_status(nodes[i], "Slave_IO_Running", i) ||
-bad_slave_thread_status(nodes[i], "Slave_SQL_Running", i))
+bad_slave_thread_status(nodes[i], "Slave_SQL_Running", i) ||
+multi_source_replication(nodes[i], i))
{
res = 1;
}
@ -1050,7 +1070,7 @@ static void wait_until_pos(MYSQL *mysql, int filenum, int pos)
{
MYSQL_ROW row = mysql_fetch_row(res);
-if (row && row[6] && row[21])
+if (row && row[5] && strchr(row[5], '.') && row[21])
{
char *file_suffix = strchr(row[5], '.') + 1;
slave_filenum = atoi(file_suffix);

View File

@ -0,0 +1,23 @@
set -x
chmod 777 /tmp/
echo 2 > /proc/sys/fs/suid_dumpable
sed -i "s/start() {/start() { \n export DAEMON_COREFILE_LIMIT='unlimited'; ulimit -c unlimited; /" /etc/init.d/maxscale
sed -i "s/log_daemon_msg \"Starting MaxScale\"/export DAEMON_COREFILE_LIMIT='unlimited'; ulimit -c unlimited; log_daemon_msg \"Starting MaxScale\" /" /etc/init.d/maxscale
echo /tmp/core-%e-%s-%u-%g-%p-%t > /proc/sys/kernel/core_pattern
echo "kernel.core_pattern = /tmp/core-%e-sig%s-user%u-group%g-pid%p-time%t" >> /etc/sysctl.d/core.conf
echo "kernel.core_uses_pid = 1" >> /etc/sysctl.d/core.conf
echo "fs.suid_dumpable = 2" >> /etc/sysctl.d/core.conf
echo "DefaultLimitCORE=infinity" >> /etc/systemd/system.conf
echo "* hard core unlimited" >> /etc/security/limits.d/core.conf
echo "* soft core unlimited" >> /etc/security/limits.d/core.conf
echo "* soft nofile 65536" >> /etc/security/limits.d/core.conf
echo "* hard nofile 65536" >> /etc/security/limits.d/core.conf
echo "fs.file-max = 65536" >> /etc/sysctl.conf
systemctl daemon-reexec
sysctl -p

View File

@ -0,0 +1,4 @@
#create user repl@'%' identified by 'repl';
grant replication slave on *.* to repl@'%' identified by 'repl';
FLUSH PRIVILEGES;

View File

@ -0,0 +1,18 @@
create user skysql@'%' identified by 'skysql';
create user skysql@'localhost' identified by 'skysql';
GRANT ALL PRIVILEGES ON *.* TO skysql@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO skysql@'localhost' WITH GRANT OPTION;
create user maxuser@'%' identified by 'maxpwd';
create user maxuser@'localhost' identified by 'maxpwd';
GRANT ALL PRIVILEGES ON *.* TO maxuser@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO maxuser@'localhost' WITH GRANT OPTION;
create user maxskysql@'%' identified by 'skysql';
create user maxskysql@'localhost' identified by 'skysql';
GRANT ALL PRIVILEGES ON *.* TO maxskysql@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO maxskysql@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
CREATE DATABASE IF NOT EXISTS test;

View File

@ -0,0 +1,18 @@
#!/bin/bash
N=$galera_N
x=`expr $N - 1`
for i in $(seq 0 $x)
do
num=`printf "%03d" $i`
sshkey_var=galera_"$num"_keyfile
user_var=galera_"$num"_whoami
IP_var=galera_"$num"_network
sshkey=${!sshkey_var}
user=${!user_var}
IP=${!IP_var}
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo mysql_install_db; sudo chown -R mysql:mysql /var/lib/mysql"
done

View File

@ -0,0 +1,61 @@
#!/bin/bash
set -x
x=`expr $node_N - 1`
for i in $(seq 0 $x)
do
num=`printf "%03d" $i`
sshkey_var=node_"$num"_keyfile
user_var=node_"$num"_whoami
IP_var=node_"$num"_network
start_cmd_var=node_"$num"_start_db_command
stop_cmd_var=node_"$num"_stop_db_command
sshkey=${!sshkey_var}
user=${!user_var}
IP=${!IP_var}
start_cmd=${!start_cmd_var}
stop_cmd=${!stop_cmd_var}
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo $stop_cmd"
sleep 5
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP 'sudo sed -i "s/bind-address/#bind-address/g" /etc/mysql/my.cnf'
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP 'sudo ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/usr.sbin.mysqld; sudo service apparmor restart'
mysql_version=`ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP 'mysql --version'`
echo $mysql_version | grep "5\."
if [ $? == 0 ] ; then
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo sed -i \"s/binlog_row_image=full//\" /etc/my.cnf.d/*.cnf"
fi
echo $mysql_version | grep "5\.7"
if [ $? == 0 ] ; then
# ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo sed -i \"s/## x001/validate-password=OFF/\" /etc/my.cnf.d/*.cnf"
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo mysqld --initialize; sudo chown -R mysql:mysql /var/lib/mysql"
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo $start_cmd"
mysql_root_password=`ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo cat /var/log/mysqld.log | grep \"temporary password\" | sed -n -e 's/^.*: //p'"`
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo mysqladmin -uroot -p'$mysql_root_password' password '$mysql_root_password'"
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "echo \"UNINSTALL PLUGIN validate_password\" | sudo mysql -uroot -p'$mysql_root_password' "
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo $stop_cmd"
# ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo sed -i \"s/## x001/validate-password=OFF/\" /etc/my.cnf.d/*.cnf"
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo $start_cmd"
# mysql_root_password=`ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo cat /var/log/mysqld.log | grep \"temporary password\" | sed -n -e 's/^.*: //p'"`
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "echo \"show plugins\" | sudo mysql -uroot -p'$mysql_root_password' "
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo mysqladmin -uroot -p'$mysql_root_password' password ''"
# ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo $start_cmd"
else
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo mysql_install_db; sudo chown -R mysql:mysql /var/lib/mysql"
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo $start_cmd"
fi
sleep 15
scp -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ${script_dir}/create_*_user.sql $user@$IP://home/$user/
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo mysql < /home/$user/create_repl_user.sql"
ssh -i $sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $user@$IP "sudo mysql < /home/$user/create_skysql_user.sql"
done

View File

@ -0,0 +1,130 @@
[mysqld]
expire_logs_days=7
user=mysql
server_id=1
wsrep_on=ON
# Row binary log format is required by Galera
binlog_format=ROW
log-bin
# InnoDB is currently the only storage engine supported by Galera
default-storage-engine=innodb
innodb_file_per_table
# To avoid issues with 'bulk mode inserts' using autoincrement fields
innodb_autoinc_lock_mode=2
# Required to prevent deadlocks on parallel transaction execution
innodb_locks_unsafe_for_binlog=1
# Query Cache is not supported by Galera wsrep replication
query_cache_size=0
query_cache_type=0
# INITIAL SETUP
# In some systems bind-address defaults to 127.0.0.1, and with mysqldump SST
# it will have (most likely) disastrous consequences on donor node
bind-address=###NODE-ADDRESS###
##
## WSREP options
##
# INITIAL SETUP
# For the initial setup, wsrep should be disabled
wsrep_provider=none
# After initial setup, parameter should have full path to wsrep provider library
wsrep_provider=###GALERA-LIB-PATH###
# Provider specific configuration options
wsrep_provider_options = "evs.keepalive_period = PT3S; evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M"
# Logical cluster name. Should be the same for all nodes in the same cluster.
wsrep_cluster_name=skycluster
# INITIAL SETUP
# Group communication system handle: for the first node to be launched, the value should be "gcomm://", indicating creation of a new cluster;
# for the other nodes joining the cluster, the value should be "gcomm://xxx.xxx.xxx.xxx:4567", where xxx.xxx.xxx.xxx should be the ip of a node
# already on the cluster (usually the first one)
# DEPRECATED
# wsrep_cluster_address=gcomm://
# Human-readable node name (non-unique). Hostname by default.
#wsrep_node_name=###NODE-NAME###
wsrep_node_name=galera000
# INITIAL SETUP
# Base replication <address|hostname>[:port] of the node.
# The values supplied will be used as defaults for state transfer receiving,
# listening ports and so on. Default: address of the first network interface.
wsrep_node_address=###NODE-ADDRESS###
# INITIAL SETUP
# Address for incoming client connections. Autodetect by default.
wsrep_node_incoming_address=###NODE-ADDRESS###
# Number of threads that will process writesets from other nodes
wsrep_slave_threads=1
# Generate fake primary keys for non-PK tables (required for multi-master
# and parallel applying operation)
wsrep_certify_nonPK=1
# Debug level logging (1 = enabled)
wsrep_debug=1
# Convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=0
# Number of retries for deadlocked autocommits
wsrep_retry_autocommit=1
# Change auto_increment_increment and auto_increment_offset automatically
wsrep_auto_increment_control=1
# Retry autoinc insert, when the insert failed for "duplicate key error"
wsrep_drupal_282555_workaround=0
# Enable "strictly synchronous" semantics for read operations
wsrep_causal_reads=1
# Command to call when node status or cluster membership changes.
# Will be passed all or some of the following options:
# --status - new status of this node
# --uuid - UUID of the cluster
# --primary - whether the component is primary or not ("yes"/"no")
# --members - comma-separated list of members
# --index - index of this node in the list
wsrep_notify_cmd=
##
## WSREP State Transfer options
##
# State Snapshot Transfer method
#wsrep_sst_method=mysqldump
#wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# INITIAL SETUP
# Address which donor should send State Snapshot to.
# Should be the address of the CURRENT node. DON'T SET IT TO DONOR ADDRESS!!!
# (SST method dependent. Defaults to the first IP of the first interface)
wsrep_sst_receive_address=###NODE-ADDRESS###
# INITIAL SETUP
# SST authentication string. This will be used to send SST to joining nodes.
# Depends on SST method. For mysqldump method it is root:<root password>
#wsrep_sst_auth=###REP-USERNAME###:###REP-PASSWORD###
wsrep_sst_auth=repl:repl
# Desired SST donor name.
#wsrep_sst_donor=
# Reject client queries when donating SST (false)
#wsrep_sst_donor_rejects_queries=0
# Protocol version to use
# wsrep_protocol_version=

View File

@ -0,0 +1,130 @@
[mysqld]
expire_logs_days=7
user=mysql
server_id=2
wsrep_on=ON
# Row binary log format is required by Galera
binlog_format=ROW
log-bin
# InnoDB is currently the only storage engine supported by Galera
default-storage-engine=innodb
innodb_file_per_table
# To avoid issues with 'bulk mode inserts' using autoincrement fields
innodb_autoinc_lock_mode=2
# Required to prevent deadlocks on parallel transaction execution
innodb_locks_unsafe_for_binlog=1
# Query Cache is not supported by Galera wsrep replication
query_cache_size=0
query_cache_type=0
# INITIAL SETUP
# In some systems bind-address defaults to 127.0.0.1, and with mysqldump SST
# it will have (most likely) disastrous consequences on donor node
bind-address=###NODE-ADDRESS###
##
## WSREP options
##
# INITIAL SETUP
# For the initial setup, wsrep should be disabled
wsrep_provider=none
# After initial setup, parameter should have full path to wsrep provider library
wsrep_provider=###GALERA-LIB-PATH###
# Provider specific configuration options
wsrep_provider_options = "evs.keepalive_period = PT3S; evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M"
# Logical cluster name. Should be the same for all nodes in the same cluster.
wsrep_cluster_name=skycluster
# INITIAL SETUP
# Group communication system handle: for the first node to be launched, the value should be "gcomm://", indicating creation of a new cluster;
# for the other nodes joining the cluster, the value should be "gcomm://xxx.xxx.xxx.xxx:4567", where xxx.xxx.xxx.xxx should be the ip of a node
# already on the cluster (usually the first one)
# DEPRECATED
# wsrep_cluster_address=gcomm://
# Human-readable node name (non-unique). Hostname by default.
#wsrep_node_name=###NODE-NAME###
wsrep_node_name=galera001
# INITIAL SETUP
# Base replication <address|hostname>[:port] of the node.
# The values supplied will be used as defaults for state transfer receiving,
# listening ports and so on. Default: address of the first network interface.
wsrep_node_address=###NODE-ADDRESS###
# INITIAL SETUP
# Address for incoming client connections. Autodetect by default.
wsrep_node_incoming_address=###NODE-ADDRESS###
# Number of threads that will process writesets from other nodes
wsrep_slave_threads=1
# Generate fake primary keys for non-PK tables (required for multi-master
# and parallel applying operation)
wsrep_certify_nonPK=1
# Debug level logging (1 = enabled)
wsrep_debug=1
# Convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=0
# Number of retries for deadlocked autocommits
wsrep_retry_autocommit=1
# Change auto_increment_increment and auto_increment_offset automatically
wsrep_auto_increment_control=1
# Retry autoinc insert, when the insert failed for "duplicate key error"
wsrep_drupal_282555_workaround=0
# Enable "strictly synchronous" semantics for read operations
wsrep_causal_reads=1
# Command to call when node status or cluster membership changes.
# Will be passed all or some of the following options:
# --status - new status of this node
# --uuid - UUID of the cluster
# --primary - whether the component is primary or not ("yes"/"no")
# --members - comma-separated list of members
# --index - index of this node in the list
wsrep_notify_cmd=
##
## WSREP State Transfer options
##
# State Snapshot Transfer method
#wsrep_sst_method=mysqldump
#wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# INITIAL SETUP
# Address which donor should send State Snapshot to.
# Should be the address of the CURRENT node. DON'T SET IT TO DONOR ADDRESS!!!
# (SST method dependent. Defaults to the first IP of the first interface)
wsrep_sst_receive_address=###NODE-ADDRESS###
# INITIAL SETUP
# SST authentication string. This will be used to send SST to joining nodes.
# Depends on SST method. For mysqldump method it is root:<root password>
#wsrep_sst_auth=###REP-USERNAME###:###REP-PASSWORD###
wsrep_sst_auth=repl:repl
# Desired SST donor name.
#wsrep_sst_donor=
# Reject client queries when donating SST (false)
#wsrep_sst_donor_rejects_queries=0
# Protocol version to use
# wsrep_protocol_version=

View File

@ -0,0 +1,130 @@
[mysqld]
expire_logs_days=7
user=mysql
server_id=3
wsrep_on=ON
# Row binary log format is required by Galera
binlog_format=ROW
log-bin
# InnoDB is currently the only storage engine supported by Galera
default-storage-engine=innodb
innodb_file_per_table
# To avoid issues with 'bulk mode inserts' using autoincrement fields
innodb_autoinc_lock_mode=2
# Required to prevent deadlocks on parallel transaction execution
innodb_locks_unsafe_for_binlog=1
# Query Cache is not supported by Galera wsrep replication
query_cache_size=0
query_cache_type=0
# INITIAL SETUP
# In some systems bind-address defaults to 127.0.0.1, and with mysqldump SST
# it will have (most likely) disastrous consequences on donor node
bind-address=###NODE-ADDRESS###
##
## WSREP options
##
# INITIAL SETUP
# For the initial setup, wsrep should be disabled
wsrep_provider=none
# After initial setup, parameter should have full path to wsrep provider library
wsrep_provider=###GALERA-LIB-PATH###
# Provider specific configuration options
wsrep_provider_options = "evs.keepalive_period = PT3S; evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M"
# Logical cluster name. Should be the same for all nodes in the same cluster.
wsrep_cluster_name=skycluster
# INITIAL SETUP
# Group communication system handle: for the first node to be launched, the value should be "gcomm://", indicating creation of a new cluster;
# for the other nodes joining the cluster, the value should be "gcomm://xxx.xxx.xxx.xxx:4567", where xxx.xxx.xxx.xxx should be the ip of a node
# already on the cluster (usually the first one)
# DEPRECATED
# wsrep_cluster_address=gcomm://
# Human-readable node name (non-unique). Hostname by default.
#wsrep_node_name=###NODE-NAME###
wsrep_node_name=galera002
# INITIAL SETUP
# Base replication <address|hostname>[:port] of the node.
# The values supplied will be used as defaults for state transfer receiving,
# listening ports and so on. Default: address of the first network interface.
wsrep_node_address=###NODE-ADDRESS###
# INITIAL SETUP
# Address for incoming client connections. Autodetect by default.
wsrep_node_incoming_address=###NODE-ADDRESS###
# Number of threads that will process writesets from other nodes
wsrep_slave_threads=1
# Generate fake primary keys for non-PK tables (required for multi-master
# and parallel applying operation)
wsrep_certify_nonPK=1
# Debug level logging (1 = enabled)
wsrep_debug=1
# Convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=0
# Number of retries for deadlocked autocommits
wsrep_retry_autocommit=1
# Change auto_increment_increment and auto_increment_offset automatically
wsrep_auto_increment_control=1
# Retry autoinc insert, when the insert failed for "duplicate key error"
wsrep_drupal_282555_workaround=0
# Enable "strictly synchronous" semantics for read operations
wsrep_causal_reads=1
# Command to call when node status or cluster membership changes.
# Will be passed all or some of the following options:
# --status - new status of this node
# --uuid - UUID of the cluster
# --primary - whether the component is primary or not ("yes"/"no")
# --members - comma-separated list of members
# --index - index of this node in the list
wsrep_notify_cmd=
##
## WSREP State Transfer options
##
# State Snapshot Transfer method
#wsrep_sst_method=mysqldump
#wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# INITIAL SETUP
# Address which donor should send State Snapshot to.
# Should be the address of the CURRENT node. DON'T SET IT TO DONOR ADDRESS!!!
# (SST method dependent. Defaults to the first IP of the first interface)
wsrep_sst_receive_address=###NODE-ADDRESS###
# INITIAL SETUP
# SST authentication string. This will be used to send SST to joining nodes.
# Depends on SST method. For mysqldump method it is root:<root password>
#wsrep_sst_auth=###REP-USERNAME###:###REP-PASSWORD###
wsrep_sst_auth=repl:repl
# Desired SST donor name.
#wsrep_sst_donor=
# Reject client queries when donating SST (false)
#wsrep_sst_donor_rejects_queries=0
# Protocol version to use
# wsrep_protocol_version=

View File

@ -0,0 +1,130 @@
[mysqld]
expire_logs_days=7
user=mysql
server_id=4
wsrep_on=ON
# Row binary log format is required by Galera
binlog_format=ROW
log-bin
# InnoDB is currently the only storage engine supported by Galera
default-storage-engine=innodb
innodb_file_per_table
# To avoid issues with 'bulk mode inserts' using autoincrement fields
innodb_autoinc_lock_mode=2
# Required to prevent deadlocks on parallel transaction execution
innodb_locks_unsafe_for_binlog=1
# Query Cache is not supported by Galera wsrep replication
query_cache_size=0
query_cache_type=0
# INITIAL SETUP
# In some systems bind-address defaults to 127.0.0.1, and with mysqldump SST
# it will have (most likely) disastrous consequences on donor node
bind-address=###NODE-ADDRESS###
##
## WSREP options
##
# INITIAL SETUP
# For the initial setup, wsrep should be disabled
wsrep_provider=none
# After initial setup, parameter should have full path to wsrep provider library
wsrep_provider=###GALERA-LIB-PATH###
# Provider specific configuration options
wsrep_provider_options = "evs.keepalive_period = PT3S; evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M"
# Logical cluster name. Should be the same for all nodes in the same cluster.
wsrep_cluster_name=skycluster
# INITIAL SETUP
# Group communication system handle: for the first node to be launched, the value should be "gcomm://", indicating creation of a new cluster;
# for the other nodes joining the cluster, the value should be "gcomm://xxx.xxx.xxx.xxx:4567", where xxx.xxx.xxx.xxx should be the ip of a node
# already on the cluster (usually the first one)
# DEPRECATED
# wsrep_cluster_address=gcomm://
# Human-readable node name (non-unique). Hostname by default.
#wsrep_node_name=###NODE-NAME###
wsrep_node_name=galera003
# INITIAL SETUP
# Base replication <address|hostname>[:port] of the node.
# The values supplied will be used as defaults for state transfer receiving,
# listening ports and so on. Default: address of the first network interface.
wsrep_node_address=###NODE-ADDRESS###
# INITIAL SETUP
# Address for incoming client connections. Autodetect by default.
wsrep_node_incoming_address=###NODE-ADDRESS###
# Number of threads that will process writesets from other nodes
wsrep_slave_threads=1
# Generate fake primary keys for non-PK tables (required for multi-master
# and parallel applying operation)
wsrep_certify_nonPK=1
# Debug level logging (1 = enabled)
wsrep_debug=1
# Convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=0
# Number of retries for deadlocked autocommits
wsrep_retry_autocommit=1
# Change auto_increment_increment and auto_increment_offset automatically
wsrep_auto_increment_control=1
# Retry autoinc insert, when the insert failed for "duplicate key error"
wsrep_drupal_282555_workaround=0
# Enable "strictly synchronous" semantics for read operations
wsrep_causal_reads=1
# Command to call when node status or cluster membership changes.
# Will be passed all or some of the following options:
# --status - new status of this node
# --uuid - UUID of the cluster
# --primary - whether the component is primary or not ("yes"/"no")
# --members - comma-separated list of members
# --index - index of this node in the list
wsrep_notify_cmd=
##
## WSREP State Transfer options
##
# State Snapshot Transfer method
#wsrep_sst_method=mysqldump
#wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# INITIAL SETUP
# Address which donor should send State Snapshot to.
# Should be the address of the CURRENT node. DON'T SET IT TO DONOR ADDRESS!!!
# (SST method dependent. Defaults to the first IP of the first interface)
wsrep_sst_receive_address=###NODE-ADDRESS###
# INITIAL SETUP
# SST authentication string. This will be used to send SST to joining nodes.
# Depends on SST method. For mysqldump method it is root:<root password>
#wsrep_sst_auth=###REP-USERNAME###:###REP-PASSWORD###
wsrep_sst_auth=repl:repl
# Desired SST donor name.
#wsrep_sst_donor=
# Reject client queries when donating SST (false)
#wsrep_sst_donor_rejects_queries=0
# Protocol version to use
# wsrep_protocol_version=

View File

@ -0,0 +1,135 @@
[mysqld]
expire_logs_days=7
user=mysql
server_id=1
# Row binary log format is required by Galera
binlog_format=ROW
log-bin
# InnoDB is currently the only storage engine supported by Galera
default-storage-engine=innodb
innodb_file_per_table
# To avoid issues with 'bulk mode inserts' using autoincrement fields
innodb_autoinc_lock_mode=2
# Required to prevent deadlocks on parallel transaction execution
innodb_locks_unsafe_for_binlog=1
# Query Cache is not supported by Galera wsrep replication
query_cache_size=0
query_cache_type=0
# INITIAL SETUP
# In some systems bind-address defaults to 127.0.0.1, and with mysqldump SST
# it will have (most likely) disastrous consequences on donor node
bind-address=###NODE-ADDRESS###
##
## WSREP options
##
# INITIAL SETUP
# For the initial setup, wsrep should be disabled
wsrep_provider=none
# After initial setup, parameter should have full path to wsrep provider library
wsrep_provider=###GALERA-LIB-PATH###
# Provider specific configuration options
wsrep_provider_options = "evs.keepalive_period = PT3S; evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M"
# Logical cluster name. Should be the same for all nodes in the same cluster.
wsrep_cluster_name=skycluster
# INITIAL SETUP
# Group communication system handle: for the first node to be launched, the value should be "gcomm://", indicating creation of a new cluster;
# for the other nodes joining the cluster, the value should be "gcomm://xxx.xxx.xxx.xxx:4567", where xxx.xxx.xxx.xxx should be the ip of a node
# already on the cluster (usually the first one)
# DEPRECATED
# wsrep_cluster_address=gcomm://
# Human-readable node name (non-unique). Hostname by default.
#wsrep_node_name=###NODE-NAME###
wsrep_node_name=galera000
# INITIAL SETUP
# Base replication <address|hostname>[:port] of the node.
# The values supplied will be used as defaults for state transfer receiving,
# listening ports and so on. Default: address of the first network interface.
wsrep_node_address=###NODE-ADDRESS###
# INITIAL SETUP
# Address for incoming client connections. Autodetect by default.
wsrep_node_incoming_address=###NODE-ADDRESS###
# Number of threads that will process writesets from other nodes
wsrep_slave_threads=1
# Generate fake primary keys for non-PK tables (required for multi-master
# and parallel applying operation)
wsrep_certify_nonPK=1
# Maximum number of rows in write set
wsrep_max_ws_rows=131072
# Maximum size of write set
wsrep_max_ws_size=1073741824
# Debug level logging (1 = enabled)
wsrep_debug=1
# Convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=0
# Number of retries for deadlocked autocommits
wsrep_retry_autocommit=1
# Change auto_increment_increment and auto_increment_offset automatically
wsrep_auto_increment_control=1
# Retry autoinc insert, when the insert failed for "duplicate key error"
wsrep_drupal_282555_workaround=0
# Enable "strictly synchronous" semantics for read operations
wsrep_causal_reads=0
# Command to call when node status or cluster membership changes.
# Will be passed all or some of the following options:
# --status - new status of this node
# --uuid - UUID of the cluster
# --primary - whether the component is primary or not ("yes"/"no")
# --members - comma-separated list of members
# --index - index of this node in the list
wsrep_notify_cmd=
##
## WSREP State Transfer options
##
# State Snapshot Transfer method
#wsrep_sst_method=mysqldump
#wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# INITIAL SETUP
# Address which donor should send State Snapshot to.
# Should be the address of the CURRENT node. DON'T SET IT TO DONOR ADDRESS!!!
# (SST method dependent. Defaults to the first IP of the first interface)
wsrep_sst_receive_address=###NODE-ADDRESS###
# INITIAL SETUP
# SST authentication string. This will be used to send SST to joining nodes.
# Depends on SST method. For mysqldump method it is root:<root password>
#wsrep_sst_auth=###REP-USERNAME###:###REP-PASSWORD###
wsrep_sst_auth=repl:repl
# Desired SST donor name.
#wsrep_sst_donor=
# Reject client queries when donating SST (false)
#wsrep_sst_donor_rejects_queries=0
# Protocol version to use
# wsrep_protocol_version=

View File

@ -0,0 +1,135 @@
[mysqld]
expire_logs_days=7
user=mysql
server_id=2
# Row binary log format is required by Galera
binlog_format=ROW
log-bin
# InnoDB is currently the only storage engine supported by Galera
default-storage-engine=innodb
innodb_file_per_table
# To avoid issues with 'bulk mode inserts' using autoincrement fields
innodb_autoinc_lock_mode=2
# Required to prevent deadlocks on parallel transaction execution
innodb_locks_unsafe_for_binlog=1
# Query Cache is not supported by Galera wsrep replication
query_cache_size=0
query_cache_type=0
# INITIAL SETUP
# In some systems bind-address defaults to 127.0.0.1, and with mysqldump SST
# it will have (most likely) disastrous consequences on donor node
bind-address=###NODE-ADDRESS###
##
## WSREP options
##
# INITIAL SETUP
# For the initial setup, wsrep should be disabled
wsrep_provider=none
# After initial setup, parameter should have full path to wsrep provider library
wsrep_provider=###GALERA-LIB-PATH###
# Provider specific configuration options
wsrep_provider_options = "evs.keepalive_period = PT3S; evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M"
# Logical cluster name. Should be the same for all nodes in the same cluster.
wsrep_cluster_name=skycluster
# INITIAL SETUP
# Group communication system handle: for the first node to be launched, the value should be "gcomm://", indicating creation of a new cluster;
# for the other nodes joining the cluster, the value should be "gcomm://xxx.xxx.xxx.xxx:4567", where xxx.xxx.xxx.xxx should be the ip of a node
# already on the cluster (usually the first one)
# DEPRECATED
# wsrep_cluster_address=gcomm://
# Human-readable node name (non-unique). Hostname by default.
#wsrep_node_name=###NODE-NAME###
wsrep_node_name=galera001
# INITIAL SETUP
# Base replication <address|hostname>[:port] of the node.
# The values supplied will be used as defaults for state transfer receiving,
# listening ports and so on. Default: address of the first network interface.
wsrep_node_address=###NODE-ADDRESS###
# INITIAL SETUP
# Address for incoming client connections. Autodetect by default.
wsrep_node_incoming_address=###NODE-ADDRESS###
# Number of threads that will process writesets from other nodes
wsrep_slave_threads=1
# Generate fake primary keys for non-PK tables (required for multi-master
# and parallel applying operation)
wsrep_certify_nonPK=1
# Maximum number of rows in write set
wsrep_max_ws_rows=131072
# Maximum size of write set
wsrep_max_ws_size=1073741824
# Debug level logging (1 = enabled)
wsrep_debug=1
# Convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=0
# Number of retries for deadlocked autocommits
wsrep_retry_autocommit=1
# Change auto_increment_increment and auto_increment_offset automatically
wsrep_auto_increment_control=1
# Retry autoinc insert, when the insert failed for "duplicate key error"
wsrep_drupal_282555_workaround=0
# Enable "strictly synchronous" semantics for read operations
wsrep_causal_reads=0
# Command to call when node status or cluster membership changes.
# Will be passed all or some of the following options:
# --status - new status of this node
# --uuid - UUID of the cluster
# --primary - whether the component is primary or not ("yes"/"no")
# --members - comma-separated list of members
# --index - index of this node in the list
wsrep_notify_cmd=
##
## WSREP State Transfer options
##
# State Snapshot Transfer method
#wsrep_sst_method=mysqldump
#wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# INITIAL SETUP
# Address which donor should send State Snapshot to.
# Should be the address of the CURRENT node. DON'T SET IT TO DONOR ADDRESS!!!
# (SST method dependent. Defaults to the first IP of the first interface)
wsrep_sst_receive_address=###NODE-ADDRESS###
# INITIAL SETUP
# SST authentication string. This will be used to send SST to joining nodes.
# Depends on SST method. For mysqldump method it is root:<root password>
#wsrep_sst_auth=###REP-USERNAME###:###REP-PASSWORD###
wsrep_sst_auth=repl:repl
# Desired SST donor name.
#wsrep_sst_donor=
# Reject client queries when donating SST (false)
#wsrep_sst_donor_rejects_queries=0
# Protocol version to use
# wsrep_protocol_version=

View File

@ -0,0 +1,135 @@
[mysqld]
expire_logs_days=7
user=mysql
server_id=3
# Row binary log format is required by Galera
binlog_format=ROW
log-bin
# InnoDB is currently the only storage engine supported by Galera
default-storage-engine=innodb
innodb_file_per_table
# To avoid issues with 'bulk mode inserts' using autoincrement fields
innodb_autoinc_lock_mode=2
# Required to prevent deadlocks on parallel transaction execution
innodb_locks_unsafe_for_binlog=1
# Query Cache is not supported by Galera wsrep replication
query_cache_size=0
query_cache_type=0
# INITIAL SETUP
# In some systems bind-address defaults to 127.0.0.1, and with mysqldump SST
# it will have (most likely) disastrous consequences on donor node
bind-address=###NODE-ADDRESS###
##
## WSREP options
##
# INITIAL SETUP
# For the initial setup, wsrep should be disabled
wsrep_provider=none
# After initial setup, parameter should have full path to wsrep provider library
wsrep_provider=###GALERA-LIB-PATH###
# Provider specific configuration options
wsrep_provider_options = "evs.keepalive_period = PT3S; evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M"
# Logical cluster name. Should be the same for all nodes in the same cluster.
wsrep_cluster_name=skycluster
# INITIAL SETUP
# Group communication system handle: for the first node to be launched, the value should be "gcomm://", indicating creation of a new cluster;
# for the other nodes joining the cluster, the value should be "gcomm://xxx.xxx.xxx.xxx:4567", where xxx.xxx.xxx.xxx should be the ip of a node
# already on the cluster (usually the first one)
# DEPRECATED
# wsrep_cluster_address=gcomm://
# Human-readable node name (non-unique). Hostname by default.
#wsrep_node_name=###NODE-NAME###
wsrep_node_name=galera002
# INITIAL SETUP
# Base replication <address|hostname>[:port] of the node.
# The values supplied will be used as defaults for state transfer receiving,
# listening ports and so on. Default: address of the first network interface.
wsrep_node_address=###NODE-ADDRESS###
# INITIAL SETUP
# Address for incoming client connections. Autodetect by default.
wsrep_node_incoming_address=###NODE-ADDRESS###
# Number of threads that will process writesets from other nodes
wsrep_slave_threads=1
# Generate fake primary keys for non-PK tables (required for multi-master
# and parallel applying operation)
wsrep_certify_nonPK=1
# Maximum number of rows in write set
wsrep_max_ws_rows=131072
# Maximum size of write set
wsrep_max_ws_size=1073741824
# Debug level logging (1 = enabled)
wsrep_debug=1
# Convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=0
# Number of retries for deadlocked autocommits
wsrep_retry_autocommit=1
# Change auto_increment_increment and auto_increment_offset automatically
wsrep_auto_increment_control=1
# Retry autoinc insert, when the insert failed for "duplicate key error"
wsrep_drupal_282555_workaround=0
# Enable "strictly synchronous" semantics for read operations
wsrep_causal_reads=0
# Command to call when node status or cluster membership changes.
# Will be passed all or some of the following options:
# --status - new status of this node
# --uuid - UUID of the cluster
# --primary - whether the component is primary or not ("yes"/"no")
# --members - comma-separated list of members
# --index - index of this node in the list
wsrep_notify_cmd=
##
## WSREP State Transfer options
##
# State Snapshot Transfer method
#wsrep_sst_method=mysqldump
#wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# INITIAL SETUP
# Address which donor should send State Snapshot to.
# Should be the address of the CURRENT node. DON'T SET IT TO DONOR ADDRESS!!!
# (SST method dependent. Defaults to the first IP of the first interface)
wsrep_sst_receive_address=###NODE-ADDRESS###
# INITIAL SETUP
# SST authentication string. This will be used to send SST to joining nodes.
# Depends on SST method. For mysqldump method it is root:<root password>
#wsrep_sst_auth=###REP-USERNAME###:###REP-PASSWORD###
wsrep_sst_auth=repl:repl
# Desired SST donor name.
#wsrep_sst_donor=
# Reject client queries when donating SST (false)
#wsrep_sst_donor_rejects_queries=0
# Protocol version to use
# wsrep_protocol_version=

View File

@ -0,0 +1,135 @@
[mysqld]
expire_logs_days=7
user=mysql
server_id=4
# Row binary log format is required by Galera
binlog_format=ROW
log-bin
# InnoDB is currently the only storage engine supported by Galera
default-storage-engine=innodb
innodb_file_per_table
# To avoid issues with 'bulk mode inserts' using autoincrement fields
innodb_autoinc_lock_mode=2
# Required to prevent deadlocks on parallel transaction execution
innodb_locks_unsafe_for_binlog=1
# Query Cache is not supported by Galera wsrep replication
query_cache_size=0
query_cache_type=0
# INITIAL SETUP
# In some systems bind-address defaults to 127.0.0.1, and with mysqldump SST
# it will have (most likely) disastrous consequences on donor node
bind-address=###NODE-ADDRESS###
##
## WSREP options
##
# INITIAL SETUP
# For the initial setup, wsrep should be disabled
wsrep_provider=none
# After initial setup, parameter should have full path to wsrep provider library
wsrep_provider=###GALERA-LIB-PATH###
# Provider specific configuration options
wsrep_provider_options = "evs.keepalive_period = PT3S; evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M"
# Logical cluster name. Should be the same for all nodes in the same cluster.
wsrep_cluster_name=skycluster
# INITIAL SETUP
# Group communication system handle: for the first node to be launched, the value should be "gcomm://", indicating creation of a new cluster;
# for the other nodes joining the cluster, the value should be "gcomm://xxx.xxx.xxx.xxx:4567", where xxx.xxx.xxx.xxx should be the ip of a node
# already on the cluster (usually the first one)
# DEPRECATED
# wsrep_cluster_address=gcomm://
# Human-readable node name (non-unique). Hostname by default.
#wsrep_node_name=###NODE-NAME###
wsrep_node_name=galera003
# INITIAL SETUP
# Base replication <address|hostname>[:port] of the node.
# The values supplied will be used as defaults for state transfer receiving,
# listening ports and so on. Default: address of the first network interface.
wsrep_node_address=###NODE-ADDRESS###
# INITIAL SETUP
# Address for incoming client connections. Autodetect by default.
wsrep_node_incoming_address=###NODE-ADDRESS###
# Number of threads that will process writesets from other nodes
wsrep_slave_threads=1
# Generate fake primary keys for non-PK tables (required for multi-master
# and parallel applying operation)
wsrep_certify_nonPK=1
# Maximum number of rows in write set
wsrep_max_ws_rows=131072
# Maximum size of write set
wsrep_max_ws_size=1073741824
# Debug level logging (1 = enabled)
wsrep_debug=1
# Convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=0
# Number of retries for deadlocked autocommits
wsrep_retry_autocommit=1
# Change auto_increment_increment and auto_increment_offset automatically
wsrep_auto_increment_control=1
# Retry autoinc insert, when the insert failed for "duplicate key error"
wsrep_drupal_282555_workaround=0
# Enable "strictly synchronous" semantics for read operations
wsrep_causal_reads=0
# Command to call when node status or cluster membership changes.
# Will be passed all or some of the following options:
# --status - new status of this node
# --uuid - UUID of the cluster
# --primary - whether the component is primary or not ("yes"/"no")
# --members - comma-separated list of members
# --index - index of this node in the list
wsrep_notify_cmd=
##
## WSREP State Transfer options
##
# State Snapshot Transfer method
#wsrep_sst_method=mysqldump
#wsrep_sst_method=xtrabackup
wsrep_sst_method=rsync
# INITIAL SETUP
# Address which donor should send State Snapshot to.
# Should be the address of the CURRENT node. DON'T SET IT TO DONOR ADDRESS!!!
# (SST method dependent. Defaults to the first IP of the first interface)
wsrep_sst_receive_address=###NODE-ADDRESS###
# INITIAL SETUP
# SST authentication string. This will be used to send SST to joining nodes.
# Depends on SST method. For mysqldump method it is root:<root password>
#wsrep_sst_auth=###REP-USERNAME###:###REP-PASSWORD###
wsrep_sst_auth=repl:repl
# Desired SST donor name.
#wsrep_sst_donor=
# Reject client queries when donating SST (false)
#wsrep_sst_donor_rejects_queries=0
# Protocol version to use
# wsrep_protocol_version=

View File

@ -0,0 +1,39 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=1
#slave_parallel_threads=2
user=mysql
## x001
#max_long_data_size=1000000000
#innodb_log_file_size=2000000000
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=10
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=11
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=12
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=13
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=14
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=15
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,39 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=2
#slave_parallel_threads=2
user=mysql
## x001
#max_long_data_size=1000000000
#innodb_log_file_size=2000000000
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,39 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=3
#slave_parallel_threads=2
user=mysql
## x001
#max_long_data_size=1000000000
#innodb_log_file_size=2000000000
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,39 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=4
#slave_parallel_threads=2
user=mysql
## x001
#max_long_data_size=1000000000
#innodb_log_file_size=2000000000
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=5
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=6
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=7
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=8
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#log-basename=mar
log-bin=mar-bin
#binlog-format=row
binlog-format=STATEMENT
server_id=9
#slave_parallel_threads=2
user=mysql
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,39 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=row
binlog_row_image=full
server_id=1
user=mysql
## x001
max_long_data_size=1000000000
innodb_log_file_size=2000000000
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=10
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=11
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=12
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=13
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=14
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=15
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,39 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=row
binlog_row_image=full
server_id=2
user=mysql
## x001
max_long_data_size=1000000000
innodb_log_file_size=2000000000
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,39 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=row
binlog_row_image=full
server_id=3
user=mysql
## x001
max_long_data_size=1000000000
innodb_log_file_size=2000000000
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,39 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=row
binlog_row_image=full
server_id=4
user=mysql
## x001
max_long_data_size=1000000000
innodb_log_file_size=2000000000
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=5
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=6
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=7
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=8
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,36 @@
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
log-slave-updates
log-bin=mar-bin
binlog-format=STATEMENT
server_id=9
user=mysql
## x001
slave-skip-errors=all
# this is only for embedded server
[embedded]
# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]
# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
[mariadb-5.5]

View File

@ -0,0 +1,9 @@
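# Load the environment for this configuration, set up the replication and
# Galera backends, run configure_core.sh and release the vagrant lock.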
. ${script_dir}/set_env.sh $name
${script_dir}/backend/setup_repl.sh
${script_dir}/backend/galera/setup_galera.sh
${script_dir}/configure_core.sh
rm ~/vagrant_lock

View File

@ -0,0 +1,11 @@
#!/bin/bash
set -x
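# Stop iptables on the MaxScale VM, copy add_core_cnf.sh into a 'ccore'
# directory owned by the access user and execute it there with sudo.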
ssh -i $maxscale_sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $maxscale_access_user@$maxscale_IP '$maxscale_access_sudo service iptables stop'
ssh -i $maxscale_sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $maxscale_access_user@$maxscale_IP "$maxscale_access_sudo mkdir ccore; $maxscale_access_sudo chown $maxscale_access_user ccore"
scp -i $maxscale_sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ${script_dir}/add_core_cnf.sh $maxscale_access_user@$maxscale_IP:./ccore/
ssh -i $maxscale_sshkey -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $maxscale_access_user@$maxscale_IP "$maxscale_access_sudo /home/$maxscale_access_user/ccore/add_core_cnf.sh"
set +x

View File

@ -0,0 +1,15 @@
set -x
LOGS_DIR=${logs_dir:-$HOME/LOGS}
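# Build the directory name for published logs from the Jenkins job name and
# build number; multibranch job names contain '/' and are handled separately.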
echo $JOB_NAME | grep "/"
if [ $? == 0 ] ; then
export job_name_buildID=`echo $JOB_NAME | sed "s|/|-$BUILD_NUMBER/|"`
export logs_publish_dir="${LOGS_DIR}/${job_name_buildID}/"
else
export logs_publish_dir="${LOGS_DIR}/${JOB_NAME}-${BUILD_NUMBER}"
fi
export job_name_buildID=`echo ${JOB_NAME} | sed "s|/|-${BUILD_NUMBER}/|"`
export logs_publish_dir="${LOGS_DIR}/${job_name_buildID}-${BUILD_NUMBER}"
echo "Logs go to ${logs_publish_dir}"
mkdir -p ${logs_publish_dir}

View File

@ -0,0 +1,5 @@
set -x
rsync -a --no-o --no-g LOGS ${logs_publish_dir}
chmod a+r ${logs_publish_dir}/*
cp -r ${MDBCI_VM_PATH}/$name ${logs_publish_dir}
cp ${MDBCI_VM_PATH}/${name}.json ${logs_publish_dir}

View File

@ -0,0 +1,59 @@
#!/bin/bash
set -x
export dir=`pwd`
# read the name of build scripts directory
export script_dir="$(dirname $(readlink -f $0))"
. ${script_dir}/set_run_test_variables.sh
${mdbci_dir}/repository-config/generate_all.sh repo.d
${mdbci_dir}/repository-config/maxscale-ci.sh $target repo.d
export repo_dir=$dir/repo.d/
export provider=`${mdbci_dir}/mdbci show provider $box --silent 2> /dev/null`
export backend_box=${backend_box:-"centos_7_"$provider}
if [ "$product" == "mysql" ] ; then
export cnf_path=${script_dir}/cnf/mysql56
fi
mkdir -p ${MDBCI_VM_PATH}/$name
cd ${MDBCI_VM_PATH}/$name
vagrant destroy -f
cd $dir
mkdir ${MDBCI_VM_PATH}/$name/cnf
cp -r ${cnf_path}/* ${MDBCI_VM_PATH}/$name/cnf/
export cnf_path="${MDBCI_VM_PATH}/$name/cnf/"
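# Expand all ${...} references in the selected JSON template via a here-document
# and write the result out as the MDBCI configuration file for this run.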
eval "cat <<EOF
$(<${script_dir}/templates/${template}.json.template)
" 2> /dev/null > ${MDBCI_VM_PATH}/${name}.json
${mdbci_dir}/mdbci --override --template ${MDBCI_VM_PATH}/${name}.json --repo-dir ${repo_dir} generate $name
while [ -f ~/vagrant_lock ]
do
echo "vagrant is locked, waiting ..."
sleep 5
done
touch ~/vagrant_lock
echo ${JOB_NAME}-${BUILD_NUMBER} >> ~/vagrant_lock
echo "running vagrant up $provider"
${mdbci_dir}/mdbci up $name --attempts 3
if [ $? != 0 ]; then
echo "Error creating configuration"
rm ~/vagrant_lock
exit 1
fi
cp ~/build-scripts/team_keys .
${mdbci_dir}/mdbci public_keys --key ${team_keys} $name
rm ~/vagrant_lock
exit 0

View File

@ -0,0 +1,10 @@
#!/bin/bash
dir=`pwd`
cd ${MDBCI_VM_PATH}/${name}
vagrant destroy -f
cd $dir
rm -rf ${MDBCI_VM_PATH}/${name}
rm -rf ${MDBCI_VM_PATH}/${name}.json
rm -rf ${MDBCI_VM_PATH}/${name}_network_config

View File

@ -0,0 +1,103 @@
#!/bin/bash
# see set_run_test_variables.sh for default values of all variables
# $box - Name of Vagrant box for Maxscale machine
# see lists of supported boxes
# https://github.com/mariadb-corporation/mdbci/tree/integration/BOXES
# $template - name of MDBCI json template file
# Template file has to be in ./templates/, file name
# has to be '$template.json.template'
# Template file can contain references to any variables -
# all ${variable_name} will be replaced with values
# $name - name of the test run. It can be any string of letters or digits
# If it is not defined, the name will be automatically generated
# using $box and the current date and time
# $ci_url - URL of the Maxscale CI repository
# (default "http://max-tst-01.mariadb.com/ci-repository/")
# if the build is also done locally and binaries are not uploaded to
# max-tst-01.mariadb.com, $ci_url should point to a local web server,
# e.g. http://192.168.122.1/repository (IP should be a host IP in the
# virtual network (not 127.0.0.1))
# $product - 'mariadb' or 'mysql'
# $version - version of the backend DB (e.g. '10.1', '10.2')
# $galera_version - version of the Galera backend DB
# (same as $version by default)
# $target - name of the binary repository
# (name of a subdirectory of http://max-tst-01.mariadb.com/ci-repository/)
# $team_keys - path to the file with public ssh keys to be
# installed on all VMs (default ${HOME}/team_keys)
# $do_not_destroy_vm - if 'yes' the VMs won't be destroyed after the test
# $test_set - parameters to be passed to 'ctest' (e.g. '-I 1,100',
# '-LE UNSTABLE')
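#
# Purely illustrative (not part of the original script): assuming this file is
# saved as run_test.sh, a run could hypothetically be started with
#   box=centos_7_libvirt product=mariadb version=10.2 target=develop \
#   name=my-test-run test_set="-I 1,5" ./run_test.sh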
export vm_memory=${vm_memory:-"2048"}
export dir=`pwd`
ulimit -n
# read the name of build scripts directory
export script_dir="$(dirname $(readlink -f $0))"
. ${script_dir}/set_run_test_variables.sh
rm -rf LOGS
export target=`echo $target | sed "s/?//g"`
export name=`echo $name | sed "s/?//g"`
. ${script_dir}/configure_log_dir.sh
cd ${script_dir}/..
cmake . -DBUILDNAME=$name -DCMAKE_BUILD_TYPE=Debug
make
${script_dir}/create_config.sh
res=$?
if [ $res == 0 ] ; then
. ${script_dir}/configure_backend.sh
${mdbci_dir}/mdbci snapshot take --path-to-nodes $name --snapshot-name clean
if [ ! -z "${named_test}" ] ; then
./${named_test}
else
./check_backend
if [ $? != 0 ]; then
echo "Backend broken!"
if [ "${do_not_destroy_vm}" != "yes" ] ; then
${script_dir}/destroy.sh
fi
rm ~/vagrant_lock
exit 1
fi
ctest -VV -D Nightly ${test_set}
fi
cd $dir
${script_dir}/copy_logs.sh
else
echo "Failed to create VMs, exiting"
if [ "${do_not_destroy_vm}" != "yes" ] ; then
${script_dir}/destroy.sh
fi
rm ~/vagrant_lock
exit 1
fi
if [ "${do_not_destroy_vm}" != "yes" ] ; then
${script_dir}/destroy.sh
echo "clean up done!"
fi

View File

@ -0,0 +1,90 @@
#!/bin/bash
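# checkExitStatus: if the given return code is non-zero, print the error
# message, remove the snapshot lock file and abort the run.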
function checkExitStatus {
returnCode=$1
errorMessage=$2
lockFilePath=$3
if [ "$returnCode" != 0 ]; then
echo "$errorMesage"
rm $lockFilePath
echo "Snapshot lock file was deleted due to an error"
exit 1
fi
}
set -x
export dir=`pwd`
# read the name of build scripts directory
export script_dir="$(dirname $(readlink -f $0))"
. ${script_dir}/set_run_test_variables.sh
export name="$box-$product-$version-permanent"
export snapshot_name=${snapshot_name:-"clean"}
rm -rf LOGS
export target=`echo $target | sed "s/?//g"`
export name=`echo $name | sed "s/?//g"`
. ${script_dir}/configure_log_dir.sh
# Setting snapshot_lock
export snapshot_lock_file=${MDBCI_VM_PATH}/${name}_snapshot_lock
if [ -f ${snapshot_lock_file} ]; then
echo "Snapshot is locked, waiting ..."
fi
while [ -f ${snapshot_lock_file} ]
do
sleep 5
done
touch ${snapshot_lock_file}
echo $JOB_NAME-$BUILD_NUMBER >> ${snapshot_lock_file}
export repo_dir=$dir/repo.d/
${mdbci_dir}/mdbci snapshot revert --path-to-nodes $name --snapshot-name $snapshot_name
if [ $? != 0 ]; then
${script_dir}/destroy.sh
${MDBCI_VM_PATH}/scripts/clean_vms.sh $name
${script_dir}/create_config.sh
checkExitStatus $? "Error creating configuration" $snapshot_lock_file
. ${script_dir}/configure_backend.sh
echo "Creating snapshot from new config"
$HOME/mdbci/mdbci snapshot take --path-to-nodes $name --snapshot-name $snapshot_name
fi
. ${script_dir}/set_env.sh "$name"
${mdbci_dir}/repository-config/maxscale-ci.sh $target repo.d
${mdbci_dir}/mdbci sudo --command 'yum remove maxscale -y' $name/maxscale
${mdbci_dir}/mdbci sudo --command 'yum clean all' $name/maxscale
${mdbci_dir}/mdbci setup_repo --product maxscale $name/maxscale --repo-dir $repo_dir
${mdbci_dir}/mdbci install_product --product maxscale $name/maxscale --repo-dir $repo_dir
checkExitStatus $? "Error installing Maxscale" $snapshot_lock_file
cd ${script_dir}/..
cmake . -DBUILDNAME=$JOB_NAME-$BUILD_NUMBER-$target
make
./check_backend --restart-galera
checkExitStatus $? "Failed to check backends" $snapshot_lock_file
ctest $test_set -VV -D Nightly
${script_dir}/copy_logs.sh
# Removing snapshot_lock
rm ${snapshot_lock_file}

View File

@ -0,0 +1,97 @@
#!/bin/bash
set -x
echo $*
export MDBCI_VM_PATH=${MDBCI_VM_PATH:-$HOME/vms}
export mdbci_dir=${mdbci_dir:-"$HOME/mdbci/"}
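# The first argument is the name of the MDBCI configuration; fall back to 'test1' if it is not given.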
export config_name="$1"
if [ -z $1 ] ; then
config_name="test1"
fi
export curr_dir=`pwd`
export maxscale_binlog_dir="/var/lib/maxscale/Binlog_Service"
export maxdir="/usr/bin/"
export maxdir_bin="/usr/bin/"
export maxscale_cnf="/etc/maxscale.cnf"
export maxscale_log_dir="/var/log/maxscale/"
# Number of nodes
export galera_N=`cat "$MDBCI_VM_PATH/$config_name"_network_config | grep galera | grep network | wc -l`
export node_N=`cat "$MDBCI_VM_PATH/$config_name"_network_config | grep node | grep network | wc -l`
sed "s/^/export /g" "$MDBCI_VM_PATH/$config_name"_network_config > "$curr_dir"/"$config_name"_network_config_export
source "$curr_dir"/"$config_name"_network_config_export
# IP Of MaxScale machine
export maxscale_IP=$maxscale_network
export maxscale_sshkey=$maxscale_keyfile
# User name and Password for Master/Slave replication setup (should have all PRIVILEGES)
export node_user="skysql"
export node_password="skysql"
# User name and Password for Galera setup (should have all PRIVILEGES)
export galera_user="skysql"
export galera_password="skysql"
export maxscale_user="skysql"
export maxscale_password="skysql"
export maxadmin_password="mariadb"
for prefix in "node" "galera"
do
N_var="$prefix"_N
Nx=${!N_var}
N=`expr $Nx - 1`
for i in $(seq 0 $N)
do
num=`printf "%03d" $i`
eval 'export "$prefix"_"$num"_port=3306'
eval 'export "$prefix"_"$num"_access_sudo=sudo'
start_cmd_var="$prefix"_"$num"_start_db_command
stop_cmd_var="$prefix"_"$num"_stop_db_command
mysql_exe=`${mdbci_dir}/mdbci ssh --command 'ls /etc/init.d/mysql* 2> /dev/null | tr -cd "[:print:]"' $config_name/node_$num --silent 2> /dev/null`
echo $mysql_exe | grep -i "mysql"
if [ $? != 0 ] ; then
service_name=`${mdbci_dir}/mdbci ssh --command 'systemctl list-unit-files | grep mysql' $config_name/node_$num --silent`
echo $service_name | grep mysql
if [ $? == 0 ] ; then
echo $service_name | grep mysqld
if [ $? == 0 ] ; then
eval 'export $start_cmd_var="service mysqld start "'
eval 'export $stop_cmd_var="service mysqld stop "'
else
eval 'export $start_cmd_var="service mysql start "'
eval 'export $stop_cmd_var="service mysql stop "'
fi
else
${mdbci_dir}/mdbci ssh --command 'echo \"/usr/sbin/mysqld \$* 2> stderr.log > stdout.log &\" > mysql_start.sh; echo \"sleep 20\" >> mysql_start.sh; echo \"disown\" >> mysql_start.sh; chmod a+x mysql_start.sh' $config_name/node_$num --silent
eval 'export $start_cmd_var="/home/$au/mysql_start.sh "'
eval 'export $start_cmd_var="killall mysqld "'
fi
else
eval 'export $start_cmd_var="$mysql_exe start "'
eval 'export $stop_cmd_var="$mysql_exe stop "'
fi
eval 'export "$prefix"_"$num"_start_vm_command="cd $mdbci_dir/$config_name;vagrant up node_$num --provider=$provider; cd $curr_dir"'
eval 'export "$prefix"_"$num"_kill_vm_command="cd $mdbci_dir/$config_name;vagrant halt node_$num --provider=$provider; cd $curr_dir"'
done
done
export maxscale_access_user=$maxscale_whoami
export maxscale_access_sudo="sudo "
# Sysbench directory (should be sysbench >= 0.5)
export sysbench_dir=${sysbench_dir:-"$HOME/sysbench_deb7/sysbench/"}
export ssl=true
export take_snapshot_command="${mdbci_dir}/mdbci snapshot take --path-to-nodes $name --snapshot-name "
export revert_snapshot_command="${mdbci_dir}/mdbci snapshot revert --path-to-nodes $name --snapshot-name "
#export use_snapshots=yes
set +x

View File

@ -0,0 +1,32 @@
#!/bin/bash
export MDBCI_VM_PATH=${MDBCI_VM_PATH:-$HOME/vms}
mkdir -p $MDBCI_VM_PATH
echo "MDBCI_VM_PATH=$MDBCI_VM_PATH"
export box=${box:-"centos_7_libvirt"}
echo "box=$box"
export template=${template:-"default"}
export curr_date=`date '+%Y-%m-%d_%H-%M'`
export name=${name:-$box-${curr_date}}
export mdbci_dir=${mdbci_dir:-"$HOME/mdbci/"}
export ci_url=${ci_url:-"http://max-tst-01.mariadb.com/ci-repository/"}
export product=${product:-"mariadb"}
export version=${version:-"10.2"}
export target=${target:-"develop"}
export vm_memory=${vm_memory:-"2048"}
export cnf_path=${script_dir}/cnf
export JOB_NAME=${JOB_NAME:-"local_test"}
export BUILD_NUMBER=${BUILD_NUMBER:-`date '+%Y%m%d%H%M'`}
export BUILD_TAG=${BUILD_TAG:-jenkins-${JOB_NAME}-${BUILD_NUMBER}}
export team_keys=${team_keys:-${HOME}/team_keys}
export galera_version=${galera_version:-$version}
export do_not_destroy_vm=${do_not_destroy_vm:-"no"}
#export test_set=${test_set:-"-LE UNSTABLE"}
export test_set=${test_set:-"-I 1,5"}

View File

@ -0,0 +1,152 @@
{
"node_000" :
{
"hostname" : "node000",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server1.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_001" :
{
"hostname" : "node001",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server2.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_002" :
{
"hostname" : "node002",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server3.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_003" :
{
"hostname" : "node003",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server4.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_004" :
{
"hostname" : "node004",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server5.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_005" :
{
"hostname" : "node005",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server6.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_006" :
{
"hostname" : "node006",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server7.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_007" :
{
"hostname" : "node007",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server8.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_000" :
{
"hostname" : "galera000",
"box" : "centos_7_aws",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server1.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_001" :
{
"hostname" : "galera001",
"box" : "centos_7_aws",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server2.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_002" :
{
"hostname" : "galera002",
"box" : "centos_7_aws",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server3.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_003" :
{
"hostname" : "galera003",
"box" : "centos_7_aws",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server4.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"maxscale" :
{
"hostname" : "maxscale",
"box" : "centos_7_aws_large",
"product" : {
"name": "maxscale"
}
}
}

View File

@ -0,0 +1,229 @@
{
"node_000" :
{
"hostname" : "node_000",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server1.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_001" :
{
"hostname" : "node_001",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server2.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_002" :
{
"hostname" : "node_002",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server3.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_003" :
{
"hostname" : "node_003",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server4.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_004" :
{
"hostname" : "node_004",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server5.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_005" :
{
"hostname" : "node_005",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server6.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_006" :
{
"hostname" : "node_006",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server7.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_007" :
{
"hostname" : "node_007",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server8.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_008" :
{
"hostname" : "node_008",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server9.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_009" :
{
"hostname" : "node_009",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server10.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_0010" :
{
"hostname" : "node_0010",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server11.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_0011" :
{
"hostname" : "node_0011",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server12.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_0012" :
{
"hostname" : "node_0012",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server13.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_0013" :
{
"hostname" : "node_0013",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server14.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"node_0014" :
{
"hostname" : "node_0014",
"box" : "centos_7_aws_large",
"product" : {
"name": "###product###",
"version": "###version###",
"cnf_template" : "server15.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_000" :
{
"hostname" : "galera_000",
"box" : "centos_7_aws",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server1.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_001" :
{
"hostname" : "galera_001",
"box" : "centos_7_aws",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server2.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_002" :
{
"hostname" : "galera_002",
"box" : "centos_7_aws",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server3.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_003" :
{
"hostname" : "galera_003",
"box" : "centos_7_aws",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server4.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"maxscale" :
{
"hostname" : "maxscale",
"box" : "centos_7_aws_large",
"product" : {
"name": "maxscale"
}
}
}

View File

@ -0,0 +1,117 @@
{
"node_000" :
{
"hostname" : "node000",
"box" : "${backend_box}",
"memory_size" : "${vm_memory}",
"product" : {
"name": "${product}",
"version": "${version}",
"cnf_template" : "server1.cnf",
"cnf_template_path": "${cnf_path}"
}
},
"node_001" :
{
"hostname" : "node001",
"box" : "${backend_box}",
"memory_size" : "${vm_memory}",
"product" : {
"name": "${product}",
"version": "${version}",
"cnf_template" : "server2.cnf",
"cnf_template_path": "${cnf_path}"
}
},
"node_002" :
{
"hostname" : "node002",
"box" : "${backend_box}",
"memory_size" : "${vm_memory}",
"product" : {
"name": "${product}",
"version": "${version}",
"cnf_template" : "server3.cnf",
"cnf_template_path": "${cnf_path}"
}
},
"node_003" :
{
"hostname" : "node003",
"box" : "${backend_box}",
"memory_size" : "${vm_memory}",
"product" : {
"name": "${product}",
"version": "${version}",
"cnf_template" : "server4.cnf",
"cnf_template_path": "${cnf_path}"
}
},
"galera_000" :
{
"hostname" : "galera000",
"box" : "${backend_box}",
"memory_size" : "${vm_memory}",
"product" : {
"name": "galera",
"version": "${galera_version}",
"cnf_template" : "galera_server1.cnf",
"cnf_template_path": "${cnf_path}"
}
},
"galera_001" :
{
"hostname" : "galera001",
"box" : "${backend_box}",
"memory_size" : "${vm_memory}",
"product" : {
"name": "galera",
"version": "${galera_version}",
"cnf_template" : "galera_server2.cnf",
"cnf_template_path": "${cnf_path}"
}
},
"galera_002" :
{
"hostname" : "galera002",
"box" : "${backend_box}",
"memory_size" : "${vm_memory}",
"product" : {
"name": "galera",
"version": "${galera_version}",
"cnf_template" : "galera_server3.cnf",
"cnf_template_path": "${cnf_path}"
}
},
"galera_003" :
{
"hostname" : "galera003",
"box" : "${backend_box}",
"memory_size" : "${vm_memory}",
"product" : {
"name": "galera",
"version": "${galera_version}",
"cnf_template" : "galera_server4.cnf",
"cnf_template_path": "${cnf_path}"
}
},
"maxscale" :
{
"hostname" : "maxscale",
"box" : "${box}",
"memory_size" : "${vm_memory}",
"product" : {
"name": "maxscale"
}
}
}

View File

@ -0,0 +1,67 @@
{
###nodes###
"galera_000" :
{
"hostname" : "galera000",
"box" : "centos_7_libvirt",
"memory_size" : "2048",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server1.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_001" :
{
"hostname" : "galera001",
"box" : "centos_7_libvirt",
"memory_size" : "2048",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server2.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_002" :
{
"hostname" : "galera002",
"box" : "centos_7_libvirt",
"memory_size" : "2048",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server3.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"galera_003" :
{
"hostname" : "galera003",
"box" : "centos_7_libvirt",
"memory_size" : "2048",
"product" : {
"name": "galera",
"version": "###galera_version###",
"cnf_template" : "galera_server4.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
},
"maxscale" :
{
"hostname" : "maxscale",
"box" : "###box###",
"memory_size" : "2048",
"product" : {
"name": "maxscale"
}
}
}

View File

@ -6,7 +6,6 @@
#include "testconnections.h" #include "testconnections.h"
#include <sstream> #include <sstream>
#include "maxscales.h"
void change_master(TestConnections& test, int slave, int master, const char* name = NULL) void change_master(TestConnections& test, int slave, int master, const char* name = NULL)
{ {
@ -19,8 +18,7 @@ void change_master(TestConnections& test, int slave, int master, const char* nam
source += "'"; source += "'";
} }
execute_query(test.repl->nodes[slave], execute_query(test.repl->nodes[slave], "STOP ALL SLAVES;CHANGE MASTER %s TO master_host='%s', master_port=3306, "
"STOP ALL SLAVES;CHANGE MASTER %s TO master_host='%s', master_port=3306, "
"master_user='%s', master_password='%s', master_use_gtid=slave_pos;START ALL SLAVES", "master_user='%s', master_password='%s', master_use_gtid=slave_pos;START ALL SLAVES",
source.c_str(), test.repl->IP[master], test.repl->user_name, test.repl->password, source.c_str()); source.c_str(), test.repl->IP[master], test.repl->user_name, test.repl->password, source.c_str());
} }
@ -51,8 +49,8 @@ const char* dump_status(const StringSet& current, const StringSet& expected)
void check_status(TestConnections& test, const StringSet& expected_master, const StringSet& expected_slave) void check_status(TestConnections& test, const StringSet& expected_master, const StringSet& expected_slave)
{ {
sleep(2); sleep(2);
StringSet master = test.maxscales->get_server_status(0, "server1"); StringSet master = test.get_server_status("server1");
StringSet slave = test.maxscales->get_server_status(0, "server2"); StringSet slave = test.get_server_status("server2");
test.add_result(master != expected_master, "Master status is not what was expected: %s", test.add_result(master != expected_master, "Master status is not what was expected: %s",
dump_status(master, expected_master)); dump_status(master, expected_master));
test.add_result(slave != expected_slave, "Slave status is not what was expected: %s", test.add_result(slave != expected_slave, "Slave status is not what was expected: %s",

View File

@ -5,62 +5,52 @@
 * - Connect repeatedly to MaxScale with 'testdb' as the default database and execute SELECT 1
 */

-#include <iostream>
#include "testconnections.h"

-using namespace std;

int main(int argc, char *argv[])
{
-    TestConnections * Test = new TestConnections(argc, argv);
+    TestConnections test(argc, argv);
    char str[256];
-    int iterations = Test->smoke ? 100 : 500;
+    int iterations = 100;

-    Test->repl->execute_query_all_nodes((char *) "set global max_connections = 600;");
-    Test->set_timeout(30);
-    Test->repl->stop_slaves();
-    Test->set_timeout(30);
-    Test->maxscales->restart_maxscale(0);
-    Test->set_timeout(30);
-    Test->repl->connect();
-    Test->stop_timeout();
+    test.repl->execute_query_all_nodes((char *) "set global max_connections = 600;");
+    test.set_timeout(200);
+    test.repl->stop_slaves();
+    test.set_timeout(200);
+    test.restart_maxscale();
+    test.set_timeout(200);
+    test.repl->connect();
+    test.stop_timeout();

    /** Create a database on each node */
-    for (int i = 0; i < Test->repl->N; i++)
+    for (int i = 0; i < test.repl->N; i++)
    {
-        Test->set_timeout(20);
+        test.set_timeout(20);
        sprintf(str, "DROP DATABASE IF EXISTS shard_db%d", i);
-        Test->tprintf("%s\n", str);
-        execute_query(Test->repl->nodes[i], str);
-        Test->set_timeout(20);
+        test.tprintf("%s\n", str);
+        execute_query(test.repl->nodes[i], str);
+        test.set_timeout(20);
        sprintf(str, "CREATE DATABASE shard_db%d", i);
-        Test->tprintf("%s\n", str);
-        execute_query(Test->repl->nodes[i], str);
-        Test->stop_timeout();
+        test.tprintf("%s\n", str);
+        execute_query(test.repl->nodes[i], str);
+        test.stop_timeout();
    }

-    Test->repl->close_connections();
+    test.repl->close_connections();

-    for (int j = 0; j < iterations && Test->global_result == 0; j++)
+    for (int j = 0; j < iterations && test.global_result == 0; j++)
    {
-        for (int i = 0; i < Test->repl->N; i++)
+        for (int i = 0; i < test.repl->N && test.global_result == 0; i++)
        {
            sprintf(str, "shard_db%d", i);
-            Test->set_timeout(15);
-            MYSQL *conn = open_conn_db(Test->maxscales->rwsplit_port[0], Test->maxscales->IP[0],
-                                       str, Test->maxscales->user_name,
-                                       Test->maxscales->password, Test->ssl);
-            Test->set_timeout(15);
-            Test->tprintf("Trying DB %d\n", i);
-            if (execute_query(conn, "SELECT 1"))
-            {
-                Test->add_result(1, "Failed at %d\n", j);
-                break;
-            }
+            test.set_timeout(30);
+            MYSQL *conn = open_conn_db(test.rwsplit_port, test.maxscale_IP,
+                                       str, test.maxscale_user,
+                                       test.maxscale_password, test.ssl);
+            test.set_timeout(30);
+            test.add_result(execute_query(conn, "SELECT 1"), "Trying DB %d failed at %d", i, j);
            mysql_close(conn);
        }
    }

-    int rval = Test->global_result;
-    delete Test;
-    return rval;
+    return test.global_result;
}

View File

@ -332,29 +332,45 @@ TestConnections::~TestConnections()
} }
} }
void TestConnections::add_result(int result, const char *format, ...) void TestConnections::report_result(const char *format, va_list argp)
{ {
timeval t2; timeval t2;
gettimeofday(&t2, NULL); gettimeofday(&t2, NULL);
double elapsedTime = (t2.tv_sec - start_time.tv_sec); double elapsedTime = (t2.tv_sec - start_time.tv_sec);
elapsedTime += (double) (t2.tv_usec - start_time.tv_usec) / 1000000.0; elapsedTime += (double) (t2.tv_usec - start_time.tv_usec) / 1000000.0;
if (result != 0) global_result += 1;
{
global_result += result;
printf("%04f: TEST_FAILED! ", elapsedTime); printf("%04f: TEST_FAILED! ", elapsedTime);
va_list argp;
va_start(argp, format);
vprintf(format, argp); vprintf(format, argp);
va_end(argp);
if (format[strlen(format) - 1] != '\n') if (format[strlen(format) - 1] != '\n')
{ {
printf("\n"); printf("\n");
} }
} }
void TestConnections::add_result(bool result, const char *format, ...)
{
if (result)
{
va_list argp;
va_start(argp, format);
report_result(format, argp);
va_end(argp);
}
}
void TestConnections::assert(bool result, const char *format, ...)
{
if (!result)
{
va_list argp;
va_start(argp, format);
report_result(format, argp);
va_end(argp);
}
} }
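
To make the new failure-reporting API above concrete, here is a short usage sketch. It is not part of the patch: it assumes a live `TestConnections` instance and the harness' existing `execute_query()` helper, which returns non-zero on failure.

```cpp
#include "testconnections.h"  // test harness header

// Usage sketch only: `test` is a live TestConnections instance.
void check_select_one(TestConnections& test)
{
    int rc = execute_query(test.repl->nodes[0], "SELECT 1");

    // add_result() records a failure when the condition is true ...
    test.add_result(rc != 0, "SELECT 1 failed on node 0");

    // ... while assert() records a failure when the condition is false.
    test.assert(rc == 0, "SELECT 1 failed on node 0");
}
```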
int TestConnections::read_env() int TestConnections::read_env()
@ -1695,6 +1711,47 @@ int TestConnections::try_query_all(int m, const char *sql)
try_query(maxscales->conn_slave[m], sql); try_query(maxscales->conn_slave[m], sql);
} }
StringSet TestConnections::get_server_status(const char* name)
{
std::set<std::string> rval;
int rc;
char* res = maxscales->ssh_node_output_f(0, true, &rc, "maxadmin list servers|grep \'%s\'", name);
char* pipe = strrchr(res, '|');
if (res && pipe)
{
pipe++;
char* tok = strtok(pipe, ",");
while (tok)
{
char* p = tok;
char *end = strchr(tok, '\n');
if (!end)
end = strchr(tok, '\0');
// Trim leading whitespace
while (p < end && isspace(*p))
{
p++;
}
// Trim trailing whitespace
while (end > tok && isspace(*end))
{
*end-- = '\0';
}
rval.insert(p);
tok = strtok(NULL, ",\n");
}
free(res);
}
return rval;
}
int TestConnections::list_dirs(int m) int TestConnections::list_dirs(int m)
{ {
for (int i = 0; i < repl->N; i++) for (int i = 0; i < repl->N; i++)

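The `get_server_status()` helper added above parses the `maxadmin list servers` output into a set of label strings. A hedged usage sketch, mirroring the `check_status()` helper earlier in this change set (assumes the harness headers are available):

```cpp
#include "testconnections.h"  // StringSet is std::set<std::string>

// Usage sketch only: verifies the labels that `maxadmin list servers`
// reports for a named server.
void expect_plain_master(TestConnections& test)
{
    StringSet expected = {"Master", "Running"};
    StringSet actual = test.get_server_status("server1");

    test.add_result(actual != expected,
                    "server1 should carry only the Master and Running labels");
}
```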
View File

@ -10,6 +10,7 @@
#include <set> #include <set>
#include <string> #include <string>
typedef std::set<std::string> StringSet;
/** /**
* @brief Class contains references to Master/Slave and Galera test setups * @brief Class contains references to Master/Slave and Galera test setups
@ -223,7 +224,10 @@ public:
* @param result 0 if step PASSED * @param result 0 if step PASSED
* @param format ... message to print if result is not 0 * @param format ... message to print if result is not 0

*/ */
void add_result(int result, const char *format, ...); void add_result(bool result, const char *format, ...);
/** Same as add_result() but inverted */
void assert(bool result, const char *format, ...);
/** /**
* @brief ReadEnv Reads all Maxscale and Master/Slave and Galera setups info from environmental variables * @brief ReadEnv Reads all Maxscale and Master/Slave and Galera setups info from environmental variables
@ -430,6 +434,15 @@ public:
*/ */
int try_query_all(int m, const char *sql); int try_query_all(int m, const char *sql);
/**
* @brief Get the set of labels that are assigned to server @c name
*
* @param name The name of the server that must be present in the output of `maxadmin list servers`
*
* @return A set of string labels assigned to this server
*/
StringSet get_server_status(const char* name);
/** /**
* @brief check_maxscale_processes Check if number of running Maxscale processes is equal to 'expected' * @brief check_maxscale_processes Check if number of running Maxscale processes is equal to 'expected'
* @param expected expected number of Maxscale processes * @param expected expected number of Maxscale processes
@ -478,6 +491,10 @@ public:
void check_current_connections(int m, int value); void check_current_connections(int m, int value);
int stop_maxscale(int m); int stop_maxscale(int m);
void process_template(const char *src, const char *dest = "/etc/maxscale.cnf");
private:
void report_result(const char *format, va_list argp);
}; };
/** /**

View File

@ -91,7 +91,7 @@ add_test_executable_notest(sysbench_example.cpp sysbench_example replication)
# Build the MariaDB Connector/C 3.0 # Build the MariaDB Connector/C 3.0
set(CONNECTOR_C_VERSION "3.0" CACHE STRING "The Connector-C version to use") set(CONNECTOR_C_VERSION "v3.0.2" CACHE STRING "The Connector-C version to use")
include(ExternalProject) include(ExternalProject)
ExternalProject_Add(connector-c ExternalProject_Add(connector-c

View File

@ -172,6 +172,19 @@ MYSQL *mxs_mysql_real_connect(MYSQL *con, SERVER *server, const char *user, cons
MY_CHARSET_INFO cs_info; MY_CHARSET_INFO cs_info;
mysql_get_character_set_info(mysql, &cs_info); mysql_get_character_set_info(mysql, &cs_info);
server->charset = cs_info.number; server->charset = cs_info.number;
if (listener && mysql_get_ssl_cipher(con) == NULL)
{
if (server->log_warning.ssl_not_enabled)
{
server->log_warning.ssl_not_enabled = false;
MXS_ERROR("An encrypted connection to '%s' could not be created, "
"ensure that TLS is enabled on the target server.",
server->unique_name);
}
// Don't close the connection as it is closed elsewhere, just set to NULL
mysql = NULL;
}
} }
return mysql; return mysql;

View File

@ -149,6 +149,9 @@ SERVER* server_alloc(const char *name, const char *address, unsigned short port,
server->last_event = SERVER_UP_EVENT; server->last_event = SERVER_UP_EVENT;
server->triggered_at = 0; server->triggered_at = 0;
// Log all warnings once
memset(&server->log_warning, 1, sizeof(server->log_warning));
spinlock_acquire(&server_spin); spinlock_acquire(&server_spin);
server->next = allServers; server->next = allServers;
allServers = server; allServers = server;

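The `memset()` above arms every warning flag at server allocation time: setting each byte to 1 makes every `bool` member of `log_warning` read as true, and the consumer (such as the TLS check in `mxs_mysql_real_connect()` above) clears its own flag after logging once. A minimal standalone sketch of this pattern, with an assumed struct layout since the header change is not part of this diff:

```cpp
#include <cstdio>
#include <cstring>

// Assumed layout: the real server->log_warning struct is defined elsewhere.
struct log_warning_t
{
    bool ssl_not_enabled;
};

int main()
{
    log_warning_t log_warning;

    // Every byte set to 1 => every bool member reads as true, i.e. "armed".
    std::memset(&log_warning, 1, sizeof(log_warning));

    for (int attempt = 0; attempt < 3; attempt++)
    {
        if (log_warning.ssl_not_enabled)
        {
            log_warning.ssl_not_enabled = false;  // disarm after the first report
            std::printf("TLS not enabled on the target server (logged once)\n");
        }
    }
    return 0;
}
```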
View File

@ -1607,6 +1607,7 @@ service_update(SERVICE *service, char *router, char *user, char *auth)
*/ */
int service_refresh_users(SERVICE *service) int service_refresh_users(SERVICE *service)
{ {
ss_dassert(service);
int ret = 1; int ret = 1;
int self = mxs_worker_get_current_id(); int self = mxs_worker_get_current_id();
ss_dassert(self >= 0); ss_dassert(self >= 0);

View File

@ -588,7 +588,9 @@ strip_escape_chars(char* val)
#define BUFFER_GROWTH_RATE 2.0 #define BUFFER_GROWTH_RATE 2.0
static pcre2_code* remove_comments_re = NULL; static pcre2_code* remove_comments_re = NULL;
static const PCRE2_SPTR remove_comments_pattern = (PCRE2_SPTR) static const PCRE2_SPTR remove_comments_pattern = (PCRE2_SPTR)
"(?:`[^`]*`\\K)|(\\/[*](?!(M?!)).*?[*]\\/)|(?:#.*|--[[:space:]].*)"; "(?:`[^`]*`\\K)|"
"(\\/[*](?!(M?!)).*?[*]\\/)|"
"([[:space:]](?:#.*|--[[:space:]].*(\\n|\\r\\n)))";
/** /**
* Remove SQL comments from the end of a string * Remove SQL comments from the end of a string

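The revised pattern above skips backtick-quoted identifiers via `\K`, still removes `/* ... */` comments that are not executable `/*!` or `/*M!` comments, and now only treats `#` and `--` sequences as comments when they are preceded by whitespace. A standalone sketch of how the pattern behaves, built against PCRE2 (the module itself compiles it once into `remove_comments_re` and strips the matched span):

```cpp
// Compile with: g++ example.cpp -lpcre2-8
#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <cstdio>
#include <cstring>

int main()
{
    static const char* pattern =
        "(?:`[^`]*`\\K)|"
        "(\\/[*](?!(M?!)).*?[*]\\/)|"
        "([[:space:]](?:#.*|--[[:space:]].*(\\n|\\r\\n)))";

    int err = 0;
    PCRE2_SIZE erroff = 0;
    pcre2_code* re = pcre2_compile((PCRE2_SPTR)pattern, PCRE2_ZERO_TERMINATED,
                                   0, &err, &erroff, nullptr);
    if (re == nullptr)
    {
        return 1;
    }

    // The trailing "-- comment" is preceded by whitespace, so it is matched.
    const char* sql = "SELECT 1 FROM t1 -- trailing comment\n";
    pcre2_match_data* md = pcre2_match_data_create_from_pattern(re, nullptr);
    int rc = pcre2_match(re, (PCRE2_SPTR)sql, std::strlen(sql), 0, 0, md, nullptr);

    if (rc > 0)
    {
        PCRE2_SIZE* ov = pcre2_get_ovector_pointer(md);
        std::printf("comment span: \"%.*s\"\n", (int)(ov[1] - ov[0]), sql + ov[0]);
    }

    pcre2_match_data_free(md);
    pcre2_code_free(re);
    return 0;
}
```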
View File

@ -49,6 +49,7 @@ typedef struct
bool detectStaleMaster; /**< Monitor flag for MySQL replication Stale Master detection */ bool detectStaleMaster; /**< Monitor flag for MySQL replication Stale Master detection */
bool detectStaleSlave; /**< Monitor flag for MySQL replication Stale Master detection */ bool detectStaleSlave; /**< Monitor flag for MySQL replication Stale Master detection */
bool multimaster; /**< Detect and handle multi-master topologies */ bool multimaster; /**< Detect and handle multi-master topologies */
bool ignore_external_masters; /**< Ignore masters outside of the monitor configuration */
int disableMasterFailback; /**< Monitor flag for Galera Cluster Master failback */ int disableMasterFailback; /**< Monitor flag for Galera Cluster Master failback */
int availableWhenDonor; /**< Monitor flag for Galera Cluster Donor availability */ int availableWhenDonor; /**< Monitor flag for Galera Cluster Donor availability */
int disableMasterRoleSetting; /**< Monitor flag to disable setting master role */ int disableMasterRoleSetting; /**< Monitor flag to disable setting master role */

File diff suppressed because it is too large

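The monitor implementation diff is suppressed above, so the behavioural change is not visible here; the new flag tells mysqlmon to ignore a replication master that is not part of the monitored server set and to treat the monitored server replicating from it as a regular master rather than as a slave of the external server. A hedged sketch of how such a gate could be applied to a computed status mask; the constants below are illustrative stand-ins, not MaxScale's actual `SERVER_*` macros, and the real logic lives in the suppressed file:

```cpp
#include <cstdint>

// Illustrative status bits; the real monitor works on MaxScale's SERVER_* flags.
constexpr uint32_t STATUS_RUNNING        = 1u << 0;
constexpr uint32_t STATUS_MASTER         = 1u << 1;
constexpr uint32_t STATUS_SLAVE          = 1u << 2;
constexpr uint32_t STATUS_EXTERNAL_SLAVE = 1u << 3;  // "Slave of External Server"

uint32_t apply_ignore_external_masters(uint32_t status, bool ignore_external_masters)
{
    const uint32_t external_master_labels =
        STATUS_MASTER | STATUS_SLAVE | STATUS_EXTERNAL_SLAVE | STATUS_RUNNING;

    if (ignore_external_masters && (status & external_master_labels) == external_master_labels)
    {
        // Drop the slave labels and report the server as a plain master.
        status = STATUS_MASTER | STATUS_RUNNING;
    }

    return status;
}
```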
View File

@ -301,7 +301,7 @@ bool is_error_response(GWBUF *buffer)
* @param dcb Backend DCB where authentication failed * @param dcb Backend DCB where authentication failed
* @param buffer Buffer containing the response from the backend * @param buffer Buffer containing the response from the backend
*/ */
void log_error_response(DCB *dcb, GWBUF *buffer) static void handle_error_response(DCB *dcb, GWBUF *buffer)
{ {
uint8_t *data = (uint8_t*)GWBUF_DATA(buffer); uint8_t *data = (uint8_t*)GWBUF_DATA(buffer);
size_t len = MYSQL_GET_PAYLOAD_LEN(data); size_t len = MYSQL_GET_PAYLOAD_LEN(data);
@ -326,6 +326,16 @@ void log_error_response(DCB *dcb, GWBUF *buffer)
server_set_status(dcb->server, SERVER_MAINT); server_set_status(dcb->server, SERVER_MAINT);
} }
else if (errcode == ER_ACCESS_DENIED_ERROR ||
errcode == ER_DBACCESS_DENIED_ERROR ||
errcode == ER_ACCESS_DENIED_NO_PASSWORD_ERROR)
{
if (dcb->session->state != SESSION_STATE_DUMMY)
{
// Authentication failed, reload users
service_refresh_users(dcb->service);
}
}
} }
/** /**
@ -494,7 +504,7 @@ gw_read_backend_event(DCB *dcb)
{ {
/** The server responded with an error */ /** The server responded with an error */
proto->protocol_auth_state = MXS_AUTH_STATE_FAILED; proto->protocol_auth_state = MXS_AUTH_STATE_FAILED;
log_error_response(dcb, readbuf); handle_error_response(dcb, readbuf);
} }
if (proto->protocol_auth_state == MXS_AUTH_STATE_CONNECTED) if (proto->protocol_auth_state == MXS_AUTH_STATE_CONNECTED)
@ -887,7 +897,7 @@ gw_read_and_write(DCB *dcb)
{ {
/** The COM_CHANGE USER failed, generate a fake hangup event to /** The COM_CHANGE USER failed, generate a fake hangup event to
* close the DCB and send an error to the client. */ * close the DCB and send an error to the client. */
log_error_response(dcb, reply); handle_error_response(dcb, reply);
} }
else else
{ {

View File

@ -71,7 +71,6 @@ static void mysql_client_auth_error_handling(DCB *dcb, int auth_val, int packet_
static int gw_read_do_authentication(DCB *dcb, GWBUF *read_buffer, int nbytes_read); static int gw_read_do_authentication(DCB *dcb, GWBUF *read_buffer, int nbytes_read);
static int gw_read_normal_data(DCB *dcb, GWBUF *read_buffer, int nbytes_read); static int gw_read_normal_data(DCB *dcb, GWBUF *read_buffer, int nbytes_read);
static int gw_read_finish_processing(DCB *dcb, GWBUF *read_buffer, uint64_t capabilities); static int gw_read_finish_processing(DCB *dcb, GWBUF *read_buffer, uint64_t capabilities);
static bool ensure_complete_packet(DCB *dcb, GWBUF **read_buffer, int nbytes_read);
static void gw_process_one_new_client(DCB *client_dcb); static void gw_process_one_new_client(DCB *client_dcb);
static spec_com_res_t process_special_commands(DCB *client_dcb, GWBUF *read_buffer, int nbytes_read); static spec_com_res_t process_special_commands(DCB *client_dcb, GWBUF *read_buffer, int nbytes_read);
static spec_com_res_t handle_query_kill(DCB* dcb, GWBUF* read_buffer, spec_com_res_t current, static spec_com_res_t handle_query_kill(DCB* dcb, GWBUF* read_buffer, spec_com_res_t current,
@ -816,25 +815,6 @@ static bool process_client_commands(DCB* dcb, int bytes_available, GWBUF** buffe
int pktlen; int pktlen;
uint8_t cmd = (uint8_t)MXS_COM_QUERY; // Treat empty packets as COM_QUERY uint8_t cmd = (uint8_t)MXS_COM_QUERY; // Treat empty packets as COM_QUERY
/**
* Buffer has at least 5 bytes, the packet is in contiguous memory
* and it's the first packet in the buffer.
*/
if (offset == 0 && GWBUF_LENGTH(queue) >= MYSQL_HEADER_LEN + 1)
{
uint8_t *data = (uint8_t*)GWBUF_DATA(queue);
pktlen = gw_mysql_get_byte3(data);
if (pktlen)
{
cmd = *(data + MYSQL_HEADER_LEN);
}
}
/**
* We have more than one packet in the buffer or the first 5 bytes
* of a packet are split across two buffers.
*/
else
{
uint8_t packet_header[MYSQL_HEADER_LEN]; uint8_t packet_header[MYSQL_HEADER_LEN];
if (gwbuf_copy_data(queue, offset, MYSQL_HEADER_LEN, packet_header) != MYSQL_HEADER_LEN) if (gwbuf_copy_data(queue, offset, MYSQL_HEADER_LEN, packet_header) != MYSQL_HEADER_LEN)
@ -851,17 +831,16 @@ static bool process_client_commands(DCB* dcb, int bytes_available, GWBUF** buffe
* If we have an empty packet or at least 5 bytes of data, we can start * If we have an empty packet or at least 5 bytes of data, we can start
* sending the data to the router. * sending the data to the router.
*/ */
if (pktlen && gwbuf_copy_data(queue, MYSQL_HEADER_LEN, 1, &cmd) != 1) if (pktlen && gwbuf_copy_data(queue, offset + MYSQL_HEADER_LEN, 1, &cmd) != 1)
{ {
if ((queue = split_and_store(dcb, queue, offset)) == NULL) if ((queue = split_and_store(dcb, queue, offset)) == NULL)
{ {
ss_dassert(bytes_available == MYSQL_HEADER_LEN); ss_dassert(bytes_available - offset == MYSQL_HEADER_LEN);
return false; return false;
} }
ss_dassert(offset > 0); ss_dassert(offset > 0);
break; break;
} }
}
MySQLProtocol *proto = (MySQLProtocol*)dcb->protocol; MySQLProtocol *proto = (MySQLProtocol*)dcb->protocol;
if (dcb->protocol_packet_length - MYSQL_HEADER_LEN != GW_MYSQL_MAX_PACKET_LEN) if (dcb->protocol_packet_length - MYSQL_HEADER_LEN != GW_MYSQL_MAX_PACKET_LEN)
@ -978,25 +957,28 @@ gw_read_normal_data(DCB *dcb, GWBUF *read_buffer, int nbytes_read)
/** Ask what type of input the router/filter chain expects */ /** Ask what type of input the router/filter chain expects */
capabilities = service_get_capabilities(session->service); capabilities = service_get_capabilities(session->service);
/** Update the current protocol command being executed */ /** If the router requires statement input we need to make sure that
if (!process_client_commands(dcb, nbytes_read, &read_buffer)) * a complete SQL packet is read before continuing. The current command
{ * that is tracked by the protocol module is updated in route_by_statement() */
return 0;
}
/** If the router requires statement input or we are still authenticating
* we need to make sure that a complete SQL packet is read before continuing */
if (rcap_type_required(capabilities, RCAP_TYPE_STMT_INPUT)) if (rcap_type_required(capabilities, RCAP_TYPE_STMT_INPUT))
{ {
if (nbytes_read < 3 || nbytes_read < uint8_t pktlen[MYSQL_HEADER_LEN];
(int)(MYSQL_GET_PAYLOAD_LEN((uint8_t *) GWBUF_DATA(read_buffer)) + 4)) size_t n_copied = gwbuf_copy_data(read_buffer, 0, MYSQL_HEADER_LEN, pktlen);
if (n_copied != sizeof(pktlen) ||
(uint32_t)nbytes_read < MYSQL_GET_PAYLOAD_LEN(pktlen) + MYSQL_HEADER_LEN)
{ {
dcb_readq_set(dcb, read_buffer); dcb_readq_append(dcb, read_buffer);
return 0; return 0;
} }
set_qc_mode(session, &read_buffer); set_qc_mode(session, &read_buffer);
} }
/** Update the current protocol command being executed */
else if (!process_client_commands(dcb, nbytes_read, &read_buffer))
{
return 0;
}
/** The query classifier classifies according to the service's server that has /** The query classifier classifies according to the service's server that has
* the smallest version number. */ * the smallest version number. */
@ -1027,6 +1009,30 @@ gw_read_normal_data(DCB *dcb, GWBUF *read_buffer, int nbytes_read)
return rval; return rval;
} }
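
The complete-packet check above copies the fixed MySQL packet header out of the buffer before deciding whether to route the statement or stash the partial read with `dcb_readq_append()`. A minimal standalone sketch of the header layout it relies on (3-byte little-endian payload length followed by a 1-byte sequence id):

```cpp
#include <cstdint>
#include <cstdio>

// MySQL wire protocol: every packet starts with a 4-byte header.
constexpr unsigned MYSQL_HEADER_LEN = 4;

static uint32_t payload_length(const uint8_t hdr[MYSQL_HEADER_LEN])
{
    // Bytes 0-2 are the payload length, little-endian; byte 3 is the sequence id.
    return uint32_t(hdr[0]) | (uint32_t(hdr[1]) << 8) | (uint32_t(hdr[2]) << 16);
}

int main()
{
    const uint8_t hdr[MYSQL_HEADER_LEN] = {0x21, 0x00, 0x00, 0x00};  // 33-byte payload, seq 0
    const uint32_t full_packet = payload_length(hdr) + MYSQL_HEADER_LEN;

    // gw_read_normal_data() keeps appending to the DCB read queue until at
    // least this many bytes have arrived, then routes the whole statement.
    std::printf("payload=%u bytes, complete packet=%u bytes\n",
                payload_length(hdr), full_packet);
    return 0;
}
```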
/**
* Check if a connection qualifies to be added into the persistent connection pool
*
* @param dcb The client DCB to check
*/
void check_pool_candidate(DCB* dcb)
{
MXS_SESSION *session = dcb->session;
MySQLProtocol *proto = (MySQLProtocol*)dcb->protocol;
if (proto->current_command == MXS_COM_QUIT)
{
/** The client is closing the connection. We know that this will be the
* last command the client sends so the backend connections are very likely
* to be in an idle state.
*
* If the client is pipelining the queries (i.e. sending N requests as
* a batch and then expecting N responses) then it is possible that
* the backend connections are not idle when the COM_QUIT is received.
* In most cases we can assume that the connections are idle. */
session_qualify_for_pool(session);
}
}
/** /**
* @brief Client read event, common processing after single statement handling * @brief Client read event, common processing after single statement handling
* *
@ -1047,25 +1053,10 @@ gw_read_finish_processing(DCB *dcb, GWBUF *read_buffer, uint64_t capabilities)
/** Reset error handler when routing of the new query begins */ /** Reset error handler when routing of the new query begins */
dcb->dcb_errhandle_called = false; dcb->dcb_errhandle_called = false;
if (proto->current_command == MXS_COM_QUIT)
{
/** The client is closing the connection. We know that this will be the
* last command the client sends so the backend connections are very likely
* to be in an idle state.
*
* If the client is pipelining the queries (i.e. sending N requests as * If the client is pipelining the queries (i.e. sending N requests as
* a batch and then expecting N responses) then it is possible that
* the backend connections are not idle when the COM_QUIT is received.
* In most cases we can assume that the connections are idle. */
session_qualify_for_pool(session);
}
if (rcap_type_required(capabilities, RCAP_TYPE_STMT_INPUT)) if (rcap_type_required(capabilities, RCAP_TYPE_STMT_INPUT))
{ {
/** /**
* Feed each statement completely and separately * Feed each statement completely and separately to router.
* to router. The routing functions return 1 for
* success or 0 for failure.
*/ */
return_code = route_by_statement(session, capabilities, &read_buffer) ? 0 : 1; return_code = route_by_statement(session, capabilities, &read_buffer) ? 0 : 1;
@ -1080,9 +1071,10 @@ gw_read_finish_processing(DCB *dcb, GWBUF *read_buffer, uint64_t capabilities)
} }
else if (NULL != session->router_session || (rcap_type_required(capabilities, RCAP_TYPE_NO_RSESSION))) else if (NULL != session->router_session || (rcap_type_required(capabilities, RCAP_TYPE_NO_RSESSION)))
{ {
/** Feed whole packet to router, which will free it /** Check if this connection qualifies for the connection pool */
* and return 1 for success, 0 for failure check_pool_candidate(dcb);
*/
/** Feed the whole buffer to the router */
return_code = MXS_SESSION_ROUTE_QUERY(session, read_buffer) ? 0 : 1; return_code = MXS_SESSION_ROUTE_QUERY(session, read_buffer) ? 0 : 1;
} }
/* else return_code is still 0 from when it was originally set */ /* else return_code is still 0 from when it was originally set */
@ -1424,13 +1416,41 @@ static int gw_client_hangup_event(DCB *dcb)
goto retblock; goto retblock;
} }
if (!session_valid_for_pool(session))
{
// The client did not send a COM_QUIT packet
modutil_send_mysql_err_packet(dcb, 0, 0, 1927, "08S01", "Connection killed by MaxScale"); modutil_send_mysql_err_packet(dcb, 0, 0, 1927, "08S01", "Connection killed by MaxScale");
}
dcb_close(dcb); dcb_close(dcb);
retblock: retblock:
return 1; return 1;
} }
/**
* Update protocol tracking information for an individual statement
*
* @param dcb Client DCB
* @param buffer Buffer containing a single packet
*/
void update_current_command(DCB* dcb, GWBUF* buffer)
{
MySQLProtocol *proto = (MySQLProtocol*)dcb->protocol;
uint8_t cmd = (uint8_t)MXS_COM_QUERY;
/**
* As we are routing individual packets, we can extract the command byte here.
* Empty packets are treated as COM_QUERY packets by default.
*/
gwbuf_copy_data(buffer, MYSQL_HEADER_LEN, 1, &cmd);
proto->current_command = (mxs_mysql_cmd_t)cmd;
/**
* Now that we have the current command, we can check if this connection
* can be a candidate for the connection pool.
*/
check_pool_candidate(dcb);
}
/** /**
* Detect if buffer includes partial mysql packet or multiple packets. * Detect if buffer includes partial mysql packet or multiple packets.
@ -1464,21 +1484,11 @@ static int route_by_statement(MXS_SESSION* session, uint64_t capabilities, GWBUF
{ {
CHK_GWBUF(packetbuf); CHK_GWBUF(packetbuf);
MySQLProtocol* proto = (MySQLProtocol*)session->client_dcb->protocol;
proto->current_command = (mxs_mysql_cmd_t)mxs_mysql_get_command(packetbuf);
/** /**
* This means that buffer includes exactly one MySQL * Update the current command being executed.
* statement.
* backend func.write uses the information. MySQL backend
* protocol, for example, stores the command identifier
* to protocol structure. When some other thread reads
* the corresponding response the command tells how to
* handle response.
*
* Set it here instead of gw_read_client_event to make
* sure it is set to each (MySQL) packet.
*/ */
update_current_command(session->client_dcb, packetbuf);
if (rcap_type_required(capabilities, RCAP_TYPE_CONTIGUOUS_INPUT)) if (rcap_type_required(capabilities, RCAP_TYPE_CONTIGUOUS_INPUT))
{ {
if (!GWBUF_IS_CONTIGUOUS(packetbuf)) if (!GWBUF_IS_CONTIGUOUS(packetbuf))
@ -1567,52 +1577,6 @@ return_rc:
return rc; return rc;
} }
/**
* if read queue existed appent read to it. if length of read buffer is less
* than 3 or less than mysql packet then return. else copy mysql packets to
* separate buffers from read buffer and continue. else if read queue didn't
* exist, length of read is less than 3 or less than mysql packet then
* create read queue and append to it and return. if length read is less than
* mysql packet length append to read queue append to it and return.
* else (complete packet was read) continue.
*
* @return True if we have a complete packet, otherwise false
*/
static bool ensure_complete_packet(DCB *dcb, GWBUF **read_buffer, int nbytes_read)
{
if (dcb_readq_has(dcb))
{
dcb_readq_append(dcb, *read_buffer);
nbytes_read = dcb_readq_length(dcb);
int plen = MYSQL_GET_PAYLOAD_LEN((uint8_t *) GWBUF_DATA(dcb_readq_get(dcb)));
if (nbytes_read < 3 || nbytes_read < plen + 4)
{
return false;
}
else
{
/**
* There is at least one complete mysql packet in
* read_buffer.
*/
*read_buffer = dcb_readq_release(dcb);
}
}
else
{
uint8_t* data = (uint8_t *) GWBUF_DATA(*read_buffer);
if (nbytes_read < 3 || nbytes_read < (int)MYSQL_GET_PAYLOAD_LEN(data) + 4)
{
dcb_readq_append(dcb, *read_buffer);
return false;
}
}
return true;
}
/** /**
* Some SQL commands/queries need to be detected and handled by the protocol * Some SQL commands/queries need to be detected and handled by the protocol
* and MaxScale instead of being routed forward as is. * and MaxScale instead of being routed forward as is.

View File

@ -657,6 +657,13 @@ avro_binlog_end_t avro_read_all_events(AVRO_INSTANCE *router)
snprintf(next_file, sizeof(next_file), BINLOG_NAMEFMT, router->fileroot, snprintf(next_file, sizeof(next_file), BINLOG_NAMEFMT, router->fileroot,
blr_file_get_next_binlogname(router->binlog_name)); blr_file_get_next_binlogname(router->binlog_name));
} }
else if (hdr.event_type == MARIADB_ANNOTATE_ROWS_EVENT)
{
MXS_INFO("Annotate_rows_event: %.*s", hdr.event_size - BINLOG_EVENT_HDR_LEN, ptr);
pos += original_size;
router->current_pos = pos;
continue;
}
else if (hdr.event_type == TABLE_MAP_EVENT) else if (hdr.event_type == TABLE_MAP_EVENT)
{ {
handle_table_map_event(router, &hdr, ptr); handle_table_map_event(router, &hdr, ptr);
@ -956,6 +963,8 @@ bool save_and_replace_table_create(AVRO_INSTANCE *router, TABLE_CREATE *created)
{ {
if (strcmp(key, table_ident) == 0) if (strcmp(key, table_ident) == 0)
{ {
TABLE_MAP* map = hashtable_fetch(router->table_maps, key);
router->active_maps[map->id % MAX_MAPPED_TABLES] = NULL;
hashtable_delete(router->table_maps, key); hashtable_delete(router->table_maps, key);
} }
} }
@ -1000,13 +1009,13 @@ void handle_query_event(AVRO_INSTANCE *router, REP_HEADER *hdr, int *pending_tra
memcpy(db, (char*) ptr + PHDR_OFF + vblklen, dblen); memcpy(db, (char*) ptr + PHDR_OFF + vblklen, dblen);
db[dblen] = 0; db[dblen] = 0;
unify_whitespace(sql, len);
size_t sqlsz = len, tmpsz = len; size_t sqlsz = len, tmpsz = len;
char *tmp = MXS_MALLOC(len); char *tmp = MXS_MALLOC(len);
MXS_ABORT_IF_NULL(tmp); MXS_ABORT_IF_NULL(tmp);
remove_mysql_comments((const char**)&sql, &sqlsz, &tmp, &tmpsz); remove_mysql_comments((const char**)&sql, &sqlsz, &tmp, &tmpsz);
sql = tmp; sql = tmp;
len = tmpsz; len = tmpsz;
unify_whitespace(sql, len);
if (is_create_table_statement(router, sql, len)) if (is_create_table_statement(router, sql, len))
{ {

View File

@ -104,13 +104,8 @@ bool handle_table_map_event(AVRO_INSTANCE *router, REP_HEADER *hdr, uint8_t *ptr
{ {
ss_dassert(create->columns > 0); ss_dassert(create->columns > 0);
TABLE_MAP *old = hashtable_fetch(router->table_maps, table_ident); TABLE_MAP *old = hashtable_fetch(router->table_maps, table_ident);
if (old == NULL || old->version != create->version)
{
TABLE_MAP *map = table_map_alloc(ptr, ev_len, create); TABLE_MAP *map = table_map_alloc(ptr, ev_len, create);
MXS_ABORT_IF_NULL(map); // Fatal error at this point
if (map)
{
char* json_schema = json_new_schema_from_table(map); char* json_schema = json_new_schema_from_table(map);
if (json_schema) if (json_schema)
@ -138,6 +133,7 @@ bool handle_table_map_event(AVRO_INSTANCE *router, REP_HEADER *hdr, uint8_t *ptr
hashtable_add(router->open_tables, table_ident, avro_table); hashtable_add(router->open_tables, table_ident, avro_table);
save_avro_schema(router->avrodir, json_schema, map); save_avro_schema(router->avrodir, json_schema, map);
router->active_maps[map->id % MAX_MAPPED_TABLES] = map; router->active_maps[map->id % MAX_MAPPED_TABLES] = map;
ss_dassert(router->active_maps[id % MAX_MAPPED_TABLES] == map);
MXS_DEBUG("Table %s mapped to %lu", table_ident, map->id); MXS_DEBUG("Table %s mapped to %lu", table_ident, map->id);
rval = true; rval = true;
@ -158,21 +154,6 @@ bool handle_table_map_event(AVRO_INSTANCE *router, REP_HEADER *hdr, uint8_t *ptr
} }
} }
else else
{
MXS_ERROR("Failed to allocate new table map.");
}
}
else
{
router->active_maps[old->id % MAX_MAPPED_TABLES] = NULL;
table_map_remap(ptr, ev_len, old);
router->active_maps[old->id % MAX_MAPPED_TABLES] = old;
MXS_DEBUG("Table %s re-mapped to %lu", table_ident, old->id);
/** No changes in the schema */
rval = true;
}
}
else
{ {
MXS_WARNING("Table map event for table '%s' read before the DDL statement " MXS_WARNING("Table map event for table '%s' read before the DDL statement "
"for that table was read. Data will not be processed for this " "for that table was read. Data will not be processed for this "
@ -363,8 +344,9 @@ bool handle_row_event(AVRO_INSTANCE *router, REP_HEADER *hdr, uint8_t *ptr)
} }
else else
{ {
MXS_ERROR("Row event and table map event have different column counts." MXS_ERROR("Row event and table map event have different column "
" Only full row image is currently supported."); "counts for table %s.%s, only full row image is currently "
"supported.", map->database, map->table);
} }
} }
else else
@ -606,7 +588,6 @@ uint8_t* process_row_event_data(TABLE_MAP *map, TABLE_CREATE *create, avro_value
} }
MXS_INFO("[%ld] CHAR: field: %d bytes, data: %d bytes", i, field_length, bytes); MXS_INFO("[%ld] CHAR: field: %d bytes, data: %d bytes", i, field_length, bytes);
ss_dassert(bytes || *ptr == '\0');
char str[bytes + 1]; char str[bytes + 1];
memcpy(str, ptr, bytes); memcpy(str, ptr, bytes);
str[bytes] = '\0'; str[bytes] = '\0';

View File

@ -321,11 +321,11 @@ void save_avro_schema(const char *path, const char* schema, TABLE_MAP *map)
* @return Pointer to the start of the definition of NULL if the query is * @return Pointer to the start of the definition of NULL if the query is
* malformed. * malformed.
*/ */
static const char* get_table_definition(const char *sql, int* size) static const char* get_table_definition(const char *sql, int len, int* size)
{ {
const char *rval = NULL; const char *rval = NULL;
const char *ptr = sql; const char *ptr = sql;
const char *end = strchr(sql, '\0'); const char *end = sql + len;
while (ptr < end && *ptr != '(') while (ptr < end && *ptr != '(')
{ {
ptr++; ptr++;
@ -403,10 +403,12 @@ static bool get_table_name(const char* sql, char* dest)
/** /**
* Extract the database name from a CREATE TABLE statement * Extract the database name from a CREATE TABLE statement
*
* @param sql SQL statement * @param sql SQL statement
* @param dest Destination where the database name is extracted. Must be at least * @param dest Destination where the database name is extracted. Must be at least
* MYSQL_DATABASE_MAXLEN bytes long. * MYSQL_DATABASE_MAXLEN bytes long.
* @return True if extraction was successful *
* @return True if a database name was extracted
*/ */
static bool get_database_name(const char* sql, char* dest) static bool get_database_name(const char* sql, char* dest)
{ {
@ -426,6 +428,10 @@ static bool get_database_name(const char* sql, char* dest)
ptr--; ptr--;
} }
if (*ptr == '.')
{
// The query defines an explicit database
while (*ptr == '`' || *ptr == '.' || isspace(*ptr)) while (*ptr == '`' || *ptr == '.' || isspace(*ptr))
{ {
ptr--; ptr--;
@ -443,6 +449,7 @@ static bool get_database_name(const char* sql, char* dest)
dest[end - ptr] = '\0'; dest[end - ptr] = '\0';
rval = true; rval = true;
} }
}
return rval; return rval;
} }
@ -512,13 +519,17 @@ static const char *extract_field_name(const char* ptr, char* dest, size_t size)
} }
} }
if (!bt)
{
if (strncasecmp(ptr, "constraint", 10) == 0 || strncasecmp(ptr, "index", 5) == 0 || if (strncasecmp(ptr, "constraint", 10) == 0 || strncasecmp(ptr, "index", 5) == 0 ||
strncasecmp(ptr, "key", 3) == 0 || strncasecmp(ptr, "fulltext", 8) == 0 || strncasecmp(ptr, "key", 3) == 0 || strncasecmp(ptr, "fulltext", 8) == 0 ||
strncasecmp(ptr, "spatial", 7) == 0 || strncasecmp(ptr, "foreign", 7) == 0 || strncasecmp(ptr, "spatial", 7) == 0 || strncasecmp(ptr, "foreign", 7) == 0 ||
strncasecmp(ptr, "unique", 6) == 0 || strncasecmp(ptr, "primary", 7) == 0) strncasecmp(ptr, "unique", 6) == 0 || strncasecmp(ptr, "primary", 7) == 0)
{ {
// Found a keyword
return NULL; return NULL;
} }
}
const char *start = ptr; const char *start = ptr;
@ -694,35 +705,42 @@ TABLE_CREATE* table_create_from_schema(const char* file, const char* db,
* @param db Database where this query was executed * @param db Database where this query was executed
* @return New CREATE_TABLE object or NULL if an error occurred * @return New CREATE_TABLE object or NULL if an error occurred
*/ */
TABLE_CREATE* table_create_alloc(const char* sql, int len, const char* event_db) TABLE_CREATE* table_create_alloc(const char* sql, int len, const char* db)
{ {
/** Extract the table definition so we can get the column names from it */ /** Extract the table definition so we can get the column names from it */
int stmt_len = 0; int stmt_len = 0;
const char* statement_sql = get_table_definition(sql, &stmt_len); const char* statement_sql = get_table_definition(sql, len, &stmt_len);
ss_dassert(statement_sql); ss_dassert(statement_sql);
char table[MYSQL_TABLE_MAXLEN + 1]; char table[MYSQL_TABLE_MAXLEN + 1];
char database[MYSQL_DATABASE_MAXLEN + 1]; char database[MYSQL_DATABASE_MAXLEN + 1];
const char *db = event_db; const char* err = NULL;
MXS_INFO("Create table: %.*s", len, sql); MXS_INFO("Create table: %.*s", len, sql);
if (!get_table_name(sql, table)) if (!statement_sql)
{ {
MXS_ERROR("Malformed CREATE TABLE statement, could not extract table name: %s", sql); err = "table definition";
return NULL; }
else if (!get_table_name(sql, table))
{
err = "table name";
} }
/** The CREATE statement contains the database name */ if (get_database_name(sql, database))
if (strlen(db) == 0)
{ {
if (!get_database_name(sql, database)) // The CREATE statement contains the database name
{
MXS_ERROR("Malformed CREATE TABLE statement, could not extract "
"database name: %s", sql);
return NULL;
}
db = database; db = database;
} }
else if (*db == '\0')
{
// No explicit or current database
err = "database name";
}
if (err)
{
MXS_ERROR("Malformed CREATE TABLE statement, could not extract %s: %.*s", err, len, sql);
return NULL;
}
int* lengths = NULL; int* lengths = NULL;
char **names = NULL; char **names = NULL;
@ -893,6 +911,27 @@ static void remove_extras(char* str)
ss_dassert(strlen(str) == len); ss_dassert(strlen(str) == len);
} }
static void remove_backticks(char* src)
{
char* dest = src;
while (*src)
{
if (*src != '`')
{
// Non-backtick character, keep it
*dest = *src;
dest++;
}
src++;
}
ss_dassert(dest == src || (*dest != '\0' && dest < src));
*dest = '\0';
}
/** /**
* Extract both tables from a `CREATE TABLE t1 LIKE t2` statement * Extract both tables from a `CREATE TABLE t1 LIKE t2` statement
*/ */
@ -1095,10 +1134,12 @@ static bool tok_eq(const char *a, const char *b, size_t len)
void read_alter_identifier(const char *sql, const char *end, char *dest, int size) void read_alter_identifier(const char *sql, const char *end, char *dest, int size)
{ {
int len = 0; int len = 0;
const char *tok = get_tok(sql, &len, end); const char *tok = get_tok(sql, &len, end); // ALTER
if (tok && (tok = get_tok(tok + len, &len, end)) && (tok = get_tok(tok + len, &len, end))) if (tok && (tok = get_tok(tok + len, &len, end)) // TABLE
&& (tok = get_tok(tok + len, &len, end))) // Table identifier
{ {
snprintf(dest, size, "%.*s", len, tok); snprintf(dest, size, "%.*s", len, tok);
remove_backticks(dest);
} }
} }
@ -1174,13 +1215,25 @@ bool table_create_alter(TABLE_CREATE *create, const char *sql, const char *end)
if (tok_eq(ptok, "add", plen) && tok_eq(tok, "column", len)) if (tok_eq(ptok, "add", plen) && tok_eq(tok, "column", len))
{ {
tok = get_tok(tok + len, &len, end); tok = get_tok(tok + len, &len, end);
char avro_token[len + 1];
make_avro_token(avro_token, tok, len);
bool is_new = true;
for (uint64_t i = 0; i < create->columns; i++)
{
if (strcmp(avro_token, create->column_names[i]) == 0)
{
is_new = false;
break;
}
}
if (is_new)
{
create->column_names = MXS_REALLOC(create->column_names, sizeof(char*) * (create->columns + 1)); create->column_names = MXS_REALLOC(create->column_names, sizeof(char*) * (create->columns + 1));
create->column_types = MXS_REALLOC(create->column_types, sizeof(char*) * (create->columns + 1)); create->column_types = MXS_REALLOC(create->column_types, sizeof(char*) * (create->columns + 1));
create->column_lengths = MXS_REALLOC(create->column_lengths, sizeof(int) * (create->columns + 1)); create->column_lengths = MXS_REALLOC(create->column_lengths, sizeof(int) * (create->columns + 1));
char avro_token[len + 1];
make_avro_token(avro_token, tok, len);
char field_type[200] = ""; // Enough to hold all types char field_type[200] = ""; // Enough to hold all types
int field_length = extract_type_length(tok + len, field_type); int field_length = extract_type_length(tok + len, field_type);
create->column_names[create->columns] = MXS_STRDUP_A(avro_token); create->column_names[create->columns] = MXS_STRDUP_A(avro_token);
@ -1188,6 +1241,7 @@ bool table_create_alter(TABLE_CREATE *create, const char *sql, const char *end)
create->column_lengths[create->columns] = field_length; create->column_lengths[create->columns] = field_length;
create->columns++; create->columns++;
updates++; updates++;
}
tok = get_next_def(tok, end); tok = get_next_def(tok, end);
len = 0; len = 0;
} }
@ -1393,20 +1447,3 @@ void table_map_free(TABLE_MAP *map)
MXS_FREE(map); MXS_FREE(map);
} }
} }
/**
* @brief Map a table to a different ID
*
* This updates the table ID that the @c TABLE_MAP object is assigned with
*
* @param ptr Pointer to the start of a table map event
* @param hdr_len Post-header length
* @param map Table map to remap
*/
void table_map_remap(uint8_t *ptr, uint8_t hdr_len, TABLE_MAP *map)
{
uint64_t table_id = 0;
size_t id_size = hdr_len == 6 ? 4 : 6;
memcpy(&table_id, ptr, id_size);
map->id = table_id;
}

View File

@ -328,7 +328,6 @@ extern char* json_new_schema_from_table(TABLE_MAP *map);
extern void save_avro_schema(const char *path, const char* schema, TABLE_MAP *map); extern void save_avro_schema(const char *path, const char* schema, TABLE_MAP *map);
extern bool handle_table_map_event(AVRO_INSTANCE *router, REP_HEADER *hdr, uint8_t *ptr); extern bool handle_table_map_event(AVRO_INSTANCE *router, REP_HEADER *hdr, uint8_t *ptr);
extern bool handle_row_event(AVRO_INSTANCE *router, REP_HEADER *hdr, uint8_t *ptr); extern bool handle_row_event(AVRO_INSTANCE *router, REP_HEADER *hdr, uint8_t *ptr);
extern void table_map_remap(uint8_t *ptr, uint8_t hdr_len, TABLE_MAP *map);
enum avrorouter_file_op enum avrorouter_file_op
{ {

View File

@ -229,6 +229,7 @@ static void blr_start_master(void* data)
return; return;
} }
client->session = router->session; client->session = router->session;
client->service = router->service;
/** /**
* 'client' is the fake DCB that emulates a client session: * 'client' is the fake DCB that emulates a client session:
@ -265,6 +266,7 @@ static void blr_start_master(void* data)
return; return;
} }
router->master->remote = MXS_STRDUP_A(router->service->dbref->server->name); router->master->remote = MXS_STRDUP_A(router->service->dbref->server->name);
router->master->service = router->service;
MXS_NOTICE("%s: attempting to connect to master" MXS_NOTICE("%s: attempting to connect to master"
" server [%s]:%d, binlog='%s', pos=%lu%s%s", " server [%s]:%d, binlog='%s', pos=%lu%s%s",