Merge branch '2.1' into develop

Markus Mäkelä 2017-04-05 11:34:59 +03:00
commit ad1c05b015
51 changed files with 1133 additions and 450 deletions

View File

@ -4,6 +4,12 @@ This document lists known issues and limitations in MariaDB MaxScale and its
plugins. Since limitations are related to specific plugins, this document is
divided into several sections.
## Configuration limitations
In versions 2.1.2 and earlier, the configuration files are limited to 1024
characters per line. This limit was increased to 16384 characters in
MaxScale 2.1.3.
## Protocol limitations
### Limitations with MySQL Protocol support (MySQLClient)

View File

@ -18,8 +18,10 @@
* Prepared statements are now filtered by the database firewall exactly like
non-prepared statements.
* The firewall filter can now filter based on function usage.
* MaxScale now supports IPv6
For more details, please refer to:
* [MariaDB MaxScale 2.1.2 Release Notes](Release-Notes/MaxScale-2.1.2-Release-Notes.md)
* [MariaDB MaxScale 2.1.1 Release Notes](Release-Notes/MaxScale-2.1.1-Release-Notes.md)
* [MariaDB MaxScale 2.1.0 Release Notes](Release-Notes/MaxScale-2.1.0-Release-Notes.md)

View File

@ -1,11 +1,19 @@
# Database Firewall filter
## Overview
The database firewall filter is used to block queries that match a set of rules. It can be used to prevent harmful queries from reaching the backend database instances or to limit access to the database based on a more flexible set of rules compared to the traditional GRANT-based privilege system. Currently the filter does not support multi-statements.
The database firewall filter is used to block queries that match a set of
rules. It can be used to prevent harmful queries from reaching the backend
database instances or to limit access to the database based on a more flexible
set of rules compared to the traditional GRANT-based privilege system. Currently
the filter does not support multi-statements.
## Configuration
The database firewall filter only requires minimal configuration in the maxscale.cnf file. The actual rules of the database firewall filter are located in a separate text file. The following is an example of a database firewall filter configuration in maxscale.cnf.
The database firewall filter only requires minimal configuration in the
maxscale.cnf file. The actual rules of the database firewall filter are located
in a separate text file. The following is an example of a database firewall
filter configuration in maxscale.cnf.
```
[DatabaseFirewall]
@ -51,11 +59,11 @@ set to `allow`:
- COM_PROCESS_KILL: Alias for `KILL <id>;` query
- COM_PROCESS_INFO: Alias for `SHOW PROCESSLIST;`
You can have both blacklist and whitelist functionality by configuring one filter
with `action=allow` and another one with `action=block`. You can then use
different rule files with each filter, one for blacklisting and another one
for whitelisting. After this you only have to add both of these filters
to a service in the following way.
You can have both blacklist and whitelist functionality by configuring one
filter with `action=allow` and another one with `action=block`. You can then use
different rule files with each filter, one for blacklisting and another one for
whitelisting. After this you only have to add both of these filters to a service
in the following way.
```
[my-firewall-service]
@ -81,10 +89,10 @@ rules=/home/user/blacklist-rules.txt
#### `log_match`
Log all queries that match a rule. For the `any` matching mode, the name of
the rule that matched is logged and for other matching modes, the name of
the last matching rule is logged. In addition to the rule name the matched
user and the query itself is logged. The log messages are logged at the notice level.
Log all queries that match a rule. For the `any` matching mode, the name of the
rule that matched is logged and for other matching modes, the name of the last
matching rule is logged. In addition to the rule name, the matched user and the
query itself are logged. The log messages are logged at the notice level.
#### `log_no_match`
@ -120,7 +128,9 @@ parameter (_allow_, _block_ or _ignore_).
### Mandatory rule parameters
The database firewall filter's rules expect a single mandatory parameter for a rule. You can define multiple rules to cover situations where you would like to apply multiple mandatory rules to a query.
The database firewall filter's rules expect a single mandatory parameter for a
rule. You can define multiple rules to cover situations where you would like to
apply multiple mandatory rules to a query.
#### `wildcard`
@ -128,41 +138,58 @@ This rule blocks all queries that use the wildcard character *.
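For illustration, a minimal rule using this check could look like the following
sketch; the rule name `no_wildcard` is just a hypothetical choice.
```
rule no_wildcard deny wildcard
```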
#### `columns`
This rule expects a list of values after the `columns` keyword. These values are interpreted as column names and if a query targets any of these, it is blocked.
This rule expects a list of values after the `columns` keyword. These values are
interpreted as column names and if a query targets any of these, it is matched.
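As a sketch, a rule that matches queries touching a few sensitive columns could
be written as follows; the rule name and column names are hypothetical.
```
rule protect_personal_data deny columns ssn salary address
```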
#### `function`
This rule expects a list of values after the `function` keyword. These values
are interpreted as function names and if a query uses any of these, it is
blocked. The symbolic comparison operators (`<`, `>`, `>=` etc.) are also
matched. The symbolic comparison operators (`<`, `>`, `>=` etc.) are also
considered functions whereas the text versions (`NOT`, `IS`, `IS NOT` etc.) are
not considered functions.
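For example, a hypothetical rule listing the function names to match could look
like this; the chosen functions are only illustrative.
```
rule block_file_functions deny function load_file benchmark
```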
#### `regex`
This rule blocks all queries matching a regex enclosed in single or double quotes.
The regex string expects a PCRE2 syntax regular expression. For more information
about the PCRE2 syntax, read the [PCRE2 documentation](http://www.pcre.org/current/doc/html/pcre2syntax.html).
This rule blocks all queries matching a regex enclosed in single or double
quotes. The regex string expects a PCRE2 syntax regular expression. For more
information about the PCRE2 syntax, read the [PCRE2
documentation](http://www.pcre.org/current/doc/html/pcre2syntax.html).
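As an illustrative sketch, a rule matching any query that mentions a
hypothetical `credit_cards` table could be written as:
```
rule block_credit_cards deny regex '.*from.*credit_cards.*'
```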
#### `limit_queries`
The limit_queries rule expects three parameters. The first parameter is the number of allowed queries during the time period. The second is the time period in seconds and the third is the amount of time for which the rule is considered active and blocking.
The limit_queries rule expects three parameters. The first parameter is the
number of allowed queries during the time period. The second is the time period
in seconds and the third is the amount of time for which the rule is considered
active and blocking.
**WARNING:** Using `limit_queries` in `action=allow` is not supported.
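For illustration, the following hypothetical rule would allow at most 5
matching queries within a 10 second window and, once that is exceeded, block
matching queries for 30 seconds.
```
rule query_rate deny limit_queries 5 10 30
```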
#### `no_where_clause`
This rule inspects the query and blocks it if it has no WHERE clause. For example, this would disallow a `DELETE FROM ...` query without a `WHERE` clause. This does not prevent wrongful usage of the `WHERE` clause e.g. `DELETE FROM ... WHERE 1=1`.
This rule inspects the query and blocks it if it has no WHERE clause. For
example, this would disallow a `DELETE FROM ...` query without a `WHERE`
clause. This does not prevent wrongful usage of the `WHERE` clause e.g. `DELETE
FROM ... WHERE 1=1`.
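A minimal sketch of such a rule (the name is hypothetical) is shown below; Use
Case 2 later in this document shows how to combine it with `on_queries delete`.
```
rule require_where deny no_where_clause
```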
### Optional rule parameters
Each mandatory rule accepts one or more optional parameters. These are to be defined after the mandatory part of the rule.
Each mandatory rule accepts one or more optional parameters. These are to be
defined after the mandatory part of the rule.
#### `at_times`
This rule expects a list of time ranges that define the times when the rule in question is active. The time formats are expected to be ISO-8601 compliant and to be separated by a single dash (the - character). For example, to define the active period of a rule to be 5pm to 7pm, you would include `at times 17:00:00-19:00:00` in the rule definition. The rule uses local time to check if the rule is active and has a precision of one second.
This rule expects a list of time ranges that define the times when the rule in
question is active. The time formats are expected to be ISO-8601 compliant and
to be separated by a single dash (the - character). For example, to define the
active period of a rule to be 5pm to 7pm, you would include `at times
17:00:00-19:00:00` in the rule definition. The rule uses local time to check if
the rule is active and has a precision of one second.
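As an example sketch, the following hypothetical rule applies the wildcard
check only between 5pm and 7pm local time, assuming the keyword used in the
rule file is `at_times` as in the heading above.
```
rule wildcard_evenings deny wildcard at_times 17:00:00-19:00:00
```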
#### `on_queries`
This limits the rule to be active only on certain types of queries. The possible values are:
This limits the rule to be active only on certain types of queries. The possible
values are:
|Keyword|Matching operations |
|-------|------------------------------|
@ -184,17 +211,35 @@ The `users` directive defines the users to which the rule should be applied.
`users NAME... match { any | all | strict_all } rules RULE...`
The first keyword is `users`, which identifies this line as a user definition line.
The first keyword is `users`, which identifies this line as a user definition
line.
The second component is a list of user names and network addresses in the format *`user`*`@`*`0.0.0.0`*. The first part is the user name and the second part is the network address. You can use the `%` character as the wildcard to enable user name matching from any address or network matching for all users. After the list of users and networks the keyword match is expected.
The second component is a list of user names and network addresses in the format
*`user`*`@`*`0.0.0.0`*. The first part is the user name and the second part is
the network address. You can use the `%` character as the wildcard to enable
user name matching from any address or network matching for all users. After the
list of users and networks the keyword match is expected.
After this either the keyword `any` `all` or `strict_all` is expected. This defined how the rules are matched. If `any` is used when the first rule is matched the query is considered blocked and the rest of the rules are skipped. If instead the `all` keyword is used all rules must match for the query to be blocked. The `strict_all` is the same as `all` but it checks the rules from left to right in the order they were listed. If one of these does not match, the rest of the rules are not checked. This could be useful in situations where you would for example combine `limit_queries` and `regex` rules. By using `strict_all` you can have the `regex` rule first and the `limit_queries` rule second. This way the rule only matches if the `regex` rule matches enough times for the `limit_queries` rule to match.
After this either the keyword `any`, `all` or `strict_all` is expected. This
defines how the rules are matched. If `any` is used, the query is considered as
matched when the first rule matches and the rest of the rules are skipped. If
instead the `all` keyword is used, all rules must match for the query to be
considered as matched. The `strict_all` mode is the same as `all` but it checks the rules
from left to right in the order they were listed. If one of these does not
match, the rest of the rules are not checked. This could be useful in situations
where you would for example combine `limit_queries` and `regex` rules. By using
`strict_all` you can have the `regex` rule first and the `limit_queries` rule
second. This way the rule only matches if the `regex` rule matches enough times
for the `limit_queries` rule to match.
After the matching part comes the rules keyword after which a list of rule names is expected. This allows reusing of the rules and enables varying levels of query restriction.
After the matching part comes the `rules` keyword, after which a list of rule
names is expected. This allows rules to be reused and enables varying levels of
query restriction.
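Putting the pieces together, a hypothetical user definition line that applies
two of the rules sketched above to a user named `appuser` connecting from any
address could look like this:
```
users appuser@% match any rules protect_personal_data block_credit_cards
```
Here `any` means that matching either rule is enough for the query to be
considered as matched.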
## Module commands
Read [Module Commands](../Reference/Module-Commands.md) documentation for details about module commands.
Read [Module Commands](../Reference/Module-Commands.md) documentation for
details about module commands.
The dbfwfilter supports the following module commands.
@ -213,16 +258,23 @@ Shows the current statistics of the rules.
### Use Case 1 - Prevent rapid execution of specific queries
To prevent the excessive use of a database we want to set a limit on the rate of queries. We only want to apply this limit to certain queries that cause unwanted behavior. To achieve this we can use a regular expression.
To prevent the excessive use of a database we want to set a limit on the rate of
queries. We only want to apply this limit to certain queries that cause unwanted
behavior. To achieve this we can use a regular expression.
First we define the limit on the rate of queries. The first parameter for the rule sets the number of allowed queries to 10 queries and the second parameter sets the rate of sampling to 5 seconds. If a user executes queries faster than this, any further queries that match the regular expression are blocked for 60 seconds.
First we define the limit on the rate of queries. The first parameter for the
rule sets the number of allowed queries to 10 queries and the second parameter
sets the rate of sampling to 5 seconds. If a user executes queries faster than
this, any further queries that match the regular expression are blocked for 60
seconds.
```
rule limit_rate_of_queries deny limit_queries 10 5 60
rule query_regex deny regex '.*select.*from.*user_data.*'
```
To apply these rules we combine them into a single rule by adding a `users` line to the rule file.
To apply these rules we combine them into a single rule by adding a `users` line
to the rule file.
```
users %@% match all rules limit_rate_of_queries query_regex
@ -230,16 +282,24 @@ users %@% match all rules limit_rate_of_queries query_regex
### Use Case 2 - Only allow deletes with a where clause
We have a table which contains all the managers of a company. We want to prevent accidental deletes into this table where the where clause is missing. This poses a problem, we don't want to require all the delete queries to have a where clause. We only want to prevent the data in the managers table from being deleted without a where clause.
We have a table which contains all the managers of a company. We want to prevent
accidental deletes from this table when the where clause is missing. This poses
a problem: we don't want to require all the delete queries to have a where
clause. We only want to prevent the data in the managers table from being
deleted without a where clause.
To achieve this, we need two rules. The first rule defines that all delete operations must have a where clause. This rule alone does us no good so we need a second one. The second rule blocks all queries that match a regular expression.
To achieve this, we need two rules. The first rule defines that all delete
operations must have a where clause. This rule alone does us no good so we need
a second one. The second rule blocks all queries that match a regular
expression.
```
rule safe_delete deny no_where_clause on_queries delete
rule managers_table deny regex '.*from.*managers.*'
```
When we combine these two rules we get the result we want. To combine these two rules add the following line to the rule file.
When we combine these two rules we get the result we want. To combine these two
rules add the following line to the rule file.
```
users %@% match all rules safe_delete managers_table

View File

@ -46,14 +46,38 @@ The default value is `-1`.
#### `max_resultset_size`
Specifies the maximum size a resultset can have, measured in kibibytes,
in order to be sent to the client. A resultset larger than this, will
Specifies the maximum size a resultset can have in order
to be sent to the client. A resultset larger than this will
not be sent: an empty resultset will be sent instead.
The size can be specified as described
[here](../Getting-Started/Configuration-Guide.md#sizes).
```
max_resultset_size=128
max_resultset_size=128Ki
```
The default value is 64Ki.
#### `max_resultset_return`
Specifies what the filter sends to the client when the rows or size limit is
hit. The possible values are:
- an empty result set
- an error packet with input SQL
- an OK packet
```
max_resultset_return=empty|error|ok
```
The default result type is 'empty'.
Example output with ERR packet:
```
MariaDB [(test)]> select * from test.t4;
ERROR 1415 (0A000): Row limit/size exceeded for query: select * from test.t4
```
The default value is 64.
#### `debug`

View File

@ -68,7 +68,7 @@ The following substitutions will be made to the parameter value:
For example, the previous example will be executed as:
```
/home/user/myscript.sh initiator=192.168.0.10:3306 event=master_down live_nodes=192.168.0.201:3306,192.168.0.121:3306
/home/user/myscript.sh initiator=[192.168.0.10]:3306 event=master_down live_nodes=[192.168.0.201]:3306,[192.168.0.121]:3306
```
### `events`
@ -83,19 +83,19 @@ events=master_down,slave_down
Here is a table of all the possible event types that the monitors can be called with, and their descriptions.
Event Name|Description
----------|----------
master_down|A Master server has gone down
master_up|A Master server has come up
slave_down|A Slave server has gone down
slave_up|A Slave server has come up
server_down|A server with no assigned role has gone down
server_up|A server with no assigned role has come up
ndb_down|A MySQL Cluster node has gone down
ndb_up|A MySQL Cluster node has come up
lost_master|A server lost Master status
lost_slave|A server lost Slave status
lost_ndb|A MySQL Cluster node lost node membership
new_master|A new Master was detected
new_slave|A new Slave was detected
new_ndb|A new MySQL Cluster node was found
Event Name |Description
------------|----------
master_down |A Master server has gone down
master_up |A Master server has come up
slave_down |A Slave server has gone down
slave_up |A Slave server has come up
server_down |A server with no assigned role has gone down
server_up |A server with no assigned role has come up
ndb_down |A MySQL Cluster node has gone down
ndb_up |A MySQL Cluster node has come up
lost_master |A server lost Master status
lost_slave |A server lost Slave status
lost_ndb |A MySQL Cluster node lost node membership
new_master |A new Master was detected
new_slave |A new Slave was detected
new_ndb |A new MySQL Cluster node was found

View File

@ -15,6 +15,25 @@ report at [Jira](https://jira.mariadb.org).
## Changed Features
### Formatting of IP Addresses and Ports
All messages that contain both the address and the port are now printed in an
IPv6 compatible format. The output uses the format defined in
[RFC 3986](https://www.ietf.org/rfc/rfc3986.txt) and
[STD 66](https://www.rfc-editor.org/std/std66.txt).
In practice this means that the address is enclosed by brackets. The port is
separated from the address by a colon. Here is an example of the new format:
```
[192.168.0.201]:3306
[fe80::fa16:54ff:fe8f:7e56]:3306
[localhost]:3306
```
The first is an IPv4 address, the second an IPv6 address and the last one is a
hostname. All of the addresses use port 3306.
### Cache
* The storage `storage_inmemory` is now the default, so the parameter

View File

@ -5,7 +5,7 @@
set(MAXSCALE_VERSION_MAJOR "2" CACHE STRING "Major version")
set(MAXSCALE_VERSION_MINOR "1" CACHE STRING "Minor version")
set(MAXSCALE_VERSION_PATCH "1" CACHE STRING "Patch version")
set(MAXSCALE_VERSION_PATCH "3" CACHE STRING "Patch version")
# This should only be incremented if a package is rebuilt
set(MAXSCALE_BUILD_NUMBER 1 CACHE STRING "Release number")

View File

@ -21,11 +21,7 @@
MXS_BEGIN_DECLS
/**
* Implementation of an atomic add operation for the GCC environment, or the
* X86 processor. If we are working within GNU C then we can use the GCC
* atomic add built in function, which is portable across platforms that
* implement GCC. Otherwise, this function currently supports only X86
* architecture (without further development).
* Implementation of atomic add operations for the GCC environment.
*
* Adds a value to the contents of a location pointed to by the first parameter.
* The add operation is atomic and the return value is the value stored in the
@ -36,7 +32,9 @@ MXS_BEGIN_DECLS
* @param value Value to be added
* @return The value of variable before the add occurred
*/
int atomic_add(int *variable, int value);
int atomic_add(int *variable, int value);
int64_t atomic_add_int64(int64_t *variable, int64_t value);
uint64_t atomic_add_uint64(uint64_t *variable, int64_t value);
/**
* @brief Impose a full memory barrier

View File

@ -74,6 +74,7 @@ MXS_BEGIN_DECLS
#define MYSQL_CHECKSUM_LEN 4
#define MYSQL_EOF_PACKET_LEN 9
#define MYSQL_OK_PACKET_MIN_LEN 11
#define MYSQL_ERR_PACKET_MIN_LEN 9
/**
* Offsets and sizes of various parts of the client packet. If the offset is

View File

@ -17,17 +17,17 @@
* @file atomic.c - Implementation of atomic operations for MaxScale
*/
int
atomic_add(int *variable, int value)
int atomic_add(int *variable, int value)
{
#ifdef __GNUC__
return (int) __sync_fetch_and_add (variable, value);
#else
asm volatile(
"lock; xaddl %%eax, %2;"
:"=a" (value)
: "a" (value), "m" (*variable)
: "memory" );
return value;
#endif
return __sync_fetch_and_add(variable, value);
}
int64_t atomic_add_int64(int64_t *variable, int64_t value)
{
return __sync_fetch_and_add(variable, value);
}
uint64_t atomic_add_uint64(uint64_t *variable, int64_t value)
{
return __sync_fetch_and_add(variable, value);
}

View File

@ -337,7 +337,7 @@ GWBUF* gwbuf_clone(GWBUF* buf)
while (clonebuf && buf->next)
{
buf = buf->next;
clonebuf->next = gwbuf_clone(buf);
clonebuf->next = gwbuf_clone_one(buf);
clonebuf = clonebuf->next;
}
@ -638,7 +638,7 @@ gwbuf_rtrim(GWBUF *head, unsigned int n_bytes)
if (GWBUF_EMPTY(head))
{
rval = head->next;
gwbuf_free(head);
gwbuf_free_one(head);
}
return rval;
}

View File

@ -293,12 +293,14 @@ char* config_clean_string_list(const char* str)
const char *replace = "$1,";
int rval = 0;
size_t destsize_tmp = destsize;
while ((rval = pcre2_substitute(re, (PCRE2_SPTR) str, PCRE2_ZERO_TERMINATED, 0,
PCRE2_SUBSTITUTE_GLOBAL, data, NULL,
(PCRE2_SPTR) replace, PCRE2_ZERO_TERMINATED,
(PCRE2_UCHAR*) dest, &destsize)) == PCRE2_ERROR_NOMEMORY)
(PCRE2_UCHAR*) dest, &destsize_tmp)) == PCRE2_ERROR_NOMEMORY)
{
char* tmp = (char*)MXS_REALLOC(dest, destsize * 2);
destsize_tmp = 2 * destsize;
char* tmp = (char*)MXS_REALLOC(dest, destsize_tmp);
if (tmp == NULL)
{
MXS_FREE(dest);
@ -306,7 +308,7 @@ char* config_clean_string_list(const char* str)
break;
}
dest = tmp;
destsize *= 2;
destsize = destsize_tmp;
}
/** Remove the trailing comma */

View File

@ -472,7 +472,7 @@ bool runtime_create_listener(SERVICE *service, const char *name, const char *add
if (addr == NULL || strcasecmp(addr, "default") == 0)
{
addr = "0.0.0.0";
addr = "::";
}
if (port == NULL || strcasecmp(port, "default") == 0)
{
@ -519,7 +519,7 @@ bool runtime_create_listener(SERVICE *service, const char *name, const char *add
if (rval)
{
const char *print_addr = addr ? addr : "0.0.0.0";
const char *print_addr = addr ? addr : "::";
SERV_LISTENER *listener = serviceCreateListener(service, name, proto, addr,
u_port, auth, auth_opt, ssl);

View File

@ -695,7 +695,7 @@ dcb_connect(SERVER *server, MXS_SESSION *session, const char *protocol)
if (fd == DCBFD_CLOSED)
{
MXS_DEBUG("%lu [dcb_connect] Failed to connect to server %s:%d, "
MXS_DEBUG("%lu [dcb_connect] Failed to connect to server [%s]:%d, "
"from backend dcb %p, client dcp %p fd %d.",
pthread_self(),
server->name,
@ -709,7 +709,7 @@ dcb_connect(SERVER *server, MXS_SESSION *session, const char *protocol)
}
else
{
MXS_DEBUG("%lu [dcb_connect] Connected to server %s:%d, "
MXS_DEBUG("%lu [dcb_connect] Connected to server [%s]:%d, "
"from backend dcb %p, client dcp %p fd %d.",
pthread_self(),
server->name,
@ -2886,13 +2886,13 @@ int dcb_listen(DCB *listener, const char *config, const char *protocol_name)
*/
if (listen(listener_socket, INT_MAX) != 0)
{
MXS_ERROR("Failed to start listening on '%s' with protocol '%s': %d, %s",
config, protocol_name, errno, mxs_strerror(errno));
MXS_ERROR("Failed to start listening on '[%s]:%u' with protocol '%s': %d, %s",
host, port, protocol_name, errno, mxs_strerror(errno));
close(listener_socket);
return -1;
}
MXS_NOTICE("Listening for connections at %s with protocol %s", config, protocol_name);
MXS_NOTICE("Listening for connections at [%s]:%u with protocol %s", host, port, protocol_name);
// assign listener_socket to dcb
listener->fd = listener_socket;

View File

@ -127,7 +127,7 @@ bool runtime_alter_monitor(MXS_MONITOR *monitor, char *key, char *value);
*
* @param service Service where the listener is added
* @param name Name of the listener
* @param addr Listening address, NULL for default of 0.0.0.0
* @param addr Listening address, NULL for default of ::
* @param port Listening port, NULL for default of 3306
* @param proto Listener protocol, NULL for default of "MySQLClient"
* @param auth Listener authenticator, NULL for protocol default authenticator

View File

@ -41,7 +41,7 @@
* @param subject Subject string
* @param replace Replacement string
* @param dest Destination buffer
* @param size Size of the desination buffer
* @param size Size of the destination buffer
* @return MXS_PCRE2_MATCH if replacements were made, MXS_PCRE2_NOMATCH if nothing
* was replaced or MXS_PCRE2_ERROR if memory reallocation failed
*/
@ -54,18 +54,20 @@ mxs_pcre2_result_t mxs_pcre2_substitute(pcre2_code *re, const char *subject, con
if (mdata)
{
size_t size_tmp = *size;
while ((rc = pcre2_substitute(re, (PCRE2_SPTR) subject, PCRE2_ZERO_TERMINATED, 0,
PCRE2_SUBSTITUTE_GLOBAL, mdata, NULL,
(PCRE2_SPTR) replace, PCRE2_ZERO_TERMINATED,
(PCRE2_UCHAR*) *dest, size)) == PCRE2_ERROR_NOMEMORY)
(PCRE2_UCHAR*) *dest, &size_tmp)) == PCRE2_ERROR_NOMEMORY)
{
char *tmp = (char*)MXS_REALLOC(*dest, *size * 2);
size_tmp = 2 * (*size);
char *tmp = (char*)MXS_REALLOC(*dest, size_tmp);
if (tmp == NULL)
{
break;
}
*dest = tmp;
*size *= 2;
*size = size_tmp;
}
if (rc > 0)

View File

@ -469,7 +469,7 @@ monitorShow(DCB *dcb, MXS_MONITOR *monitor)
for (MXS_MONITOR_SERVERS *db = monitor->databases; db; db = db->next)
{
dcb_printf(dcb, "%s%s:%d", sep, db->server->name, db->server->port);
dcb_printf(dcb, "%s[%s]:%d", sep, db->server->name, db->server->port);
sep = ", ";
}
@ -691,7 +691,7 @@ bool check_monitor_permissions(MXS_MONITOR* monitor, const char* query)
{
if (mon_ping_or_connect_to_db(monitor, mondb) != MONITOR_CONN_OK)
{
MXS_ERROR("[%s] Failed to connect to server '%s' (%s:%d) when"
MXS_ERROR("[%s] Failed to connect to server '%s' ([%s]:%d) when"
" checking monitor user credentials and permissions: %s",
monitor->name, mondb->server->unique_name, mondb->server->name,
mondb->server->port, mysql_error(mondb->con));
@ -971,7 +971,7 @@ static void mon_append_node_names(MXS_MONITOR_SERVERS* servers, char* dest, int
{
if (status == 0 || servers->server->status & status)
{
snprintf(arr, sizeof(arr), "%s%s:%d", separator, servers->server->name,
snprintf(arr, sizeof(arr), "%s[%s]:%d", separator, servers->server->name,
servers->server->port);
separator = ",";
int arrlen = strlen(arr);
@ -1055,7 +1055,7 @@ monitor_launch_script(MXS_MONITOR* mon, MXS_MONITOR_SERVERS* ptr, const char* sc
if (externcmd_matches(cmd, "$INITIATOR"))
{
char initiator[strlen(ptr->server->name) + 24]; // Extra space for port
snprintf(initiator, sizeof(initiator), "%s:%d", ptr->server->name, ptr->server->port);
snprintf(initiator, sizeof(initiator), "[%s]:%d", ptr->server->name, ptr->server->port);
externcmd_substitute_arg(cmd, "[$]INITIATOR", initiator);
}
@ -1228,8 +1228,8 @@ void
mon_log_connect_error(MXS_MONITOR_SERVERS* database, mxs_connect_result_t rval)
{
MXS_ERROR(rval == MONITOR_CONN_TIMEOUT ?
"Monitor timed out when connecting to server %s:%d : \"%s\"" :
"Monitor was unable to connect to server %s:%d : \"%s\"",
"Monitor timed out when connecting to server [%s]:%d : \"%s\"" :
"Monitor was unable to connect to server [%s]:%d : \"%s\"",
database->server->name, database->server->port,
mysql_error(database->con));
}

View File

@ -315,7 +315,7 @@ serviceStartPort(SERVICE *service, SERV_LISTENER *port)
}
else
{
sprintf(config_bind, "0.0.0.0|%d", port->port);
sprintf(config_bind, "::|%d", port->port);
}
/** Load the authentication users before before starting the listener */
@ -1339,7 +1339,7 @@ printService(SERVICE *service)
printf("\tBackend databases\n");
while (ptr)
{
printf("\t\t%s:%d Protocol: %s\n", ptr->server->name, ptr->server->port, ptr->server->protocol);
printf("\t\t[%s]:%d Protocol: %s\n", ptr->server->name, ptr->server->port, ptr->server->protocol);
ptr = ptr->next;
}
if (service->n_filters)
@ -1452,7 +1452,7 @@ void dprintService(DCB *dcb, SERVICE *service)
{
if (SERVER_REF_IS_ACTIVE(server))
{
dcb_printf(dcb, "\t\t%s:%d Protocol: %s Name: %s\n",
dcb_printf(dcb, "\t\t[%s]:%d Protocol: %s Name: %s\n",
server->server->name, server->server->port,
server->server->protocol, server->server->unique_name);
}

View File

@ -85,6 +85,8 @@ GWBUF* create_test_buffer()
total += buffers[i];
}
MXS_FREE(data);
return head;
}
@ -139,15 +141,14 @@ void copy_buffer(int n, int offset)
ss_info_dassert(gwbuf_copy_data(buffer, 0, cutoff, dest) == cutoff, "All bytes should be read");
ss_info_dassert(memcmp(data, dest, sizeof(dest)) == 0, "Data should be OK");
gwbuf_free(buffer);
MXS_FREE(data);
}
/** gwbuf_split test - These tests assume allocation will always succeed */
void test_split()
{
size_t headsize = 10;
GWBUF* head = gwbuf_alloc(headsize);
size_t tailsize = 20;
GWBUF* tail = gwbuf_alloc(tailsize);
GWBUF* oldchain = gwbuf_append(gwbuf_alloc(headsize), gwbuf_alloc(tailsize));
ss_info_dassert(gwbuf_length(oldchain) == headsize + tailsize, "Allocated buffer should be 30 bytes");
@ -178,6 +179,7 @@ void test_split()
ss_info_dassert(newchain, "New chain should be non-NULL");
ss_info_dassert(gwbuf_length(newchain) == headsize + tailsize, "New chain should be 30 bytes long");
ss_info_dassert(oldchain == NULL, "Old chain should be NULL");
gwbuf_free(newchain);
/** Splitting of contiguous memory */
GWBUF* buffer = gwbuf_alloc(10);
@ -189,6 +191,8 @@ void test_split()
ss_info_dassert(newbuf->tail == newbuf, "New buffer's tail should point to itself");
ss_info_dassert(buffer->next == NULL, "Old buffer's next pointer should be NULL");
ss_info_dassert(newbuf->next == NULL, "New buffer's next pointer should be NULL");
gwbuf_free(buffer);
gwbuf_free(newbuf);
/** Bad parameter tests */
GWBUF* ptr = NULL;
@ -198,7 +202,6 @@ void test_split()
ss_info_dassert(gwbuf_split(&buffer, 0) == NULL, "gwbuf_split with length of 0 should return NULL");
ss_info_dassert(gwbuf_length(buffer) == 10, "Buffer should be 10 bytes");
gwbuf_free(buffer);
gwbuf_free(newbuf);
/** Splitting near buffer boudaries */
for (int i = 0; i < n_buffers - 1; i++)
@ -386,6 +389,56 @@ void test_compare()
ss_dassert(gwbuf_compare(lhs, rhs) == 0);
ss_dassert(gwbuf_compare(rhs, lhs) == 0);
gwbuf_free(lhs);
gwbuf_free(rhs);
}
void test_clone()
{
GWBUF* original = gwbuf_alloc_and_load(1, "1");
original = gwbuf_append(original, gwbuf_alloc_and_load(1, "1"));
original = gwbuf_append(original, gwbuf_alloc_and_load(2, "12"));
original = gwbuf_append(original, gwbuf_alloc_and_load(3, "123"));
original = gwbuf_append(original, gwbuf_alloc_and_load(5, "12345"));
original = gwbuf_append(original, gwbuf_alloc_and_load(8, "12345678"));
original = gwbuf_append(original, gwbuf_alloc_and_load(13, "1234567890123"));
original = gwbuf_append(original, gwbuf_alloc_and_load(21, "123456789012345678901"));
GWBUF* clone = gwbuf_clone(original);
GWBUF* o = original;
GWBUF* c = clone;
ss_dassert(gwbuf_length(o) == gwbuf_length(c));
while (o)
{
ss_dassert(c);
ss_dassert(GWBUF_LENGTH(o) == GWBUF_LENGTH(c));
const char* i = (char*)GWBUF_DATA(o);
const char* end = i + GWBUF_LENGTH(o);
const char* j = (char*)GWBUF_DATA(c);
while (i != end)
{
ss_dassert(*i == *j);
++i;
++j;
}
o = o->next;
c = c->next;
}
ss_dassert(c == NULL);
gwbuf_free(clone);
gwbuf_free(original);
}
/**
@ -484,7 +537,7 @@ test1()
ss_info_dassert(100000 == buflen, "Incorrect buffer size");
ss_info_dassert(buffer == extra, "The buffer pointer should now point to the extra buffer");
ss_dfprintf(stderr, "\t..done\n");
gwbuf_free(buffer);
/** gwbuf_clone_all test */
size_t headsize = 10;
GWBUF* head = gwbuf_alloc(headsize);
@ -501,11 +554,14 @@ test1()
ss_info_dassert(GWBUF_LENGTH(all_clones) == headsize, "First buffer should be 10 bytes");
ss_info_dassert(GWBUF_LENGTH(all_clones->next) == tailsize, "Second buffer should be 20 bytes");
ss_info_dassert(gwbuf_length(all_clones) == headsize + tailsize, "Total buffer length should be 30 bytes");
gwbuf_free(all_clones);
gwbuf_free(head);
test_split();
test_load_and_copy();
test_consume();
test_compare();
test_clone();
return 0;
}

View File

@ -113,6 +113,7 @@ bool run(const MXS_LOG_THROTTLING& throttling, int priority, size_t n_generate,
in.seekg(0, ios_base::end);
THREAD_ARG args[N_THREADS];
pthread_t tids[N_THREADS];
// Create the threads.
for (size_t i = 0; i < N_THREADS; ++i)
@ -122,8 +123,7 @@ bool run(const MXS_LOG_THROTTLING& throttling, int priority, size_t n_generate,
parg->n_generate = n_generate;
parg->priority = priority;
pthread_t tid;
int rc = pthread_create(&tid, 0, thread_main, parg);
int rc = pthread_create(&tids[i], 0, thread_main, parg);
ensure(rc == 0);
}
@ -145,6 +145,12 @@ bool run(const MXS_LOG_THROTTLING& throttling, int priority, size_t n_generate,
mxs_log_flush_sync();
for (size_t i = 0; i < N_THREADS; ++i)
{
void* rv;
pthread_join(tids[i], &rv);
}
return check_messages(in, n_expect);
}

View File

@ -89,11 +89,19 @@ static int test2()
test_assert(result == MXS_PCRE2_MATCH, "Substitution should substitute");
test_assert(strcmp(dest, expected) == 0, "Replaced text should match expected text");
size = 1000;
dest = MXS_REALLOC(dest, size);
result = mxs_pcre2_substitute(re2, subject, good_replace, &dest, &size);
test_assert(result == MXS_PCRE2_NOMATCH, "Non-matching substitution should not substitute");
size = 1000;
dest = MXS_REALLOC(dest, size);
result = mxs_pcre2_substitute(re, subject, bad_replace, &dest, &size);
test_assert(result == MXS_PCRE2_ERROR, "Bad substitution should return an error");
MXS_FREE(dest);
pcre2_code_free(re);
pcre2_code_free(re2);
return 0;
}

View File

@ -545,7 +545,7 @@ strip_escape_chars(char* val)
return true;
}
#define BUFFER_GROWTH_RATE 1.2
#define BUFFER_GROWTH_RATE 2.0
static pcre2_code* remove_comments_re = NULL;
static const PCRE2_SPTR remove_comments_pattern = (PCRE2_SPTR)
"(?:`[^`]*`\\K)|(\\/[*](?!(M?!)).*?[*]\\/)|(?:#.*|--[[:space:]].*)";
@ -573,18 +573,19 @@ char* remove_mysql_comments(const char** src, const size_t* srcsize, char** dest
char* output = *dest;
size_t orig_len = *srcsize;
size_t len = output ? *destsize : orig_len;
if (orig_len > 0)
{
if ((output || (output = (char*) malloc(len * sizeof (char)))) &&
(mdata = pcre2_match_data_create_from_pattern(remove_comments_re, NULL)))
{
size_t len_tmp = len;
while (pcre2_substitute(remove_comments_re, (PCRE2_SPTR) * src, orig_len, 0,
PCRE2_SUBSTITUTE_GLOBAL, mdata, NULL,
replace, PCRE2_ZERO_TERMINATED,
(PCRE2_UCHAR8*) output, &len) == PCRE2_ERROR_NOMEMORY)
(PCRE2_UCHAR8*) output, &len_tmp) == PCRE2_ERROR_NOMEMORY)
{
char* tmp = (char*) realloc(output, (len = (size_t) (len * BUFFER_GROWTH_RATE + 1)));
len_tmp = (size_t) (len * BUFFER_GROWTH_RATE + 1);
char* tmp = (char*) realloc(output, len_tmp);
if (tmp == NULL)
{
free(output);
@ -592,6 +593,7 @@ char* remove_mysql_comments(const char** src, const size_t* srcsize, char** dest
break;
}
output = tmp;
len = len_tmp;
}
pcre2_match_data_free(mdata);
}
@ -645,12 +647,14 @@ char* replace_values(const char** src, const size_t* srcsize, char** dest, size_
if ((output || (output = (char*) malloc(len * sizeof (char)))) &&
(mdata = pcre2_match_data_create_from_pattern(replace_values_re, NULL)))
{
size_t len_tmp = len;
while (pcre2_substitute(replace_values_re, (PCRE2_SPTR) * src, orig_len, 0,
PCRE2_SUBSTITUTE_GLOBAL, mdata, NULL,
replace, PCRE2_ZERO_TERMINATED,
(PCRE2_UCHAR8*) output, &len) == PCRE2_ERROR_NOMEMORY)
(PCRE2_UCHAR8*) output, &len_tmp) == PCRE2_ERROR_NOMEMORY)
{
char* tmp = (char*) realloc(output, (len = (size_t) (len * BUFFER_GROWTH_RATE + 1)));
len_tmp = (size_t) (len * BUFFER_GROWTH_RATE + 1);
char* tmp = (char*) realloc(output, len_tmp);
if (tmp == NULL)
{
free(output);
@ -658,6 +662,7 @@ char* replace_values(const char** src, const size_t* srcsize, char** dest, size_
break;
}
output = tmp;
len = len_tmp;
}
pcre2_match_data_free(mdata);
}
@ -796,12 +801,14 @@ char* replace_quoted(const char** src, const size_t* srcsize, char** dest, size_
if ((output || (output = (char*) malloc(len * sizeof (char)))) &&
(mdata = pcre2_match_data_create_from_pattern(replace_quoted_re, NULL)))
{
size_t len_tmp = len;
while (pcre2_substitute(replace_quoted_re, (PCRE2_SPTR) * src, orig_len, 0,
PCRE2_SUBSTITUTE_GLOBAL, mdata, NULL,
replace, PCRE2_ZERO_TERMINATED,
(PCRE2_UCHAR8*) output, &len) == PCRE2_ERROR_NOMEMORY)
(PCRE2_UCHAR8*) output, &len_tmp) == PCRE2_ERROR_NOMEMORY)
{
char* tmp = (char*) realloc(output, (len = (size_t) (len * BUFFER_GROWTH_RATE + 1)));
len_tmp = (size_t) (len * BUFFER_GROWTH_RATE + 1);
char* tmp = (char*) realloc(output, len_tmp);
if (tmp == NULL)
{
free(output);
@ -809,6 +816,7 @@ char* replace_quoted(const char** src, const size_t* srcsize, char** dest, size_
break;
}
output = tmp;
len = len_tmp;
}
pcre2_match_data_free(mdata);
}

View File

@ -1,2 +1,2 @@
add_definitions(-DINI_MAX_LINE=1024 -DINI_ALLOW_MULTILINE)
add_definitions(-DINI_MAX_LINE=16384 -DINI_USE_STACK=0 -DINI_ALLOW_MULTILINE)
add_library(inih ini.c)

View File

@ -202,6 +202,20 @@ int validate_mysql_user(sqlite3 *handle, DCB *dcb, MYSQL_session *session,
sqlite3_free(err);
}
/** Check for IPv6 mapped IPv4 address */
if (!res.ok && strchr(dcb->remote, ':') && strchr(dcb->remote, '.'))
{
const char *ipv4 = strrchr(dcb->remote, ':') + 1;
sprintf(sql, mysqlauth_validate_user_query, session->user, ipv4, ipv4,
session->db, session->db);
if (sqlite3_exec(handle, sql, auth_cb, &res, &err) != SQLITE_OK)
{
MXS_ERROR("Failed to execute auth query: %s", err);
sqlite3_free(err);
}
}
if (!res.ok)
{
/**
@ -494,7 +508,7 @@ static bool check_server_permissions(SERVICE *service, SERVER* server,
{
int my_errno = mysql_errno(mysql);
MXS_ERROR("[%s] Failed to connect to server '%s' (%s:%d) when"
MXS_ERROR("[%s] Failed to connect to server '%s' ([%s]:%d) when"
" checking authentication user credentials and permissions: %d %s",
service->name, server->unique_name, server->name, server->port,
my_errno, mysql_error(mysql));

View File

@ -350,7 +350,7 @@ mysql_auth_authenticate(DCB *dcb)
}
else if (dcb->service->log_auth_warnings)
{
MXS_WARNING("%s: login attempt for user '%s'@%s:%d, authentication failed.",
MXS_WARNING("%s: login attempt for user '%s'@[%s]:%d, authentication failed.",
dcb->service->name, client_data->user, dcb->remote, dcb_get_port(dcb));
if (is_localhost_address(&dcb->ip) &&
@ -608,12 +608,6 @@ static int mysql_auth_load_users(SERV_LISTENER *port)
int rc = MXS_AUTH_LOADUSERS_OK;
SERVICE *service = port->listener->service;
MYSQL_AUTH *instance = (MYSQL_AUTH*)port->auth_instance;
if (port->users == NULL && !check_service_permissions(port->service))
{
return MXS_AUTH_LOADUSERS_FATAL;
}
bool skip_local = false;
if (instance->handle == NULL)
@ -621,7 +615,8 @@ static int mysql_auth_load_users(SERV_LISTENER *port)
skip_local = true;
char path[PATH_MAX];
get_database_path(port, path, sizeof(path));
if (!open_instance_database(path, &instance->handle))
if (!check_service_permissions(port->service) ||
!open_instance_database(path, &instance->handle))
{
return MXS_AUTH_LOADUSERS_FATAL;
}
@ -631,8 +626,8 @@ static int mysql_auth_load_users(SERV_LISTENER *port)
if (loaded < 0)
{
MXS_ERROR("[%s] Unable to load users for listener %s listening at %s:%d.", service->name,
port->name, port->address ? port->address : "0.0.0.0", port->port);
MXS_ERROR("[%s] Unable to load users for listener %s listening at [%s]:%d.", service->name,
port->name, port->address ? port->address : "::", port->port);
if (instance->inject_service_user)
{

View File

@ -818,11 +818,15 @@ bool CacheFilterSession::should_consult_cache(GWBUF* pPacket)
if (qc_query_is_type(type_mask, QUERY_TYPE_BEGIN_TRX))
{
if (log_decisions())
{
zReason = "transaction start";
}
// When a transaction is started, we initially assume it is read-only.
m_is_read_only = true;
}
if (!session_trx_is_active(m_pSession))
else if (!session_trx_is_active(m_pSession))
{
if (log_decisions())
{

View File

@ -535,6 +535,7 @@ static CACHE_RULE *cache_rule_create_regexp(cache_rule_attribute_t attribute,
pcre2_jit_compile(code, PCRE2_JIT_COMPLETE);
int n_threads = config_threadcount();
ss_dassert(n_threads > 0);
pcre2_match_data **datas = alloc_match_datas(n_threads, code);

View File

@ -236,6 +236,9 @@ int main()
if (mxs_log_init(NULL, ".", MXS_LOG_TARGET_DEFAULT))
{
MXS_CONFIG* pConfig = config_get_global_options();
pConfig->n_threads = 1;
set_libdir(MXS_STRDUP_A("../../../../../query_classifier/qc_sqlite/"));
if (qc_setup("qc_sqlite", "") && qc_process_init(QC_INIT_BOTH))
{

View File

@ -27,7 +27,7 @@
%pure-parser
/** Prefix all functions */
%name-prefix "dbfw_yy"
%name-prefix="dbfw_yy"
/** The pure parser requires one extra parameter */
%parse-param {void* scanner}

View File

@ -46,17 +46,47 @@
#include <maxscale/debug.h>
#include "maxrows.h"
static MXS_FILTER *createInstance(const char *name, char **options, MXS_CONFIG_PARAMETER *);
static MXS_FILTER_SESSION *newSession(MXS_FILTER *instance, MXS_SESSION *session);
static void closeSession(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata);
static void freeSession(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata);
static void setDownstream(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, MXS_DOWNSTREAM *downstream);
static void setUpstream(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, MXS_UPSTREAM *upstream);
static int routeQuery(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, GWBUF *queue);
static int clientReply(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, GWBUF *queue);
static void diagnostics(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, DCB *dcb);
static MXS_FILTER *createInstance(const char *name,
char **options,
MXS_CONFIG_PARAMETER *);
static MXS_FILTER_SESSION *newSession(MXS_FILTER *instance,
MXS_SESSION *session);
static void closeSession(MXS_FILTER *instance,
MXS_FILTER_SESSION *sdata);
static void freeSession(MXS_FILTER *instance,
MXS_FILTER_SESSION *sdata);
static void setDownstream(MXS_FILTER *instance,
MXS_FILTER_SESSION *sdata,
MXS_DOWNSTREAM *downstream);
static void setUpstream(MXS_FILTER *instance,
MXS_FILTER_SESSION *sdata,
MXS_UPSTREAM *upstream);
static int routeQuery(MXS_FILTER *instance,
MXS_FILTER_SESSION *sdata,
GWBUF *queue);
static int clientReply(MXS_FILTER *instance,
MXS_FILTER_SESSION *sdata,
GWBUF *queue);
static void diagnostics(MXS_FILTER *instance,
MXS_FILTER_SESSION *sdata,
DCB *dcb);
static uint64_t getCapabilities(MXS_FILTER *instance);
enum maxrows_return_mode
{
MAXROWS_RETURN_EMPTY = 0,
MAXROWS_RETURN_ERR,
MAXROWS_RETURN_OK
};
static const MXS_ENUM_VALUE return_option_values[] =
{
{"empty", MAXROWS_RETURN_EMPTY},
{"error", MAXROWS_RETURN_ERR},
{"ok", MAXROWS_RETURN_OK},
{NULL}
};
/* Global symbols of the Module */
/**
@ -102,7 +132,7 @@ MXS_MODULE* MXS_CREATE_MODULE()
},
{
"max_resultset_size",
MXS_MODULE_PARAM_COUNT,
MXS_MODULE_PARAM_SIZE,
MAXROWS_DEFAULT_MAX_RESULTSET_SIZE
},
{
@ -110,6 +140,13 @@ MXS_MODULE* MXS_CREATE_MODULE()
MXS_MODULE_PARAM_COUNT,
MAXROWS_DEFAULT_DEBUG
},
{
"max_resultset_return",
MXS_MODULE_PARAM_ENUM,
"empty",
MXS_MODULE_OPT_ENUM_UNIQUE,
return_option_values
},
{MXS_END_MODULE_PARAMS}
}
};
@ -121,9 +158,10 @@ MXS_MODULE* MXS_CREATE_MODULE()
typedef struct maxrows_config
{
uint32_t max_resultset_rows;
uint32_t max_resultset_size;
uint32_t debug;
uint32_t max_resultset_rows;
uint32_t max_resultset_size;
uint32_t debug;
enum maxrows_return_mode m_return;
} MAXROWS_CONFIG;
typedef struct maxrows_instance
@ -155,7 +193,7 @@ static void maxrows_response_state_reset(MAXROWS_RESPONSE_STATE *state);
typedef struct maxrows_session_data
{
MAXROWS_INSTANCE *instance; /**< The maxrows instance the session is associated with. */
MAXROWS_INSTANCE *instance; /**< The maxrows instance the session is associated with. */
MXS_DOWNSTREAM down; /**< The previous filter or equivalent. */
MXS_UPSTREAM up; /**< The next filter or equivalent. */
MAXROWS_RESPONSE_STATE res; /**< The response state. */
@ -163,9 +201,11 @@ typedef struct maxrows_session_data
maxrows_session_state_t state;
bool large_packet; /**< Large packet (> 16MB)) indicator */
bool discard_resultset; /**< Discard resultset indicator */
GWBUF *input_sql; /**< Input query */
} MAXROWS_SESSION_DATA;
static MAXROWS_SESSION_DATA *maxrows_session_data_create(MAXROWS_INSTANCE *instance, MXS_SESSION *session);
static MAXROWS_SESSION_DATA *maxrows_session_data_create(MAXROWS_INSTANCE *instance,
MXS_SESSION *session);
static void maxrows_session_data_free(MAXROWS_SESSION_DATA *data);
static int handle_expecting_fields(MAXROWS_SESSION_DATA *csdata);
@ -173,10 +213,14 @@ static int handle_expecting_nothing(MAXROWS_SESSION_DATA *csdata);
static int handle_expecting_response(MAXROWS_SESSION_DATA *csdata);
static int handle_rows(MAXROWS_SESSION_DATA *csdata);
static int handle_ignoring_response(MAXROWS_SESSION_DATA *csdata);
static bool process_params(char **options, MXS_CONFIG_PARAMETER *params, MAXROWS_CONFIG* config);
static bool process_params(char **options,
MXS_CONFIG_PARAMETER *params,
MAXROWS_CONFIG* config);
static int send_upstream(MAXROWS_SESSION_DATA *csdata);
static int send_eof_upstream(MAXROWS_SESSION_DATA *csdata, size_t offset);
static int send_eof_upstream(MAXROWS_SESSION_DATA *csdata);
static int send_error_upstream(MAXROWS_SESSION_DATA *csdata);
static int send_maxrows_reply_limit(MAXROWS_SESSION_DATA *csdata);
/* API BEGIN */
@ -190,15 +234,22 @@ static int send_eof_upstream(MAXROWS_SESSION_DATA *csdata, size_t offset);
*
* @return The instance data for this new instance
*/
static MXS_FILTER *createInstance(const char *name, char **options, MXS_CONFIG_PARAMETER *params)
static MXS_FILTER *createInstance(const char *name,
char **options,
MXS_CONFIG_PARAMETER *params)
{
MAXROWS_INSTANCE *cinstance = MXS_CALLOC(1, sizeof(MAXROWS_INSTANCE));
if (cinstance)
{
cinstance->name = name;
cinstance->config.max_resultset_rows = config_get_integer(params, "max_resultset_rows");
cinstance->config.max_resultset_size = config_get_integer(params, "max_resultset_size");
cinstance->config.max_resultset_rows = config_get_integer(params,
"max_resultset_rows");
cinstance->config.max_resultset_size = config_get_size(params,
"max_resultset_size");
cinstance->config.m_return = config_get_enum(params,
"max_resultset_return",
return_option_values);
cinstance->config.debug = config_get_integer(params, "debug");
}
@ -284,7 +335,9 @@ static void setUpstream(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, MXS_UPS
* @param sdata The filter session data
* @param buffer Buffer containing an MySQL protocol packet.
*/
static int routeQuery(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, GWBUF *packet)
static int routeQuery(MXS_FILTER *instance,
MXS_FILTER_SESSION *sdata,
GWBUF *packet)
{
MAXROWS_INSTANCE *cinstance = (MAXROWS_INSTANCE*)instance;
MAXROWS_SESSION_DATA *csdata = (MAXROWS_SESSION_DATA*)sdata;
@ -294,7 +347,8 @@ static int routeQuery(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, GWBUF *pa
// All of these should be guaranteed by RCAP_TYPE_TRANSACTION_TRACKING
ss_dassert(GWBUF_IS_CONTIGUOUS(packet));
ss_dassert(GWBUF_LENGTH(packet) >= MYSQL_HEADER_LEN + 1);
ss_dassert(MYSQL_GET_PAYLOAD_LEN(data) + MYSQL_HEADER_LEN == GWBUF_LENGTH(packet));
ss_dassert(MYSQL_GET_PAYLOAD_LEN(data) +
MYSQL_HEADER_LEN == GWBUF_LENGTH(packet));
maxrows_response_state_reset(&csdata->res);
csdata->state = MAXROWS_IGNORING_RESPONSE;
@ -306,6 +360,23 @@ static int routeQuery(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, GWBUF *pa
case MYSQL_COM_QUERY:
case MYSQL_COM_STMT_EXECUTE:
{
/* Set input query only with MAXROWS_RETURN_ERR */
if (csdata->instance->config.m_return == MAXROWS_RETURN_ERR &&
(csdata->input_sql = gwbuf_clone(packet)) == NULL)
{
csdata->state = MAXROWS_EXPECTING_NOTHING;
/* Abort client connection on copy failure */
poll_fake_hangup_event(csdata->session->client_dcb);
gwbuf_free(csdata->res.data);
gwbuf_free(packet);
csdata->res.data = NULL;
packet = NULL;
MXS_FREE(csdata);
csdata = NULL;
return 0;
}
csdata->state = MAXROWS_EXPECTING_RESPONSE;
break;
}
@ -319,7 +390,9 @@ static int routeQuery(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, GWBUF *pa
MXS_NOTICE("Maxrows filter is sending data.");
}
return csdata->down.routeQuery(csdata->down.instance, csdata->down.session, packet);
return csdata->down.routeQuery(csdata->down.instance,
csdata->down.session,
packet);
}
/**
@ -329,7 +402,9 @@ static int routeQuery(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, GWBUF *pa
* @param sdata The filter session data
* @param queue The query data
*/
static int clientReply(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, GWBUF *data)
static int clientReply(MXS_FILTER *instance,
MXS_FILTER_SESSION *sdata,
GWBUF *data)
{
MAXROWS_INSTANCE *cinstance = (MAXROWS_INSTANCE*)instance;
MAXROWS_SESSION_DATA *csdata = (MAXROWS_SESSION_DATA*)sdata;
@ -387,7 +462,8 @@ static int clientReply(MXS_FILTER *instance, MXS_FILTER_SESSION *sdata, GWBUF *d
break;
default:
MXS_ERROR("Internal filter logic broken, unexpected state: %d", csdata->state);
MXS_ERROR("Internal filter logic broken, unexpected state: %d",
csdata->state);
ss_dassert(!true);
rv = send_upstream(csdata);
maxrows_response_state_reset(&csdata->res);
@ -463,6 +539,7 @@ static MAXROWS_SESSION_DATA *maxrows_session_data_create(MAXROWS_INSTANCE *insta
MYSQL_session *mysql_session = (MYSQL_session*)session->client_dcb->data;
data->instance = instance;
data->session = session;
data->input_sql = NULL;
data->state = MAXROWS_EXPECTING_NOTHING;
}
@ -501,7 +578,10 @@ static int handle_expecting_fields(MAXROWS_SESSION_DATA *csdata)
while (!insufficient && (buflen - csdata->res.offset >= MYSQL_HEADER_LEN))
{
uint8_t header[MYSQL_HEADER_LEN + 1];
gwbuf_copy_data(csdata->res.data, csdata->res.offset, MYSQL_HEADER_LEN + 1, header);
gwbuf_copy_data(csdata->res.data,
csdata->res.offset,
MYSQL_HEADER_LEN + 1,
header);
size_t packetlen = MYSQL_HEADER_LEN + MYSQL_GET_PAYLOAD_LEN(header);
@ -585,7 +665,10 @@ static int handle_expecting_response(MAXROWS_SESSION_DATA *csdata)
uint8_t header[MYSQL_HEADER_LEN + 1 + 8];
// Read packet header from buffer at current offset
gwbuf_copy_data(csdata->res.data, csdata->res.offset, MYSQL_HEADER_LEN + 1, header);
gwbuf_copy_data(csdata->res.data,
csdata->res.offset,
MYSQL_HEADER_LEN + 1,
header);
switch ((int)MYSQL_GET_COMMAND(header))
{
@ -611,7 +694,7 @@ static int handle_expecting_response(MAXROWS_SESSION_DATA *csdata)
if (csdata->discard_resultset)
{
rv = send_eof_upstream(csdata, csdata->res.rows_offset);
rv = send_maxrows_reply_limit(csdata);
csdata->state = MAXROWS_EXPECTING_NOTHING;
}
else
@ -653,7 +736,9 @@ static int handle_expecting_response(MAXROWS_SESSION_DATA *csdata)
// Now we can figure out how many fields there are, but first we
// need to copy some more data.
gwbuf_copy_data(csdata->res.data,
MYSQL_HEADER_LEN + 1, n_bytes - 1, &header[MYSQL_HEADER_LEN + 1]);
MYSQL_HEADER_LEN + 1,
n_bytes - 1,
&header[MYSQL_HEADER_LEN + 1]);
csdata->res.n_totalfields = mxs_leint_value(&header[4]);
csdata->res.offset += MYSQL_HEADER_LEN + n_bytes;
@ -692,7 +777,10 @@ static int handle_rows(MAXROWS_SESSION_DATA *csdata)
bool pending_large_data = csdata->large_packet;
// header array holds a full EOF packet
uint8_t header[MYSQL_EOF_PACKET_LEN];
gwbuf_copy_data(csdata->res.data, csdata->res.offset, MYSQL_EOF_PACKET_LEN, header);
gwbuf_copy_data(csdata->res.data,
csdata->res.offset,
MYSQL_EOF_PACKET_LEN,
header);
size_t packetlen = MYSQL_HEADER_LEN + MYSQL_GET_PAYLOAD_LEN(header);
@ -703,7 +791,9 @@ static int handle_rows(MAXROWS_SESSION_DATA *csdata)
* max is 1 byte less than EOF_PACKET_LEN
* If true skip data processing.
*/
if (pending_large_data && (packetlen >= MYSQL_HEADER_LEN && packetlen < MYSQL_EOF_PACKET_LEN))
if (pending_large_data &&
(packetlen >= MYSQL_HEADER_LEN &&
packetlen < MYSQL_EOF_PACKET_LEN))
{
// Update offset, number of rows and break
csdata->res.offset += packetlen;
@ -758,7 +848,7 @@ static int handle_rows(MAXROWS_SESSION_DATA *csdata)
// Send data in buffer or empty resultset
if (csdata->discard_resultset)
{
rv = send_eof_upstream(csdata, csdata->res.rows_offset);
rv = send_maxrows_reply_limit(csdata);
}
else
{
@ -790,8 +880,10 @@ static int handle_rows(MAXROWS_SESSION_DATA *csdata)
*/
if (packetlen < MYSQL_EOF_PACKET_LEN)
{
MXS_ERROR("EOF packet has size of %lu instead of %d", packetlen, MYSQL_EOF_PACKET_LEN);
rv = send_eof_upstream(csdata, csdata->res.rows_offset);
MXS_ERROR("EOF packet has size of %lu instead of %d",
packetlen,
MYSQL_EOF_PACKET_LEN);
rv = send_maxrows_reply_limit(csdata);
csdata->state = MAXROWS_EXPECTING_NOTHING;
break;
}
@ -812,7 +904,7 @@ static int handle_rows(MAXROWS_SESSION_DATA *csdata)
// Discard data or send data
if (csdata->discard_resultset)
{
rv = send_eof_upstream(csdata, csdata->res.rows_offset);
rv = send_maxrows_reply_limit(csdata);
}
else
{
@ -858,7 +950,8 @@ static int handle_rows(MAXROWS_SESSION_DATA *csdata)
{
if (csdata->instance->config.debug & MAXROWS_DEBUG_DISCARDING)
{
MXS_INFO("max_resultset_rows %lu reached, not returning the resultset.", csdata->res.n_rows);
MXS_INFO("max_resultset_rows %lu reached, not returning the resultset.",
csdata->res.n_rows);
}
// Set the discard indicator
@ -902,7 +995,16 @@ static int send_upstream(MAXROWS_SESSION_DATA *csdata)
{
ss_dassert(csdata->res.data != NULL);
int rv = csdata->up.clientReply(csdata->up.instance, csdata->up.session, csdata->res.data);
/* Free a saved SQL not freed by send_error_upstream() */
if (csdata->input_sql)
{
gwbuf_free(csdata->input_sql);
csdata->input_sql = NULL;
}
int rv = csdata->up.clientReply(csdata->up.instance,
csdata->up.session,
csdata->res.data);
csdata->res.data = NULL;
return rv;
@ -915,18 +1017,21 @@ static int send_upstream(MAXROWS_SESSION_DATA *csdata)
* at the end.
*
* @param csdata Session data
* @param offset The offset to server reply pointing to
* next byte after column definitions EOF
* of the first result set.
*
* @return Whatever the upstream returns.
* @return Non-Zero if successful, 0 on errors
*/
static int send_eof_upstream(MAXROWS_SESSION_DATA *csdata, size_t offset)
static int send_eof_upstream(MAXROWS_SESSION_DATA *csdata)
{
int rv = -1;
/* Sequence byte is #3 */
uint8_t eof[MYSQL_EOF_PACKET_LEN] = {05, 00, 00, 01, 0xfe, 00, 00, 02, 00};
GWBUF *new_pkt = NULL;
/**
* The offset to server reply pointing to
* next byte after column definitions EOF
* of the first result set.
*/
size_t offset = csdata->res.rows_offset;
ss_dassert(csdata->res.data != NULL);
@ -955,7 +1060,9 @@ static int send_eof_upstream(MAXROWS_SESSION_DATA *csdata, size_t offset)
if (new_pkt)
{
/* new_pkt will be freed by write routine */
rv = csdata->up.clientReply(csdata->up.instance, csdata->up.session, new_pkt);
rv = csdata->up.clientReply(csdata->up.instance,
csdata->up.session,
new_pkt);
}
}
@ -973,3 +1080,154 @@ static int send_eof_upstream(MAXROWS_SESSION_DATA *csdata, size_t offset)
return rv;
}
/**
* Send OK packet data upstream.
*
* @param csdata Session data
*
* @return Non-Zero if successful, 0 on errors
*/
static int send_ok_upstream(MAXROWS_SESSION_DATA *csdata)
{
/* Note: sequence id is always 01 (4th byte) */
const static uint8_t ok[MYSQL_OK_PACKET_MIN_LEN] = { 07, 00, 00, 01, 00, 00,
00, 02, 00, 00, 00 };
ss_dassert(csdata->res.data != NULL);
GWBUF *packet = gwbuf_alloc(MYSQL_OK_PACKET_MIN_LEN);
if(!packet)
{
/* Abort client connection */
poll_fake_hangup_event(csdata->session->client_dcb);
gwbuf_free(csdata->res.data);
csdata->res.data = NULL;
return 0;
}
uint8_t *ptr = GWBUF_DATA(packet);
memcpy(ptr, &ok, MYSQL_OK_PACKET_MIN_LEN);
ss_dassert(csdata->res.data != NULL);
int rv = csdata->up.clientReply(csdata->up.instance,
csdata->up.session,
packet);
gwbuf_free(csdata->res.data);
csdata->res.data = NULL;
return rv;
}
/**
* Send ERR packet data upstream.
*
* An error packet is sent to client including
* a message prefix plus the original SQL input
*
* @param csdata Session data
* @return Non-Zero if successful, 0 on errors
*/
static int send_error_upstream(MAXROWS_SESSION_DATA *csdata)
{
GWBUF *err_pkt;
uint8_t hdr_err[MYSQL_ERR_PACKET_MIN_LEN];
unsigned long bytes_copied;
char *err_msg_prefix = "Row limit/size exceeded for query: ";
int err_prefix_len = strlen(err_msg_prefix);
unsigned long pkt_len = MYSQL_ERR_PACKET_MIN_LEN + err_prefix_len;
unsigned long sql_len = gwbuf_length(csdata->input_sql) -
(MYSQL_HEADER_LEN + 1);
/**
* The input SQL statement added in the error message
* has a limit of MAXROWS_INPUT_SQL_MAX_LEN bytes
*/
sql_len = (sql_len > MAXROWS_INPUT_SQL_MAX_LEN) ?
MAXROWS_INPUT_SQL_MAX_LEN : sql_len;
uint8_t sql[sql_len];
ss_dassert(csdata->res.data != NULL);
pkt_len += sql_len;
bytes_copied = gwbuf_copy_data(csdata->input_sql,
MYSQL_HEADER_LEN + 1,
sql_len,
sql);
if (!bytes_copied ||
(err_pkt = gwbuf_alloc(MYSQL_HEADER_LEN + pkt_len)) == NULL)
{
/* Abort client connection */
poll_fake_hangup_event(csdata->session->client_dcb);
gwbuf_free(csdata->res.data);
gwbuf_free(csdata->input_sql);
csdata->res.data = NULL;
csdata->input_sql = NULL;
return 0;
}
uint8_t *ptr = GWBUF_DATA(err_pkt);
memcpy(ptr, &hdr_err, MYSQL_ERR_PACKET_MIN_LEN);
unsigned int err_errno = 1415;
char err_state[7] = "#0A000";
/* Set the payload length of the whole error message */
gw_mysql_set_byte3(&ptr[0], pkt_len);
/* Note: sequence id is always 01 (4th byte) */
ptr[3] = 1;
/* Error indicator */
ptr[4] = 0xff;
/* MySQL error code: 2 bytes */
gw_mysql_set_byte2(&ptr[5], err_errno);
/* SQL state marker '#' and SQL state: 6 bytes */
memcpy((char *)&ptr[7], err_state, 6);
/* Copy error message prefix */
memcpy(&ptr[13], err_msg_prefix, err_prefix_len);
/* Copy SQL input */
memcpy(&ptr[13 + err_prefix_len], sql, sql_len);
int rv = csdata->up.clientReply(csdata->up.instance,
csdata->up.session,
err_pkt);
/* Free server result buffer */
gwbuf_free(csdata->res.data);
/* Free input_sql buffer */
gwbuf_free(csdata->input_sql);
csdata->res.data = NULL;
csdata->input_sql = NULL;
return rv;
}
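The ERR packet above is assembled field by field: the 3-byte payload length, sequence id 1, the 0xff marker, the 2-byte error code 1415, the '#'-prefixed SQL state and finally the message text (prefix plus the truncated input SQL). The offsets imply that `MYSQL_ERR_PACKET_MIN_LEN` is the 9-byte fixed part of the ERR payload. A stand-alone sketch of the same layout; the helper name and the fixed output buffer are illustrative only:

```
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build a minimal MySQL ERR packet: 4-byte header, 0xff marker, 2-byte error
 * code, '#' + 5-byte SQL state, then the message text. The caller must
 * provide at least 13 + strlen(msg) bytes of output space. */
static size_t build_err_packet(uint8_t *out, unsigned err_code,
                               const char *sqlstate, const char *msg)
{
    size_t msg_len = strlen(msg);
    size_t payload = 1 + 2 + 6 + msg_len;  /* 0xff + errno + #state + text */

    out[0] = payload & 0xff;               /* 3-byte payload length        */
    out[1] = (payload >> 8) & 0xff;
    out[2] = (payload >> 16) & 0xff;
    out[3] = 1;                            /* sequence id                  */
    out[4] = 0xff;                         /* ERR marker                   */
    out[5] = err_code & 0xff;              /* error code, little-endian    */
    out[6] = (err_code >> 8) & 0xff;
    memcpy(&out[7], sqlstate, 6);          /* "#0A000"                     */
    memcpy(&out[13], msg, msg_len);        /* human readable message       */
    return 4 + payload;                    /* total bytes written          */
}

int main(void)
{
    uint8_t pkt[256];
    size_t len = build_err_packet(pkt, 1415, "#0A000",
                                  "Row limit/size exceeded for query: SELECT 1");
    printf("ERR packet is %zu bytes\n", len);
    return 0;
}
```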
/**
* Send the proper reply to client when the maxrows
* limit/size is hit.
*
* @param csdata Session data
* @return Non-Zero if successful, 0 on errors
*/
static int send_maxrows_reply_limit(MAXROWS_SESSION_DATA *csdata)
{
switch(csdata->instance->config.m_return)
{
case MAXROWS_RETURN_EMPTY:
return send_eof_upstream(csdata);
break;
case MAXROWS_RETURN_OK:
return send_ok_upstream(csdata);
break;
case MAXROWS_RETURN_ERR:
return send_error_upstream(csdata);
break;
default:
MXS_ERROR("MaxRows config value not expected!");
ss_dassert(!true);
return 0;
break;
}
}

View File

@@ -36,5 +36,7 @@ MXS_BEGIN_DECLS
#define MAXROWS_DEFAULT_MAX_RESULTSET_SIZE "65536"
// Integer value
#define MAXROWS_DEFAULT_DEBUG "0"
// Max size of copied input SQL
#define MAXROWS_INPUT_SQL_MAX_LEN 1024
MXS_END_DECLS

View File

@@ -1160,7 +1160,7 @@ routeQuery(MXS_FILTER *instance, MXS_FILTER_SESSION *session, GWBUF *queue)
* Something matched the trigger, log the query
*/
MXS_INFO("Routing message to: %s:%d %s as %s/%s, exchange: %s<%s> key:%s queue:%s",
MXS_INFO("Routing message to: [%s]:%d %s as %s/%s, exchange: %s<%s> key:%s queue:%s",
my_instance->hostname, my_instance->port,
my_instance->vhost, my_instance->username,
my_instance->password, my_instance->exchange,
@@ -1491,7 +1491,7 @@ diagnostic(MXS_FILTER *instance, MXS_FILTER_SESSION *fsession, DCB *dcb)
if (my_instance)
{
dcb_printf(dcb, "Connecting to %s:%d as '%s'.\nVhost: %s\tExchange: %s\nKey: %s\tQueue: %s\n\n",
dcb_printf(dcb, "Connecting to [%s]:%d as '%s'.\nVhost: %s\tExchange: %s\nKey: %s\tQueue: %s\n\n",
my_instance->hostname, my_instance->port,
my_instance->username,
my_instance->vhost, my_instance->exchange,
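This hunk, and most of the hunks that follow, change the address formatting in log and diagnostic messages from `%s:%d` to `[%s]:%d`. With the IPv6 support added in 2.1, a host string such as `::1` already contains colons, so the address is bracketed to keep it clearly separated from the port. A tiny illustration with made-up addresses:

```
#include <stdio.h>

int main(void)
{
    const char *hosts[] = {"192.168.0.10", "::1"};
    int port = 3306;

    for (int i = 0; i < 2; i++)
    {
        /* Old format: ambiguous for IPv6 ("::1:3306") */
        printf("%s:%d\n", hosts[i], port);
        /* New format: brackets keep host and port apart ("[::1]:3306") */
        printf("[%s]:%d\n", hosts[i], port);
    }
    return 0;
}
```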

View File

@@ -428,19 +428,22 @@ regex_replace(const char *sql, pcre2_code *re, pcre2_match_data *match_data, con
result_size = strlen(sql) + strlen(replace);
result = MXS_MALLOC(result_size);
size_t result_size_tmp = result_size;
while (result &&
pcre2_substitute(re, (PCRE2_SPTR) sql, PCRE2_ZERO_TERMINATED, 0,
PCRE2_SUBSTITUTE_GLOBAL, match_data, NULL,
(PCRE2_SPTR) replace, PCRE2_ZERO_TERMINATED,
(PCRE2_UCHAR*) result, (PCRE2_SIZE*) & result_size) == PCRE2_ERROR_NOMEMORY)
(PCRE2_UCHAR*) result, (PCRE2_SIZE*) & result_size_tmp) == PCRE2_ERROR_NOMEMORY)
{
result_size_tmp = 1.5 * result_size;
char *tmp;
if ((tmp = MXS_REALLOC(result, (result_size *= 1.5))) == NULL)
if ((tmp = MXS_REALLOC(result, result_size_tmp)) == NULL)
{
MXS_FREE(result);
result = NULL;
}
result = tmp;
result_size = result_size_tmp;
}
}
return result;
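The corrected loop keeps the original estimate in `result_size` and passes a separate `result_size_tmp` to `pcre2_substitute()`, growing the buffer by 50% for as long as the call reports `PCRE2_ERROR_NOMEMORY`; previously the length handed to PCRE2 and the reallocation size drifted apart. A stand-alone sketch of the same grow-and-retry pattern, using plain `malloc`/`realloc` instead of the `MXS_*` wrappers and a fixed pattern and replacement:

```
#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int errcode;
    PCRE2_SIZE erroffset;
    pcre2_code *re = pcre2_compile((PCRE2_SPTR) "a", PCRE2_ZERO_TERMINATED,
                                   0, &errcode, &erroffset, NULL);
    if (re == NULL)
    {
        return 1;
    }
    pcre2_match_data *md = pcre2_match_data_create_from_pattern(re, NULL);
    if (md == NULL)
    {
        pcre2_code_free(re);
        return 1;
    }

    const char *sql = "aaaa";
    const char *replace = "bbbbbbbbbb";

    /* The first guess is deliberately too small for this input */
    size_t result_size = strlen(sql) + strlen(replace);
    char *result = malloc(result_size);
    PCRE2_SIZE result_size_tmp = result_size;

    while (result &&
           pcre2_substitute(re, (PCRE2_SPTR) sql, PCRE2_ZERO_TERMINATED, 0,
                            PCRE2_SUBSTITUTE_GLOBAL, md, NULL,
                            (PCRE2_SPTR) replace, PCRE2_ZERO_TERMINATED,
                            (PCRE2_UCHAR*) result,
                            &result_size_tmp) == PCRE2_ERROR_NOMEMORY)
    {
        /* Grow by 50% and retry with a buffer of the new size */
        result_size_tmp = 1.5 * result_size;
        char *tmp = realloc(result, result_size_tmp);
        if (tmp == NULL)
        {
            free(result);
            result = NULL;
        }
        else
        {
            result = tmp;
            result_size = result_size_tmp;
        }
    }

    if (result)
    {
        printf("%s\n", result); /* forty 'b' characters */
        free(result);
    }
    pcre2_match_data_free(md);
    pcre2_code_free(re);
    return 0;
}
```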

View File

@@ -79,7 +79,7 @@ void update_server_status(MXS_MONITOR *monitor, MXS_MONITOR_SERVERS *database)
}
else
{
MXS_ERROR("Failed to query server %s (%s:%d): %d, %s",
MXS_ERROR("Failed to query server %s ([%s]:%d): %d, %s",
database->server->unique_name, database->server->name,
database->server->port, mysql_errno(database->con),
mysql_error(database->con));

View File

@@ -524,7 +524,7 @@ monitorMain(void *arg)
/* Log server status change */
if (mon_status_changed(ptr))
{
MXS_DEBUG("Backend server %s:%d state : %s",
MXS_DEBUG("Backend server [%s]:%d state : %s",
ptr->server->name,
ptr->server->port,
STRSRVSTATUS(ptr->server));

View File

@@ -539,7 +539,7 @@ monitorMain(void *arg)
if (mon_status_changed(ptr) ||
mon_print_fail_status(ptr))
{
MXS_DEBUG("Backend server %s:%d state : %s",
MXS_DEBUG("Backend server [%s]:%d state : %s",
ptr->server->name,
ptr->server->port,
STRSRVSTATUS(ptr->server));

View File

@@ -559,7 +559,7 @@ static MXS_MONITOR_SERVERS *build_mysql51_replication_tree(MXS_MONITOR *mon)
/* Set the Slave Role */
if (ismaster)
{
MXS_DEBUG("Master server found at %s:%d with %d slaves",
MXS_DEBUG("Master server found at [%s]:%d with %d slaves",
database->server->name,
database->server->port,
nslaves);
@@ -1138,7 +1138,7 @@ monitorMain(void *arg)
if (SRV_MASTER_STATUS(ptr->mon_prev_status))
{
/** Master failed, can't recover */
MXS_NOTICE("Server %s:%d lost the master status.",
MXS_NOTICE("Server [%s]:%d lost the master status.",
ptr->server->name,
ptr->server->port);
}
@@ -1147,12 +1147,12 @@ monitorMain(void *arg)
if (mon_status_changed(ptr))
{
#if defined(SS_DEBUG)
MXS_INFO("Backend server %s:%d state : %s",
MXS_INFO("Backend server [%s]:%d state : %s",
ptr->server->name,
ptr->server->port,
STRSRVSTATUS(ptr->server));
#else
MXS_DEBUG("Backend server %s:%d state : %s",
MXS_DEBUG("Backend server [%s]:%d state : %s",
ptr->server->name,
ptr->server->port,
STRSRVSTATUS(ptr->server));

View File

@@ -349,7 +349,7 @@ monitorMain(void *arg)
if (ptr->server->status != ptr->mon_prev_status ||
SERVER_IS_DOWN(ptr->server))
{
MXS_DEBUG("Backend server %s:%d state : %s",
MXS_DEBUG("Backend server [%s]:%d state : %s",
ptr->server->name,
ptr->server->port,
STRSRVSTATUS(ptr->server));

View File

@@ -281,7 +281,7 @@ static int gw_do_connect_to_backend(char *host, int port, int *fd)
if (so == -1)
{
MXS_ERROR("Establishing connection to backend server %s:%d failed.", host, port);
MXS_ERROR("Establishing connection to backend server [%s]:%d failed.", host, port);
return rv;
}
@@ -295,7 +295,7 @@ static int gw_do_connect_to_backend(char *host, int port, int *fd)
}
else
{
MXS_ERROR("Failed to connect backend server %s:%d due to: %d, %s.",
MXS_ERROR("Failed to connect backend server [%s]:%d due to: %d, %s.",
host, port, errno, mxs_strerror(errno));
close(so);
return rv;
@@ -304,7 +304,7 @@ static int gw_do_connect_to_backend(char *host, int port, int *fd)
*fd = so;
MXS_DEBUG("%lu [gw_do_connect_to_backend] Connected to backend server "
"%s:%d, fd %d.", pthread_self(), host, port, so);
"[%s]:%d, fd %d.", pthread_self(), host, port, so);
return rv;
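The messages above come from the backend connect path, which now also logs addresses in the bracketed form. For reference, a generic dual-stack connect that accepts either an IPv4 or an IPv6 address can be written with `getaddrinfo()`; this is a plain POSIX sketch, not the actual `gw_do_connect_to_backend()` implementation:

```
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Resolve host/port (IPv4 or IPv6) and return a connected fd, or -1. */
static int connect_backend(const char *host, int port)
{
    char portstr[16];
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    snprintf(portstr, sizeof(portstr), "%d", port);
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* both AF_INET and AF_INET6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, portstr, &hints, &res) != 0)
    {
        fprintf(stderr, "Failed to resolve [%s]:%d\n", host, port);
        return -1;
    }

    for (ai = res; ai != NULL; ai = ai->ai_next)
    {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd == -1)
        {
            continue;
        }
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
        {
            break; /* connected */
        }
        close(fd);
        fd = -1;
    }

    freeaddrinfo(res);
    return fd;
}

int main(void)
{
    int fd = connect_backend("::1", 3306);
    if (fd == -1)
    {
        printf("connect failed\n");
    }
    else
    {
        printf("connected, fd %d\n", fd);
        close(fd);
    }
    return 0;
}
```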

View File

@@ -1553,11 +1553,9 @@ bool mxs_mysql_is_result_set(GWBUF *buffer)
case MYSQL_REPLY_EOF:
/** Not a result set */
break;
default:
if (gwbuf_copy_data(buffer, MYSQL_HEADER_LEN + 1, 1, &cmd) && cmd > 1)
{
rval = true;
}
rval = true;
break;
}
}
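After this change, any server reply whose first payload byte is not one of the OK, ERR, LOCAL INFILE or EOF markers is treated as a result set, because that byte then carries the (length-encoded) column count. A minimal sketch of the same classification over a raw packet, with the marker values defined locally for the example:

```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MYSQL_HEADER_LEN 4

/* First-payload-byte markers from the MySQL protocol */
#define MYSQL_REPLY_OK           0x00
#define MYSQL_REPLY_ERR          0xff
#define MYSQL_REPLY_LOCAL_INFILE 0xfb
#define MYSQL_REPLY_EOF          0xfe

/* Anything that is not OK/ERR/LOCAL INFILE/EOF starts with a column
 * count, i.e. it is the first packet of a result set. */
static bool is_result_set(const uint8_t *packet)
{
    switch (packet[MYSQL_HEADER_LEN])
    {
    case MYSQL_REPLY_OK:
    case MYSQL_REPLY_ERR:
    case MYSQL_REPLY_LOCAL_INFILE:
    case MYSQL_REPLY_EOF:
        return false;
    default:
        return true;
    }
}

int main(void)
{
    /* First five bytes of a three-column result set vs. an OK packet */
    const uint8_t resultset[] = {0x01, 0x00, 0x00, 0x01, 0x03};
    const uint8_t ok[]        = {0x07, 0x00, 0x00, 0x01, 0x00};

    printf("resultset: %d, ok: %d\n",
           is_result_set(resultset), is_result_set(ok));
    return 0;
}
```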

View File

@@ -904,7 +904,7 @@ diagnostics(MXS_ROUTER *router, DCB *dcb)
char sync_marker_hex[SYNC_MARKER_SIZE * 2 + 1];
dcb_printf(dcb, "\t\tClient UUID: %s\n", session->uuid);
dcb_printf(dcb, "\t\tClient_host_port: %s:%d\n",
dcb_printf(dcb, "\t\tClient_host_port: [%s]:%d\n",
session->dcb->remote, dcb_get_port(session->dcb));
dcb_printf(dcb, "\t\tUsername: %s\n", session->dcb->user);
dcb_printf(dcb, "\t\tClient DCB: %p\n", session->dcb);

View File

@@ -1169,7 +1169,7 @@ closeSession(MXS_ROUTER *instance, MXS_ROUTER_SESSION *router_session)
if (slave->state > 0)
{
MXS_NOTICE("%s: Slave %s:%d, server id %d, disconnected after %ld seconds. "
MXS_NOTICE("%s: Slave [%s]:%d, server id %d, disconnected after %ld seconds. "
"%d SQL commands, %d events sent (%lu bytes), binlog '%s', "
"last position %lu",
router->service->name, slave->dcb->remote, dcb_get_port(slave->dcb),
@@ -1579,7 +1579,7 @@ diagnostics(MXS_ROUTER *router, DCB *dcb)
dcb_printf(dcb, "\t\tSlave UUID: %s\n", session->uuid);
}
dcb_printf(dcb,
"\t\tSlave_host_port: %s:%d\n",
"\t\tSlave_host_port: [%s]:%d\n",
session->dcb->remote, dcb_get_port(session->dcb));
dcb_printf(dcb,
"\t\tUsername: %s\n",
@@ -1819,7 +1819,7 @@ errorReply(MXS_ROUTER *instance,
dcb_close(backend_dcb);
MXS_ERROR("%s: Master connection error %lu '%s' in state '%s', "
"%s while connecting to master %s:%d",
"%s while connecting to master [%s]:%d",
router->service->name, router->m_errno, router->m_errmsg,
blrm_states[BLRM_TIMESTAMP], msg,
router->service->dbref->server->name,
@@ -1862,7 +1862,7 @@ errorReply(MXS_ROUTER *instance,
spinlock_release(&router->lock);
MXS_ERROR("%s: Master connection error %lu '%s' in state '%s', "
"%s attempting reconnect to master %s:%d",
"%s attempting reconnect to master [%s]:%d",
router->service->name, mysql_errno, errmsg,
blrm_states[router->master_state], msg,
router->service->dbref->server->name,
@@ -1871,7 +1871,7 @@ errorReply(MXS_ROUTER *instance,
else
{
MXS_ERROR("%s: Master connection error %lu '%s' in state '%s', "
"%s attempting reconnect to master %s:%d",
"%s attempting reconnect to master [%s]:%d",
router->service->name, router->m_errno,
router->m_errmsg ? router->m_errmsg : "(memory failure)",
blrm_states[router->master_state], msg,
@@ -2511,7 +2511,7 @@ destroyInstance(MXS_ROUTER *instance)
}
}
MXS_INFO("%s is being stopped by MaxScale shudown. Disconnecting from master %s:%d, "
MXS_INFO("%s is being stopped by MaxScale shudown. Disconnecting from master [%s]:%d, "
"read up to log %s, pos %lu, transaction safe pos %lu",
inst->service->name,
inst->service->dbref->server->name,

View File

@@ -195,7 +195,7 @@ blr_start_master(void* data)
}
router->master->remote = MXS_STRDUP_A(router->service->dbref->server->name);
MXS_NOTICE("%s: attempting to connect to master server %s:%d, binlog %s, pos %lu",
MXS_NOTICE("%s: attempting to connect to master server [%s]:%d, binlog %s, pos %lu",
router->service->name, router->service->dbref->server->name,
router->service->dbref->server->port, router->binlog_name, router->current_pos);
@@ -799,7 +799,7 @@ blr_master_response(ROUTER_INSTANCE *router, GWBUF *buf)
/* if semisync option is set, check for master semi-sync availability */
if (router->request_semi_sync)
{
MXS_NOTICE("%s: checking Semi-Sync replication capability for master server %s:%d",
MXS_NOTICE("%s: checking Semi-Sync replication capability for master server [%s]:%d",
router->service->name,
router->service->dbref->server->name,
router->service->dbref->server->port);
@@ -832,7 +832,7 @@ blr_master_response(ROUTER_INSTANCE *router, GWBUF *buf)
if (router->master_semi_sync == MASTER_SEMISYNC_NOT_AVAILABLE)
{
/* not installed */
MXS_NOTICE("%s: master server %s:%d doesn't have semi_sync capability",
MXS_NOTICE("%s: master server [%s]:%d doesn't have semi_sync capability",
router->service->name,
router->service->dbref->server->name,
router->service->dbref->server->port);
@@ -846,7 +846,7 @@ blr_master_response(ROUTER_INSTANCE *router, GWBUF *buf)
if (router->master_semi_sync == MASTER_SEMISYNC_DISABLED)
{
/* Installed but not enabled, right now */
MXS_NOTICE("%s: master server %s:%d doesn't have semi_sync enabled right now, "
MXS_NOTICE("%s: master server [%s]:%d doesn't have semi_sync enabled right now, "
"Requesting Semi-Sync Replication",
router->service->name,
router->service->dbref->server->name,
@@ -855,7 +855,7 @@ blr_master_response(ROUTER_INSTANCE *router, GWBUF *buf)
else
{
/* Installed and enabled */
MXS_NOTICE("%s: master server %s:%d has semi_sync enabled, Requesting Semi-Sync Replication",
MXS_NOTICE("%s: master server [%s]:%d has semi_sync enabled, Requesting Semi-Sync Replication",
router->service->name,
router->service->dbref->server->name,
router->service->dbref->server->port);
@@ -896,7 +896,7 @@ blr_master_response(ROUTER_INSTANCE *router, GWBUF *buf)
router->master->func.write(router->master, buf);
MXS_NOTICE("%s: Request binlog records from %s at "
"position %lu from master server %s:%d",
"position %lu from master server [%s]:%d",
router->service->name, router->binlog_name,
router->current_pos,
router->service->dbref->server->name,
@@ -1629,7 +1629,7 @@ blr_handle_binlog_record(ROUTER_INSTANCE *router, GWBUF *pkt)
MXS_DEBUG("%s: binlog record in file %s, pos %lu has "
"SEMI_SYNC_ACK_REQ and needs a Semi-Sync ACK packet to "
"be sent to the master server %s:%d",
"be sent to the master server [%s]:%d",
router->service->name, router->binlog_name,
router->current_pos,
router->service->dbref->server->name,
@@ -2283,7 +2283,7 @@ blr_check_heartbeat(ROUTER_INSTANCE *router)
{
if ((t_now - router->stats.lastReply) > (router->heartbeat + BLR_NET_LATENCY_WAIT_TIME))
{
MXS_ERROR("No event received from master %s:%d in heartbeat period (%lu seconds), "
MXS_ERROR("No event received from master [%s]:%d in heartbeat period (%lu seconds), "
"last event (%s %d) received %lu seconds ago. Assuming connection is dead "
"and reconnecting.",
router->service->dbref->server->name,
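The heartbeat check above compares the time since the last received event against the configured heartbeat interval plus a small network-latency allowance. A worked example of the arithmetic; the value used for `BLR_NET_LATENCY_WAIT_TIME` is an assumption for illustration only:

```
#include <stdio.h>
#include <time.h>

#define BLR_NET_LATENCY_WAIT_TIME 1 /* seconds; assumed value for the example */

int main(void)
{
    time_t heartbeat = 300;      /* router->heartbeat, in seconds       */
    time_t reply_age = 320;      /* t_now - router->stats.lastReply     */

    if (reply_age > heartbeat + BLR_NET_LATENCY_WAIT_TIME)
    {
        /* 320 > 301: the connection is assumed dead and is re-established */
        printf("No event for %ld s (limit %ld s), reconnecting\n",
               (long) reply_age, (long) (heartbeat + BLR_NET_LATENCY_WAIT_TIME));
    }
    return 0;
}
```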
@@ -2546,7 +2546,7 @@ bool blr_send_event(blr_thread_role_t role,
}
else
{
MXS_ERROR("Failed to send an event of %u bytes to slave at %s:%d.",
MXS_ERROR("Failed to send an event of %u bytes to slave at [%s]:%d.",
hdr->event_size, slave->dcb->remote,
dcb_get_port(slave->dcb));
}

View File

@@ -2420,7 +2420,7 @@ blr_slave_binlog_dump(ROUTER_INSTANCE *router, ROUTER_SLAVE *slave, GWBUF *queue
slave->state = BLRS_DUMPING;
MXS_NOTICE("%s: Slave %s:%d, server id %d requested binlog file %s from position %lu",
MXS_NOTICE("%s: Slave [%s]:%d, server id %d requested binlog file %s from position %lu",
router->service->name, slave->dcb->remote,
dcb_get_port(slave->dcb),
slave->serverid,
@@ -2958,7 +2958,7 @@ blr_slave_catchup(ROUTER_INSTANCE *router, ROUTER_SLAVE *slave, bool large)
* but the new binlog file has not yet been created. Therefore
* we ignore these issues during the rotate processing.
*/
MXS_ERROR("%s: Slave %s:%d, server-id %d reached end of file for binlog file %s "
MXS_ERROR("%s: Slave [%s]:%d, server-id %d reached end of file for binlog file %s "
"at %lu which is not the file currently being downloaded. "
"Master binlog is %s, %lu. This may be caused by a "
"previous failure of the master.",
@@ -3765,7 +3765,7 @@ blr_stop_slave(ROUTER_INSTANCE* router, ROUTER_SLAVE* slave)
spinlock_release(&router->lock);
MXS_NOTICE("%s: STOP SLAVE executed by %s@%s. Disconnecting from master %s:%d, "
MXS_NOTICE("%s: STOP SLAVE executed by %s@%s. Disconnecting from master [%s]:%d, "
"read up to log %s, pos %lu, transaction safe pos %lu",
router->service->name,
slave->dcb->user,
@@ -3938,8 +3938,7 @@ blr_start_slave(ROUTER_INSTANCE* router, ROUTER_SLAVE* slave)
/** Start replication from master */
blr_start_master(router);
MXS_NOTICE("%s: START SLAVE executed by %s@%s. "
"Trying connection to master %s:%d, "
MXS_NOTICE("%s: START SLAVE executed by %s@%s. Trying connection to master [%s]:%d, "
"binlog %s, pos %lu, transaction safe pos %lu",
router->service->name,
slave->dcb->user,

File diff suppressed because it is too large

View File

@@ -13,6 +13,7 @@
#include "readwritesplit.h"
#include <inttypes.h>
#include <stdio.h>
#include <strings.h>
#include <string.h>
@@ -619,17 +620,17 @@ static void diagnostics(MXS_ROUTER *instance, DCB *dcb)
all_pct = ((double)router->stats.n_all / (double)router->stats.n_queries) * 100.0;
}
dcb_printf(dcb, "\tNumber of router sessions: %d\n",
dcb_printf(dcb, "\tNumber of router sessions: %" PRIu64 "\n",
router->stats.n_sessions);
dcb_printf(dcb, "\tCurrent no. of router sessions: %d\n",
router->service->stats.n_current);
dcb_printf(dcb, "\tNumber of queries forwarded: %d\n",
dcb_printf(dcb, "\tNumber of queries forwarded: %" PRIu64 "\n",
router->stats.n_queries);
dcb_printf(dcb, "\tNumber of queries forwarded to master: %d (%.2f%%)\n",
dcb_printf(dcb, "\tNumber of queries forwarded to master: %" PRIu64 " (%.2f%%)\n",
router->stats.n_master, master_pct);
dcb_printf(dcb, "\tNumber of queries forwarded to slave: %d (%.2f%%)\n",
dcb_printf(dcb, "\tNumber of queries forwarded to slave: %" PRIu64 " (%.2f%%)\n",
router->stats.n_slave, slave_pct);
dcb_printf(dcb, "\tNumber of queries forwarded to all: %d (%.2f%%)\n",
dcb_printf(dcb, "\tNumber of queries forwarded to all: %" PRIu64 " (%.2f%%)\n",
router->stats.n_all, all_pct);
if ((weightby = serviceGetWeightingParameter(router->service)) != NULL)
@@ -760,14 +761,14 @@ static void clientReply(MXS_ROUTER *instance,
{
bool succp;
MXS_INFO("Backend %s:%d processed reply and starts to execute active cursor.",
MXS_INFO("Backend [%s]:%d processed reply and starts to execute active cursor.",
bref->ref->server->name, bref->ref->server->port);
succp = execute_sescmd_in_backend(bref);
if (!succp)
{
MXS_INFO("Backend %s:%d failed to execute session command.",
MXS_INFO("Backend [%s]:%d failed to execute session command.",
bref->ref->server->name, bref->ref->server->port);
}
}
@@ -781,7 +782,7 @@ static void clientReply(MXS_ROUTER *instance,
gwbuf_clone(bref->bref_pending_cmd))) == 1)
{
ROUTER_INSTANCE* inst = (ROUTER_INSTANCE *)instance;
atomic_add(&inst->stats.n_queries, 1);
atomic_add_uint64(&inst->stats.n_queries, 1);
/**
* Add one query response waiter to backend reference
*/

View File

@@ -337,11 +337,11 @@ struct router_client_session
*/
typedef struct
{
int n_sessions; /*< Number sessions created */
int n_queries; /*< Number of queries forwarded */
int n_master; /*< Number of stmts sent to master */
int n_slave; /*< Number of stmts sent to slave */
int n_all; /*< Number of stmts sent to all */
uint64_t n_sessions; /*< Number sessions created */
uint64_t n_queries; /*< Number of queries forwarded */
uint64_t n_master; /*< Number of stmts sent to master */
uint64_t n_slave; /*< Number of stmts sent to slave */
uint64_t n_all; /*< Number of stmts sent to all */
} ROUTER_STATS;
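Widening these counters to `uint64_t` pairs with the two changes visible in the earlier readwritesplit hunks: the values are printed with the `PRIu64` macro from `<inttypes.h>` and updated through `atomic_add_uint64()`. A stand-alone sketch of the same idea, using the GCC/Clang `__atomic` builtins in place of the MaxScale helper:

```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct
{
    uint64_t n_sessions; /* Number of sessions created  */
    uint64_t n_queries;  /* Number of queries forwarded */
} router_stats_t;

/* Stand-in for atomic_add_uint64(): a sequentially consistent 64-bit add */
static void stats_add(uint64_t *value, uint64_t n)
{
    __atomic_add_fetch(value, n, __ATOMIC_SEQ_CST);
}

int main(void)
{
    router_stats_t stats = {0, 0};

    stats_add(&stats.n_sessions, 1);
    for (int i = 0; i < 5; i++)
    {
        stats_add(&stats.n_queries, 1);
    }

    /* A plain %d would be wrong for 64-bit counters; PRIu64 matches uint64_t */
    printf("sessions: %" PRIu64 ", queries: %" PRIu64 "\n",
           stats.n_sessions, stats.n_queries);
    return 0;
}
```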
/**

View File

@@ -282,7 +282,7 @@ handle_target_is_all(route_target_t route_target,
if (result)
{
atomic_add(&inst->stats.n_all, 1);
atomic_add_uint64(&inst->stats.n_all, 1);
}
return result;
}
@@ -338,7 +338,7 @@ void check_session_command_reply(GWBUF *writebuf, sescmd_cursor_t *scur, backend
ss_dassert(len + 4 == GWBUF_LENGTH(scur->scmd_cur_cmd->my_sescmd_buf));
MXS_ERROR("Failed to execute session command in %s:%d. Error was: %s %s",
MXS_ERROR("Failed to execute session command in [%s]:%d. Error was: %s %s",
bref->ref->server->name,
bref->ref->server->port, err, replystr);
MXS_FREE(err);

View File

@@ -245,7 +245,7 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
if (MXS_LOG_PRIORITY_IS_ENABLED(LOG_INFO) &&
BREF_IS_IN_USE((&backend_ref[i])))
{
MXS_INFO("Route query to %s \t%s:%d%s",
MXS_INFO("Route query to %s \t[%s]:%d%s",
(SERVER_IS_MASTER(backend_ref[i].ref->server)
? "master" : "slave"),
backend_ref[i].ref->server->name,
@@ -352,7 +352,7 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
if (MXS_LOG_PRIORITY_IS_ENABLED(LOG_INFO))
{
MXS_INFO("Route query to %s \t%s:%d%s",
MXS_INFO("Route query to %s \t[%s]:%d%s",
(SERVER_IS_MASTER(backend_ref[i].ref->server)
? "master" : "slave"),
backend_ref[i].ref->server->name,
@@ -375,7 +375,7 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
if (sescmd_cursor_is_active(scur) && &backend_ref[i] != router_cli_ses->rses_master_ref)
{
nsucc += 1;
MXS_INFO("Backend %s:%d already executing sescmd.",
MXS_INFO("Backend [%s]:%d already executing sescmd.",
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port);
}
@@ -387,7 +387,7 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
}
else
{
MXS_ERROR("Failed to execute session command in %s:%d",
MXS_ERROR("Failed to execute session command in [%s]:%d",
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port);
}
@@ -643,7 +643,7 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
}
else
{
MXS_INFO("Server %s:%d is too much behind the master, %d s. and can't be chosen.",
MXS_INFO("Server [%s]:%d is too much behind the master, %d s. and can't be chosen.",
b->server->name, b->server->port, b->server->rlag);
}
}
@@ -1105,7 +1105,7 @@ bool handle_slave_is_target(ROUTER_INSTANCE *inst, ROUTER_CLIENT_SES *rses,
*/
if (rwsplit_get_dcb(target_dcb, rses, BE_SLAVE, NULL, rlag_max))
{
atomic_add(&inst->stats.n_slave, 1);
atomic_add_uint64(&inst->stats.n_slave, 1);
return true;
}
else
@@ -1193,14 +1193,14 @@ bool handle_master_is_target(ROUTER_INSTANCE *inst, ROUTER_CLIENT_SES *rses,
if (succp && master_dcb == curr_master_dcb)
{
atomic_add(&inst->stats.n_master, 1);
atomic_add_uint64(&inst->stats.n_master, 1);
*target_dcb = master_dcb;
}
else
{
if (succp && master_dcb == curr_master_dcb)
{
atomic_add(&inst->stats.n_master, 1);
atomic_add_uint64(&inst->stats.n_master, 1);
*target_dcb = master_dcb;
}
else
@@ -1266,7 +1266,7 @@ handle_got_target(ROUTER_INSTANCE *inst, ROUTER_CLIENT_SES *rses,
ss_dassert(target_dcb != NULL);
MXS_INFO("Route query to %s \t%s:%d <",
MXS_INFO("Route query to %s \t[%s]:%d <",
(SERVER_IS_MASTER(bref->ref->server) ? "master"
: "slave"), bref->ref->server->name, bref->ref->server->port);
/**
@@ -1289,7 +1289,7 @@ handle_got_target(ROUTER_INSTANCE *inst, ROUTER_CLIENT_SES *rses,
backend_ref_t *bref;
atomic_add(&inst->stats.n_queries, 1);
atomic_add_uint64(&inst->stats.n_queries, 1);
/**
* Add one query response waiter to backend reference
*/

View File

@@ -277,7 +277,7 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
{
if (BREF_IS_IN_USE((&backend_ref[i])))
{
MXS_INFO("Selected %s in \t%s:%d",
MXS_INFO("Selected %s in \t[%s]:%d",
STRSRVSTATUS(backend_ref[i].ref->server),
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port);
@@ -440,7 +440,7 @@ static bool connect_server(backend_ref_t *bref, MXS_SESSION *session, bool execu
}
else
{
MXS_ERROR("Failed to execute session command in %s (%s:%d). See earlier "
MXS_ERROR("Failed to execute session command in %s ([%s]:%d). See earlier "
"errors for more details.",
bref->ref->server->unique_name,
bref->ref->server->name,
@@ -453,7 +453,7 @@ static bool connect_server(backend_ref_t *bref, MXS_SESSION *session, bool execu
}
else
{
MXS_ERROR("Unable to establish connection with server %s:%d",
MXS_ERROR("Unable to establish connection with server [%s]:%d",
serv->name, serv->port);
}
@@ -486,26 +486,26 @@ static void log_server_connections(select_criteria_t select_criteria,
switch (select_criteria)
{
case LEAST_GLOBAL_CONNECTIONS:
MXS_INFO("MaxScale connections : %d in \t%s:%d %s",
MXS_INFO("MaxScale connections : %d in \t[%s]:%d %s",
b->server->stats.n_current, b->server->name,
b->server->port, STRSRVSTATUS(b->server));
break;
case LEAST_ROUTER_CONNECTIONS:
MXS_INFO("RWSplit connections : %d in \t%s:%d %s",
MXS_INFO("RWSplit connections : %d in \t[%s]:%d %s",
b->connections, b->server->name,
b->server->port, STRSRVSTATUS(b->server));
break;
case LEAST_CURRENT_OPERATIONS:
MXS_INFO("current operations : %d in \t%s:%d %s",
MXS_INFO("current operations : %d in \t[%s]:%d %s",
b->server->stats.n_current_ops,
b->server->name, b->server->port,
STRSRVSTATUS(b->server));
break;
case LEAST_BEHIND_MASTER:
MXS_INFO("replication lag : %d in \t%s:%d %s",
MXS_INFO("replication lag : %d in \t[%s]:%d %s",
b->server->rlag, b->server->name,
b->server->port, STRSRVSTATUS(b->server));
default:

View File

@@ -216,7 +216,7 @@ GWBUF *sescmd_cursor_process_replies(GWBUF *replybuf,
RW_CLOSE_BREF(&ses->rses_backend_ref[i]);
}
*reconnect = true;
MXS_INFO("Disabling slave %s:%d, result differs from "
MXS_INFO("Disabling slave [%s]:%d, result differs from "
"master's result. Master: %d Slave: %d",
ses->rses_backend_ref[i].ref->server->name,
ses->rses_backend_ref[i].ref->server->port,