Merge from develop

This commit is contained in:
MassimilianoPinto 2016-11-14 09:43:54 +01:00
commit 2b2d2cc679
105 changed files with 7921 additions and 3429 deletions

View File

@ -154,10 +154,6 @@ if(GCOV)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -lgcov")
endif()
if(FAKE_CODE)
set(FLAGS "${FLAGS} -DFAKE_CODE" CACHE STRING "Compilation flags" FORCE)
endif()
if(PROFILE)
message(STATUS "Profiling executables")
set(FLAGS "${FLAGS} -pg " CACHE STRING "Compilation flags" FORCE)

View File

@ -31,6 +31,7 @@
- [Debug and Diagnostic Support](Reference/Debug-And-Diagnostic-Support.md)
- [Routing Hints](Reference/Hint-Syntax.md)
- [MaxBinlogCheck](Reference/MaxBinlogCheck.md)
- [MaxScale REST API](REST-API/API.md)
## Tutorials
@ -106,6 +107,15 @@ Documentation for MaxScale protocol modules.
- [Change Data Capture (CDC) Protocol](Protocols/CDC.md)
- [Change Data Capture (CDC) Users](Protocols/CDC_users.md)
## Authenticators
A short description of the authentication module type can be found in the
[Authentication Modules](Authenticators/Authentication-Modules.md)
document.
- [MySQL Authenticator](Authenticators/MySQL-Authenticator.md)
- [GSSAPI Authenticator](Authenticators/GSSAPI-Authenticator.md)
## Utilities
- [RabbitMQ Consumer Client](Filters/RabbitMQ-Consumer-Client.md)

View File

@ -0,0 +1,70 @@
# Maxrows
## Overview
The maxrows filter restricts the number of rows that a SELECT,
a prepared statement or a stored procedure can return to the client application.
If a resultset from a backend server has more rows than the configured limit,
or if the resultset size exceeds the configured size,
an empty result will be sent to the client.
## Configuration
The maxrows filter is easy to configure and to add to any existing service.
```
[MaxRows]
type=filter
module=maxrows
[MaxRows Routing Service]
type=service
...
filters=maxrows
```
### Filter Parameters
The maxrows filter has no mandatory parameters.
Optional parameters are:
#### `max_resultset_rows`
Specifies the maximum number of rows a resultset can have in order to be returned
to the user.
If a resultset is larger than this, an empty result will be sent instead.
```
max_resultset_rows=1000
```
Zero or a negative value is interpreted as no limitation.
The default value is `-1`.
#### `max_resultset_size`
Specifies the maximum size a resultset can have, measured in kibibytes,
in order to be sent to the client. A resultset larger than this will
not be sent: an empty resultset will be sent instead.
```
max_resultset_size=128
```
The default value is 64.
#### `debug`
An integer value that controls the level of debug logging performed by the maxrows
filter. The value is a bitfield, with different bits enabling different kinds of
logging.
* ` 0` (`0b00000`) No logging is made.
* ` 1` (`0b00001`) A decision to handle data from the server is logged.
* ` 2` (`0b00010`) Reaching `max_resultset_rows` or `max_resultset_size` is logged.
Default is `0`. To log everything, give `debug` a value of `3`.
```
debug=2
```
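For reference, the parameters described above can be combined in a single filter definition. The following is an illustrative sketch only; the limits and the service the filter is attached to are placeholders, not values taken from this commit.
```
[MaxRows]
type=filter
module=maxrows
max_resultset_rows=1000
max_resultset_size=128
debug=3

[MaxRows Routing Service]
type=service
...
filters=maxrows
```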

View File

@ -30,8 +30,10 @@ The QLA filter accepts the following options.
|ignorecase|Use case-insensitive matching |
|case |Use case-sensitive matching |
|extended |Use extended regular expression syntax (ERE)|
To use multiple filter options, list them in a comma-separated list.
|session_file| Use session-specific file (default)|
|unified_file| Use one file for all sessions|
|flush_writes| Flush after every write|
To use multiple filter options, list them in a comma-separated list. If no file setting is given, the default will be used. Multiple file settings can be enabled simultaneously.
```
options=case,extended

View File

@ -378,6 +378,16 @@ Configure the directory where the executable files reside. All internal processe
execdir=/usr/local/bin/
```
#### `persistdir`
Configure the directory where persisted configurations are stored. When a new
server is created via MaxAdmin, it will be stored in this directory. Do not use
or modify the contents of this directory; use _/etc/maxscale.cnf.d/_ instead.
```
persistdir=/var/lib/maxscale/maxscale.cnf.d/
```
#### `language`
Set the folder where the errmsg.sys file is located in. MariaDB MaxScale will look for the errmsg.sys file installed with MariaDB MaxScale from this folder.

View File

@ -64,7 +64,16 @@ use_priority=true
## Interaction with Server Priorities
If the `use_priority` option is set and a server is configured with the `priority=<int>` parameter, galeramon will use that as the basis on which the master node is chosen. This requires the `disable_master_role_setting` to be undefined or disabled. The server with the lowest value in `priority` will be chosen as the master node when a replacement Galera node is promoted to a master server inside MaxScale.
If the `use_priority` option is set and a server is configured with the
`priority=<int>` parameter, galeramon will use that as the basis on which the
master node is chosen. This requires the `disable_master_role_setting` to be
undefined or disabled. The server with the lowest positive value in _priority_
will be chosen as the master node when a replacement Galera node is promoted to
a master server inside MaxScale.
Nodes with a non-positive value (_priority_ <= 0) will never be chosen as the
master. This allows you to mark some servers as permanent slaves by assigning a
non-positive value to _priority_.
Here is an example with two servers.
@ -86,8 +95,21 @@ type=server
address=192.168.122.103
port=3306
priority=2
[node-4]
type=server
address=192.168.122.104
port=3306
priority=0
```
In this example `node-1` is always used as the master if available. If `node-1` is not available, then the next node with the highest priority rank is used. In this case it would be `node-3`. If both `node-1` and `node-3` were down, then `node-2` would be used. Nodes without priority are considered as having the lowest priority rank and will be used only if all nodes with priority ranks are not available.
In this example `node-1` is always used as the master if available. If `node-1`
is not available, then the next node with the highest priority rank is used. In
this case it would be `node-3`. If both `node-1` and `node-3` were down, then
`node-2` would be used. Because `node-4` has a value of 0 in _priority_, it will
never be the master. Nodes without priority are considered as having the lowest
priority rank and will be used only if all nodes with priority ranks are not
available.
With priority ranks you can control the order in which MaxScale chooses the master node. This will allow for a controlled failure and replacement of nodes.
With priority ranks you can control the order in which MaxScale chooses the
master node. This will allow for a controlled failure and replacement of nodes.

View File

@ -0,0 +1,453 @@
# REST API design document
This document describes the version 1 of the MaxScale REST API.
## Table of Contents
- [HTTP Headers](#http-headers)
- [Request Headers](#request-headers)
- [Response Headers](#response-headers)
- [Response Codes](#response-codes)
- [2xx Success](#2xx-success)
- [3xx Redirection](#3xx-redirection)
- [4xx Client Error](#4xx-client-error)
- [5xx Server Error](#5xx-server-error)
- [Resources](#resources)
- [Common Request Parameter](#common-request-parameters)
## HTTP Headers
### Request Headers
REST makes use of the HTTP protocol in its aim to provide a natural way to
understand the workings of an API. The following request headers are understood
by this API.
#### Accept-Charset
Acceptable character sets.
#### Authorization
Credentials for authentication.
#### Content-Type
All PUT and POST requests must use the `Content-Type: application/json` media
type and the request body must be a valid JSON representation of a resource. All
PATCH requests must use the `Content-Type: application/json-patch` media type
and the request body must be a valid JSON Patch document which is applied to the
resource. Currently, only _add_, _remove_, _replace_ and _test_ operations are
supported.
Read the [JSON Patch](https://tools.ietf.org/html/draft-ietf-appsawg-json-patch-08)
draft for more details on how to use it with PATCH.
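As an illustrative sketch, a PATCH body using the supported operations could look like the following; the paths and values are placeholders rather than fields of any specific resource.
```
[
    { "op": "test", "path": "/parameters/rules/value", "value": "/etc/maxscale-rules" },
    { "op": "replace", "path": "/parameters/rules/value", "value": "/etc/new-rules" },
    { "op": "remove", "path": "/parameters/match" }
]
```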
#### Date
This header is required and should be in the RFC 1123 standard form, e.g. Mon,
18 Nov 2013 08:14:29 -0600. Please note that the date must be in English. It
will be checked by the API for being close to the current date and time.
#### Host
The address and port of the server.
#### If-Match
The request is performed only if the provided ETag value matches the one on the
server. This field should be used with PUT requests to prevent concurrent
updates to the same resource.
The value of this header must be a value from the `ETag` header retrieved from
the same resource at an earlier point in time.
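A minimal sketch of the intended flow; the resource and the ETag value are illustrative.
```
GET /servers/db-serv-1

Status: 200 OK
ETag: "42"

PUT /servers/db-serv-1
If-Match: "42"
```
If the resource was modified by someone else in between, the ETag no longer matches and the update would be expected to fail with a 412 Precondition Failed response.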
#### If-Modified-Since
If the content has not changed the server responds with a 304 status code. If
the content has changed the server responds with a 200 status code and the
requested resource.
The value of this header must be a date value in the
["HTTP-date"](https://www.ietf.org/rfc/rfc2822.txt) format.
#### If-None-Match
If the content has not changed the server responds with a 304 status code. If
the content has changed the server responds with a 200 status code and the
requested resource.
The value of this header must be a value from the `ETag` header retrieved from
the same resource at an earlier point in time.
#### If-Unmodified-Since
The request is performed only if the requested resource has not been modified
since the provided date.
The value of this header must be a date value in the
["HTTP-date"](https://www.ietf.org/rfc/rfc2822.txt) format.
#### X-HTTP-Method-Override
Some clients only support GET and PUT requests. By providing the string value of
the intended method in the `X-HTTP-Method-Override` header, a client can perform
a POST, PATCH or DELETE request with the PUT method
(e.g. `X-HTTP-Method-Override: PATCH`).
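For example, a client limited to GET and PUT could still issue a PATCH like this; the resource and patch body are illustrative.
```
PUT /filters/my-filter
X-HTTP-Method-Override: PATCH
Content-Type: application/json-patch

[
    { "op": "replace", "path": "/parameters/rules/value", "value": "/etc/new-rules" }
]
```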
_TODO: Add API version header?_
### Response Headers
#### Allow
All resources return the Allow header with the supported HTTP methods. For
example, the resource `/service` will always return the `Allow: GET, PATCH, PUT`
header.
#### Accept-Patch
All PATCH capable resources return the `Accept-Patch: application/json-patch`
header.
#### Date
Returns the RFC 1123 standard form date when the reply was sent. The date is in
English and it uses the server's local timezone.
#### ETag
An identifier for a specific version of a resource. The value of this header
changes whenever a resource is modified.
When the client sends the `If-Match` or `If-None-Match` header, the provided
value should be the value of the `ETag` header of an earlier GET.
#### Last-Modified
The date when the resource was last modified in "HTTP-date" format.
#### Location
If an out of date resource location is requested, an HTTP return code of 3XX with
the `Location` header is returned. The value of the header contains the new
location of the requested resource as a relative URI.
#### WWW-Authenticate
The requested authentication method. For example, `WWW-Authenticate: Basic`
would require basic HTTP authentication.
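As an illustration of basic HTTP authentication, a request could carry the credentials in the `Authorization` header; the username and password (`admin:mariadb`) below are placeholders, base64 encoded as required by the Basic scheme.
```
GET /maxscale
Authorization: Basic YWRtaW46bWFyaWFkYg==
```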
## Response Codes
Every HTTP response starts with a line with a return code which indicates the
outcome of the request. The API uses some of the standard HTTP values:
### 2xx Success
- 200 OK
- Successful HTTP requests, response has a body.
- 201 Created
- A new resource was created.
- 202 Accepted
- The request has been accepted for processing, but the processing has not
been completed.
- 204 No Content
- Successful HTTP requests, response has no body.
### 3xx Redirection
This class of status code indicates the client must take additional action to
complete the request.
- 301 Moved Permanently
- This and all future requests should be directed to the given URI.
- 302 Found
- The response to the request can be found under another URI using the same
method as in the original request.
- 303 See Other
- The response to the request can be found under another URI using a GET
method.
- 304 Not Modified
- Indicates that the resource has not been modified since the version
specified by the request headers If-Modified-Since or If-None-Match.
- 307 Temporary Redirect
- The request should be repeated with another URI but future requests should
use the original URI.
- 308 Permanent Redirect
- The request and all future requests should be repeated using another URI.
### 4xx Client Error
The 4xx class of status code is intended for cases in which the client seems to have erred. Except when
responding to a HEAD request, the body of the response contains a JSON
representation of the error in the following format.
```
{
"error": "Method not supported",
"description": "The `/service` resource does not support POST."
}
```
The _error_ field contains a short error description and the _description_ field
contains a more detailed version of the error message.
- 400 Bad Request
- The server cannot or will not process the request due to client error.
- 401 Unauthorized
- Authentication is required. The response includes a WWW-Authenticate header.
- 403 Forbidden
- The request was a valid request, but the client does not have the necessary
permissions for the resource.
- 404 Not Found
- The requested resource could not be found.
- 405 Method Not Allowed
- A request method is not supported for the requested resource.
- 406 Not Acceptable
- The requested resource is capable of generating only content not acceptable
according to the Accept headers sent in the request.
- 409 Conflict
- Indicates that the request could not be processed because of conflict in the
request, such as an edit conflict between multiple simultaneous updates.
- 411 Length Required
- The request did not specify the length of its content, which is required by
the requested resource.
- 412 Precondition Failed
- The server does not meet one of the preconditions that the requester put on
the request.
- 413 Payload Too Large
- The request is larger than the server is willing or able to process.
- 414 URI Too Long
- The URI provided was too long for the server to process.
- 415 Unsupported Media Type
- The request entity has a media type which the server or resource does not
support.
- 422 Unprocessable Entity
- The request was well-formed but was unable to be followed due to semantic
errors.
- 423 Locked
- The resource that is being accessed is locked.
- 428 Precondition Required
- The origin server requires the request to be conditional. This error code is
returned when none of the `Modified-Since` or `Match` type headers are used.
- 431 Request Header Fields Too Large
- The server is unwilling to process the request because either an individual
header field, or all the header fields collectively, are too large.
### 5xx Server Error
The server failed to fulfill an apparently valid request.
Response status codes beginning with the digit "5" indicate cases in which the
server is aware that it has encountered an error or is otherwise incapable of
performing the request. Except when responding to a HEAD request, the server
includes an entity containing an explanation of the error situation.
```
{
"error": "Log rotation failed",
"description": "Failed to rotate log files: 13, Permission denied"
}
```
The _error_ field contains a short error description and the _description_ field
contains a more detailed version of the error message.
- 500 Internal Server Error
- A generic error message, given when an unexpected condition was encountered
and no more specific message is suitable.
- 501 Not Implemented
- The server either does not recognize the request method, or it lacks the
ability to fulfill the request.
- 502 Bad Gateway
- The server was acting as a gateway or proxy and received an invalid response
from the upstream server.
- 503 Service Unavailable
- The server is currently unavailable (because it is overloaded or down for
maintenance). Generally, this is a temporary state.
- 504 Gateway Timeout
- The server was acting as a gateway or proxy and did not receive a timely
response from the upstream server.
- 505 HTTP Version Not Supported
- The server does not support the HTTP protocol version used in the request.
- 506 Variant Also Negotiates
- Transparent content negotiation for the request results in a circular
reference.
- 507 Insufficient Storage
- The server is unable to store the representation needed to complete the
request.
- 508 Loop Detected
- The server detected an infinite loop while processing the request (sent in
lieu of 208 Already Reported).
- 510 Not Extended
- Further extensions to the request are required for the server to fulfil it.
### Response Headers Reserved for Future Use
The following response headers are not currently in use. Future versions of the
API could return them.
- 206 Partial Content
- The server is delivering only part of the resource (byte serving) due to a
range header sent by the client.
- 300 Multiple Choices
- Indicates multiple options for the resource from which the client may choose
(via agent-driven content negotiation).
- 407 Proxy Authentication Required
- The client must first authenticate itself with the proxy.
- 408 Request Timeout
- The server timed out waiting for the request. According to HTTP
specifications: "The client did not produce a request within the time that
the server was prepared to wait. The client MAY repeat the request without
modifications at any later time."
- 410 Gone
- Indicates that the resource requested is no longer available and will not be
available again.
- 416 Range Not Satisfiable
- The client has asked for a portion of the file (byte serving), but the
server cannot supply that portion.
- 417 Expectation Failed
- The server cannot meet the requirements of the Expect request-header field.
- 421 Misdirected Request
- The request was directed at a server that is not able to produce a response.
- 424 Failed Dependency
- The request failed due to failure of a previous request.
- 426 Upgrade Required
- The client should switch to a different protocol such as TLS/1.0, given in
the Upgrade header field.
- 429 Too Many Requests
- The user has sent too many requests in a given amount of time. Intended for
use with rate-limiting schemes.
## Resources
The MaxScale REST API provides the following resources.
- [/maxscale](Resources-MaxScale.md)
- [/services](Resources-Service.md)
- [/servers](Resources-Server.md)
- [/filters](Resources-Filter.md)
- [/monitors](Resources-Monitor.md)
- [/sessions](Resources-Session.md)
- [/users](Resources-User.md)
## Common Request Parameters
Most of the resources that support GET also support the following
parameters. See the resource documentation for a list of supported request
parameters.
- `fields`
- A list of fields to return.
This allows the returned object to be filtered so that only needed
parts are returned. The value of this parameter is a comma separated
list of fields to return.
For example, the parameter `?fields=id,name` would return an object which
would only contain the _id_ and _name_ fields.
- `range`
- Return a subset of the object array.
The value of this parameter is the range of objects to return, given as
an inclusive range separated by a hyphen. If the size of the array is
less than the end of the range, only the objects between the requested
start of the range and the actual end of the array are returned. This
means that the returned range can be smaller than the requested one.
For example, the parameter `?range=10-20` would return objects 10
through 20 from the object array if the actual size of the original
array is greater than or equal to 20.
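Assuming both parameters are given as ordinary query parameters, they can also be combined in one request. An illustrative sketch:
```
GET /servers?fields=name,status&range=1-10
```
This would return the first ten server objects, each reduced to its _name_ and _status_ fields.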

View File

@ -0,0 +1,151 @@
# Filter Resource
A filter resource represents an instance of a filter inside MaxScale. Multiple
services can use the same filter and a single service can use multiple filters.
## Resource Operations
### Get a filter
Get a single filter. The _:name_ in the URI must be a valid filter name with all
whitespace replaced with hyphens. The filter names are case-insensitive.
```
GET /filters/:name
```
#### Response
```
Status: 200 OK
{
"name": "Query Logging Filter",
"module": "qlafilter",
"parameters": {
"filebase": {
"value": "/var/log/maxscale/qla/log.",
"configurable": false
},
"match": {
"value": "select.*from.*t1",
"configurable": true
}
},
"services": [
"/services/my-service",
"/services/my-second-service"
]
}
```
#### Supported Request Parameter
- `fields`
### Get all filters
Get all filters.
```
GET /filters
```
#### Response
```
Status: 200 OK
[
{
"name": "Query Logging Filter",
"module": "qlafilter",
"parameters": {
"filebase": {
"value": "/var/log/maxscale/qla/log.",
"configurable": false
},
"match": {
"value": "select.*from.*t1",
"configurable": true
}
},
"services": [
"/services/my-service",
"/services/my-second-service
]
},
{
"name": "DBFW Filter",
"module": "dbfwfilter",
"parameters": {
"rules": {
"value": "/etc/maxscale-rules",
"configurable": false
}
},
"services": [
"/services/my-second-service
]
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Update a filter
**Note**: The update mechanisms described here are provisional and most likely
will change in the future. This description is only for design purposes and
does not yet work.
Partially update a filter. The _:name_ in the URI must map to a filter name
and the request body must be a valid JSON Patch document which is applied to the
resource.
```
PATCH /filter/:name
```
### Modifiable Fields
|Field |Type |Description |
|------------|-------|---------------------------------|
|parameters |object |Module specific filter parameters|
```
[
{ "op": "replace", "path": "/parameters/rules/value", "value": "/etc/new-rules" },
{ "op": "add", "path": "/parameters/action/value", "value": "allow" }
]
```
#### Response
Response contains the modified resource.
```
Status: 200 OK
{
"name": "DBFW Filter",
"module": "dbfwfilter",
"parameters": {
"rules": {
"value": "/etc/new-rules",
"configurable": false
},
"action": {
"value": "allow",
"configurable": true
}
},
"services": [
"/services/my-second-service"
]
}
```

View File

@ -0,0 +1,216 @@
# MaxScale Resource
The MaxScale resource represents a MaxScale instance and it is the core on top
of which the modules are built.
## Resource Operations
## Get global information
Retrieve global information about a MaxScale instance. This includes various
file locations, configuration options and version information.
```
GET /maxscale
```
#### Response
```
Status: 200 OK
{
"config": "/etc/maxscale.cnf",
"cachedir": "/var/cache/maxscale/",
"datadir": "/var/lib/maxscale/"
"libdir": "/usr/lib64/maxscale/",
"piddir": "/var/run/maxscale/",
"execdir": "/usr/bin/",
"languagedir": "/var/lib/maxscale/",
"user": "maxscale",
"threads": 4,
"version": "2.1.0",
"commit": "12e7f17eb361e353f7ac413b8b4274badb41b559"
"started": "Wed, 31 Aug 2016 23:29:26 +0300"
}
```
#### Supported Request Parameter
- `fields`
## Get thread information
Get detailed information and statistics about the threads.
```
GET /maxscale/threads
```
#### Response
```
Status: 200 OK
{
"load_average": {
"historic": 1.05,
"current": 1.00,
"1min": 0.00,
"5min": 0.00,
"15min": 0.00
},
"threads": [
{
"id": 0,
"state": "processing",
"file_descriptors": 1,
"event": [
"in",
"out"
],
"run_time": 300
},
{
"id": 1,
"state": "polling",
"file_descriptors": 0,
"event": [],
"run_time": 0
}
]
}
```
#### Supported Request Parameter
- `fields`
## Get logging information
Get information about the current state of logging, enabled log files and the
location where the log files are stored.
```
GET /maxscale/logs
```
#### Response
```
Status: 200 OK
{
"logdir": "/var/log/maxscale/",
"maxlog": true,
"syslog": false,
"log_levels": {
"error": true,
"warning": true,
"notice": true,
"info": false,
"debug": false
},
"log_augmentation": {
"function": true
},
"log_throttling": {
"limit": 8,
"window": 2000,
"suppression": 10000
},
"last_flushed": "Wed, 31 Aug 2016 23:29:26 +0300"
}
```
#### Supported Request Parameter
- `fields`
## Flush and rotate log files
Flushes any pending messages to disk and reopens the log files. The body of the
message is ignored.
```
POST /maxscale/logs/flush
```
#### Response
```
Status: 204 No Content
```
## Get task schedule
Retrieve all pending tasks that are queued for execution.
```
GET /maxscale/tasks
```
#### Response
```
Status: 200 OK
[
{
"name": "Load Average",
"type": "repeated",
"frequency": 10,
"next_due": "Fri Sep 9 14:12:37 2016"
}
]
```
#### Supported Request Parameter
- `fields`
## Get loaded modules
Retrieve information about all loaded modules. This includes version, API and
maturity information.
```
GET /maxscale/modules
```
#### Response
```
Status: 200 OK
[
{
"name": "MySQLBackend",
"type": "Protocol",
"version": "V2.0.0",
"api_version": "1.1.0",
"maturity": "GA"
},
{
"name": "qlafilter",
"type": "Filter",
"version": "V1.1.1",
"api_version": "1.1.0",
"maturity": "GA"
},
{
"name": "readwritesplit",
"type": "Router",
"version": "V1.1.0",
"api_version": "1.0.0",
"maturity": "GA"
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
TODO: Add epoll statistics and rest of the supported methods.

View File

@ -0,0 +1,176 @@
# Monitor Resource
A monitor resource represents a monitor inside MaxScale that monitors one or
more servers.
## Resource Operations
### Get a monitor
Get a single monitor. The _:name_ in the URI must be a valid monitor name with
all whitespace replaced with hyphens. The monitor names are case-insensitive.
```
GET /monitors/:name
```
#### Response
```
Status: 200 OK
{
"name": "MySQL Monitor",
"module": "mysqlmon",
"state": "started",
"monitor_interval": 2500,
"connect_timeout": 5,
"read_timeout": 2,
"write_timeout": 3,
"servers": [
"/servers/db-serv-1",
"/servers/db-serv-2",
"/servers/db-serv-3"
]
}
```
#### Supported Request Parameter
- `fields`
### Get all monitors
Get all monitors.
```
GET /monitors
```
#### Response
```
Status: 200 OK
[
{
"name": "MySQL Monitor",
"module": "mysqlmon",
"state": "started",
"monitor_interval": 2500,
"connect_timeout": 5,
"read_timeout": 2,
"write_timeout": 3,
"servers": [
"/servers/db-serv-1",
"/servers/db-serv-2",
"/servers/db-serv-3"
]
},
{
"name": "Galera Monitor",
"module": "galeramon",
"state": "started",
"monitor_interval": 5000,
"connect_timeout": 10,
"read_timeout": 5,
"write_timeout": 5,
"servers": [
"/servers/db-galera-1",
"/servers/db-galera-2",
"/servers/db-galera-3"
]
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Stop a monitor
Stops a started monitor.
```
PUT /monitor/:name/stop
```
#### Response
```
Status: 204 No Content
```
### Start a monitor
Starts a stopped monitor.
```
PUT /monitor/:name/start
```
#### Response
```
Status: 204 No Content
```
### Update a monitor
**Note**: The update mechanisms described here are provisional and most likely
will change in the future. This description is only for design purposes and
does not yet work.
Partially update a monitor. The _:name_ in the URI must map to a monitor name
and the request body must be a valid JSON Patch document which is applied to the
resource.
```
PATCH /monitor/:name
```
### Modifiable Fields
The following values can be modified with the PATCH method.
|Field |Type |Description |
|-----------------|------------|---------------------------------------------------|
|servers |string array|Servers monitored by this monitor |
|monitor_interval |number |Monitoring interval in milliseconds |
|connect_timeout |number |Connection timeout in seconds |
|read_timeout |number |Read timeout in seconds |
|write_timeout |number |Write timeout in seconds |
```
[
{ "op": "remove", "path": "/servers/0" },
{ "op": "replace", "path": "/monitor_interval", "value": 2000 },
{ "op": "replace", "path": "/connect_timeout", "value": 2 },
{ "op": "replace", "path": "/read_timeout", "value": 2 },
{ "op": "replace", "path": "/write_timeout", "value": 2 }
]
```
#### Response
Response contains the modified resource.
```
Status: 200 OK
{
"name": "MySQL Monitor",
"module": "mysqlmon",
"servers": [
"/servers/db-serv-2",
"/servers/db-serv-3"
],
"state": "started",
"monitor_interval": 2000,
"connect_timeout": 2,
"read_timeout": 2,
"write_timeout": 2
}
```

View File

@ -0,0 +1,207 @@
# Server Resource
A server resource represents a backend database server.
## Resource Operations
### Get a server
Get a single server. The _:name_ in the URI must be a valid server name with all
whitespace replaced with hyphens. The server names are case-insensitive.
```
GET /servers/:name
```
#### Response
```
Status: 200 OK
{
"name": "db-serv-1",
"address": "192.168.121.58",
"port": 3306,
"protocol": "MySQLBackend",
"status": [
"master",
"running"
],
"parameters": {
"report_weight": 10,
"app_weight": 2
}
}
```
**Note**: The _parameters_ field contains all custom parameters for
servers, including the server weighting parameters.
#### Supported Request Parameter
- `fields`
### Get all servers
```
GET /servers
```
#### Response
```
Status: 200 OK
[
{
"name": "db-serv-1",
"address": "192.168.121.58",
"port": 3306,
"protocol": "MySQLBackend",
"status": [
"master",
"running"
],
"parameters": {
"report_weight": 10,
"app_weight": 2
}
},
{
"name": "db-serv-2",
"address": "192.168.121.175",
"port": 3306,
"status": [
"slave",
"running"
],
"protocol": "MySQLBackend",
"parameters": {
"app_weight": 6
}
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Update a server
**Note**: The update mechanisms described here are provisional and most likely
will change in the future. This description is only for design purposes and
does not yet work.
Partially update a server. The _:name_ in the URI must map to a server name with
all whitespace replaced with hyphens and the request body must be a valid JSON
Patch document which is applied to the resource.
```
PATCH /servers/:name
```
### Modifiable Fields
|Field |Type |Description |
|-----------|------------|-----------------------------------------------------------------------------|
|address |string |Server address |
|port |number |Server port |
|parameters |object |Server extra parameters |
|state |string array|Server state, array of `master`, `slave`, `synced`, `running` or `maintenance`. An empty array is interpreted as a server that is down.|
```
[
{ "op": "replace", "path": "/address", "value": "192.168.0.100" },
{ "op": "replace", "path": "/port", "value": 4006 },
{ "op": "add", "path": "/state/0", "value": "maintenance" },
{ "op": "replace", "path": "/parameters/report_weight", "value": 1 }
]
```
#### Response
Response contains the modified resource.
```
Status: 200 OK
{
"name": "db-serv-1",
"protocol": "MySQLBackend",
"address": "192.168.0.100",
"port": 4006,
"state": [
"maintenance",
"running"
],
"parameters": {
"report_weight": 1,
"app_weight": 2
}
}
```
### Get all connections to a server
Get all connections that are connected to a server.
```
GET /servers/:name/connections
```
#### Response
```
Status: 200 OK
[
{
"state": "DCB in the polling loop",
"role": "Backend Request Handler",
"server": "/servers/db-serv-01",
"service": "/services/my-service",
"statistics": {
"reads": 2197
"writes": 1562
"buffered_writes": 0
"high_water_events": 0
"low_water_events": 0
}
},
{
"state": "DCB in the polling loop",
"role": "Backend Request Handler",
"server": "/servers/db-serv-01",
"service": "/services/my-second-service"
"statistics": {
"reads": 0
"writes": 0
"buffered_writes": 0
"high_water_events": 0
"low_water_events": 0
}
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Close all connections to a server
Close all connections to a particular server. This will forcefully close all
backend connections.
```
DELETE /servers/:name/connections
```
#### Response
```
Status: 204 No Content
```

View File

@ -0,0 +1,272 @@
# Service Resource
A service resource represents a service inside MaxScale. A service is a
collection of network listeners, filters, a router and a set of backend servers.
## Resource Operations
### Get a service
Get a single service. The _:name_ in the URI must be a valid service name with
all whitespace replaced with hyphens. The service names are case-insensitive.
```
GET /services/:name
```
#### Response
```
Status: 200 OK
{
"name": "My Service",
"router": "readwritesplit",
"router_options": {
"disable_sescmd_history": "true"
},
"state": "started",
"total_connections": 10,
"current_connections": 2,
"started": "2016-08-29T12:52:31+03:00",
"filters": [
"/filters/Query-Logging-Filter"
],
"servers": [
"/servers/db-serv-1",
"/servers/db-serv-2",
"/servers/db-serv-3"
]
}
```
#### Supported Request Parameter
- `fields`
### Get all services
Get all services.
```
GET /services
```
#### Response
```
Status: 200 OK
[
{
"name": "My Service",
"router": "readwritesplit",
"router_options": {
"disable_sescmd_history": "true"
},
"state": "started",
"total_connections": 10,
"current_connections": 2,
"started": "2016-08-29T12:52:31+03:00",
"filters": [
"/filters/Query-Logging-Filter"
],
"servers": [
"/servers/db-serv-1",
"/servers/db-serv-2",
"/servers/db-serv-3"
]
},
{
"name": "My Second Service",
"router": "readconnroute",
"router_options": {
"type": "master"
},
"state": "started",
"total_connections": 10,
"current_connections": 2,
"started": "2016-08-29T12:52:31+03:00",
"servers": [
"/servers/db-serv-1",
"/servers/db-serv-2"
]
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Get service listeners
Get the listeners of a service. The _:name_ in the URI must be a valid service
name with all whitespace replaced with hyphens. The service names are
case-insensitive.
```
GET /services/:name/listeners
```
#### Response
```
Status: 200 OK
[
{
"name": "My Listener",
"protocol": "MySQLClient",
"address": "0.0.0.0",
"port": 4006
},
{
"name": "My SSL Listener",
"protocol": "MySQLClient",
"address": "127.0.0.1",
"port": 4006,
"ssl": "required",
"ssl_cert": "/home/markusjm/newcerts/server-cert.pem",
"ssl_key": "/home/markusjm/newcerts/server-key.pem",
"ssl_ca_cert": "/home/markusjm/newcerts/ca.pem"
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Update a service
**Note**: The update mechanisms described here are provisional and most likely
will change in the future. This description is only for design purposes and
does not yet work.
Partially update a service. The _:name_ in the URI must map to a service name
and the request body must be a valid JSON Patch document which is applied to the
resource.
```
PATCH /services/:name
```
### Modifiable Fields
|Field |Type |Description |
|--------------|------------|---------------------------------------------------|
|servers |string array|Servers used by this service, must be relative links to existing server resources|
|router_options|object |Router specific options|
|filters |string array|Service filters, configured in the same order they are declared in the array (`filters[0]` => first filter, `filters[1]` => second filter)|
|user |string |The username for the service user|
|password |string |The password for the service user|
|root_user |boolean |Allow root user to connect via this service|
|version_string|string |Custom version string given to connecting clients|
|weightby |string |Name of a server weighting parameter which is used for connection weighting|
|connection_timeout|number |Client idle timeout in seconds|
|max_connection|number |Maximum number of allowed connections|
|strip_db_esc|boolean |Strip escape characters from default database name|
```
[
{ "op": "replace", "path": "/servers", "value": ["/servers/db-serv-2","/servers/db-serv-3"] },
{ "op": "add", "path": "/router_options/master_failover_mode", "value": "fail_on_write" },
{ "op": "remove", "path": "/filters" }
]
```
#### Response
Response contains the modified resource.
```
Status: 200 OK
{
"name": "My Service",
"router": "readwritesplit",
"router_options": {
"disable_sescmd_history=false",
"master_failover_mode": "fail_on_write"
},
"state": "started",
"total_connections": 10,
"current_connections": 2,
"started": "2016-08-29T12:52:31+03:00",
"servers": [
"/servers/db-serv-2",
"/servers/db-serv-3"
]
}
```
### Stop a service
Stops a started service.
```
PUT /service/:name/stop
```
#### Response
```
Status: 204 No Content
```
### Start a service
Starts a stopped service.
```
PUT /service/:name/start
```
#### Response
```
Status: 204 No Content
```
### Get all sessions for a service
Get all sessions for a particular service.
```
GET /services/:name/sessions
```
#### Response
Relative links to all sessions for this service.
```
Status: 200 OK
[
"/sessions/1",
"/sessions/2"
]
```
#### Supported Request Parameter
- `range`
### Close all sessions for a service
Close all sessions for a particular service. This will forcefully close all
client connections and any backend connections they have made.
```
DELETE /services/:name/sessions
```
#### Response
```
Status: 204 No Content
```

View File

@ -0,0 +1,138 @@
# Session Resource
A session consists of a client connection, any number of related backend
connections, a router module session and possibly filter module sessions. Each
session is created on a service and a service can have multiple sessions.
## Resource Operations
### Get a session
Get a single session. _:id_ must be a valid session ID.
```
GET /sessions/:id
```
#### Response
```
Status: 200 OK
{
"id": 1,
"state": "Session ready for routing",
"user": "jdoe",
"address": "192.168.0.200",
"service": "/services/my-service",
"connected": "Wed Aug 31 03:03:12 2016",
"idle": 260
}
```
#### Supported Request Parameter
- `fields`
### Get all sessions
Get all sessions.
```
GET /sessions
```
#### Response
```
Status: 200 OK
[
{
"id": 1,
"state": "Session ready for routing",
"user": "jdoe",
"address": "192.168.0.200",
"service": "/services/My-Service",
"connected": "Wed Aug 31 03:03:12 2016",
"idle": 260
},
{
"id": 2,
"state": "Session ready for routing",
"user": "dba",
"address": "192.168.0.201",
"service": "/services/My-Service",
"connected": "Wed Aug 31 03:10:00 2016",
"idle": 1
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Get all connections created by a session
Get all backend connections created by a session. _:id_ must be a valid session ID.
```
GET /sessions/:id/connections
```
#### Response
```
Status: 200 OK
[
{
"state": "DCB in the polling loop",
"role": "Backend Request Handler",
"server": "/servers/db-serv-01",
"service": "/services/my-service",
"statistics": {
"reads": 2197
"writes": 1562
"buffered_writes": 0
"high_water_events": 0
"low_water_events": 0
}
},
{
"state": "DCB in the polling loop",
"role": "Backend Request Handler",
"server": "/servers/db-serv-02",
"service": "/services/my-service",
"statistics": {
"reads": 0
"writes": 0
"buffered_writes": 0
"high_water_events": 0
"low_water_events": 0
}
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Close a session
Close a session. This will forcefully close the client connection and any
backend connections.
```
DELETE /sessions/:id
```
#### Response
```
Status: 204 No Content
```

View File

@ -0,0 +1,81 @@
# Admin User Resource
Admin users represent administrative users that are able to query and change
MaxScale's configuration.
## Resource Operations
### Get all users
Get all administrative users.
```
GET /users
```
#### Response
```
Status: 200 OK
[
{
"name": "jdoe"
},
{
"name": "dba"
},
{
"name": "admin"
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Create a user
Create a new administrative user.
```
PUT /users
```
### Modifiable Fields
All of the following fields need to be defined in the request body.
|Field |Type |Description |
|---------|------|-------------------------|
|name |string|Username, consisting of alphanumeric characters|
|password |string|Password for the new user|
```
{
"name": "foo",
"password": "bar"
}
```
#### Response
```
Status: 204 No Content
```
### Delete a user
Delete a user. The _:name_ part of the URI must be a valid user name. The user
names are case-insensitive.
```
DELETE /users/:name
```
#### Response
```
Status: 204 No Content
```

View File

@ -57,6 +57,18 @@ by default, is configured using the new global configuration entry `log_throttli
For more information about this configuration entry, please see
[Global Settings](../Getting-Started/Configuration-Guide.md#global-settings).
### Persistent Connections
Starting with the 2.1 version of MariaDB MaxScale, when a MySQL protocol
persistent connection is taken from the persistent connection pool, the
state of the MySQL session will be reset when the connection is used
for the first time. This allows persistent connections to be used with no
functional limitations and makes them behave like normal MySQL
connections.
For more information about persistent connections, please read the
[Administration Tutorial](../Tutorials/Administration-Tutorial.md).
### User data cache
The user data cache stores the cached credentials that are used by some router
@ -75,6 +87,23 @@ removed if they are no longer used by older versions of MaxScale.
## New Features
### Dynamic server configuration
MaxScale can now change the servers of a service or a monitor at run-time. New
servers can also be created and they will be persisted even after a restart. The
following new commands were added to maxadmin; see the output of `maxadmin help
<command>` for more details.
- `create server`: Creates a new server
- `destroy server`: Destroys a created server
- `add server`: Adds a server to a service or a monitor
- `remove server`: Removes a server from a service or a monitor
- `alter server`: Alter server configuration
- `alter monitor`: Alter monitor configuration
With these new features, you can start MaxScale without the servers and define
them later.
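As an illustrative sketch only (the server, service and monitor names are placeholders and the exact argument order should be checked with `maxadmin help <command>`), a new server could be created and taken into use like this:
```
maxadmin create server db-serv-4 192.168.0.104 3306
maxadmin add server db-serv-4 my-service my-monitor
maxadmin alter server db-serv-4 port=3307
```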
### Amazon RDS Aurora monitor
The new [Aurora Monitor](../Monitors/Aurora-Monitor.md) module allows monitoring

View File

@ -106,13 +106,22 @@ than `persistmaxtime` seconds. It will also be discarded if it has been disconnected
by the back end server. Connections will be selected that match the user name and
protocol for the new request.
**Please note** that because persistent connections have previously been in use, they
may give a different environment from a fresh connection. For example, if the
previous use of the connection issued "use mydatabase" then this setting will be
carried over into the reuse of the same connection. For many applications this will
not be noticeable, since each request will assume that nothing has been set and
will issue fresh requests such as "use" to establish the desired configuration. In
exceptional cases this feature could be a problem.
Starting with the 2.1 version of MaxScale, when a MySQL protocol connection is
taken from the pool the backend protocol module resets the session state. This
allows persistent connections to be used with no functional limitations.
The session state is reset when the first outgoing network transmission is
done. This _lazy initialization_ of the persistent connections allows
MaxScale to take multiple new connections into use but only initialize the
ones that it actually needs.
**Please note** that in versions before 2.1 the persistent connections may give
a different environment when compared to a fresh connection. For example, if the
previous use of the connection issued a "USE mydatabase;" statement then this
setting will be carried over into the reuse of the same connection. For many
applications this will not be noticeable, since each request will assume that
nothing has been set and will issue fresh requests such as "USE" to establish
the desired configuration. In exceptional cases this feature could be a problem.
It is possible to have pools for as many servers as you wish, with configuration
values in each server section.
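A sketch of such a server section is shown below. The `persistpoolmax` parameter name and the values are assumptions used for illustration; check the server section documentation for the exact parameter names.
```
[db-serv-1]
type=server
address=192.168.0.100
port=3306
protocol=MySQLBackend
persistpoolmax=10
persistmaxtime=3600
```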

View File

@ -91,12 +91,11 @@ prompt(EditLine *el __attribute__((__unused__)))
}
#endif
static struct option long_options[] =
{
{"host", required_argument, 0, 'h'},
{"user", required_argument, 0, 'u'},
{"password", required_argument, 0, 'p'},
{"password", optional_argument, 0, 'p'},
{"port", required_argument, 0, 'P'},
{"socket", required_argument, 0, 'S'},
{"version", no_argument, 0, 'v'},
@ -108,6 +107,7 @@ static struct option long_options[] =
#define MAXADMIN_DEFAULT_HOST "localhost"
#define MAXADMIN_DEFAULT_PORT "6603"
#define MAXADMIN_DEFAULT_USER "admin"
#define MAXADMIN_BUFFER_SIZE 2048
/**
* The main for the maxadmin client
@ -125,7 +125,7 @@ main(int argc, char **argv)
History *hist;
HistEvent ev;
#else
char buf[1024];
char buf[MAXADMIN_BUFFER_SIZE];
#endif
char *hostname = NULL;
char *port = NULL;
@ -141,7 +141,7 @@ main(int argc, char **argv)
int option_index = 0;
char c;
while ((c = getopt_long(argc, argv, "h:p:P:u:S:v?e",
while ((c = getopt_long(argc, argv, "h:p::P:u:S:v?e",
long_options, &option_index)) >= 0)
{
switch (c)
@ -153,8 +153,12 @@ main(int argc, char **argv)
case 'p':
use_inet_socket = true;
passwd = strdup(optarg);
memset(optarg, '\0', strlen(optarg));
// If password was not given, ask for it later
if (optarg != NULL)
{
passwd = strdup(optarg);
memset(optarg, '\0', strlen(optarg));
}
break;
case 'P':

View File

@ -20,6 +20,7 @@ set(DEFAULT_CACHE_SUBPATH "cache/maxscale" CACHE PATH "Default cache subpath")
set(DEFAULT_LANG_SUBPATH "lib/maxscale" CACHE PATH "Default language file subpath")
set(DEFAULT_EXEC_SUBPATH "${MAXSCALE_BINDIR}" CACHE PATH "Default executable subpath")
set(DEFAULT_CONFIG_SUBPATH "etc" CACHE PATH "Default configuration subpath")
set(DEFAULT_CONFIG_PERSIST_SUBPATH "maxscale.cnf.d" CACHE PATH "Default persisted configuration subpath")
set(DEFAULT_PIDDIR ${MAXSCALE_VARDIR}/${DEFAULT_PID_SUBPATH} CACHE PATH "Default PID file directory")
set(DEFAULT_LOGDIR ${MAXSCALE_VARDIR}/${DEFAULT_LOG_SUBPATH} CACHE PATH "Default log directory")
@ -29,6 +30,7 @@ set(DEFAULT_CACHEDIR ${MAXSCALE_VARDIR}/${DEFAULT_CACHE_SUBPATH} CACHE PATH "Def
set(DEFAULT_LANGDIR ${MAXSCALE_VARDIR}/${DEFAULT_LANG_SUBPATH} CACHE PATH "Default language file directory")
set(DEFAULT_EXECDIR ${CMAKE_INSTALL_PREFIX}/${DEFAULT_EXEC_SUBPATH} CACHE PATH "Default executable directory")
set(DEFAULT_CONFIGDIR /${DEFAULT_CONFIG_SUBPATH} CACHE PATH "Default configuration directory")
set(DEFAULT_CONFIG_PERSISTDIR ${DEFAULT_DATADIR}/${DEFAULT_CONFIG_PERSIST_SUBPATH} CACHE PATH "Default persisted configuration directory")
# Massage TARGET_COMPONENT into a list
if (TARGET_COMPONENT)

View File

@ -29,7 +29,42 @@
MXS_BEGIN_DECLS
int atomic_add(int *variable, int value);
/**
* Implementation of an atomic add operation for the GCC environment, or the
* X86 processor. If we are working within GNU C then we can use the GCC
* atomic add built in function, which is portable across platforms that
* implement GCC. Otherwise, this function currently supports only X86
* architecture (without further development).
*
* Adds a value to the contents of a location pointed to by the first parameter.
* The add operation is atomic and the return value is the value stored in the
* location prior to the operation. The number that is added may be signed,
* therefore atomic_subtract is merely an atomic add with a negative value.
*
* @param variable Pointer to the variable to add to
* @param value Value to be added
* @return The value of variable before the add occurred
*/
int atomic_add(int *variable, int value);
/**
* @brief Impose a full memory barrier
*
* A full memory barrier guarantees that all store and load operations complete
* before the function is called.
*
* Currently, only the GNUC __sync_synchronize() is used. C11 introduces
* standard functions for atomic memory operations and should be taken into use.
*
* @see https://www.kernel.org/doc/Documentation/memory-barriers.txt
*/
static inline void atomic_synchronize()
{
#ifdef __GNUC__
__sync_synchronize(); /* Memory barrier. */
#else
#error "No GNUC atomics available."
#endif
}
MXS_END_DECLS

View File

@ -276,6 +276,7 @@ typedef struct dcb
bool ssl_write_want_read; /*< Flag */
bool ssl_write_want_write; /*< Flag */
int dcb_port; /**< port of target server */
bool was_persistent; /**< Whether this DCB was in the persistent pool */
skygw_chk_t dcb_chk_tail;
} DCB;
@ -301,15 +302,6 @@ typedef enum
DCB_USAGE_ALL
} DCB_USAGE;
#if defined(FAKE_CODE)
extern unsigned char dcb_fake_write_errno[10240];
extern __int32_t dcb_fake_write_ev[10240];
extern bool fail_next_backend_fd;
extern bool fail_next_client_fd;
extern int fail_next_accept;
extern int fail_accept_errno;
#endif /* FAKE_CODE */
/* A few useful macros */
#define DCB_SESSION(x) (x)->session
#define DCB_PROTOCOL(x, type) (type *)((x)->protocol)

View File

@ -561,8 +561,4 @@ typedef enum skygw_chk_t
}
#if defined(FAKE_CODE)
static bool conn_open[10240];
#endif /* FAKE_CODE */
MXS_END_DECLS

View File

@ -83,6 +83,7 @@ typedef struct filter_object
int (*clientReply)(FILTER *instance, void *fsession, GWBUF *queue);
void (*diagnostics)(FILTER *instance, void *fsession, DCB *dcb);
uint64_t (*getCapabilities)(void);
void (*destroyInstance)(FILTER *instance);
} FILTER_OBJECT;
/**

View File

@ -32,10 +32,12 @@ MXS_BEGIN_DECLS
#define MXS_DEFAULT_LANG_SUBPATH "@DEFAULT_LANG_SUBPATH@"
#define MXS_DEFAULT_EXEC_SUBPATH "@DEFAULT_EXEC_SUBPATH@"
#define MXS_DEFAULT_CONFIG_SUBPATH "@DEFAULT_CONFIG_SUBPATH@"
#define MXS_DEFAULT_CONFIG_PERSIST_SUBPATH "@DEFAULT_CONFIG_PERSIST_SUBPATH@"
/** Default file locations, configured by CMake */
static const char* default_cnf_fname = "maxscale.cnf";
static const char* default_configdir = "@DEFAULT_CONFIGDIR@";
/*< This should be changed to just /run eventually,
* the /var/run folder is an old standard and the newer FSH 3.0
* uses /run for PID files.*/
@ -46,8 +48,10 @@ static const char* default_libdir = "@DEFAULT_LIBDIR@";
static const char* default_cachedir = "@DEFAULT_CACHEDIR@";
static const char* default_langdir = "@DEFAULT_LANGDIR@";
static const char* default_execdir = "@DEFAULT_EXECDIR@";
static const char* default_config_persistdir = "@DEFAULT_CONFIG_PERSISTDIR@";
static char* configdir = NULL;
static char* configdir = NULL; /*< Where the config file is found e.g. /etc/ */
static char* config_persistdir = NULL;/*< Persisted configs e.g. /var/lib/maxscale.cnf.d/ */
static char* logdir = NULL;
static char* libdir = NULL;
static char* cachedir = NULL;
@ -62,6 +66,7 @@ void set_datadir(char* param);
void set_process_datadir(char* param);
void set_cachedir(char* param);
void set_configdir(char* param);
void set_config_persistdir(char* param);
void set_logdir(char* param);
void set_langdir(char* param);
void set_piddir(char* param);
@ -71,6 +76,7 @@ char* get_datadir();
char* get_process_datadir();
char* get_cachedir();
char* get_configdir();
char* get_config_persistdir();
char* get_piddir();
char* get_logdir();
char* get_langdir();

View File

@ -51,11 +51,34 @@ typedef struct hktask
struct hktask *next; /*< Next task in the list */
} HKTASK;
extern void hkinit();
/**
* Initialises the housekeeper mechanism.
*
* A call to any of the other housekeeper functions can be made only if
* this function returns successfully.
*
* @return True if the housekeeper mechanism was initialized, false otherwise.
*/
extern bool hkinit();
/**
* Shuts down the housekeeper mechanism.
*
* Should be called @b only if @c hkinit() returned successfully.
*
* @see hkinit hkfinish
*/
extern void hkshutdown();
/**
* Waits for the housekeeper thread to finish. Should be called only after
* hkshutdown() has been called.
*/
extern void hkfinish();
extern int hktask_add(const char *name, void (*task)(void *), void *data, int frequency);
extern int hktask_oneshot(const char *name, void (*task)(void *), void *data, int when);
extern int hktask_remove(const char *name);
extern void hkshutdown();
extern void hkshow_tasks(DCB *pdcb);
MXS_END_DECLS

View File

@ -44,4 +44,14 @@ void maxscale_reset_starttime(void);
time_t maxscale_started(void);
int maxscale_uptime(void);
/**
* Initiate shutdown of MaxScale.
*
* This function informs all threads that they should stop the
* processing and exit.
*
* @return How many times maxscale_shutdown() has been called.
*/
int maxscale_shutdown(void);
MXS_END_DECLS

View File

@ -137,6 +137,9 @@ typedef enum
#define MONITOR_INTERVAL 10000 // in milliseconds
#define MONITOR_DEFAULT_ID 1UL // unsigned long value
#define MAX_MONITOR_USER_LEN 512
#define MAX_MONITOR_PASSWORD_LEN 512
/*
* Create declarations of the enum for monitor events and also the array of
* structs containing the matching names. The data is taken from def_monitor_event.h
@ -177,8 +180,8 @@ typedef struct monitor_servers
struct monitor
{
char *name; /**< The name of the monitor module */
char *user; /*< Monitor username */
char *password; /*< Monitor password */
char user[MAX_MONITOR_USER_LEN]; /*< Monitor username */
char password[MAX_MONITOR_PASSWORD_LEN]; /*< Monitor password */
SPINLOCK lock;
CONFIG_PARAMETER* parameters; /*< configuration parameters */
MONITOR_SERVERS* databases; /*< List of databases the monitor monitors */
@ -201,7 +204,8 @@ struct monitor
extern MONITOR *monitor_alloc(char *, char *);
extern void monitor_free(MONITOR *);
extern MONITOR *monitor_find(char *);
extern void monitorAddServer(MONITOR *, SERVER *);
extern void monitorAddServer(MONITOR *mon, SERVER *server);
extern void monitorRemoveServer(MONITOR *mon, SERVER *server);
extern void monitorAddUser(MONITOR *, char *, char *);
extern void monitorAddParameters(MONITOR *monitor, CONFIG_PARAMETER *params);
extern void monitorStop(MONITOR *);
@ -229,4 +233,11 @@ connect_result_t mon_connect_to_db(MONITOR* mon, MONITOR_SERVERS *database);
void mon_log_connect_error(MONITOR_SERVERS* database, connect_result_t rval);
void mon_log_state_change(MONITOR_SERVERS *ptr);
/**
* Check if a monitor uses @c server
* @param server Server that is queried
* @return True if server is used by at least one monitor
*/
bool monitor_server_in_use(const SERVER *server);
MXS_END_DECLS

View File

@ -262,28 +262,25 @@ typedef struct server_command_st
typedef struct
{
#if defined(SS_DEBUG)
skygw_chk_t protocol_chk_top;
skygw_chk_t protocol_chk_top;
#endif
int fd; /*< The socket descriptor */
struct dcb *owner_dcb; /*< The DCB of the socket
* we are running on */
SPINLOCK protocol_lock;
mysql_server_cmd_t current_command; /**< Current command being executed */
server_command_t protocol_command; /*< session command list */
server_command_t* protocol_cmd_history; /*< session command history */
mxs_auth_state_t protocol_auth_state; /*< Authentication status */
mysql_protocol_state_t protocol_state; /*< Protocol struct status */
uint8_t scramble[MYSQL_SCRAMBLE_LEN]; /*< server scramble,
* created or received */
uint32_t server_capabilities; /*< server capabilities,
* created or received */
uint32_t client_capabilities; /*< client capabilities,
* created or received */
unsigned long tid; /*< MySQL Thread ID, in
* handshake */
unsigned int charset; /*< MySQL character set at connect time */
int fd; /*< The socket descriptor */
struct dcb* owner_dcb; /*< The DCB of the socket we are running on */
SPINLOCK protocol_lock; /*< Protocol lock */
mysql_server_cmd_t current_command; /*< Current command being executed */
server_command_t protocol_command; /*< session command list */
server_command_t* protocol_cmd_history; /*< session command history */
mxs_auth_state_t protocol_auth_state; /*< Authentication status */
mysql_protocol_state_t protocol_state; /*< Protocol struct status */
uint8_t scramble[MYSQL_SCRAMBLE_LEN]; /*< server scramble, created or received */
uint32_t server_capabilities; /*< server capabilities, created or received */
uint32_t client_capabilities; /*< client capabilities, created or received */
unsigned long tid; /*< MySQL Thread ID, in handshake */
unsigned int charset; /*< MySQL character set at connect time */
bool ignore_reply; /*< If the reply should be discarded */
GWBUF* stored_query; /*< Temporarily stored queries */
#if defined(SS_DEBUG)
skygw_chk_t protocol_chk_tail;
skygw_chk_t protocol_chk_tail;
#endif
} MySQLProtocol;

View File

@ -86,6 +86,38 @@ typedef enum qc_parse_result
QC_QUERY_PARSED = 3 /*< The query was fully parsed; completely classified. */
} qc_parse_result_t;
/**
* qc_field_usage_t defines where a particular field appears.
*
* QC_USED_IN_SELECT : The field appears on the left side of FROM in a top-level SELECT statement.
* QC_USED_IN_SUBSELECT: The field appears on the left side of FROM in a sub-select SELECT statement.
* QC_USED_IN_WHERE : The field appears in a WHERE clause.
* QC_USED_IN_SET : The field appears in the SET clause of an UPDATE statement.
* QC_USED_IN_GROUP_BY : The field appears in a GROUP BY clause.
*
* Note that multiple bits may be set at the same time. For instance, for a statement like
* "SELECT fld FROM tbl WHERE fld = 1 GROUP BY fld", the bits QC_USED_IN_SELECT, QC_USED_IN_WHERE
* and QC_USED_IN_GROUP_BY will be set.
*/
typedef enum qc_field_usage
{
QC_USED_IN_SELECT = 0x01, /*< SELECT fld FROM... */
QC_USED_IN_SUBSELECT = 0x02, /*< SELECT 1 FROM ... SELECT fld ... */
QC_USED_IN_WHERE = 0x04, /*< SELECT ... FROM ... WHERE fld = ... */
QC_USED_IN_SET = 0x08, /*< UPDATE ... SET fld = ... */
QC_USED_IN_GROUP_BY = 0x10, /*< ... GROUP BY fld */
} qc_field_usage_t;
/**
* QC_FIELD_INFO contains information about a field used in a statement.
*/
typedef struct qc_field_info
{
char* database; /** Present if the field is of the form "a.b.c", NULL otherwise. */
char* table; /** Present if the field is of the form "a.b", NULL otherwise. */
char* column; /** Always present. */
uint32_t usage; /** Bitfield denoting where the column appears. */
} QC_FIELD_INFO;
/**
* QUERY_CLASSIFIER defines the object a query classifier plugin must
@ -113,9 +145,10 @@ typedef struct query_classifier
char** (*qc_get_table_names)(GWBUF* stmt, int* tblsize, bool fullnames);
char* (*qc_get_canonical)(GWBUF* stmt);
bool (*qc_query_has_clause)(GWBUF* stmt);
char* (*qc_get_affected_fields)(GWBUF* stmt);
char** (*qc_get_database_names)(GWBUF* stmt, int* size);
char* (*qc_get_prepare_name)(GWBUF* stmt);
qc_query_op_t (*qc_get_prepare_operation)(GWBUF* stmt);
void (*qc_get_field_info)(GWBUF* stmt, const QC_FIELD_INFO** infos, size_t* n_infos);
} QUERY_CLASSIFIER;
/**
@ -212,15 +245,38 @@ void qc_thread_end(void);
qc_parse_result_t qc_parse(GWBUF* stmt);
/**
* Returns the fields the statement affects, as a string of names separated
* by spaces. Note that the fields do not contain any table information.
* Convert a qc_field_usage_t enum to corresponding string.
*
* @param stmt A buffer containing a COM_QUERY packet.
* @param usage The value to be converted
*
* @return A string containing the fields or NULL if a memory allocation
* failure occurs. The string must be freed by the caller.
* @return The corresponding string. Must @b not be freed.
*/
char* qc_get_affected_fields(GWBUF* stmt);
const char* qc_field_usage_to_string(qc_field_usage_t usage);
/**
* Convert a mask of qc_field_usage_t enum values to corresponding string.
*
* @param usage_mask Mask of qc_field_usage_t values.
*
* @return The corresponding string, or NULL if memory allocation fails.
* @b Must be freed by the caller.
*/
char* qc_field_usage_mask_to_string(uint32_t usage_mask);
/**
* Returns information about affected fields.
*
* @param stmt A buffer containing a COM_QUERY packet.
* @param infos Pointer to pointer that after the call will point to an
* array of QC_FIELD_INFO:s.
* @param n_infos Pointer to size_t variable where the number of items
* in @c infos will be returned.
*
* @note The returned array belongs to the GWBUF and remains valid for as
* long as the GWBUF is valid. If the data is needed for longer than
* that, it must be copied.
*/
void qc_get_field_info(GWBUF* stmt, const QC_FIELD_INFO** infos, size_t* n_infos);
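A minimal caller-side sketch of this API (the include paths below are assumptions; `stmt` is expected to contain a COM_QUERY packet):

```
/* Sketch: print every field the classifier collected for a statement. */
#include <maxscale/query_classifier.h>   /* assumed include path */
#include <stdio.h>
#include <stdlib.h>

static void dump_field_info(GWBUF* stmt)
{
    const QC_FIELD_INFO* infos;
    size_t n_infos;

    qc_get_field_info(stmt, &infos, &n_infos);

    for (size_t i = 0; i < n_infos; ++i)
    {
        /* database and table may be NULL; column is always present. */
        char* usage = qc_field_usage_mask_to_string(infos[i].usage);

        printf("%s.%s.%s usage: %s\n",
               infos[i].database ? infos[i].database : "-",
               infos[i].table ? infos[i].table : "-",
               infos[i].column,
               usage ? usage : "-");

        free(usage); /* the usage mask string must be freed by the caller */
    }

    /* infos belongs to the GWBUF and must not be freed here. */
}
```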
/**
* Returns the statement, with literals replaced with question marks.
@ -285,6 +341,17 @@ qc_query_op_t qc_get_operation(GWBUF* stmt);
*/
char* qc_get_prepare_name(GWBUF* stmt);
/**
* Returns the operator of the prepared statement, if the statement
* is a PREPARE statement.
*
* @param stmt A buffer containing a COM_QUERY packet.
*
* @return The operator of the prepared statement, if the statement
* is a PREPARE statement; otherwise QUERY_OP_UNDEFINED.
*/
qc_query_op_t qc_get_prepare_operation(GWBUF* stmt);
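A small illustrative sketch of how a router could use this, assuming the query classifier header is included (the helper name is made up for the example):

```
/* Sketch: true if the statement being prepared by a PREPARE statement
 * is a SELECT, i.e. the prepared statement can be treated as a read. */
static bool prepared_statement_is_read(GWBUF* stmt)
{
    return qc_get_prepare_operation(stmt) == QUERY_OP_SELECT;
}
```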
/**
* Returns the tables accessed by the statement.
*
@ -361,7 +428,7 @@ const char* qc_op_to_string(qc_query_op_t op);
*/
static inline bool qc_query_is_type(uint32_t typemask, qc_query_type_t type)
{
return (typemask & type) == type;
return (typemask & (uint32_t)type) == (uint32_t)type;
}
/**
@ -396,12 +463,4 @@ const char* qc_type_to_string(qc_query_type_t type);
*/
char* qc_typemask_to_string(uint32_t typemask);
/**
* @deprecated
* Synonym for qc_query_is_type().
*
* @see qc_query_is_type
*/
#define QUERY_IS_TYPE(typemask, type) qc_query_is_type(typemask, type)
MXS_END_DECLS

View File

@ -24,6 +24,7 @@
* 16/07/2013 Massimiliano Pinto Added router commands values
* 22/10/2013 Massimiliano Pinto Added router errorReply entry point
* 27/10/2015 Martin Brampton Add RCAP_TYPE_NO_RSESSION
* 08/11/2016 Massimiliano Pinto Add destroyInstance() entry point
*
*/
@ -82,6 +83,7 @@ typedef struct router_object
error_action_t action,
bool* succp);
uint64_t (*getCapabilities)(void);
void (*destroyInstance)(ROUTER *instance);
} ROUTER_OBJECT;
/**

View File

@ -0,0 +1,22 @@
#pragma once
/*
* Copyright (c) 2016 MariaDB Corporation Ab
*
* Use of this software is governed by the Business Source License included
* in the LICENSE.TXT file and at www.mariadb.com/bsl.
*
* Change Date: 2019-07-01
*
* On the date above, in accordance with the Business Source License, use
* of this software will be governed by version 2 or later of the General
* Public License.
*/
/**
* @file semaphore.h Semaphores used by MaxScale.
*/
// As a minimal preparation for other environments than Linux, components
// include <maxscale/semaphore.h>, instead of including <semaphore.h>
// directly.
#include <semaphore.h>

View File

@ -48,6 +48,8 @@
MXS_BEGIN_DECLS
#define MAX_SERVER_NAME_LEN 1024
#define MAX_SERVER_MONUSER_LEN 512
#define MAX_SERVER_MONPW_LEN 512
#define MAX_NUM_SLAVES 128 /**< Maximum number of slaves under a single server*/
/**
@ -86,15 +88,16 @@ typedef struct server
#endif
SPINLOCK lock; /**< Common access lock */
char *unique_name; /**< Unique name for the server */
char *name; /**< Server name/IP address*/
char name[MAX_SERVER_NAME_LEN]; /**< Server name/IP address*/
unsigned short port; /**< Port to listen on */
char *protocol; /**< Protocol module to use */
char *authenticator; /**< Authenticator module name */
void *auth_instance; /**< Authenticator instance */
char *auth_options; /**< Authenticator options */
SSL_LISTENER *server_ssl; /**< SSL data structure for server, if any */
unsigned int status; /**< Status flag bitmap for the server */
char *monuser; /**< User name to use to monitor the db */
char *monpw; /**< Password to use to monitor the db */
char monuser[MAX_SERVER_MONUSER_LEN]; /**< User name to use to monitor the db */
char monpw[MAX_SERVER_MONPW_LEN]; /**< Password to use to monitor the db */
SERVER_STATS stats; /**< The server statistics */
struct server *next; /**< Next server */
struct server *nextdb; /**< Next server in list attached to a service */
@ -112,6 +115,7 @@ typedef struct server
long persistpoolmax; /**< Maximum size of persistent connections pool */
long persistmaxtime; /**< Maximum number of seconds connection can live */
int persistmax; /**< Maximum pool size actually achieved since startup */
bool is_active; /**< Server is active and has not been "destroyed" */
#if defined(SS_DEBUG)
skygw_chk_t server_chk_tail;
#endif
@ -136,6 +140,11 @@ typedef struct server
#define SERVER_STALE_SLAVE 0x2000 /**<< Slave status is possible even without a master */
#define SERVER_RELAY_MASTER 0x4000 /**<< Server is a relay master */
/**
* Is the server valid and active
*/
#define SERVER_IS_ACTIVE(server) (server->is_active)
/**
* Is the server running - the macro returns true if the server is marked as running
* regardless of it's state as a master or slave
@ -197,9 +206,57 @@ typedef struct server
(((server)->status & (SERVER_RUNNING|SERVER_MASTER|SERVER_SLAVE|SERVER_MAINT)) == \
(SERVER_RUNNING|SERVER_MASTER|SERVER_SLAVE))
extern SERVER *server_alloc(char *, char *, unsigned short, char*, char*);
/**
* @brief Allocate a new server
*
* This will create a new server that represents a backend server that services
* can use. This function will add the server to the running configuration but
* will not persist the changes.
*
* @param name Unique server name
* @param address The server address
* @param port The port to connect to
* @param protocol The protocol to use to connect to the server
* @param authenticator The server authenticator module
* @param auth_options Options for the authenticator module
* @return The newly created server or NULL if an error occurred
*/
extern SERVER* server_alloc(const char *name, const char *address, unsigned short port,
const char *protocol, const char *authenticator,
const char *auth_options);
/**
* @brief Create a new server
*
* This function creates a new, persistent server by first allocating a new
* server and then storing the resulting configuration file on disk. This
* function should be used only from administrative interface modules; internal
* modules should use server_alloc() instead.
*
* @param name Server name
* @param address Network address
* @param port Network port
* @param protocol Protocol module name
* @param authenticator Authenticator module name
* @param options Options for the authenticator module
* @return True on success, false if an error occurred
*/
extern bool server_create(const char *name, const char *address, const char *port,
const char *protocol, const char *authenticator,
const char *options);
/**
* @brief Destroy a server
*
* This removes any created server configuration files and marks the server as
* removed, provided that the server is not in use.
* @param server Server to destroy
* @return True if server was destroyed
*/
bool server_destroy(SERVER *server);
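A minimal sketch of the intended call sequence from an administrative module; the server name, address, module names and the NULL authenticator options below are illustrative assumptions:

```
/* Sketch: create a persisted server and later destroy it again. */
#include <maxscale/server.h>   /* assumed include path */

static bool create_and_destroy_example(void)
{
    if (!server_create("db-serv-1", "192.168.0.10", "3306",
                       "MySQLBackend", "MySQLBackendAuth", NULL))
    {
        return false; /* allocation or persisting the configuration failed */
    }

    SERVER* server = server_find_by_unique_name("db-serv-1");

    /* Destruction only marks the server removed if it is no longer in use. */
    return server && server_destroy(server);
}
```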
extern int server_free(SERVER *);
extern SERVER *server_find_by_unique_name(char *);
extern SERVER *server_find_by_unique_name(const char *name);
extern SERVER *server_find(char *, unsigned short);
extern void printServer(SERVER *);
extern void printAllServers();
@ -216,13 +273,14 @@ extern void server_transfer_status(SERVER *dest_server, SERVER *source_server);
extern void serverAddMonUser(SERVER *, char *, char *);
extern void serverAddParameter(SERVER *, char *, char *);
extern char *serverGetParameter(SERVER *, char *);
extern void server_update(SERVER *, char *, char *, char *);
extern void server_set_unique_name(SERVER *, char *);
extern void server_update_credentials(SERVER *, char *, char *);
extern DCB *server_get_persistent(SERVER *, char *, const char *);
extern void server_update_address(SERVER *, char *);
extern void server_update_port(SERVER *, unsigned short);
extern RESULTSET *serverGetList();
extern unsigned int server_map_status(char *str);
extern bool server_set_version_string(SERVER* server, const char* string);
extern bool server_is_ssl_parameter(const char *key);
extern void server_update_ssl(SERVER *server, const char *key, const char *value);
MXS_END_DECLS

View File

@ -97,10 +97,16 @@ typedef struct
typedef struct server_ref_t
{
struct server_ref_t *next;
SERVER* server;
struct server_ref_t *next; /**< Next server reference */
SERVER* server; /**< The actual server */
int weight; /**< Weight of this server */
int connections; /**< Number of connections created through this reference */
bool active; /**< Whether this reference is valid and in use*/
} SERVER_REF;
/** Macro to check whether a SERVER_REF is active */
#define SERVER_REF_IS_ACTIVE(ref) (ref->active && SERVER_IS_ACTIVE(ref->server))
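A sketch of the pattern the macro is intended for when walking a service's backend list (service locking is deliberately left out; the include path is assumed):

```
/* Sketch: count connections made through the still-active backends of a
 * service, skipping references removed at runtime or whose server has
 * been destroyed. */
#include <maxscale/service.h>   /* assumed include path */

static int count_active_connections(SERVICE* service)
{
    int total = 0;

    for (SERVER_REF* ref = service->dbref; ref; ref = ref->next)
    {
        if (SERVER_REF_IS_ACTIVE(ref))
        {
            total += ref->connections;
        }
    }

    return total;
}
```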
#define SERVICE_MAX_RETRY_INTERVAL 3600 /*< The maximum interval between service start retries */
/** Value of service timeout if timeout checks are disabled */
@ -144,6 +150,7 @@ typedef struct service
void *router_instance; /**< The router instance for this service */
char *version_string; /** Version string for this service's listeners */
SERVER_REF *dbref; /** server references */
int n_dbref; /** Number of server references */
SERVICE_USER credentials; /**< The credentials of the service user */
SPINLOCK spin; /**< The service spinlock */
SERVICE_STATS stats; /**< The service statistics */
@ -192,6 +199,7 @@ extern int serviceAddProtocol(SERVICE *service, char *name, char *protocol,
extern int serviceHasProtocol(SERVICE *service, const char *protocol,
const char* address, unsigned short port);
extern void serviceAddBackend(SERVICE *, SERVER *);
extern void serviceRemoveBackend(SERVICE *, const SERVER *);
extern int serviceHasBackend(SERVICE *, SERVER *);
extern void serviceAddRouterOption(SERVICE *, char *);
extern void serviceClearRouterOptions(SERVICE *);
@ -249,4 +257,11 @@ static inline uint64_t service_get_capabilities(const SERVICE *service)
return service->capabilities;
}
/**
* Check if a service uses @c server
* @param server Server that is queried
* @return True if server is used by at least one service
*/
bool service_server_in_use(const SERVER *server);
MXS_END_DECLS

View File

@ -61,16 +61,16 @@ typedef struct users
unsigned char cksum[SHA_DIGEST_LENGTH]; /**< The users' table checksum */
} USERS;
extern USERS *users_alloc(); /**< Allocate a users table */
extern void users_free(USERS *); /**< Free a users table */
extern int users_add(USERS *, char *, char *); /**< Add a user to the users table */
extern int users_delete(USERS *, char *); /**< Delete a user from the users table */
extern char *users_fetch(USERS *, char *); /**< Fetch the authentication data for a user */
extern int users_update(USERS *, char *, char *); /**< Change the password data for a user in
the users table */
extern int users_default_loadusers(SERV_LISTENER *port); /**< A generic implementation of the authenticator
* loadusers entry point */
extern void usersPrint(USERS *); /**< Print data about the users loaded */
extern void dcb_usersPrint(DCB *, USERS *); /**< Print data about the users loaded */
extern USERS *users_alloc(); /**< Allocate a users table */
extern void users_free(USERS *); /**< Free a users table */
extern int users_add(USERS *, const char *, const char *); /**< Add a user to the users table */
extern int users_delete(USERS *, const char *); /**< Delete a user from the users table */
extern const char *users_fetch(USERS *, const char *); /**< Fetch the authentication data for a user*/
extern int users_update(USERS *, const char *, const char *); /**< Change the password data for a user in
the users table */
extern int users_default_loadusers(SERV_LISTENER *port); /**< A generic implementation of the
authenticator loadusers entry point */
extern void usersPrint(const USERS *); /**< Print data about the users loaded */
extern void dcb_usersPrint(DCB *, const USERS *); /**< Print data about the users loaded */
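With the const-qualified signatures, string literals can now be passed without casts; a brief sketch (include path and user data are illustrative):

```
/* Sketch: allocate a users table, add an entry and fetch it back. */
#include <maxscale/users.h>   /* assumed include path */
#include <stdio.h>

static void users_example(void)
{
    USERS* users = users_alloc();

    if (users)
    {
        users_add(users, "maxuser", "maxpwd");

        const char* data = users_fetch(users, "maxuser");
        if (data)
        {
            printf("stored authentication data: %s\n", data);
        }

        users_free(users);
    }
}
```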
MXS_END_DECLS

View File

@ -45,11 +45,6 @@ bool qc_is_drop_table_query(GWBUF* querybuf)
return false;
}
char* qc_get_affected_fields(GWBUF* buf)
{
return NULL;
}
bool qc_query_has_clause(GWBUF* buf)
{
return false;
@ -66,6 +61,22 @@ qc_query_op_t qc_get_operation(GWBUF* querybuf)
return QUERY_OP_UNDEFINED;
}
char* qc_sqlite_get_prepare_name(GWBUF* query)
{
return NULL;
}
qc_query_op_t qc_sqlite_get_prepare_operation(GWBUF* query)
{
return QUERY_OP_UNDEFINED;
}
void qc_sqlite_get_field_info(GWBUF* query, const QC_FIELD_INFO** infos, size_t* n_infos)
{
*infos = NULL;
*n_infos = 0;
}
bool qc_init(const char* args)
{
return true;
@ -125,8 +136,10 @@ extern "C"
qc_get_table_names,
NULL,
qc_query_has_clause,
qc_get_affected_fields,
qc_get_database_names,
qc_get_prepare_name,
qc_get_prepare_operation,
qc_get_field_info,
};
QUERY_CLASSIFIER* GetModuleObject()

File diff suppressed because it is too large

View File

@ -60,9 +60,6 @@ typedef struct qc_sqlite_info
uint32_t types; // The types of the query.
qc_query_op_t operation; // The operation in question.
char* affected_fields; // The affected fields.
size_t affected_fields_len; // The used length of affected_fields.
size_t affected_fields_capacity; // The capacity of affected_fields.
bool is_real_query; // SELECT, UPDATE, INSERT, DELETE or a variation.
bool has_clause; // Has WHERE or HAVING.
char** table_names; // Array of table names used in the query.
@ -79,8 +76,12 @@ typedef struct qc_sqlite_info
int keyword_1; // The first encountered keyword.
int keyword_2; // The second encountered keyword.
char* prepare_name; // The name of a prepared statement.
size_t preparable_stmt_offset; // The start of the preparable statement.
qc_query_op_t prepare_operation; // The operation of a prepared statement.
char* preparable_stmt; // The preparable statement.
size_t preparable_stmt_length; // The length of the preparable statement.
QC_FIELD_INFO *field_infos; // Pointer to array of QC_FIELD_INFOs.
size_t field_infos_len; // The used entries in field_infos.
size_t field_infos_capacity; // The capacity of the field_infos array.
} QC_SQLITE_INFO;
typedef enum qc_log_level
@ -122,11 +123,11 @@ typedef enum qc_token_position
QC_TOKEN_RIGHT, // To the right, e.g: "b" in "a = b".
} qc_token_position_t;
static void append_affected_field(QC_SQLITE_INFO* info, const char* s);
static void buffer_object_free(void* data);
static char** copy_string_array(char** strings, int* pn);
static void enlarge_string_array(size_t n, size_t len, char*** ppzStrings, size_t* pCapacity);
static bool ensure_query_is_parsed(GWBUF* query);
static void free_field_infos(QC_FIELD_INFO* infos, size_t n_infos);
static void free_string_array(char** sa);
static QC_SQLITE_INFO* get_query_info(GWBUF* query);
static QC_SQLITE_INFO* info_alloc(void);
@ -139,17 +140,34 @@ static bool parse_query(GWBUF* query);
static void parse_query_string(const char* query, size_t len);
static bool query_is_parsed(GWBUF* query);
static bool should_exclude(const char* zName, const ExprList* pExclude);
static void update_affected_fields(QC_SQLITE_INFO* info,
int prev_token,
const Expr* pExpr,
qc_token_position_t pos,
const ExprList* pExclude);
static void update_affected_fields_from_exprlist(QC_SQLITE_INFO* info,
const ExprList* pEList, const ExprList* pExclude);
static void update_affected_fields_from_idlist(QC_SQLITE_INFO* info,
const IdList* pIds, const ExprList* pExclude);
static void update_affected_fields_from_select(QC_SQLITE_INFO* info,
const Select* pSelect, const ExprList* pExclude);
static void update_field_info(QC_SQLITE_INFO* info,
const char* database,
const char* table,
const char* column,
uint32_t usage,
const ExprList* pExclude);
static void update_field_infos_from_expr(QC_SQLITE_INFO* info,
const struct Expr* pExpr,
uint32_t usage,
const ExprList* pExclude);
static void update_field_infos(QC_SQLITE_INFO* info,
int prev_token,
const Expr* pExpr,
uint32_t usage,
qc_token_position_t pos,
const ExprList* pExclude);
static void update_field_infos_from_exprlist(QC_SQLITE_INFO* info,
const ExprList* pEList,
uint32_t usage,
const ExprList* pExclude);
static void update_field_infos_from_idlist(QC_SQLITE_INFO* info,
const IdList* pIds,
uint32_t usage,
const ExprList* pExclude);
static void update_field_infos_from_select(QC_SQLITE_INFO* info,
const Select* pSelect,
uint32_t usage,
const ExprList* pExclude);
static void update_database_names(QC_SQLITE_INFO* info, const char* name);
static void update_names(QC_SQLITE_INFO* info, const char* zDatabase, const char* zTable);
static void update_names_from_srclist(QC_SQLITE_INFO* info, const SrcList* pSrc);
@ -184,7 +202,7 @@ extern void exposed_sqlite3StartTable(Parse *pParse, /* Parser context */
int isView, /* True if this is a VIEW */
int isVirtual, /* True if this is a VIRTUAL table */
int noErr); /* Do nothing if table already exists */
extern void maxscaleCollectInfoFromSelect(Parse*, Select*);
extern void maxscaleCollectInfoFromSelect(Parse*, Select*, int);
/**
* Used for freeing a QC_SQLITE_INFO object added to a GWBUF.
@ -247,6 +265,21 @@ static bool ensure_query_is_parsed(GWBUF* query)
return parsed;
}
static void free_field_infos(QC_FIELD_INFO* infos, size_t n_infos)
{
if (infos)
{
for (int i = 0; i < n_infos; ++i)
{
MXS_FREE(infos[i].database);
MXS_FREE(infos[i].table);
MXS_FREE(infos[i].column);
}
MXS_FREE(infos);
}
}
static void free_string_array(char** sa)
{
if (sa)
@ -288,11 +321,13 @@ static QC_SQLITE_INFO* info_alloc(void)
static void info_finish(QC_SQLITE_INFO* info)
{
free(info->affected_fields);
free_string_array(info->table_names);
free_string_array(info->table_fullnames);
free(info->created_table_name);
free_string_array(info->database_names);
free(info->prepare_name);
free(info->preparable_stmt);
free_field_infos(info->field_infos, info->field_infos_len);
}
static void info_free(QC_SQLITE_INFO* info)
@ -312,9 +347,6 @@ static QC_SQLITE_INFO* info_init(QC_SQLITE_INFO* info)
info->types = QUERY_TYPE_UNKNOWN;
info->operation = QUERY_OP_UNDEFINED;
info->affected_fields = NULL;
info->affected_fields_len = 0;
info->affected_fields_capacity = 0;
info->is_real_query = false;
info->has_clause = false;
info->table_names = NULL;
@ -331,8 +363,12 @@ static QC_SQLITE_INFO* info_init(QC_SQLITE_INFO* info)
info->keyword_1 = 0; // Sqlite3 starts numbering tokens from 1, so 0 means
info->keyword_2 = 0; // that we have not seen a keyword.
info->prepare_name = NULL;
info->preparable_stmt_offset = 0;
info->prepare_operation = QUERY_OP_UNDEFINED;
info->preparable_stmt = NULL;
info->preparable_stmt_length = 0;
info->field_infos = NULL;
info->field_infos_len = 0;
info->field_infos_capacity = 0;
return info;
}
@ -465,7 +501,7 @@ static bool parse_query(GWBUF* query)
this_thread.info->query = NULL;
this_thread.info->query_len = 0;
if (info->types & QUERY_TYPE_PREPARE_NAMED_STMT)
if ((info->types & QUERY_TYPE_PREPARE_NAMED_STMT) && info->preparable_stmt)
{
QC_SQLITE_INFO* preparable_info = info_alloc();
@ -473,7 +509,7 @@ static bool parse_query(GWBUF* query)
{
this_thread.info = preparable_info;
const char *preparable_s = s + info->preparable_stmt_offset;
const char *preparable_s = info->preparable_stmt;
size_t preparable_len = info->preparable_stmt_length;
this_thread.info->query = preparable_s;
@ -482,9 +518,7 @@ static bool parse_query(GWBUF* query)
this_thread.info->query = NULL;
this_thread.info->query_len = 0;
// TODO: Perhaps the rest of the stuff should be
// TODO: copied as well.
info->operation = preparable_info->operation;
info->prepare_operation = preparable_info->operation;
info_free(preparable_info);
}
@ -626,42 +660,6 @@ static void log_invalid_data(GWBUF* query, const char* message)
}
}
static void append_affected_field(QC_SQLITE_INFO* info, const char* s)
{
size_t len = strlen(s);
size_t required_len = info->affected_fields_len + len + 1; // 1 for NULL
if (info->affected_fields_len != 0)
{
required_len += 1; // " " between fields
}
if (required_len > info->affected_fields_capacity)
{
if (info->affected_fields_capacity == 0)
{
info->affected_fields_capacity = 32;
}
while (required_len > info->affected_fields_capacity)
{
info->affected_fields_capacity *= 2;
}
info->affected_fields = MXS_REALLOC(info->affected_fields, info->affected_fields_capacity);
MXS_ABORT_IF_NULL(info->affected_fields);
}
if (info->affected_fields_len != 0)
{
strcpy(info->affected_fields + info->affected_fields_len, " ");
info->affected_fields_len += 1;
}
strcpy(info->affected_fields + info->affected_fields_len, s);
info->affected_fields_len += len;
}
static bool should_exclude(const char* zName, const ExprList* pExclude)
{
int i;
@ -679,54 +677,214 @@ static bool should_exclude(const char* zName, const ExprList* pExclude)
Expr* pExpr = item->pExpr;
if (pExpr->op == TK_DOT)
if (pExpr->op == TK_EQ)
{
// We end up here e.g. with "UPDATE t set t.col = 5 ..."
// So, we pick the left branch.
pExpr = pExpr->pLeft;
}
while (pExpr->op == TK_DOT)
{
pExpr = pExpr->pRight;
}
// We need to ensure that we do not report fields where there
// is only a difference in case. E.g.
// SELECT A FROM tbl WHERE a = "foo";
// Affected fields is "A" and not "A a".
if ((pExpr->op == TK_ID) && (strcasecmp(pExpr->u.zToken, zName) == 0))
if (pExpr->op == TK_ID)
{
break;
// We need to ensure that we do not report fields where there
// is only a difference in case. E.g.
// SELECT A FROM tbl WHERE a = "foo";
// Affected fields is "A" and not "A a".
if (strcasecmp(pExpr->u.zToken, zName) == 0)
{
break;
}
}
}
return i != pExclude->nExpr;
}
static void update_affected_fields(QC_SQLITE_INFO* info,
int prev_token,
const Expr* pExpr,
qc_token_position_t pos,
const ExprList* pExclude)
static void update_field_info(QC_SQLITE_INFO* info,
const char* database,
const char* table,
const char* column,
uint32_t usage,
const ExprList* pExclude)
{
ss_dassert(column);
QC_FIELD_INFO item = { (char*)database, (char*)table, (char*)column, usage };
int i;
for (i = 0; i < info->field_infos_len; ++i)
{
QC_FIELD_INFO* field_info = info->field_infos + i;
if (strcasecmp(item.column, field_info->column) == 0)
{
if (!item.table && !field_info->table)
{
ss_dassert(!item.database && !field_info->database);
break;
}
else if (item.table && field_info->table && (strcmp(item.table, field_info->table) == 0))
{
if (!item.database && !field_info->database)
{
break;
}
else if (item.database &&
field_info->database &&
(strcmp(item.database, field_info->database) == 0))
{
break;
}
}
}
}
QC_FIELD_INFO* field_infos = NULL;
if (i == info->field_infos_len) // If true, the field was not present already.
{
// If only a column is specified, but not a table or database and we
// have a list of expressions that should be excluded, we check if the column
// value is present in that list. This is in order to exclude the second "d" in
// a statement like "select a as d from x where d = 2".
if (!(column && !table && !database && pExclude && should_exclude(column, pExclude)))
{
if (info->field_infos_len < info->field_infos_capacity)
{
field_infos = info->field_infos;
}
else
{
size_t capacity = info->field_infos_capacity ? 2 * info->field_infos_capacity : 8;
field_infos = MXS_REALLOC(info->field_infos, capacity * sizeof(QC_FIELD_INFO));
if (field_infos)
{
info->field_infos = field_infos;
info->field_infos_capacity = capacity;
}
}
}
}
else
{
info->field_infos[i].usage |= usage;
}
// If field_infos is NULL, then the field was found and has already been noted.
if (field_infos)
{
item.database = item.database ? MXS_STRDUP(item.database) : NULL;
item.table = item.table ? MXS_STRDUP(item.table) : NULL;
ss_dassert(item.column);
item.column = MXS_STRDUP(item.column);
// We are happy if we at least could dup the column.
if (item.column)
{
field_infos[info->field_infos_len++] = item;
}
}
}
static void update_field_infos_from_expr(QC_SQLITE_INFO* info,
const struct Expr* pExpr,
uint32_t usage,
const ExprList* pExclude)
{
QC_FIELD_INFO item = {};
if (pExpr->op == TK_ASTERISK)
{
item.column = "*";
}
else if (pExpr->op == TK_ID)
{
// select a from...
item.column = pExpr->u.zToken;
}
else if (pExpr->op == TK_DOT)
{
if (pExpr->pLeft->op == TK_ID &&
(pExpr->pRight->op == TK_ID || pExpr->pRight->op == TK_ASTERISK))
{
// select a.b from...
item.table = pExpr->pLeft->u.zToken;
if (pExpr->pRight->op == TK_ID)
{
item.column = pExpr->pRight->u.zToken;
}
else
{
item.column = "*";
}
}
else if (pExpr->pLeft->op == TK_ID &&
pExpr->pRight->op == TK_DOT &&
pExpr->pRight->pLeft->op == TK_ID &&
(pExpr->pRight->pRight->op == TK_ID || pExpr->pRight->pRight->op == TK_ASTERISK))
{
// select a.b.c from...
item.database = pExpr->pLeft->u.zToken;
item.table = pExpr->pRight->pLeft->u.zToken;
if (pExpr->pRight->pRight->op == TK_ID)
{
item.column = pExpr->pRight->pRight->u.zToken;
}
else
{
item.column = "*";
}
}
}
if (item.column)
{
bool should_update = true;
if ((pExpr->flags & EP_DblQuoted) == 0)
{
if ((strcasecmp(item.column, "true") == 0) || (strcasecmp(item.column, "false") == 0))
{
should_update = false;
}
}
if (should_update)
{
update_field_info(info, item.database, item.table, item.column, usage, pExclude);
}
}
}
static void update_field_infos(QC_SQLITE_INFO* info,
int prev_token,
const Expr* pExpr,
uint32_t usage,
qc_token_position_t pos,
const ExprList* pExclude)
{
const char* zToken = pExpr->u.zToken;
switch (pExpr->op)
{
case TK_ASTERISK: // "select *"
append_affected_field(info, "*");
case TK_ASTERISK: // select *
update_field_infos_from_expr(info, pExpr, usage, pExclude);
break;
case TK_DOT:
// In case of "X.Y" qc_mysqlembedded returns "Y".
update_affected_fields(info, TK_DOT, pExpr->pRight, QC_TOKEN_RIGHT, pExclude);
case TK_DOT: // select a.b ... select a.b.c
update_field_infos_from_expr(info, pExpr, usage, pExclude);
break;
case TK_ID:
if ((pExpr->flags & EP_DblQuoted) == 0)
{
if ((strcasecmp(zToken, "true") != 0) && (strcasecmp(zToken, "false") != 0))
{
if (!pExclude || !should_exclude(zToken, pExclude))
{
append_affected_field(info, zToken);
}
}
}
case TK_ID: // select a
update_field_infos_from_expr(info, pExpr, usage, pExclude);
break;
case TK_VARIABLE:
@ -787,12 +945,17 @@ static void update_affected_fields(QC_SQLITE_INFO* info,
if (pExpr->pLeft)
{
update_affected_fields(info, pExpr->op, pExpr->pLeft, QC_TOKEN_LEFT, pExclude);
update_field_infos(info, pExpr->op, pExpr->pLeft, usage, QC_TOKEN_LEFT, pExclude);
}
if (pExpr->pRight)
{
update_affected_fields(info, pExpr->op, pExpr->pRight, QC_TOKEN_RIGHT, pExclude);
if (usage & QC_USED_IN_SET)
{
usage &= ~QC_USED_IN_SET;
}
update_field_infos(info, pExpr->op, pExpr->pRight, usage, QC_TOKEN_RIGHT, pExclude);
}
if (pExpr->x.pList)
@ -802,7 +965,7 @@ static void update_affected_fields(QC_SQLITE_INFO* info,
case TK_BETWEEN:
case TK_CASE:
case TK_FUNCTION:
update_affected_fields_from_exprlist(info, pExpr->x.pList, pExclude);
update_field_infos_from_exprlist(info, pExpr->x.pList, usage, pExclude);
break;
case TK_EXISTS:
@ -810,11 +973,15 @@ static void update_affected_fields(QC_SQLITE_INFO* info,
case TK_SELECT:
if (pExpr->flags & EP_xIsSelect)
{
update_affected_fields_from_select(info, pExpr->x.pSelect, pExclude);
uint32_t sub_usage = usage;
sub_usage &= ~QC_USED_IN_SELECT;
sub_usage |= QC_USED_IN_SUBSELECT;
update_field_infos_from_select(info, pExpr->x.pSelect, sub_usage, pExclude);
}
else
{
update_affected_fields_from_exprlist(info, pExpr->x.pList, pExclude);
update_field_infos_from_exprlist(info, pExpr->x.pList, usage, pExclude);
}
break;
}
@ -823,36 +990,36 @@ static void update_affected_fields(QC_SQLITE_INFO* info,
}
}
static void update_affected_fields_from_exprlist(QC_SQLITE_INFO* info,
const ExprList* pEList,
const ExprList* pExclude)
static void update_field_infos_from_exprlist(QC_SQLITE_INFO* info,
const ExprList* pEList,
uint32_t usage,
const ExprList* pExclude)
{
for (int i = 0; i < pEList->nExpr; ++i)
{
struct ExprList_item* pItem = &pEList->a[i];
update_affected_fields(info, 0, pItem->pExpr, QC_TOKEN_MIDDLE, pExclude);
update_field_infos(info, 0, pItem->pExpr, usage, QC_TOKEN_MIDDLE, pExclude);
}
}
static void update_affected_fields_from_idlist(QC_SQLITE_INFO* info,
const IdList* pIds,
const ExprList* pExclude)
static void update_field_infos_from_idlist(QC_SQLITE_INFO* info,
const IdList* pIds,
uint32_t usage,
const ExprList* pExclude)
{
for (int i = 0; i < pIds->nId; ++i)
{
struct IdList_item* pItem = &pIds->a[i];
if (!pExclude || !should_exclude(pItem->zName, pExclude))
{
append_affected_field(info, pItem->zName);
}
update_field_info(info, NULL, NULL, pItem->zName, usage, pExclude);
}
}
static void update_affected_fields_from_select(QC_SQLITE_INFO* info,
const Select* pSelect,
const ExprList* pExclude)
static void update_field_infos_from_select(QC_SQLITE_INFO* info,
const Select* pSelect,
uint32_t usage,
const ExprList* pExclude)
{
if (pSelect->pSrc)
{
@ -868,7 +1035,12 @@ static void update_affected_fields_from_select(QC_SQLITE_INFO* info,
if (pSrc->a[i].pSelect)
{
update_affected_fields_from_select(info, pSrc->a[i].pSelect, pExclude);
uint32_t sub_usage = usage;
sub_usage &= ~QC_USED_IN_SELECT;
sub_usage |= QC_USED_IN_SUBSELECT;
update_field_infos_from_select(info, pSrc->a[i].pSelect, sub_usage, pExclude);
}
#ifdef QC_COLLECT_NAMES_FROM_USING
@ -878,7 +1050,7 @@ static void update_affected_fields_from_select(QC_SQLITE_INFO* info,
// does not reveal its value, right?
if (pSrc->a[i].pUsing)
{
update_affected_fields_from_idlist(info, pSrc->a[i].pUsing, pSelect->pEList);
update_field_infos_from_idlist(info, pSrc->a[i].pUsing, 0, pSelect->pEList);
}
#endif
}
@ -886,24 +1058,28 @@ static void update_affected_fields_from_select(QC_SQLITE_INFO* info,
if (pSelect->pEList)
{
update_affected_fields_from_exprlist(info, pSelect->pEList, NULL);
update_field_infos_from_exprlist(info, pSelect->pEList, usage, NULL);
}
if (pSelect->pWhere)
if (pSelect->pWhere)
{
info->has_clause = true;
update_affected_fields(info, 0, pSelect->pWhere, QC_TOKEN_MIDDLE, pSelect->pEList);
update_field_infos(info, 0, pSelect->pWhere, QC_USED_IN_WHERE, QC_TOKEN_MIDDLE, pSelect->pEList);
}
if (pSelect->pGroupBy)
{
update_affected_fields_from_exprlist(info, pSelect->pGroupBy, pSelect->pEList);
update_field_infos_from_exprlist(info, pSelect->pGroupBy, QC_USED_IN_GROUP_BY, pSelect->pEList);
}
if (pSelect->pHaving)
{
info->has_clause = true;
update_affected_fields(info, 0, pSelect->pHaving, QC_TOKEN_MIDDLE, pSelect->pEList);
#if defined(COLLECT_HAVING_AS_WELL)
// A HAVING clause can only refer to fields that already have been
// mentioned. Consequently, they need not be collected.
update_field_infos(info, 0, pSelect->pHaving, 0, QC_TOKEN_MIDDLE, pSelect->pEList);
#endif
}
}
@ -1148,7 +1324,7 @@ void mxs_sqlite3CreateView(Parse *pParse, /* The parsing context */
if (pSelect)
{
update_affected_fields_from_select(info, pSelect, NULL);
update_field_infos_from_select(info, pSelect, QC_USED_IN_SELECT, NULL);
info->is_real_query = false;
}
@ -1218,7 +1394,7 @@ void mxs_sqlite3DeleteFrom(Parse* pParse, SrcList* pTabList, Expr* pWhere, SrcLi
if (pWhere)
{
update_affected_fields(info, 0, pWhere, QC_TOKEN_MIDDLE, 0);
update_field_infos(info, 0, pWhere, QC_USED_IN_WHERE, QC_TOKEN_MIDDLE, 0);
}
exposed_sqlite3ExprDelete(pParse->db, pWhere);
@ -1282,7 +1458,7 @@ void mxs_sqlite3EndTable(Parse *pParse, /* Parse context */
{
if (pSelect)
{
update_affected_fields_from_select(info, pSelect, NULL);
update_field_infos_from_select(info, pSelect, QC_USED_IN_SELECT, NULL);
info->is_real_query = false;
}
else if (pOldTable)
@ -1328,17 +1504,28 @@ void mxs_sqlite3Insert(Parse* pParse,
if (pColumns)
{
update_affected_fields_from_idlist(info, pColumns, NULL);
update_field_infos_from_idlist(info, pColumns, 0, NULL);
}
if (pSelect)
{
update_affected_fields_from_select(info, pSelect, NULL);
uint32_t usage;
if (pSelect->selFlags & SF_Values) // Synthesized from VALUES clause
{
usage = 0;
}
else
{
usage = QC_USED_IN_SELECT;
}
update_field_infos_from_select(info, pSelect, usage, NULL);
}
if (pSet)
{
update_affected_fields_from_exprlist(info, pSet, NULL);
update_field_infos_from_exprlist(info, pSet, 0, NULL);
}
exposed_sqlite3SrcListDelete(pParse->db, pTabList);
@ -1374,7 +1561,7 @@ int mxs_sqlite3Select(Parse* pParse, Select* p, SelectDest* pDest)
info->status = QC_QUERY_PARSED;
info->operation = QUERY_OP_SELECT;
maxscaleCollectInfoFromSelect(pParse, p);
maxscaleCollectInfoFromSelect(pParse, p, 0);
// NOTE: By convention, the select is deleted in parse.y.
}
else
@ -1462,18 +1649,13 @@ void mxs_sqlite3Update(Parse* pParse, SrcList* pTabList, ExprList* pChanges, Exp
{
struct ExprList_item* pItem = &pChanges->a[i];
if (pItem->zName)
{
append_affected_field(info, pItem->zName);
}
update_affected_fields(info, 0, pItem->pExpr, QC_TOKEN_MIDDLE, NULL);
update_field_infos(info, 0, pItem->pExpr, QC_USED_IN_SET, QC_TOKEN_MIDDLE, NULL);
}
}
if (pWhere)
{
update_affected_fields(info, 0, pWhere, QC_TOKEN_MIDDLE, NULL);
update_field_infos(info, 0, pWhere, QC_USED_IN_WHERE, QC_TOKEN_MIDDLE, pChanges);
}
exposed_sqlite3SrcListDelete(pParse->db, pTabList);
@ -1481,7 +1663,7 @@ void mxs_sqlite3Update(Parse* pParse, SrcList* pTabList, ExprList* pChanges, Exp
exposed_sqlite3ExprDelete(pParse->db, pWhere);
}
void maxscaleCollectInfoFromSelect(Parse* pParse, Select* pSelect)
void maxscaleCollectInfoFromSelect(Parse* pParse, Select* pSelect, int sub_select)
{
QC_SQLITE_INFO* info = this_thread.info;
ss_dassert(info);
@ -1499,7 +1681,9 @@ void maxscaleCollectInfoFromSelect(Parse* pParse, Select* pSelect)
info->types = QUERY_TYPE_READ;
}
update_affected_fields_from_select(info, pSelect, NULL);
uint32_t usage = sub_select ? QC_USED_IN_SUBSELECT : QC_USED_IN_SELECT;
update_field_infos_from_select(info, pSelect, usage, NULL);
}
void maxscaleAlterTable(Parse *pParse, /* Parser context. */
@ -1651,9 +1835,13 @@ void maxscaleExplain(Parse* pParse, SrcList* pName)
info->status = QC_QUERY_PARSED;
info->types = QUERY_TYPE_READ;
update_names(info, "information_schema", "COLUMNS");
append_affected_field(info,
"COLUMN_DEFAULT COLUMN_KEY COLUMN_NAME "
"COLUMN_TYPE EXTRA IS_NULLABLE");
uint32_t u = QC_USED_IN_SELECT;
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_DEFAULT", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_KEY", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_NAME", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_TYPE", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "EXTRA", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "IS_NULLABLE", u, NULL);
exposed_sqlite3SrcListDelete(pParse->db, pName);
}
@ -2014,11 +2202,12 @@ void maxscalePrepare(Parse* pParse, Token* pName, Token* pStmt)
info->prepare_name[pName->n] = 0;
}
// We store the position of the preparable statement inside the original
// statement. That will allow us to later create a new GWBUF of the
// parsable statment and parse that.
info->preparable_stmt_offset = pParse->sLastToken.z - pParse->zTail + 1; // Ignore starting quote.
info->preparable_stmt_length = pStmt->n - 2; // Remove starting and ending quotes.
info->preparable_stmt_length = pStmt->n - 2;
info->preparable_stmt = MXS_MALLOC(info->preparable_stmt_length);
if (info->preparable_stmt)
{
memcpy(info->preparable_stmt, pStmt->z + 1, pStmt->n - 2);
}
}
void maxscalePrivileges(Parse* pParse, int kind)
@ -2172,7 +2361,8 @@ void maxscaleSet(Parse* pParse, int scope, mxs_set_t kind, ExprList* pList)
if (pValue->op == TK_SELECT)
{
update_affected_fields_from_select(info, pValue->x.pSelect, NULL);
update_field_infos_from_select(info, pValue->x.pSelect,
QC_USED_IN_SUBSELECT, NULL);
info->is_real_query = false; // TODO: This is what qc_mysqlembedded claims.
}
}
@ -2220,6 +2410,8 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
zName = name;
}
uint32_t u = QC_USED_IN_SELECT;
switch (pShow->what)
{
case MXS_SHOW_COLUMNS:
@ -2228,16 +2420,24 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
update_names(info, "information_schema", "COLUMNS");
if (pShow->data == MXS_SHOW_COLUMNS_FULL)
{
append_affected_field(info,
"COLLATION_NAME COLUMN_COMMENT COLUMN_DEFAULT "
"COLUMN_KEY COLUMN_NAME COLUMN_TYPE EXTRA "
"IS_NULLABLE PRIVILEGES");
update_field_info(info, "information_schema", "COLUMNS", "COLLATION_NAME", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_COMMENT", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_DEFAULT", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_KEY", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_NAME", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_TYPE", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "EXTRA", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "IS_NULLABLE", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "PRIVILEGES", u, NULL);
}
else
{
append_affected_field(info,
"COLUMN_DEFAULT COLUMN_KEY COLUMN_NAME "
"COLUMN_TYPE EXTRA IS_NULLABLE");
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_DEFAULT", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_KEY", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_NAME", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "COLUMN_TYPE", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "EXTRA", u, NULL);
update_field_info(info, "information_schema", "COLUMNS", "IS_NULLABLE", u, NULL);
}
}
break;
@ -2260,7 +2460,7 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
{
info->types = QUERY_TYPE_SHOW_DATABASES;
update_names(info, "information_schema", "SCHEMATA");
append_affected_field(info, "SCHEMA_NAME");
update_field_info(info, "information_schema", "SCHEMATA", "SCHEMA_NAME", u, NULL);
}
break;
@ -2270,10 +2470,19 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
{
info->types = QUERY_TYPE_WRITE;
update_names(info, "information_schema", "STATISTICS");
append_affected_field(info,
"CARDINALITY COLLATION COLUMN_NAME COMMENT INDEX_COMMENT "
"INDEX_NAME INDEX_TYPE NON_UNIQUE NULLABLE PACKED SEQ_IN_INDEX "
"SUB_PART TABLE_NAME");
update_field_info(info, "information_schema", "STATISTICS", "CARDINALITY", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "COLLATION", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "COLUMN_NAME", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "COMMENT", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "INDEX_COMMENT", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "INDEX_NAME", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "INDEX_TYPE", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "NON_UNIQUE", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "NULLABLE", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "PACKED", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "SEQ_IN_INDEX", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "SUB_PART", u, NULL);
update_field_info(info, "information_schema", "STATISTICS", "TABLE_NAME", u, NULL);
}
break;
@ -2281,12 +2490,24 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
{
info->types = QUERY_TYPE_WRITE;
update_names(info, "information_schema", "TABLES");
append_affected_field(info,
"AUTO_INCREMENT AVG_ROW_LENGTH CHECKSUM CHECK_TIME "
"CREATE_OPTIONS CREATE_TIME DATA_FREE DATA_LENGTH "
"ENGINE INDEX_LENGTH MAX_DATA_LENGTH ROW_FORMAT "
"TABLE_COLLATION TABLE_COMMENT TABLE_NAME "
"TABLE_ROWS UPDATE_TIME VERSION");
update_field_info(info, "information_schema", "TABLES", "AUTO_INCREMENT", u, NULL);
update_field_info(info, "information_schema", "TABLES", "AVG_ROW_LENGTH", u, NULL);
update_field_info(info, "information_schema", "TABLES", "CHECKSUM", u, NULL);
update_field_info(info, "information_schema", "TABLES", "CHECK_TIME", u, NULL);
update_field_info(info, "information_schema", "TABLES", "CREATE_OPTIONS", u, NULL);
update_field_info(info, "information_schema", "TABLES", "CREATE_TIME", u, NULL);
update_field_info(info, "information_schema", "TABLES", "DATA_FREE", u, NULL);
update_field_info(info, "information_schema", "TABLES", "DATA_LENGTH", u, NULL);
update_field_info(info, "information_schema", "TABLES", "ENGINE", u, NULL);
update_field_info(info, "information_schema", "TABLES", "INDEX_LENGTH", u, NULL);
update_field_info(info, "information_schema", "TABLES", "MAX_DATA_LENGTH", u, NULL);
update_field_info(info, "information_schema", "TABLES", "ROW_FORMAT", u, NULL);
update_field_info(info, "information_schema", "TABLES", "TABLE_COLLATION", u, NULL);
update_field_info(info, "information_schema", "TABLES", "TABLE_COMMENT", u, NULL);
update_field_info(info, "information_schema", "TABLES", "TABLE_NAME", u, NULL);
update_field_info(info, "information_schema", "TABLES", "TABLE_ROWS", u, NULL);
update_field_info(info, "information_schema", "TABLES", "UPDATE_TIME", u, NULL);
update_field_info(info, "information_schema", "TABLES", "VERSION", u, NULL);
}
break;
@ -2300,7 +2521,8 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
// TODO: qc_mysqlembedded does not set the type bit.
info->types = QUERY_TYPE_UNKNOWN;
update_names(info, "information_schema", "SESSION_STATUS");
append_affected_field(info, "VARIABLE_NAME VARIABLE_VALUE");
update_field_info(info, "information_schema", "SESSION_STATUS", "VARIABLE_NAME", u, NULL);
update_field_info(info, "information_schema", "SESSION_STATUS", "VARIABLE_VALUE", u, NULL);
break;
case MXS_SHOW_STATUS_MASTER:
@ -2325,7 +2547,7 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
{
info->types = QUERY_TYPE_SHOW_TABLES;
update_names(info, "information_schema", "TABLE_NAMES");
append_affected_field(info, "TABLE_NAME");
update_field_info(info, "information_schema", "TABLE_NAMES", "TABLE_NAME", u, NULL);
}
break;
@ -2340,7 +2562,8 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
info->types = QUERY_TYPE_SYSVAR_READ;
}
update_names(info, "information_schema", "SESSION_VARIABLES");
append_affected_field(info, "VARIABLE_NAME VARIABLE_VALUE");
update_field_info(info, "information_schema", "SESSION_STATUS", "VARIABLE_NAME", u, NULL);
update_field_info(info, "information_schema", "SESSION_STATUS", "VARIABLE_VALUE", u, NULL);
}
break;
@ -2417,7 +2640,6 @@ static bool qc_sqlite_is_real_query(GWBUF* query);
static char** qc_sqlite_get_table_names(GWBUF* query, int* tblsize, bool fullnames);
static char* qc_sqlite_get_canonical(GWBUF* query);
static bool qc_sqlite_query_has_clause(GWBUF* query);
static char* qc_sqlite_get_affected_fields(GWBUF* query);
static char** qc_sqlite_get_database_names(GWBUF* query, int* sizep);
static bool get_key_and_value(char* arg, const char** pkey, const char** pvalue)
@ -2825,42 +3047,6 @@ static bool qc_sqlite_query_has_clause(GWBUF* query)
return has_clause;
}
static char* qc_sqlite_get_affected_fields(GWBUF* query)
{
QC_TRACE();
ss_dassert(this_unit.initialized);
ss_dassert(this_thread.initialized);
char* affected_fields = NULL;
QC_SQLITE_INFO* info = get_query_info(query);
if (info)
{
if (qc_info_is_valid(info->status))
{
affected_fields = info->affected_fields;
}
else if (MXS_LOG_PRIORITY_IS_ENABLED(LOG_INFO))
{
log_invalid_data(query, "cannot report what fields are affected");
}
}
else
{
MXS_ERROR("The query could not be parsed. Response not valid.");
}
if (!affected_fields)
{
affected_fields = "";
}
affected_fields = MXS_STRDUP(affected_fields);
MXS_ABORT_IF_NULL(affected_fields);
return affected_fields;
}
static char** qc_sqlite_get_database_names(GWBUF* query, int* sizep)
{
QC_TRACE();
@ -2923,6 +3109,63 @@ static char* qc_sqlite_get_prepare_name(GWBUF* query)
return name;
}
static qc_query_op_t qc_sqlite_get_prepare_operation(GWBUF* query)
{
QC_TRACE();
ss_dassert(this_unit.initialized);
ss_dassert(this_thread.initialized);
qc_query_op_t op = QUERY_OP_UNDEFINED;
QC_SQLITE_INFO* info = get_query_info(query);
if (info)
{
if (qc_info_is_valid(info->status))
{
op = info->prepare_operation;
}
else if (MXS_LOG_PRIORITY_IS_ENABLED(LOG_INFO))
{
log_invalid_data(query, "cannot report the operation of a prepared statement");
}
}
else
{
MXS_ERROR("The query could not be parsed. Response not valid.");
}
return op;
}
void qc_sqlite_get_field_info(GWBUF* query, const QC_FIELD_INFO** infos, size_t* n_infos)
{
QC_TRACE();
ss_dassert(this_unit.initialized);
ss_dassert(this_thread.initialized);
*infos = NULL;
*n_infos = 0;
QC_SQLITE_INFO* info = get_query_info(query);
if (info)
{
if (qc_info_is_valid(info->status))
{
*infos = info->field_infos;
*n_infos = info->field_infos_len;
}
else if (MXS_LOG_PRIORITY_IS_ENABLED(LOG_INFO))
{
log_invalid_data(query, "cannot report field info");
}
}
else
{
MXS_ERROR("The query could not be parsed. Response not valid.");
}
}
/**
* EXPORTS
*/
@ -2944,9 +3187,10 @@ static QUERY_CLASSIFIER qc =
qc_sqlite_get_table_names,
NULL,
qc_sqlite_query_has_clause,
qc_sqlite_get_affected_fields,
qc_sqlite_get_database_names,
qc_sqlite_get_prepare_name,
qc_sqlite_get_prepare_operation,
qc_sqlite_get_field_info,
};

View File

@ -94,7 +94,7 @@ extern int mxs_sqlite3Select(Parse*, Select*, SelectDest*);
extern void mxs_sqlite3StartTable(Parse*,Token*,Token*,int,int,int,int);
extern void mxs_sqlite3Update(Parse*, SrcList*, ExprList*, Expr*, int);
extern void maxscaleCollectInfoFromSelect(Parse*, Select*);
extern void maxscaleCollectInfoFromSelect(Parse*, Select*, int);
extern void maxscaleAlterTable(Parse*, mxs_alter_t command, SrcList*, Token*);
extern void maxscaleCall(Parse*, SrcList* pName);
@ -1444,7 +1444,7 @@ table_factor(A) ::= nm(X) DOT nm(Y) as_opt id(Z). {
}
table_factor(A) ::= LP oneselect(S) RP as_opt id. {
maxscaleCollectInfoFromSelect(pParse, S);
maxscaleCollectInfoFromSelect(pParse, S, 1);
sqlite3SelectDelete(pParse->db, S);
A = 0;
}

View File

@ -196,10 +196,12 @@ int test(FILE* input, FILE* expected)
{
tok = strpbrk(strbuff, ";");
unsigned int qlen = tok - strbuff + 1;
GWBUF* buff = gwbuf_alloc(qlen + 6);
*((unsigned char*)(GWBUF_DATA(buff))) = qlen;
*((unsigned char*)(GWBUF_DATA(buff) + 1)) = (qlen >> 8);
*((unsigned char*)(GWBUF_DATA(buff) + 2)) = (qlen >> 16);
unsigned int payload_len = qlen + 1;
unsigned int buf_len = payload_len + 4;
GWBUF* buff = gwbuf_alloc(buf_len);
*((unsigned char*)(GWBUF_DATA(buff))) = payload_len;
*((unsigned char*)(GWBUF_DATA(buff) + 1)) = (payload_len >> 8);
*((unsigned char*)(GWBUF_DATA(buff) + 2)) = (payload_len >> 16);
*((unsigned char*)(GWBUF_DATA(buff) + 3)) = 0x00;
*((unsigned char*)(GWBUF_DATA(buff) + 4)) = 0x03;
memcpy(GWBUF_DATA(buff) + 5, strbuff, qlen);

View File

@ -715,68 +715,6 @@ ostream& operator << (ostream& o, const std::set<string>& s)
return o;
}
bool compare_get_affected_fields(QUERY_CLASSIFIER* pClassifier1, GWBUF* pCopy1,
QUERY_CLASSIFIER* pClassifier2, GWBUF* pCopy2)
{
bool success = false;
const char HEADING[] = "qc_get_affected_fields : ";
char* rv1 = pClassifier1->qc_get_affected_fields(pCopy1);
char* rv2 = pClassifier2->qc_get_affected_fields(pCopy2);
std::set<string> fields1;
std::set<string> fields2;
if (rv1)
{
add_fields(fields1, rv1);
}
if (rv2)
{
add_fields(fields2, rv2);
}
stringstream ss;
ss << HEADING;
if ((!rv1 && !rv2) || (rv1 && rv2 && (fields1 == fields2)))
{
ss << "Ok : " << fields1;
success = true;
}
else
{
ss << "ERR: ";
if (rv1)
{
ss << fields1;
}
else
{
ss << "NULL";
}
ss << " != ";
if (rv2)
{
ss << fields2;
}
else
{
ss << "NULL";
}
}
report(success, ss.str());
free(rv1);
free(rv2);
return success;
}
bool compare_get_database_names(QUERY_CLASSIFIER* pClassifier1, GWBUF* pCopy1,
QUERY_CLASSIFIER* pClassifier2, GWBUF* pCopy2)
{
@ -844,6 +782,280 @@ bool compare_get_prepare_name(QUERY_CLASSIFIER* pClassifier1, GWBUF* pCopy1,
return success;
}
bool compare_get_prepare_operation(QUERY_CLASSIFIER* pClassifier1, GWBUF* pCopy1,
QUERY_CLASSIFIER* pClassifier2, GWBUF* pCopy2)
{
bool success = false;
const char HEADING[] = "qc_get_prepare_operation : ";
qc_query_op_t rv1 = pClassifier1->qc_get_prepare_operation(pCopy1);
qc_query_op_t rv2 = pClassifier2->qc_get_prepare_operation(pCopy2);
stringstream ss;
ss << HEADING;
if (rv1 == rv2)
{
ss << "Ok : " << qc_op_to_string(rv1);
success = true;
}
else
{
ss << "ERR: " << qc_op_to_string(rv1) << " != " << qc_op_to_string(rv2);
}
report(success, ss.str());
return success;
}
bool operator == (const QC_FIELD_INFO& lhs, const QC_FIELD_INFO& rhs)
{
bool rv = false;
if (lhs.column && rhs.column && (strcasecmp(lhs.column, rhs.column) == 0))
{
if (!lhs.table && !rhs.table)
{
rv = true;
}
else if (lhs.table && rhs.table && (strcmp(lhs.table, rhs.table) == 0))
{
if (!lhs.database && !rhs.database)
{
rv = true;
}
else if (lhs.database && rhs.database && (strcmp(lhs.database, rhs.database) == 0))
{
rv = true;
}
}
}
return rv;
}
ostream& operator << (ostream& out, const QC_FIELD_INFO& x)
{
if (x.database)
{
out << x.database;
out << ".";
ss_dassert(x.table);
}
if (x.table)
{
out << x.table;
out << ".";
}
ss_dassert(x.column);
out << x.column;
return out;
}
class QcFieldInfo
{
public:
QcFieldInfo(const QC_FIELD_INFO& info)
: m_database(info.database ? info.database : "")
, m_table(info.table ? info.table : "")
, m_column(info.column ? info.column : "")
, m_usage(info.usage)
{}
bool eq(const QcFieldInfo& rhs) const
{
return
m_database == rhs.m_database &&
m_table == rhs.m_table &&
m_column == rhs.m_column &&
m_usage == rhs.m_usage;
}
bool lt(const QcFieldInfo& rhs) const
{
bool rv = false;
if (m_database < rhs.m_database)
{
rv = true;
}
else if (m_database > rhs.m_database)
{
rv = false;
}
else
{
if (m_table < rhs.m_table)
{
rv = true;
}
else if (m_table > rhs.m_table)
{
rv = false;
}
else
{
if (m_column < rhs.m_column)
{
rv = true;
}
else if (m_column > rhs.m_column)
{
rv = false;
}
else
{
rv = (m_usage < rhs.m_usage);
}
}
}
return rv;
}
void print(ostream& out) const
{
if (!m_database.empty())
{
out << m_database;
out << ".";
}
if (!m_table.empty())
{
out << m_table;
out << ".";
}
out << m_column;
out << "(";
char* s = qc_field_usage_mask_to_string(m_usage);
out << s;
free(s);
out << ")";
}
private:
std::string m_database;
std::string m_table;
std::string m_column;
uint32_t m_usage;
};
ostream& operator << (ostream& out, const QcFieldInfo& x)
{
x.print(out);
return out;
}
ostream& operator << (ostream& out, std::set<QcFieldInfo>& x)
{
std::set<QcFieldInfo>::iterator i = x.begin();
std::set<QcFieldInfo>::iterator end = x.end();
while (i != end)
{
out << *i++;
if (i != end)
{
out << " ";
}
}
return out;
}
bool operator < (const QcFieldInfo& lhs, const QcFieldInfo& rhs)
{
return lhs.lt(rhs);
}
bool operator == (const QcFieldInfo& lhs, const QcFieldInfo& rhs)
{
return lhs.eq(rhs);
}
bool are_equal(const QC_FIELD_INFO* fields1, size_t n_fields1,
const QC_FIELD_INFO* fields2, size_t n_fields2)
{
bool rv = (n_fields1 == n_fields2);
if (rv)
{
size_t i = 0;
while (rv && (i < n_fields1))
{
rv = *fields1 == *fields2;
++i;
}
}
return rv;
}
ostream& print(ostream& out, const QC_FIELD_INFO* fields, size_t n_fields)
{
size_t i = 0;
while (i < n_fields)
{
out << fields[i++];
if (i != n_fields)
{
out << " ";
}
}
return out;
}
bool compare_get_field_info(QUERY_CLASSIFIER* pClassifier1, GWBUF* pCopy1,
QUERY_CLASSIFIER* pClassifier2, GWBUF* pCopy2)
{
bool success = false;
const char HEADING[] = "qc_get_field_info : ";
const QC_FIELD_INFO* infos1;
const QC_FIELD_INFO* infos2;
size_t n_infos1;
size_t n_infos2;
pClassifier1->qc_get_field_info(pCopy1, &infos1, &n_infos1);
pClassifier2->qc_get_field_info(pCopy2, &infos2, &n_infos2);
stringstream ss;
ss << HEADING;
int i;
std::set<QcFieldInfo> f1;
f1.insert(infos1, infos1 + n_infos1);
std::set<QcFieldInfo> f2;
f2.insert(infos2, infos2 + n_infos2);
if (f1 == f2)
{
ss << "Ok : ";
ss << f1;
success = true;
}
else
{
ss << "ERR: " << f1 << " != " << f2;
}
report(success, ss.str());
return success;
}
bool compare(QUERY_CLASSIFIER* pClassifier1, QUERY_CLASSIFIER* pClassifier2, const string& s)
{
GWBUF* pCopy1 = create_gwbuf(s);
@ -860,9 +1072,10 @@ bool compare(QUERY_CLASSIFIER* pClassifier1, QUERY_CLASSIFIER* pClassifier2, con
errors += !compare_get_table_names(pClassifier1, pCopy1, pClassifier2, pCopy2, false);
errors += !compare_get_table_names(pClassifier1, pCopy1, pClassifier2, pCopy2, true);
errors += !compare_query_has_clause(pClassifier1, pCopy1, pClassifier2, pCopy2);
errors += !compare_get_affected_fields(pClassifier1, pCopy1, pClassifier2, pCopy2);
errors += !compare_get_database_names(pClassifier1, pCopy1, pClassifier2, pCopy2);
errors += !compare_get_prepare_name(pClassifier1, pCopy1, pClassifier2, pCopy2);
errors += !compare_get_prepare_operation(pClassifier1, pCopy1, pClassifier2, pCopy2);
errors += !compare_get_field_info(pClassifier1, pCopy1, pClassifier2, pCopy2);
gwbuf_free(pCopy1);
gwbuf_free(pCopy2);

View File

@ -222,9 +222,11 @@ replace into t1 values (4, 4);
select row_count();
# Reports that 2 rows are affected. This conforms to documentation.
# (Useful for differentiating inserts from updates).
insert into t1 values (2, 2) on duplicate key update data= data + 10;
# MXSTODO: insert into t1 values (2, 2) on duplicate key update data= data + 10;
# qc_sqlite: Cannot parse "on duplicate"
select row_count();
insert into t1 values (5, 5) on duplicate key update data= data + 10;
# MXSTODO: insert into t1 values (5, 5) on duplicate key update data= data + 10;
# qc_sqlite: Cannot parse "on duplicate"
select row_count();
drop table t1;

View File

@ -20,3 +20,8 @@ SET @x:= (SELECT h FROM t1 WHERE (a,b,c,d,e,f,g)=(1,2,3,4,5,6,7));
# REMOVE: expr(A) ::= LP(B) expr(X) RP(E). {A.pExpr = X.pExpr; spanSet(&A,&B,&E);}
# REMOVE: expr(A) ::= LP expr(X) COMMA(OP) expr(Y) RP. {spanBinaryExpr(&A,pParse,@OP,&X,&Y);}
# ADD : expr(A) ::= LP exprlist RP. { ... }
insert into t1 values (2, 2) on duplicate key update data= data + 10;
# Problem: warning: [qc_sqlite] Statement was only partially parsed (Sqlite3 error: SQL logic error
# or missing database, near "on": syntax error): "insert into t1 values (2, 2) on duplicate
# key update data= data + 10;"

View File

@ -12,7 +12,7 @@
*/
/**
* @file atomic.c - Implementation of atomic opertions for the gateway
* @file atomic.c - Implementation of atomic operations for MaxScale
*
* @verbatim
* Revision History
@ -23,22 +23,6 @@
* @endverbatim
*/
/**
* Implementation of an atomic add operation for the GCC environment, or the
* X86 processor. If we are working within GNU C then we can use the GCC
* atomic add built in function, which is portable across platforms that
* implement GCC. Otherwise, this function currently supports only X86
* architecture (without further development).
*
* Adds a value to the contents of a location pointed to by the first parameter.
* The add operation is atomic and the return value is the value stored in the
* location prior to the operation. The number that is added may be signed,
* therefore atomic_subtract is merely an atomic add with a negative value.
*
* @param variable Pointer the the variable to add to
* @param value Value to be added
* @return The value of variable before the add occurred
*/
int
atomic_add(int *variable, int value)
{

View File

@ -65,6 +65,7 @@
#include <maxscale/service.h>
#include <maxscale/spinlock.h>
#include <maxscale/utils.h>
#include <maxscale/gwdirs.h>
typedef struct duplicate_context
{
@ -540,6 +541,46 @@ static bool config_load_dir(const char *dir, DUPLICATE_CONTEXT *dcontext, CONFIG
return rv == 0;
}
/**
* Check if a directory exists
*
* This function also logs warnings if the directory cannot be accessed or if
* the file is not a directory.
* @param dir Directory to check
* @return True if the file is an existing directory
*/
static bool is_directory(const char *dir)
{
bool rval = false;
struct stat st;
if (stat(dir, &st) == -1)
{
if (errno == ENOENT)
{
MXS_NOTICE("%s does not exist, not reading.", dir);
}
else
{
char errbuf[MXS_STRERROR_BUFLEN];
MXS_WARNING("Could not access %s, not reading: %s",
dir, strerror_r(errno, errbuf, sizeof(errbuf)));
}
}
else
{
if (S_ISDIR(st.st_mode))
{
rval = true;
}
else
{
MXS_WARNING("%s exists, but it is not a directory. Ignoring.", dir);
}
}
return rval;
}
/**
* @brief Load the specified configuration file for MaxScale
*
@ -573,37 +614,25 @@ config_load_and_process(const char* filename, bool (*process_config)(CONFIG_CONT
rval = true;
struct stat st;
if (stat(dir, &st) == -1)
if (is_directory(dir))
{
if (errno == ENOENT)
{
MXS_NOTICE("%s does not exist, not reading.", dir);
}
else
{
char errbuf[MXS_STRERROR_BUFLEN];
MXS_WARNING("Could not access %s, not reading: %s",
dir, strerror_r(errno, errbuf, sizeof(errbuf)));
}
rval = config_load_dir(dir, &dcontext, &ccontext);
}
else
/** Create the persisted configuration directory if it doesn't exist */
const char* persist_cnf = get_config_persistdir();
mxs_mkdir_all(persist_cnf, S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);
if (is_directory(persist_cnf))
{
if (S_ISDIR(st.st_mode))
{
rval = config_load_dir(dir, &dcontext, &ccontext);
}
else
{
MXS_WARNING("%s exists, but it is not a directory. Ignoring.", dir);
}
rval = config_load_dir(persist_cnf, &dcontext, &ccontext);
}
if (rval)
{
if (check_config_objects(ccontext.next) && process_config(ccontext.next))
if (!check_config_objects(ccontext.next) || !process_config(ccontext.next))
{
rval = true;
rval = false;
}
}
}
@ -633,13 +662,9 @@ config_load(const char *filename)
global_defaults();
feedback_defaults();
config_file = filename;
bool rval = config_load_and_process(filename, process_config_context);
if (rval)
{
config_file = filename;
}
return rval;
}
@ -1804,10 +1829,9 @@ process_config_update(CONFIG_CONTEXT *context)
if (address && port &&
(server = server_find(address, atoi(port))) != NULL)
{
char *protocol = config_get_value(obj->parameters, "protocol");
char *monuser = config_get_value(obj->parameters, "monuser");
char *monpw = config_get_value(obj->parameters, "monpw");
server_update(server, protocol, monuser, monpw);
server_update_credentials(server, monuser, monpw);
obj->element = server;
}
else
@ -2706,11 +2730,7 @@ int create_new_server(CONFIG_CONTEXT *obj)
if (address && port && protocol)
{
if ((obj->element = server_alloc(address, protocol, atoi(port), auth, auth_opts)))
{
server_set_unique_name(obj->element, obj->object);
}
else
if ((obj->element = server_alloc(obj->object, address, atoi(port), protocol, auth, auth_opts)) == NULL)
{
MXS_ERROR("Failed to create a new server, memory allocation failed.");
error_count++;
@ -2743,22 +2763,32 @@ int create_new_server(CONFIG_CONTEXT *obj)
const char *poolmax = config_get_value_string(obj->parameters, "persistpoolmax");
if (poolmax)
{
server->persistpoolmax = strtol(poolmax, &endptr, 0);
if (*endptr != '\0')
long int persistpoolmax = strtol(poolmax, &endptr, 0);
if (*endptr != '\0' || persistpoolmax < 0)
{
MXS_ERROR("Invalid value for 'persistpoolmax' for server %s: %s",
server->unique_name, poolmax);
error_count++;
}
else
{
server->persistpoolmax = persistpoolmax;
}
}
const char *persistmax = config_get_value_string(obj->parameters, "persistmaxtime");
if (persistmax)
{
server->persistmaxtime = strtol(persistmax, &endptr, 0);
if (*endptr != '\0')
long int persistmaxtime = strtol(persistmax, &endptr, 0);
if (*endptr != '\0' || persistmaxtime < 0)
{
MXS_ERROR("Invalid value for 'persistmaxtime' for server %s: %s",
server->unique_name, persistmax);
error_count++;
}
else
{
server->persistmaxtime = persistmaxtime;
}
}
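/*
 * Illustrative example (not part of the change): values accepted by the
 * validation above would look like this in the configuration file
 * (section name and numbers are hypothetical):
 *
 *   [server1]
 *   persistpoolmax=10
 *   persistmaxtime=3600
 *
 * A non-numeric or negative value is reported with MXS_ERROR and the
 * corresponding setting is left untouched.
 */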
@ -2828,12 +2858,6 @@ int configure_new_service(CONFIG_CONTEXT *context, CONFIG_CONTEXT *obj)
s = strtok_r(NULL, ",", &lasts);
}
}
else if (servers == NULL && !is_internal_service(router))
{
MXS_ERROR("The service '%s' is missing a definition of the servers "
"that provide the service.", obj->object);
error_count++;
}
if (roptions)
{
@ -2881,31 +2905,39 @@ int create_new_monitor(CONFIG_CONTEXT *context, CONFIG_CONTEXT *obj, HASHTABLE*
else
{
obj->element = NULL;
MXS_ERROR("Monitor '%s' is missing the require 'module' parameter.", obj->object);
MXS_ERROR("Monitor '%s' is missing the required 'module' parameter.", obj->object);
error_count++;
}
char *servers = config_get_value(obj->parameters, "servers");
if (servers == NULL)
{
MXS_ERROR("Monitor '%s' is missing the 'servers' parameter that "
"lists the servers that it monitors.", obj->object);
error_count++;
}
if (error_count == 0)
{
monitorAddParameters(obj->element, obj->parameters);
char *interval = config_get_value(obj->parameters, "monitor_interval");
if (interval)
char *interval_str = config_get_value(obj->parameters, "monitor_interval");
if (interval_str)
{
monitorSetInterval(obj->element, atoi(interval));
char *endptr;
long interval = strtol(interval_str, &endptr, 0);
/* The interval must be >0 because it is used as a divisor.
Perhaps a greater minimum value should be added? */
if (*endptr == '\0' && interval > 0)
{
monitorSetInterval(obj->element, (unsigned long)interval);
}
else
{
MXS_NOTICE("Invalid 'monitor_interval' parameter for monitor '%s', "
"using default value of %d milliseconds.",
obj->object, MONITOR_INTERVAL);
}
}
else
{
MXS_NOTICE("Monitor '%s' is missing the 'monitor_interval' parameter, "
"using default value of 10000 milliseconds.", obj->object);
"using default value of %d milliseconds.",
obj->object, MONITOR_INTERVAL);
}
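/*
 * Illustrative example: a configuration line such as
 *
 *   monitor_interval=2500
 *
 * passes the check above, while zero, a negative number or a non-numeric
 * string is rejected and the MONITOR_INTERVAL default is used instead.
 */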
char *connect_timeout = config_get_value(obj->parameters, "backend_connect_timeout");
@ -2938,36 +2970,39 @@ int create_new_monitor(CONFIG_CONTEXT *context, CONFIG_CONTEXT *obj, HASHTABLE*
}
}
/* get the servers to monitor */
char *s, *lasts;
s = strtok_r(servers, ",", &lasts);
while (s)
if (servers)
{
CONFIG_CONTEXT *obj1 = context;
int found = 0;
while (obj1)
/* get the servers to monitor */
char *s, *lasts;
s = strtok_r(servers, ",", &lasts);
while (s)
{
if (strcmp(trim(s), obj1->object) == 0 && obj->element && obj1->element)
CONFIG_CONTEXT *obj1 = context;
int found = 0;
while (obj1)
{
found = 1;
if (hashtable_add(monitorhash, obj1->object, "") == 0)
if (strcmp(trim(s), obj1->object) == 0 && obj->element && obj1->element)
{
MXS_WARNING("Multiple monitors are monitoring server [%s]. "
"This will cause undefined behavior.",
obj1->object);
found = 1;
if (hashtable_add(monitorhash, obj1->object, "") == 0)
{
MXS_WARNING("Multiple monitors are monitoring server [%s]. "
"This will cause undefined behavior.",
obj1->object);
}
monitorAddServer(obj->element, obj1->element);
}
monitorAddServer(obj->element, obj1->element);
obj1 = obj1->next;
}
if (!found)
{
MXS_ERROR("Unable to find server '%s' that is "
"configured in the monitor '%s'.", s, obj->object);
error_count++;
}
obj1 = obj1->next;
}
if (!found)
{
MXS_ERROR("Unable to find server '%s' that is "
"configured in the monitor '%s'.", s, obj->object);
error_count++;
}
s = strtok_r(NULL, ",", &lasts);
s = strtok_r(NULL, ",", &lasts);
}
}
char *user = config_get_value(obj->parameters, "user");

View File

@ -93,15 +93,6 @@
#include <maxscale/alloc.h>
#include <maxscale/utils.h>
#if defined(FAKE_CODE)
unsigned char dcb_fake_write_errno[10240];
__int32_t dcb_fake_write_ev[10240];
bool fail_next_backend_fd;
bool fail_next_client_fd;
int fail_next_accept;
int fail_accept_errno;
#endif /* FAKE_CODE */
/* The list of all DCBs */
static LIST_CONFIG DCBlist =
{LIST_TYPE_RECYCLABLE, sizeof(DCB), SPINLOCK_INIT};
@ -130,9 +121,6 @@ static int dcb_create_SSL(DCB* dcb, SSL_LISTENER *ssl);
static int dcb_read_SSL(DCB *dcb, GWBUF **head);
static GWBUF *dcb_basic_read(DCB *dcb, int bytesavailable, int maxbytes, int nreadtotal, int *nsingleread);
static GWBUF *dcb_basic_read_SSL(DCB *dcb, int *nsingleread);
#if defined(FAKE_CODE)
static inline void dcb_write_fake_code(DCB *dcb);
#endif
static void dcb_log_write_failure(DCB *dcb, GWBUF *queue, int eno);
static inline void dcb_write_tidy_up(DCB *dcb, bool below_water);
static int gw_write(DCB *dcb, GWBUF *writeq, bool *stop_writing);
@ -684,9 +672,6 @@ dcb_process_victim_queue(DCB *listofdcb)
}
else
{
#if defined(FAKE_CODE)
conn_open[dcb->fd] = false;
#endif /* FAKE_CODE */
dcb->fd = DCBFD_CLOSED;
MXS_DEBUG("%lu [dcb_process_victim_queue] Closed socket "
@ -772,6 +757,7 @@ dcb_connect(SERVER *server, SESSION *session, const char *protocol)
MXS_DEBUG("%lu [dcb_connect] Reusing a persistent connection, dcb %p\n",
pthread_self(), dcb);
dcb->persistentstart = 0;
dcb->was_persistent = true;
return dcb;
}
else
@ -867,6 +853,8 @@ dcb_connect(SERVER *server, SESSION *session, const char *protocol)
dcb->dcb_server_status = server->status;
dcb->dcb_port = server->port;
dcb->was_persistent = false;
/**
* backend_dcb is connected to backend server, and once backend_dcb
* is added to poll set, authentication takes place as part of
@ -1384,34 +1372,6 @@ dcb_write(DCB *dcb, GWBUF *queue)
return 1;
}
#if defined(FAKE_CODE)
/**
* Fake code for dcb_write
* (Should have fuller description)
*
* @param dcb The DCB of the client
*/
static inline void
dcb_write_fake_code(DCB *dcb)
{
if (dcb->session != NULL)
{
if (dcb->dcb_role == DCB_ROLE_CLIENT_HANDLER && fail_next_client_fd)
{
dcb_fake_write_errno[dcb->fd] = 32;
dcb_fake_write_ev[dcb->fd] = 29;
fail_next_client_fd = false;
}
else if (dcb->dcb_role == DCB_ROLE_BACKEND_HANDLER && fail_next_backend_fd)
{
dcb_fake_write_errno[dcb->fd] = 32;
dcb_fake_write_ev[dcb->fd] = 29;
fail_next_backend_fd = false;
}
}
}
#endif /* FAKE_CODE */
/**
* Check the parameters for dcb_write
*
@ -1824,6 +1784,7 @@ dcb_maybe_add_persistent(DCB *dcb)
MXS_DEBUG("%lu [dcb_maybe_add_persistent] Adding DCB to persistent pool, user %s.\n",
pthread_self(),
dcb->user);
dcb->was_persistent = false;
dcb->dcb_is_zombie = false;
dcb->persistentstart = time(NULL);
if (dcb->session)
@ -2421,28 +2382,10 @@ gw_write(DCB *dcb, GWBUF *writeq, bool *stop_writing)
errno = 0;
#if defined(FAKE_CODE)
if (fd > 0 && dcb_fake_write_errno[fd] != 0)
{
ss_dassert(dcb_fake_write_ev[fd] != 0);
written = write(fd, buf, nbytes / 2); /*< leave peer to read missing bytes */
if (written > 0)
{
written = -1;
errno = dcb_fake_write_errno[fd];
}
}
else if (fd > 0)
{
written = write(fd, buf, nbytes);
}
#else
if (fd > 0)
{
written = write(fd, buf, nbytes);
}
#endif /* FAKE_CODE */
saved_errno = errno;
errno = 0;
@ -3089,14 +3032,9 @@ dcb_accept(DCB *listener, GWPROTOCOL *protocol_funcs)
if ((c_sock = dcb_accept_one_connection(listener, (struct sockaddr *)&client_conn)) >= 0)
{
listener->stats.n_accepts++;
#if defined(SS_DEBUG)
MXS_DEBUG("%lu [gw_MySQLAccept] Accepted fd %d.",
pthread_self(),
c_sock);
#endif /* SS_DEBUG */
#if defined(FAKE_CODE)
conn_open[c_sock] = true;
#endif /* FAKE_CODE */
/* set nonblocking */
sendbuf = MXS_CLIENT_SO_SNDBUF;
@ -3229,27 +3167,12 @@ dcb_accept_one_connection(DCB *listener, struct sockaddr *client_conn)
socklen_t client_len = sizeof(struct sockaddr_storage);
int eno = 0;
#if defined(FAKE_CODE)
if (fail_next_accept > 0)
{
c_sock = -1;
eno = fail_accept_errno;
fail_next_accept -= 1;
}
else
{
fail_accept_errno = 0;
#endif /* FAKE_CODE */
/* new connection from client */
c_sock = accept(listener->fd,
client_conn,
&client_len);
eno = errno;
errno = 0;
#if defined(FAKE_CODE)
}
#endif /* FAKE_CODE */
/* new connection from client */
c_sock = accept(listener->fd,
client_conn,
&client_len);
eno = errno;
errno = 0;
if (c_sock == -1)
{
@ -3367,9 +3290,6 @@ dcb_listen(DCB *listener, const char *config, const char *protocol_name)
"attempting to register on an epoll instance.");
return -1;
}
#if defined(FAKE_CODE)
conn_open[listener_socket] = true;
#endif /* FAKE_CODE */
return 0;
}

View File

@ -128,6 +128,7 @@ static struct option long_options[] =
{"configdir", required_argument, 0, 'C'},
{"datadir", required_argument, 0, 'D'},
{"execdir", required_argument, 0, 'E'},
{"persistdir", required_argument, 0, 'F'},
{"language", required_argument, 0, 'N'},
{"piddir", required_argument, 0, 'P'},
{"basedir", required_argument, 0, 'R'},
@ -181,7 +182,6 @@ static int set_user(const char* user);
bool pid_file_exists();
void write_child_exit_code(int fd, int code);
static bool change_cwd();
void shutdown_server();
static void log_exit_status();
static bool daemonize();
static bool sniff_configuration(const char* filepath);
@ -288,20 +288,45 @@ static void sigusr1_handler (int i)
}
static const char shutdown_msg[] = "\n\nShutting down MaxScale\n\n";
static const char patience_msg[] =
"\n"
"Patience is a virtue...\n"
"Shutdown in progress, but one more Ctrl-C or SIGTERM and MaxScale goes down,\n"
"no questions asked.\n";
static void sigterm_handler(int i)
{
last_signal = i;
shutdown_server();
write(STDERR_FILENO, shutdown_msg, sizeof(shutdown_msg) - 1);
int n_shutdowns = maxscale_shutdown();
if (n_shutdowns == 1)
{
write(STDERR_FILENO, shutdown_msg, sizeof(shutdown_msg) - 1);
}
else
{
exit(EXIT_FAILURE);
}
}
static void
sigint_handler(int i)
{
last_signal = i;
shutdown_server();
write(STDERR_FILENO, shutdown_msg, sizeof(shutdown_msg) - 1);
int n_shutdowns = maxscale_shutdown();
if (n_shutdowns == 1)
{
write(STDERR_FILENO, shutdown_msg, sizeof(shutdown_msg) - 1);
}
else if (n_shutdowns == 2)
{
write(STDERR_FILENO, patience_msg, sizeof(patience_msg) - 1);
}
else
{
exit(EXIT_FAILURE);
}
}
static void
@ -895,8 +920,9 @@ static void usage(void)
" -B, --libdir=PATH path to module directory\n"
" -C, --configdir=PATH path to configuration file directory\n"
" -D, --datadir=PATH path to data directory,\n"
" stored embedded mysql tables\n"
" stores internal MaxScale data\n"
" -E, --execdir=PATH path to the maxscale and other executable files\n"
" -F, --persistdir=PATH path to persisted configuration directory\n"
" -N, --language=PATH path to errmsg.sys file\n"
" -P, --piddir=PATH path to PID file directory\n"
" -R, --basedir=PATH base path for all other paths\n"
@ -920,6 +946,7 @@ static void usage(void)
" execdir : %s\n"
" language : %s\n"
" piddir : %s\n"
" persistdir : %s\n"
"\n"
"If '--basedir' is provided then all other paths, including the default\n"
"configuration file path, are defined relative to that. As an example,\n"
@ -930,7 +957,8 @@ static void usage(void)
progname,
get_configdir(), default_cnf_fname,
get_configdir(), get_logdir(), get_cachedir(), get_libdir(),
get_datadir(), get_execdir(), get_langdir(), get_piddir());
get_datadir(), get_execdir(), get_langdir(), get_piddir(),
get_config_persistdir());
}
@ -1205,6 +1233,12 @@ bool set_dirs(const char *basedir)
set_piddir(path);
}
if (rv && (rv = handle_path_arg(&path, basedir, MXS_DEFAULT_DATA_SUBPATH "/"
MXS_DEFAULT_CONFIG_PERSIST_SUBPATH, true, true)))
{
set_config_persistdir(path);
}
return rv;
}
@ -1276,15 +1310,6 @@ int main(int argc, char **argv)
progname = *argv;
snprintf(datadir, PATH_MAX, "%s", default_datadir);
datadir[PATH_MAX] = '\0';
#if defined(FAKE_CODE)
memset(conn_open, 0, sizeof(bool) * 10240);
memset(dcb_fake_write_errno, 0, sizeof(unsigned char) * 10240);
memset(dcb_fake_write_ev, 0, sizeof(__int32_t) * 10240);
fail_next_backend_fd = false;
fail_next_client_fd = false;
fail_next_accept = 0;
fail_accept_errno = 0;
#endif /* FAKE_CODE */
file_write_header(stderr);
/*<
* Register functions which are called at exit except libmysqld-related,
@ -1303,7 +1328,7 @@ int main(int argc, char **argv)
}
}
while ((opt = getopt_long(argc, argv, "dcf:l:vVs:S:?L:D:C:B:U:A:P:G:N:E:",
while ((opt = getopt_long(argc, argv, "dcf:l:vVs:S:?L:D:C:B:U:A:P:G:N:E:F:",
long_options, &option_index)) != -1)
{
bool succp = true;
@ -1453,6 +1478,16 @@ int main(int argc, char **argv)
succp = false;
}
break;
case 'F':
if (handle_path_arg(&tmp_path, optarg, NULL, true, true))
{
set_config_persistdir(tmp_path);
}
else
{
succp = false;
}
break;
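/*
 * Illustrative invocation (path is hypothetical): the new option can be given
 * either as -F PATH or in its long form,
 *
 *   maxscale --persistdir=/var/lib/maxscale/maxscale.cnf.d
 *
 * and the same directory can also be set with the 'persistdir' key handled by
 * cnf_preparser() below.
 */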
case 'R':
if (handle_path_arg(&tmp_path, optarg, NULL, true, false))
{
@ -1936,7 +1971,13 @@ int main(int argc, char **argv)
/*
* Start the housekeeper thread
*/
hkinit();
if (!hkinit())
{
char* logerr = "Failed to start housekeeper thread.";
print_log_n_stderr(true, true, logerr, logerr, 0);
rc = MAXSCALE_INTERNALERROR;
goto return_main;
}
/*<
* Start the polling threads, note this is one less than is
@ -1974,6 +2015,11 @@ int main(int argc, char **argv)
*/
poll_waitevents((void *)0);
/*<
* Wait for the housekeeper to finish.
*/
hkfinish();
/*<
* Wait server threads' completion.
*/
@ -2030,14 +2076,22 @@ return_main:
/*<
* Shutdown MaxScale server
*/
void
shutdown_server()
int maxscale_shutdown()
{
service_shutdown();
poll_shutdown();
hkshutdown();
memlog_flush_all();
log_flush_shutdown();
static int n_shutdowns = 0;
int n = atomic_add(&n_shutdowns, 1);
if (n == 0)
{
service_shutdown();
poll_shutdown();
hkshutdown();
memlog_flush_all();
log_flush_shutdown();
}
return n + 1;
}
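/*
 * Sketch of the intended call sequence (illustrative): only the first call
 * performs the shutdown work, later calls merely report how many times a
 * shutdown has been requested so the signal handlers can escalate.
 *
 *   maxscale_shutdown();  // returns 1; services, polling and housekeeper stop
 *   maxscale_shutdown();  // returns 2; SIGINT handler prints patience_msg
 *   maxscale_shutdown();  // returns 3; handlers call exit(EXIT_FAILURE)
 */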
static void log_flush_shutdown(void)
@ -2481,6 +2535,20 @@ static int cnf_preparser(void* data, const char* section, const char* name, cons
}
}
}
else if (strcmp(name, "persistdir") == 0)
{
if (strcmp(get_config_persistdir(), default_config_persistdir) == 0)
{
if (handle_path_arg((char**)&tmp, (char*)value, NULL, true, false))
{
set_config_persistdir(tmp);
}
else
{
return 0;
}
}
}
else if (strcmp(name, "syslog") == 0)
{
if (!syslog_configured)

View File

@ -26,6 +26,17 @@ void set_configdir(char* str)
configdir = str;
}
/**
* Set the persisted configuration file directory
* @param str Path to directory
*/
void set_config_persistdir(char* str)
{
MXS_FREE(config_persistdir);
clean_up_pathname(str);
config_persistdir = str;
}
/**
* Set the log file directory
* @param str Path to directory
@ -160,6 +171,15 @@ char* get_configdir()
return configdir ? configdir : (char*) default_configdir;
}
/**
* Get the persisted configuration file directory
* @return The path to the persisted configuration file directory
*/
char* get_config_persistdir()
{
return config_persistdir ? config_persistdir : (char*) default_config_persistdir;
}
/**
* Get the PID file directory which contains maxscale.pid
* @return Path to the PID file directory

View File

@ -10,13 +10,14 @@
* of this software will be governed by version 2 or later of the General
* Public License.
*/
#include <maxscale/housekeeper.h>
#include <stdlib.h>
#include <string.h>
#include <maxscale/alloc.h>
#include <maxscale/housekeeper.h>
#include <maxscale/thread.h>
#include <maxscale/atomic.h>
#include <maxscale/semaphore.h>
#include <maxscale/spinlock.h>
#include <maxscale/log_manager.h>
#include <maxscale/thread.h>
/**
* @file housekeeper.c Provide a mechanism to run periodic tasks
@ -49,22 +50,28 @@ static HKTASK *tasks = NULL;
*/
static SPINLOCK tasklock = SPINLOCK_INIT;
static int do_shutdown = 0;
static bool do_shutdown = 0;
long hkheartbeat = 0; /*< One heartbeat is 100 milliseconds */
static THREAD hk_thr_handle;
static void hkthread(void *);
/**
* Initialise the housekeeper thread
*/
void
bool
hkinit()
{
if (thread_start(&hk_thr_handle, hkthread, NULL) == NULL)
bool inited = false;
if (thread_start(&hk_thr_handle, hkthread, NULL) != NULL)
{
MXS_ERROR("Failed to start housekeeper thread.");
inited = true;
}
else
{
MXS_ALERT("Failed to start housekeeper thread.");
}
return inited;
}
/**
@ -255,21 +262,17 @@ hkthread(void *data)
void *taskdata;
int i;
for (;;)
while (!do_shutdown)
{
for (i = 0; i < 10; i++)
{
if (do_shutdown)
{
return;
}
thread_millisleep(100);
hkheartbeat++;
}
now = time(0);
spinlock_acquire(&tasklock);
ptr = tasks;
while (ptr)
while (!do_shutdown && ptr)
{
if (ptr->nextdue <= now)
{
@ -297,16 +300,25 @@ hkthread(void *data)
}
spinlock_release(&tasklock);
}
MXS_NOTICE("Housekeeper shutting down.");
}
/**
* Called to shutdown the housekeeper
*
*/
void
hkshutdown()
{
do_shutdown = 1;
do_shutdown = true;
atomic_synchronize();
}
void hkfinish()
{
ss_dassert(do_shutdown);
MXS_NOTICE("Waiting for housekeeper to shut down.");
thread_wait(hk_thr_handle);
do_shutdown = false;
MXS_NOTICE("Housekeeper has shut down.");
}
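/*
 * Shutdown sequence sketch (illustrative): hkshutdown() only raises the flag,
 * hkfinish() blocks until the housekeeper thread has observed it.
 *
 *   hkshutdown();   // set do_shutdown; the thread notices it on its next
 *                   // pass of the outer loop, i.e. within roughly a second
 *   hkfinish();     // thread_wait(hk_thr_handle) until hkthread() returns
 */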
/**

View File

@ -58,7 +58,7 @@ const monitor_def_t monitor_event_definitions[MAX_MONITOR_EVENT] =
static MONITOR *allMonitors = NULL;
static SPINLOCK monLock = SPINLOCK_INIT;
static void monitor_servers_free(MONITOR_SERVERS *servers);
static void monitor_server_free_all(MONITOR_SERVERS *servers);
/**
* Allocate a new monitor, load the associated module for the monitor
@ -93,9 +93,8 @@ monitor_alloc(char *name, char *module)
mon->name = name;
mon->handle = NULL;
mon->databases = NULL;
mon->password = NULL;
mon->user = NULL;
mon->password = NULL;
*mon->password = '\0';
*mon->user = '\0';
mon->read_timeout = DEFAULT_READ_TIMEOUT;
mon->write_timeout = DEFAULT_WRITE_TIMEOUT;
mon->connect_timeout = DEFAULT_CONNECT_TIMEOUT;
@ -142,7 +141,7 @@ monitor_free(MONITOR *mon)
}
spinlock_release(&monLock);
free_config_parameter(mon->parameters);
monitor_servers_free(mon->databases);
monitor_server_free_all(mon->databases);
MXS_FREE(mon->name);
MXS_FREE(mon);
}
@ -258,6 +257,13 @@ monitorAddServer(MONITOR *mon, SERVER *server)
/* pending status is updated by get_replication_tree */
db->pending_status = 0;
monitor_state_t old_state = mon->state;
if (old_state == MONITOR_STATE_RUNNING)
{
monitorStop(mon);
}
spinlock_acquire(&mon->lock);
if (mon->databases == NULL)
@ -274,23 +280,87 @@ monitorAddServer(MONITOR *mon, SERVER *server)
ptr->next = db;
}
spinlock_release(&mon->lock);
if (old_state == MONITOR_STATE_RUNNING)
{
monitorStart(mon, mon->parameters);
}
}
static void monitor_server_free(MONITOR_SERVERS *tofree)
{
if (tofree)
{
if (tofree->con)
{
mysql_close(tofree->con);
}
MXS_FREE(tofree);
}
}
/**
* Free monitor server list
* @param servers Servers to free
*/
static void monitor_servers_free(MONITOR_SERVERS *servers)
static void monitor_server_free_all(MONITOR_SERVERS *servers)
{
while (servers)
{
MONITOR_SERVERS *tofree = servers;
servers = servers->next;
if (tofree->con)
monitor_server_free(tofree);
}
}
/**
* Remove a server from a monitor.
*
* @param mon The Monitor instance
* @param server The Server to remove
*/
void monitorRemoveServer(MONITOR *mon, SERVER *server)
{
monitor_state_t old_state = mon->state;
if (old_state == MONITOR_STATE_RUNNING)
{
monitorStop(mon);
}
spinlock_acquire(&mon->lock);
MONITOR_SERVERS *ptr = mon->databases;
if (ptr->server == server)
{
mon->databases = mon->databases->next;
}
else
{
MONITOR_SERVERS *prev = ptr;
while (ptr)
{
mysql_close(tofree->con);
if (ptr->server == server)
{
prev->next = ptr->next;
break;
}
prev = ptr;
ptr = ptr->next;
}
MXS_FREE(tofree);
}
spinlock_release(&mon->lock);
if (ptr)
{
monitor_server_free(ptr);
}
if (old_state == MONITOR_STATE_RUNNING)
{
monitorStart(mon, mon->parameters);
}
}
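/*
 * Note (editorial): both monitorAddServer() and monitorRemoveServer() stop a
 * running monitor before modifying the server list and restart it afterwards,
 * so the monitor thread never iterates a list that is being changed under it.
 */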
@ -305,8 +375,8 @@ static void monitor_servers_free(MONITOR_SERVERS *servers)
void
monitorAddUser(MONITOR *mon, char *user, char *passwd)
{
mon->user = MXS_STRDUP_A(user);
mon->password = MXS_STRDUP_A(passwd);
snprintf(mon->user, sizeof(mon->user), "%s", user);
snprintf(mon->password, sizeof(mon->password), "%s", passwd);
}
/**
@ -536,13 +606,8 @@ monitorGetList()
*/
bool check_monitor_permissions(MONITOR* monitor, const char* query)
{
if (monitor->databases == NULL)
{
MXS_ERROR("[%s] Monitor is missing the servers parameter.", monitor->name);
return false;
}
if (config_get_global_options()->skip_permission_checks)
if (monitor->databases == NULL || // No servers to check
config_get_global_options()->skip_permission_checks)
{
return true;
}
@ -992,8 +1057,15 @@ mon_connect_to_db(MONITOR* mon, MONITOR_SERVERS *database)
if ((database->con = mysql_init(NULL)))
{
char *uname = database->server->monuser ? database->server->monuser : mon->user;
char *passwd = database->server->monpw ? database->server->monpw : mon->password;
char *uname = mon->user;
char *passwd = mon->password;
if (database->server->monuser[0] && database->server->monpw[0])
{
uname = database->server->monuser;
passwd = database->server->monpw;
}
char *dpwd = decryptPassword(passwd);
mysql_options(database->con, MYSQL_OPT_CONNECT_TIMEOUT, (void *) &mon->connect_timeout);
@ -1036,12 +1108,9 @@ void
mon_log_connect_error(MONITOR_SERVERS* database, connect_result_t rval)
{
MXS_ERROR(rval == MONITOR_CONN_TIMEOUT ?
"Monitor timed out when connecting to "
"server %s:%d : \"%s\"" :
"Monitor was unable to connect to "
"server %s:%d : \"%s\"",
database->server->name,
database->server->port,
"Monitor timed out when connecting to server %s:%d : \"%s\"" :
"Monitor was unable to connect to server %s:%d : \"%s\"",
database->server->name, database->server->port,
mysql_error(database->con));
}
@ -1057,3 +1126,29 @@ void mon_log_state_change(MONITOR_SERVERS *ptr)
MXS_FREE(prev);
MXS_FREE(next);
}
bool monitor_server_in_use(const SERVER *server)
{
bool rval = false;
spinlock_acquire(&monLock);
for (MONITOR *mon = allMonitors; mon && !rval; mon = mon->next)
{
spinlock_acquire(&mon->lock);
for (MONITOR_SERVERS *db = mon->databases; db && !rval; db = db->next)
{
if (db->server == server)
{
rval = true;
}
}
spinlock_release(&mon->lock);
}
spinlock_release(&monLock);
return rval;
}

View File

@ -895,18 +895,6 @@ process_pollq(int thread_id)
thread_data[thread_id].event = ev;
}
#if defined(FAKE_CODE)
if (dcb_fake_write_ev[dcb->fd] != 0)
{
MXS_DEBUG("%lu [poll_waitevents] "
"Added fake events %d to ev %d.",
pthread_self(),
dcb_fake_write_ev[dcb->fd],
ev);
ev |= dcb_fake_write_ev[dcb->fd];
dcb_fake_write_ev[dcb->fd] = 0;
}
#endif /* FAKE_CODE */
ss_debug(spinlock_acquire(&dcb->dcb_initlock));
ss_dassert(dcb->state != DCB_STATE_ALLOC);
/* It isn't obvious that this is impossible */
@ -1007,20 +995,6 @@ process_pollq(int thread_id)
if (ev & EPOLLERR)
{
int eno = gw_getsockerrno(dcb->fd);
#if defined(FAKE_CODE)
if (eno == 0)
{
eno = dcb_fake_write_errno[dcb->fd];
char errbuf[MXS_STRERROR_BUFLEN];
MXS_DEBUG("%lu [poll_waitevents] "
"Added fake errno %d. "
"%s",
pthread_self(),
eno,
strerror_r(eno, errbuf, sizeof(errbuf)));
}
dcb_fake_write_errno[dcb->fd] = 0;
#endif /* FAKE_CODE */
if (eno != 0)
{
char errbuf[MXS_STRERROR_BUFLEN];

View File

@ -27,6 +27,12 @@
#define QC_TRACE()
#endif
struct type_name_info
{
const char* name;
size_t name_len;
};
static const char default_qc_name[] = "qc_sqlite";
static QUERY_CLASSIFIER* classifier;
@ -189,12 +195,12 @@ bool qc_query_has_clause(GWBUF* query)
return classifier->qc_query_has_clause(query);
}
char* qc_get_affected_fields(GWBUF* query)
void qc_get_field_info(GWBUF* query, const QC_FIELD_INFO** infos, size_t* n_infos)
{
QC_TRACE();
ss_dassert(classifier);
return classifier->qc_get_affected_fields(query);
classifier->qc_get_field_info(query, infos, n_infos);
}
char** qc_get_database_names(GWBUF* query, int* sizep)
@ -213,6 +219,141 @@ char* qc_get_prepare_name(GWBUF* query)
return classifier->qc_get_prepare_name(query);
}
struct type_name_info field_usage_to_type_name_info(qc_field_usage_t usage)
{
struct type_name_info info;
switch (usage)
{
case QC_USED_IN_SELECT:
{
static const char name[] = "QC_USED_IN_SELECT";
info.name = name;
info.name_len = sizeof(name) - 1;
}
break;
case QC_USED_IN_SUBSELECT:
{
static const char name[] = "QC_USED_IN_SUBSELECT";
info.name = name;
info.name_len = sizeof(name) - 1;
}
break;
case QC_USED_IN_WHERE:
{
static const char name[] = "QC_USED_IN_WHERE";
info.name = name;
info.name_len = sizeof(name) - 1;
}
break;
case QC_USED_IN_SET:
{
static const char name[] = "QC_USED_IN_SET";
info.name = name;
info.name_len = sizeof(name) - 1;
}
break;
case QC_USED_IN_GROUP_BY:
{
static const char name[] = "QC_USED_IN_GROUP_BY";
info.name = name;
info.name_len = sizeof(name) - 1;
}
break;
default:
{
static const char name[] = "UNKNOWN_FIELD_USAGE";
info.name = name;
info.name_len = sizeof(name) - 1;
}
break;
}
return info;
}
const char* qc_field_usage_to_string(qc_field_usage_t usage)
{
return field_usage_to_type_name_info(usage).name;
}
static const qc_field_usage_t FIELD_USAGE_VALUES[] =
{
QC_USED_IN_SELECT,
QC_USED_IN_SUBSELECT,
QC_USED_IN_WHERE,
QC_USED_IN_SET,
QC_USED_IN_GROUP_BY,
};
static const int N_FIELD_USAGE_VALUES =
sizeof(FIELD_USAGE_VALUES) / sizeof(FIELD_USAGE_VALUES[0]);
static const int FIELD_USAGE_MAX_LEN = 20; // strlen("QC_USED_IN_SUBSELECT");
char* qc_field_usage_mask_to_string(uint32_t mask)
{
size_t len = 0;
// First calculate how much space will be needed.
for (int i = 0; i < N_FIELD_USAGE_VALUES; ++i)
{
if (mask & FIELD_USAGE_VALUES[i])
{
if (len != 0)
{
++len; // strlen("|");
}
len += FIELD_USAGE_MAX_LEN;
}
}
++len;
// Then make one allocation and build the string.
char* s = (char*) MXS_MALLOC(len);
if (s)
{
if (len > 1)
{
char* p = s;
for (int i = 0; i < N_FIELD_USAGE_VALUES; ++i)
{
qc_field_usage_t value = FIELD_USAGE_VALUES[i];
if (mask & value)
{
if (p != s)
{
strcpy(p, "|");
++p;
}
struct type_name_info info = field_usage_to_type_name_info(value);
strcpy(p, info.name);
p += info.name_len;
}
}
}
else
{
*s = 0;
}
}
return s;
}
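/*
 * Illustrative usage (assuming the caller releases the string with MXS_FREE):
 *
 *   char* s = qc_field_usage_mask_to_string(QC_USED_IN_SELECT | QC_USED_IN_WHERE);
 *   // s now points to "QC_USED_IN_SELECT|QC_USED_IN_WHERE"
 *   MXS_FREE(s);
 */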
const char* qc_op_to_string(qc_query_op_t op)
{
switch (op)
@ -261,12 +402,6 @@ const char* qc_op_to_string(qc_query_op_t op)
}
}
struct type_name_info
{
const char* name;
size_t name_len;
};
struct type_name_info type_to_type_name_info(qc_query_type_t type)
{
struct type_name_info info;

View File

@ -175,8 +175,8 @@ secrets_readKeys(const char* path)
if (secret_stats.st_mode != (S_IRUSR | S_IFREG))
{
close(fd);
MXS_ERROR("Ignoring secrets file "
"%s, invalid permissions.",
MXS_ERROR("Ignoring secrets file %s, invalid permissions."
"The only permission on the file should be owner:read.",
secret_file);
return NULL;
}

View File

@ -35,6 +35,11 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <maxscale/service.h>
#include <maxscale/monitor.h>
#include <maxscale/session.h>
#include <maxscale/server.h>
#include <maxscale/spinlock.h>
@ -44,6 +49,7 @@
#include <maxscale/gw_ssl.h>
#include <maxscale/alloc.h>
#include <maxscale/modules.h>
#include <maxscale/gwdirs.h>
static SPINLOCK server_spin = SPINLOCK_INIT;
static SERVER *allServers = NULL;
@ -51,63 +57,62 @@ static SERVER *allServers = NULL;
static void spin_reporter(void *, char *, int);
static void server_parameter_free(SERVER_PARAM *tofree);
/**
* Allocate a new server within the gateway
*
*
* @param servname The server name
* @param protocol The protocol to use to connect to the server
* @param port The port to connect to
*
* @return The newly created server or NULL if an error occurred
*/
SERVER *
server_alloc(char *servname, char *protocol, unsigned short port, char *authenticator,
char *auth_options)
SERVER* server_alloc(const char *name, const char *address, unsigned short port,
const char *protocol, const char *authenticator, const char *auth_options)
{
if (authenticator)
if (authenticator == NULL && (authenticator = get_default_authenticator(protocol)) == NULL)
{
authenticator = MXS_STRDUP(authenticator);
}
else if ((authenticator = (char*)get_default_authenticator(protocol)) == NULL ||
(authenticator = MXS_STRDUP(authenticator)) == NULL)
{
MXS_ERROR("No authenticator defined for server at %s:%u and no default "
"authenticator for protocol '%s'.", servname, port, protocol);
MXS_ERROR("No authenticator defined for server '%s' and no default "
"authenticator for protocol '%s'.", name, protocol);
return NULL;
}
void *auth_instance = NULL;
if (!authenticator_init(&auth_instance, authenticator, auth_options))
{
MXS_ERROR("Failed to initialize authenticator module '%s' for server"
" at %s:%u.", authenticator, servname, port);
MXS_FREE(authenticator);
MXS_ERROR("Failed to initialize authenticator module '%s' for server '%s' ",
authenticator, name);
return NULL;
}
servname = MXS_STRNDUP(servname, MAX_SERVER_NAME_LEN);
protocol = MXS_STRDUP(protocol);
char *my_auth_options = NULL;
if (auth_options && (my_auth_options = MXS_STRDUP(auth_options)) == NULL)
{
return NULL;
}
SERVER *server = (SERVER *)MXS_CALLOC(1, sizeof(SERVER));
char *my_name = MXS_STRDUP(name);
char *my_protocol = MXS_STRDUP(protocol);
char *my_authenticator = MXS_STRDUP(authenticator);
if (!servname || !protocol || !server || !authenticator)
if (!server || !my_name || !my_protocol || !my_authenticator)
{
MXS_FREE(servname);
MXS_FREE(protocol);
MXS_FREE(server);
MXS_FREE(authenticator);
MXS_FREE(my_name);
MXS_FREE(my_protocol);
MXS_FREE(my_authenticator);
return NULL;
}
if (snprintf(server->name, sizeof(server->name), "%s", address) > sizeof(server->name))
{
MXS_WARNING("Truncated server address '%s' to the maximum size of %lu characters.",
address, sizeof(server->name));
}
#if defined(SS_DEBUG)
server->server_chk_top = CHK_NUM_SERVER;
server->server_chk_tail = CHK_NUM_SERVER;
#endif
server->name = servname;
server->protocol = protocol;
server->authenticator = authenticator;
server->unique_name = my_name;
server->protocol = my_protocol;
server->authenticator = my_authenticator;
server->auth_instance = auth_instance;
server->auth_options = my_auth_options;
server->port = port;
server->status = SERVER_RUNNING;
server->node_id = -1;
@ -121,6 +126,9 @@ server_alloc(char *servname, char *protocol, unsigned short port, char *authenti
server->persistmax = 0;
server->persistmaxtime = 0;
server->persistpoolmax = 0;
server->monuser[0] = '\0';
server->monpw[0] = '\0';
server->is_active = true;
spinlock_init(&server->persistlock);
spinlock_acquire(&server_spin);
@ -164,7 +172,6 @@ server_free(SERVER *tofreeserver)
spinlock_release(&server_spin);
/* Clean up session and free the memory */
MXS_FREE(tofreeserver->name);
MXS_FREE(tofreeserver->protocol);
MXS_FREE(tofreeserver->unique_name);
MXS_FREE(tofreeserver->server_string);
@ -243,42 +250,38 @@ server_get_persistent(SERVER *server, char *user, const char *protocol)
return NULL;
}
/**
* Set a unique name for the server
*
* @param server The server to set the name on
* @param name The unique name for the server
*/
void
server_set_unique_name(SERVER *server, char *name)
static inline SERVER* next_active_server(SERVER *server)
{
server->unique_name = MXS_STRDUP_A(name);
while (server && !server->is_active)
{
server = server->next;
}
return server;
}
/**
* Find an existing server using the unique section name in
* configuration file
* @brief Find a server with the specified name
*
* @param servname The Server name or address
* @param port The server port
* @return The server or NULL if not found
* @param name Name of the server
* @return The server or NULL if not found
*/
SERVER *
server_find_by_unique_name(char *name)
SERVER * server_find_by_unique_name(const char *name)
{
SERVER *server;
spinlock_acquire(&server_spin);
server = allServers;
SERVER *server = next_active_server(allServers);
while (server)
{
if (server->unique_name && strcmp(server->unique_name, name) == 0)
{
break;
}
server = server->next;
server = next_active_server(server->next);
}
spinlock_release(&server_spin);
return server;
}
@ -292,19 +295,20 @@ server_find_by_unique_name(char *name)
SERVER *
server_find(char *servname, unsigned short port)
{
SERVER *server;
spinlock_acquire(&server_spin);
server = allServers;
SERVER *server = next_active_server(allServers);
while (server)
{
if (strcmp(server->name, servname) == 0 && server->port == port)
{
break;
}
server = server->next;
server = next_active_server(server->next);
}
spinlock_release(&server_spin);
return server;
}
@ -335,15 +339,15 @@ printServer(SERVER *server)
void
printAllServers()
{
SERVER *server;
spinlock_acquire(&server_spin);
server = allServers;
SERVER *server = next_active_server(allServers);
while (server)
{
printServer(server);
server = server->next;
server = next_active_server(server->next);
}
spinlock_release(&server_spin);
}
@ -356,15 +360,15 @@ printAllServers()
void
dprintAllServers(DCB *dcb)
{
SERVER *server;
spinlock_acquire(&server_spin);
server = allServers;
SERVER *server = next_active_server(allServers);
while (server)
{
dprintServer(dcb, server);
server = server->next;
server = next_active_server(server->next);
}
spinlock_release(&server_spin);
}
@ -377,19 +381,20 @@ dprintAllServers(DCB *dcb)
void
dprintAllServersJson(DCB *dcb)
{
SERVER *server;
char *stat;
int len = 0;
int el = 1;
spinlock_acquire(&server_spin);
server = allServers;
SERVER *server = next_active_server(allServers);
while (server)
{
server = server->next;
server = next_active_server(server->next);
len++;
}
server = allServers;
server = next_active_server(allServers);
dcb_printf(dcb, "[\n");
while (server)
{
@ -456,9 +461,10 @@ dprintAllServersJson(DCB *dcb)
{
dcb_printf(dcb, " }\n");
}
server = server->next;
server = next_active_server(server->next);
el++;
}
dcb_printf(dcb, "]\n");
spinlock_release(&server_spin);
}
@ -473,6 +479,11 @@ dprintAllServersJson(DCB *dcb)
void
dprintServer(DCB *dcb, SERVER *server)
{
if (!SERVER_IS_ACTIVE(server))
{
return;
}
dcb_printf(dcb, "Server %p (%s)\n", server, server->unique_name);
dcb_printf(dcb, "\tServer: %s\n", server->name);
char* stat = server_status(server);
@ -602,30 +613,32 @@ dprintPersistentDCBs(DCB *pdcb, SERVER *server)
void
dListServers(DCB *dcb)
{
SERVER *server;
char *stat;
spinlock_acquire(&server_spin);
server = allServers;
SERVER *server = next_active_server(allServers);
bool have_servers = false;
if (server)
{
have_servers = true;
dcb_printf(dcb, "Servers.\n");
dcb_printf(dcb, "-------------------+-----------------+-------+-------------+--------------------\n");
dcb_printf(dcb, "%-18s | %-15s | Port | Connections | %-20s\n",
"Server", "Address", "Status");
dcb_printf(dcb, "-------------------+-----------------+-------+-------------+--------------------\n");
}
while (server)
{
stat = server_status(server);
char *stat = server_status(server);
dcb_printf(dcb, "%-18s | %-15s | %5d | %11d | %s\n",
server->unique_name, server->name,
server->port,
server->stats.n_current, stat);
MXS_FREE(stat);
server = server->next;
server = next_active_server(server->next);
}
if (allServers)
if (have_servers)
{
dcb_printf(dcb, "-------------------+-----------------+-------+-------------+--------------------\n");
}
@ -775,8 +788,16 @@ server_transfer_status(SERVER *dest_server, SERVER *source_server)
void
serverAddMonUser(SERVER *server, char *user, char *passwd)
{
server->monuser = MXS_STRDUP_A(user);
server->monpw = MXS_STRDUP_A(passwd);
if (snprintf(server->monuser, sizeof(server->monuser), "%s", user) > sizeof(server->monuser))
{
MXS_WARNING("Truncated monitor user for server '%s', maximum username "
"length is %lu characters.", server->unique_name, sizeof(server->monuser));
}
if (snprintf(server->monpw, sizeof(server->monpw), "%s", passwd) > sizeof(server->monpw))
{
MXS_WARNING("Truncated monitor password for server '%s', maximum password "
"length is %lu characters.", server->unique_name, sizeof(server->monpw));
}
}
/**
@ -793,28 +814,12 @@ serverAddMonUser(SERVER *server, char *user, char *passwd)
* @param passwd The password to use for the monitor user
*/
void
server_update(SERVER *server, char *protocol, char *user, char *passwd)
server_update_credentials(SERVER *server, char *user, char *passwd)
{
if (!strcmp(server->protocol, protocol))
{
MXS_NOTICE("Update server protocol for server %s to protocol %s.",
server->name,
protocol);
MXS_FREE(server->protocol);
server->protocol = MXS_STRDUP_A(protocol);
}
if (user != NULL && passwd != NULL)
{
if (strcmp(server->monuser, user) == 0 ||
strcmp(server->monpw, passwd) == 0)
{
MXS_NOTICE("Update server monitor credentials for server %s",
server->name);
MXS_FREE(server->monuser);
MXS_FREE(server->monpw);
serverAddMonUser(server, user, passwd);
}
serverAddMonUser(server, user, passwd);
MXS_NOTICE("Updated monitor credentials for server '%s'", server->name);
}
}
@ -922,16 +927,19 @@ serverRowCallback(RESULTSET *set, void *data)
return NULL;
}
(*rowno)++;
row = resultset_make_row(set);
resultset_row_set(row, 0, server->unique_name);
resultset_row_set(row, 1, server->name);
sprintf(buf, "%d", server->port);
resultset_row_set(row, 2, buf);
sprintf(buf, "%d", server->stats.n_current);
resultset_row_set(row, 3, buf);
stat = server_status(server);
resultset_row_set(row, 4, stat);
MXS_FREE(stat);
if (SERVER_IS_ACTIVE(server))
{
row = resultset_make_row(set);
resultset_row_set(row, 0, server->unique_name);
resultset_row_set(row, 1, server->name);
sprintf(buf, "%d", server->port);
resultset_row_set(row, 2, buf);
sprintf(buf, "%d", server->stats.n_current);
resultset_row_set(row, 3, buf);
stat = server_status(server);
resultset_row_set(row, 4, stat);
MXS_FREE(stat);
}
spinlock_release(&server_spin);
return row;
}
@ -979,11 +987,7 @@ server_update_address(SERVER *server, char *address)
spinlock_acquire(&server_spin);
if (server && address)
{
if (server->name)
{
MXS_FREE(server->name);
}
server->name = MXS_STRDUP_A(address);
strcpy(server->name, address);
}
spinlock_release(&server_spin);
}
@ -1070,3 +1074,294 @@ bool server_set_version_string(SERVER* server, const char* string)
return rval;
}
/**
* Creates a server configuration at the location pointed by @c filename
*
* @param server Server to serialize into a configuration
* @param filename Filename where configuration is written
* @return True on success, false on error
*/
static bool create_server_config(SERVER *server, const char *filename)
{
int file = open(filename, O_EXCL | O_CREAT | O_WRONLY, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
if (file == -1)
{
char errbuf[MXS_STRERROR_BUFLEN];
MXS_ERROR("Failed to open file '%s' when serializing server '%s': %d, %s",
filename, server->unique_name, errno, strerror_r(errno, errbuf, sizeof(errbuf)));
return false;
}
// TODO: Check for return values on all of the dprintf calls
dprintf(file, "[%s]\n", server->unique_name);
dprintf(file, "type=server\n");
dprintf(file, "protocol=%s\n", server->protocol);
dprintf(file, "address=%s\n", server->name);
dprintf(file, "port=%u\n", server->port);
dprintf(file, "authenticator=%s\n", server->authenticator);
if (server->auth_options)
{
dprintf(file, "authenticator_options=%s\n", server->auth_options);
}
if (*server->monpw && *server->monuser)
{
dprintf(file, "monitoruser=%s\n", server->monuser);
dprintf(file, "monitorpw=%s\n", server->monpw);
}
if (server->persistpoolmax)
{
dprintf(file, "persistpoolmax=%ld\n", server->persistpoolmax);
}
if (server->persistmaxtime)
{
dprintf(file, "persistmaxtime=%ld\n", server->persistmaxtime);
}
if (server->server_ssl)
{
dprintf(file, "ssl=required\n");
if (server->server_ssl->ssl_cert)
{
dprintf(file, "ssl_cert=%s\n", server->server_ssl->ssl_cert);
}
if (server->server_ssl->ssl_key)
{
dprintf(file, "ssl_key=%s\n", server->server_ssl->ssl_key);
}
if (server->server_ssl->ssl_ca_cert)
{
dprintf(file, "ssl_ca_cert=%s\n", server->server_ssl->ssl_ca_cert);
}
if (server->server_ssl->ssl_cert_verify_depth)
{
dprintf(file, "ssl_cert_verify_depth=%d\n", server->server_ssl->ssl_cert_verify_depth);
}
const char *version = NULL;
switch (server->server_ssl->ssl_method_type)
{
case SERVICE_TLS10:
version = "TLSV10";
break;
#ifdef OPENSSL_1_0
case SERVICE_TLS11:
version = "TLSV11";
break;
case SERVICE_TLS12:
version = "TLSV12";
break;
#endif
case SERVICE_SSL_TLS_MAX:
version = "MAX";
break;
default:
break;
}
if (version)
{
dprintf(file, "ssl_version=%s\n", version);
}
}
close(file);
return true;
}
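/*
 * For illustration, a serialized server produces an INI section along these
 * lines (names and values are hypothetical):
 *
 *   [MyServer]
 *   type=server
 *   protocol=MySQLBackend
 *   address=192.168.0.10
 *   port=3306
 *   authenticator=MySQLAuth
 */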
/**
* @brief Serialize a server to a file
*
* This converts @c server into an INI format file. This allows created servers
* to be persisted to disk. This will replace any existing files with the same
* name.
*
* @param server Server to serialize
* @return False if the serialization of the server fails, true if it was successful
*/
static bool server_serialize(SERVER *server)
{
bool rval = false;
char filename[PATH_MAX];
snprintf(filename, sizeof(filename), "%s/%s.cnf.tmp", get_config_persistdir(),
server->unique_name);
if (unlink(filename) == -1 && errno != ENOENT)
{
char err[MXS_STRERROR_BUFLEN];
MXS_ERROR("Failed to remove temporary server configuration at '%s': %d, %s",
filename, errno, strerror_r(errno, err, sizeof(err)));
}
else if (create_server_config(server, filename))
{
char final_filename[PATH_MAX];
strcpy(final_filename, filename);
char *dot = strrchr(final_filename, '.');
ss_dassert(dot);
*dot = '\0';
if (rename(filename, final_filename) == 0)
{
rval = true;
}
else
{
char err[MXS_STRERROR_BUFLEN];
MXS_ERROR("Failed to rename temporary server configuration at '%s': %d, %s",
filename, errno, strerror_r(errno, err, sizeof(err)));
}
}
return rval;
}
/** Try to find a server with a matching name that has been destroyed */
static SERVER* find_destroyed_server(const char *name, const char *protocol,
const char *authenticator, const char *auth_options)
{
spinlock_acquire(&server_spin);
SERVER *server = allServers;
while (server)
{
CHK_SERVER(server);
if (strcmp(server->unique_name, name) == 0 &&
strcmp(server->protocol, protocol) == 0 &&
strcmp(server->authenticator, authenticator) == 0)
{
if ((auth_options == NULL && server->auth_options == NULL) ||
(auth_options && server->auth_options &&
strcmp(server->auth_options, auth_options) == 0))
{
break;
}
}
server = server->next;
}
spinlock_release(&server_spin);
return server;
}
bool server_create(const char *name, const char *address, const char *port,
const char *protocol, const char *authenticator,
const char *authenticator_options)
{
bool rval = false;
if (server_find_by_unique_name(name) == NULL)
{
// TODO: Get default values from the protocol module
if (port == NULL)
{
port = "3306";
}
if (protocol == NULL)
{
protocol = "MySQLBackend";
}
if (authenticator == NULL && (authenticator = get_default_authenticator(protocol)) == NULL)
{
MXS_ERROR("No authenticator defined for server '%s' and no default "
"authenticator for protocol '%s'.", name, protocol);
return false;
}
/** First check if this service has been created before */
SERVER *server = find_destroyed_server(name, protocol, authenticator,
authenticator_options);
if (server)
{
/** Found old server, replace network details with new ones and
* reactivate it */
snprintf(server->name, sizeof(server->name), "%s", address);
server->port = atoi(port);
server->is_active = true;
rval = true;
}
else if ((server = server_alloc(name, address, atoi(port), protocol, authenticator,
authenticator_options)))
{
if (server_serialize(server))
{
/** server_alloc will add the server to the global list of
* servers so we don't need to manually add it. */
rval = true;
}
}
}
return rval;
}
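/*
 * Usage sketch (illustrative, names are hypothetical): passing NULL for port,
 * protocol and authenticator falls back to the defaults chosen above, so
 *
 *   server_create("db1", "192.168.0.10", NULL, NULL, NULL, NULL);
 *
 * creates a MySQLBackend server at 192.168.0.10:3306 with the protocol's
 * default authenticator and persists it through server_serialize().
 */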
bool server_destroy(SERVER *server)
{
bool rval = false;
if (service_server_in_use(server) || monitor_server_in_use(server))
{
MXS_ERROR("Cannot destroy server '%s' as it is used by at least one "
"service or monitor", server->unique_name);
}
else
{
char filename[PATH_MAX];
snprintf(filename, sizeof(filename), "%s/%s.cnf", get_config_persistdir(),
server->unique_name);
if (unlink(filename) == -1)
{
if (errno != ENOENT)
{
char err[MXS_STRERROR_BUFLEN];
MXS_ERROR("Failed to remove persisted server configuration '%s': %d, %s",
filename, errno, strerror_r(errno, err, sizeof(err)));
}
else
{
rval = true;
MXS_WARNING("Server '%s' was not created at runtime. Remove the "
"server manually from the correct configuration file.",
server->unique_name);
}
}
else
{
rval = true;
}
if (rval)
{
MXS_NOTICE("Destroyed server '%s' at %s:%u", server->unique_name,
server->name, server->port);
server->is_active = false;
}
}
return rval;
}
bool server_is_ssl_parameter(const char *key)
{
// TODO: Implement this
return false;
}
void server_update_ssl(SERVER *server, const char *key, const char *value)
{
// TODO: Implement this
}

View File

@ -34,6 +34,7 @@
* 03/03/15 Massimiliano Pinto Added config_enable_feedback_task() call in serviceStartAll
* 19/06/15 Martin Brampton More meaningful names for temp variables
* 31/05/16 Martin Brampton Implement connection throttling
* 08/11/16 Massimiliano Pinto Added: service_shutdown() calls destroyInstance() hooks for routers
*
* @endverbatim
*/
@ -66,6 +67,9 @@
#include <maxscale/alloc.h>
#include <maxscale/utils.h>
/** Base value for server weights */
#define SERVICE_BASE_SERVER_WEIGHT 1000
/** To be used with configuration type checks */
typedef struct typelib_st
{
@ -96,6 +100,7 @@ static void service_add_qualified_param(SERVICE* svc,
CONFIG_PARAMETER* param);
static void service_internal_restart(void *data);
static void service_queue_check(void *data);
static void service_calculate_weights(SERVICE *service);
/**
* Allocate a new service for the gateway to support
@ -145,6 +150,7 @@ service_alloc(const char *servname, const char *router)
service->capabilities = service->router->getCapabilities();
service->client_count = 0;
service->n_dbref = 0;
service->name = (char*)servname;
service->routerModule = (char*)router;
service->users_from_all = false;
@ -474,6 +480,9 @@ static void free_string_array(char** array)
int
serviceStart(SERVICE *service)
{
/** Calculate the server weights */
service_calculate_weights(service);
int listeners = 0;
char **router_options = copy_string_array(service->routerOptions);
@ -729,6 +738,28 @@ int serviceHasProtocol(SERVICE *service, const char *protocol,
return proto != NULL;
}
/**
* Allocate a new server reference
*
* @param server Server to refer to
* @return Server reference or NULL on error
*/
static SERVER_REF* server_ref_create(SERVER *server)
{
SERVER_REF *sref = MXS_MALLOC(sizeof(SERVER_REF));
if (sref)
{
sref->next = NULL;
sref->server = server;
sref->weight = SERVICE_BASE_SERVER_WEIGHT;
sref->connections = 0;
sref->active = true;
}
return sref;
}
/**
* Add a backend database server to a service
*
@ -738,31 +769,71 @@ int serviceHasProtocol(SERVICE *service, const char *protocol,
void
serviceAddBackend(SERVICE *service, SERVER *server)
{
SERVER_REF *sref = MXS_MALLOC(sizeof(SERVER_REF));
SERVER_REF *new_ref = server_ref_create(server);
if (sref)
if (new_ref)
{
sref->next = NULL;
sref->server = server;
spinlock_acquire(&service->spin);
service->n_dbref++;
if (service->dbref)
{
SERVER_REF *ref = service->dbref;
while (ref->next)
SERVER_REF *prev = ref;
while (ref)
{
if (ref->server == server)
{
ref->active = true;
break;
}
prev = ref;
ref = ref->next;
}
ref->next = sref;
if (ref == NULL)
{
/** A new server that hasn't been used by this service */
atomic_synchronize();
prev->next = new_ref;
}
}
else
{
service->dbref = sref;
atomic_synchronize();
service->dbref = new_ref;
}
spinlock_release(&service->spin);
}
}
/**
* @brief Remove a server from a service
*
* This function sets the server reference into an inactive state. This does not
* remove the server from the list or free any of the memory.
*
* @param service Service to modify
* @param server Server to remove
*/
void serviceRemoveBackend(SERVICE *service, const SERVER *server)
{
spinlock_acquire(&service->spin);
for (SERVER_REF *ref = service->dbref; ref; ref = ref->next)
{
if (ref->server == server)
{
ref->active = false;
service->n_dbref--;
break;
}
}
spinlock_release(&service->spin);
}
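/*
 * Note (editorial): serviceRemoveBackend() only clears ref->active; the
 * SERVER_REF stays in the list so that a later serviceAddBackend() with the
 * same server can simply flip the flag back instead of allocating a new
 * reference.
 */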
/**
* Test if a server is part of a service
*
@ -1294,8 +1365,11 @@ void dprintService(DCB *dcb, SERVICE *service)
dcb_printf(dcb, "\tBackend databases:\n");
while (server)
{
dcb_printf(dcb, "\t\t%s:%d Protocol: %s\n", server->server->name, server->server->port,
server->server->protocol);
if (SERVER_REF_IS_ACTIVE(server))
{
dcb_printf(dcb, "\t\t%s:%d Protocol: %s\n", server->server->name,
server->server->port, server->server->protocol);
}
server = server->next;
}
if (service->weightby)
@ -1802,6 +1876,23 @@ void service_shutdown()
while (svc != NULL)
{
svc->svc_do_shutdown = true;
/* Call destroyInstance hook for routers */
if (svc->router->destroyInstance)
{
svc->router->destroyInstance(svc->router_instance);
}
if (svc->n_filters)
{
FILTER_DEF **filters = svc->filters;
for (int i=0; i < svc->n_filters; i++)
{
if (filters[i]->obj->destroyInstance)
{
/* Call destroyInstance hook for filters */
filters[i]->obj->destroyInstance(filters[i]->filter);
}
}
}
svc = svc->next;
}
spinlock_release(&service_spin);
@ -2027,3 +2118,102 @@ bool service_all_services_have_listeners()
spinlock_release(&service_spin);
return rval;
}
static void service_calculate_weights(SERVICE *service)
{
char *weightby = serviceGetWeightingParameter(service);
if (weightby && service->dbref)
{
/** Service has a weighting parameter and at least one server */
int total = 0;
/** Calculate total weight */
for (SERVER_REF *server = service->dbref; server; server = server->next)
{
server->weight = SERVICE_BASE_SERVER_WEIGHT;
char *param = serverGetParameter(server->server, weightby);
if (param)
{
total += atoi(param);
}
}
if (total == 0)
{
MXS_WARNING("Weighting Parameter for service '%s' will be ignored as "
"no servers have values for the parameter '%s'.",
service->name, weightby);
}
else if (total < 0)
{
MXS_ERROR("Sum of weighting parameter '%s' for service '%s' exceeds "
"maximum value of %d. Weighting will be ignored.",
weightby, service->name, INT_MAX);
}
else
{
/** Calculate the relative weight of the servers */
for (SERVER_REF *server = service->dbref; server; server = server->next)
{
char *param = serverGetParameter(server->server, weightby);
if (param)
{
int wght = atoi(param);
int perc = (wght * SERVICE_BASE_SERVER_WEIGHT) / total;
if (perc == 0)
{
MXS_ERROR("Weighting parameter '%s' with a value of %d for"
" server '%s' rounds down to zero with total weight"
" of %d for service '%s'. No queries will be "
"routed to this server as long as a server with"
" positive weight is available.",
weightby, wght, server->server->unique_name,
total, service->name);
}
else if (perc < 0)
{
MXS_ERROR("Weighting parameter '%s' for server '%s' is too large, "
"maximum value is %d. No weighting will be used for this "
"server.", weightby, server->server->unique_name,
INT_MAX / SERVICE_BASE_SERVER_WEIGHT);
perc = SERVICE_BASE_SERVER_WEIGHT;
}
server->weight = perc;
}
else
{
MXS_WARNING("Server '%s' has no parameter '%s' used for weighting"
" for service '%s'.", server->server->unique_name,
weightby, service->name);
}
}
}
}
}
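/*
 * Worked example (illustrative): with three servers whose weighting parameter
 * values are 1, 2 and 1, total == 4 and each relative weight becomes
 * (wght * SERVICE_BASE_SERVER_WEIGHT) / total, i.e. 250, 500 and 250 out of
 * 1000, so the second server is presumably offered about half of the load.
 */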
bool service_server_in_use(const SERVER *server)
{
bool rval = false;
spinlock_acquire(&service_spin);
for (SERVICE *service = allServices; service && !rval; service = service->next)
{
spinlock_acquire(&service->spin);
for (SERVER_REF *ref = service->dbref; ref && !rval; ref = ref->next)
{
if (ref->active && ref->server == server)
{
rval = true;
}
}
spinlock_release(&service->spin);
}
spinlock_release(&service_spin);
return rval;
}

View File

@ -124,6 +124,8 @@ session_alloc(SERVICE *service, DCB *client_dcb)
MXS_OOM();
return NULL;
}
/** Assign a session id and increase */
session->ses_id = (size_t)atomic_add(&session_id, 1) + 1;
session->ses_is_child = (bool) DCB_IS_CLONE(client_dcb);
session->service = service;
session->client_dcb = client_dcb;
@ -221,8 +223,6 @@ session_alloc(SERVICE *service, DCB *client_dcb)
session->client_dcb->user,
session->client_dcb->remote);
}
/** Assign a session id and increase, insert session into list */
session->ses_id = (size_t)atomic_add(&session_id, 1) + 1;
atomic_add(&service->stats.n_sessions, 1);
atomic_add(&service->stats.n_current, 1);
CHK_SESSION(session);

View File

@ -37,6 +37,11 @@
#include <maxscale/server.h>
#include <maxscale/log_manager.h>
#include <maxscale/gwdirs.h>
// This is pretty ugly but it's required to test internal functions
#include "../config.c"
#include "../server.c"
/**
* test1 Allocate a server and do lots of other things
*
@ -51,7 +56,7 @@ test1()
/* Server tests */
ss_dfprintf(stderr, "testserver : creating server called MyServer");
set_libdir(MXS_STRDUP_A("../../modules/authenticator/"));
server = server_alloc("MyServer", "HTTPD", 9876, "NullAuthAllow", NULL);
server = server_alloc("uniquename", "127.0.0.1", 9876, "HTTPD", "NullAuthAllow", NULL);
ss_info_dassert(server, "Allocating the server should not fail");
mxs_log_flush_sync();
@ -67,7 +72,6 @@ test1()
ss_dfprintf(stderr, "\t..done\nTesting Unique Name for Server.");
ss_info_dassert(NULL == server_find_by_unique_name("uniquename"),
"Should not find non-existent unique name.");
server_set_unique_name(server, "uniquename");
mxs_log_flush_sync();
ss_info_dassert(server == server_find_by_unique_name("uniquename"), "Should find by unique name.");
ss_dfprintf(stderr, "\t..done\nTesting Status Setting for Server.");
@ -103,11 +107,83 @@ test1()
}
#define TEST(A, B) do { if(!(A)){ printf(B"\n"); return false; }} while(false)
bool test_load_config(const char *input, SERVER *server)
{
DUPLICATE_CONTEXT dcontext;
if (duplicate_context_init(&dcontext))
{
CONFIG_CONTEXT ccontext = {.object = ""};
if (config_load_single_file(input, &dcontext, &ccontext))
{
CONFIG_CONTEXT *obj = ccontext.next;
CONFIG_PARAMETER *param = obj->parameters;
TEST(strcmp(obj->object, server->unique_name) == 0, "Server names differ");
TEST(strcmp(server->name, config_get_param(param, "address")->value) == 0, "Server addresses differ");
TEST(strcmp(server->protocol, config_get_param(param, "protocol")->value) == 0, "Server protocols differ");
TEST(strcmp(server->authenticator, config_get_param(param, "authenticator")->value) == 0,
"Server authenticators differ");
TEST(strcmp(server->auth_options, config_get_param(param, "authenticator_options")->value) == 0,
"Server authenticator options differ");
TEST(server->port == atoi(config_get_param(param, "port")->value), "Server ports differ");
TEST(create_new_server(obj) == 0, "Failed to create server from loaded config");
}
}
return true;
}
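/**
* Serialize a server to disk, load the definition back and verify that both
* copies serialize to identical configuration files.
*/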
bool test_serialize()
{
char name[] = "serialized-server";
char config_name[] = "serialized-server.cnf";
char old_config_name[] = "serialized-server.cnf.old";
char *persist_dir = MXS_STRDUP_A("./");
set_config_persistdir(persist_dir);
SERVER *server = server_alloc(name, "127.0.0.1", 9876, "HTTPD", "NullAuthAllow", "fake=option");
TEST(server, "Server allocation failed");
/** Make sure the files don't exist */
unlink(config_name);
unlink(old_config_name);
/** Serialize server to disk */
TEST(server_serialize(server), "Failed to synchronize original server");
/** Load it again */
TEST(test_load_config(config_name, server), "Failed to load the serialized server");
/** We should have two identical servers */
SERVER *created = server_find_by_unique_name(name);
TEST(created->next == server, "We should end up with two servers");
rename(config_name, old_config_name);
/** Serialize the loaded server to disk */
TEST(server_serialize(created), "Failed to synchronize the copied server");
/** Check that they serialize to identical files */
char cmd[1024];
sprintf(cmd, "diff ./%s ./%s", config_name, old_config_name);
TEST(system(cmd) == 0, "The files are not identical");
return true;
}
int main(int argc, char **argv)
{
int result = 0;
result += test1();
if (!test_serialize())
{
result++;
}
exit(result);
}

View File

@ -45,9 +45,9 @@
static int
test1()
{
USERS *users;
char *authdata;
int result, count;
USERS *users;
const char *authdata;
int result, count;
/* Poll tests */
ss_dfprintf(stderr,

View File

@ -87,12 +87,12 @@ users_free(USERS *users)
* @return The number of users added to the table
*/
int
users_add(USERS *users, char *user, char *auth)
users_add(USERS *users, const char *user, const char *auth)
{
int add;
atomic_add(&users->stats.n_adds, 1);
add = hashtable_add(users->data, user, auth);
add = hashtable_add(users->data, (char*)user, (char*)auth);
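// The casts drop the const qualifiers because the underlying hashtable API
// takes non-const pointers.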
atomic_add(&users->stats.n_entries, add);
return add;
}
@ -105,12 +105,12 @@ users_add(USERS *users, char *user, char *auth)
* @return The number of users deleted from the table
*/
int
users_delete(USERS *users, char *user)
users_delete(USERS *users, const char *user)
{
int del;
atomic_add(&users->stats.n_deletes, 1);
del = hashtable_delete(users->data, user);
del = hashtable_delete(users->data, (char*)user);
atomic_add(&users->stats.n_entries, -del);
return del;
}
@ -122,11 +122,12 @@ users_delete(USERS *users, char *user)
* @param user The user name
* @return The authentication data or NULL on error
*/
char
*users_fetch(USERS *users, char *user)
const char
*users_fetch(USERS *users, const char *user)
{
atomic_add(&users->stats.n_fetches, 1);
return hashtable_fetch(users->data, user);
// TODO: Returning data from the hashtable is not threadsafe.
return hashtable_fetch(users->data, (char*)user);
}
/**
@ -139,13 +140,13 @@ char
* @return Number of users updated
*/
int
users_update(USERS *users, char *user, char *auth)
users_update(USERS *users, const char *user, const char *auth)
{
if (hashtable_delete(users->data, user) == 0)
if (hashtable_delete(users->data, (char*)user) == 0)
{
return 0;
}
return hashtable_add(users->data, user, auth);
return hashtable_add(users->data, (char*)user, (char*)auth);
}
/**
@ -154,7 +155,7 @@ users_update(USERS *users, char *user, char *auth)
* @param users The users table
*/
void
usersPrint(USERS *users)
usersPrint(const USERS *users)
{
printf("Users table data\n");
hashtable_stats(users->data);
@ -167,7 +168,7 @@ usersPrint(USERS *users)
* @param users The users table
*/
void
dcb_usersPrint(DCB *dcb, USERS *users)
dcb_usersPrint(DCB *dcb, const USERS *users)
{
if (users == NULL || users->data == NULL)
{

View File

@ -2649,17 +2649,12 @@ static bool check_server_permissions(SERVICE *service, SERVER* server,
bool check_service_permissions(SERVICE* service)
{
if (is_internal_service(service->routerModule) ||
config_get_global_options()->skip_permission_checks)
config_get_global_options()->skip_permission_checks ||
service->dbref == NULL) // No servers to check
{
return true;
}
if (service->dbref == NULL)
{
MXS_ERROR("[%s] Service is missing the servers parameter.", service->name);
return false;
}
char *user, *password;
if (serviceGetUser(service, &user, &password) == 0)

View File

@ -139,7 +139,7 @@ static int cdc_auth_check(DCB *dcb, CDC_protocol *protocol, char *username, uint
{
if (dcb->listener->users)
{
char *user_password = users_fetch(dcb->listener->users, username);
const char *user_password = users_fetch(dcb->listener->users, username);
if (user_password)
{

View File

@ -1,4 +1,5 @@
add_subdirectory(cache)
add_subdirectory(maxrows)
add_subdirectory(ccrfilter)
add_subdirectory(dbfwfilter)
add_subdirectory(gatekeeper)

View File

@ -80,7 +80,8 @@ FILTER_OBJECT *GetModuleObject()
routeQuery,
clientReply,
diagnostics,
getCapabilities
getCapabilities,
NULL, // destroyInstance
};
return &object;

View File

@ -782,9 +782,119 @@ static bool cache_rule_compare_n(CACHE_RULE *self, const char *value, size_t len
static bool cache_rule_matches_column(CACHE_RULE *self, const char *default_db, const GWBUF *query)
{
ss_dassert(self->attribute == CACHE_ATTRIBUTE_COLUMN);
ss_info_dassert(!true, "Column matching not implemented yet.");
return false;
// TODO: Do this "parsing" when the rule item is created.
char buffer[strlen(self->value) + 1];
strcpy(buffer, self->value);
const char* rule_column = NULL;
const char* rule_table = NULL;
const char* rule_database = NULL;
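// The rule value has the form [database.[table.]]column; split it at the
// dots so that each component can be compared separately below.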
char* dot1 = strchr(buffer, '.');
char* dot2 = dot1 ? strchr(dot1 + 1, '.') : NULL;
if (dot1 && dot2)
{
rule_database = buffer;
*dot1 = 0;
rule_table = dot1 + 1;
*dot2 = 0;
rule_column = dot2 + 1;
}
else if (dot1)
{
rule_table = buffer;
*dot1 = 0;
rule_column = dot1 + 1;
}
else
{
rule_column = buffer;
}
const QC_FIELD_INFO *infos;
size_t n_infos;
int n_tables;
char** tables = qc_get_table_names((GWBUF*)query, &n_tables, false);
const char* default_table = NULL;
if (n_tables == 1)
{
// Only if we have exactly one table can we assume anything
// about a table that has not been mentioned explicitly.
default_table = tables[0];
}
qc_get_field_info((GWBUF*)query, &infos, &n_infos);
bool matches = false;
size_t i = 0;
while (!matches && (i < n_infos))
{
const QC_FIELD_INFO *info = (infos + i);
if ((strcmp(info->column, rule_column) == 0) || (strcmp(info->column, "*") == 0))
{
if (rule_table)
{
const char* check_table = info->table ? info->table : default_table;
if (check_table && (strcmp(check_table, rule_table) == 0))
{
if (rule_database)
{
const char *check_database = info->database ? info->database : default_db;
if (check_database && (strcmp(check_database, rule_database) == 0))
{
matches = true;
}
else
{
// If the rules specifies a database and either the database
// does not match or we do not know the database, the rule
// does *not* match.
matches = false;
}
}
else
{
// If the rule specifies no table, then if the table and column matches,
// the rule matches.
matches = true;
}
}
else
{
// The rules specifies a table and either the table does not match
// or we do not know the table, the rule does *not* match.
matches = false;
}
}
else
{
// If the rule specifies no table, then if the column matches, the
// rule matches.
matches = true;
}
}
++i;
}
if (tables)
{
for (i = 0; i < (size_t)n_tables; ++i)
{
MXS_FREE(tables[i]);
}
MXS_FREE(tables);
}
return matches;
}
/**

View File

@ -1,7 +1,7 @@
# Build RocksDB
if ((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") AND (NOT (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.7)))
message(STATUS "GCC >= 4.7, RocksDB is built.")
if ((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") AND (NOT (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.8)))
message(STATUS "GCC >= 4.8, RocksDB is built.")
set(ROCKSDB_REPO "https://github.com/facebook/rocksdb.git"
CACHE STRING "RocksDB Git repository")

View File

@ -14,12 +14,37 @@
#include <stdlib.h>
#include "rules.h"
#include <maxscale/log_manager.h>
#include <maxscale/query_classifier.h>
#include <maxscale/protocol/mysql.h>
#if !defined(SS_DEBUG)
#define SS_DEBUG
#endif
#include <maxscale/debug.h>
struct test_case
GWBUF* create_gwbuf(const char* s)
{
size_t query_len = strlen(s);
size_t payload_len = query_len + 1;
size_t gwbuf_len = MYSQL_HEADER_LEN + payload_len;
GWBUF* gwbuf = gwbuf_alloc(gwbuf_len);
ss_dassert(gwbuf);
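/* Build a COM_QUERY packet by hand: a 3 byte little-endian payload length
* and a 0x00 sequence id form the MySQL header, followed by the command
* byte 0x03 (COM_QUERY) and the SQL text itself. */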
*((unsigned char*)((char*)GWBUF_DATA(gwbuf))) = payload_len;
*((unsigned char*)((char*)GWBUF_DATA(gwbuf) + 1)) = (payload_len >> 8);
*((unsigned char*)((char*)GWBUF_DATA(gwbuf) + 2)) = (payload_len >> 16);
*((unsigned char*)((char*)GWBUF_DATA(gwbuf) + 3)) = 0x00;
*((unsigned char*)((char*)GWBUF_DATA(gwbuf) + 4)) = 0x03;
memcpy((char*)GWBUF_DATA(gwbuf) + MYSQL_HEADER_LEN + 1, s, query_len);
return gwbuf;
}
//
// Test user rules. Basically tests that a user specification is translated
// into the correct pcre2 regex.
//
struct user_test_case
{
const char* json;
struct
@ -29,31 +54,33 @@ struct test_case
} expect;
};
#define TEST_CASE(op_from, from, op_to, to) \
#define USER_TEST_CASE(op_from, from, op_to, to) \
{ "{ \"use\": [ { \"attribute\": \"user\", \"op\": \"" #op_from "\", \"value\": \"" #from "\" } ] }",\
{ op_to, #to } }
const struct test_case test_cases[] =
#define COLUMN_
const struct user_test_case user_test_cases[] =
{
TEST_CASE(=, bob, CACHE_OP_LIKE, bob@.*),
TEST_CASE(=, 'bob', CACHE_OP_LIKE, bob@.*),
TEST_CASE(=, bob@%, CACHE_OP_LIKE, bob@.*),
TEST_CASE(=, 'bob'@'%.52', CACHE_OP_LIKE, bob@.*\\.52),
TEST_CASE(=, bob@127.0.0.1, CACHE_OP_EQ, bob@127.0.0.1),
TEST_CASE(=, b*b@127.0.0.1, CACHE_OP_EQ, b*b@127.0.0.1),
TEST_CASE(=, b*b@%.0.0.1, CACHE_OP_LIKE, b\\*b@.*\\.0\\.0\\.1),
TEST_CASE(=, b*b@%.0.%.1, CACHE_OP_LIKE, b\\*b@.*\\.0\\..*\\.1),
USER_TEST_CASE(=, bob, CACHE_OP_LIKE, bob@.*),
USER_TEST_CASE(=, 'bob', CACHE_OP_LIKE, bob@.*),
USER_TEST_CASE(=, bob@%, CACHE_OP_LIKE, bob@.*),
USER_TEST_CASE(=, 'bob'@'%.52', CACHE_OP_LIKE, bob@.*\\.52),
USER_TEST_CASE(=, bob@127.0.0.1, CACHE_OP_EQ, bob@127.0.0.1),
USER_TEST_CASE(=, b*b@127.0.0.1, CACHE_OP_EQ, b*b@127.0.0.1),
USER_TEST_CASE(=, b*b@%.0.0.1, CACHE_OP_LIKE, b\\*b@.*\\.0\\.0\\.1),
USER_TEST_CASE(=, b*b@%.0.%.1, CACHE_OP_LIKE, b\\*b@.*\\.0\\..*\\.1),
};
const size_t n_test_cases = sizeof(test_cases) / sizeof(test_cases[0]);
const size_t n_user_test_cases = sizeof(user_test_cases) / sizeof(user_test_cases[0]);
int test()
int test_user()
{
int errors = 0;
for (int i = 0; i < n_test_cases; ++i)
for (int i = 0; i < n_user_test_cases; ++i)
{
const struct test_case *test_case = &test_cases[i];
const struct user_test_case *test_case = &user_test_cases[i];
CACHE_RULES *rules = cache_rules_parse(test_case->json, 0);
ss_dassert(rules);
@ -78,9 +105,86 @@ int test()
rule->value);
++errors;
}
cache_rules_free(rules);
}
return errors == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
return errors;
}
//
// Test store rules. Checks whether a query should be stored according to a
// column rule, given a particular default database.
//
struct store_test_case
{
const char *rule; // The rule in JSON format.
bool matches; // Whether or not the rule should match the query.
const char *default_db; // The current default db.
const char *query; // The query to be matched against the rule.
};
#define STORE_TEST_CASE(attribute, op, value, matches, default_db, query) \
{ "{ \"store\": [ { \"attribute\": \"" attribute "\", \"op\": \"" op "\", \"value\": \"" value "\" } ] }",\
matches, default_db, query }
// In the following,
// true: The query SHOULD match the rule,
// false: The query should NOT match the rule.
const struct store_test_case store_test_cases[] =
{
STORE_TEST_CASE("column", "=", "a", true, NULL, "SELECT a FROM tbl"),
STORE_TEST_CASE("column", "=", "b", false, NULL, "SELECT a FROM tbl")
};
const size_t n_store_test_cases = sizeof(store_test_cases) / sizeof(store_test_cases[0]);
int test_store()
{
int errors = 0;
for (int i = 0; i < n_store_test_cases; ++i)
{
const struct store_test_case *test_case = &store_test_cases[i];
CACHE_RULES *rules = cache_rules_parse(test_case->rule, 0);
ss_dassert(rules);
CACHE_RULE *rule = rules->store_rules;
ss_dassert(rule);
GWBUF *packet = create_gwbuf(test_case->query);
bool matches = cache_rules_should_store(rules, test_case->default_db, packet);
if (matches != test_case->matches)
{
printf("Query : %s\n"
"Rule : %s\n"
"Expected: %s\n"
"Result : %s\n\n",
test_case->query,
test_case->rule,
test_case->matches ? "A match" : "Not a match",
matches ? "A match" : "Not a match");
}
gwbuf_free(packet);
cache_rules_free(rules);
}
return errors;
}
int test()
{
int errors = 0;
errors += test_user();
errors += test_store();
return errors ? EXIT_FAILURE : EXIT_SUCCESS;
}
int main()
@ -89,7 +193,14 @@ int main()
if (mxs_log_init(NULL, ".", MXS_LOG_TARGET_DEFAULT))
{
rc = test();
if (qc_init("qc_sqlite", ""))
{
rc = test();
}
else
{
MXS_ERROR("Could not initialize query classifier.");
}
mxs_log_finish();
}

View File

@ -75,9 +75,10 @@ static FILTER_OBJECT MyObject =
setDownstream,
NULL, // No Upstream requirement
routeQuery,
NULL,
NULL, // No clientReply
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
#define CCR_DEFAULT_TIME 60

View File

@ -114,11 +114,12 @@ static FILTER_OBJECT MyObject =
closeSession,
freeSession,
setDownstream,
NULL,
NULL, // No setUpStream
routeQuery,
NULL,
NULL, // No clientReply
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**
@ -1701,13 +1702,12 @@ bool rule_matches(FW_INSTANCE* my_instance,
RULELIST *rulelist,
char* query)
{
char *ptr, *where, *msg = NULL;
char *ptr, *msg = NULL;
char emsg[512];
unsigned char* memptr = (unsigned char*) queue->start;
bool is_sql, is_real, matches;
qc_query_op_t optype = QUERY_OP_UNDEFINED;
STRLINK* strln = NULL;
QUERYSPEED* queryspeed = NULL;
QUERYSPEED* rule_qs = NULL;
time_t time_now;
@ -1745,7 +1745,7 @@ bool rule_matches(FW_INSTANCE* my_instance,
case QUERY_OP_UPDATE:
case QUERY_OP_INSERT:
case QUERY_OP_DELETE:
// In these cases, we have to be able to trust what qc_get_affected_fields
// In these cases, we have to be able to trust what qc_get_field_info
// returns. Unless the query was parsed completely, we cannot do that.
msg = create_parse_error(my_instance, "parsed completely", query, &matches);
goto queryresolved;
@ -1817,32 +1817,29 @@ bool rule_matches(FW_INSTANCE* my_instance,
case RT_COLUMN:
if (is_sql && is_real)
{
where = qc_get_affected_fields(queue);
if (where != NULL)
{
char* saveptr;
char* tok = strtok_r(where, " ", &saveptr);
while (tok)
{
strln = (STRLINK*) rulelist->rule->data;
while (strln)
{
if (strcasecmp(tok, strln->value) == 0)
{
matches = true;
const QC_FIELD_INFO* infos;
size_t n_infos;
qc_get_field_info(queue, &infos, &n_infos);
sprintf(emsg, "Permission denied to column '%s'.", strln->value);
MXS_INFO("dbfwfilter: rule '%s': query targets forbidden column: %s",
rulelist->rule->name, strln->value);
msg = MXS_STRDUP_A(emsg);
MXS_FREE(where);
goto queryresolved;
}
strln = strln->next;
for (size_t i = 0; i < n_infos; ++i)
{
const char* tok = infos[i].column;
STRLINK* strln = (STRLINK*) rulelist->rule->data;
while (strln)
{
if (strcasecmp(tok, strln->value) == 0)
{
matches = true;
sprintf(emsg, "Permission denied to column '%s'.", strln->value);
MXS_INFO("dbfwfilter: rule '%s': query targets forbidden column: %s",
rulelist->rule->name, strln->value);
msg = MXS_STRDUP_A(emsg);
goto queryresolved;
}
tok = strtok_r(NULL, ",", &saveptr);
strln = strln->next;
}
MXS_FREE(where);
}
}
break;
@ -1850,23 +1847,22 @@ bool rule_matches(FW_INSTANCE* my_instance,
case RT_WILDCARD:
if (is_sql && is_real)
{
char * strptr;
where = qc_get_affected_fields(queue);
const QC_FIELD_INFO* infos;
size_t n_infos;
qc_get_field_info(queue, &infos, &n_infos);
if (where != NULL)
for (size_t i = 0; i < n_infos; ++i)
{
strptr = where;
const char* column = infos[i].column;
if (strchr(strptr, '*'))
if (strcmp(column, "*") == 0)
{
matches = true;
msg = MXS_STRDUP_A("Usage of wildcard denied.");
MXS_INFO("dbfwfilter: rule '%s': query contains a wildcard.",
rulelist->rule->name);
MXS_FREE(where);
goto queryresolved;
}
MXS_FREE(where);
}
}
break;

View File

@ -103,9 +103,10 @@ static FILTER_OBJECT MyObject =
setDownstream,
NULL, // No upstream requirement
routeQuery,
NULL,
NULL, // No clientReply
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**

View File

@ -52,9 +52,10 @@ static FILTER_OBJECT MyObject =
setDownstream,
NULL, // No upstream requirement
routeQuery,
NULL,
NULL, // No clientReply
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**

View File

@ -96,6 +96,7 @@ static FILTER_OBJECT MyObject =
clientReply,
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**
@ -570,7 +571,7 @@ static void diagnostic(FILTER *instance, void *fsession, DCB *dcb)
lua_gettop(my_instance->global_lua_state);
if (lua_isstring(my_instance->global_lua_state, -1))
{
dcb_printf(dcb, lua_tostring(my_instance->global_lua_state, -1));
dcb_printf(dcb, "%s", lua_tostring(my_instance->global_lua_state, -1));
dcb_printf(dcb, "\n");
}
}

View File

@ -0,0 +1,5 @@
add_library(maxrows SHARED maxrows.c)
target_link_libraries(maxrows maxscale-common)
set_target_properties(maxrows PROPERTIES VERSION "1.0.0")
set_target_properties(maxrows PROPERTIES LINK_FLAGS -Wl,-z,defs)
install_module(maxrows experimental)

View File

@ -0,0 +1,920 @@
/*
* Copyright (c) 2016 MariaDB Corporation Ab
*
* Use of this software is governed by the Business Source License included
* in the LICENSE.TXT file and at www.mariadb.com/bsl.
*
* Change Date: 2019-07-01
*
* On the date above, in accordance with the Business Source License, use
* of this software will be governed by version 2 or later of the General
* Public License.
*/
/**
* @file maxrows.c - Result set limit Filter
* @verbatim
*
*
* The filter returns an empty result set if the number of rows in the result set
* returned by the backend exceeds the max_resultset_rows limit, or if the size
* of the result set exceeds max_resultset_size.
*
* Date Who Description
* 26/10/2016 Massimiliano Pinto Initial implementation
* 04/11/2016 Massimiliano Pinto Addition of SERVER_MORE_RESULTS_EXIST flag (0x0008)
* detection in handle_expecting_rows().
* 07/11/2016 Massimiliano Pinto handle_expecting_rows renamed to handle_rows
*
* @endverbatim
*/
#define MXS_MODULE_NAME "maxrows"
#include <maxscale/alloc.h>
#include <maxscale/filter.h>
#include <maxscale/gwdirs.h>
#include <maxscale/log_manager.h>
#include <maxscale/modinfo.h>
#include <maxscale/modutil.h>
#include <maxscale/mysql_utils.h>
#include <maxscale/query_classifier.h>
#include <stdbool.h>
#include <stdint.h>
#include <maxscale/buffer.h>
#include <maxscale/protocol/mysql.h>
#include <maxscale/debug.h>
#include "maxrows.h"
static char VERSION_STRING[] = "V1.0.0";
static FILTER *createInstance(const char *name, char **options, FILTER_PARAMETER **);
static void *newSession(FILTER *instance, SESSION *session);
static void closeSession(FILTER *instance, void *sdata);
static void freeSession(FILTER *instance, void *sdata);
static void setDownstream(FILTER *instance, void *sdata, DOWNSTREAM *downstream);
static void setUpstream(FILTER *instance, void *sdata, UPSTREAM *upstream);
static int routeQuery(FILTER *instance, void *sdata, GWBUF *queue);
static int clientReply(FILTER *instance, void *sdata, GWBUF *queue);
static void diagnostics(FILTER *instance, void *sdata, DCB *dcb);
static uint64_t getCapabilities(void);
/* Global symbols of the Module */
MODULE_INFO info =
{
MODULE_API_FILTER,
MODULE_IN_DEVELOPMENT,
FILTER_VERSION,
"A filter that is capable of limiting the resultset number of rows."
};
char *version()
{
return VERSION_STRING;
}
/**
* The module initialization functions, called when the module has
* been loaded.
*/
void ModuleInit()
{
}
/**
* The module entry point function, called when the module is loaded.
*
* @return The module object.
*/
FILTER_OBJECT *GetModuleObject()
{
static FILTER_OBJECT object =
{
createInstance,
newSession,
closeSession,
freeSession,
setDownstream,
setUpstream,
routeQuery,
clientReply,
diagnostics,
getCapabilities,
NULL, // No destroyInstance
};
return &object;
}
/* Implementation */
typedef struct maxrows_config
{
uint32_t max_resultset_rows;
uint32_t max_resultset_size;
uint32_t debug;
} MAXROWS_CONFIG;
static const MAXROWS_CONFIG DEFAULT_CONFIG =
{
MAXROWS_DEFAULT_MAX_RESULTSET_ROWS,
MAXROWS_DEFAULT_MAX_RESULTSET_SIZE,
MAXROWS_DEFAULT_DEBUG
};
typedef struct maxrows_instance
{
const char *name;
MAXROWS_CONFIG config;
} MAXROWS_INSTANCE;
typedef enum maxrows_session_state
{
MAXROWS_EXPECTING_RESPONSE, // A select has been sent, and we are waiting for the response.
MAXROWS_EXPECTING_FIELDS, // A select has been sent, and we want more fields.
MAXROWS_EXPECTING_ROWS, // A select has been sent, and we want more rows.
MAXROWS_EXPECTING_NOTHING, // We are not expecting anything from the server.
MAXROWS_IGNORING_RESPONSE, // We are not interested in the data received from the server.
MAXROWS_DISCARDING_RESPONSE, // We have returned empty result set and we discard any new data.
} maxrows_session_state_t;
typedef struct maxrows_response_state
{
GWBUF* data; /**< Response data, possibly incomplete. */
size_t n_totalfields; /**< The number of fields a resultset contains. */
size_t n_fields; /**< How many fields we have received, <= n_totalfields. */
size_t n_rows; /**< How many rows we have received. */
size_t offset; /**< Where we are in the response buffer. */
} MAXROWS_RESPONSE_STATE;
static void maxrows_response_state_reset(MAXROWS_RESPONSE_STATE *state);
typedef struct maxrows_session_data
{
MAXROWS_INSTANCE *instance; /**< The maxrows instance the session is associated with. */
DOWNSTREAM down; /**< The previous filter or equivalent. */
UPSTREAM up; /**< The next filter or equivalent. */
MAXROWS_RESPONSE_STATE res; /**< The response state. */
SESSION *session; /**< The session this data is associated with. */
char *default_db; /**< The default database. */
char *use_db; /**< Pending default database. Needs server response. */
maxrows_session_state_t state;
} MAXROWS_SESSION_DATA;
static MAXROWS_SESSION_DATA *maxrows_session_data_create(MAXROWS_INSTANCE *instance, SESSION *session);
static void maxrows_session_data_free(MAXROWS_SESSION_DATA *data);
static int handle_expecting_fields(MAXROWS_SESSION_DATA *csdata);
static int handle_expecting_nothing(MAXROWS_SESSION_DATA *csdata);
static int handle_expecting_response(MAXROWS_SESSION_DATA *csdata);
static int handle_rows(MAXROWS_SESSION_DATA *csdata);
static int handle_ignoring_response(MAXROWS_SESSION_DATA *csdata);
static bool process_params(char **options, FILTER_PARAMETER **params, MAXROWS_CONFIG* config);
static int send_upstream(MAXROWS_SESSION_DATA *csdata);
static int send_ok_upstream(MAXROWS_SESSION_DATA *csdata);
/* API BEGIN */
/**
* Create an instance of the maxrows filter for a particular service
* within MaxScale.
*
* @param name The name of the instance (as defined in the config file).
* @param options The options for this filter
* @param params The array of name/value pair parameters for the filter
*
* @return The instance data for this new instance
*/
static FILTER *createInstance(const char *name, char **options, FILTER_PARAMETER **params)
{
MAXROWS_INSTANCE *cinstance = NULL;
MAXROWS_CONFIG config = DEFAULT_CONFIG;
if (process_params(options, params, &config))
{
cinstance = MXS_CALLOC(1, sizeof(MAXROWS_INSTANCE));
if (cinstance)
{
cinstance->name = name;
cinstance->config = config;
}
}
return (FILTER*)cinstance;
}
/**
* Associate a new session with this instance of the filter.
*
* @param instance The maxrows instance data
* @param session The session itself
*
* @return Session specific data for this session
*/
static void *newSession(FILTER *instance, SESSION *session)
{
MAXROWS_INSTANCE *cinstance = (MAXROWS_INSTANCE*)instance;
MAXROWS_SESSION_DATA *csdata = maxrows_session_data_create(cinstance, session);
return csdata;
}
/**
* A session has been closed.
*
* @param instance The maxrows instance data
* @param sdata The session data of the session being closed
*/
static void closeSession(FILTER *instance, void *sdata)
{
MAXROWS_INSTANCE *cinstance = (MAXROWS_INSTANCE*)instance;
MAXROWS_SESSION_DATA *csdata = (MAXROWS_SESSION_DATA*)sdata;
}
/**
* Free the session data.
*
* @param instance The maxrows instance data
* @param sdata The session data of the session being closed
*/
static void freeSession(FILTER *instance, void *sdata)
{
MAXROWS_INSTANCE *cinstance = (MAXROWS_INSTANCE*)instance;
MAXROWS_SESSION_DATA *csdata = (MAXROWS_SESSION_DATA*)sdata;
maxrows_session_data_free(csdata);
}
/**
* Set the downstream component for this filter.
*
* @param instance The maxrowsinstance data
* @param sdata The session data of the session
* @param down The downstream filter or router
*/
static void setDownstream(FILTER *instance, void *sdata, DOWNSTREAM *down)
{
MAXROWS_INSTANCE *cinstance = (MAXROWS_INSTANCE*)instance;
MAXROWS_SESSION_DATA *csdata = (MAXROWS_SESSION_DATA*)sdata;
csdata->down = *down;
}
/**
* Set the upstream component for this filter.
*
* @param instance The maxrows instance data
* @param sdata The session data of the session
* @param up The upstream filter or router
*/
static void setUpstream(FILTER *instance, void *sdata, UPSTREAM *up)
{
MAXROWS_INSTANCE *cinstance = (MAXROWS_INSTANCE*)instance;
MAXROWS_SESSION_DATA *csdata = (MAXROWS_SESSION_DATA*)sdata;
csdata->up = *up;
}
/**
* A request on its way to a backend is delivered to this function.
*
* @param instance The filter instance data
* @param sdata The filter session data
* @param packet Buffer containing a MySQL protocol packet.
*/
static int routeQuery(FILTER *instance, void *sdata, GWBUF *packet)
{
MAXROWS_INSTANCE *cinstance = (MAXROWS_INSTANCE*)instance;
MAXROWS_SESSION_DATA *csdata = (MAXROWS_SESSION_DATA*)sdata;
uint8_t *data = GWBUF_DATA(packet);
// All of these should be guaranteed by RCAP_TYPE_STMT_INPUT, which this filter declares in getCapabilities()
ss_dassert(GWBUF_IS_CONTIGUOUS(packet));
ss_dassert(GWBUF_LENGTH(packet) >= MYSQL_HEADER_LEN + 1);
ss_dassert(MYSQL_GET_PACKET_LEN(data) + MYSQL_HEADER_LEN == GWBUF_LENGTH(packet));
maxrows_response_state_reset(&csdata->res);
csdata->state = MAXROWS_IGNORING_RESPONSE;
switch ((int)MYSQL_GET_COMMAND(data))
{
case MYSQL_COM_QUERY:
csdata->state = MAXROWS_EXPECTING_RESPONSE;
break;
default:
break;
}
if (csdata->instance->config.debug & MAXROWS_DEBUG_DECISIONS)
{
MXS_NOTICE("Maxrows filter is sending data.");
}
return csdata->down.routeQuery(csdata->down.instance, csdata->down.session, packet);
}
/**
* A response on its way to the client is delivered to this function.
*
* @param instance The filter instance data
* @param sdata The filter session data
* @param data The response data from the backend
*/
static int clientReply(FILTER *instance, void *sdata, GWBUF *data)
{
MAXROWS_INSTANCE *cinstance = (MAXROWS_INSTANCE*)instance;
MAXROWS_SESSION_DATA *csdata = (MAXROWS_SESSION_DATA*)sdata;
int rv;
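// Accumulate the reply; a resultset may arrive split across several reads.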
if (csdata->res.data)
{
gwbuf_append(csdata->res.data, data);
}
else
{
csdata->res.data = data;
}
if (csdata->state != MAXROWS_IGNORING_RESPONSE)
{
if (csdata->state != MAXROWS_DISCARDING_RESPONSE)
{
if (gwbuf_length(csdata->res.data) > csdata->instance->config.max_resultset_size)
{
if (csdata->instance->config.debug & MAXROWS_DEBUG_DISCARDING)
{
MXS_NOTICE("Current size %uB of resultset, at least as much "
"as maximum allowed size %uKiB. Not returning data.",
gwbuf_length(csdata->res.data),
csdata->instance->config.max_resultset_size / 1024);
}
csdata->state = MAXROWS_DISCARDING_RESPONSE;
}
}
}
switch (csdata->state)
{
case MAXROWS_EXPECTING_FIELDS:
rv = handle_expecting_fields(csdata);
break;
case MAXROWS_EXPECTING_NOTHING:
rv = handle_expecting_nothing(csdata);
break;
case MAXROWS_EXPECTING_RESPONSE:
rv = handle_expecting_response(csdata);
break;
case MAXROWS_DISCARDING_RESPONSE:
case MAXROWS_EXPECTING_ROWS:
rv = handle_rows(csdata);
break;
case MAXROWS_IGNORING_RESPONSE:
rv = handle_ignoring_response(csdata);
break;
default:
MXS_ERROR("Internal filter logic broken, unexpected state: %d", csdata->state);
ss_dassert(!true);
rv = send_upstream(csdata);
maxrows_response_state_reset(&csdata->res);
csdata->state = MAXROWS_IGNORING_RESPONSE;
}
return rv;
}
/**
* Diagnostics routine
*
* If csdata is NULL then print diagnostics on the instance as a whole,
* otherwise print diagnostics for the particular session.
*
* @param instance The filter instance
* @param fsession Filter session, may be NULL
* @param dcb The DCB for diagnostic output
*/
static void diagnostics(FILTER *instance, void *sdata, DCB *dcb)
{
MAXROWS_INSTANCE *cinstance = (MAXROWS_INSTANCE*)instance;
MAXROWS_SESSION_DATA *csdata = (MAXROWS_SESSION_DATA*)sdata;
dcb_printf(dcb, "Maxrows filter is working\n");
}
/**
* Capability routine.
*
* @return The capabilities of the filter.
*/
static uint64_t getCapabilities(void)
{
return RCAP_TYPE_STMT_INPUT;
}
/* API END */
/**
* Reset maxrows response state
*
* @param state Pointer to object.
*/
static void maxrows_response_state_reset(MAXROWS_RESPONSE_STATE *state)
{
state->data = NULL;
state->n_totalfields = 0;
state->n_fields = 0;
state->n_rows = 0;
state->offset = 0;
}
/**
* Create maxrows session data
*
* @param instance The maxrows instance this data is associated with.
* @param session The session the data is created for.
*
* @return Session data or NULL if creation fails.
*/
static MAXROWS_SESSION_DATA *maxrows_session_data_create(MAXROWS_INSTANCE *instance,
SESSION* session)
{
MAXROWS_SESSION_DATA *data = (MAXROWS_SESSION_DATA*)MXS_CALLOC(1, sizeof(MAXROWS_SESSION_DATA));
if (data)
{
char *default_db = NULL;
ss_dassert(session->client_dcb);
ss_dassert(session->client_dcb->data);
MYSQL_session *mysql_session = (MYSQL_session*)session->client_dcb->data;
if (mysql_session->db[0] != 0)
{
default_db = MXS_STRDUP(mysql_session->db);
}
if ((mysql_session->db[0] == 0) || default_db)
{
data->instance = instance;
data->session = session;
data->state = MAXROWS_EXPECTING_NOTHING;
data->default_db = default_db;
}
else
{
MXS_FREE(data);
data = NULL;
}
}
return data;
}
/**
* Free maxrows session data.
*
* @param data Maxrows session data previously allocated with maxrows_session_data_create().
*/
static void maxrows_session_data_free(MAXROWS_SESSION_DATA* data)
{
if (data)
{
// In normal circumstances, only data->default_db may be non-NULL at
// this point. However, if the authentication with the backend fails
// and the session is closed, data->use_db may be non-NULL.
MXS_FREE(data->use_db);
MXS_FREE(data->default_db);
MXS_FREE(data);
}
}
/**
* Called when resultset field information is handled.
*
* @param csdata The maxrows session data.
*/
static int handle_expecting_fields(MAXROWS_SESSION_DATA *csdata)
{
ss_dassert(csdata->state == MAXROWS_EXPECTING_FIELDS);
ss_dassert(csdata->res.data);
int rv = 1;
bool insufficient = false;
size_t buflen = gwbuf_length(csdata->res.data);
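// Walk the complete packets buffered so far: each packet is one column
// definition until the EOF packet that ends the field phase.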
while (!insufficient && (buflen - csdata->res.offset >= MYSQL_HEADER_LEN))
{
uint8_t header[MYSQL_HEADER_LEN + 1];
gwbuf_copy_data(csdata->res.data, csdata->res.offset, MYSQL_HEADER_LEN + 1, header);
size_t packetlen = MYSQL_HEADER_LEN + MYSQL_GET_PACKET_LEN(header);
if (csdata->res.offset + packetlen <= buflen)
{
// We have at least one complete packet.
int command = (int)MYSQL_GET_COMMAND(header);
switch (command)
{
case 0xfe: // EOF, the one after the fields.
csdata->res.offset += packetlen;
csdata->state = MAXROWS_EXPECTING_ROWS;
rv = handle_rows(csdata);
break;
default: // Field information.
csdata->res.offset += packetlen;
++csdata->res.n_fields;
ss_dassert(csdata->res.n_fields <= csdata->res.n_totalfields);
break;
}
}
else
{
// We need more data
insufficient = true;
}
}
return rv;
}
/**
* Called when data is received (even if nothing is expected) from the server.
*
* @param csdata The maxrows session data.
*/
static int handle_expecting_nothing(MAXROWS_SESSION_DATA *csdata)
{
ss_dassert(csdata->state == MAXROWS_EXPECTING_NOTHING);
ss_dassert(csdata->res.data);
MXS_ERROR("Received data from the backend although we were expecting nothing.");
ss_dassert(!true);
return send_upstream(csdata);
}
/**
* Called when a response is received from the server.
*
* @param csdata The maxrows session data.
*/
static int handle_expecting_response(MAXROWS_SESSION_DATA *csdata)
{
ss_dassert(csdata->state == MAXROWS_EXPECTING_RESPONSE);
ss_dassert(csdata->res.data);
int rv = 1;
size_t buflen = gwbuf_length(csdata->res.data);
if (buflen >= MYSQL_HEADER_LEN + 1) // We need the command byte.
{
// Reserve enough space to accommodate the largest length-encoded integer,
// which is the type field + 8 bytes.
uint8_t header[MYSQL_HEADER_LEN + 1 + 8];
gwbuf_copy_data(csdata->res.data, 0, MYSQL_HEADER_LEN + 1, header);
switch ((int)MYSQL_GET_COMMAND(header))
{
case 0x00: // OK
case 0xff: // ERR
if (csdata->instance->config.debug & MAXROWS_DEBUG_DECISIONS)
{
MXS_NOTICE("OK or ERR");
}
rv = send_upstream(csdata);
csdata->state = MAXROWS_IGNORING_RESPONSE;
break;
case 0xfb: // GET_MORE_CLIENT_DATA/SEND_MORE_CLIENT_DATA
if (csdata->instance->config.debug & MAXROWS_DEBUG_DECISIONS)
{
MXS_NOTICE("GET_MORE_CLIENT_DATA");
}
rv = send_upstream(csdata);
csdata->state = MAXROWS_IGNORING_RESPONSE;
break;
default:
if (csdata->instance->config.debug & MAXROWS_DEBUG_DECISIONS)
{
MXS_NOTICE("RESULTSET");
}
if (csdata->res.n_totalfields != 0)
{
// We've seen the header and have figured out how many fields there are.
csdata->state = MAXROWS_EXPECTING_FIELDS;
rv = handle_expecting_fields(csdata);
}
else
{
// leint_bytes() returns the length of the int type field + the size of the
// integer.
size_t n_bytes = leint_bytes(&header[4]);
if (MYSQL_HEADER_LEN + n_bytes <= buflen)
{
// Now we can figure out how many fields there are, but first we
// need to copy some more data.
gwbuf_copy_data(csdata->res.data,
MYSQL_HEADER_LEN + 1, n_bytes - 1, &header[MYSQL_HEADER_LEN + 1]);
csdata->res.n_totalfields = leint_value(&header[4]);
csdata->res.offset = MYSQL_HEADER_LEN + n_bytes;
csdata->state = MAXROWS_EXPECTING_FIELDS;
rv = handle_expecting_fields(csdata);
}
else
{
// We need more data. We will be called again, when data is available.
}
}
break;
}
}
return rv;
}
/**
* Called when resultset rows are handled.
*
* @param csdata The maxrows session data.
*/
static int handle_rows(MAXROWS_SESSION_DATA *csdata)
{
ss_dassert(csdata->state == MAXROWS_EXPECTING_ROWS || csdata->state == MAXROWS_DISCARDING_RESPONSE);
ss_dassert(csdata->res.data);
int rv = 1;
bool insufficient = false;
size_t buflen = gwbuf_length(csdata->res.data);
while (!insufficient && (buflen - csdata->res.offset >= MYSQL_HEADER_LEN))
{
uint8_t header[MAXROWS_EOF_PACKET_LEN]; // Large enough to hold a full EOF packet
gwbuf_copy_data(csdata->res.data, csdata->res.offset, MAXROWS_EOF_PACKET_LEN, header);
size_t packetlen = MYSQL_HEADER_LEN + MYSQL_GET_PACKET_LEN(header);
if (csdata->res.offset + packetlen <= buflen)
{
// We have at least one complete packet.
int command = (int)MYSQL_GET_COMMAND(header);
switch (command)
{
case 0xff: // ERR packet after the rows.
csdata->res.offset += packetlen;
ss_dassert(csdata->res.offset == buflen);
if (csdata->instance->config.debug & MAXROWS_DEBUG_DECISIONS)
{
MXS_NOTICE("Error packet seen while handling result set");
}
/*
* This is the ERR packet that could terminate a Multi-Resultset.
*/
if (csdata->state == MAXROWS_DISCARDING_RESPONSE)
{
rv = send_ok_upstream(csdata);
}
else
{
rv = send_upstream(csdata);
}
csdata->state = MAXROWS_EXPECTING_NOTHING;
break;
case 0x0: // OK packet after the rows.
/* OK could be the last packet of a Multi-Resultset transmission:
* handle DISCARD or send all the data.
* However, from MySQL 5.7.5 onwards an OK packet can also be sent in place
* of EOF when the client announces the CLIENT_DEPRECATE_EOF capability,
* and such an OK packet can carry the SERVER_MORE_RESULTS_EXIST flag.
* Note: Flags in the OK packet are at the same offset as in EOF.
*/
case 0xfe: // EOF, the one after the rows.
csdata->res.offset += packetlen;
ss_dassert(csdata->res.offset == buflen);
/* EOF could be the last packet in the transmission:
* check first whether SERVER_MORE_RESULTS_EXIST flag is set.
* If so more results set could come. The end of stream
* will be an OK packet.
*/
if (packetlen < MAXROWS_EOF_PACKET_LEN)
{
MXS_ERROR("EOF packet has size of %lu instead of %d", packetlen, MAXROWS_EOF_PACKET_LEN);
rv = send_ok_upstream(csdata);
csdata->state = MAXROWS_EXPECTING_NOTHING;
break;
}
int flags = gw_mysql_get_byte2(header + MAXROWS_MYSQL_EOF_PACKET_FLAGS_OFFSET);
if (!(flags & SERVER_MORE_RESULTS_EXIST))
{
if (csdata->instance->config.debug & MAXROWS_DEBUG_DECISIONS)
{
MXS_NOTICE("OK or EOF packet seen terminating the resultset");
}
if (csdata->state == MAXROWS_DISCARDING_RESPONSE)
{
rv = send_ok_upstream(csdata);
}
else
{
rv = send_upstream(csdata);
}
csdata->state = MAXROWS_EXPECTING_NOTHING;
}
else
{
if (csdata->instance->config.debug & MAXROWS_DEBUG_DECISIONS)
{
MXS_NOTICE("EOF or OK packet seen with SERVER_MORE_RESULTS_EXIST flag: waiting for more data");
}
}
break;
case 0xfb: // NULL
default: // length-encoded-string
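// Any other packet in this phase is one row of the resultset.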
csdata->res.offset += packetlen;
++csdata->res.n_rows;
if (csdata->state != MAXROWS_DISCARDING_RESPONSE)
{
if (csdata->res.n_rows > csdata->instance->config.max_resultset_rows)
{
if (csdata->instance->config.debug & MAXROWS_DEBUG_DISCARDING)
{
MXS_INFO("max_resultset_rows %lu reached, not returning the result.", csdata->res.n_rows);
}
csdata->state = MAXROWS_DISCARDING_RESPONSE;
}
}
break;
}
}
else
{
// We need more data
insufficient = true;
}
}
return rv;
}
/**
* Called when all data from the server is ignored.
*
* @param csdata The maxrows session data.
*/
static int handle_ignoring_response(MAXROWS_SESSION_DATA *csdata)
{
ss_dassert(csdata->state == MAXROWS_IGNORING_RESPONSE);
ss_dassert(csdata->res.data);
return send_upstream(csdata);
}
/**
* Processes the maxrows params
*
* @param options Options as passed to the filter.
* @param params Parameters as passed to the filter.
* @param config Pointer to config instance where params will be stored.
*
* @return True if all parameters could be processed, false otherwise.
*/
static bool process_params(char **options, FILTER_PARAMETER **params, MAXROWS_CONFIG* config)
{
bool error = false;
for (int i = 0; params[i]; ++i)
{
const FILTER_PARAMETER *param = params[i];
/* We could add a new parameter, max_resultset_columns:
* if a result had more columns than max_resultset_columns,
* an empty result would be returned.
*/
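/* A zero or negative max_resultset_rows is interpreted as no limit:
* the default of UINT_MAX is kept. */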
if (strcmp(param->name, "max_resultset_rows") == 0)
{
int v = atoi(param->value);
if (v > 0)
{
config->max_resultset_rows = v;
}
else
{
config->max_resultset_rows = MAXROWS_DEFAULT_MAX_RESULTSET_ROWS;
}
}
else if (strcmp(param->name, "max_resultset_size") == 0)
{
int v = atoi(param->value);
if (v > 0)
{
config->max_resultset_size = v * 1024;
}
else
{
MXS_ERROR("The value of the configuration entry '%s' must "
"be an integer larger than 0.", param->name);
error = true;
}
}
else if (strcmp(param->name, "debug") == 0)
{
int v = atoi(param->value);
if ((v >= MAXROWS_DEBUG_MIN) && (v <= MAXROWS_DEBUG_MAX))
{
config->debug = v;
}
else
{
MXS_ERROR("The value of the configuration entry '%s' must "
"be between %d and %d, inclusive.",
param->name, MAXROWS_DEBUG_MIN, MAXROWS_DEBUG_MAX);
error = true;
}
}
else if (!filter_standard_parameter(params[i]->name))
{
MXS_ERROR("Unknown configuration entry '%s'.", param->name);
error = true;
}
}
return !error;
}
/**
* Send data upstream.
*
* @param csdata Session data
*
* @return Whatever the upstream returns.
*/
static int send_upstream(MAXROWS_SESSION_DATA *csdata)
{
ss_dassert(csdata->res.data != NULL);
int rv = csdata->up.clientReply(csdata->up.instance, csdata->up.session, csdata->res.data);
csdata->res.data = NULL;
return rv;
}
/**
* Send OK packet data upstream.
*
* @param csdata Session data
*
* @return Whatever the upstream returns.
*/
static int send_ok_upstream(MAXROWS_SESSION_DATA *csdata)
{
/* Note: sequence id is always 01 (4th byte) */
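/* Packet layout: 3 byte payload length (7) and sequence id (1), then the
* OK header 0x00, affected_rows 0, last_insert_id 0, status flags 0x0002
* (SERVER_STATUS_AUTOCOMMIT) and a zero warning count. */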
uint8_t ok[MAXROWS_OK_PACKET_LEN] = {0x07, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00};
GWBUF *packet = gwbuf_alloc(MAXROWS_OK_PACKET_LEN);
uint8_t *ptr = GWBUF_DATA(packet);
memcpy(ptr, &ok, MAXROWS_OK_PACKET_LEN);
ss_dassert(csdata->res.data != NULL);
int rv = csdata->up.clientReply(csdata->up.instance, csdata->up.session, packet);
gwbuf_free(csdata->res.data);
csdata->res.data = NULL;
return rv;
}

View File

@ -0,0 +1,43 @@
#pragma once
/*
* Copyright (c) 2016 MariaDB Corporation Ab
*
* Use of this software is governed by the Business Source License included
* in the LICENSE.TXT file and at www.mariadb.com/bsl.
*
* Change Date: 2019-07-01
*
* On the date above, in accordance with the Business Source License, use
* of this software will be governed by version 2 or later of the General
* Public License.
*/
#include <limits.h>
MXS_BEGIN_DECLS
#define MAXROWS_OK_PACKET_LEN 11
#define MAXROWS_EOF_PACKET_LEN 9
/*
* The EOF packet 2 bytes flags start after:
* network header (4 bytes) + eof indicator (1) + 2 bytes warnings count)
*/
#define MAXROWS_MYSQL_EOF_PACKET_FLAGS_OFFSET (MYSQL_HEADER_LEN + 1 + 2)
#define MAXROWS_DEBUG_NONE 0
#define MAXROWS_DEBUG_DISCARDING 1
#define MAXROWS_DEBUG_DECISIONS 2
#define MAXROWS_DEBUG_USAGE (MAXROWS_DEBUG_DECISIONS | MAXROWS_DEBUG_DISCARDING)
#define MAXROWS_DEBUG_MIN MAXROWS_DEBUG_NONE
#define MAXROWS_DEBUG_MAX MAXROWS_DEBUG_USAGE
// Default row limit: UINT_MAX, effectively no limit
#define MAXROWS_DEFAULT_MAX_RESULTSET_ROWS UINT_MAX
// Default resultset size limit in bytes (64KiB)
#define MAXROWS_DEFAULT_MAX_RESULTSET_SIZE (64 * 1024)
// Default debug level: no logging
#define MAXROWS_DEFAULT_DEBUG 0
MXS_END_DECLS

View File

@ -116,6 +116,7 @@ static FILTER_OBJECT MyObject =
clientReply,
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**

View File

@ -69,9 +69,10 @@ static FILTER_OBJECT MyObject =
setDownstream,
NULL, // No Upstream requirement
routeQuery,
NULL,
NULL, // No clientReply
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**

View File

@ -62,6 +62,10 @@ static char *version_str = "V1.1.1";
/** Formatting buffer size */
#define QLA_STRING_BUFFER_SIZE 1024
/** Log file settings flags */
#define CONFIG_FILE_SESSION (1 << 0) // Default value, session specific files
#define CONFIG_FILE_UNIFIED (1 << 1) // One file shared by all sessions
/*
* The filter entry points
*/
@ -87,6 +91,7 @@ static FILTER_OBJECT MyObject =
NULL, // No client reply
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**
@ -107,6 +112,10 @@ typedef struct
regex_t re; /* Compiled regex text */
char *nomatch; /* Optional text to match against for exclusion */
regex_t nore; /* Compiled regex nomatch text */
uint32_t log_mode_flags; /* Log file mode settings */
FILE *unified_fp; /* Unified log file. The pointer needs to be shared here
* to avoid garbled printing. */
bool flush_writes; /* Flush log file after every write */
} QLA_INSTANCE;
/**
@ -125,6 +134,7 @@ typedef struct
int active;
char *user;
char *remote;
size_t ses_id; /* The session this filter serves */
} QLA_SESSION;
/**
@ -186,6 +196,9 @@ createInstance(const char *name, char **options, FILTER_PARAMETER **params)
my_instance->match = NULL;
my_instance->nomatch = NULL;
my_instance->filebase = NULL;
my_instance->log_mode_flags = 0;
my_instance->unified_fp = NULL;
my_instance->flush_writes = false;
bool error = false;
if (params)
@ -239,6 +252,18 @@ createInstance(const char *name, char **options, FILTER_PARAMETER **params)
{
cflags |= REG_EXTENDED;
}
else if (!strcasecmp(options[i], "session_file"))
{
my_instance->log_mode_flags |= CONFIG_FILE_SESSION;
}
else if (!strcasecmp(options[i], "unified_file"))
{
my_instance->log_mode_flags |= CONFIG_FILE_UNIFIED;
}
else if (!strcasecmp(options[i], "flush_writes"))
{
my_instance->flush_writes = true;
}
else
{
MXS_ERROR("qlafilter: Unsupported option '%s'.",
@ -247,7 +272,11 @@ createInstance(const char *name, char **options, FILTER_PARAMETER **params)
}
}
}
if (my_instance->log_mode_flags == 0)
{
// If nothing has been set, set a default value
my_instance->log_mode_flags = CONFIG_FILE_SESSION;
}
if (my_instance->filebase == NULL)
{
MXS_ERROR("qlafilter: No 'filebase' parameter defined.");
@ -275,6 +304,35 @@ createInstance(const char *name, char **options, FILTER_PARAMETER **params)
my_instance->nomatch = NULL;
error = true;
}
// Try to open the unified log file
if (my_instance->log_mode_flags & CONFIG_FILE_UNIFIED &&
my_instance->filebase != NULL)
{
// First calculate filename length
const char UNIFIED[] = ".unified";
int namelen = strlen(my_instance->filebase) + sizeof(UNIFIED);
char *filename = NULL;
if ((filename = MXS_CALLOC(namelen, sizeof(char))) != NULL)
{
snprintf(filename, namelen, "%s%s", my_instance->filebase, UNIFIED);
// Open the file. It is only closed at program exit
my_instance->unified_fp = fopen(filename, "w");
if (my_instance->unified_fp == NULL)
{
char errbuf[MXS_STRERROR_BUFLEN];
MXS_ERROR("Opening output file for qla "
"filter failed due to %d, %s",
errno,
strerror_r(errno, errbuf, sizeof(errbuf)));
error = true;
}
MXS_FREE(filename);
}
else
{
error = true;
}
}
if (error)
{
@ -289,6 +347,10 @@ createInstance(const char *name, char **options, FILTER_PARAMETER **params)
MXS_FREE(my_instance->nomatch);
regfree(&my_instance->nore);
}
if (my_instance->unified_fp != NULL)
{
fclose(my_instance->unified_fp);
}
MXS_FREE(my_instance->filebase);
MXS_FREE(my_instance->source);
MXS_FREE(my_instance->userName);
@ -338,15 +400,17 @@ newSession(FILTER *instance, SESSION *session)
my_session->user = userName;
my_session->remote = remote;
my_session->ses_id = session->ses_id;
sprintf(my_session->filename, "%s.%d",
sprintf(my_session->filename, "%s.%lu",
my_instance->filebase,
my_instance->sessions);
my_session->ses_id); // Fixed possible race condition
// Multiple sessions can try to update my_instance->sessions simultaneously
atomic_add(&(my_instance->sessions), 1);
if (my_session->active)
// Only open the session file if the corresponding mode setting is used
if (my_session->active && (my_instance->log_mode_flags & CONFIG_FILE_SESSION))
{
my_session->fp = fopen(my_session->filename, "w");
@ -354,7 +418,7 @@ newSession(FILTER *instance, SESSION *session)
{
char errbuf[MXS_STRERROR_BUFLEN];
MXS_ERROR("Opening output file for qla "
"fileter failed due to %d, %s",
"filter failed due to %d, %s",
errno,
strerror_r(errno, errbuf, sizeof(errbuf)));
MXS_FREE(my_session->filename);
@ -363,14 +427,6 @@ newSession(FILTER *instance, SESSION *session)
}
}
}
else
{
char errbuf[MXS_STRERROR_BUFLEN];
MXS_ERROR("Memory allocation for qla filter failed due to "
"%d, %s.",
errno,
strerror_r(errno, errbuf, sizeof(errbuf)));
}
return my_session;
}
@ -458,8 +514,31 @@ routeQuery(FILTER *instance, void *session, GWBUF *queue)
gettimeofday(&tv, NULL);
localtime_r(&tv.tv_sec, &t);
strftime(buffer, sizeof(buffer), "%F %T", &t);
fprintf(my_session->fp, "%s,%s@%s,%s\n", buffer, my_session->user,
my_session->remote, trim(squeeze_whitespace(ptr)));
/**
* Write the log entry to every enabled log file: the session-specific
* file and/or the unified file shared by all sessions.
*/
char *sql_string = trim(squeeze_whitespace(ptr));
if (my_instance->log_mode_flags & CONFIG_FILE_SESSION)
{
fprintf(my_session->fp, "%s,%s@%s,%s\n", buffer, my_session->user,
my_session->remote, sql_string);
if (my_instance->flush_writes)
{
fflush(my_session->fp);
}
}
if (my_instance->log_mode_flags & CONFIG_FILE_UNIFIED)
{
fprintf(my_instance->unified_fp, "S%zd,%s,%s@%s,%s\n",
my_session->ses_id, buffer, my_session->user,
my_session->remote, sql_string);
if (my_instance->flush_writes)
{
fflush(my_instance->unified_fp);
}
}
}
MXS_FREE(ptr);
}

View File

@ -71,9 +71,10 @@ static FILTER_OBJECT MyObject =
setDownstream,
NULL, // No Upstream requirement
routeQuery,
NULL,
NULL, // No clientReply
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**

View File

@ -130,6 +130,7 @@ static FILTER_OBJECT MyObject =
clientReply,
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**

View File

@ -37,7 +37,7 @@ MODULE_INFO info =
"A simple query counting filter"
};
static char *version_str = "V1.0.0";
static char *version_str = "V2.0.0";
static FILTER *createInstance(const char *name, char **options, FILTER_PARAMETER **params);
static void *newSession(FILTER *instance, SESSION *session);
@ -47,6 +47,7 @@ static void setDownstream(FILTER *instance, void *fsession, DOWNSTREAM *down
static int routeQuery(FILTER *instance, void *fsession, GWBUF *queue);
static void diagnostic(FILTER *instance, void *fsession, DCB *dcb);
static uint64_t getCapabilities(void);
static void destroyInstance(FILTER *instance);
static FILTER_OBJECT MyObject =
@ -56,11 +57,12 @@ static FILTER_OBJECT MyObject =
closeSession,
freeSession,
setDownstream,
NULL, // No upstream requirement
NULL, // No upstream requirement
routeQuery,
NULL,
NULL, // No clientReply
diagnostic,
getCapabilities,
destroyInstance,
};
/**
@ -68,7 +70,8 @@ static FILTER_OBJECT MyObject =
*/
typedef struct
{
int sessions;
const char *name;
int sessions;
} TEST_INSTANCE;
/**
@ -135,6 +138,7 @@ createInstance(const char *name, char **options, FILTER_PARAMETER **params)
if ((my_instance = MXS_CALLOC(1, sizeof(TEST_INSTANCE))) != NULL)
{
my_instance->sessions = 0;
my_instance->name = name;
}
return (FILTER *)my_instance;
}
@ -258,3 +262,15 @@ static uint64_t getCapabilities(void)
{
return RCAP_TYPE_NONE;
}
/**
* destroyInstance routine.
*
* @param instance The filter instance.
*/
static void destroyInstance(FILTER *instance)
{
TEST_INSTANCE *cinstance = (TEST_INSTANCE *)instance;
MXS_INFO("Destroying filter %s", cinstance->name);
}

View File

@ -82,6 +82,7 @@ static FILTER_OBJECT MyObject =
clientReply,
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**

View File

@ -95,6 +95,7 @@ static FILTER_OBJECT MyObject =
clientReply,
diagnostic,
getCapabilities,
NULL, // No destroyInstance
};
/**

View File

@ -55,7 +55,7 @@ typedef struct cli_instance
* The CLI_SESSION structure. As CLI_SESSION is created for each user that logs into
* the DEBUG CLI.
*/
enum { CMDBUFLEN = 80 };
#define CMDBUFLEN 2048
typedef struct cli_session
{

View File

@ -681,11 +681,15 @@ static MONITOR_SERVERS *get_candidate_master(MONITOR* mon)
if (handle->use_priority && (value = serverGetParameter(moitor_servers->server, "priority")) != NULL)
{
currval = atoi(value);
if (currval < minval && currval > 0)
/** The server has a priority */
if ((currval = atoi(value)) > 0)
{
minval = currval;
candidate_master = moitor_servers;
/** The priority is valid */
if (currval < minval && currval > 0)
{
minval = currval;
candidate_master = moitor_servers;
}
}
}
else if (moitor_servers->server->node_id >= 0 &&

View File

@ -215,7 +215,7 @@ bool init_server_info(MYSQL_MONITOR *handle, MONITOR_SERVERS *database)
while (database)
{
/** Delete any existing structures and replace them with empty ones */
hashtable_delete(handle->server_info, database->server);
hashtable_delete(handle->server_info, database->server->unique_name);
if (!hashtable_add(handle->server_info, database->server->unique_name, &info))
{
@ -270,7 +270,6 @@ startMonitor(MONITOR *monitor, const CONFIG_PARAMETER* params)
handle->replicationHeartbeat = 0;
handle->detectStaleMaster = true;
handle->detectStaleSlave = true;
handle->master = NULL;
handle->script = NULL;
handle->multimaster = false;
handle->mysql51_replication = false;
@ -281,6 +280,9 @@ startMonitor(MONITOR *monitor, const CONFIG_PARAMETER* params)
spinlock_init(&handle->lock);
}
/** This should always be reset to NULL */
handle->master = NULL;
while (params)
{
if (!strcmp(params->name, "detect_stale_master"))
@ -688,20 +690,9 @@ monitorDatabase(MONITOR *mon, MONITOR_SERVERS *database)
MYSQL_MONITOR* handle = mon->handle;
MYSQL_ROW row;
MYSQL_RES *result;
char *uname = mon->user;
unsigned long int server_version = 0;
char *server_string;
if (database->server->monuser != NULL)
{
uname = database->server->monuser;
}
if (uname == NULL)
{
return;
}
/* Don't probe servers in maintenance mode */
if (SERVER_IN_MAINT(database->server))
{

View File

@ -402,9 +402,6 @@ gw_do_connect_to_backend(char *host, int port, int *fd)
MXS_DEBUG("%lu [gw_do_connect_to_backend] Connected to backend server "
"%s:%d, fd %d.",
pthread_self(), host, port, so);
#if defined(FAKE_CODE)
conn_open[so] = true;
#endif /* FAKE_CODE */
return_rv:
return rv;
@ -759,6 +756,41 @@ gw_read_and_write(DCB *dcb)
}
}
MySQLProtocol *proto = (MySQLProtocol *)dcb->protocol;
spinlock_acquire(&dcb->authlock);
if (proto->ignore_reply)
{
/** The reply to a COM_CHANGE_USER is in packet */
GWBUF *query = proto->stored_query;
uint8_t result = *((uint8_t*)GWBUF_DATA(read_buffer) + 4);
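// The byte right after the 4 byte header is the status byte of the reply:
// 0x00 (MYSQL_REPLY_OK) on success, 0xff on error.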
proto->stored_query = NULL;
proto->ignore_reply = false;
gwbuf_free(read_buffer);
spinlock_release(&dcb->authlock);
int rval = 0;
if (result == MYSQL_REPLY_OK)
{
rval = query ? dcb->func.write(dcb, query) : 1;
}
else if (query)
{
/** The COM_CHANGE USER failed, generate a fake hangup event to
* close the DCB and send an error to the client. */
gwbuf_free(query);
poll_fake_hangup_event(dcb);
}
return rval;
}
spinlock_release(&dcb->authlock);
/**
* If protocol has session command set, concatenate whole
* response into one buffer.
@ -911,6 +943,49 @@ static int gw_MySQLWrite_backend(DCB *dcb, GWBUF *queue)
CHK_DCB(dcb);
spinlock_acquire(&dcb->authlock);
if (dcb->was_persistent && dcb->state == DCB_STATE_POLLING)
{
ss_dassert(dcb->persistentstart == 0);
/**
* This is a DCB that was just taken out of the persistent connection pool.
* We need to sent a COM_CHANGE_USER query to the backend to reset the
* session state.
*/
if (backend_protocol->stored_query)
{
/** It is possible that the client DCB is closed before the COM_CHANGE_USER
* response is received. */
gwbuf_free(backend_protocol->stored_query);
}
dcb->was_persistent = false;
backend_protocol->ignore_reply = true;
backend_protocol->stored_query = queue;
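/* Stash the client's statement; gw_read_and_write() forwards it to the
* backend once the reply to the COM_CHANGE_USER has been received. */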
spinlock_release(&dcb->authlock);
GWBUF *buf = gw_create_change_user_packet(dcb->session->client_dcb->data, dcb->protocol);
return dcb_write(dcb, buf) ? 1 : 0;
}
else if (backend_protocol->ignore_reply)
{
if (MYSQL_IS_COM_QUIT((uint8_t*)GWBUF_DATA(queue)))
{
gwbuf_free(queue);
}
else
{
/**
* We're still waiting on the reply to the COM_CHANGE_USER, append the
* buffer to the stored query. This is possible if the client sends
* BLOB data on the first command.
*/
backend_protocol->stored_query = gwbuf_append(backend_protocol->stored_query, queue);
}
spinlock_release(&dcb->authlock);
return 1;
}
/**
* Pick action according to state of protocol.
* If auth failed, return value is 0, write and buffered write

View File

@ -102,6 +102,7 @@ MySQLProtocol* mysql_protocol_init(DCB* dcb, int fd)
p->protocol_command.scom_cmd = MYSQL_COM_UNDEFINED;
p->protocol_command.scom_nresponse_packets = 0;
p->protocol_command.scom_nbytes_to_read = 0;
p->stored_query = NULL;
#if defined(SS_DEBUG)
p->protocol_chk_top = CHK_NUM_PROTOCOL;
p->protocol_chk_tail = CHK_NUM_PROTOCOL;
@ -145,6 +146,9 @@ void mysql_protocol_done(DCB* dcb)
MXS_FREE(scmd);
scmd = scmd2;
}
gwbuf_free(p->stored_query);
p->protocol_state = MYSQL_PROTOCOL_DONE;
retblock:

View File

@ -105,7 +105,8 @@ static ROUTER_OBJECT MyObject =
diagnostics,
clientReply,
errorReply,
getCapabilities
getCapabilities,
NULL // No destroyInstance
};
static SPINLOCK instlock;
@ -223,14 +224,22 @@ bool create_tables(sqlite3* handle)
return true;
}
static void add_conversion_task(AVRO_INSTANCE *inst)
static bool add_conversion_task(AVRO_INSTANCE *inst)
{
char tasknm[strlen(avro_task_name) + strlen(inst->service->name) + 2];
snprintf(tasknm, sizeof(tasknm), "%s-%s", inst->service->name, avro_task_name);
if (inst->service->svc_do_shutdown)
{
MXS_INFO("AVRO converter task is not added due to MaxScale shutdown");
return false;
}
MXS_INFO("Setting task for converter_func");
if (hktask_oneshot(tasknm, converter_func, inst, inst->task_delay) == 0)
{
MXS_ERROR("Failed to add binlog to Avro conversion task to housekeeper.");
return false;
}
return true;
}
/**
@ -1006,7 +1015,8 @@ void converter_func(void* data)
AVRO_INSTANCE* router = (AVRO_INSTANCE*) data;
bool ok = true;
avro_binlog_end_t binlog_end = AVRO_OK;
while (ok && binlog_end == AVRO_OK)
while (!router->service->svc_do_shutdown && ok && binlog_end == AVRO_OK)
{
uint64_t start_pos = router->current_pos;
if (avro_open_binlog(router->binlogdir, router->binlog_name, &router->binlog_fd))
@ -1037,10 +1047,12 @@ void converter_func(void* data)
if (binlog_end == AVRO_LAST_FILE)
{
router->task_delay = MXS_MIN(router->task_delay + 1, AVRO_TASK_DELAY_MAX);
add_conversion_task(router);
MXS_INFO("Stopped processing file %s at position %lu. Waiting until"
" more data is written before continuing. Next check in %d seconds.",
router->binlog_name, router->current_pos, router->task_delay);
if (add_conversion_task(router))
{
MXS_INFO("Stopped processing file %s at position %lu. Waiting until"
" more data is written before continuing. Next check in %d seconds.",
router->binlog_name, router->current_pos, router->task_delay);
}
}
}

View File

@ -51,6 +51,7 @@
* 11/07/2016 Massimiliano Pinto Added SSL backend support
* 22/07/2016 Massimiliano Pinto Added semi_sync replication support
* 16/08/2016 Massimiliano Pinto Addition of Start Encryption Event description
* 08/11/2016 Massimiliano Pinto Added destroyInstance()
*
* @endverbatim
*/
@ -109,6 +110,7 @@ static int blr_check_binlog(ROUTER_INSTANCE *router);
int blr_read_events_all_events(ROUTER_INSTANCE *router, int fix, int debug);
void blr_master_close(ROUTER_INSTANCE *);
void blr_free_ssl_data(ROUTER_INSTANCE *inst);
static void destroyInstance(ROUTER *instance);
/** The module object definition */
static ROUTER_OBJECT MyObject =
@ -121,7 +123,8 @@ static ROUTER_OBJECT MyObject =
diagnostics,
clientReply,
errorReply,
getCapabilities
getCapabilities,
destroyInstance
};
static void stats_func(void *);
@ -583,7 +586,8 @@ createInstance(SERVICE *service, char **options)
{
SERVER *server;
SSL_LISTENER *ssl_cfg;
server = server_alloc("_none_", "MySQLBackend", 3306, "MySQLBackendAuth", NULL);
server = server_alloc("binlog_router_master_host", "_none_", 3306,
"MySQLBackend", "MySQLBackendAuth", NULL);
if (server == NULL)
{
MXS_ERROR("%s: Error for server_alloc in createInstance",
@ -615,7 +619,6 @@ createInstance(SERVICE *service, char **options)
server->server_ssl = ssl_cfg;
/* Set server unique name */
server_set_unique_name(server, "binlog_router_master_host");
/* Add server to service backend list */
serviceAddBackend(inst->service, server);
}
@ -2293,3 +2296,65 @@ blr_free_ssl_data(ROUTER_INSTANCE *inst)
inst->service->dbref->server->server_ssl = NULL;
}
}
/**
* Destroy the binlog router instance
*
* @param instance The router instance to destroy
*/
static void
destroyInstance(ROUTER *instance)
{
ROUTER_INSTANCE *inst = (ROUTER_INSTANCE *) instance;
MXS_DEBUG("Destroying instance of router %s for service %s",
inst->service->routerModule, inst->service->name);
/* Check whether master connection is active */
if (inst->master)
{
if (inst->master->fd != -1 && inst->master->state == DCB_STATE_POLLING)
{
blr_master_close(inst);
}
}
spinlock_acquire(&inst->lock);
if (inst->master_state != BLRM_UNCONFIGURED)
{
inst->master_state = BLRM_SLAVE_STOPPED;
}
if (inst->client)
{
if (inst->client->state == DCB_STATE_POLLING)
{
dcb_close(inst->client);
inst->client = NULL;
}
}
/* Discard the queued residual data */
while (inst->residual)
{
inst->residual = gwbuf_consume(inst->residual, GWBUF_LENGTH(inst->residual));
}
inst->residual = NULL;
MXS_INFO("%s is being stopped by MaxScale shudown. Disconnecting from master %s:%d, "
"read up to log %s, pos %lu, transaction safe pos %lu",
inst->service->name,
inst->service->dbref->server->name,
inst->service->dbref->server->port,
inst->binlog_name, inst->current_pos, inst->binlog_position);
if (inst->trx_safe && inst->pending_transaction)
{
MXS_WARNING("%s stopped by shutdown: detected mid-transaction in binlog file %s, "
"pos %lu, incomplete transaction starts at pos %lu",
inst->service->name, inst->binlog_name, inst->current_pos, inst->binlog_position);
}
spinlock_release(&inst->lock);
}
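
The trailing NULL added to the other routers' ROUTER_OBJECT initializers in this commit fills the same slot that blr registers destroyInstance() in. A minimal sketch of how the core could drive that slot at shutdown, assuming the struct member keeps the destroyInstance name (the actual caller lives in the MaxScale core, not in this diff):

static void shutdown_router(ROUTER_OBJECT *api, ROUTER *instance)
{
    /* Sketch only: the slot is optional, so routers that registered NULL
     * (cli, debugcli, readconnroute, ...) are simply skipped. */
    if (api->destroyInstance)
    {
        api->destroyInstance(instance);
    }
}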

View File

@ -103,12 +103,12 @@ int main(int argc, char **argv) {
s = strtok_r(NULL, ",", &lasts);
}
set_libdir(MXS_STRDUP_A("../../../authenticator/"));
server = server_alloc("_none_", "MySQLBackend", 3306, "MySQLBackendAuth", NULL);
server = server_alloc("binlog_router_master_host", "_none_", 3306,
"MySQLBackend", "MySQLBackendAuth", NULL);
if (server == NULL) {
return 1;
}
server_set_unique_name(server, "binlog_router_master_host");
serviceAddBackend(service, server);
}

View File

@ -71,7 +71,8 @@ static ROUTER_OBJECT MyObject =
diagnostics,
NULL,
NULL,
getCapabilities
getCapabilities,
NULL
};
extern int execute_cmd(CLI_SESSION *cli);

View File

@ -70,7 +70,8 @@ static ROUTER_OBJECT MyObject =
diagnostics,
NULL,
NULL,
getCapabilities
getCapabilities,
NULL
};
extern int execute_cmd(CLI_SESSION *cli);

File diff suppressed because it is too large

View File

@ -96,7 +96,8 @@ static ROUTER_OBJECT MyObject =
diagnostics,
NULL,
handleError,
getCapabilities
getCapabilities,
NULL
};
static SPINLOCK instlock;

View File

@ -32,18 +32,6 @@
MXS_BEGIN_DECLS
/**
* Internal structure used to define the set of backend servers we are routing
* connections to. This provides the storage for routing module specific data
* that is required for each of the backend servers.
*/
typedef struct backend
{
SERVER *server; /*< The server itself */
int current_connection_count; /*< Number of connections to the server */
int weight; /*< Desired routing weight */
} BACKEND;
/**
* The client session structure used within this router.
*/
@ -55,7 +43,7 @@ typedef struct router_client_session
SPINLOCK rses_lock; /*< protects rses_deleted */
int rses_versno; /*< even = no active update, else odd */
bool rses_closed; /*< true when closeSession is called */
BACKEND *backend; /*< Backend used by the client session */
SERVER_REF *backend; /*< Backend used by the client session */
DCB *backend_dcb; /*< DCB Connection to the backend */
DCB *client_dcb; /**< Client DCB */
struct router_client_session *next;
@ -79,9 +67,7 @@ typedef struct
typedef struct router_instance
{
SERVICE *service; /*< Pointer to the service using this router */
ROUTER_CLIENT_SES *connections; /*< Link list of all the client connections */
SPINLOCK lock; /*< Spinlock for the instance data */
BACKEND **servers; /*< List of backend servers */
unsigned int bitmask; /*< Bitmask to apply to server->status */
unsigned int bitvalue; /*< Required value of server->status */
ROUTER_STATS stats; /*< Statistics for this router */

View File

@ -125,14 +125,15 @@ static ROUTER_OBJECT MyObject =
diagnostics,
clientReply,
handleError,
getCapabilities
getCapabilities,
NULL
};
static bool rses_begin_locked_router_action(ROUTER_CLIENT_SES* rses);
static void rses_end_locked_router_action(ROUTER_CLIENT_SES* rses);
static BACKEND *get_root_master(BACKEND **servers);
static SERVER_REF *get_root_master(SERVER_REF *servers);
static int handle_state_switch(DCB* dcb, DCB_REASON reason, void * routersession);
static SPINLOCK instlock;
static ROUTER_INSTANCE *instances;
@ -178,14 +179,6 @@ static inline void free_readconn_instance(ROUTER_INSTANCE *router)
{
if (router)
{
if (router->servers)
{
for (int i = 0; router->servers[i]; i++)
{
MXS_FREE(router->servers[i]);
}
}
MXS_FREE(router->servers);
MXS_FREE(router);
}
}
@ -203,11 +196,8 @@ static ROUTER *
createInstance(SERVICE *service, char **options)
{
ROUTER_INSTANCE *inst;
SERVER *server;
SERVER_REF *sref;
int i, n;
BACKEND *backend;
char *weightby;
if ((inst = MXS_CALLOC(1, sizeof(ROUTER_INSTANCE))) == NULL)
{
@ -217,103 +207,6 @@ createInstance(SERVICE *service, char **options)
inst->service = service;
spinlock_init(&inst->lock);
/*
* We need an array of the backend servers in the instance structure so
* that we can maintain a count of the number of connections to each
* backend server.
*/
for (sref = service->dbref, n = 0; sref; sref = sref->next)
{
n++;
}
inst->servers = (BACKEND **) MXS_CALLOC(n + 1, sizeof(BACKEND *));
if (!inst->servers)
{
free_readconn_instance(inst);
return NULL;
}
for (sref = service->dbref, n = 0; sref; sref = sref->next)
{
if ((inst->servers[n] = MXS_MALLOC(sizeof(BACKEND))) == NULL)
{
free_readconn_instance(inst);
return NULL;
}
inst->servers[n]->server = sref->server;
inst->servers[n]->current_connection_count = 0;
inst->servers[n]->weight = 1000;
n++;
}
inst->servers[n] = NULL;
if ((weightby = serviceGetWeightingParameter(service)) != NULL)
{
int total = 0;
for (int n = 0; inst->servers[n]; n++)
{
BACKEND *backend = inst->servers[n];
char *param = serverGetParameter(backend->server, weightby);
if (param)
{
total += atoi(param);
}
}
if (total == 0)
{
MXS_WARNING("Weighting Parameter for service '%s' "
"will be ignored as no servers have values "
"for the parameter '%s'.",
service->name, weightby);
}
else if (total < 0)
{
MXS_ERROR("Sum of weighting parameter '%s' for service '%s' exceeds "
"maximum value of %d. Weighting will be ignored.",
weightby, service->name, INT_MAX);
}
else
{
for (int n = 0; inst->servers[n]; n++)
{
BACKEND *backend = inst->servers[n];
char *param = serverGetParameter(backend->server, weightby);
if (param)
{
int wght = atoi(param);
int perc = (wght * 1000) / total;
if (perc == 0)
{
perc = 1;
MXS_ERROR("Weighting parameter '%s' with a value of %d for"
" server '%s' rounds down to zero with total weight"
" of %d for service '%s'. No queries will be "
"routed to this server.", weightby, wght,
backend->server->unique_name, total,
service->name);
}
else if (perc < 0)
{
MXS_ERROR("Weighting parameter '%s' for server '%s' is too large, "
"maximum value is %d. No weighting will be used for this server.",
weightby, backend->server->unique_name, INT_MAX / 1000);
perc = 1000;
}
backend->weight = perc;
}
else
{
MXS_WARNING("Server '%s' has no parameter '%s' used for weighting"
" for service '%s'.", backend->server->unique_name,
weightby, service->name);
}
}
}
}
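
A worked example of the per-mille weights that the removed block computed; the numbers are illustrative, and the same scale is still used below through ref->weight.

/* Two servers whose weighting parameter is 3 and 1 respectively: total = 4 */
int perc_a = (3 * 1000) / 4;   /* 750 -> roughly 75% of new connections     */
int perc_b = (1 * 1000) / 4;   /* 250 -> roughly 25%                        */
/* A share that rounds down to 0 is logged and bumped to 1, as above.       */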
/*
* Process the options
*/
@ -398,9 +291,9 @@ newSession(ROUTER *instance, SESSION *session)
{
ROUTER_INSTANCE *inst = (ROUTER_INSTANCE *) instance;
ROUTER_CLIENT_SES *client_rses;
BACKEND *candidate = NULL;
SERVER_REF *candidate = NULL;
int i;
BACKEND *master_host = NULL;
SERVER_REF *master_host = NULL;
MXS_DEBUG("%lu [newSession] new router session with session "
"%p, and inst %p.",
@ -408,7 +301,6 @@ newSession(ROUTER *instance, SESSION *session)
session,
inst);
client_rses = (ROUTER_CLIENT_SES *) MXS_CALLOC(1, sizeof(ROUTER_CLIENT_SES));
if (client_rses == NULL)
@ -425,7 +317,7 @@ newSession(ROUTER *instance, SESSION *session)
/**
* Find the Master host from available servers
*/
master_host = get_root_master(inst->servers);
master_host = get_root_master(inst->service->dbref);
/**
* Find a backend server to connect to. This is the extent of the
@ -445,52 +337,43 @@ newSession(ROUTER *instance, SESSION *session)
* become the new candidate. This has the effect of spreading the
* connections over different servers during periods of very low load.
*/
for (i = 0; inst->servers[i]; i++)
for (SERVER_REF *ref = inst->service->dbref; ref; ref = ref->next)
{
if (inst->servers[i])
if (!SERVER_REF_IS_ACTIVE(ref) || SERVER_IN_MAINT(ref->server) || ref->weight == 0)
{
continue;
}
else
{
MXS_DEBUG("%lu [newSession] Examine server in port %d with "
"%d connections. Status is %s, "
"inst->bitvalue is %d",
pthread_self(),
inst->servers[i]->server->port,
inst->servers[i]->current_connection_count,
STRSRVSTATUS(inst->servers[i]->server),
ref->server->port,
ref->connections,
STRSRVSTATUS(ref->server),
inst->bitmask);
}
if (SERVER_IN_MAINT(inst->servers[i]->server))
{
continue;
}
if (inst->servers[i]->weight == 0)
{
continue;
}
/* Check server status bits against bitvalue from router_options */
if (inst->servers[i] &&
SERVER_IS_RUNNING(inst->servers[i]->server) &&
(inst->servers[i]->server->status & inst->bitmask & inst->bitvalue))
if (ref && SERVER_IS_RUNNING(ref->server) &&
(ref->server->status & inst->bitmask & inst->bitvalue))
{
if (master_host)
{
if (inst->servers[i] == master_host && (inst->bitvalue & SERVER_SLAVE))
if (ref == master_host && (inst->bitvalue & SERVER_SLAVE))
{
/* skip root Master here, as it could also be slave of an external server
* that is not in the configuration.
* Intermediate masters (Relay Servers) are also slave and will be selected
* as Slave(s)
/* Skip the root master here, as it could also be a slave of an external server that
* is not in the configuration. Intermediate masters (Relay Servers) are also
* slaves and will be selected as slaves.
*/
continue;
}
if (inst->servers[i] == master_host && (inst->bitvalue & SERVER_MASTER))
if (ref == master_host && (inst->bitvalue & SERVER_MASTER))
{
/* If option is "master" return only the root Master as there
* could be intermediate masters (Relay Servers)
* and they must not be selected.
/* If option is "master" return only the root Master as there could be
* intermediate masters (Relay Servers) and they must not be selected.
*/
candidate = master_host;
@ -499,8 +382,7 @@ newSession(ROUTER *instance, SESSION *session)
}
else
{
/* master_host is NULL, no master server.
* If requested router_option is 'master'
/* Master_host is NULL, no master server. If requested router_option is 'master'
* candidate will be NULL.
*/
if (inst->bitvalue & SERVER_MASTER)
@ -510,40 +392,31 @@ newSession(ROUTER *instance, SESSION *session)
}
}
/* If no candidate set, set first running server as
our initial candidate server */
/* If no candidate set, set first running server as our initial candidate server */
if (candidate == NULL)
{
candidate = inst->servers[i];
candidate = ref;
}
else if (((inst->servers[i]->current_connection_count + 1)
* 1000) / inst->servers[i]->weight <
((candidate->current_connection_count + 1) *
1000) / candidate->weight)
else if (((ref->connections + 1) * 1000) / ref->weight <
((candidate->connections + 1) * 1000) / candidate->weight)
{
/* This running server has fewer
connections, set it as a new candidate */
candidate = inst->servers[i];
/* This running server has fewer connections, set it as a new candidate */
candidate = ref;
}
else if (((inst->servers[i]->current_connection_count + 1)
* 1000) / inst->servers[i]->weight ==
((candidate->current_connection_count + 1) *
1000) / candidate->weight &&
inst->servers[i]->server->stats.n_connections <
candidate->server->stats.n_connections)
else if (((ref->connections + 1) * 1000) / ref->weight ==
((candidate->connections + 1) * 1000) / candidate->weight &&
ref->server->stats.n_connections < candidate->server->stats.n_connections)
{
/* This running server has the same number
of connections currently as the candidate
but has had fewer connections over time
than candidate, set this server to candidate*/
candidate = inst->servers[i];
/* This running server has the same number of connections currently as the candidate
but has had fewer connections over time than candidate, set this server to
candidate*/
candidate = ref;
}
}
}
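
For intuition about the ((connections + 1) * 1000) / weight score used above, continuing the illustrative 750/250 weights:

int score_a = (2 + 1) * 1000 / 750;   /* weight 750, 2 open connections -> 4 */
int score_b = (0 + 1) * 1000 / 250;   /* weight 250, 0 open connections -> 4 */
/* Lower score wins; on a tie the server with the smaller lifetime
 * stats.n_connections becomes the candidate, as in the last branch above.   */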
/* There is no candidate server here!
* With router_option=slave a master_host could be set, so route traffic there.
* Otherwise, just clean up and return NULL
/* If we haven't found a proper candidate yet but a master server is available, we'll pick that
* with the assumption that it is "better" than a slave.
*/
if (!candidate)
{
@ -553,9 +426,8 @@ newSession(ROUTER *instance, SESSION *session)
}
else
{
MXS_ERROR("Failed to create new routing session. "
"Couldn't find eligible candidate server. Freeing "
"allocated resources.");
MXS_ERROR("Failed to create new routing session. Couldn't find eligible"
" candidate server. Freeing allocated resources.");
MXS_FREE(client_rses);
return NULL;
}
@ -565,48 +437,32 @@ newSession(ROUTER *instance, SESSION *session)
* We now have the server with the least connections.
* Bump the connection count for this server
*/
atomic_add(&candidate->current_connection_count, 1);
client_rses->backend = candidate;
MXS_DEBUG("%lu [newSession] Selected server in port %d. "
"Connections : %d\n",
pthread_self(),
candidate->server->port,
candidate->current_connection_count);
/*
* Open a backend connection, putting the DCB for this
* connection in the client_rses->backend_dcb
*/
client_rses->backend_dcb = dcb_connect(candidate->server,
session,
/** Open the backend connection */
client_rses->backend_dcb = dcb_connect(candidate->server, session,
candidate->server->protocol);
if (client_rses->backend_dcb == NULL)
{
atomic_add(&candidate->current_connection_count, -1);
/** The failure is reported in dcb_connect() */
MXS_FREE(client_rses);
return NULL;
}
dcb_add_callback(
client_rses->backend_dcb,
atomic_add(&candidate->connections, 1);
// TODO: Remove this as it is never called
dcb_add_callback(client_rses->backend_dcb,
DCB_REASON_NOT_RESPONDING,
&handle_state_switch,
client_rses);
inst->stats.n_sessions++;
/**
* Add this session to the list of active sessions.
*/
spinlock_acquire(&inst->lock);
client_rses->next = inst->connections;
inst->connections = client_rses;
spinlock_release(&inst->lock);
CHK_CLIENT_RSES(client_rses);
MXS_INFO("Readconnroute: New session for server %s. "
"Connections : %d",
candidate->server->unique_name,
candidate->current_connection_count);
MXS_INFO("Readconnroute: New session for server %s. Connections : %d",
candidate->server->unique_name, candidate->connections);
return(void *) client_rses;
}
@ -631,42 +487,11 @@ newSession(ROUTER *instance, SESSION *session)
static void freeSession(ROUTER* router_instance, void* router_client_ses)
{
ROUTER_INSTANCE* router = (ROUTER_INSTANCE *) router_instance;
ROUTER_CLIENT_SES* router_cli_ses =
(ROUTER_CLIENT_SES *) router_client_ses;
ROUTER_CLIENT_SES* router_cli_ses = (ROUTER_CLIENT_SES *) router_client_ses;
ss_debug(int prev_val = ) atomic_add(&router_cli_ses->backend->current_connection_count, -1);
ss_debug(int prev_val = ) atomic_add(&router_cli_ses->backend->connections, -1);
ss_dassert(prev_val > 0);
spinlock_acquire(&router->lock);
if (router->connections == router_cli_ses)
{
router->connections = router_cli_ses->next;
}
else
{
ROUTER_CLIENT_SES *ptr = router->connections;
while (ptr != NULL && ptr->next != router_cli_ses)
{
ptr = ptr->next;
}
if (ptr != NULL)
{
ptr->next = router_cli_ses->next;
}
}
spinlock_release(&router->lock);
MXS_DEBUG("%lu [freeSession] Unlinked router_client_session %p from "
"router %p and from server on port %d. Connections : %d. ",
pthread_self(),
router_cli_ses,
router,
router_cli_ses->backend->server->port,
prev_val - 1);
MXS_FREE(router_cli_ses);
}
@ -708,6 +533,29 @@ closeSession(ROUTER *instance, void *router_session)
}
}
/** Log a routing failure: the session is closed, the server is down, or the server was removed from the service */
static void log_closed_session(mysql_server_cmd_t mysql_command, bool is_closed,
SERVER_REF *ref)
{
char msg[MAX_SERVER_NAME_LEN + 200] = ""; // Extra space for message
if (is_closed)
{
sprintf(msg, "Session is closed.");
}
else if (SERVER_IS_DOWN(ref->server))
{
sprintf(msg, "Server '%s' is down.", ref->server->unique_name);
}
else if (!SERVER_REF_IS_ACTIVE(ref))
{
sprintf(msg, "Server '%s' was removed from the service.", ref->server->unique_name);
}
MXS_ERROR("Failed to route MySQL command %d to backend server. %s",
mysql_command, msg);
}
/**
* We have data from the client, we must route it to the backend.
* This is simply a case of sending it to the connection that was
@ -723,7 +571,7 @@ routeQuery(ROUTER *instance, void *router_session, GWBUF *queue)
{
ROUTER_INSTANCE *inst = (ROUTER_INSTANCE *) instance;
ROUTER_CLIENT_SES *router_cli_ses = (ROUTER_CLIENT_SES *) router_session;
int rc;
int rc = 0;
DCB* backend_dcb;
MySQLProtocol *proto = (MySQLProtocol*)router_cli_ses->client_dcb->protocol;
mysql_server_cmd_t mysql_command = proto->current_command;
@ -752,16 +600,11 @@ routeQuery(ROUTER *instance, void *router_session, GWBUF *queue)
}
if (rses_is_closed || backend_dcb == NULL ||
!SERVER_REF_IS_ACTIVE(router_cli_ses->backend) ||
SERVER_IS_DOWN(router_cli_ses->backend->server))
{
MXS_ERROR("Failed to route MySQL command %d to backend "
"server.%s",
mysql_command, rses_is_closed ? " Session is closed." : "");
rc = 0;
while ((queue = GWBUF_CONSUME_ALL(queue)) != NULL)
{
;
}
log_closed_session(mysql_command, rses_is_closed, router_cli_ses->backend);
gwbuf_free(queue);
goto return_rc;
}
@ -806,23 +649,12 @@ static void
diagnostics(ROUTER *router, DCB *dcb)
{
ROUTER_INSTANCE *router_inst = (ROUTER_INSTANCE *) router;
ROUTER_CLIENT_SES *session;
int i = 0;
BACKEND *backend;
char *weightby;
spinlock_acquire(&router_inst->lock);
session = router_inst->connections;
while (session)
{
i++;
session = session->next;
}
spinlock_release(&router_inst->lock);
dcb_printf(dcb, "\tNumber of router sessions: %d\n",
router_inst->stats.n_sessions);
dcb_printf(dcb, "\tCurrent no. of router sessions: %d\n", i);
dcb_printf(dcb, "\tCurrent no. of router sessions: %d\n",
router_inst->service->stats.n_current);
dcb_printf(dcb, "\tNumber of queries forwarded: %d\n",
router_inst->stats.n_queries);
if ((weightby = serviceGetWeightingParameter(router_inst->service))
@ -833,15 +665,13 @@ diagnostics(ROUTER *router, DCB *dcb)
weightby);
dcb_printf(dcb,
"\t\tServer Target %% Connections\n");
for (i = 0; router_inst->servers[i]; i++)
for (SERVER_REF *ref = router_inst->service->dbref; ref; ref = ref->next)
{
backend = router_inst->servers[i];
dcb_printf(dcb, "\t\t%-20s %3.1f%% %d\n",
backend->server->unique_name,
(float) backend->weight / 10,
backend->current_connection_count);
ref->server->unique_name,
(float) ref->weight / 10,
ref->connections);
}
}
}
@ -1002,28 +832,28 @@ static uint64_t getCapabilities(void)
*
*/
static BACKEND *get_root_master(BACKEND **servers)
static SERVER_REF *get_root_master(SERVER_REF *servers)
{
int i = 0;
BACKEND *master_host = NULL;
SERVER_REF *master_host = NULL;
for (i = 0; servers[i]; i++)
for (SERVER_REF *ref = servers; ref; ref = ref->next)
{
if (servers[i] && (servers[i]->server->status & (SERVER_MASTER | SERVER_MAINT)) == SERVER_MASTER)
if (ref->active && SERVER_IS_MASTER(ref->server))
{
if (master_host == NULL)
{
master_host = servers[i];
master_host = ref;
}
else if (servers[i]->server->depth < master_host->server->depth ||
(servers[i]->server->depth == master_host->server->depth &&
servers[i]->weight > master_host->weight))
else if (ref->server->depth < master_host->server->depth ||
(ref->server->depth == master_host->server->depth &&
ref->weight > master_host->weight))
{
/**
* This master has a lower depth than the candidate master or
* the depths are equal but this master has a higher weight
*/
master_host = servers[i];
master_host = ref;
}
}
}

File diff suppressed because it is too large

View File

@ -32,26 +32,6 @@
MXS_BEGIN_DECLS
#undef PREP_STMT_CACHING
#if defined(PREP_STMT_CACHING)
typedef enum prep_stmt_type
{
PREP_STMT_NAME,
PREP_STMT_ID
} prep_stmt_type_t;
typedef enum prep_stmt_state
{
PREP_STMT_ALLOC,
PREP_STMT_SENT,
PREP_STMT_RECV,
PREP_STMT_DROPPED
} prep_stmt_state_t;
#endif /*< PREP_STMT_CACHING */
typedef enum bref_state
{
BREF_IN_USE = 0x01,
@ -199,27 +179,6 @@ typedef struct sescmd_cursor_st
#endif
} sescmd_cursor_t;
/**
* Internal structure used to define the set of backend servers we are routing
* connections to. This provides the storage for routing module specific data
* that is required for each of the backend servers.
*
* Owned by router_instance, referenced by each routing session.
*/
typedef struct backend_st
{
#if defined(SS_DEBUG)
skygw_chk_t be_chk_top;
#endif
SERVER* backend_server; /*< The server itself */
int backend_conn_count; /*< Number of connections to the server */
bool be_valid; /*< Valid when belongs to the router's configuration */
int weight; /*< Desired weighting on the load. Expressed in .1% increments */
#if defined(SS_DEBUG)
skygw_chk_t be_chk_tail;
#endif
} BACKEND;
/**
* Reference to BACKEND.
*
@ -230,7 +189,7 @@ typedef struct backend_ref_st
#if defined(SS_DEBUG)
skygw_chk_t bref_chk_top;
#endif
BACKEND* bref_backend;
SERVER_REF* ref;
DCB* bref_dcb;
bref_state_t bref_state;
int bref_num_result_wait;
@ -302,7 +261,6 @@ struct router_client_session
skygw_chk_t rses_chk_top;
#endif
SPINLOCK rses_lock; /*< protects rses_deleted */
int rses_versno; /*< even = no active update, else odd. not used 4/14 */
bool rses_closed; /*< true when closeSession is called */
rses_property_t* rses_properties[RSES_PROP_TYPE_COUNT]; /*< Properties listed by their type */
backend_ref_t* rses_master_ref;
@ -346,14 +304,10 @@ typedef struct
typedef struct router_instance
{
SERVICE* service; /*< Pointer to service */
ROUTER_CLIENT_SES* connections; /*< List of client connections */
SPINLOCK lock; /*< Lock for the instance data */
BACKEND** servers; /*< Backend servers */
BACKEND* master; /*< NULL or pointer */
rwsplit_config_t rwsplit_config; /*< expanded config info from SERVICE */
int rwsplit_version; /*< version number for router's config */
ROUTER_STATS stats; /*< Statistics for this router */
struct router_instance* next; /*< Next router on the list */
bool available_slaves; /*< The router has some slaves available */
} ROUTER_INSTANCE;

View File

@ -366,66 +366,6 @@ void live_session_reply(GWBUF **querybuf, ROUTER_CLIENT_SES *rses)
}
}
/*
* Uses MySQL specific mechanisms
*/
/**
* @brief Write an error message to the log for session lock failure
*
* This happens when processing a client reply and the session cannot be
* locked.
*
* @param rses Router session
* @param buf Query buffer containing reply data
* @param dcb The backend DCB that sent the reply
*/
void print_error_packet(ROUTER_CLIENT_SES *rses, GWBUF *buf, DCB *dcb)
{
#if defined(SS_DEBUG)
if (GWBUF_IS_TYPE_MYSQL(buf))
{
while (gwbuf_length(buf) > 0)
{
/**
* This works with MySQL protocol only !
* Protocol specific packet print functions would be nice.
*/
uint8_t *ptr = GWBUF_DATA(buf);
size_t len = MYSQL_GET_PACKET_LEN(ptr);
if (MYSQL_GET_COMMAND(ptr) == 0xff)
{
SERVER *srv = NULL;
backend_ref_t *bref = rses->rses_backend_ref;
int i;
char *bufstr;
for (i = 0; i < rses->rses_nbackends; i++)
{
if (bref[i].bref_dcb == dcb)
{
srv = bref[i].bref_backend->backend_server;
}
}
ss_dassert(srv != NULL);
char *str = (char *)&ptr[7];
bufstr = strndup(str, len - 3);
MXS_ERROR("Backend server %s:%d responded with "
"error : %s",
srv->name, srv->port, bufstr);
MXS_FREE(bufstr);
}
buf = gwbuf_consume(buf, len + 4);
}
}
else
{
gwbuf_free(buf);
}
#endif /*< SS_DEBUG */
}
/*
* Uses MySQL specific mechanisms
*/
@ -453,8 +393,8 @@ void check_session_command_reply(GWBUF *writebuf, sescmd_cursor_t *scur, backend
ss_dassert(len + 4 == GWBUF_LENGTH(scur->scmd_cur_cmd->my_sescmd_buf));
MXS_ERROR("Failed to execute session command in %s:%d. Error was: %s %s",
bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port, err, replystr);
bref->ref->server->name,
bref->ref->server->port, err, replystr);
MXS_FREE(err);
MXS_FREE(replystr);
}

View File

@ -84,7 +84,7 @@ bool route_single_stmt(ROUTER_INSTANCE *inst, ROUTER_CLIENT_SES *rses,
* transaction is committed and autocommit is enabled again.
*/
if (rses->rses_autocommit_enabled &&
QUERY_IS_TYPE(qtype, QUERY_TYPE_DISABLE_AUTOCOMMIT))
qc_query_is_type(qtype, QUERY_TYPE_DISABLE_AUTOCOMMIT))
{
rses->rses_autocommit_enabled = false;
@ -94,7 +94,7 @@ bool route_single_stmt(ROUTER_INSTANCE *inst, ROUTER_CLIENT_SES *rses,
}
}
else if (!rses->rses_transaction_active &&
QUERY_IS_TYPE(qtype, QUERY_TYPE_BEGIN_TRX))
qc_query_is_type(qtype, QUERY_TYPE_BEGIN_TRX))
{
rses->rses_transaction_active = true;
}
@ -102,13 +102,13 @@ bool route_single_stmt(ROUTER_INSTANCE *inst, ROUTER_CLIENT_SES *rses,
* Explicit COMMIT and ROLLBACK, implicit COMMIT.
*/
if (rses->rses_autocommit_enabled && rses->rses_transaction_active &&
(QUERY_IS_TYPE(qtype, QUERY_TYPE_COMMIT) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_ROLLBACK)))
(qc_query_is_type(qtype, QUERY_TYPE_COMMIT) ||
qc_query_is_type(qtype, QUERY_TYPE_ROLLBACK)))
{
rses->rses_transaction_active = false;
}
else if (!rses->rses_autocommit_enabled &&
QUERY_IS_TYPE(qtype, QUERY_TYPE_ENABLE_AUTOCOMMIT))
qc_query_is_type(qtype, QUERY_TYPE_ENABLE_AUTOCOMMIT))
{
rses->rses_autocommit_enabled = true;
rses->rses_transaction_active = false;
@ -253,10 +253,10 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
BREF_IS_IN_USE((&backend_ref[i])))
{
MXS_INFO("Route query to %s \t%s:%d%s",
(SERVER_IS_MASTER(backend_ref[i].bref_backend->backend_server)
(SERVER_IS_MASTER(backend_ref[i].ref->server)
? "master" : "slave"),
backend_ref[i].bref_backend->backend_server->name,
backend_ref[i].bref_backend->backend_server->port,
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port,
(i + 1 == router_cli_ses->rses_nbackends ? " <" : " "));
}
@ -368,10 +368,10 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
if (MXS_LOG_PRIORITY_IS_ENABLED(LOG_INFO))
{
MXS_INFO("Route query to %s \t%s:%d%s",
(SERVER_IS_MASTER(backend_ref[i].bref_backend->backend_server)
(SERVER_IS_MASTER(backend_ref[i].ref->server)
? "master" : "slave"),
backend_ref[i].bref_backend->backend_server->name,
backend_ref[i].bref_backend->backend_server->port,
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port,
(i + 1 == router_cli_ses->rses_nbackends ? " <" : " "));
}
@ -391,8 +391,8 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
{
nsucc += 1;
MXS_INFO("Backend %s:%d already executing sescmd.",
backend_ref[i].bref_backend->backend_server->name,
backend_ref[i].bref_backend->backend_server->port);
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port);
}
else
{
@ -403,8 +403,8 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
else
{
MXS_ERROR("Failed to execute session command in %s:%d",
backend_ref[i].bref_backend->backend_server->name,
backend_ref[i].bref_backend->backend_server->port);
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port);
}
}
}
@ -533,9 +533,9 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
for (i = 0; i < rses->rses_nbackends; i++)
{
BACKEND *b = backend_ref[i].bref_backend;
SERVER_REF *b = backend_ref[i].ref;
SERVER server;
server.status = backend_ref[i].bref_backend->backend_server->status;
server.status = b->server->status;
/**
* To become chosen:
* backend must be in use, name must match,
@ -543,7 +543,8 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
* server, or master.
*/
if (BREF_IS_IN_USE((&backend_ref[i])) &&
(strncasecmp(name, b->backend_server->unique_name, PATH_MAX) == 0) &&
SERVER_REF_IS_ACTIVE(b) &&
(strncasecmp(name, b->server->unique_name, PATH_MAX) == 0) &&
(SERVER_IS_SLAVE(&server) || SERVER_IS_RELAY_SERVER(&server) ||
SERVER_IS_MASTER(&server)))
{
@ -569,15 +570,15 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
for (i = 0; i < rses->rses_nbackends; i++)
{
BACKEND *b = (&backend_ref[i])->bref_backend;
SERVER_REF *b = backend_ref[i].ref;
SERVER server;
SERVER candidate;
server.status = backend_ref[i].bref_backend->backend_server->status;
server.status = b->server->status;
/**
* Unused backend or backend which is not master nor
* slave can't be used
*/
if (!BREF_IS_IN_USE(&backend_ref[i]) ||
if (!BREF_IS_IN_USE(&backend_ref[i]) || !SERVER_REF_IS_ACTIVE(b) ||
(!SERVER_IS_MASTER(&server) && !SERVER_IS_SLAVE(&server)))
{
continue;
@ -596,7 +597,7 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
{
/** found master */
candidate_bref = &backend_ref[i];
candidate.status = candidate_bref->bref_backend->backend_server->status;
candidate.status = candidate_bref->ref->server->status;
succp = true;
}
/**
@ -605,12 +606,12 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
* maximum allowed replication lag.
*/
else if (max_rlag == MAX_RLAG_UNDEFINED ||
(b->backend_server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->backend_server->rlag <= max_rlag))
(b->server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->server->rlag <= max_rlag))
{
/** found slave */
candidate_bref = &backend_ref[i];
candidate.status = candidate_bref->bref_backend->backend_server->status;
candidate.status = candidate_bref->ref->server->status;
succp = true;
}
}
@ -620,13 +621,13 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
*/
else if (SERVER_IS_MASTER(&candidate) && SERVER_IS_SLAVE(&server) &&
(max_rlag == MAX_RLAG_UNDEFINED ||
(b->backend_server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->backend_server->rlag <= max_rlag)) &&
(b->server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->server->rlag <= max_rlag)) &&
!rses->rses_config.rw_master_reads)
{
/** found slave */
candidate_bref = &backend_ref[i];
candidate.status = candidate_bref->bref_backend->backend_server->status;
candidate.status = candidate_bref->ref->server->status;
succp = true;
}
/**
@ -637,21 +638,17 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
else if (SERVER_IS_SLAVE(&server))
{
if (max_rlag == MAX_RLAG_UNDEFINED ||
(b->backend_server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->backend_server->rlag <= max_rlag))
(b->server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->server->rlag <= max_rlag))
{
candidate_bref =
check_candidate_bref(candidate_bref, &backend_ref[i],
rses->rses_config.rw_slave_select_criteria);
candidate.status =
candidate_bref->bref_backend->backend_server->status;
candidate_bref = check_candidate_bref(candidate_bref, &backend_ref[i],
rses->rses_config.rw_slave_select_criteria);
candidate.status = candidate_bref->ref->server->status;
}
else
{
MXS_INFO("Server %s:%d is too much behind the "
"master, %d s. and can't be chosen.",
b->backend_server->name, b->backend_server->port,
b->backend_server->rlag);
MXS_INFO("Server %s:%d is too much behind the master, %d s. and can't be chosen.",
b->server->name, b->server->port, b->server->rlag);
}
}
} /*< for */
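
The lag checks above behave as follows with an illustrative limit of 10 seconds:

int max_rlag = 10;                               /* illustrative limit       */
int rlag = 3;                                    /* reported replication lag */
bool eligible = (max_rlag == MAX_RLAG_UNDEFINED) ||
                (rlag != MAX_RLAG_NOT_AVAILABLE && rlag <= max_rlag);
/* rlag 3 -> eligible; rlag 30 -> rejected and the INFO message above is
 * logged; MAX_RLAG_NOT_AVAILABLE (no monitor data) -> rejected whenever a
 * limit is configured, while MAX_RLAG_UNDEFINED accepts every running slave. */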
@ -669,27 +666,37 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
*/
if (btype == BE_MASTER)
{
if (master_bref)
if (master_bref && SERVER_REF_IS_ACTIVE(master_bref->ref))
{
/** It is possible for the server status to change at any point in time
* so copying it locally will make possible error messages
* easier to understand */
SERVER server;
server.status = master_bref->bref_backend->backend_server->status;
if (BREF_IS_IN_USE(master_bref) && SERVER_IS_MASTER(&server))
server.status = master_bref->ref->server->status;
if (BREF_IS_IN_USE(master_bref))
{
*p_dcb = master_bref->bref_dcb;
succp = true;
/** if bref is in use DCB should not be closed */
ss_dassert(master_bref->bref_dcb->state != DCB_STATE_ZOMBIE);
if (SERVER_IS_MASTER(&server))
{
*p_dcb = master_bref->bref_dcb;
succp = true;
/** if bref is in use DCB should not be closed */
ss_dassert(master_bref->bref_dcb->state != DCB_STATE_ZOMBIE);
}
else
{
MXS_ERROR("Server '%s' should be master but "
"is %s instead and can't be chosen as the master.",
master_bref->ref->server->unique_name,
STRSRVSTATUS(&server));
succp = false;
}
}
else
{
MXS_ERROR("Server at %s:%d should be master but "
"is %s instead and can't be chosen to master.",
master_bref->bref_backend->backend_server->name,
master_bref->bref_backend->backend_server->port,
STRSRVSTATUS(&server));
MXS_ERROR("Server '%s' is not in use and can't be "
"chosen as the master.",
master_bref->ref->server->unique_name);
succp = false;
}
}
@ -727,13 +734,13 @@ route_target_t get_route_target(ROUTER_CLIENT_SES *rses,
* These queries are not affected by hints
*/
else if (!load_active &&
(QUERY_IS_TYPE(qtype, QUERY_TYPE_SESSION_WRITE) ||
(qc_query_is_type(qtype, QUERY_TYPE_SESSION_WRITE) ||
/** Configured to allow writing variables to all nodes */
(use_sql_variables_in == TYPE_ALL &&
QUERY_IS_TYPE(qtype, QUERY_TYPE_GSYSVAR_WRITE)) ||
qc_query_is_type(qtype, QUERY_TYPE_GSYSVAR_WRITE)) ||
/** enable or disable autocommit are always routed to all */
QUERY_IS_TYPE(qtype, QUERY_TYPE_ENABLE_AUTOCOMMIT) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_DISABLE_AUTOCOMMIT)))
qc_query_is_type(qtype, QUERY_TYPE_ENABLE_AUTOCOMMIT) ||
qc_query_is_type(qtype, QUERY_TYPE_DISABLE_AUTOCOMMIT)))
{
/**
* This is problematic query because it would be routed to all
@ -750,9 +757,9 @@ route_target_t get_route_target(ROUTER_CLIENT_SES *rses,
* the execution of the prepared statements to the right server would be
* an easy one. Currently this is not supported.
*/
if (QUERY_IS_TYPE(qtype, QUERY_TYPE_READ) &&
!(QUERY_IS_TYPE(qtype, QUERY_TYPE_PREPARE_STMT) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_PREPARE_NAMED_STMT)))
if (qc_query_is_type(qtype, QUERY_TYPE_READ) &&
!(qc_query_is_type(qtype, QUERY_TYPE_PREPARE_STMT) ||
qc_query_is_type(qtype, QUERY_TYPE_PREPARE_NAMED_STMT)))
{
MXS_WARNING("The query can't be routed to all "
"backend servers because it includes SELECT and "
@ -770,40 +777,40 @@ route_target_t get_route_target(ROUTER_CLIENT_SES *rses,
* Hints may affect on routing of the following queries
*/
else if (!trx_active && !load_active &&
!QUERY_IS_TYPE(qtype, QUERY_TYPE_WRITE) &&
(QUERY_IS_TYPE(qtype, QUERY_TYPE_READ) || /*< any SELECT */
QUERY_IS_TYPE(qtype, QUERY_TYPE_SHOW_TABLES) || /*< 'SHOW TABLES' */
QUERY_IS_TYPE(qtype,
QUERY_TYPE_USERVAR_READ) || /*< read user var */
QUERY_IS_TYPE(qtype, QUERY_TYPE_SYSVAR_READ) || /*< read sys var */
QUERY_IS_TYPE(qtype,
QUERY_TYPE_EXEC_STMT) || /*< prepared stmt exec */
QUERY_IS_TYPE(qtype, QUERY_TYPE_PREPARE_STMT) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_PREPARE_NAMED_STMT) ||
QUERY_IS_TYPE(qtype,
QUERY_TYPE_GSYSVAR_READ))) /*< read global sys var */
!qc_query_is_type(qtype, QUERY_TYPE_WRITE) &&
(qc_query_is_type(qtype, QUERY_TYPE_READ) || /*< any SELECT */
qc_query_is_type(qtype, QUERY_TYPE_SHOW_TABLES) || /*< 'SHOW TABLES' */
qc_query_is_type(qtype,
QUERY_TYPE_USERVAR_READ) || /*< read user var */
qc_query_is_type(qtype, QUERY_TYPE_SYSVAR_READ) || /*< read sys var */
qc_query_is_type(qtype,
QUERY_TYPE_EXEC_STMT) || /*< prepared stmt exec */
qc_query_is_type(qtype, QUERY_TYPE_PREPARE_STMT) ||
qc_query_is_type(qtype, QUERY_TYPE_PREPARE_NAMED_STMT) ||
qc_query_is_type(qtype,
QUERY_TYPE_GSYSVAR_READ))) /*< read global sys var */
{
/** First set expected targets before evaluating hints */
if (!QUERY_IS_TYPE(qtype, QUERY_TYPE_MASTER_READ) &&
(QUERY_IS_TYPE(qtype, QUERY_TYPE_READ) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_SHOW_TABLES) || /*< 'SHOW TABLES' */
if (!qc_query_is_type(qtype, QUERY_TYPE_MASTER_READ) &&
(qc_query_is_type(qtype, QUERY_TYPE_READ) ||
qc_query_is_type(qtype, QUERY_TYPE_SHOW_TABLES) || /*< 'SHOW TABLES' */
/** Configured to allow reading variables from slaves */
(use_sql_variables_in == TYPE_ALL &&
(QUERY_IS_TYPE(qtype, QUERY_TYPE_USERVAR_READ) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_SYSVAR_READ) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_GSYSVAR_READ)))))
(qc_query_is_type(qtype, QUERY_TYPE_USERVAR_READ) ||
qc_query_is_type(qtype, QUERY_TYPE_SYSVAR_READ) ||
qc_query_is_type(qtype, QUERY_TYPE_GSYSVAR_READ)))))
{
target = TARGET_SLAVE;
}
if (QUERY_IS_TYPE(qtype, QUERY_TYPE_MASTER_READ) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_EXEC_STMT) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_PREPARE_STMT) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_PREPARE_NAMED_STMT) ||
if (qc_query_is_type(qtype, QUERY_TYPE_MASTER_READ) ||
qc_query_is_type(qtype, QUERY_TYPE_EXEC_STMT) ||
qc_query_is_type(qtype, QUERY_TYPE_PREPARE_STMT) ||
qc_query_is_type(qtype, QUERY_TYPE_PREPARE_NAMED_STMT) ||
/** Configured not to allow reading variables from slaves */
(use_sql_variables_in == TYPE_MASTER &&
(QUERY_IS_TYPE(qtype, QUERY_TYPE_USERVAR_READ) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_SYSVAR_READ))))
(qc_query_is_type(qtype, QUERY_TYPE_USERVAR_READ) ||
qc_query_is_type(qtype, QUERY_TYPE_SYSVAR_READ))))
{
target = TARGET_MASTER;
}
@ -818,26 +825,26 @@ route_target_t get_route_target(ROUTER_CLIENT_SES *rses,
{
/** hints don't affect on routing */
ss_dassert(trx_active ||
(QUERY_IS_TYPE(qtype, QUERY_TYPE_WRITE) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_MASTER_READ) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_SESSION_WRITE) ||
(QUERY_IS_TYPE(qtype, QUERY_TYPE_USERVAR_READ) &&
(qc_query_is_type(qtype, QUERY_TYPE_WRITE) ||
qc_query_is_type(qtype, QUERY_TYPE_MASTER_READ) ||
qc_query_is_type(qtype, QUERY_TYPE_SESSION_WRITE) ||
(qc_query_is_type(qtype, QUERY_TYPE_USERVAR_READ) &&
use_sql_variables_in == TYPE_MASTER) ||
(QUERY_IS_TYPE(qtype, QUERY_TYPE_SYSVAR_READ) &&
(qc_query_is_type(qtype, QUERY_TYPE_SYSVAR_READ) &&
use_sql_variables_in == TYPE_MASTER) ||
(QUERY_IS_TYPE(qtype, QUERY_TYPE_GSYSVAR_READ) &&
(qc_query_is_type(qtype, QUERY_TYPE_GSYSVAR_READ) &&
use_sql_variables_in == TYPE_MASTER) ||
(QUERY_IS_TYPE(qtype, QUERY_TYPE_GSYSVAR_WRITE) &&
(qc_query_is_type(qtype, QUERY_TYPE_GSYSVAR_WRITE) &&
use_sql_variables_in == TYPE_MASTER) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_BEGIN_TRX) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_ENABLE_AUTOCOMMIT) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_DISABLE_AUTOCOMMIT) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_ROLLBACK) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_COMMIT) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_EXEC_STMT) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_CREATE_TMP_TABLE) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_READ_TMP_TABLE) ||
QUERY_IS_TYPE(qtype, QUERY_TYPE_UNKNOWN)));
qc_query_is_type(qtype, QUERY_TYPE_BEGIN_TRX) ||
qc_query_is_type(qtype, QUERY_TYPE_ENABLE_AUTOCOMMIT) ||
qc_query_is_type(qtype, QUERY_TYPE_DISABLE_AUTOCOMMIT) ||
qc_query_is_type(qtype, QUERY_TYPE_ROLLBACK) ||
qc_query_is_type(qtype, QUERY_TYPE_COMMIT) ||
qc_query_is_type(qtype, QUERY_TYPE_EXEC_STMT) ||
qc_query_is_type(qtype, QUERY_TYPE_CREATE_TMP_TABLE) ||
qc_query_is_type(qtype, QUERY_TYPE_READ_TMP_TABLE) ||
qc_query_is_type(qtype, QUERY_TYPE_UNKNOWN)));
target = TARGET_MASTER;
}
@ -895,9 +902,6 @@ route_target_t get_route_target(ROUTER_CLIENT_SES *rses,
hint = hint->next;
} /*< while (hint != NULL) */
#if defined(SS_EXTRA_DEBUG)
MXS_INFO("Selected target \"%s\"", STRTARGET(target));
#endif
return target;
}
@ -1102,9 +1106,6 @@ bool handle_slave_is_target(ROUTER_INSTANCE *inst, ROUTER_CLIENT_SES *rses,
*/
if (rwsplit_get_dcb(target_dcb, rses, BE_SLAVE, NULL, rlag_max))
{
#if defined(SS_EXTRA_DEBUG)
MXS_INFO("Found DCB for slave.");
#endif
atomic_add(&inst->stats.n_slave, 1);
return true;
}
@ -1191,9 +1192,8 @@ handle_got_target(ROUTER_INSTANCE *inst, ROUTER_CLIENT_SES *rses,
ss_dassert(target_dcb != NULL);
MXS_INFO("Route query to %s \t%s:%d <",
(SERVER_IS_MASTER(bref->bref_backend->backend_server) ? "master"
: "slave"), bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port);
(SERVER_IS_MASTER(bref->ref->server) ? "master"
: "slave"), bref->ref->server->name, bref->ref->server->port);
/**
* Store current stmt if execution of previous session command
* haven't completed yet.
@ -1372,14 +1372,13 @@ static backend_ref_t *get_root_master_bref(ROUTER_CLIENT_SES *rses)
if (bref == rses->rses_master_ref)
{
/** Store master state for better error reporting */
master.status = bref->bref_backend->backend_server->status;
master.status = bref->ref->server->status;
}
if (bref->bref_backend->backend_server->status & SERVER_MASTER)
if (SERVER_IS_MASTER(bref->ref->server))
{
if (candidate_bref == NULL ||
(bref->bref_backend->backend_server->depth <
candidate_bref->bref_backend->backend_server->depth))
(bref->ref->server->depth < candidate_bref->ref->server->depth))
{
candidate_bref = bref;
}

View File

@ -38,7 +38,7 @@ static bool connect_server(backend_ref_t *bref, SESSION *session, bool execute_h
static void log_server_connections(select_criteria_t select_criteria,
backend_ref_t *backend_ref, int router_nservers);
static BACKEND *get_root_master(backend_ref_t *servers, int router_nservers);
static SERVER_REF *get_root_master(backend_ref_t *servers, int router_nservers);
static int bref_cmp_global_conn(const void *bref1, const void *bref2);
@ -103,10 +103,11 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
}
/* get the root Master */
BACKEND *master_host = get_root_master(backend_ref, router_nservers);
SERVER_REF *master_host = get_root_master(backend_ref, router_nservers);
if (router->rwsplit_config.rw_master_failure_mode == RW_FAIL_INSTANTLY &&
(master_host == NULL || SERVER_IS_DOWN(master_host->backend_server)))
(master_host == NULL || !SERVER_REF_IS_ACTIVE(master_host) ||
SERVER_IS_DOWN(master_host->server)))
{
MXS_ERROR("Couldn't find suitable Master from %d candidates.", router_nservers);
return false;
@ -145,9 +146,11 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
for (int i = 0; i < router_nservers &&
(slaves_connected < max_nslaves || !master_connected); i++)
{
SERVER *serv = backend_ref[i].bref_backend->backend_server;
SERVER *serv = backend_ref[i].ref->server;
if (!BREF_HAS_FAILED(&backend_ref[i]) && SERVER_IS_RUNNING(serv))
if (!BREF_HAS_FAILED(&backend_ref[i]) &&
SERVER_REF_IS_ACTIVE(backend_ref[i].ref) &&
SERVER_IS_RUNNING(serv))
{
/* check also for relay servers and don't take the master_host */
if (slaves_found < max_nslaves &&
@ -155,7 +158,7 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
(serv->rlag != MAX_RLAG_NOT_AVAILABLE &&
serv->rlag <= max_slave_rlag)) &&
(SERVER_IS_SLAVE(serv) || SERVER_IS_RELAY_SERVER(serv)) &&
(master_host == NULL || (serv != master_host->backend_server)))
(master_host == NULL || (serv != master_host->server)))
{
slaves_found += 1;
@ -166,7 +169,7 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
}
}
/* take the master_host for master */
else if (master_host && (serv == master_host->backend_server))
else if (master_host && (serv == master_host->server))
{
/** p_master_ref must be assigned with this backend_ref pointer
* because its original value may have been lost when backend
@ -205,9 +208,9 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
if (BREF_IS_IN_USE((&backend_ref[i])))
{
MXS_INFO("Selected %s in \t%s:%d",
STRSRVSTATUS(backend_ref[i].bref_backend->backend_server),
backend_ref[i].bref_backend->backend_server->name,
backend_ref[i].bref_backend->backend_server->port);
STRSRVSTATUS(backend_ref[i].ref->server),
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port);
}
} /* for */
}
@ -226,12 +229,12 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
{
if (BREF_IS_IN_USE((&backend_ref[i])))
{
ss_dassert(backend_ref[i].bref_backend->backend_conn_count > 0);
ss_dassert(backend_ref[i].ref->connections > 0);
/** disconnect opened connections */
bref_clear_state(&backend_ref[i], BREF_IN_USE);
/** Decrease backend's connection counter. */
atomic_add(&backend_ref[i].bref_backend->backend_conn_count, -1);
atomic_add(&backend_ref[i].ref->connections, -1);
dcb_close(backend_ref[i].bref_dcb);
}
}
@ -243,13 +246,12 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
/** Compare number of connections from this router in backend servers */
static int bref_cmp_router_conn(const void *bref1, const void *bref2)
{
BACKEND *b1 = ((backend_ref_t *)bref1)->bref_backend;
BACKEND *b2 = ((backend_ref_t *)bref2)->bref_backend;
SERVER_REF *b1 = ((backend_ref_t *)bref1)->ref;
SERVER_REF *b2 = ((backend_ref_t *)bref2)->ref;
if (b1->weight == 0 && b2->weight == 0)
{
return b1->backend_server->stats.n_current -
b2->backend_server->stats.n_current;
return b1->connections - b2->connections;
}
else if (b1->weight == 0)
{
@ -260,20 +262,20 @@ static int bref_cmp_router_conn(const void *bref1, const void *bref2)
return -1;
}
return ((1000 + 1000 * b1->backend_conn_count) / b1->weight) -
((1000 + 1000 * b2->backend_conn_count) / b2->weight);
return ((1000 + 1000 * b1->connections) / b1->weight) -
((1000 + 1000 * b2->connections) / b2->weight);
}
/** Compare number of global connections in backend servers */
static int bref_cmp_global_conn(const void *bref1, const void *bref2)
{
BACKEND *b1 = ((backend_ref_t *)bref1)->bref_backend;
BACKEND *b2 = ((backend_ref_t *)bref2)->bref_backend;
SERVER_REF *b1 = ((backend_ref_t *)bref1)->ref;
SERVER_REF *b2 = ((backend_ref_t *)bref2)->ref;
if (b1->weight == 0 && b2->weight == 0)
{
return b1->backend_server->stats.n_current -
b2->backend_server->stats.n_current;
return b1->server->stats.n_current -
b2->server->stats.n_current;
}
else if (b1->weight == 0)
{
@ -284,32 +286,29 @@ static int bref_cmp_global_conn(const void *bref1, const void *bref2)
return -1;
}
return ((1000 + 1000 * b1->backend_server->stats.n_current) / b1->weight) -
((1000 + 1000 * b2->backend_server->stats.n_current) / b2->weight);
return ((1000 + 1000 * b1->server->stats.n_current) / b1->weight) -
((1000 + 1000 * b2->server->stats.n_current) / b2->weight);
}
/** Compare replication lag between backend servers */
static int bref_cmp_behind_master(const void *bref1, const void *bref2)
{
BACKEND *b1 = ((backend_ref_t *)bref1)->bref_backend;
BACKEND *b2 = ((backend_ref_t *)bref2)->bref_backend;
SERVER_REF *b1 = ((backend_ref_t *)bref1)->ref;
SERVER_REF *b2 = ((backend_ref_t *)bref2)->ref;
return ((b1->backend_server->rlag < b2->backend_server->rlag) ? -1
: ((b1->backend_server->rlag > b2->backend_server->rlag) ? 1 : 0));
return b1->server->rlag - b2->server->rlag;
}
/** Compare number of current operations in backend servers */
static int bref_cmp_current_load(const void *bref1, const void *bref2)
{
SERVER *s1 = ((backend_ref_t *)bref1)->bref_backend->backend_server;
SERVER *s2 = ((backend_ref_t *)bref2)->bref_backend->backend_server;
BACKEND *b1 = ((backend_ref_t *)bref1)->bref_backend;
BACKEND *b2 = ((backend_ref_t *)bref2)->bref_backend;
SERVER_REF *b1 = ((backend_ref_t *)bref1)->ref;
SERVER_REF *b2 = ((backend_ref_t *)bref2)->ref;
if (b1->weight == 0 && b2->weight == 0)
{
return b1->backend_server->stats.n_current -
b2->backend_server->stats.n_current;
// TODO: Fix this so that operations are used instead of connections
return b1->server->stats.n_current - b2->server->stats.n_current;
}
else if (b1->weight == 0)
{
@ -320,8 +319,8 @@ static int bref_cmp_current_load(const void *bref1, const void *bref2)
return -1;
}
return ((1000 * s1->stats.n_current_ops) - b1->weight) -
((1000 * s2->stats.n_current_ops) - b2->weight);
return ((1000 * b1->server->stats.n_current_ops) - b1->weight) -
((1000 * b2->server->stats.n_current_ops) - b2->weight);
}
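
These functions follow the qsort(3) comparator contract; a minimal sketch of applying one of them to the candidate array (the actual call site, presumably in select_connect_backend_servers, may differ):

#include <stdlib.h>

static void sort_candidates(backend_ref_t *backend_ref, int router_nservers)
{
    /* Order candidates by the "least current operations" criterion. */
    qsort(backend_ref, router_nservers, sizeof(backend_ref_t),
          bref_cmp_current_load);
}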
/**
@ -338,7 +337,7 @@ static int bref_cmp_current_load(const void *bref1, const void *bref2)
*/
static bool connect_server(backend_ref_t *bref, SESSION *session, bool execute_history)
{
SERVER *serv = bref->bref_backend->backend_server;
SERVER *serv = bref->ref->server;
bool rval = false;
bref->bref_dcb = dcb_connect(serv, session, serv->protocol);
@ -354,16 +353,16 @@ static bool connect_server(backend_ref_t *bref, SESSION *session, bool execute_h
&router_handle_state_switch, (void *) bref);
bref->bref_state = 0;
bref_set_state(bref, BREF_IN_USE);
atomic_add(&bref->bref_backend->backend_conn_count, 1);
atomic_add(&bref->ref->connections, 1);
rval = true;
}
else
{
MXS_ERROR("Failed to execute session command in %s (%s:%d). See earlier "
"errors for more details.",
bref->bref_backend->backend_server->unique_name,
bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port);
bref->ref->server->unique_name,
bref->ref->server->name,
bref->ref->server->port);
dcb_close(bref->bref_dcb);
bref->bref_dcb = NULL;
}
@ -398,33 +397,33 @@ static void log_server_connections(select_criteria_t select_criteria,
for (int i = 0; i < router_nservers; i++)
{
BACKEND *b = backend_ref[i].bref_backend;
SERVER_REF *b = backend_ref[i].ref;
switch (select_criteria)
{
case LEAST_GLOBAL_CONNECTIONS:
MXS_INFO("MaxScale connections : %d in \t%s:%d %s",
b->backend_server->stats.n_current, b->backend_server->name,
b->backend_server->port, STRSRVSTATUS(b->backend_server));
b->server->stats.n_current, b->server->name,
b->server->port, STRSRVSTATUS(b->server));
break;
case LEAST_ROUTER_CONNECTIONS:
MXS_INFO("RWSplit connections : %d in \t%s:%d %s",
b->backend_conn_count, b->backend_server->name,
b->backend_server->port, STRSRVSTATUS(b->backend_server));
b->connections, b->server->name,
b->server->port, STRSRVSTATUS(b->server));
break;
case LEAST_CURRENT_OPERATIONS:
MXS_INFO("current operations : %d in \t%s:%d %s",
b->backend_server->stats.n_current_ops,
b->backend_server->name, b->backend_server->port,
STRSRVSTATUS(b->backend_server));
b->server->stats.n_current_ops,
b->server->name, b->server->port,
STRSRVSTATUS(b->server));
break;
case LEAST_BEHIND_MASTER:
MXS_INFO("replication lag : %d in \t%s:%d %s",
b->backend_server->rlag, b->backend_server->name,
b->backend_server->port, STRSRVSTATUS(b->backend_server));
b->server->rlag, b->server->name,
b->server->port, STRSRVSTATUS(b->server));
default:
break;
}
@ -445,27 +444,26 @@ static void log_server_connections(select_criteria_t select_criteria,
* @return The Master found
*
*/
static BACKEND *get_root_master(backend_ref_t *servers, int router_nservers)
static SERVER_REF *get_root_master(backend_ref_t *servers, int router_nservers)
{
int i = 0;
BACKEND *master_host = NULL;
SERVER_REF *master_host = NULL;
for (i = 0; i < router_nservers; i++)
{
BACKEND *b;
if (servers[i].bref_backend == NULL)
if (servers[i].ref == NULL)
{
/** This should not happen */
ss_dassert(false);
continue;
}
b = servers[i].bref_backend;
SERVER_REF *b = servers[i].ref;
if ((b->backend_server->status & (SERVER_MASTER | SERVER_MAINT)) ==
SERVER_MASTER)
if (SERVER_IS_MASTER(b->server))
{
if (master_host == NULL ||
(b->backend_server->depth < master_host->backend_server->depth))
(b->server->depth < master_host->server->depth))
{
master_host = b;
}

View File

@ -173,7 +173,7 @@ GWBUF *sescmd_cursor_process_replies(GWBUF *replybuf,
{
MXS_ERROR("Slave server '%s': response differs from master's response. "
"Closing connection due to inconsistent session state.",
bref->bref_backend->backend_server->unique_name);
bref->ref->server->unique_name);
sescmd_cursor_set_active(scur, false);
bref_clear_state(bref, BREF_QUERY_ACTIVE);
bref_clear_state(bref, BREF_IN_USE);
@ -205,7 +205,7 @@ GWBUF *sescmd_cursor_process_replies(GWBUF *replybuf,
scmd->reply_cmd = *((unsigned char *)replybuf->start + 4);
MXS_INFO("Server '%s' responded to a session command, sending the response "
"to the client.", bref->bref_backend->backend_server->unique_name);
"to the client.", bref->ref->server->unique_name);
for (int i = 0; i < ses->rses_nbackends; i++)
{
@ -226,8 +226,8 @@ GWBUF *sescmd_cursor_process_replies(GWBUF *replybuf,
*reconnect = true;
MXS_INFO("Disabling slave %s:%d, result differs from "
"master's result. Master: %d Slave: %d",
ses->rses_backend_ref[i].bref_backend->backend_server->name,
ses->rses_backend_ref[i].bref_backend->backend_server->port,
ses->rses_backend_ref[i].ref->server->name,
ses->rses_backend_ref[i].ref->server->port,
bref->reply_cmd, ses->rses_backend_ref[i].reply_cmd);
}
}
@ -237,11 +237,11 @@ GWBUF *sescmd_cursor_process_replies(GWBUF *replybuf,
else
{
MXS_INFO("Slave '%s' responded before master to a session command. Result: %d",
bref->bref_backend->backend_server->unique_name,
bref->ref->server->unique_name,
(int)bref->reply_cmd);
if (bref->reply_cmd == 0xff)
{
SERVER *serv = bref->bref_backend->backend_server;
SERVER *serv = bref->ref->server;
MXS_ERROR("Slave '%s' (%s:%u) failed to execute session command.",
serv->unique_name, serv->name, serv->port);
}

Some files were not shown because too many files have changed in this diff