Merge branch 'develop' into MXS-930

This commit is contained in:
MassimilianoPinto
2016-11-07 16:21:06 +01:00
31 changed files with 3073 additions and 1044 deletions


@ -31,6 +31,7 @@
- [Debug and Diagnostic Support](Reference/Debug-And-Diagnostic-Support.md)
- [Routing Hints](Reference/Hint-Syntax.md)
- [MaxBinlogCheck](Reference/MaxBinlogCheck.md)
- [MaxScale REST API](REST-API/API.md)
## Tutorials


@ -0,0 +1,453 @@
# REST API design document
This document describes the version 1 of the MaxScale REST API.
## Table of Contents
- [HTTP Headers](#http-headers)
- [Request Headers](#request-headers)
- [Response Headers](#response-headers)
- [Response Codes](#response-codes)
- [2xx Success](#2xx-success)
- [3xx Redirection](#3xx-redirection)
- [4xx Client Error](#4xx-client-error)
- [5xx Server Error](#5xx-server-error)
- [Resources](#resources)
- [Common Request Parameters](#common-request-parameters)
## HTTP Headers
### Request Headers
REST makes use of the HTTP protocol in its aim to provide a natural way to
understand the workings of an API. The following request headers are understood
by this API.
#### Accept-Charset
Acceptable character sets.
#### Authorization
Credentials for authentication.
#### Content-Type
All PUT and POST requests must use the `Content-Type: application/json` media
type and the request body must be a valid JSON representation of a resource. All
PATCH requests must use the `Content-Type: application/json-patch` media type
and the request body must be a valid JSON Patch document which is applied to the
resource. Currently, only _add_, _remove_, _replace_ and _test_ operations are
supported.
Read the [JSON Patch](https://tools.ietf.org/html/draft-ietf-appsawg-json-patch-08)
draft for more details on how to use it with PATCH.
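For illustration, the effect of the four supported operations on a plain JSON
object can be sketched in a few lines of Python. This is a simplified model
that handles only object members (not array indices or escaped path segments)
and is not MaxScale code:

```python
import copy

def apply_patch(doc, patch):
    """Apply a JSON Patch (add, remove, replace, test) to a document."""
    doc = copy.deepcopy(doc)
    for op in patch:
        # Walk to the parent of the target, e.g. "/parameters/rules/value".
        *parents, key = op["path"].lstrip("/").split("/")
        target = doc
        for p in parents:
            target = target[p]
        if op["op"] in ("add", "replace"):
            target[key] = op["value"]
        elif op["op"] == "remove":
            del target[key]
        elif op["op"] == "test":
            # A failed test aborts the whole patch.
            if target[key] != op["value"]:
                raise ValueError("test failed at " + op["path"])
    return doc

resource = {"parameters": {"rules": {"value": "/etc/maxscale-rules"}}}
patched = apply_patch(resource, [
    {"op": "replace", "path": "/parameters/rules/value", "value": "/etc/new-rules"}
])
```

Note that the original document is left untouched; the server would only
persist the patched version once every operation, including _test_, succeeds.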
#### Date
This header is required and should be in the RFC 1123 standard form, e.g. Mon,
18 Nov 2013 08:14:29 -0600. Please note that the date must be in English. The
API checks that the date is close to the current date and time.
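In Python, for example, the standard library produces and checks this form
without any locale dependency. The 300-second tolerance below is an assumed
value for illustration; the API's actual tolerance is not specified here:

```python
from email.utils import formatdate, parsedate_to_datetime
from datetime import datetime, timezone

# The current time in RFC 1123 form, e.g. "Mon, 18 Nov 2013 08:14:29 GMT".
# email.utils always uses English day and month names, regardless of locale.
date_header = formatdate(usegmt=True)

def date_is_fresh(header_value, tolerance=300):
    """Check that a Date header is close to the current time.
    The tolerance (in seconds) is an assumed illustrative value."""
    sent = parsedate_to_datetime(header_value)
    now = datetime.now(timezone.utc)
    return abs((now - sent).total_seconds()) <= tolerance
```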
#### Host
The address and port of the server.
#### If-Match
The request is performed only if the provided ETag value matches the one on the
server. This field should be used with PUT requests to prevent concurrent
updates to the same resource.
The value of this header must be a value from the `ETag` header retrieved from
the same resource at an earlier point in time.
#### If-Modified-Since
If the content has not changed the server responds with a 304 status code. If
the content has changed the server responds with a 200 status code and the
requested resource.
The value of this header must be a date value in the
["HTTP-date"](https://www.ietf.org/rfc/rfc2822.txt) format.
#### If-None-Match
If the content has not changed the server responds with a 304 status code. If
the content has changed the server responds with a 200 status code and the
requested resource.
The value of this header must be a value from the `ETag` header retrieved from
the same resource at an earlier point in time.
#### If-Unmodified-Since
The request is performed only if the requested resource has not been modified
since the provided date.
The value of this header must be a date value in the
["HTTP-date"](https://www.ietf.org/rfc/rfc2822.txt) format.
#### X-HTTP-Method-Override
Some clients only support GET and PUT requests. By providing the string value of
the intended method in the `X-HTTP-Method-Override` header, a client can perform
a POST, PATCH or DELETE request with the PUT method
(e.g. `X-HTTP-Method-Override: PATCH`).
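A server-side sketch of how the override could be resolved, as an illustration
of the rule above rather than the actual implementation:

```python
def effective_method(request_method, headers):
    """Resolve the method the server should act on. The override is honored
    only for PUT requests, the method that limited clients can send."""
    override = headers.get("X-HTTP-Method-Override", "").upper()
    if request_method == "PUT" and override in {"POST", "PATCH", "DELETE"}:
        return override
    return request_method
```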
_TODO: Add API version header?_
### Response Headers
#### Allow
All resources return the Allow header with the supported HTTP methods. For
example, the resource `/services` always returns the `Allow: GET, PATCH, PUT`
header.
#### Accept-Patch
All PATCH capable resources return the `Accept-Patch: application/json-patch`
header.
#### Date
Returns the RFC 1123 standard form date when the reply was sent. The date is in
English and it uses the server's local timezone.
#### ETag
An identifier for a specific version of a resource. The value of this header
changes whenever a resource is modified.
When the client sends the `If-Match` or `If-None-Match` header, the provided
value should be the value of the `ETag` header of an earlier GET.
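A minimal sketch of the conditional-update check follows. Hashing the
resource's JSON serialization is just one illustrative way to derive an ETag;
how MaxScale actually versions resources is not specified by this document:

```python
import hashlib
import json

def make_etag(resource):
    """Derive a version identifier from a resource's canonical JSON form."""
    body = json.dumps(resource, sort_keys=True).encode()
    return '"{}"'.format(hashlib.sha1(body).hexdigest())

def check_if_match(current_resource, if_match_value):
    """A PUT guarded by If-Match proceeds only while the client's stored
    ETag still matches the current version; otherwise the server would
    answer 412 Precondition Failed."""
    return make_etag(current_resource) == if_match_value
```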
#### Last-Modified
The date when the resource was last modified in "HTTP-date" format.
#### Location
If an out of date resource location is requested, an HTTP return code of 3xx with
the `Location` header is returned. The value of the header contains the new
location of the requested resource as a relative URI.
#### WWW-Authenticate
The requested authentication method. For example, `WWW-Authenticate: Basic`
would require basic HTTP authentication.
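For example, a client answering a `WWW-Authenticate: Basic` challenge builds
its `Authorization` header as follows (the standard HTTP Basic construction,
shown here in Python):

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header a client sends back after a
    'WWW-Authenticate: Basic' challenge."""
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    return "Basic " + token
```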
## Response Codes
Every HTTP response starts with a line with a return code which indicates the
outcome of the request. The API uses some of the standard HTTP values:
### 2xx Success
- 200 OK
- Successful HTTP requests, response has a body.
- 201 Created
- A new resource was created.
- 202 Accepted
- The request has been accepted for processing, but the processing has not
been completed.
- 204 No Content
- Successful HTTP requests, response has no body.
### 3xx Redirection
This class of status code indicates the client must take additional action to
complete the request.
- 301 Moved Permanently
- This and all future requests should be directed to the given URI.
- 302 Found
- The response to the request can be found under another URI using the same
method as in the original request.
- 303 See Other
- The response to the request can be found under another URI using a GET
method.
- 304 Not Modified
- Indicates that the resource has not been modified since the version
specified by the request headers If-Modified-Since or If-None-Match.
- 307 Temporary Redirect
- The request should be repeated with another URI but future requests should
use the original URI.
- 308 Permanent Redirect
- The request and all future requests should be repeated using another URI.
### 4xx Client Error
The 4xx class of status codes is intended for cases in which the client seems
to have erred. Except when responding to a HEAD request, the body of the
response contains a JSON representation of the error in the following format.
```
{
"error": "Method not supported",
"description": "The `/service` resource does not support POST."
}
```
The _error_ field contains a short error description and the _description_ field
contains a more detailed version of the error message.
- 400 Bad Request
- The server cannot or will not process the request due to client error.
- 401 Unauthorized
- Authentication is required. The response includes a WWW-Authenticate header.
- 403 Forbidden
- The request was a valid request, but the client does not have the necessary
permissions for the resource.
- 404 Not Found
- The requested resource could not be found.
- 405 Method Not Allowed
- A request method is not supported for the requested resource.
- 406 Not Acceptable
- The requested resource is capable of generating only content not acceptable
according to the Accept headers sent in the request.
- 409 Conflict
- Indicates that the request could not be processed because of conflict in the
request, such as an edit conflict between multiple simultaneous updates.
- 411 Length Required
- The request did not specify the length of its content, which is required by
the requested resource.
- 412 Precondition Failed
- The server does not meet one of the preconditions that the requester put on
the request.
- 413 Payload Too Large
- The request is larger than the server is willing or able to process.
- 414 URI Too Long
- The URI provided was too long for the server to process.
- 415 Unsupported Media Type
- The request entity has a media type which the server or resource does not
support.
- 422 Unprocessable Entity
- The request was well-formed but was unable to be followed due to semantic
errors.
- 423 Locked
- The resource that is being accessed is locked.
- 428 Precondition Required
- The origin server requires the request to be conditional. This error code is
returned when none of the `Modified-Since` or `Match` type headers are used.
- 431 Request Header Fields Too Large
- The server is unwilling to process the request because either an individual
header field, or all the header fields collectively, are too large.
### 5xx Server Error
The server failed to fulfill an apparently valid request.
Response status codes beginning with the digit "5" indicate cases in which the
server is aware that it has encountered an error or is otherwise incapable of
performing the request. Except when responding to a HEAD request, the server
includes an entity containing an explanation of the error situation.
```
{
"error": "Log rotation failed",
"description": "Failed to rotate log files: 13, Permission denied"
}
```
The _error_ field contains a short error description and the _description_ field
contains a more detailed version of the error message.
- 500 Internal Server Error
- A generic error message, given when an unexpected condition was encountered
and no more specific message is suitable.
- 501 Not Implemented
- The server either does not recognize the request method, or it lacks the
ability to fulfill the request.
- 502 Bad Gateway
- The server was acting as a gateway or proxy and received an invalid response
from the upstream server.
- 503 Service Unavailable
- The server is currently unavailable (because it is overloaded or down for
maintenance). Generally, this is a temporary state.
- 504 Gateway Timeout
- The server was acting as a gateway or proxy and did not receive a timely
response from the upstream server.
- 505 HTTP Version Not Supported
- The server does not support the HTTP protocol version used in the request.
- 506 Variant Also Negotiates
- Transparent content negotiation for the request results in a circular
reference.
- 507 Insufficient Storage
- The server is unable to store the representation needed to complete the
request.
- 508 Loop Detected
- The server detected an infinite loop while processing the request (sent in
lieu of 208 Already Reported).
- 510 Not Extended
- Further extensions to the request are required for the server to fulfil it.
### Response Codes Reserved for Future Use
The following response codes are not currently in use. Future versions of the
API could return them.
- 206 Partial Content
- The server is delivering only part of the resource (byte serving) due to a
range header sent by the client.
- 300 Multiple Choices
- Indicates multiple options for the resource from which the client may choose
(via agent-driven content negotiation).
- 407 Proxy Authentication Required
- The client must first authenticate itself with the proxy.
- 408 Request Timeout
- The server timed out waiting for the request. According to HTTP
specifications: "The client did not produce a request within the time that
the server was prepared to wait. The client MAY repeat the request without
modifications at any later time."
- 410 Gone
- Indicates that the resource requested is no longer available and will not be
available again.
- 416 Range Not Satisfiable
- The client has asked for a portion of the file (byte serving), but the
server cannot supply that portion.
- 417 Expectation Failed
- The server cannot meet the requirements of the Expect request-header field.
- 421 Misdirected Request
- The request was directed at a server that is not able to produce a response.
- 424 Failed Dependency
- The request failed due to failure of a previous request.
- 426 Upgrade Required
- The client should switch to a different protocol such as TLS/1.0, given in
the Upgrade header field.
- 429 Too Many Requests
- The user has sent too many requests in a given amount of time. Intended for
use with rate-limiting schemes.
## Resources
The MaxScale REST API provides the following resources.
- [/maxscale](Resources-MaxScale.md)
- [/services](Resources-Service.md)
- [/servers](Resources-Server.md)
- [/filters](Resources-Filter.md)
- [/monitors](Resources-Monitor.md)
- [/sessions](Resources-Session.md)
- [/users](Resources-User.md)
## Common Request Parameters
Most of the resources that support GET also support the following
parameters. See the resource documentation for a list of supported request
parameters.
- `fields`
- A list of fields to return.
This allows the returned object to be filtered so that only needed
parts are returned. The value of this parameter is a comma separated
list of fields to return.
For example, the parameter `?fields=id,name` would return an object that
contains only the _id_ and _name_ fields.
- `range`
- Return a subset of the object array.
The value of this parameter is the range of objects to return, given as an
inclusive range separated by a hyphen. If the size of the array is less than
the end of the range, only the objects between the requested start of the
range and the actual end of the array are returned.
For example, the parameter `?range=10-20` would return objects 10
through 20 from the object array if the actual size of the original
array is greater than or equal to 20.
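The behavior of both parameters can be sketched as follows (assuming zero-based
indices for `range`, which the text above does not pin down):

```python
def apply_fields(obj, fields_param):
    """?fields=id,name -> keep only the listed top-level fields."""
    wanted = fields_param.split(",")
    return {k: v for k, v in obj.items() if k in wanted}

def apply_range(items, range_param):
    """?range=10-20 -> inclusive slice of the object array; if the range
    runs past the end, return what exists up to the actual end."""
    start, end = (int(n) for n in range_param.split("-"))
    return items[start:end + 1]
```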


@ -0,0 +1,151 @@
# Filter Resource
A filter resource represents an instance of a filter inside MaxScale. Multiple
services can use the same filter and a single service can use multiple filters.
## Resource Operations
### Get a filter
Get a single filter. The _:name_ in the URI must be a valid filter name with all
whitespace replaced with hyphens. The filter names are case-insensitive.
```
GET /filters/:name
```
#### Response
```
Status: 200 OK
{
"name": "Query Logging Filter",
"module": "qlafilter",
"parameters": {
"filebase": {
"value": "/var/log/maxscale/qla/log.",
"configurable": false
},
"match": {
"value": "select.*from.*t1",
"configurable": true
}
},
"services": [
"/services/my-service",
"/services/my-second-service"
]
}
```
#### Supported Request Parameter
- `fields`
### Get all filters
Get all filters.
```
GET /filters
```
#### Response
```
Status: 200 OK
[
{
"name": "Query Logging Filter",
"module": "qlafilter",
"parameters": {
"filebase": {
"value": "/var/log/maxscale/qla/log.",
"configurable": false
},
"match": {
"value": "select.*from.*t1",
"configurable": true
}
},
"services": [
"/services/my-service",
"/services/my-second-service"
]
},
{
"name": "DBFW Filter",
"module": "dbfwfilter",
"parameters": {
"rules": {
"value": "/etc/maxscale-rules",
"configurable": false
}
},
"services": [
"/services/my-second-service"
]
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Update a filter
**Note**: The update mechanisms described here are provisional and most likely
will change in the future. This description is only for design purposes and
does not yet work.
Partially update a filter. The _:name_ in the URI must map to a filter name
and the request body must be a valid JSON Patch document which is applied to the
resource.
```
PATCH /filters/:name
```
#### Modifiable Fields
|Field |Type |Description |
|------------|-------|---------------------------------|
|parameters |object |Module specific filter parameters|
```
[
{ "op": "replace", "path": "/parameters/rules/value", "value": "/etc/new-rules" },
{ "op": "add", "path": "/parameters/action/value", "value": "allow" }
]
```
#### Response
Response contains the modified resource.
```
Status: 200 OK
{
"name": "DBFW Filter",
"module": "dbfwfilter",
"parameters": {
"rules": {
"value": "/etc/new-rules",
"configurable": false
},
"action": {
"value": "allow",
"configurable": true
}
},
"services": [
"/services/my-second-service"
]
}
```


@ -0,0 +1,216 @@
# MaxScale Resource
The MaxScale resource represents a MaxScale instance. It is the core on top of
which the modules are built.
## Resource Operations
### Get global information
Retrieve global information about a MaxScale instance. This includes various
file locations, configuration options and version information.
```
GET /maxscale
```
#### Response
```
Status: 200 OK
{
"config": "/etc/maxscale.cnf",
"cachedir": "/var/cache/maxscale/",
"datadir": "/var/lib/maxscale/",
"libdir": "/usr/lib64/maxscale/",
"piddir": "/var/run/maxscale/",
"execdir": "/usr/bin/",
"languagedir": "/var/lib/maxscale/",
"user": "maxscale",
"threads": 4,
"version": "2.1.0",
"commit": "12e7f17eb361e353f7ac413b8b4274badb41b559",
"started": "Wed, 31 Aug 2016 23:29:26 +0300"
}
```
#### Supported Request Parameter
- `fields`
### Get thread information
Get detailed information and statistics about the threads.
```
GET /maxscale/threads
```
#### Response
```
Status: 200 OK
{
"load_average": {
"historic": 1.05,
"current": 1.00,
"1min": 0.00,
"5min": 0.00,
"15min": 0.00
},
"threads": [
{
"id": 0,
"state": "processing",
"file_descriptors": 1,
"event": [
"in",
"out"
],
"run_time": 300
},
{
"id": 1,
"state": "polling",
"file_descriptors": 0,
"event": [],
"run_time": 0
}
]
}
```
#### Supported Request Parameter
- `fields`
### Get logging information
Get information about the current state of logging, enabled log files and the
location where the log files are stored.
```
GET /maxscale/logs
```
#### Response
```
Status: 200 OK
{
"logdir": "/var/log/maxscale/",
"maxlog": true,
"syslog": false,
"log_levels": {
"error": true,
"warning": true,
"notice": true,
"info": false,
"debug": false
},
"log_augmentation": {
"function": true
},
"log_throttling": {
"limit": 8,
"window": 2000,
"suppression": 10000
},
"last_flushed": "Wed, 31 Aug 2016 23:29:26 +0300"
}
```
#### Supported Request Parameter
- `fields`
### Flush and rotate log files
Flushes any pending messages to disk and reopens the log files. The body of the
message is ignored.
```
POST /maxscale/logs/flush
```
#### Response
```
Status: 204 No Content
```
### Get task schedule
Retrieve all pending tasks that are queued for execution.
```
GET /maxscale/tasks
```
#### Response
```
Status: 200 OK
[
{
"name": "Load Average",
"type": "repeated",
"frequency": 10,
"next_due": "Fri Sep 9 14:12:37 2016"
}
]
```
#### Supported Request Parameter
- `fields`
### Get loaded modules
Retrieve information about all loaded modules. This includes version, API and
maturity information.
```
GET /maxscale/modules
```
#### Response
```
Status: 200 OK
[
{
"name": "MySQLBackend",
"type": "Protocol",
"version": "V2.0.0",
"api_version": "1.1.0",
"maturity": "GA"
},
{
"name": "qlafilter",
"type": "Filter",
"version": "V1.1.1",
"api_version": "1.1.0",
"maturity": "GA"
},
{
"name": "readwritesplit",
"type": "Router",
"version": "V1.1.0",
"api_version": "1.0.0",
"maturity": "GA"
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
TODO: Add epoll statistics and rest of the supported methods.


@ -0,0 +1,176 @@
# Monitor Resource
A monitor resource represents a monitor inside MaxScale that monitors one or
more servers.
## Resource Operations
### Get a monitor
Get a single monitor. The _:name_ in the URI must be a valid monitor name with
all whitespace replaced with hyphens. The monitor names are case-insensitive.
```
GET /monitors/:name
```
#### Response
```
Status: 200 OK
{
"name": "MySQL Monitor",
"module": "mysqlmon",
"state": "started",
"monitor_interval": 2500,
"connect_timeout": 5,
"read_timeout": 2,
"write_timeout": 3,
"servers": [
"/servers/db-serv-1",
"/servers/db-serv-2",
"/servers/db-serv-3"
]
}
```
#### Supported Request Parameter
- `fields`
### Get all monitors
Get all monitors.
```
GET /monitors
```
#### Response
```
Status: 200 OK
[
{
"name": "MySQL Monitor",
"module": "mysqlmon",
"state": "started",
"monitor_interval": 2500,
"connect_timeout": 5,
"read_timeout": 2,
"write_timeout": 3,
"servers": [
"/servers/db-serv-1",
"/servers/db-serv-2",
"/servers/db-serv-3"
]
},
{
"name": "Galera Monitor",
"module": "galeramon",
"state": "started",
"monitor_interval": 5000,
"connect_timeout": 10,
"read_timeout": 5,
"write_timeout": 5,
"servers": [
"/servers/db-galera-1",
"/servers/db-galera-2",
"/servers/db-galera-3"
]
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Stop a monitor
Stops a started monitor.
```
PUT /monitors/:name/stop
```
#### Response
```
Status: 204 No Content
```
### Start a monitor
Starts a stopped monitor.
```
PUT /monitors/:name/start
```
#### Response
```
Status: 204 No Content
```
### Update a monitor
**Note**: The update mechanisms described here are provisional and most likely
will change in the future. This description is only for design purposes and
does not yet work.
Partially update a monitor. The _:name_ in the URI must map to a monitor name
and the request body must be a valid JSON Patch document which is applied to the
resource.
```
PATCH /monitors/:name
```
#### Modifiable Fields
The following values can be modified with the PATCH method.
|Field |Type |Description |
|-----------------|------------|---------------------------------------------------|
|servers |string array|Servers monitored by this monitor |
|monitor_interval |number |Monitoring interval in milliseconds |
|connect_timeout |number |Connection timeout in seconds |
|read_timeout |number |Read timeout in seconds |
|write_timeout |number |Write timeout in seconds |
```
[
{ "op": "remove", "path": "/servers/0" },
{ "op": "replace", "path": "/monitor_interval", "value": 2000 },
{ "op": "replace", "path": "/connect_timeout", "value": 2 },
{ "op": "replace", "path": "/read_timeout", "value": 2 },
{ "op": "replace", "path": "/write_timeout", "value": 2 }
]
```
#### Response
Response contains the modified resource.
```
Status: 200 OK
{
"name": "MySQL Monitor",
"module": "mysqlmon",
"servers": [
"/servers/db-serv-2",
"/servers/db-serv-3"
],
"state": "started",
"monitor_interval": 2000,
"connect_timeout": 2,
"read_timeout": 2,
"write_timeout": 2
}
```


@ -0,0 +1,207 @@
# Server Resource
A server resource represents a backend database server.
## Resource Operations
### Get a server
Get a single server. The _:name_ in the URI must be a valid server name with all
whitespace replaced with hyphens. The server names are case-insensitive.
```
GET /servers/:name
```
#### Response
```
Status: 200 OK
{
"name": "db-serv-1",
"address": "192.168.121.58",
"port": 3306,
"protocol": "MySQLBackend",
"status": [
"master",
"running"
],
"parameters": {
"report_weight": 10,
"app_weight": 2
}
}
```
**Note**: The _parameters_ field contains all custom parameters for
servers, including the server weighting parameters.
#### Supported Request Parameter
- `fields`
### Get all servers
```
GET /servers
```
#### Response
```
Status: 200 OK
[
{
"name": "db-serv-1",
"address": "192.168.121.58",
"port": 3306,
"protocol": "MySQLBackend",
"status": [
"master",
"running"
],
"parameters": {
"report_weight": 10,
"app_weight": 2
}
},
{
"name": "db-serv-2",
"address": "192.168.121.175",
"port": 3306,
"status": [
"slave",
"running"
],
"protocol": "MySQLBackend",
"parameters": {
"app_weight": 6
}
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Update a server
**Note**: The update mechanisms described here are provisional and most likely
will change in the future. This description is only for design purposes and
does not yet work.
Partially update a server. The _:name_ in the URI must map to a server name with
all whitespace replaced with hyphens and the request body must be a valid JSON
Patch document which is applied to the resource.
```
PATCH /servers/:name
```
#### Modifiable Fields
|Field |Type |Description |
|-----------|------------|-----------------------------------------------------------------------------|
|address |string |Server address |
|port |number |Server port |
|parameters |object |Server extra parameters |
|state |string array|Server state, array of `master`, `slave`, `synced`, `running` or `maintenance`. An empty array is interpreted as a server that is down.|
```
[
{ "op": "replace", "path": "/address", "value": "192.168.0.100" },
{ "op": "replace", "path": "/port", "value": 4006 },
{ "op": "add", "path": "/state/0", "value": "maintenance" },
{ "op": "replace", "path": "/parameters/report_weight", "value": 1 }
]
```
#### Response
Response contains the modified resource.
```
Status: 200 OK
{
"name": "db-serv-1",
"protocol": "MySQLBackend",
"address": "192.168.0.100",
"port": 4006,
"state": [
"maintenance",
"running"
],
"parameters": {
"report_weight": 1,
"app_weight": 2
}
}
```
### Get all connections to a server
Get all connections that are connected to a server.
```
GET /servers/:name/connections
```
#### Response
```
Status: 200 OK
[
{
"state": "DCB in the polling loop",
"role": "Backend Request Handler",
"server": "/servers/db-serv-01",
"service": "/services/my-service",
"statistics": {
"reads": 2197,
"writes": 1562,
"buffered_writes": 0,
"high_water_events": 0,
"low_water_events": 0
}
},
{
"state": "DCB in the polling loop",
"role": "Backend Request Handler",
"server": "/servers/db-serv-01",
"service": "/services/my-second-service",
"statistics": {
"reads": 0,
"writes": 0,
"buffered_writes": 0,
"high_water_events": 0,
"low_water_events": 0
}
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Close all connections to a server
Close all connections to a particular server. This will forcefully close all
backend connections.
```
DELETE /servers/:name/connections
```
#### Response
```
Status: 204 No Content
```


@ -0,0 +1,272 @@
# Service Resource
A service resource represents a service inside MaxScale. A service is a
collection of network listeners, filters, a router and a set of backend servers.
## Resource Operations
### Get a service
Get a single service. The _:name_ in the URI must be a valid service name with
all whitespace replaced with hyphens. The service names are case-insensitive.
```
GET /services/:name
```
#### Response
```
Status: 200 OK
{
"name": "My Service",
"router": "readwritesplit",
"router_options": {
"disable_sescmd_history": "true"
},
"state": "started",
"total_connections": 10,
"current_connections": 2,
"started": "2016-08-29T12:52:31+03:00",
"filters": [
"/filters/Query-Logging-Filter"
],
"servers": [
"/servers/db-serv-1",
"/servers/db-serv-2",
"/servers/db-serv-3"
]
}
```
#### Supported Request Parameter
- `fields`
### Get all services
Get all services.
```
GET /services
```
#### Response
```
Status: 200 OK
[
{
"name": "My Service",
"router": "readwritesplit",
"router_options": {
"disable_sescmd_history": "true"
},
"state": "started",
"total_connections": 10,
"current_connections": 2,
"started": "2016-08-29T12:52:31+03:00",
"filters": [
"/filters/Query-Logging-Filter"
],
"servers": [
"/servers/db-serv-1",
"/servers/db-serv-2",
"/servers/db-serv-3"
]
},
{
"name": "My Second Service",
"router": "readconnroute",
"router_options": {
"type": "master"
},
"state": "started",
"total_connections": 10,
"current_connections": 2,
"started": "2016-08-29T12:52:31+03:00",
"servers": [
"/servers/db-serv-1",
"/servers/db-serv-2"
]
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Get service listeners
Get the listeners of a service. The _:name_ in the URI must be a valid service
name with all whitespace replaced with hyphens. The service names are
case-insensitive.
```
GET /services/:name/listeners
```
#### Response
```
Status: 200 OK
[
{
"name": "My Listener",
"protocol": "MySQLClient",
"address": "0.0.0.0",
"port": 4006
},
{
"name": "My SSL Listener",
"protocol": "MySQLClient",
"address": "127.0.0.1",
"port": 4006,
"ssl": "required",
"ssl_cert": "/home/markusjm/newcerts/server-cert.pem",
"ssl_key": "/home/markusjm/newcerts/server-key.pem",
"ssl_ca_cert": "/home/markusjm/newcerts/ca.pem"
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Update a service
**Note**: The update mechanisms described here are provisional and most likely
will change in the future. This description is only for design purposes and
does not yet work.
Partially update a service. The _:name_ in the URI must map to a service name
and the request body must be a valid JSON Patch document which is applied to the
resource.
```
PATCH /services/:name
```
#### Modifiable Fields
|Field |Type |Description |
|--------------|------------|---------------------------------------------------|
|servers |string array|Servers used by this service, must be relative links to existing server resources|
|router_options|object |Router specific options|
|filters |string array|Service filters, configured in the same order they are declared in the array (`filters[0]` => first filter, `filters[1]` => second filter)|
|user |string |The username for the service user|
|password |string |The password for the service user|
|root_user |boolean |Allow root user to connect via this service|
|version_string|string |Custom version string given to connecting clients|
|weightby |string |Name of a server weighting parameter which is used for connection weighting|
|connection_timeout|number |Client idle timeout in seconds|
|max_connection|number |Maximum number of allowed connections|
|strip_db_esc|boolean |Strip escape characters from default database name|
```
[
{ "op": "replace", "path": "/servers", "value": ["/servers/db-serv-2","/servers/db-serv-3"] },
{ "op": "add", "path": "/router_options/master_failover_mode", "value": "fail_on_write" },
{ "op": "remove", "path": "/filters" }
]
```
#### Response
Response contains the modified resource.
```
Status: 200 OK
{
"name": "My Service",
"router": "readwritesplit",
"router_options": {
"disable_sescmd_history": "false",
"master_failover_mode": "fail_on_write"
},
"state": "started",
"total_connections": 10,
"current_connections": 2,
"started": "2016-08-29T12:52:31+03:00",
"servers": [
"/servers/db-serv-2",
"/servers/db-serv-3"
]
}
```
### Stop a service
Stops a started service.
```
PUT /services/:name/stop
```
#### Response
```
Status: 204 No Content
```
### Start a service
Starts a stopped service.
```
PUT /services/:name/start
```
#### Response
```
Status: 204 No Content
```
### Get all sessions for a service
Get all sessions for a particular service.
```
GET /services/:name/sessions
```
#### Response
Relative links to all sessions for this service.
```
Status: 200 OK
[
"/sessions/1",
"/sessions/2"
]
```
#### Supported Request Parameter
- `range`
### Close all sessions for a service
Close all sessions for a particular service. This will forcefully close all
client connections and any backend connections they have made.
```
DELETE /services/:name/sessions
```
#### Response
```
Status: 204 No Content
```


@ -0,0 +1,138 @@
# Session Resource
A session consists of a client connection, any number of related backend
connections, a router module session and possibly filter module sessions. Each
session is created on a service and a service can have multiple sessions.
## Resource Operations
### Get a session
Get a single session. _:id_ must be a valid session ID.
```
GET /sessions/:id
```
#### Response
```
Status: 200 OK
{
"id": 1,
"state": "Session ready for routing",
"user": "jdoe",
"address": "192.168.0.200",
"service": "/services/my-service",
"connected": "Wed Aug 31 03:03:12 2016",
"idle": 260
}
```
#### Supported Request Parameter
- `fields`
### Get all sessions
Get all sessions.
```
GET /sessions
```
#### Response
```
Status: 200 OK
[
{
"id": 1,
"state": "Session ready for routing",
"user": "jdoe",
"address": "192.168.0.200",
"service": "/services/My-Service",
"connected": "Wed Aug 31 03:03:12 2016",
"idle": 260
},
{
"id": 2,
"state": "Session ready for routing",
"user": "dba",
"address": "192.168.0.201",
"service": "/services/My-Service",
"connected": "Wed Aug 31 03:10:00 2016",
"idle": 1
}
]
```
#### Supported Request Parameter
- `fields`
- `range`
### Get all connections created by a session
Get all backend connections created by a session. _:id_ must be a valid session ID.
```
GET /sessions/:id/connections
```
#### Response
```
Status: 200 OK
[
{
"state": "DCB in the polling loop",
"role": "Backend Request Handler",
"server": "/servers/db-serv-01",
"service": "/services/my-service",
"statistics": {
"reads": 2197,
"writes": 1562,
"buffered_writes": 0,
"high_water_events": 0,
"low_water_events": 0
}
},
{
"state": "DCB in the polling loop",
"role": "Backend Request Handler",
"server": "/servers/db-serv-02",
"service": "/services/my-service",
"statistics": {
"reads": 0,
"writes": 0,
"buffered_writes": 0,
"high_water_events": 0,
"low_water_events": 0
}
}
]
```
#### Supported Request Parameters
- `fields`
- `range`
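The per-connection `statistics` object lends itself to simple aggregation. A sketch over the example payload above (the helper is ours, not part of the API):

```python
# The two backend connections from the GET /sessions/:id/connections example.
connections = [
    {"server": "/servers/db-serv-01", "statistics": {"reads": 2197, "writes": 1562}},
    {"server": "/servers/db-serv-02", "statistics": {"reads": 0, "writes": 0}},
]

def total_stat(connections, stat):
    """Aggregate one statistic over all backend connections of a session."""
    return sum(c["statistics"][stat] for c in connections)

print(total_stat(connections, "reads"))   # 2197
print(total_stat(connections, "writes"))  # 1562
```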
### Close a session
Close a session. This will forcefully close the client connection and any
backend connections.
```
DELETE /sessions/:id
```
#### Response
```
Status: 204 No Content
```


@ -0,0 +1,81 @@
# Admin User Resource
Admin users represent administrative users that are able to query and change
MaxScale's configuration.
## Resource Operations
### Get all users
Get all administrative users.
```
GET /users
```
#### Response
```
Status: 200 OK
[
{
"name": "jdoe"
},
{
"name": "dba"
},
{
"name": "admin"
}
]
```

#### Supported Request Parameters
- `fields`
- `range`
### Create a user
Create a new administrative user.
```
PUT /users
```
### Modifiable Fields
All of the following fields need to be defined in the request body.
|Field |Type |Description |
|---------|------|-------------------------|
|name |string|Username, consisting of alphanumeric characters|
|password |string|Password for the new user|
```
{
"name": "foo",
"password": "bar"
}
```
#### Response
```
Status: 204 No Content
```
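Since both fields are mandatory and the user name must be alphanumeric, a client may want to validate the body before issuing the PUT. A hedged sketch (the validation helper is ours, not something MaxScale provides):

```python
import json

def make_user_payload(name, password):
    """Build and sanity-check the request body for PUT /users.

    Mirrors the documented constraints client-side: an alphanumeric
    user name and a non-empty password.
    """
    if not name.isalnum():
        raise ValueError("user name must consist of alphanumeric characters")
    if not password:
        raise ValueError("password must not be empty")
    return json.dumps({"name": name, "password": password})

print(make_user_payload("foo", "bar"))
```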
### Delete a user
Delete a user. The _:name_ part of the URI must be a valid user name. The user
names are case-insensitive.
```
DELETE /users/:name
```
#### Response
```
Status: 204 No Content
```
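Because user names are case-insensitive, `DELETE /users/DBA` and `DELETE /users/dba` address the same user. A sketch of that matching rule against the user list shown earlier:

```python
def find_user(users, name):
    """Case-insensitive user lookup, matching the DELETE /users/:name semantics."""
    target = name.lower()
    return next((u for u in users if u["name"].lower() == target), None)

users = [{"name": "jdoe"}, {"name": "dba"}, {"name": "admin"}]
print(find_user(users, "DBA"))     # matches the "dba" user despite the case difference
print(find_user(users, "nobody"))  # no such user
```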


@ -29,7 +29,42 @@
MXS_BEGIN_DECLS
int atomic_add(int *variable, int value);
/**
* Implementation of an atomic add operation for the GCC environment, or the
* X86 processor. If we are working within GNU C then we can use the GCC
* atomic add built in function, which is portable across platforms that
* implement GCC. Otherwise, this function currently supports only X86
* architecture (without further development).
*
* Adds a value to the contents of a location pointed to by the first parameter.
* The add operation is atomic and the return value is the value stored in the
* location prior to the operation. The number that is added may be signed,
* therefore atomic_subtract is merely an atomic add with a negative value.
*
* @param variable Pointer to the variable to add to
* @param value Value to be added
* @return The value of variable before the add occurred
*/
int atomic_add(int *variable, int value);
/**
* @brief Impose a full memory barrier
*
* A full memory barrier guarantees that all store and load operations complete
* before the function is called.
*
* Currently, only the GNUC __sync_synchronize() is used. C11 introduces
* standard functions for atomic memory operations and should be taken into use.
*
* @see https://www.kernel.org/doc/Documentation/memory-barriers.txt
*/
static inline void atomic_synchronize()
{
#ifdef __GNUC__
__sync_synchronize(); /* Memory barrier. */
#else
#error "No GNUC atomics available."
#endif
}
MXS_END_DECLS


@ -125,7 +125,6 @@ typedef struct query_classifier
char** (*qc_get_table_names)(GWBUF* stmt, int* tblsize, bool fullnames);
char* (*qc_get_canonical)(GWBUF* stmt);
bool (*qc_query_has_clause)(GWBUF* stmt);
char* (*qc_get_affected_fields)(GWBUF* stmt);
char** (*qc_get_database_names)(GWBUF* stmt, int* size);
char* (*qc_get_prepare_name)(GWBUF* stmt);
qc_query_op_t (*qc_get_prepare_operation)(GWBUF* stmt);
@ -225,17 +224,6 @@ void qc_thread_end(void);
*/
qc_parse_result_t qc_parse(GWBUF* stmt);
/**
* Returns the fields the statement affects, as a string of names separated
* by spaces. Note that the fields do not contain any table information.
*
* @param stmt A buffer containing a COM_QUERY packet.
*
* @return A string containing the fields or NULL if a memory allocation
* failure occurs. The string must be freed by the caller.
*/
char* qc_get_affected_fields(GWBUF* stmt);
/**
* Returns information about affected fields.
*


@ -97,8 +97,10 @@ typedef struct
typedef struct server_ref_t
{
struct server_ref_t *next;
SERVER* server;
struct server_ref_t *next; /**< Next server reference */
SERVER* server; /**< The actual server */
int weight; /**< Weight of this server */
int connections; /**< Number of connections created through this reference */
} SERVER_REF;
#define SERVICE_MAX_RETRY_INTERVAL 3600 /*< The maximum interval between service start retries */


@ -45,11 +45,6 @@ bool qc_is_drop_table_query(GWBUF* querybuf)
return false;
}
char* qc_get_affected_fields(GWBUF* buf)
{
return NULL;
}
bool qc_query_has_clause(GWBUF* buf)
{
return false;
@ -66,6 +61,22 @@ qc_query_op_t qc_get_operation(GWBUF* querybuf)
return QUERY_OP_UNDEFINED;
}
char* qc_sqlite_get_prepare_name(GWBUF* query)
{
return NULL;
}
qc_query_op_t qc_sqlite_get_prepare_operation(GWBUF* query)
{
return QUERY_OP_UNDEFINED;
}
void qc_sqlite_get_field_info(GWBUF* query, const QC_FIELD_INFO** infos, size_t* n_infos)
{
*infos = NULL;
*n_infos = 0;
}
bool qc_init(const char* args)
{
return true;
@ -125,8 +136,10 @@ extern "C"
qc_get_table_names,
NULL,
qc_query_has_clause,
qc_get_affected_fields,
qc_get_database_names,
qc_get_prepare_name,
qc_get_prepare_operation,
qc_get_field_info,
};
QUERY_CLASSIFIER* GetModuleObject()


@ -76,6 +76,9 @@ typedef struct parsing_info_st
void* pi_handle; /*< parsing info object pointer */
char* pi_query_plain_str; /*< query as plain string */
void (*pi_done_fp)(void *); /*< clean-up function for parsing info */
QC_FIELD_INFO* field_infos;
size_t field_infos_len;
size_t field_infos_capacity;
#if defined(SS_DEBUG)
skygw_chk_t pi_chk_tail;
#endif
@ -1029,6 +1032,35 @@ char* qc_get_stmtname(GWBUF* buf)
}
#endif
/**
* Get the parsing info structure from a GWBUF
*
* @param querybuf A GWBUF
*
* @return The parsing info object, or NULL
*/
parsing_info_t* get_pinfo(GWBUF* querybuf)
{
parsing_info_t *pi = NULL;
if ((querybuf != NULL) && GWBUF_IS_PARSED(querybuf))
{
pi = (parsing_info_t *) gwbuf_get_buffer_object_data(querybuf, GWBUF_PARSING_INFO);
}
return pi;
}
LEX* get_lex(parsing_info_t* pi)
{
MYSQL* mysql = (MYSQL *) pi->pi_handle;
ss_dassert(mysql);
THD* thd = (THD *) mysql->thd;
ss_dassert(thd);
return thd->lex;
}
/**
* Get the parse tree from parsed querybuf.
* @param querybuf The parsed GWBUF
@ -1038,31 +1070,19 @@ char* qc_get_stmtname(GWBUF* buf)
*/
LEX* get_lex(GWBUF* querybuf)
{
LEX* lex = NULL;
parsing_info_t* pi = get_pinfo(querybuf);
parsing_info_t* pi;
MYSQL* mysql;
THD* thd;
if (querybuf == NULL || !GWBUF_IS_PARSED(querybuf))
if (pi)
{
return NULL;
MYSQL* mysql = (MYSQL *) pi->pi_handle;
ss_dassert(mysql);
THD* thd = (THD *) mysql->thd;
ss_dassert(thd);
lex = thd->lex;
}
pi = (parsing_info_t *) gwbuf_get_buffer_object_data(querybuf, GWBUF_PARSING_INFO);
if (pi == NULL)
{
return NULL;
}
if ((mysql = (MYSQL *) pi->pi_handle) == NULL ||
(thd = (THD *) mysql->thd) == NULL)
{
ss_dassert(mysql != NULL && thd != NULL);
return NULL;
}
return thd->lex;
return lex;
}
/**
@ -1284,315 +1304,6 @@ bool qc_is_drop_table_query(GWBUF* querybuf)
return answer;
}
inline void add_str(char** buf, int* buflen, int* bufsize, const char* str)
{
int isize = strlen(str) + 1;
if (*buf == NULL || isize + *buflen >= *bufsize)
{
*bufsize = (*bufsize) * 2 + isize;
char *tmp = (char*) realloc(*buf, (*bufsize) * sizeof (char));
if (tmp == NULL)
{
MXS_ERROR("Error: memory reallocation failed.");
free(*buf);
*buf = NULL;
*bufsize = 0;
}
*buf = tmp;
}
if (*buflen > 0)
{
if (*buf)
{
strcat(*buf, " ");
}
}
if (*buf)
{
strcat(*buf, str);
}
*buflen += isize;
}
typedef enum collect_source
{
COLLECT_SELECT,
COLLECT_WHERE,
COLLECT_HAVING,
COLLECT_GROUP_BY,
} collect_source_t;
static void collect_name(Item* item, char** bufp, int* buflenp, int* bufsizep, List<Item>* excludep)
{
const char* full_name = item->full_name();
const char* name = strrchr(full_name, '.');
if (!name)
{
// No dot found.
name = full_name;
}
else
{
// Dot found, advance beyond it.
++name;
}
bool exclude = false;
if (excludep)
{
List_iterator<Item> ilist(*excludep);
Item* exclude_item = (Item*) ilist.next();
for (; !exclude && (exclude_item != NULL); exclude_item = (Item*) ilist.next())
{
if (exclude_item->name && (strcasecmp(name, exclude_item->name) == 0))
{
exclude = true;
}
}
}
if (!exclude)
{
add_str(bufp, buflenp, bufsizep, name);
}
}
static void collect_affected_fields(collect_source_t source,
Item* item, char** bufp, int* buflenp, int* bufsizep,
List<Item>* excludep)
{
switch (item->type())
{
case Item::COND_ITEM:
{
Item_cond* cond_item = static_cast<Item_cond*>(item);
List_iterator<Item> ilist(*cond_item->argument_list());
item = (Item*) ilist.next();
for (; item != NULL; item = (Item*) ilist.next())
{
collect_affected_fields(source, item, bufp, buflenp, bufsizep, excludep);
}
}
break;
case Item::FIELD_ITEM:
collect_name(item, bufp, buflenp, bufsizep, excludep);
break;
case Item::REF_ITEM:
{
if (source != COLLECT_SELECT)
{
Item_ref* ref_item = static_cast<Item_ref*>(item);
collect_name(item, bufp, buflenp, bufsizep, excludep);
size_t n_items = ref_item->cols();
for (size_t i = 0; i < n_items; ++i)
{
Item* reffed_item = ref_item->element_index(i);
if (reffed_item != ref_item)
{
collect_affected_fields(source,
ref_item->element_index(i), bufp, buflenp, bufsizep,
excludep);
}
}
}
}
break;
case Item::ROW_ITEM:
{
Item_row* row_item = static_cast<Item_row*>(item);
size_t n_items = row_item->cols();
for (size_t i = 0; i < n_items; ++i)
{
collect_affected_fields(source, row_item->element_index(i), bufp, buflenp, bufsizep,
excludep);
}
}
break;
case Item::FUNC_ITEM:
case Item::SUM_FUNC_ITEM:
{
Item_func* func_item = static_cast<Item_func*>(item);
Item** items = func_item->arguments();
size_t n_items = func_item->argument_count();
for (size_t i = 0; i < n_items; ++i)
{
collect_affected_fields(source, items[i], bufp, buflenp, bufsizep, excludep);
}
}
break;
case Item::SUBSELECT_ITEM:
{
Item_subselect* subselect_item = static_cast<Item_subselect*>(item);
switch (subselect_item->substype())
{
case Item_subselect::IN_SUBS:
case Item_subselect::ALL_SUBS:
case Item_subselect::ANY_SUBS:
{
Item_in_subselect* in_subselect_item = static_cast<Item_in_subselect*>(item);
#if (((MYSQL_VERSION_MAJOR == 5) &&\
((MYSQL_VERSION_MINOR > 5) ||\
((MYSQL_VERSION_MINOR == 5) && (MYSQL_VERSION_PATCH >= 48))\
)\
) ||\
(MYSQL_VERSION_MAJOR >= 10)\
)
if (in_subselect_item->left_expr_orig)
{
collect_affected_fields(source,
in_subselect_item->left_expr_orig, bufp, buflenp, bufsizep,
excludep);
}
#else
#pragma message "Figure out what to do with versions < 5.5.48."
#endif
// TODO: Anything else that needs to be looked into?
}
break;
case Item_subselect::EXISTS_SUBS:
case Item_subselect::SINGLEROW_SUBS:
// TODO: Handle these explicitly as well.
break;
case Item_subselect::UNKNOWN_SUBS:
default:
MXS_ERROR("Unknown subselect type: %d", subselect_item->substype());
break;
}
}
break;
default:
break;
}
}
char* qc_get_affected_fields(GWBUF* buf)
{
LEX* lex;
int buffsz = 0, bufflen = 0;
char* where = NULL;
Item* item;
Item::Type itype;
if (!buf)
{
return NULL;
}
if (!ensure_query_is_parsed(buf))
{
return NULL;
}
if ((lex = get_lex(buf)) == NULL)
{
return NULL;
}
lex->current_select = lex->all_selects_list;
if ((where = (char*) malloc(sizeof (char)*1)) == NULL)
{
MXS_ERROR("Memory allocation failed.");
return NULL;
}
*where = '\0';
while (lex->current_select)
{
List_iterator<Item> ilist(lex->current_select->item_list);
item = (Item*) ilist.next();
for (; item != NULL; item = (Item*) ilist.next())
{
collect_affected_fields(COLLECT_SELECT, item, &where, &buffsz, &bufflen, NULL);
}
if (lex->current_select->group_list.first)
{
ORDER* order = lex->current_select->group_list.first;
while (order)
{
Item* item = *order->item;
collect_affected_fields(COLLECT_GROUP_BY, item, &where, &buffsz, &bufflen,
&lex->current_select->item_list);
order = order->next;
}
}
if (lex->current_select->where)
{
collect_affected_fields(COLLECT_WHERE, lex->current_select->where, &where, &buffsz, &bufflen,
&lex->current_select->item_list);
}
if (lex->current_select->having)
{
collect_affected_fields(COLLECT_HAVING, lex->current_select->having, &where, &buffsz, &bufflen,
&lex->current_select->item_list);
}
lex->current_select = lex->current_select->next_select_in_list();
}
if ((lex->sql_command == SQLCOM_INSERT) ||
(lex->sql_command == SQLCOM_INSERT_SELECT) ||
(lex->sql_command == SQLCOM_REPLACE))
{
List_iterator<Item> ilist(lex->field_list);
item = (Item*) ilist.next();
for (; item != NULL; item = (Item*) ilist.next())
{
collect_affected_fields(COLLECT_SELECT, item, &where, &buffsz, &bufflen, NULL);
}
if (lex->insert_list)
{
List_iterator<Item> ilist(*lex->insert_list);
item = (Item*) ilist.next();
for (; item != NULL; item = (Item*) ilist.next())
{
collect_affected_fields(COLLECT_SELECT, item, &where, &buffsz, &bufflen, NULL);
}
}
}
return where;
}
bool qc_query_has_clause(GWBUF* buf)
{
bool clause = false;
@ -1719,6 +1430,14 @@ static void parsing_info_done(void* ptr)
free(pi->pi_query_plain_str);
}
for (size_t i = 0; i < pi->field_infos_len; ++i)
{
free(pi->field_infos[i].database);
free(pi->field_infos[i].table);
free(pi->field_infos[i].column);
}
free(pi->field_infos);
free(pi);
}
}
@ -2010,12 +1729,452 @@ qc_query_op_t qc_get_prepare_operation(GWBUF* stmt)
return operation;
}
void qc_get_field_info(GWBUF* stmt, const QC_FIELD_INFO** infos, size_t* n_infos)
static bool should_exclude(const char* name, List<Item>* excludep)
{
MXS_ERROR("qc_get_field_info not implemented yet.");
bool exclude = false;
List_iterator<Item> ilist(*excludep);
Item* exclude_item;
*infos = NULL;
*n_infos = 0;
while (!exclude && (exclude_item = ilist++))
{
const char* exclude_name = exclude_item->name;
if (exclude_name && (strcasecmp(name, exclude_name) == 0))
{
exclude = true;
}
if (!exclude)
{
exclude_name = strrchr(exclude_item->full_name(), '.');
if (exclude_name)
{
++exclude_name; // Char after the '.'
if (strcasecmp(name, exclude_name) == 0)
{
exclude = true;
}
}
}
}
return exclude;
}
static void add_field_info(parsing_info_t* info,
const char* database,
const char* table,
const char* column,
List<Item>* excludep)
{
ss_dassert(column);
// If only a column is specified, but not a table or database and we
// have a list of expressions that should be excluded, we check if the column
// value is present in that list. This is in order to exclude the second "d" in
// a statement like "select a as d from x where d = 2".
if (column && !table && !database && excludep && should_exclude(column, excludep))
{
return;
}
QC_FIELD_INFO item = { (char*)database, (char*)table, (char*)column };
size_t i;
for (i = 0; i < info->field_infos_len; ++i)
{
QC_FIELD_INFO* field_info = info->field_infos + i;
if (strcasecmp(item.column, field_info->column) == 0)
{
if (!item.table && !field_info->table)
{
ss_dassert(!item.database && !field_info->database);
break;
}
else if (item.table && field_info->table && (strcmp(item.table, field_info->table) == 0))
{
if (!item.database && !field_info->database)
{
break;
}
else if (item.database &&
field_info->database &&
(strcmp(item.database, field_info->database) == 0))
{
break;
}
}
}
}
QC_FIELD_INFO* field_infos = NULL;
if (i == info->field_infos_len) // If true, the field was not present already.
{
if (info->field_infos_len < info->field_infos_capacity)
{
field_infos = info->field_infos;
}
else
{
size_t capacity = info->field_infos_capacity ? 2 * info->field_infos_capacity : 8;
field_infos = (QC_FIELD_INFO*)realloc(info->field_infos, capacity * sizeof(QC_FIELD_INFO));
if (field_infos)
{
info->field_infos = field_infos;
info->field_infos_capacity = capacity;
}
}
}
// If field_infos is NULL, then the field was found and has already been noted.
if (field_infos)
{
item.database = item.database ? strdup(item.database) : NULL;
item.table = item.table ? strdup(item.table) : NULL;
ss_dassert(item.column);
item.column = strdup(item.column);
// We are happy if we at least could dup the column.
if (item.column)
{
field_infos[info->field_infos_len++] = item;
}
}
}
static void add_field_info(parsing_info_t* pi, Item_field* item, List<Item>* excludep)
{
const char* database = item->db_name;
const char* table = item->table_name;
const char* column = item->field_name;
LEX* lex = get_lex(pi);
switch (lex->sql_command)
{
case SQLCOM_SHOW_FIELDS:
if (!database)
{
database = "information_schema";
}
if (!table)
{
table = "COLUMNS";
}
break;
case SQLCOM_SHOW_KEYS:
if (!database)
{
database = "information_schema";
}
if (!table)
{
table = "STATISTICS";
}
break;
case SQLCOM_SHOW_STATUS:
if (!database)
{
database = "information_schema";
}
if (!table)
{
table = "SESSION_STATUS";
}
break;
case SQLCOM_SHOW_TABLES:
if (!database)
{
database = "information_schema";
}
if (!table)
{
table = "TABLE_NAMES";
}
break;
case SQLCOM_SHOW_TABLE_STATUS:
if (!database)
{
database = "information_schema";
}
if (!table)
{
table = "TABLES";
}
break;
case SQLCOM_SHOW_VARIABLES:
if (!database)
{
database = "information_schema";
}
if (!table)
{
table = "SESSION_STATUS";
}
break;
default:
break;
}
add_field_info(pi, database, table, column, excludep);
}
static void add_field_info(parsing_info_t* pi, Item* item, List<Item>* excludep)
{
const char* database = NULL;
const char* table = NULL;
const char* column = item->name;
add_field_info(pi, database, table, column, excludep);
}
typedef enum collect_source
{
COLLECT_SELECT,
COLLECT_WHERE,
COLLECT_HAVING,
COLLECT_GROUP_BY,
} collect_source_t;
static void update_field_infos(parsing_info_t* pi,
collect_source_t source,
Item* item,
List<Item>* excludep)
{
switch (item->type())
{
case Item::COND_ITEM:
{
Item_cond* cond_item = static_cast<Item_cond*>(item);
List_iterator<Item> ilist(*cond_item->argument_list());
while (Item *i = ilist++)
{
update_field_infos(pi, source, i, excludep);
}
}
break;
case Item::FIELD_ITEM:
add_field_info(pi, static_cast<Item_field*>(item), excludep);
break;
case Item::REF_ITEM:
{
if (source != COLLECT_SELECT)
{
Item_ref* ref_item = static_cast<Item_ref*>(item);
add_field_info(pi, item, excludep);
size_t n_items = ref_item->cols();
for (size_t i = 0; i < n_items; ++i)
{
Item* reffed_item = ref_item->element_index(i);
if (reffed_item != ref_item)
{
update_field_infos(pi, source, ref_item->element_index(i), excludep);
}
}
}
}
break;
case Item::ROW_ITEM:
{
Item_row* row_item = static_cast<Item_row*>(item);
size_t n_items = row_item->cols();
for (size_t i = 0; i < n_items; ++i)
{
update_field_infos(pi, source, row_item->element_index(i), excludep);
}
}
break;
case Item::FUNC_ITEM:
case Item::SUM_FUNC_ITEM:
{
Item_func* func_item = static_cast<Item_func*>(item);
Item** items = func_item->arguments();
size_t n_items = func_item->argument_count();
for (size_t i = 0; i < n_items; ++i)
{
update_field_infos(pi, source, items[i], excludep);
}
}
break;
case Item::SUBSELECT_ITEM:
{
Item_subselect* subselect_item = static_cast<Item_subselect*>(item);
switch (subselect_item->substype())
{
case Item_subselect::IN_SUBS:
case Item_subselect::ALL_SUBS:
case Item_subselect::ANY_SUBS:
{
Item_in_subselect* in_subselect_item = static_cast<Item_in_subselect*>(item);
#if (((MYSQL_VERSION_MAJOR == 5) &&\
((MYSQL_VERSION_MINOR > 5) ||\
((MYSQL_VERSION_MINOR == 5) && (MYSQL_VERSION_PATCH >= 48))\
)\
) ||\
(MYSQL_VERSION_MAJOR >= 10)\
)
if (in_subselect_item->left_expr_orig)
{
update_field_infos(pi, source,
in_subselect_item->left_expr_orig, excludep);
}
#else
#pragma message "Figure out what to do with versions < 5.5.48."
#endif
// TODO: Anything else that needs to be looked into?
}
break;
case Item_subselect::EXISTS_SUBS:
case Item_subselect::SINGLEROW_SUBS:
// TODO: Handle these explicitly as well.
break;
case Item_subselect::UNKNOWN_SUBS:
default:
MXS_ERROR("Unknown subselect type: %d", subselect_item->substype());
break;
}
}
break;
default:
break;
}
}
void qc_get_field_info(GWBUF* buf, const QC_FIELD_INFO** infos, size_t* n_infos)
{
if (!buf)
{
return;
}
if (!ensure_query_is_parsed(buf))
{
return;
}
parsing_info_t* pi = get_pinfo(buf);
if (!pi->field_infos)
{
ss_dassert(pi);
LEX* lex = get_lex(buf);
ss_dassert(lex);
if (!lex)
{
return;
}
lex->current_select = lex->all_selects_list;
while (lex->current_select)
{
List_iterator<Item> ilist(lex->current_select->item_list);
while (Item *item = ilist++)
{
update_field_infos(pi, COLLECT_SELECT, item, NULL);
}
if (lex->current_select->group_list.first)
{
ORDER* order = lex->current_select->group_list.first;
while (order)
{
Item* item = *order->item;
update_field_infos(pi, COLLECT_GROUP_BY, item, &lex->current_select->item_list);
order = order->next;
}
}
if (lex->current_select->where)
{
update_field_infos(pi, COLLECT_WHERE,
lex->current_select->where,
&lex->current_select->item_list);
}
#if defined(COLLECT_HAVING_AS_WELL)
// A HAVING clause can only refer to fields that already have been
// mentioned. Consequently, they need not be collected.
if (lex->current_select->having)
{
update_field_infos(pi, COLLECT_HAVING,
lex->current_select->having,
&lex->current_select->item_list);
}
#endif
lex->current_select = lex->current_select->next_select_in_list();
}
List_iterator<Item> ilist(lex->value_list);
while (Item* item = ilist++)
{
update_field_infos(pi, COLLECT_SELECT, item, NULL);
}
if ((lex->sql_command == SQLCOM_INSERT) ||
(lex->sql_command == SQLCOM_INSERT_SELECT) ||
(lex->sql_command == SQLCOM_REPLACE))
{
List_iterator<Item> ilist(lex->field_list);
while (Item *item = ilist++)
{
update_field_infos(pi, COLLECT_SELECT, item, NULL);
}
if (lex->insert_list)
{
List_iterator<Item> ilist(*lex->insert_list);
while (Item *item = ilist++)
{
update_field_infos(pi, COLLECT_SELECT, item, NULL);
}
}
}
}
*infos = pi->field_infos;
*n_infos = pi->field_infos_len;
}
namespace
@ -2160,7 +2319,6 @@ static QUERY_CLASSIFIER qc =
qc_get_table_names,
NULL,
qc_query_has_clause,
qc_get_affected_fields,
qc_get_database_names,
qc_get_prepare_name,
qc_get_prepare_operation,


@ -60,9 +60,6 @@ typedef struct qc_sqlite_info
uint32_t types; // The types of the query.
qc_query_op_t operation; // The operation in question.
char* affected_fields; // The affected fields.
size_t affected_fields_len; // The used length of affected_fields.
size_t affected_fields_capacity; // The capacity of affected_fields.
bool is_real_query; // SELECT, UPDATE, INSERT, DELETE or a variation.
bool has_clause; // Has WHERE or HAVING.
char** table_names; // Array of table names used in the query.
@ -82,6 +79,9 @@ typedef struct qc_sqlite_info
qc_query_op_t prepare_operation; // The operation of a prepared statement.
char* preparable_stmt; // The preparable statement.
size_t preparable_stmt_length; // The length of the preparable statement.
QC_FIELD_INFO *field_infos; // Pointer to array of QC_FIELD_INFOs.
size_t field_infos_len; // The used entries in field_infos.
size_t field_infos_capacity; // The capacity of the field_infos array.
} QC_SQLITE_INFO;
typedef enum qc_log_level
@ -123,11 +123,11 @@ typedef enum qc_token_position
QC_TOKEN_RIGHT, // To the right, e.g: "b" in "a = b".
} qc_token_position_t;
static void append_affected_field(QC_SQLITE_INFO* info, const char* s);
static void buffer_object_free(void* data);
static char** copy_string_array(char** strings, int* pn);
static void enlarge_string_array(size_t n, size_t len, char*** ppzStrings, size_t* pCapacity);
static bool ensure_query_is_parsed(GWBUF* query);
static void free_field_infos(QC_FIELD_INFO* infos, size_t n_infos);
static void free_string_array(char** sa);
static QC_SQLITE_INFO* get_query_info(GWBUF* query);
static QC_SQLITE_INFO* info_alloc(void);
@ -140,17 +140,17 @@ static bool parse_query(GWBUF* query);
static void parse_query_string(const char* query, size_t len);
static bool query_is_parsed(GWBUF* query);
static bool should_exclude(const char* zName, const ExprList* pExclude);
static void update_affected_fields(QC_SQLITE_INFO* info,
int prev_token,
const Expr* pExpr,
qc_token_position_t pos,
const ExprList* pExclude);
static void update_affected_fields_from_exprlist(QC_SQLITE_INFO* info,
const ExprList* pEList, const ExprList* pExclude);
static void update_affected_fields_from_idlist(QC_SQLITE_INFO* info,
const IdList* pIds, const ExprList* pExclude);
static void update_affected_fields_from_select(QC_SQLITE_INFO* info,
const Select* pSelect, const ExprList* pExclude);
static void update_fields_infos(QC_SQLITE_INFO* info,
int prev_token,
const Expr* pExpr,
qc_token_position_t pos,
const ExprList* pExclude);
static void update_fields_infos_from_exprlist(QC_SQLITE_INFO* info,
const ExprList* pEList, const ExprList* pExclude);
static void update_fields_infos_from_idlist(QC_SQLITE_INFO* info,
const IdList* pIds, const ExprList* pExclude);
static void update_fields_infos_from_select(QC_SQLITE_INFO* info,
const Select* pSelect, const ExprList* pExclude);
static void update_database_names(QC_SQLITE_INFO* info, const char* name);
static void update_names(QC_SQLITE_INFO* info, const char* zDatabase, const char* zTable);
static void update_names_from_srclist(QC_SQLITE_INFO* info, const SrcList* pSrc);
@ -248,7 +248,7 @@ static bool ensure_query_is_parsed(GWBUF* query)
return parsed;
}
void free_field_infos(QC_FIELD_INFO* infos, size_t n_infos)
static void free_field_infos(QC_FIELD_INFO* infos, size_t n_infos)
{
if (infos)
{
@ -304,13 +304,13 @@ static QC_SQLITE_INFO* info_alloc(void)
static void info_finish(QC_SQLITE_INFO* info)
{
free(info->affected_fields);
free_string_array(info->table_names);
free_string_array(info->table_fullnames);
free(info->created_table_name);
free_string_array(info->database_names);
free(info->prepare_name);
free(info->preparable_stmt);
free_field_infos(info->field_infos, info->field_infos_len);
}
static void info_free(QC_SQLITE_INFO* info)
@ -330,9 +330,6 @@ static QC_SQLITE_INFO* info_init(QC_SQLITE_INFO* info)
info->types = QUERY_TYPE_UNKNOWN;
info->operation = QUERY_OP_UNDEFINED;
info->affected_fields = NULL;
info->affected_fields_len = 0;
info->affected_fields_capacity = 0;
info->is_real_query = false;
info->has_clause = false;
info->table_names = NULL;
@ -352,6 +349,9 @@ static QC_SQLITE_INFO* info_init(QC_SQLITE_INFO* info)
info->prepare_operation = QUERY_OP_UNDEFINED;
info->preparable_stmt = NULL;
info->preparable_stmt_length = 0;
info->field_infos = NULL;
info->field_infos_len = 0;
info->field_infos_capacity = 0;
return info;
}
@ -643,42 +643,6 @@ static void log_invalid_data(GWBUF* query, const char* message)
}
}
static void append_affected_field(QC_SQLITE_INFO* info, const char* s)
{
size_t len = strlen(s);
size_t required_len = info->affected_fields_len + len + 1; // 1 for NULL
if (info->affected_fields_len != 0)
{
required_len += 1; // " " between fields
}
if (required_len > info->affected_fields_capacity)
{
if (info->affected_fields_capacity == 0)
{
info->affected_fields_capacity = 32;
}
while (required_len > info->affected_fields_capacity)
{
info->affected_fields_capacity *= 2;
}
info->affected_fields = MXS_REALLOC(info->affected_fields, info->affected_fields_capacity);
MXS_ABORT_IF_NULL(info->affected_fields);
}
if (info->affected_fields_len != 0)
{
strcpy(info->affected_fields + info->affected_fields_len, " ");
info->affected_fields_len += 1;
}
strcpy(info->affected_fields + info->affected_fields_len, s);
info->affected_fields_len += len;
}
static bool should_exclude(const char* zName, const ExprList* pExclude)
{
int i;
@ -696,54 +660,209 @@ static bool should_exclude(const char* zName, const ExprList* pExclude)
Expr* pExpr = item->pExpr;
if (pExpr->op == TK_DOT)
if (pExpr->op == TK_EQ)
{
// We end up here e.g with "UPDATE t set t.col = 5 ..."
// So, we pick the left branch.
pExpr = pExpr->pLeft;
}
while (pExpr->op == TK_DOT)
{
pExpr = pExpr->pRight;
}
// We need to ensure that we do not report fields where there
// is only a difference in case. E.g.
// SELECT A FROM tbl WHERE a = "foo";
// Affected fields is "A" and not "A a".
if ((pExpr->op == TK_ID) && (strcasecmp(pExpr->u.zToken, zName) == 0))
if (pExpr->op == TK_ID)
{
break;
// We need to ensure that we do not report fields where there
// is only a difference in case. E.g.
// SELECT A FROM tbl WHERE a = "foo";
// Affected fields is "A" and not "A a".
if (strcasecmp(pExpr->u.zToken, zName) == 0)
{
break;
}
}
}
return i != pExclude->nExpr;
}
static void update_affected_fields(QC_SQLITE_INFO* info,
int prev_token,
const Expr* pExpr,
qc_token_position_t pos,
const ExprList* pExclude)
static void update_field_infos(QC_SQLITE_INFO* info,
const char* database,
const char* table,
const char* column,
const ExprList* pExclude)
{
ss_dassert(column);
// If only a column is specified, but not a table or database and we
// have a list of expressions that should be excluded, we check if the column
// value is present in that list. This is in order to exclude the second "d" in
// a statement like "select a as d from x where d = 2".
if (column && !table && !database && pExclude && should_exclude(column, pExclude))
{
return;
}
QC_FIELD_INFO item = { (char*)database, (char*)table, (char*)column };
int i;
for (i = 0; i < info->field_infos_len; ++i)
{
QC_FIELD_INFO* field_info = info->field_infos + i;
if (strcasecmp(item.column, field_info->column) == 0)
{
if (!item.table && !field_info->table)
{
ss_dassert(!item.database && !field_info->database);
break;
}
else if (item.table && field_info->table && (strcmp(item.table, field_info->table) == 0))
{
if (!item.database && !field_info->database)
{
break;
}
else if (item.database &&
field_info->database &&
(strcmp(item.database, field_info->database) == 0))
{
break;
}
}
}
}
QC_FIELD_INFO* field_infos = NULL;
if (i == info->field_infos_len) // If true, the field was not present already.
{
if (info->field_infos_len < info->field_infos_capacity)
{
field_infos = info->field_infos;
}
else
{
size_t capacity = info->field_infos_capacity ? 2 * info->field_infos_capacity : 8;
field_infos = MXS_REALLOC(info->field_infos, capacity * sizeof(QC_FIELD_INFO));
if (field_infos)
{
info->field_infos = field_infos;
info->field_infos_capacity = capacity;
}
}
}
// If field_infos is NULL, then the field was found and has already been noted.
if (field_infos)
{
item.database = item.database ? MXS_STRDUP(item.database) : NULL;
item.table = item.table ? MXS_STRDUP(item.table) : NULL;
ss_dassert(item.column);
item.column = MXS_STRDUP(item.column);
// We are happy if we at least could dup the column.
if (item.column)
{
field_infos[info->field_infos_len++] = item;
}
}
}
static void update_field_infos_from_expr(QC_SQLITE_INFO* info,
const struct Expr* pExpr,
const ExprList* pExclude)
{
QC_FIELD_INFO item = {};
if (pExpr->op == TK_ASTERISK)
{
item.column = "*";
}
else if (pExpr->op == TK_ID)
{
// select a from...
item.column = pExpr->u.zToken;
}
else if (pExpr->op == TK_DOT)
{
if (pExpr->pLeft->op == TK_ID &&
(pExpr->pRight->op == TK_ID || pExpr->pRight->op == TK_ASTERISK))
{
// select a.b from...
item.table = pExpr->pLeft->u.zToken;
if (pExpr->pRight->op == TK_ID)
{
item.column = pExpr->pRight->u.zToken;
}
else
{
item.column = "*";
}
}
else if (pExpr->pLeft->op == TK_ID &&
pExpr->pRight->op == TK_DOT &&
pExpr->pRight->pLeft->op == TK_ID &&
(pExpr->pRight->pRight->op == TK_ID || pExpr->pRight->pRight->op == TK_ASTERISK))
{
// select a.b.c from...
item.database = pExpr->pLeft->u.zToken;
item.table = pExpr->pRight->pLeft->u.zToken;
if (pExpr->pRight->pRight->op == TK_ID)
{
item.column = pExpr->pRight->pRight->u.zToken;
}
else
{
item.column = "*";
}
}
}
if (item.column)
{
bool should_update = true;
if ((pExpr->flags & EP_DblQuoted) == 0)
{
if ((strcasecmp(item.column, "true") == 0) || (strcasecmp(item.column, "false") == 0))
{
should_update = false;
}
}
if (should_update)
{
update_field_infos(info, item.database, item.table, item.column, pExclude);
}
}
}
static void update_fields_infos(QC_SQLITE_INFO* info,
int prev_token,
const Expr* pExpr,
qc_token_position_t pos,
const ExprList* pExclude)
{
const char* zToken = pExpr->u.zToken;
switch (pExpr->op)
{
case TK_ASTERISK: // "select *"
append_affected_field(info, "*");
case TK_ASTERISK: // select *
update_field_infos_from_expr(info, pExpr, pExclude);
break;
case TK_DOT:
// In case of "X.Y" qc_mysqlembedded returns "Y".
update_affected_fields(info, TK_DOT, pExpr->pRight, QC_TOKEN_RIGHT, pExclude);
case TK_DOT: // select a.b ... select a.b.c
update_field_infos_from_expr(info, pExpr, pExclude);
break;
case TK_ID:
if ((pExpr->flags & EP_DblQuoted) == 0)
{
if ((strcasecmp(zToken, "true") != 0) && (strcasecmp(zToken, "false") != 0))
{
if (!pExclude || !should_exclude(zToken, pExclude))
{
append_affected_field(info, zToken);
}
}
}
case TK_ID: // select a
update_field_infos_from_expr(info, pExpr, pExclude);
break;
case TK_VARIABLE:
@ -804,12 +923,12 @@ static void update_affected_fields(QC_SQLITE_INFO* info,
if (pExpr->pLeft)
{
update_affected_fields(info, pExpr->op, pExpr->pLeft, QC_TOKEN_LEFT, pExclude);
update_fields_infos(info, pExpr->op, pExpr->pLeft, QC_TOKEN_LEFT, pExclude);
}
if (pExpr->pRight)
{
update_affected_fields(info, pExpr->op, pExpr->pRight, QC_TOKEN_RIGHT, pExclude);
update_fields_infos(info, pExpr->op, pExpr->pRight, QC_TOKEN_RIGHT, pExclude);
}
if (pExpr->x.pList)
@ -819,7 +938,7 @@ static void update_affected_fields(QC_SQLITE_INFO* info,
case TK_BETWEEN:
case TK_CASE:
case TK_FUNCTION:
update_affected_fields_from_exprlist(info, pExpr->x.pList, pExclude);
update_fields_infos_from_exprlist(info, pExpr->x.pList, pExclude);
break;
case TK_EXISTS:
@ -827,11 +946,11 @@ static void update_affected_fields(QC_SQLITE_INFO* info,
case TK_SELECT:
if (pExpr->flags & EP_xIsSelect)
{
update_affected_fields_from_select(info, pExpr->x.pSelect, pExclude);
update_fields_infos_from_select(info, pExpr->x.pSelect, pExclude);
}
else
{
update_affected_fields_from_exprlist(info, pExpr->x.pList, pExclude);
update_fields_infos_from_exprlist(info, pExpr->x.pList, pExclude);
}
break;
}
@ -840,19 +959,19 @@ static void update_affected_fields(QC_SQLITE_INFO* info,
}
}
static void update_affected_fields_from_exprlist(QC_SQLITE_INFO* info,
const ExprList* pEList,
const ExprList* pExclude)
static void update_fields_infos_from_exprlist(QC_SQLITE_INFO* info,
const ExprList* pEList,
const ExprList* pExclude)
{
for (int i = 0; i < pEList->nExpr; ++i)
{
struct ExprList_item* pItem = &pEList->a[i];
update_affected_fields(info, 0, pItem->pExpr, QC_TOKEN_MIDDLE, pExclude);
update_fields_infos(info, 0, pItem->pExpr, QC_TOKEN_MIDDLE, pExclude);
}
}
static void update_affected_fields_from_idlist(QC_SQLITE_INFO* info,
static void update_fields_infos_from_idlist(QC_SQLITE_INFO* info,
const IdList* pIds,
const ExprList* pExclude)
{
@ -860,14 +979,11 @@ static void update_affected_fields_from_idlist(QC_SQLITE_INFO* info,
{
struct IdList_item* pItem = &pIds->a[i];
if (!pExclude || !should_exclude(pItem->zName, pExclude))
{
append_affected_field(info, pItem->zName);
}
update_field_infos(info, NULL, NULL, pItem->zName, pExclude);
}
}
static void update_affected_fields_from_select(QC_SQLITE_INFO* info,
static void update_fields_infos_from_select(QC_SQLITE_INFO* info,
const Select* pSelect,
const ExprList* pExclude)
{
@ -885,7 +1001,7 @@ static void update_affected_fields_from_select(QC_SQLITE_INFO* info,
if (pSrc->a[i].pSelect)
{
update_affected_fields_from_select(info, pSrc->a[i].pSelect, pExclude);
update_fields_infos_from_select(info, pSrc->a[i].pSelect, pExclude);
}
#ifdef QC_COLLECT_NAMES_FROM_USING
@ -895,7 +1011,7 @@ static void update_affected_fields_from_select(QC_SQLITE_INFO* info,
// does not reveal its value, right?
if (pSrc->a[i].pUsing)
{
update_affected_fields_from_idlist(info, pSrc->a[i].pUsing, pSelect->pEList);
update_fields_infos_from_idlist(info, pSrc->a[i].pUsing, pSelect->pEList);
}
#endif
}
@ -903,24 +1019,28 @@ static void update_affected_fields_from_select(QC_SQLITE_INFO* info,
if (pSelect->pEList)
{
update_affected_fields_from_exprlist(info, pSelect->pEList, NULL);
update_fields_infos_from_exprlist(info, pSelect->pEList, NULL);
}
if (pSelect->pWhere)
{
info->has_clause = true;
update_affected_fields(info, 0, pSelect->pWhere, QC_TOKEN_MIDDLE, pSelect->pEList);
update_fields_infos(info, 0, pSelect->pWhere, QC_TOKEN_MIDDLE, pSelect->pEList);
}
if (pSelect->pGroupBy)
{
update_affected_fields_from_exprlist(info, pSelect->pGroupBy, pSelect->pEList);
update_fields_infos_from_exprlist(info, pSelect->pGroupBy, pSelect->pEList);
}
if (pSelect->pHaving)
{
info->has_clause = true;
update_affected_fields(info, 0, pSelect->pHaving, QC_TOKEN_MIDDLE, pSelect->pEList);
#if defined(COLLECT_HAVING_AS_WELL)
// A HAVING clause can only refer to fields that already have been
// mentioned. Consequently, they need not be collected.
update_fields_infos(info, 0, pSelect->pHaving, QC_TOKEN_MIDDLE, pSelect->pEList);
#endif
}
}
@ -1165,7 +1285,7 @@ void mxs_sqlite3CreateView(Parse *pParse, /* The parsing context */
if (pSelect)
{
update_affected_fields_from_select(info, pSelect, NULL);
update_fields_infos_from_select(info, pSelect, NULL);
info->is_real_query = false;
}
@ -1235,7 +1355,7 @@ void mxs_sqlite3DeleteFrom(Parse* pParse, SrcList* pTabList, Expr* pWhere, SrcLi
if (pWhere)
{
update_affected_fields(info, 0, pWhere, QC_TOKEN_MIDDLE, 0);
update_fields_infos(info, 0, pWhere, QC_TOKEN_MIDDLE, 0);
}
exposed_sqlite3ExprDelete(pParse->db, pWhere);
@ -1299,7 +1419,7 @@ void mxs_sqlite3EndTable(Parse *pParse, /* Parse context */
{
if (pSelect)
{
update_affected_fields_from_select(info, pSelect, NULL);
update_fields_infos_from_select(info, pSelect, NULL);
info->is_real_query = false;
}
else if (pOldTable)
@ -1345,17 +1465,17 @@ void mxs_sqlite3Insert(Parse* pParse,
if (pColumns)
{
update_affected_fields_from_idlist(info, pColumns, NULL);
update_fields_infos_from_idlist(info, pColumns, NULL);
}
if (pSelect)
{
update_affected_fields_from_select(info, pSelect, NULL);
update_fields_infos_from_select(info, pSelect, NULL);
}
if (pSet)
{
update_affected_fields_from_exprlist(info, pSet, NULL);
update_fields_infos_from_exprlist(info, pSet, NULL);
}
exposed_sqlite3SrcListDelete(pParse->db, pTabList);
@ -1479,18 +1599,13 @@ void mxs_sqlite3Update(Parse* pParse, SrcList* pTabList, ExprList* pChanges, Exp
{
struct ExprList_item* pItem = &pChanges->a[i];
if (pItem->zName)
{
append_affected_field(info, pItem->zName);
}
update_affected_fields(info, 0, pItem->pExpr, QC_TOKEN_MIDDLE, NULL);
update_fields_infos(info, 0, pItem->pExpr, QC_TOKEN_MIDDLE, NULL);
}
}
if (pWhere)
{
update_affected_fields(info, 0, pWhere, QC_TOKEN_MIDDLE, NULL);
update_fields_infos(info, 0, pWhere, QC_TOKEN_MIDDLE, pChanges);
}
exposed_sqlite3SrcListDelete(pParse->db, pTabList);
@ -1516,7 +1631,7 @@ void maxscaleCollectInfoFromSelect(Parse* pParse, Select* pSelect)
info->types = QUERY_TYPE_READ;
}
update_affected_fields_from_select(info, pSelect, NULL);
update_fields_infos_from_select(info, pSelect, NULL);
}
void maxscaleAlterTable(Parse *pParse, /* Parser context. */
@ -1668,9 +1783,12 @@ void maxscaleExplain(Parse* pParse, SrcList* pName)
info->status = QC_QUERY_PARSED;
info->types = QUERY_TYPE_READ;
update_names(info, "information_schema", "COLUMNS");
append_affected_field(info,
"COLUMN_DEFAULT COLUMN_KEY COLUMN_NAME "
"COLUMN_TYPE EXTRA IS_NULLABLE");
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_DEFAULT", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_KEY", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_NAME", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_TYPE", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "EXTRA", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "IS_NULLABLE", NULL);
exposed_sqlite3SrcListDelete(pParse->db, pName);
}
@ -2190,7 +2308,7 @@ void maxscaleSet(Parse* pParse, int scope, mxs_set_t kind, ExprList* pList)
if (pValue->op == TK_SELECT)
{
update_affected_fields_from_select(info, pValue->x.pSelect, NULL);
update_fields_infos_from_select(info, pValue->x.pSelect, NULL);
info->is_real_query = false; // TODO: This is what qc_mysqlembedded claims.
}
}
@ -2246,16 +2364,24 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
update_names(info, "information_schema", "COLUMNS");
if (pShow->data == MXS_SHOW_COLUMNS_FULL)
{
append_affected_field(info,
"COLLATION_NAME COLUMN_COMMENT COLUMN_DEFAULT "
"COLUMN_KEY COLUMN_NAME COLUMN_TYPE EXTRA "
"IS_NULLABLE PRIVILEGES");
update_field_infos(info, "information_schema", "COLUMNS", "COLLATION_NAME", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_COMMENT", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_DEFAULT", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_KEY", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_NAME", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_TYPE", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "EXTRA", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "IS_NULLABLE", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "PRIVILEGES", NULL);
}
else
{
append_affected_field(info,
"COLUMN_DEFAULT COLUMN_KEY COLUMN_NAME "
"COLUMN_TYPE EXTRA IS_NULLABLE");
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_DEFAULT", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_KEY", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_NAME", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "COLUMN_TYPE", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "EXTRA", NULL);
update_field_infos(info, "information_schema", "COLUMNS", "IS_NULLABLE", NULL);
}
}
break;
@ -2278,7 +2404,7 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
{
info->types = QUERY_TYPE_SHOW_DATABASES;
update_names(info, "information_schema", "SCHEMATA");
append_affected_field(info, "SCHEMA_NAME");
update_field_infos(info, "information_schema", "SCHEMATA", "SCHEMA_NAME", NULL);
}
break;
@ -2288,10 +2414,19 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
{
info->types = QUERY_TYPE_WRITE;
update_names(info, "information_schema", "STATISTICS");
append_affected_field(info,
"CARDINALITY COLLATION COLUMN_NAME COMMENT INDEX_COMMENT "
"INDEX_NAME INDEX_TYPE NON_UNIQUE NULLABLE PACKED SEQ_IN_INDEX "
"SUB_PART TABLE_NAME");
update_field_infos(info, "information_schema", "STATISTICS", "CARDINALITY", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "COLLATION", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "COLUMN_NAME", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "COMMENT", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "INDEX_COMMENT", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "INDEX_NAME", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "INDEX_TYPE", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "NON_UNIQUE", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "NULLABLE", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "PACKED", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "SEQ_IN_INDEX", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "SUB_PART", NULL);
update_field_infos(info, "information_schema", "STATISTICS", "TABLE_NAME", NULL);
}
break;
@ -2299,12 +2434,24 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
{
info->types = QUERY_TYPE_WRITE;
update_names(info, "information_schema", "TABLES");
append_affected_field(info,
"AUTO_INCREMENT AVG_ROW_LENGTH CHECKSUM CHECK_TIME "
"CREATE_OPTIONS CREATE_TIME DATA_FREE DATA_LENGTH "
"ENGINE INDEX_LENGTH MAX_DATA_LENGTH ROW_FORMAT "
"TABLE_COLLATION TABLE_COMMENT TABLE_NAME "
"TABLE_ROWS UPDATE_TIME VERSION");
update_field_infos(info, "information_schema", "TABLES", "AUTO_INCREMENT", NULL);
update_field_infos(info, "information_schema", "TABLES", "AVG_ROW_LENGTH", NULL);
update_field_infos(info, "information_schema", "TABLES", "CHECKSUM", NULL);
update_field_infos(info, "information_schema", "TABLES", "CHECK_TIME", NULL);
update_field_infos(info, "information_schema", "TABLES", "CREATE_OPTIONS", NULL);
update_field_infos(info, "information_schema", "TABLES", "CREATE_TIME", NULL);
update_field_infos(info, "information_schema", "TABLES", "DATA_FREE", NULL);
update_field_infos(info, "information_schema", "TABLES", "DATA_LENGTH", NULL);
update_field_infos(info, "information_schema", "TABLES", "ENGINE", NULL);
update_field_infos(info, "information_schema", "TABLES", "INDEX_LENGTH", NULL);
update_field_infos(info, "information_schema", "TABLES", "MAX_DATA_LENGTH", NULL);
update_field_infos(info, "information_schema", "TABLES", "ROW_FORMAT", NULL);
update_field_infos(info, "information_schema", "TABLES", "TABLE_COLLATION", NULL);
update_field_infos(info, "information_schema", "TABLES", "TABLE_COMMENT", NULL);
update_field_infos(info, "information_schema", "TABLES", "TABLE_NAME", NULL);
update_field_infos(info, "information_schema", "TABLES", "TABLE_ROWS", NULL);
update_field_infos(info, "information_schema", "TABLES", "UPDATE_TIME", NULL);
update_field_infos(info, "information_schema", "TABLES", "VERSION", NULL);
}
break;
@ -2318,7 +2465,8 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
// TODO: qc_mysqlembedded does not set the type bit.
info->types = QUERY_TYPE_UNKNOWN;
update_names(info, "information_schema", "SESSION_STATUS");
append_affected_field(info, "VARIABLE_NAME VARIABLE_VALUE");
update_field_infos(info, "information_schema", "SESSION_STATUS", "VARIABLE_NAME", NULL);
update_field_infos(info, "information_schema", "SESSION_STATUS", "VARIABLE_VALUE", NULL);
break;
case MXS_SHOW_STATUS_MASTER:
@ -2343,7 +2491,7 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
{
info->types = QUERY_TYPE_SHOW_TABLES;
update_names(info, "information_schema", "TABLE_NAMES");
append_affected_field(info, "TABLE_NAME");
update_field_infos(info, "information_schema", "TABLE_NAMES", "TABLE_NAME", NULL);
}
break;
@ -2358,7 +2506,8 @@ extern void maxscaleShow(Parse* pParse, MxsShow* pShow)
info->types = QUERY_TYPE_SYSVAR_READ;
}
update_names(info, "information_schema", "SESSION_VARIABLES");
append_affected_field(info, "VARIABLE_NAME VARIABLE_VALUE");
update_field_infos(info, "information_schema", "SESSION_STATUS", "VARIABLE_NAME", NULL);
update_field_infos(info, "information_schema", "SESSION_STATUS", "VARIABLE_VALUE", NULL);
}
break;
@ -2435,7 +2584,6 @@ static bool qc_sqlite_is_real_query(GWBUF* query);
static char** qc_sqlite_get_table_names(GWBUF* query, int* tblsize, bool fullnames);
static char* qc_sqlite_get_canonical(GWBUF* query);
static bool qc_sqlite_query_has_clause(GWBUF* query);
static char* qc_sqlite_get_affected_fields(GWBUF* query);
static char** qc_sqlite_get_database_names(GWBUF* query, int* sizep);
static bool get_key_and_value(char* arg, const char** pkey, const char** pvalue)
@ -2843,42 +2991,6 @@ static bool qc_sqlite_query_has_clause(GWBUF* query)
return has_clause;
}
static char* qc_sqlite_get_affected_fields(GWBUF* query)
{
QC_TRACE();
ss_dassert(this_unit.initialized);
ss_dassert(this_thread.initialized);
char* affected_fields = NULL;
QC_SQLITE_INFO* info = get_query_info(query);
if (info)
{
if (qc_info_is_valid(info->status))
{
affected_fields = info->affected_fields;
}
else if (MXS_LOG_PRIORITY_IS_ENABLED(LOG_INFO))
{
log_invalid_data(query, "cannot report what fields are affected");
}
}
else
{
MXS_ERROR("The query could not be parsed. Response not valid.");
}
if (!affected_fields)
{
affected_fields = "";
}
affected_fields = MXS_STRDUP(affected_fields);
MXS_ABORT_IF_NULL(affected_fields);
return affected_fields;
}
static char** qc_sqlite_get_database_names(GWBUF* query, int* sizep)
{
QC_TRACE();
@ -2969,16 +3081,33 @@ static qc_query_op_t qc_sqlite_get_prepare_operation(GWBUF* query)
return op;
}
void qc_sqlite_get_field_info(GWBUF* stmt, const QC_FIELD_INFO** infos, size_t* n_infos)
void qc_sqlite_get_field_info(GWBUF* query, const QC_FIELD_INFO** infos, size_t* n_infos)
{
QC_TRACE();
ss_dassert(this_unit.initialized);
ss_dassert(this_thread.initialized);
MXS_ERROR("qc_get_field_info not implemented yet.");
*infos = NULL;
*n_infos = 0;
QC_SQLITE_INFO* info = get_query_info(query);
if (info)
{
if (qc_info_is_valid(info->status))
{
*infos = info->field_infos;
*n_infos = info->field_infos_len;
}
else if (MXS_LOG_PRIORITY_IS_ENABLED(LOG_INFO))
{
log_invalid_data(query, "cannot report field info");
}
}
else
{
MXS_ERROR("The query could not be parsed. Response not valid.");
}
}
/**
@ -3002,7 +3131,6 @@ static QUERY_CLASSIFIER qc =
qc_sqlite_get_table_names,
NULL,
qc_sqlite_query_has_clause,
qc_sqlite_get_affected_fields,
qc_sqlite_get_database_names,
qc_sqlite_get_prepare_name,
qc_sqlite_get_prepare_operation,

View File

@ -196,10 +196,12 @@ int test(FILE* input, FILE* expected)
{
tok = strpbrk(strbuff, ";");
unsigned int qlen = tok - strbuff + 1;
GWBUF* buff = gwbuf_alloc(qlen + 6);
*((unsigned char*)(GWBUF_DATA(buff))) = qlen;
*((unsigned char*)(GWBUF_DATA(buff) + 1)) = (qlen >> 8);
*((unsigned char*)(GWBUF_DATA(buff) + 2)) = (qlen >> 16);
unsigned int payload_len = qlen + 1;
unsigned int buf_len = payload_len + 4;
GWBUF* buff = gwbuf_alloc(buf_len);
*((unsigned char*)(GWBUF_DATA(buff))) = payload_len;
*((unsigned char*)(GWBUF_DATA(buff) + 1)) = (payload_len >> 8);
*((unsigned char*)(GWBUF_DATA(buff) + 2)) = (payload_len >> 16);
*((unsigned char*)(GWBUF_DATA(buff) + 3)) = 0x00;
*((unsigned char*)(GWBUF_DATA(buff) + 4)) = 0x03;
memcpy(GWBUF_DATA(buff) + 5, strbuff, qlen);

View File

@ -715,68 +715,6 @@ ostream& operator << (ostream& o, const std::set<string>& s)
return o;
}
bool compare_get_affected_fields(QUERY_CLASSIFIER* pClassifier1, GWBUF* pCopy1,
QUERY_CLASSIFIER* pClassifier2, GWBUF* pCopy2)
{
bool success = false;
const char HEADING[] = "qc_get_affected_fields : ";
char* rv1 = pClassifier1->qc_get_affected_fields(pCopy1);
char* rv2 = pClassifier2->qc_get_affected_fields(pCopy2);
std::set<string> fields1;
std::set<string> fields2;
if (rv1)
{
add_fields(fields1, rv1);
}
if (rv2)
{
add_fields(fields2, rv2);
}
stringstream ss;
ss << HEADING;
if ((!rv1 && !rv2) || (rv1 && rv2 && (fields1 == fields2)))
{
ss << "Ok : " << fields1;
success = true;
}
else
{
ss << "ERR: ";
if (rv1)
{
ss << fields1;
}
else
{
ss << "NULL";
}
ss << " != ";
if (rv2)
{
ss << fields2;
}
else
{
ss << "NULL";
}
}
report(success, ss.str());
free(rv1);
free(rv2);
return success;
}
bool compare_get_database_names(QUERY_CLASSIFIER* pClassifier1, GWBUF* pCopy1,
QUERY_CLASSIFIER* pClassifier2, GWBUF* pCopy2)
{
@ -871,6 +809,236 @@ bool compare_get_prepare_operation(QUERY_CLASSIFIER* pClassifier1, GWBUF* pCopy1
return success;
}
bool operator == (const QC_FIELD_INFO& lhs, const QC_FIELD_INFO& rhs)
{
bool rv = false;
if (lhs.column && rhs.column && (strcasecmp(lhs.column, rhs.column) == 0))
{
if (!lhs.table && !rhs.table)
{
rv = true;
}
else if (lhs.table && rhs.table && (strcmp(lhs.table, rhs.table) == 0))
{
if (!lhs.database && !rhs.database)
{
rv = true;
}
else if (lhs.database && rhs.database && (strcmp(lhs.database, rhs.database) == 0))
{
rv = true;
}
}
}
return rv;
}
ostream& operator << (ostream& out, const QC_FIELD_INFO& x)
{
if (x.database)
{
out << x.database;
out << ".";
ss_dassert(x.table);
}
if (x.table)
{
out << x.table;
out << ".";
}
ss_dassert(x.column);
out << x.column;
return out;
}
class QcFieldInfo
{
public:
QcFieldInfo(const QC_FIELD_INFO& info)
: m_database(info.database ? info.database : "")
, m_table(info.table ? info.table : "")
, m_column(info.column ? info.column : "")
{}
bool eq(const QcFieldInfo& rhs) const
{
return
m_database == rhs.m_database &&
m_table == rhs.m_table &&
m_column == rhs.m_column;
}
bool lt(const QcFieldInfo& rhs) const
{
bool rv = false;
if (m_database < rhs.m_database)
{
rv = true;
}
else if (m_database > rhs.m_database)
{
rv = false;
}
else
{
if (m_table < rhs.m_table)
{
rv = true;
}
else if (m_table > rhs.m_table)
{
rv = false;
}
else
{
if (m_column < rhs.m_column)
{
rv = true;
}
}
}
return rv;
}
void print(ostream& out) const
{
if (!m_database.empty())
{
out << m_database;
out << ".";
}
if (!m_table.empty())
{
out << m_table;
out << ".";
}
out << m_column;
}
private:
std::string m_database;
std::string m_table;
std::string m_column;
};
ostream& operator << (ostream& out, const QcFieldInfo& x)
{
x.print(out);
return out;
}
ostream& operator << (ostream& out, std::set<QcFieldInfo>& x)
{
std::set<QcFieldInfo>::iterator i = x.begin();
std::set<QcFieldInfo>::iterator end = x.end();
while (i != end)
{
out << *i++;
if (i != end)
{
out << " ";
}
}
return out;
}
bool operator < (const QcFieldInfo& lhs, const QcFieldInfo& rhs)
{
return lhs.lt(rhs);
}
bool operator == (const QcFieldInfo& lhs, const QcFieldInfo& rhs)
{
return lhs.eq(rhs);
}
bool are_equal(const QC_FIELD_INFO* fields1, size_t n_fields1,
const QC_FIELD_INFO* fields2, size_t n_fields2)
{
bool rv = (n_fields1 == n_fields2);
if (rv)
{
size_t i = 0;
while (rv && (i < n_fields1))
{
rv = (fields1[i] == fields2[i]);
++i;
}
}
return rv;
}
ostream& print(ostream& out, const QC_FIELD_INFO* fields, size_t n_fields)
{
size_t i = 0;
while (i < n_fields)
{
out << fields[i++];
if (i != n_fields)
{
out << " ";
}
}
return out;
}
bool compare_get_field_info(QUERY_CLASSIFIER* pClassifier1, GWBUF* pCopy1,
QUERY_CLASSIFIER* pClassifier2, GWBUF* pCopy2)
{
bool success = false;
const char HEADING[] = "qc_get_field_info : ";
const QC_FIELD_INFO* infos1;
const QC_FIELD_INFO* infos2;
size_t n_infos1;
size_t n_infos2;
pClassifier1->qc_get_field_info(pCopy1, &infos1, &n_infos1);
pClassifier2->qc_get_field_info(pCopy2, &infos2, &n_infos2);
stringstream ss;
ss << HEADING;
std::set<QcFieldInfo> f1;
f1.insert(infos1, infos1 + n_infos1);
std::set<QcFieldInfo> f2;
f2.insert(infos2, infos2 + n_infos2);
if (f1 == f2)
{
ss << "Ok : ";
ss << f1;
success = true;
}
else
{
ss << "ERR: " << f1 << " != " << f2;
}
report(success, ss.str());
return success;
}
bool compare(QUERY_CLASSIFIER* pClassifier1, QUERY_CLASSIFIER* pClassifier2, const string& s)
{
GWBUF* pCopy1 = create_gwbuf(s);
@ -887,10 +1055,10 @@ bool compare(QUERY_CLASSIFIER* pClassifier1, QUERY_CLASSIFIER* pClassifier2, con
errors += !compare_get_table_names(pClassifier1, pCopy1, pClassifier2, pCopy2, false);
errors += !compare_get_table_names(pClassifier1, pCopy1, pClassifier2, pCopy2, true);
errors += !compare_query_has_clause(pClassifier1, pCopy1, pClassifier2, pCopy2);
errors += !compare_get_affected_fields(pClassifier1, pCopy1, pClassifier2, pCopy2);
errors += !compare_get_database_names(pClassifier1, pCopy1, pClassifier2, pCopy2);
errors += !compare_get_prepare_name(pClassifier1, pCopy1, pClassifier2, pCopy2);
errors += !compare_get_prepare_operation(pClassifier1, pCopy1, pClassifier2, pCopy2);
errors += !compare_get_field_info(pClassifier1, pCopy1, pClassifier2, pCopy2);
gwbuf_free(pCopy1);
gwbuf_free(pCopy2);

View File

@ -222,9 +222,11 @@ replace into t1 values (4, 4);
select row_count();
# Reports that 2 rows are affected. This conforms to documentation.
# (Useful for differentiating inserts from updates).
insert into t1 values (2, 2) on duplicate key update data= data + 10;
# MXSTODO: insert into t1 values (2, 2) on duplicate key update data= data + 10;
# qc_sqlite: Cannot parse "on duplicate"
select row_count();
insert into t1 values (5, 5) on duplicate key update data= data + 10;
# MXSTODO: insert into t1 values (5, 5) on duplicate key update data= data + 10;
# qc_sqlite: Cannot parse "on duplicate"
select row_count();
drop table t1;

View File

@ -20,3 +20,8 @@ SET @x:= (SELECT h FROM t1 WHERE (a,b,c,d,e,f,g)=(1,2,3,4,5,6,7));
# REMOVE: expr(A) ::= LP(B) expr(X) RP(E). {A.pExpr = X.pExpr; spanSet(&A,&B,&E);}
# REMOVE: expr(A) ::= LP expr(X) COMMA(OP) expr(Y) RP. {spanBinaryExpr(&A,pParse,@OP,&X,&Y);}
# ADD : expr(A) ::= LP exprlist RP. { ... }
insert into t1 values (2, 2) on duplicate key update data= data + 10;
# Problem: warning: [qc_sqlite] Statement was only partially parsed (Sqlite3 error: SQL logic error
# or missing database, near "on": syntax error): "insert into t1 values (2, 2) on duplicate
# key update data= data + 10;"

View File

@ -12,7 +12,7 @@
*/
/**
* @file atomic.c - Implementation of atomic opertions for the gateway
* @file atomic.c - Implementation of atomic operations for MaxScale
*
* @verbatim
* Revision History
@ -23,22 +23,6 @@
* @endverbatim
*/
/**
* Implementation of an atomic add operation for the GCC environment, or the
* X86 processor. If we are working within GNU C then we can use the GCC
* atomic add built in function, which is portable across platforms that
* implement GCC. Otherwise, this function currently supports only X86
* architecture (without further development).
*
* Adds a value to the contents of a location pointed to by the first parameter.
* The add operation is atomic and the return value is the value stored in the
* location prior to the operation. The number that is added may be signed,
* therefore atomic_subtract is merely an atomic add with a negative value.
*
* @param variable Pointer the the variable to add to
* @param value Value to be added
* @return The value of variable before the add occurred
*/
int
atomic_add(int *variable, int value)
{

View File

@ -189,14 +189,6 @@ bool qc_query_has_clause(GWBUF* query)
return classifier->qc_query_has_clause(query);
}
char* qc_get_affected_fields(GWBUF* query)
{
QC_TRACE();
ss_dassert(classifier);
return classifier->qc_get_affected_fields(query);
}
void qc_get_field_info(GWBUF* query, const QC_FIELD_INFO** infos, size_t* n_infos)
{
QC_TRACE();

View File

@ -66,6 +66,9 @@
#include <maxscale/alloc.h>
#include <maxscale/utils.h>
/** Base value for server weights */
#define SERVICE_BASE_SERVER_WEIGHT 1000
/** To be used with configuration type checks */
typedef struct typelib_st
{
@ -96,6 +99,7 @@ static void service_add_qualified_param(SERVICE* svc,
CONFIG_PARAMETER* param);
static void service_internal_restart(void *data);
static void service_queue_check(void *data);
static void service_calculate_weights(SERVICE *service);
/**
* Allocate a new service for the gateway to support
@ -474,6 +478,9 @@ static void free_string_array(char** array)
int
serviceStart(SERVICE *service)
{
/** Calculate the server weights */
service_calculate_weights(service);
int listeners = 0;
char **router_options = copy_string_array(service->routerOptions);
@ -729,6 +736,27 @@ int serviceHasProtocol(SERVICE *service, const char *protocol,
return proto != NULL;
}
/**
* Allocate a new server reference
*
* @param server Server to refer to
* @return Server reference or NULL on error
*/
static SERVER_REF* server_ref_alloc(SERVER *server)
{
SERVER_REF *sref = MXS_MALLOC(sizeof(SERVER_REF));
if (sref)
{
sref->next = NULL;
sref->server = server;
sref->weight = SERVICE_BASE_SERVER_WEIGHT;
sref->connections = 0;
}
return sref;
}
/**
* Add a backend database server to a service
*
@ -738,13 +766,10 @@ int serviceHasProtocol(SERVICE *service, const char *protocol,
void
serviceAddBackend(SERVICE *service, SERVER *server)
{
SERVER_REF *sref = MXS_MALLOC(sizeof(SERVER_REF));
SERVER_REF *sref = server_ref_alloc(server);
if (sref)
{
sref->next = NULL;
sref->server = server;
spinlock_acquire(&service->spin);
if (service->dbref)
{
@ -2027,3 +2052,76 @@ bool service_all_services_have_listeners()
spinlock_release(&service_spin);
return rval;
}
static void service_calculate_weights(SERVICE *service)
{
char *weightby = serviceGetWeightingParameter(service);
if (weightby && service->dbref)
{
/** Service has a weighting parameter and at least one server */
int total = 0;
/** Calculate total weight */
for (SERVER_REF *server = service->dbref; server; server = server->next)
{
server->weight = SERVICE_BASE_SERVER_WEIGHT;
char *param = serverGetParameter(server->server, weightby);
if (param)
{
total += atoi(param);
}
}
if (total == 0)
{
MXS_WARNING("Weighting Parameter for service '%s' will be ignored as "
"no servers have values for the parameter '%s'.",
service->name, weightby);
}
else if (total < 0)
{
MXS_ERROR("Sum of weighting parameter '%s' for service '%s' exceeds "
"maximum value of %d. Weighting will be ignored.",
weightby, service->name, INT_MAX);
}
else
{
/** Calculate the relative weight of the servers */
for (SERVER_REF *server = service->dbref; server; server = server->next)
{
char *param = serverGetParameter(server->server, weightby);
if (param)
{
int wght = atoi(param);
int perc = (wght * SERVICE_BASE_SERVER_WEIGHT) / total;
if (perc == 0)
{
MXS_ERROR("Weighting parameter '%s' with a value of %d for"
" server '%s' rounds down to zero with total weight"
" of %d for service '%s'. No queries will be "
"routed to this server as long as a server with"
" positive weight is available.",
weightby, wght, server->server->unique_name,
total, service->name);
}
else if (perc < 0)
{
MXS_ERROR("Weighting parameter '%s' for server '%s' is too large, "
"maximum value is %d. No weighting will be used for this "
"server.", weightby, server->server->unique_name,
INT_MAX / SERVICE_BASE_SERVER_WEIGHT);
perc = SERVICE_BASE_SERVER_WEIGHT;
}
server->weight = perc;
}
else
{
MXS_WARNING("Server '%s' has no parameter '%s' used for weighting"
" for service '%s'.", server->server->unique_name,
weightby, service->name);
}
}
}
}
}

View File

@ -124,6 +124,8 @@ session_alloc(SERVICE *service, DCB *client_dcb)
MXS_OOM();
return NULL;
}
/** Assign a session id and increase */
session->ses_id = (size_t)atomic_add(&session_id, 1) + 1;
session->ses_is_child = (bool) DCB_IS_CLONE(client_dcb);
session->service = service;
session->client_dcb = client_dcb;
@ -221,8 +223,6 @@ session_alloc(SERVICE *service, DCB *client_dcb)
session->client_dcb->user,
session->client_dcb->remote);
}
/** Assign a session id and increase, insert session into list */
session->ses_id = (size_t)atomic_add(&session_id, 1) + 1;
atomic_add(&service->stats.n_sessions, 1);
atomic_add(&service->stats.n_current, 1);
CHK_SESSION(session);

View File

@ -1701,13 +1701,12 @@ bool rule_matches(FW_INSTANCE* my_instance,
RULELIST *rulelist,
char* query)
{
char *ptr, *where, *msg = NULL;
char *ptr, *msg = NULL;
char emsg[512];
unsigned char* memptr = (unsigned char*) queue->start;
bool is_sql, is_real, matches;
qc_query_op_t optype = QUERY_OP_UNDEFINED;
STRLINK* strln = NULL;
QUERYSPEED* queryspeed = NULL;
QUERYSPEED* rule_qs = NULL;
time_t time_now;
@ -1745,7 +1744,7 @@ bool rule_matches(FW_INSTANCE* my_instance,
case QUERY_OP_UPDATE:
case QUERY_OP_INSERT:
case QUERY_OP_DELETE:
// In these cases, we have to be able to trust what qc_get_affected_fields
// In these cases, we have to be able to trust what qc_get_field_info
// returns. Unless the query was parsed completely, we cannot do that.
msg = create_parse_error(my_instance, "parsed completely", query, &matches);
goto queryresolved;
@ -1817,32 +1816,29 @@ bool rule_matches(FW_INSTANCE* my_instance,
case RT_COLUMN:
if (is_sql && is_real)
{
where = qc_get_affected_fields(queue);
if (where != NULL)
{
char* saveptr;
char* tok = strtok_r(where, " ", &saveptr);
while (tok)
{
strln = (STRLINK*) rulelist->rule->data;
while (strln)
{
if (strcasecmp(tok, strln->value) == 0)
{
matches = true;
const QC_FIELD_INFO* infos;
size_t n_infos;
qc_get_field_info(queue, &infos, &n_infos);
sprintf(emsg, "Permission denied to column '%s'.", strln->value);
MXS_INFO("dbfwfilter: rule '%s': query targets forbidden column: %s",
rulelist->rule->name, strln->value);
msg = MXS_STRDUP_A(emsg);
MXS_FREE(where);
goto queryresolved;
}
strln = strln->next;
for (size_t i = 0; i < n_infos; ++i)
{
const char* tok = infos[i].column;
STRLINK* strln = (STRLINK*) rulelist->rule->data;
while (strln)
{
if (strcasecmp(tok, strln->value) == 0)
{
matches = true;
sprintf(emsg, "Permission denied to column '%s'.", strln->value);
MXS_INFO("dbfwfilter: rule '%s': query targets forbidden column: %s",
rulelist->rule->name, strln->value);
msg = MXS_STRDUP_A(emsg);
goto queryresolved;
}
tok = strtok_r(NULL, ",", &saveptr);
strln = strln->next;
}
MXS_FREE(where);
}
}
break;
@ -1850,23 +1846,22 @@ bool rule_matches(FW_INSTANCE* my_instance,
case RT_WILDCARD:
if (is_sql && is_real)
{
char * strptr;
where = qc_get_affected_fields(queue);
const QC_FIELD_INFO* infos;
size_t n_infos;
qc_get_field_info(queue, &infos, &n_infos);
if (where != NULL)
for (size_t i = 0; i < n_infos; ++i)
{
strptr = where;
const char* column = infos[i].column;
if (strchr(strptr, '*'))
if (strcmp(column, "*") == 0)
{
matches = true;
msg = MXS_STRDUP_A("Usage of wildcard denied.");
MXS_INFO("dbfwfilter: rule '%s': query contains a wildcard.",
rulelist->rule->name);
MXS_FREE(where);
goto queryresolved;
}
MXS_FREE(where);
}
}
break;
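For RT_WILDCARD the semantics tighten in the same way: the old code flagged any `'*'` character anywhere in the affected-fields string, while the new code denies the query only when a resolved column is exactly `"*"`. A hedged sketch, again with a hypothetical `field_info_t` stand-in for `QC_FIELD_INFO`:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical field descriptor standing in for QC_FIELD_INFO. */
typedef struct
{
    const char *column;
} field_info_t;

/* Mirrors the new RT_WILDCARD check: a query is denied only when one of
 * its resolved columns is exactly "*", not merely when a '*' character
 * appears somewhere in a field string. */
static bool uses_wildcard(const field_info_t *infos, size_t n_infos)
{
    for (size_t i = 0; i < n_infos; ++i)
    {
        if (strcmp(infos[i].column, "*") == 0)
        {
            return true;
        }
    }

    return false;
}
```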

View File

@@ -203,11 +203,8 @@ static ROUTER *
createInstance(SERVICE *service, char **options)
{
ROUTER_INSTANCE *inst;
SERVER *server;
SERVER_REF *sref;
int i, n;
BACKEND *backend;
char *weightby;
if ((inst = MXS_CALLOC(1, sizeof(ROUTER_INSTANCE))) == NULL)
{
@@ -243,77 +240,11 @@ createInstance(SERVICE *service, char **options)
}
inst->servers[n]->server = sref->server;
inst->servers[n]->current_connection_count = 0;
inst->servers[n]->weight = 1000;
inst->servers[n]->weight = sref->weight;
n++;
}
inst->servers[n] = NULL;
if ((weightby = serviceGetWeightingParameter(service)) != NULL)
{
int total = 0;
for (int n = 0; inst->servers[n]; n++)
{
BACKEND *backend = inst->servers[n];
char *param = serverGetParameter(backend->server, weightby);
if (param)
{
total += atoi(param);
}
}
if (total == 0)
{
MXS_WARNING("Weighting Parameter for service '%s' "
"will be ignored as no servers have values "
"for the parameter '%s'.",
service->name, weightby);
}
else if (total < 0)
{
MXS_ERROR("Sum of weighting parameter '%s' for service '%s' exceeds "
"maximum value of %d. Weighting will be ignored.",
weightby, service->name, INT_MAX);
}
else
{
for (int n = 0; inst->servers[n]; n++)
{
BACKEND *backend = inst->servers[n];
char *param = serverGetParameter(backend->server, weightby);
if (param)
{
int wght = atoi(param);
int perc = (wght * 1000) / total;
if (perc == 0)
{
perc = 1;
MXS_ERROR("Weighting parameter '%s' with a value of %d for"
" server '%s' rounds down to zero with total weight"
" of %d for service '%s'. No queries will be "
"routed to this server.", weightby, wght,
backend->server->unique_name, total,
service->name);
}
else if (perc < 0)
{
MXS_ERROR("Weighting parameter '%s' for server '%s' is too large, "
"maximum value is %d. No weighting will be used for this server.",
weightby, backend->server->unique_name, INT_MAX / 1000);
perc = 1000;
}
backend->weight = perc;
}
else
{
MXS_WARNING("Server '%s' has no parameter '%s' used for weighting"
" for service '%s'.", backend->server->unique_name,
weightby, service->name);
}
}
}
}
/*
* Process the options
*/
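Both routers drop their local copies of this weighting calculation and instead read `sref->weight`, which the core now computes with the same formula: each server's raw parameter becomes a per-mille share of the service total, rounded up to at least 1, with a fallback to the full weight of 1000 when the multiplication overflows. A sketch of that normalization (`normalize_weight` is a hypothetical helper, not a MaxScale API):

```c
/* Convert a raw weighting parameter into a per-mille share of the total,
 * matching the logic removed from createInstance() above. */
static int normalize_weight(int wght, int total)
{
    int perc = (wght * 1000) / total;

    if (perc == 0)
    {
        /* A non-zero weight never rounds down to zero; the server must
         * remain eligible for some queries. */
        perc = 1;
    }
    else if (perc < 0)
    {
        /* wght * 1000 overflowed: disable weighting for this server by
         * giving it the full weight. */
        perc = 1000;
    }

    return perc;
}
```

For example, with a total of 1000 a server whose parameter is 300 receives a weight of 300 (30%), and a server whose share rounds to zero still gets 1.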

View File

@@ -183,12 +183,7 @@ ROUTER_OBJECT *GetModuleObject()
static ROUTER *createInstance(SERVICE *service, char **options)
{
ROUTER_INSTANCE *router;
SERVER *server;
SERVER_REF *sref;
int nservers;
int i;
CONFIG_PARAMETER *param;
char *weightby;
if ((router = MXS_CALLOC(1, sizeof(ROUTER_INSTANCE))) == NULL)
{
@@ -197,131 +192,11 @@ static ROUTER *createInstance(SERVICE *service, char **options)
router->service = service;
spinlock_init(&router->lock);
/** Calculate number of servers */
sref = service->dbref;
nservers = 0;
while (sref != NULL)
{
nservers++;
sref = sref->next;
}
router->servers = (BACKEND **)MXS_CALLOC(nservers + 1, sizeof(BACKEND *));
if (router->servers == NULL)
{
free_rwsplit_instance(router);
return NULL;
}
/**
* Create an array of the backend servers in the router structure to
* maintain a count of the number of connections to each
* backend server.
*/
sref = service->dbref;
nservers = 0;
while (sref != NULL)
{
if ((router->servers[nservers] = MXS_MALLOC(sizeof(BACKEND))) == NULL)
{
free_rwsplit_instance(router);
return NULL;
}
router->servers[nservers]->backend_server = sref->server;
router->servers[nservers]->backend_conn_count = 0;
router->servers[nservers]->be_valid = false;
router->servers[nservers]->weight = 1000;
#if defined(SS_DEBUG)
router->servers[nservers]->be_chk_top = CHK_NUM_BACKEND;
router->servers[nservers]->be_chk_tail = CHK_NUM_BACKEND;
#endif
nservers += 1;
sref = sref->next;
}
router->servers[nservers] = NULL;
/*
 * Until we know otherwise, assume we have some available slaves.
*/
router->available_slaves = true;
/*
* If server weighting has been defined calculate the percentage
* of load that will be sent to each server. This is only used for
* calculating the least connections, either globally or within a
* service, or the number of current operations on a server.
*/
if ((weightby = serviceGetWeightingParameter(service)) != NULL)
{
int total = 0;
for (int n = 0; router->servers[n]; n++)
{
BACKEND *backend = router->servers[n];
char *param = serverGetParameter(backend->backend_server, weightby);
if (param)
{
total += atoi(param);
}
}
if (total == 0)
{
MXS_WARNING("Weighting Parameter for service '%s' "
"will be ignored as no servers have values "
"for the parameter '%s'.",
service->name, weightby);
}
else if (total < 0)
{
MXS_ERROR("Sum of weighting parameter '%s' for service '%s' exceeds "
"maximum value of %d. Weighting will be ignored.",
weightby, service->name, INT_MAX);
}
else
{
for (int n = 0; router->servers[n]; n++)
{
BACKEND *backend = router->servers[n];
char *param = serverGetParameter(backend->backend_server, weightby);
if (param)
{
int wght = atoi(param);
int perc = (wght * 1000) / total;
if (perc == 0)
{
MXS_ERROR("Weighting parameter '%s' with a value of %d for"
" server '%s' rounds down to zero with total weight"
" of %d for service '%s'. No queries will be "
"routed to this server as long as a server with"
" positive weight is available.",
weightby, wght, backend->backend_server->unique_name,
total, service->name);
}
else if (perc < 0)
{
MXS_ERROR("Weighting parameter '%s' for server '%s' is too large, "
"maximum value is %d. No weighting will be used for this "
"server.",
weightby, backend->backend_server->unique_name,
INT_MAX / 1000);
perc = 1000;
}
backend->weight = perc;
}
else
{
MXS_WARNING("Server '%s' has no parameter '%s' used for weighting"
" for service '%s'.",
backend->backend_server->unique_name, weightby,
service->name);
}
}
}
}
/** Enable strict multistatement handling by default */
router->rwsplit_config.rw_strict_multi_stmt = true;
@@ -343,6 +218,13 @@ static ROUTER *createInstance(SERVICE *service, char **options)
router->rwsplit_config.rw_max_sescmd_history_size = 0;
}
int nservers = 0;
for (SERVER_REF *ref = service->dbref; ref; ref = ref->next)
{
nservers++;
}
/**
* Set default value for max_slave_connections as 100%. This way
* LEAST_CURRENT_OPERATIONS allows us to balance evenly across all the
@@ -416,8 +298,7 @@ static ROUTER *createInstance(SERVICE *service, char **options)
*/
static void *newSession(ROUTER *router_inst, SESSION *session)
{
backend_ref_t
*backend_ref; /*< array of backend references (DCB,BACKEND,cursor) */
backend_ref_t *backend_ref; /*< array of backend references (DCB,BACKEND,cursor) */
backend_ref_t *master_ref = NULL; /*< pointer to selected master */
ROUTER_CLIENT_SES *client_rses = NULL;
ROUTER_INSTANCE *router = (ROUTER_INSTANCE *)router_inst;
@@ -492,7 +373,8 @@ static void *newSession(ROUTER *router_inst, SESSION *session)
* Initialize backend references with BACKEND ptr.
* Initialize session command cursors for each backend reference.
*/
for (i = 0; i < router_nservers; i++)
i = 0;
for (SERVER_REF *sref = router->service->dbref; sref; sref = sref->next)
{
#if defined(SS_DEBUG)
backend_ref[i].bref_chk_top = CHK_NUM_BACKEND_REF;
@@ -501,13 +383,14 @@ static void *newSession(ROUTER *router_inst, SESSION *session)
backend_ref[i].bref_sescmd_cur.scmd_cur_chk_tail = CHK_NUM_SESCMD_CUR;
#endif
backend_ref[i].bref_state = 0;
backend_ref[i].bref_backend = router->servers[i];
backend_ref[i].ref = sref;
/** store pointers to sescmd list to both cursors */
backend_ref[i].bref_sescmd_cur.scmd_cur_rses = client_rses;
backend_ref[i].bref_sescmd_cur.scmd_cur_active = false;
backend_ref[i].bref_sescmd_cur.scmd_cur_ptr_property =
&client_rses->rses_properties[RSES_PROP_TYPE_SESCMD];
backend_ref[i].bref_sescmd_cur.scmd_cur_cmd = NULL;
i++;
}
max_nslaves = rses_get_max_slavecount(client_rses, router_nservers);
max_slave_rlag = rses_get_max_replication_lag(client_rses);
@@ -661,7 +544,7 @@ static void closeSession(ROUTER *instance, void *router_session)
*/
dcb_close(dcb);
/** decrease server current connection counters */
atomic_add(&bref->bref_backend->backend_conn_count, -1);
atomic_add(&bref->ref->connections, -1);
}
else
{
@@ -804,7 +687,6 @@ static void diagnostics(ROUTER *instance, DCB *dcb)
ROUTER_CLIENT_SES *router_cli_ses;
ROUTER_INSTANCE *router = (ROUTER_INSTANCE *)instance;
int i = 0;
BACKEND *backend;
char *weightby;
spinlock_acquire(&router->lock);
@@ -845,13 +727,12 @@ static void diagnostics(ROUTER *instance, DCB *dcb)
dcb_printf(dcb, "\t\tServer Target %% Connections "
"Operations\n");
dcb_printf(dcb, "\t\t Global Router\n");
for (i = 0; router->servers[i]; i++)
for (SERVER_REF *ref = router->service->dbref; ref; ref = ref->next)
{
backend = router->servers[i];
dcb_printf(dcb, "\t\t%-20s %3.1f%% %-6d %-6d %d\n",
backend->backend_server->unique_name, (float)backend->weight / 10,
backend->backend_server->stats.n_current, backend->backend_conn_count,
backend->backend_server->stats.n_current_ops);
ref->server->unique_name, (float)ref->weight / 10,
ref->server->stats.n_current, ref->connections,
ref->server->stats.n_current_ops);
}
}
}
@@ -998,17 +879,15 @@ static void clientReply(ROUTER *instance, void *router_session, GWBUF *writebuf,
{
bool succp;
MXS_INFO("Backend %s:%d processed reply and starts to execute "
"active cursor.", bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port);
MXS_INFO("Backend %s:%d processed reply and starts to execute active cursor.",
bref->ref->server->name, bref->ref->server->port);
succp = execute_sescmd_in_backend(bref);
if (!succp)
{
MXS_INFO("Backend %s:%d failed to execute session command.",
bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port);
bref->ref->server->name, bref->ref->server->port);
}
}
else if (bref->bref_pending_cmd != NULL) /*< non-sescmd is waiting to be routed */
@@ -1167,13 +1046,13 @@ void bref_clear_state(backend_ref_t *bref, bref_state_t state)
else
{
/** Decrease global operation count */
prev2 = atomic_add(&bref->bref_backend->backend_server->stats.n_current_ops, -1);
prev2 = atomic_add(&bref->ref->server->stats.n_current_ops, -1);
ss_dassert(prev2 > 0);
if (prev2 <= 0)
{
MXS_ERROR("[%s] Error: negative current operation count in backend %s:%u",
__FUNCTION__, bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port);
__FUNCTION__, bref->ref->server->name,
bref->ref->server->port);
}
}
}
@@ -1212,19 +1091,16 @@ void bref_set_state(backend_ref_t *bref, bref_state_t state)
if (prev1 < 0)
{
MXS_ERROR("[%s] Error: negative number of connections waiting for "
"results in backend %s:%u",
__FUNCTION__, bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port);
"results in backend %s:%u", __FUNCTION__,
bref->ref->server->name, bref->ref->server->port);
}
/** Increase global operation count */
prev2 =
atomic_add(&bref->bref_backend->backend_server->stats.n_current_ops, 1);
prev2 = atomic_add(&bref->ref->server->stats.n_current_ops, 1);
ss_dassert(prev2 >= 0);
if (prev2 < 0)
{
MXS_ERROR("[%s] Error: negative current operation count in backend %s:%u",
__FUNCTION__, bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port);
__FUNCTION__, bref->ref->server->name, bref->ref->server->port);
}
}
@@ -1391,7 +1267,7 @@ int router_handle_state_switch(DCB *dcb, DCB_REASON reason, void *data)
bref = (backend_ref_t *)data;
CHK_BACKEND_REF(bref);
srv = bref->bref_backend->backend_server;
srv = bref->ref->server;
if (SERVER_IS_RUNNING(srv) && SERVER_IS_IN_CLUSTER(srv))
{
@@ -1606,9 +1482,9 @@ static void handleError(ROUTER *instance, void *router_session,
* handled so that session could continue.
*/
if (rses->rses_master_ref && rses->rses_master_ref->bref_dcb == problem_dcb &&
!SERVER_IS_MASTER(rses->rses_master_ref->bref_backend->backend_server))
!SERVER_IS_MASTER(rses->rses_master_ref->ref->server))
{
SERVER *srv = rses->rses_master_ref->bref_backend->backend_server;
SERVER *srv = rses->rses_master_ref->ref->server;
backend_ref_t *bref;
bref = get_bref_from_dcb(rses, problem_dcb);
if (bref != NULL)
@@ -1869,9 +1745,8 @@ return_succp:
static int router_get_servercount(ROUTER_INSTANCE *inst)
{
int router_nservers = 0;
BACKEND **b = inst->servers;
/** count servers */
while (*(b++) != NULL)
for (SERVER_REF *ref = inst->service->dbref; ref; ref = ref->next)
{
router_nservers++;
}
@@ -2149,14 +2024,6 @@ static void free_rwsplit_instance(ROUTER_INSTANCE *router)
{
if (router)
{
if (router->servers)
{
for (int i = 0; router->servers[i]; i++)
{
MXS_FREE(router->servers[i]);
}
}
MXS_FREE(router->servers);
MXS_FREE(router);
}
}

View File

@@ -199,27 +199,6 @@ typedef struct sescmd_cursor_st
#endif
} sescmd_cursor_t;
/**
* Internal structure used to define the set of backend servers we are routing
* connections to. This provides the storage for routing module specific data
* that is required for each of the backend servers.
*
* Owned by router_instance, referenced by each routing session.
*/
typedef struct backend_st
{
#if defined(SS_DEBUG)
skygw_chk_t be_chk_top;
#endif
SERVER* backend_server; /*< The server itself */
int backend_conn_count; /*< Number of connections to the server */
bool be_valid; /*< Valid when belongs to the router's configuration */
int weight; /*< Desired weighting on the load. Expressed in .1% increments */
#if defined(SS_DEBUG)
skygw_chk_t be_chk_tail;
#endif
} BACKEND;
/**
* Reference to BACKEND.
*
@@ -230,7 +209,7 @@ typedef struct backend_ref_st
#if defined(SS_DEBUG)
skygw_chk_t bref_chk_top;
#endif
BACKEND* bref_backend;
SERVER_REF* ref;
DCB* bref_dcb;
bref_state_t bref_state;
int bref_num_result_wait;
@@ -348,8 +327,6 @@ typedef struct router_instance
SERVICE* service; /*< Pointer to service */
ROUTER_CLIENT_SES* connections; /*< List of client connections */
SPINLOCK lock; /*< Lock for the instance data */
BACKEND** servers; /*< Backend servers */
BACKEND* master; /*< NULL or pointer */
rwsplit_config_t rwsplit_config; /*< expanded config info from SERVICE */
int rwsplit_version; /*< version number for router's config */
ROUTER_STATS stats; /*< Statistics for this router */

View File

@@ -404,7 +404,7 @@ void print_error_packet(ROUTER_CLIENT_SES *rses, GWBUF *buf, DCB *dcb)
{
if (bref[i].bref_dcb == dcb)
{
srv = bref[i].bref_backend->backend_server;
srv = bref[i].ref->server;
}
}
ss_dassert(srv != NULL);
@@ -453,8 +453,8 @@ void check_session_command_reply(GWBUF *writebuf, sescmd_cursor_t *scur, backend
ss_dassert(len + 4 == GWBUF_LENGTH(scur->scmd_cur_cmd->my_sescmd_buf));
MXS_ERROR("Failed to execute session command in %s:%d. Error was: %s %s",
bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port, err, replystr);
bref->ref->server->name,
bref->ref->server->port, err, replystr);
MXS_FREE(err);
MXS_FREE(replystr);
}

View File

@@ -253,10 +253,10 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
BREF_IS_IN_USE((&backend_ref[i])))
{
MXS_INFO("Route query to %s \t%s:%d%s",
(SERVER_IS_MASTER(backend_ref[i].bref_backend->backend_server)
(SERVER_IS_MASTER(backend_ref[i].ref->server)
? "master" : "slave"),
backend_ref[i].bref_backend->backend_server->name,
backend_ref[i].bref_backend->backend_server->port,
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port,
(i + 1 == router_cli_ses->rses_nbackends ? " <" : " "));
}
@@ -368,10 +368,10 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
if (MXS_LOG_PRIORITY_IS_ENABLED(LOG_INFO))
{
MXS_INFO("Route query to %s \t%s:%d%s",
(SERVER_IS_MASTER(backend_ref[i].bref_backend->backend_server)
(SERVER_IS_MASTER(backend_ref[i].ref->server)
? "master" : "slave"),
backend_ref[i].bref_backend->backend_server->name,
backend_ref[i].bref_backend->backend_server->port,
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port,
(i + 1 == router_cli_ses->rses_nbackends ? " <" : " "));
}
@@ -391,8 +391,8 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
{
nsucc += 1;
MXS_INFO("Backend %s:%d already executing sescmd.",
backend_ref[i].bref_backend->backend_server->name,
backend_ref[i].bref_backend->backend_server->port);
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port);
}
else
{
@@ -403,8 +403,8 @@ bool route_session_write(ROUTER_CLIENT_SES *router_cli_ses,
else
{
MXS_ERROR("Failed to execute session command in %s:%d",
backend_ref[i].bref_backend->backend_server->name,
backend_ref[i].bref_backend->backend_server->port);
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port);
}
}
}
@@ -533,9 +533,9 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
for (i = 0; i < rses->rses_nbackends; i++)
{
BACKEND *b = backend_ref[i].bref_backend;
SERVER_REF *b = backend_ref[i].ref;
SERVER server;
server.status = backend_ref[i].bref_backend->backend_server->status;
server.status = b->server->status;
/**
* To become chosen:
* backend must be in use, name must match,
@@ -543,7 +543,7 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
* server, or master.
*/
if (BREF_IS_IN_USE((&backend_ref[i])) &&
(strncasecmp(name, b->backend_server->unique_name, PATH_MAX) == 0) &&
(strncasecmp(name, b->server->unique_name, PATH_MAX) == 0) &&
(SERVER_IS_SLAVE(&server) || SERVER_IS_RELAY_SERVER(&server) ||
SERVER_IS_MASTER(&server)))
{
@@ -569,10 +569,10 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
for (i = 0; i < rses->rses_nbackends; i++)
{
BACKEND *b = (&backend_ref[i])->bref_backend;
SERVER_REF *b = backend_ref[i].ref;
SERVER server;
SERVER candidate;
server.status = backend_ref[i].bref_backend->backend_server->status;
server.status = b->server->status;
/**
* Unused backend or backend which is not master nor
* slave can't be used
@@ -596,7 +596,7 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
{
/** found master */
candidate_bref = &backend_ref[i];
candidate.status = candidate_bref->bref_backend->backend_server->status;
candidate.status = candidate_bref->ref->server->status;
succp = true;
}
/**
@@ -605,12 +605,12 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
* maximum allowed replication lag.
*/
else if (max_rlag == MAX_RLAG_UNDEFINED ||
(b->backend_server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->backend_server->rlag <= max_rlag))
(b->server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->server->rlag <= max_rlag))
{
/** found slave */
candidate_bref = &backend_ref[i];
candidate.status = candidate_bref->bref_backend->backend_server->status;
candidate.status = candidate_bref->ref->server->status;
succp = true;
}
}
@@ -620,13 +620,13 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
*/
else if (SERVER_IS_MASTER(&candidate) && SERVER_IS_SLAVE(&server) &&
(max_rlag == MAX_RLAG_UNDEFINED ||
(b->backend_server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->backend_server->rlag <= max_rlag)) &&
(b->server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->server->rlag <= max_rlag)) &&
!rses->rses_config.rw_master_reads)
{
/** found slave */
candidate_bref = &backend_ref[i];
candidate.status = candidate_bref->bref_backend->backend_server->status;
candidate.status = candidate_bref->ref->server->status;
succp = true;
}
/**
@@ -637,21 +637,17 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
else if (SERVER_IS_SLAVE(&server))
{
if (max_rlag == MAX_RLAG_UNDEFINED ||
(b->backend_server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->backend_server->rlag <= max_rlag))
(b->server->rlag != MAX_RLAG_NOT_AVAILABLE &&
b->server->rlag <= max_rlag))
{
candidate_bref =
check_candidate_bref(candidate_bref, &backend_ref[i],
rses->rses_config.rw_slave_select_criteria);
candidate.status =
candidate_bref->bref_backend->backend_server->status;
candidate_bref = check_candidate_bref(candidate_bref, &backend_ref[i],
rses->rses_config.rw_slave_select_criteria);
candidate.status = candidate_bref->ref->server->status;
}
else
{
MXS_INFO("Server %s:%d is too much behind the "
"master, %d s. and can't be chosen.",
b->backend_server->name, b->backend_server->port,
b->backend_server->rlag);
MXS_INFO("Server %s:%d is too much behind the master, %d s. and can't be chosen.",
b->server->name, b->server->port, b->server->rlag);
}
}
} /*< for */
@@ -675,7 +671,7 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
* so copying it locally will make possible error messages
* easier to understand */
SERVER server;
server.status = master_bref->bref_backend->backend_server->status;
server.status = master_bref->ref->server->status;
if (BREF_IS_IN_USE(master_bref) && SERVER_IS_MASTER(&server))
{
*p_dcb = master_bref->bref_dcb;
@@ -687,8 +683,8 @@ bool rwsplit_get_dcb(DCB **p_dcb, ROUTER_CLIENT_SES *rses, backend_type_t btype,
{
MXS_ERROR("Server at %s:%d should be master but "
"is %s instead and can't be chosen to master.",
master_bref->bref_backend->backend_server->name,
master_bref->bref_backend->backend_server->port,
master_bref->ref->server->name,
master_bref->ref->server->port,
STRSRVSTATUS(&server));
succp = false;
}
@@ -1191,9 +1187,8 @@ handle_got_target(ROUTER_INSTANCE *inst, ROUTER_CLIENT_SES *rses,
ss_dassert(target_dcb != NULL);
MXS_INFO("Route query to %s \t%s:%d <",
(SERVER_IS_MASTER(bref->bref_backend->backend_server) ? "master"
: "slave"), bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port);
(SERVER_IS_MASTER(bref->ref->server) ? "master"
: "slave"), bref->ref->server->name, bref->ref->server->port);
/**
* Store current stmt if execution of previous session command
* haven't completed yet.
@@ -1372,14 +1367,13 @@ static backend_ref_t *get_root_master_bref(ROUTER_CLIENT_SES *rses)
if (bref == rses->rses_master_ref)
{
/** Store master state for better error reporting */
master.status = bref->bref_backend->backend_server->status;
master.status = bref->ref->server->status;
}
if (bref->bref_backend->backend_server->status & SERVER_MASTER)
if (SERVER_IS_MASTER(bref->ref->server))
{
if (candidate_bref == NULL ||
(bref->bref_backend->backend_server->depth <
candidate_bref->bref_backend->backend_server->depth))
(bref->ref->server->depth < candidate_bref->ref->server->depth))
{
candidate_bref = bref;
}

View File

@@ -38,7 +38,7 @@ static bool connect_server(backend_ref_t *bref, SESSION *session, bool execute_h
static void log_server_connections(select_criteria_t select_criteria,
backend_ref_t *backend_ref, int router_nservers);
static BACKEND *get_root_master(backend_ref_t *servers, int router_nservers);
static SERVER_REF *get_root_master(backend_ref_t *servers, int router_nservers);
static int bref_cmp_global_conn(const void *bref1, const void *bref2);
@@ -103,10 +103,10 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
}
/* get the root Master */
BACKEND *master_host = get_root_master(backend_ref, router_nservers);
SERVER_REF *master_host = get_root_master(backend_ref, router_nservers);
if (router->rwsplit_config.rw_master_failure_mode == RW_FAIL_INSTANTLY &&
(master_host == NULL || SERVER_IS_DOWN(master_host->backend_server)))
(master_host == NULL || SERVER_IS_DOWN(master_host->server)))
{
MXS_ERROR("Couldn't find suitable Master from %d candidates.", router_nservers);
return false;
@@ -145,7 +145,7 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
for (int i = 0; i < router_nservers &&
(slaves_connected < max_nslaves || !master_connected); i++)
{
SERVER *serv = backend_ref[i].bref_backend->backend_server;
SERVER *serv = backend_ref[i].ref->server;
if (!BREF_HAS_FAILED(&backend_ref[i]) && SERVER_IS_RUNNING(serv))
{
@@ -155,7 +155,7 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
(serv->rlag != MAX_RLAG_NOT_AVAILABLE &&
serv->rlag <= max_slave_rlag)) &&
(SERVER_IS_SLAVE(serv) || SERVER_IS_RELAY_SERVER(serv)) &&
(master_host == NULL || (serv != master_host->backend_server)))
(master_host == NULL || (serv != master_host->server)))
{
slaves_found += 1;
@@ -166,7 +166,7 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
}
}
/* take the master_host for master */
else if (master_host && (serv == master_host->backend_server))
else if (master_host && (serv == master_host->server))
{
/** p_master_ref must be assigned with this backend_ref pointer
* because its original value may have been lost when backend
@@ -205,9 +205,9 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
if (BREF_IS_IN_USE((&backend_ref[i])))
{
MXS_INFO("Selected %s in \t%s:%d",
STRSRVSTATUS(backend_ref[i].bref_backend->backend_server),
backend_ref[i].bref_backend->backend_server->name,
backend_ref[i].bref_backend->backend_server->port);
STRSRVSTATUS(backend_ref[i].ref->server),
backend_ref[i].ref->server->name,
backend_ref[i].ref->server->port);
}
} /* for */
}
@@ -226,12 +226,12 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
{
if (BREF_IS_IN_USE((&backend_ref[i])))
{
ss_dassert(backend_ref[i].bref_backend->backend_conn_count > 0);
ss_dassert(backend_ref[i].ref->connections > 0);
/** disconnect opened connections */
bref_clear_state(&backend_ref[i], BREF_IN_USE);
/** Decrease backend's connection counter. */
atomic_add(&backend_ref[i].bref_backend->backend_conn_count, -1);
atomic_add(&backend_ref[i].ref->connections, -1);
dcb_close(backend_ref[i].bref_dcb);
}
}
@@ -243,13 +243,12 @@ bool select_connect_backend_servers(backend_ref_t **p_master_ref,
/** Compare number of connections from this router in backend servers */
static int bref_cmp_router_conn(const void *bref1, const void *bref2)
{
BACKEND *b1 = ((backend_ref_t *)bref1)->bref_backend;
BACKEND *b2 = ((backend_ref_t *)bref2)->bref_backend;
SERVER_REF *b1 = ((backend_ref_t *)bref1)->ref;
SERVER_REF *b2 = ((backend_ref_t *)bref2)->ref;
if (b1->weight == 0 && b2->weight == 0)
{
return b1->backend_server->stats.n_current -
b2->backend_server->stats.n_current;
return b1->connections - b2->connections;
}
else if (b1->weight == 0)
{
@@ -260,20 +259,20 @@ static int bref_cmp_router_conn(const void *bref1, const void *bref2)
return -1;
}
return ((1000 + 1000 * b1->backend_conn_count) / b1->weight) -
((1000 + 1000 * b2->backend_conn_count) / b2->weight);
return ((1000 + 1000 * b1->connections) / b1->weight) -
((1000 + 1000 * b2->connections) / b2->weight);
}
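The comparator above scores each backend as `(1000 + 1000 * connections) / weight`, so a server with twice the weight tolerates roughly twice the connections before sorting later. A self-contained sketch with a hypothetical `backend_t` in place of `backend_ref_t`/`SERVER_REF`:

```c
/* Hypothetical reduced backend descriptor. */
typedef struct
{
    int connections; /* router-local connection count */
    int weight;      /* per-mille weight, 0 = unweighted */
} backend_t;

/* qsort-style comparator mirroring bref_cmp_router_conn: two unweighted
 * backends compare on raw connection counts, a zero weight sorts last,
 * otherwise the weighted scores are compared. */
static int cmp_router_conn(const void *a, const void *b)
{
    const backend_t *b1 = (const backend_t *) a;
    const backend_t *b2 = (const backend_t *) b;

    if (b1->weight == 0 && b2->weight == 0)
    {
        return b1->connections - b2->connections;
    }
    else if (b1->weight == 0)
    {
        return 1;
    }
    else if (b2->weight == 0)
    {
        return -1;
    }

    return ((1000 + 1000 * b1->connections) / b1->weight) -
           ((1000 + 1000 * b2->connections) / b2->weight);
}
```

With weights 500 and 100, a backend holding 10 connections scores (1000 + 10000) / 500 = 22 and still sorts before one holding 2 connections at (1000 + 2000) / 100 = 30.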
/** Compare number of global connections in backend servers */
static int bref_cmp_global_conn(const void *bref1, const void *bref2)
{
BACKEND *b1 = ((backend_ref_t *)bref1)->bref_backend;
BACKEND *b2 = ((backend_ref_t *)bref2)->bref_backend;
SERVER_REF *b1 = ((backend_ref_t *)bref1)->ref;
SERVER_REF *b2 = ((backend_ref_t *)bref2)->ref;
if (b1->weight == 0 && b2->weight == 0)
{
return b1->backend_server->stats.n_current -
b2->backend_server->stats.n_current;
return b1->server->stats.n_current -
b2->server->stats.n_current;
}
else if (b1->weight == 0)
{
@@ -284,32 +283,29 @@ static int bref_cmp_global_conn(const void *bref1, const void *bref2)
return -1;
}
return ((1000 + 1000 * b1->backend_server->stats.n_current) / b1->weight) -
((1000 + 1000 * b2->backend_server->stats.n_current) / b2->weight);
return ((1000 + 1000 * b1->server->stats.n_current) / b1->weight) -
((1000 + 1000 * b2->server->stats.n_current) / b2->weight);
}
/** Compare replication lag between backend servers */
static int bref_cmp_behind_master(const void *bref1, const void *bref2)
{
BACKEND *b1 = ((backend_ref_t *)bref1)->bref_backend;
BACKEND *b2 = ((backend_ref_t *)bref2)->bref_backend;
SERVER_REF *b1 = ((backend_ref_t *)bref1)->ref;
SERVER_REF *b2 = ((backend_ref_t *)bref2)->ref;
return ((b1->backend_server->rlag < b2->backend_server->rlag) ? -1
: ((b1->backend_server->rlag > b2->backend_server->rlag) ? 1 : 0));
return b1->server->rlag - b2->server->rlag;
}
/** Compare number of current operations in backend servers */
static int bref_cmp_current_load(const void *bref1, const void *bref2)
{
SERVER *s1 = ((backend_ref_t *)bref1)->bref_backend->backend_server;
SERVER *s2 = ((backend_ref_t *)bref2)->bref_backend->backend_server;
BACKEND *b1 = ((backend_ref_t *)bref1)->bref_backend;
BACKEND *b2 = ((backend_ref_t *)bref2)->bref_backend;
SERVER_REF *b1 = ((backend_ref_t *)bref1)->ref;
SERVER_REF *b2 = ((backend_ref_t *)bref2)->ref;
if (b1->weight == 0 && b2->weight == 0)
{
return b1->backend_server->stats.n_current -
b2->backend_server->stats.n_current;
// TODO: Fix this so that operations are used instead of connections
return b1->server->stats.n_current - b2->server->stats.n_current;
}
else if (b1->weight == 0)
{
@@ -320,8 +316,8 @@ static int bref_cmp_current_load(const void *bref1, const void *bref2)
return -1;
}
return ((1000 * s1->stats.n_current_ops) - b1->weight) -
((1000 * s2->stats.n_current_ops) - b2->weight);
return ((1000 * b1->server->stats.n_current_ops) - b1->weight) -
((1000 * b2->server->stats.n_current_ops) - b2->weight);
}
/**
@@ -338,7 +334,7 @@ static int bref_cmp_current_load(const void *bref1, const void *bref2)
*/
static bool connect_server(backend_ref_t *bref, SESSION *session, bool execute_history)
{
SERVER *serv = bref->bref_backend->backend_server;
SERVER *serv = bref->ref->server;
bool rval = false;
bref->bref_dcb = dcb_connect(serv, session, serv->protocol);
@@ -354,16 +350,16 @@ static bool connect_server(backend_ref_t *bref, SESSION *session, bool execute_h
&router_handle_state_switch, (void *) bref);
bref->bref_state = 0;
bref_set_state(bref, BREF_IN_USE);
atomic_add(&bref->bref_backend->backend_conn_count, 1);
atomic_add(&bref->ref->connections, 1);
rval = true;
}
else
{
MXS_ERROR("Failed to execute session command in %s (%s:%d). See earlier "
"errors for more details.",
bref->bref_backend->backend_server->unique_name,
bref->bref_backend->backend_server->name,
bref->bref_backend->backend_server->port);
bref->ref->server->unique_name,
bref->ref->server->name,
bref->ref->server->port);
dcb_close(bref->bref_dcb);
bref->bref_dcb = NULL;
}
@@ -398,33 +394,33 @@ static void log_server_connections(select_criteria_t select_criteria,
for (int i = 0; i < router_nservers; i++)
{
BACKEND *b = backend_ref[i].bref_backend;
SERVER_REF *b = backend_ref[i].ref;
switch (select_criteria)
{
case LEAST_GLOBAL_CONNECTIONS:
MXS_INFO("MaxScale connections : %d in \t%s:%d %s",
b->backend_server->stats.n_current, b->backend_server->name,
b->backend_server->port, STRSRVSTATUS(b->backend_server));
b->server->stats.n_current, b->server->name,
b->server->port, STRSRVSTATUS(b->server));
break;
case LEAST_ROUTER_CONNECTIONS:
MXS_INFO("RWSplit connections : %d in \t%s:%d %s",
b->backend_conn_count, b->backend_server->name,
b->backend_server->port, STRSRVSTATUS(b->backend_server));
b->connections, b->server->name,
b->server->port, STRSRVSTATUS(b->server));
break;
case LEAST_CURRENT_OPERATIONS:
MXS_INFO("current operations : %d in \t%s:%d %s",
b->backend_server->stats.n_current_ops,
b->backend_server->name, b->backend_server->port,
STRSRVSTATUS(b->backend_server));
b->server->stats.n_current_ops,
b->server->name, b->server->port,
STRSRVSTATUS(b->server));
break;
case LEAST_BEHIND_MASTER:
MXS_INFO("replication lag : %d in \t%s:%d %s",
b->backend_server->rlag, b->backend_server->name,
b->backend_server->port, STRSRVSTATUS(b->backend_server));
b->server->rlag, b->server->name,
b->server->port, STRSRVSTATUS(b->server));
default:
break;
}
@@ -445,27 +441,26 @@ static void log_server_connections(select_criteria_t select_criteria,
* @return The Master found
*
*/
static BACKEND *get_root_master(backend_ref_t *servers, int router_nservers)
static SERVER_REF *get_root_master(backend_ref_t *servers, int router_nservers)
{
int i = 0;
BACKEND *master_host = NULL;
SERVER_REF *master_host = NULL;
for (i = 0; i < router_nservers; i++)
{
BACKEND *b;
if (servers[i].bref_backend == NULL)
if (servers[i].ref == NULL)
{
/** This should not happen */
ss_dassert(false);
continue;
}
b = servers[i].bref_backend;
SERVER_REF *b = servers[i].ref;
if ((b->backend_server->status & (SERVER_MASTER | SERVER_MAINT)) ==
SERVER_MASTER)
if (SERVER_IS_MASTER(b->server))
{
if (master_host == NULL ||
(b->backend_server->depth < master_host->backend_server->depth))
(b->server->depth < master_host->server->depth))
{
master_host = b;
}

View File

@ -173,7 +173,7 @@ GWBUF *sescmd_cursor_process_replies(GWBUF *replybuf,
         {
             MXS_ERROR("Slave server '%s': response differs from master's response. "
                       "Closing connection due to inconsistent session state.",
-                      bref->bref_backend->backend_server->unique_name);
+                      bref->ref->server->unique_name);
             sescmd_cursor_set_active(scur, false);
             bref_clear_state(bref, BREF_QUERY_ACTIVE);
             bref_clear_state(bref, BREF_IN_USE);
@ -205,7 +205,7 @@ GWBUF *sescmd_cursor_process_replies(GWBUF *replybuf,
             scmd->reply_cmd = *((unsigned char *)replybuf->start + 4);
             MXS_INFO("Server '%s' responded to a session command, sending the response "
-                     "to the client.", bref->bref_backend->backend_server->unique_name);
+                     "to the client.", bref->ref->server->unique_name);
             for (int i = 0; i < ses->rses_nbackends; i++)
             {
@ -226,8 +226,8 @@ GWBUF *sescmd_cursor_process_replies(GWBUF *replybuf,
                     *reconnect = true;
                     MXS_INFO("Disabling slave %s:%d, result differs from "
                              "master's result. Master: %d Slave: %d",
-                             ses->rses_backend_ref[i].bref_backend->backend_server->name,
-                             ses->rses_backend_ref[i].bref_backend->backend_server->port,
+                             ses->rses_backend_ref[i].ref->server->name,
+                             ses->rses_backend_ref[i].ref->server->port,
                              bref->reply_cmd, ses->rses_backend_ref[i].reply_cmd);
                 }
             }
@ -237,11 +237,11 @@ GWBUF *sescmd_cursor_process_replies(GWBUF *replybuf,
         else
         {
             MXS_INFO("Slave '%s' responded before master to a session command. Result: %d",
-                     bref->bref_backend->backend_server->unique_name,
+                     bref->ref->server->unique_name,
                      (int)bref->reply_cmd);
             if (bref->reply_cmd == 0xff)
             {
-                SERVER *serv = bref->bref_backend->backend_server;
+                SERVER *serv = bref->ref->server;
                 MXS_ERROR("Slave '%s' (%s:%u) failed to execute session command.",
                           serv->unique_name, serv->name, serv->port);
             }