Merge branch 'release-1.3.0' into develop

This commit is contained in:
Markus Makela 2016-01-08 11:14:02 +02:00
commit 40cfe1a864
46 changed files with 349 additions and 257 deletions

View File

@ -18,7 +18,7 @@ The default master selection is based only on MIN(wsrep_local_index). This can b
* If the Master changes (i.e. a new Master promotion) during the current connection, the router cannot check the change
* LONGBLOB is not supported
* Sending of LONGBLOB data is not supported
## Limitations in the Read/Write Splitter

View File

@ -6,7 +6,7 @@
## About MaxScale
- [About MaxScale](About/About-MaxScale.md)
- [MaxScale 1.2.0 Release Notes](Release-Notes/MaxScale-1.2.0-Release-Notes.md)
- [MaxScale 1.3.0 Release Notes](Release-Notes/MaxScale-1.3.0-Release-Notes.md)
- [Changelog](Changelog.md)
- [Limitations](About/Limitations.md)
- [COPYRIGHT](About/COPYRIGHT.md)
@ -20,6 +20,7 @@
## Upgrading MaxScale
- [Upgrading MaxScale from 1.2 to 1.3](Upgrading/Upgrading-To-MaxScale-1.3.md)
- [Upgrading MaxScale from 1.1.1 to 1.2](Upgrading/Upgrading-To-MaxScale-1.2.md)
- [Upgrading MaxScale from 1.0.5 to 1.1.0](Upgrading/Upgrading-To-MaxScale-1.1.0.md)
@ -31,7 +32,7 @@
- [How Errors are Handled in MaxScale](Reference/How-errors-are-handled-in-MaxScale.md)
- [Debug and Diagnostic Support](Reference/Debug-And-Diagnostic-Support.md)
- [Routing Hints](Reference/Hint-Syntax.md)
## Tutorials
The main tutorial for MaxScale consists of setting up MaxScale for the environment you are using with either a connection-based or a read/write-based configuration.
@ -52,13 +53,24 @@ These tutorials are for specific use cases and module combinations.
## Routers
The routing module is the core of a MaxScale service. The router documentation
contains all module specific configuration options and detailed explanations
of their use.
- [Read Write Split](Routers/ReadWriteSplit.md)
- [Read Connection Router](Routers/ReadConnRoute.md)
- [Schemarouter](Routers/SchemaRouter.md)
- [Binlogrouter](Routers/Binlogrouter.md)
There are also two diagnostic routing modules. The CLI is for MaxAdmin and
the Debug CLI is for Telnet clients.
- [CLI](Routers/CLI.md)
- [Debug CLI](Routers/Debug-CLI.md)
## Filters
Here are detailed documents about the filters MaxScale offers. They contain configuration guides and example use cases. Before reading these,you should have read the filter tutorial so that you know how they work and how to configure them.
Here are detailed documents about the filters MaxScale offers. They contain configuration guides and example use cases. Before reading these, you should have read the filter tutorial so that you know how they work and how to configure them.
- [Query Log All](Filters/Query-Log-All-Filter.md)
- [Regex Filter](Filters/Regex-Filter.md)

View File

@ -1,10 +1,10 @@
Named Server Filter
# Named Server Filter
# Overview
## Overview
The **namedserverfilter** is a filter module for MaxScale which is able to route queries to servers based on regular expression matches.
# Configuration
## Configuration
The configuration block for the Named Server filter requires the minimal filter options in its section within the maxscale.cnf file, stored in /etc/maxscale.cnf.
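As a sketch, a minimal filter section might look like the following; the section name, pattern, and server name are illustrative:

```
[NamedServerFilter]
type=filter
module=namedserverfilter
match=^\s*SELECT
server=server2
```

A service would then reference this filter through its `filters` parameter.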

View File

@ -107,3 +107,7 @@ filters=ProductsSelectLogger
```
Putting this filter into the service used by the application would then result in a log file of all SELECT queries that mention the table but do not mention the PRODUCT_ID primary key in the query predicates.
Executing `SELECT * FROM PRODUCTS` would log the following into `/var/logs/qla/SelectProducts`:
```
07:12:56.324 7/01/2016, SELECT * FROM PRODUCTS
```

View File

@ -1,6 +1,6 @@
Regex Filter
# Regex Filter
# Overview
## Overview
The regex filter is a filter module for MaxScale that is able to rewrite query content using regular expression matches and text substitution. It uses the PCRE2 syntax which differs from the POSIX regular expressions used in MaxScale versions prior to 1.3.0.
@ -8,7 +8,7 @@ For all details about the PCRE2 syntax, please read the [PCRE2 syntax documentat
Please note that the PCRE2 library uses a different syntax to refer to capture groups in the replacement string. The main difference is the usage of the dollar character instead of the backslash character for references e.g. `$1` instead of `\1`. For more details about the replacement string differences, please read the [Creating a new string with substitutions](http://www.pcre.org/current/doc/html/pcre2api.html#SEC34) chapter in the PCRE2 manual.
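As an illustrative sketch (the section, pattern, and table names are made up), a replacement using a PCRE2-style `$1` capture-group reference might look like:

```
[MyRegexFilter]
type=filter
module=regexfilter
match=fetch row ([0-9]+)
replace=select * from t1 where id=$1
```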
# Configuration
## Configuration
The configuration block for the Regex filter requires the minimal filter options in its section within the maxscale.cnf file, stored in /etc/maxscale.cnf.
@ -80,7 +80,12 @@ log_file=/tmp/regexfilter.log
### `log_trace`
The optional log_trace parameter toggles the logging of non-matching and matching queries with their replacements into the trace log file. This is the preferred method of diagnosing the matching of queries since the trace log can be disabled mid-session if such a need rises.
The optional log_trace parameter toggles the logging of non-matching and
matching queries with their replacements into the log file on the *info* level.
This is the preferred method of diagnosing the matching of queries since the
log level can be changed at runtime. For more details about logging levels and
session specific logging, please read the [Configuration Guide](../Getting-Started/Configuration-Guide.md#global-settings)
and the [MaxAdmin](../Reference/MaxAdmin.md#change-maxscale-logging-options) documentation on changing the logging levels.
```
log_trace=true

View File

@ -1,10 +1,10 @@
TEE Filter
# Tee Filter
# Overview
## Overview
The tee filter is a filter module for MaxScale that acts as a "plumbing" fitting in the MaxScale filter toolkit. It can be used in a filter pipeline of a service to make a copy of requests from the client and dispatch a copy of the request to another service within MaxScale.
# Configuration
## Configuration
The configuration block for the TEE filter requires the minimal filter parameters in its section within the maxscale.cnf file, stored in /etc/maxscale.cnf, that defines the filter to load and the service to send the duplicates to. Currently the tee filter does not support multi-statements.
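A hypothetical tee section (the section, service, and pattern names are illustrative) that duplicates matching statements to a second service might look like:

```
[ArchiveTee]
type=filter
module=tee
service=Archive Service
match=insert
```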

View File

@ -1,10 +1,10 @@
Top Filter
# Top Filter
# Overview
## Overview
The top filter is a filter module for MaxScale that monitors every SQL statement that passes through the filter. It measures the duration of that statement, the time between the statement being sent and the first result being returned. The top N times are kept, along with the SQL text itself and a list sorted on the execution times of the query is written to a file upon closure of the client session.
# Configuration
## Configuration
The configuration block for the TOP filter requires the minimal filter options in its section within the maxscale.cnf file, stored in /etc/maxscale.cnf.
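A hypothetical top filter section (the section name, count, and path are illustrative) might look like:

```
[TopTen]
type=filter
module=topfilter
count=10
filebase=/var/log/maxscale/top10.
```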

View File

@ -69,19 +69,20 @@ The global settings, in a section named `[MaxScale]`, allow various parameters t
#### `threads`
This parameter controls the number of worker threads that are handling the
events coming from the kernel. MaxScale will auto-detect the number of
processors of the system unless number of threads is manually configured.
It is recommended that you let MaxScale detect how many cores the system
has and leave this parameter undefined. The number of used cores will be
logged into the message logs and if you are not satisfied with the
auto-detected value, you can manually configure it. Increasing the amount
of worker threads beyond the number of processor cores does not improve
the performance, rather is likely to degrade it, and can consume resources
needlessly.
events coming from the kernel. The default is 1 thread. It is recommended that
you start with one thread and increase the number if you require greater
performance. Increasing the amount of worker threads beyond the number of
processor cores does not improve the performance, rather is likely to degrade
it, and can consume resources needlessly.
You can enable automatic configuration of this value by setting the value to
`auto`. This way MaxScale will detect the number of available processors and
set the amount of threads to be equal to that number. This should only be used
for systems dedicated for running MaxScale.
```
# Valid options are:
# threads=<number of epoll threads>
# threads=[<number of threads> | auto ]
[MaxScale]
threads=1
@ -588,6 +589,8 @@ indicating a number of seconds. A DCB placed in the persistent pool for a server
only be reused if the elapsed time since it joined the pool is less than the given
value. Otherwise, the DCB will be discarded and the connection closed.
For more information about persistent connections, please read the [Administration Tutorial](../Tutorials/Administration-Tutorial.md).
### Listener
The listener defines a port and protocol pair that is used to listen for connections to a service. A service may have multiple listeners associated with it, either to support multiple protocols or multiple ports. As with other elements of the configuration, the section name is the listener name and it can be selected freely. A type parameter is used to identify the section as a listener definition. Address is optional and allows the user to limit connections to a certain interface only. Socket is also optional and is used for Unix socket connections.

View File

@ -24,8 +24,14 @@ using the *Persistent Connection* feature as it may reduce the time it
takes from establishing a connection from the client through MaxScale to
the backend server.
**NOTE**: The persistent connections do not track session state. This means
that changing the default database or modifying the session state will cause
those changes to be active even for new connections. If you use queries with
implicit databases or use connections with different client settings, you
should take great care when using persistent connections.
Additional information is available in the following document:
* [Administration Tutorial](../Tutorials/Administration-Tutorial.md)
* [Administration Tutorial](../Tutorials/Administration-Tutorial.md#persistent-connections)
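Persistent pools are enabled per server; a hypothetical server section (the address and limits are illustrative) using the `persistpoolmax` and `persistmaxtime` parameters might look like:

```
[server1]
type=server
address=192.168.0.1
port=3306
protocol=MySQLBackend
persistpoolmax=30
persistmaxtime=3600
```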
### Binlog Server
@ -53,7 +59,8 @@ definition.
Additional information is available in the following documents:
* [Binlogrouter Tutorial](../Tutorials/Replication-Proxy-Binlog-Router-Tutorial.md)
* [Upgrading binlogrouter to 1.3.0](../Upgrading/Upgrading-Binlogrouter-to-1.3.0.md)
* [Upgrading Binlogrouter to 1.3](../Upgrading/Upgrading-BinlogRouter-To-Maxscale-1.3.md)
* [Binlogrouter Documentation](../Routers/Binlogrouter.md)
### Logging Changes
@ -231,7 +238,7 @@ the most serious of this are listed below.
* When users have different passwords based on the host from which they connect MaxScale is unable to determine which password it should use to connect to the backend database. This results in failed connections and unusable usernames in MaxScale.
* LONGBLOB is currently not supported.
* The readconnroute module does not support sending of LONGBLOB data.
* Galera Cluster variables, such as @@wsrep_node_name, are not resolved by the embedded MariaDB parser.
@ -243,3 +250,13 @@ the most serious of this are listed below.
RPM and Debian packages are provided for the Linux distributions supported
by MariaDB Enterprise.
Packages can be downloaded [here](https://mariadb.com/resources/downloads).
## Source Code
The source code of MaxScale is tagged at GitHub with a tag identical to the
version of MaxScale. For instance, the tag of version 1.2.1 of MaxScale
is 1.2.1. Further, *master* always refers to the latest released non-beta version.
The source code is available [here](https://github.com/mariadb-corporation/MaxScale).

View File

@ -17,7 +17,7 @@ Binlogrouter is configured with a comma-separated list of key-value pairs. The f
### `binlogdir`
This parameter allows the location that MaxScale uses to store binlog files to be set. If this parameter is not set to a directory name then MaxScale will store the binlog files in the directory /var/cache/maxscale/<Service Name>.
In the binlog dir there is also the 'cache' directory that contains data retrieved from the master dureing registration phase and the master.ini file wich contains the configuration of current configured master.
In the binlog dir there is also the 'cache' directory, which contains data retrieved from the master during the registration phase, and the master.ini file, which contains the configuration of the currently configured master.
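As a sketch, a binlogrouter service using this parameter might be configured as follows; the service name, server id, and path are illustrative:

```
[Replication Service]
type=service
router=binlogrouter
router_options=server-id=3,binlogdir=/var/lib/maxscale/binlogs
```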
### `uuid`
@ -87,7 +87,7 @@ The default value is off, set transaction_safety=on to enable the incomplete tra
### `send_slave_heartbeat`
This defines whether (on | off) MaxSclale sends to the slave the heartbeat packet when there are no real binlog events to send. The default value if 'off', no heartbeat event is sent to slave server. If value is 'on' the interval value (requested by the slave during registration) is reported in the diagnostic output and the packect is send after the time interval without any event to send.
This defines whether (on | off) MaxScale sends a heartbeat packet to the slave when there are no real binlog events to send. The default value is 'off': no heartbeat event is sent to the slave server. If the value is 'on', the interval value (requested by the slave during registration) is reported in the diagnostic output and the packet is sent after the time interval passes without any event to send.
A complete example of a service entry for a binlog router service would be as follows.
```

View File

@ -99,11 +99,13 @@ router_options=slave_selection_criteria=<criteria>
Where `<criteria>` is one of the following values.
* `LEAST_GLOBAL_CONNECTIONS`, the slave with least connections in total
* `LEAST_ROUTER_CONNECTIONS`, the slave with least connections from this router
* `LEAST_GLOBAL_CONNECTIONS`, the slave with least connections from MaxScale
* `LEAST_ROUTER_CONNECTIONS`, the slave with least connections from this service
* `LEAST_BEHIND_MASTER`, the slave with smallest replication lag
* `LEAST_CURRENT_OPERATIONS` (default), the slave with least active operations
The `LEAST_GLOBAL_CONNECTIONS` and `LEAST_ROUTER_CONNECTIONS` use the connections from MaxScale to the server, not the amount of connections reported by the server itself.
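For example, a service preferring the least-lagging slave could be sketched as follows; the service and server names are illustrative:

```
[Splitter Service]
type=service
router=readwritesplit
servers=server1,server2,server3
router_options=slave_selection_criteria=LEAST_BEHIND_MASTER
```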
### `max_sescmd_history`
**`max_sescmd_history`** sets a limit on how many session commands each session can execute before the session command history is disabled. The default is an unlimited number of session commands.
@ -133,6 +135,14 @@ disable_sescmd_history=true
master_accept_reads=true
```
### `weightby`
This parameter defines the name of the value which is used to calculate the
weights of the servers. The value should be the name of a parameter in the
server definitions and it should exist in all the servers used by this router.
For more information, see the description of the `weightby` parameter in
the [Configuration Guide](../Getting-Started/Configuration-Guide.md).
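A hypothetical weighted configuration might look like the following; `serv_weight` is an arbitrary parameter name chosen for illustration, and the addresses are made up:

```
[Read Service]
type=service
router=readwritesplit
servers=server1,server2
weightby=serv_weight

[server1]
type=server
address=192.168.0.1
port=3306
protocol=MySQLBackend
serv_weight=3

[server2]
type=server
address=192.168.0.2
port=3306
protocol=MySQLBackend
serv_weight=1
```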
## Routing hints
The readwritesplit router supports routing hints. For a detailed guide on hint syntax and functionality, please read [this](../Reference/Hint-Syntax.md) document.

View File

@ -125,7 +125,7 @@ than `persistmaxtime` seconds. It was also be discarded if it has been disconne
by the back end server. Connections will be selected that match the user name and
protocol for the new request.
Please note that because persistent connections have previously been in use, they
**Please note** that because persistent connections have previously been in use, they
may give a different environment from a fresh connection. For example, if the
previous use of the connection issued "use mydatabase" then this setting will be
carried over into the reuse of the same connection. For many applications this will

View File

@ -8,6 +8,8 @@ archiving the data and one for actual use, a RabbitMQ server and a MaxScale serv
For testing purposes some of these can be located on the same server, but for actual
use an HA solution is recommended.
![Data archiving with Mqfilter and Tee filters](images/rabbit-and-tee.png)
The use case for this tutorial is a production system with one main server where all
queries are routed and an archive server where only INSERT, UPDATE and DELETE statements
are routed. The queries routed to the archive servers are also transformed into a canonical
@ -30,14 +32,14 @@ in the Creating Database Users section of the [MaxScale Tutorial](MaxScale-Tutor
## Setting up RabbitMQ server
To set up the RabbitMQ server, follow the instructions for your OS onthe [RabbitMQ website](https://www.rabbitmq.com/download.html).
To set up the RabbitMQ server, follow the instructions for your OS on the [RabbitMQ website](https://www.rabbitmq.com/download.html).
Useful documentation about access rights can be found on the [Access Control](https://www.rabbitmq.com/access-control.html)
page and for UNIX systems the [`rabbitmqctl` manpage](https://www.rabbitmq.com/man/rabbitmqctl.1.man.html)
has all the needed commands to manage your installation of RabbitMQ.
For this tutorial, we will use a RabbitMQ server installed on a CentOS 7 from
the RPM packages. Since CentOS 7 doesn't have the RabbitMQ server in the default
repositores, we will need two extra repositories: The EPEL repository and the Erlang repository.
repositories, we will need two extra repositories: The EPEL repository and the Erlang repository.
* [EPEL repositories](https://fedoraproject.org/wiki/EPEL)
* [Erlang repositories](https://www.erlang-solutions.com/resources/download.html)
@ -132,7 +134,7 @@ filters=MQ Filter
The `filters` parameters for the services refer to the filters we will be creating next.
The Production service will use the Tee filter to duplicate INSERT, UPDATE and DELETE
statements to the Archive service. The statements passed to the Archive service will
use the MQ Filter to send the canonic versions of the statements to the RabbitMQ broker.
use the MQ Filter to send the canonical versions of the statements to the RabbitMQ broker.
The Production service will use the `production-1` server and the Archive service will
use the `archive-1` server. Both services use the `maxuser` user with the `maxpwd` password.
@ -156,7 +158,7 @@ The `port` parameter controls which port the listener will listen on and where t
connections should be made. The `service` parameter tells which listener belongs to which
service.
After the serivces and their listeners are configured we will configure the two filters we'll use. We
After the services and their listeners are configured we will configure the two filters we'll use. We
begin with the Tee filter.
```
@ -335,7 +337,7 @@ Listing queues ...
```
If we create a connection on the Production service on port 4000 and execute
a set of data modifying statemets we should see an equal number of statements
a set of data modifying statements we should see an equal number of statements
being sent to the RabbitMQ server:
```

View File

@ -6,7 +6,7 @@ In a traditional MySQL replication setup a single master server is created and a
Introducing a proxy layer between the master server and the slave servers can improve the situation, by reducing the load on the master to simply serving the proxy layer rather than all of the slaves. The slaves only need to be aware of the proxy layer and not of the real master server. Removing the need for the slaves to have knowledge of the master greatly simplifies the process of replacing a failed master within a replication environment.
## MariaDB/MySQL as a Binlog Server
The most obvious solution to the requirement for a proxy layer within a replication environment is to use a MariaDB or MySQL database instance. The database server is designed to allow this, since a slave server is able to be configured such that it will produce binary logs for updates it has itself received via replication from the master server. This is done with the *log_slave_updates* configuration option of the server. In this case the server is known as an intermediate master, it is simultanously a slave to the real master and a master to the other slaves in the configuration.
The most obvious solution to the requirement for a proxy layer within a replication environment is to use a MariaDB or MySQL database instance. The database server is designed to allow this, since a slave server is able to be configured such that it will produce binary logs for updates it has itself received via replication from the master server. This is done with the *log_slave_updates* configuration option of the server. In this case the server is known as an intermediate master, it is simultaneously a slave to the real master and a master to the other slaves in the configuration.
Using an intermediate master does not, however, solve all the problems and introduces some new ones, due to the way replication is implemented. A slave server reads the binary log data and creates a relay log from that binary log. This log provides a source of SQL statements, which are executed within the slave in order to make the same changes to the databases on the slaves as were made on the master. If the *log_slave_updates* option has been enabled, new binary log entries are created for the statements executed from the relay log.
@ -60,7 +60,7 @@ The final configuration requirement is the router specific options. The binlog r
### binlogdir
This parameter allows the location that MaxScale uses to store binlog files to be set. If this parameter is not set to a directory name then MaxScale will store the binlog files in the directory */var/cache/maxscale/<Service Name>*.
In the binlog dir there is also the 'cache' directory that contains data retrieved from the master during registration phase and the *master.ini* file wich contains the configuration of current configured master.
In the binlog dir there is also the 'cache' directory, which contains data retrieved from the master during the registration phase, and the *master.ini* file, which contains the configuration of the currently configured master.
### uuid
@ -110,7 +110,7 @@ This defines the value of the heartbeat interval in seconds for the connection t
### send_slave_heartbeat
This defines whether (on | off) MaxScale sends to the slave the heartbeat packet when there are no real binlog events to send. The default value if 'off', no heartbeat event is sent to slave server. If value is 'on' the interval value (requested by the slave during registration) is reported in the diagnostic output and the packect is send after the time interval without any event to send.
This defines whether (on | off) MaxScale sends a heartbeat packet to the slave when there are no real binlog events to send. The default value is 'off': no heartbeat event is sent to the slave server. If the value is 'on', the interval value (requested by the slave during registration) is reported in the diagnostic output and the packet is sent after the time interval passes without any event to send.
### burstsize
@ -262,7 +262,7 @@ Enabling replication from a master server requires:
It's possible to specify the desired *MASTER_LOG_FILE*, but the position must be 4.
The initfile option is nolonger available, filestem option also not available as the stem is automatically set by parsing *MASTER_LOG_FILE*.
The initfile option is no longer available; the filestem option is also not available, as the stem is automatically set by parsing *MASTER_LOG_FILE*.
### Stop/start the replication
@ -287,11 +287,11 @@ Next step is the master configuration
MariaDB> CHANGE MASTER TO ...
A succesful configuration change results in *master.ini* being updated.
A successful configuration change results in *master.ini* being updated.
Any error is reported in the MySQL connection and in the log files.
The upported options are:
The supported options are:
MASTER_HOST
MASTER_PORT
@ -311,7 +311,7 @@ Examples:
MariaDB> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003',MASTER_LOG_POS=8888
This could be applied to current master_host/port or a new one.
If there is a master server maintenance and a slave is beeing promoted as master it should be checked that binlog file and position are valid: in case of any error replication stops and errors are reported via *SHOW SLAVE STATUS* and in error logs.
If the master server is under maintenance and a slave is being promoted to master, it should be checked that the binlog file and position are valid: in case of any error, replication stops and errors are reported via *SHOW SLAVE STATUS* and in the error logs.
2) Current binlog file is 'mysql-bin.000099', position 1234
@ -319,7 +319,7 @@ If there is a master server maintenance and a slave is beeing promoted as master
This could be applied with current master_host/port or a new one
If transaction safety option is on and the current binlog file contains an incomplete transaction it will be truncated to the position where transaction started.
In such situation a proper message is reported in MySQL connection and with next START SLAVE binlog file truncation will occurr and MaxScale will request events from the master using the next binlog file at position 4.
In such a situation a proper message is reported in the MySQL connection, and with the next START SLAVE the binlog file truncation will occur and MaxScale will request events from the master using the next binlog file at position 4.
The above scenario might refer to a master crash/failure:
the new server that has just been promoted to master doesn't have the last transaction's events, but it should have the new binlog file (the next in sequence).
@ -332,7 +332,7 @@ Check for any error in log files and with
MariaDB> SHOW SLAVE STATUS;
In some situations replication state could be *STOPPED* and proper messages are displyed in error logs and in *SHOW SLAVE STATUS*,
In some situations the replication state could be *STOPPED* and proper messages are displayed in the error logs and in *SHOW SLAVE STATUS*.
In order to resolve any mistake done with *CHANGE MASTER TO MASTER_LOG_FILE / MASTER_LOG_POS*, another administrative command would be helpful.

Binary file not shown.


View File

@ -1,23 +1,18 @@
# Build the PCRE2 library from source
#
# This will add a 'pcre2' target to CMake which will generate the libpcre2-8.so
# dynamic library and the pcre2.h header. If your target requires PCRE2 you
# This will add a 'pcre2' target to CMake which will generate the libpcre2-8.a
# static library and the pcre2.h header. If your target requires PCRE2 you
# need to add a dependency on the 'pcre2' target by adding add_dependencies(<target> pcre2)
# to the CMakeLists.txt
# to the CMakeLists.txt. You don't need to link against the pcre2 library
# because the static symbols will be in MaxScale.
include(ExternalProject)
set(PCRE2_ROOT_DIR ${CMAKE_SOURCE_DIR}/pcre2/)
set(PCRE2_BUILD_DIR ${CMAKE_BINARY_DIR}/pcre2/)
set(PCRE2_LIBRARIES ${CMAKE_BINARY_DIR}/pcre2/libpcre2-8.so
${CMAKE_BINARY_DIR}/pcre2/libpcre2-8.so.1.0.0
CACHE STRING "PCRE2 dynamic libraries" FORCE)
ExternalProject_Add(pcre2 SOURCE_DIR ${PCRE2_ROOT_DIR}
CMAKE_ARGS -DBUILD_SHARED_LIBS=Y -DPCRE2_BUILD_PCRE2GREP=N -DPCRE2_BUILD_TESTS=N
BINARY_DIR ${PCRE2_BUILD_DIR}
ExternalProject_Add(pcre2 SOURCE_DIR ${CMAKE_SOURCE_DIR}/pcre2/
CMAKE_ARGS -DBUILD_SHARED_LIBS=N -DPCRE2_BUILD_PCRE2GREP=N -DPCRE2_BUILD_TESTS=N
BINARY_DIR ${CMAKE_BINARY_DIR}/pcre2/
BUILD_COMMAND make
INSTALL_COMMAND "")
include_directories(${CMAKE_BINARY_DIR}/pcre2/)
install(PROGRAMS ${PCRE2_LIBRARIES} DESTINATION ${MAXSCALE_LIBDIR})
set(PCRE2_LIBRARIES ${CMAKE_BINARY_DIR}/pcre2/libpcre2-8.a CACHE INTERNAL "")
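A hypothetical consumer of this build script (the target name is illustrative, not part of the MaxScale tree) would follow the comment above: depend on the `pcre2` target and, for standalone binaries that are not loaded into MaxScale itself, link the static archive:

```
# 'mytool' is an illustrative target name.
add_executable(mytool mytool.c)

# Make sure the PCRE2 external project is built first.
add_dependencies(mytool pcre2)

# Link the static libpcre2-8.a produced by the external project.
target_link_libraries(mytool ${PCRE2_LIBRARIES})
```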

View File

@ -3,3 +3,4 @@ set(CPACK_GENERATOR "${CPACK_GENERATOR};DEB")
set(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA "${CMAKE_BINARY_DIR}/postinst;${CMAKE_BINARY_DIR}/postrm")
execute_process(COMMAND dpkg --print-architecture OUTPUT_VARIABLE DEB_ARCHITECTURE)
set(CPACK_DEBIAN_PACKAGE_ARCHITECTURE ${DEB_ARCHITECTURE})
set(CPACK_DEBIAN_PACKAGE_SHLIBDEPS ON)

View File

@ -4,7 +4,7 @@ After=network.target
[Service]
Type=forking
Restart=on-failure
Restart=on-abnormal
PIDFile=@MAXSCALE_VARDIR@/run/maxscale/maxscale.pid
ExecStartPre=/usr/bin/install -d @MAXSCALE_VARDIR@/run/maxscale -o maxscale -g maxscale
ExecStart=@CMAKE_INSTALL_PREFIX@/@MAXSCALE_BINDIR@/maxscale --user=maxscale

View File

@ -8,7 +8,7 @@ else()
endif()
endif()
add_executable(canonizer canonizer.c ${CMAKE_SOURCE_DIR}/server/core/random_jkiss.c)
target_link_libraries(canonizer pthread query_classifier z dl ssl aio crypt crypto rt m ${EMBEDDED_LIB} fullcore stdc++)
target_link_libraries(canonizer ${PCRE2_LIBRARIES} utils pthread query_classifier z dl ssl aio crypt crypto rt m ${EMBEDDED_LIB} fullcore stdc++)
add_test(NAME Internal-TestCanonicalQuery COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/canontest.sh
${CMAKE_CURRENT_BINARY_DIR}/test.log
${CMAKE_CURRENT_SOURCE_DIR}/input.sql

View File

@ -5,7 +5,7 @@ if(BUILD_TESTS OR BUILD_TOOLS)
elseif(WITH_TCMALLOC)
target_link_libraries(fullcore ${TCMALLOC_LIBRARIES})
endif()
target_link_libraries(fullcore ${CURL_LIBRARIES} utils log_manager pthread ${LZMA_LINK_FLAGS} ${EMBEDDED_LIB} ${PCRE_LINK_FLAGS} ssl aio rt crypt dl crypto inih z m stdc++)
target_link_libraries(fullcore ${CURL_LIBRARIES} utils ${PCRE2_LIBRARIES} log_manager pthread ${LZMA_LINK_FLAGS} ${EMBEDDED_LIB} ${PCRE_LINK_FLAGS} ssl aio rt crypt dl crypto inih z m stdc++)
add_dependencies(fullcore pcre2)
endif()
@ -22,15 +22,15 @@ elseif(WITH_TCMALLOC)
target_link_libraries(maxscale ${TCMALLOC_LIBRARIES})
endif()
target_link_libraries(maxscale ${LZMA_LINK_FLAGS} ${EMBEDDED_LIB} ${PCRE_LINK_FLAGS} ${CURL_LIBRARIES} log_manager utils ssl aio pthread crypt dl crypto inih z rt m stdc++)
target_link_libraries(maxscale ${LZMA_LINK_FLAGS} ${PCRE2_LIBRARIES} ${EMBEDDED_LIB} ${PCRE_LINK_FLAGS} ${CURL_LIBRARIES} log_manager utils ssl aio pthread crypt dl crypto inih z rt m stdc++)
install(TARGETS maxscale DESTINATION ${MAXSCALE_BINDIR})
add_executable(maxkeys maxkeys.c spinlock.c secrets.c utils.c gwdirs.c random_jkiss.c ${CMAKE_SOURCE_DIR}/log_manager/log_manager.cc)
target_link_libraries(maxkeys utils pthread crypt crypto)
target_link_libraries(maxkeys utils pthread crypt crypto ${PCRE2_LIBRARIES})
install(TARGETS maxkeys DESTINATION ${MAXSCALE_BINDIR})
add_executable(maxpasswd maxpasswd.c spinlock.c secrets.c utils.c gwdirs.c random_jkiss.c ${CMAKE_SOURCE_DIR}/log_manager/log_manager.cc)
target_link_libraries(maxpasswd utils pthread crypt crypto)
target_link_libraries(maxpasswd utils pthread crypt crypto ${PCRE2_LIBRARIES})
install(TARGETS maxpasswd DESTINATION ${MAXSCALE_BINDIR})
if(BUILD_TESTS)

View File

@ -1578,15 +1578,33 @@ handle_global_item(const char *name, const char *value)
int i;
if (strcmp(name, "threads") == 0)
{
int thrcount = atoi(value);
if (thrcount > 0)
if (strcmp(name, "auto") == 0)
{
gateway.n_threads = thrcount;
if ((gateway.n_threads = get_processor_count()) > 1)
{
gateway.n_threads--;
}
}
else
{
MXS_WARNING("Invalid value for 'threads': %s.", value);
return 0;
int thrcount = atoi(value);
if (thrcount > 0)
{
gateway.n_threads = thrcount;
int processor_count = get_processor_count();
if (thrcount > processor_count)
{
MXS_WARNING("Number of threads set to %d which is greater than"
" the number of processors available: %d",
thrcount, processor_count);
}
}
else
{
MXS_WARNING("Invalid value for 'threads': %s.", value);
return 0;
}
}
}
else if (strcmp(name, "non_blocking_polls") == 0)
@ -1706,7 +1724,7 @@ global_defaults()
{
uint8_t mac_addr[6]="";
struct utsname uname_data;
gateway.n_threads = get_processor_count();
gateway.n_threads = DEFAULT_NTHREADS;
gateway.n_nbpoll = DEFAULT_NBPOLLS;
gateway.pollsleep = DEFAULT_POLLSLEEP;
gateway.auth_conn_timeout = DEFAULT_AUTH_CONNECT_TIMEOUT;
@ -1922,7 +1940,7 @@ process_config_update(CONFIG_CONTEXT *context)
"count or\n\t<int>%% for specifying the "
"maximum percentage of available the "
"slaves that will be connected.",
((SERVICE*)obj->element)->name,
service->name,
param->name,
param->value);
}
@ -1960,7 +1978,7 @@ process_config_update(CONFIG_CONTEXT *context)
"for parameter \'%s.%s = %s\'\n\tExpected "
"type is <int> for maximum "
"slave replication lag.",
((SERVICE*)obj->element)->name,
service->name,
param->name,
param->value);
}
@ -2528,6 +2546,7 @@ config_get_ifaddr(unsigned char *output)
{
memcpy(output, ifr.ifr_hwaddr.sa_data, 6);
}
close(sock);
return success;
}

View File

@ -2343,29 +2343,30 @@ static void *
dbusers_keyread(int fd)
{
MYSQL_USER_HOST *dbkey;
int tmp;
if ((dbkey = (MYSQL_USER_HOST *) malloc(sizeof(MYSQL_USER_HOST))) == NULL)
{
return NULL;
}
if (read(fd, &tmp, sizeof(tmp)) != sizeof(tmp))
int user_size;
if (read(fd, &user_size, sizeof(user_size)) != sizeof(user_size))
{
free(dbkey);
return NULL;
}
if ((dbkey->user = (char *) malloc(tmp + 1)) == NULL)
if ((dbkey->user = (char *) malloc(user_size + 1)) == NULL)
{
free(dbkey);
return NULL;
}
if (read(fd, dbkey->user, tmp) != tmp)
if (read(fd, dbkey->user, user_size) != user_size)
{
free(dbkey->user);
free(dbkey);
return NULL;
}
dbkey->user[tmp] = 0; // NULL Terminate
dbkey->user[user_size] = 0; // NULL Terminate
if (read(fd, &dbkey->ipv4, sizeof(dbkey->ipv4)) != sizeof(dbkey->ipv4))
{
free(dbkey->user);
@ -2378,28 +2379,30 @@ dbusers_keyread(int fd)
free(dbkey);
return NULL;
}
if (read(fd, &tmp, sizeof(tmp)) != sizeof(tmp))
int res_size;
if (read(fd, &res_size, sizeof(res_size)) != sizeof(res_size))
{
free(dbkey->user);
free(dbkey);
return NULL;
}
if (tmp != -1)
else if (res_size != -1)
{
if ((dbkey->resource = (char *) malloc(tmp + 1)) == NULL)
if ((dbkey->resource = (char *) malloc(res_size + 1)) == NULL)
{
free(dbkey->user);
free(dbkey);
return NULL;
}
if (read(fd, dbkey->resource, tmp) != tmp)
if (read(fd, dbkey->resource, res_size) != res_size)
{
free(dbkey->resource);
free(dbkey->user);
free(dbkey);
return NULL;
}
dbkey->resource[tmp] = 0; // NULL Terminate
dbkey->resource[res_size] = 0; // NULL Terminate
}
else // NULL is valid, so represent with a length of -1
{

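The renamed variables (`user_size`, `res_size`) make the wire format easier to follow: each field is an `int` length followed by that many bytes. A reduced model of the read side, using a hypothetical `read_len_prefixed` helper rather than the actual dbusers code:

```c
#include <stdlib.h>
#include <unistd.h>

/* Read one length-prefixed string from fd: an int length, then that
 * many bytes. Returns a malloc'd NUL-terminated string, or NULL on
 * short reads, negative lengths, or allocation failure. */
static char *read_len_prefixed(int fd)
{
    int len;
    if (read(fd, &len, sizeof(len)) != sizeof(len) || len < 0)
    {
        return NULL;
    }

    char *str = malloc(len + 1);
    if (str == NULL)
    {
        return NULL;
    }

    if (read(fd, str, len) != len)
    {
        free(str);
        return NULL;
    }

    str[len] = '\0'; /* NULL terminate, as the patched code does */
    return str;
}
```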
View File

@ -643,6 +643,9 @@ dcb_process_victim_queue(DCB *listofdcb)
}
else
{
#if defined(FAKE_CODE)
conn_open[dcb->fd] = false;
#endif /* FAKE_CODE */
dcb->fd = DCBFD_CLOSED;
MXS_DEBUG("%lu [dcb_process_victim_queue] Closed socket "
@ -650,9 +653,6 @@ dcb_process_victim_queue(DCB *listofdcb)
pthread_self(),
dcb->fd,
dcb);
#if defined(FAKE_CODE)
conn_open[dcb->fd] = false;
#endif /* FAKE_CODE */
}
}

View File

@ -1024,7 +1024,7 @@ int main(int argc, char **argv)
int n_services;
int eno = 0; /*< local variable for errno */
int opt;
int daemon_pipe[2];
int daemon_pipe[2] = {-1, -1};
bool parent_process;
int child_status;
void** threads = NULL; /*< thread list */

View File

@ -237,8 +237,8 @@ long get_processor_count()
#ifdef _SC_NPROCESSORS_ONLN
if ((processors = sysconf(_SC_NPROCESSORS_ONLN)) <= 0)
{
MXS_WARNING("Unable to establish the number of available cores. Defaulting to 4.");
processors = 4;
MXS_WARNING("Unable to establish the number of available cores. Defaulting to 1.");
processors = 1;
}
#else
#error _SC_NPROCESSORS_ONLN not available.

View File

@ -17,66 +17,101 @@
*/
#include <gwdirs.h>
#include <gw.h>
/**
* Set the configuration file directory
* @param str Path to directory
*/
void set_configdir(char* str)
{
free(configdir);
clean_up_pathname(str);
configdir = str;
}
/**
* Set the log file directory
* @param str Path to directory
*/
void set_logdir(char* str)
{
free(logdir);
clean_up_pathname(str);
logdir = str;
}
/**
* Set the language file directory
* @param str Path to directory
*/
void set_langdir(char* str)
{
free(langdir);
clean_up_pathname(str);
langdir = str;
}
/**
* Set the PID file directory
* @param str Path to directory
*/
void set_piddir(char* str)
{
free(piddir);
clean_up_pathname(str);
piddir = str;
}
/**
* Set the cache directory
* @param str Path to directory
*/
void set_cachedir(char* param)
{
free(cachedir);
clean_up_pathname(param);
cachedir = param;
}
/**
* Set the data directory
* @param str Path to directory
*/
void set_datadir(char* param)
{
free(maxscaledatadir);
clean_up_pathname(param);
maxscaledatadir = param;
}
/**
* Set the library directory. Modules will be loaded from here.
* @param str Path to directory
*/
void set_libdir(char* param)
{
free(libdir);
clean_up_pathname(param);
libdir = param;
}
/**
* Get the directory with all the modules.
* @return The module directory
*/
char* get_libdir()
{
return libdir ? libdir : (char*)default_libdir;
return libdir ? libdir : (char*) default_libdir;
}
void set_libdir(char* param)
{
if (libdir)
{
free(libdir);
}
libdir = param;
}
/**
* Get the service cache directory
* @return The path to the cache directory
*/
char* get_cachedir()
{
return cachedir ? cachedir : (char*)default_cachedir;
}
void set_cachedir(char* param)
{
if (cachedir)
{
free(cachedir);
}
cachedir = param;
return cachedir ? cachedir : (char*) default_cachedir;
}
/**
@ -85,35 +120,41 @@ void set_cachedir(char* param)
*/
char* get_datadir()
{
return maxscaledatadir ? maxscaledatadir : (char*)default_datadir;
}
void set_datadir(char* param)
{
if (maxscaledatadir)
{
free(maxscaledatadir);
}
maxscaledatadir = param;
return maxscaledatadir ? maxscaledatadir : (char*) default_datadir;
}
/**
* Get the configuration file directory
* @return The path to the configuration file directory
*/
char* get_configdir()
{
return configdir ? configdir : (char*)default_configdir;
return configdir ? configdir : (char*) default_configdir;
}
/**
* Get the PID file directory which contains maxscale.pid
* @return Path to the PID file directory
*/
char* get_piddir()
{
return piddir ? piddir : (char*)default_piddir;
return piddir ? piddir : (char*) default_piddir;
}
/**
* Return the log file directory
* @return Path to the log file directory
*/
char* get_logdir()
{
return logdir ? logdir : (char*)default_logdir;
return logdir ? logdir : (char*) default_logdir;
}
/**
* Path to the directory which contains the errmsg.sys language file
* @return Path to the language file directory
*/
char* get_langdir()
{
return langdir ? langdir : (char*)default_langdir;
return langdir ? langdir : (char*) default_langdir;
}

View File

@ -25,6 +25,8 @@
#include <gwdirs.h>
#include <random_jkiss.h>
#include "gw.h"
/**
* Generate a random printable character
*
@ -67,6 +69,7 @@ secrets_readKeys(const char* path)
if (path != NULL)
{
snprintf(secret_file, PATH_MAX, "%s/.secrets", path);
clean_up_pathname(secret_file);
}
else
{
@ -224,7 +227,7 @@ int secrets_writeKeys(const char *path)
}
snprintf(secret_file, PATH_MAX + 9, "%s/.secrets", path);
secret_file[PATH_MAX + 9] = '\0';
clean_up_pathname(secret_file);
/* Open for writing | Create | Truncate the file for writing */
if ((fd = open(secret_file, O_CREAT | O_WRONLY | O_TRUNC, S_IRUSR)) < 0)

View File

@ -278,3 +278,44 @@ char *create_hex_sha1_sha1_passwd(char *passwd)
return hexpasswd;
}
/**
* Remove duplicate and trailing forward slashes from a path.
* @param path Path to clean up
*/
void clean_up_pathname(char *path)
{
char *data = path;
size_t len = strlen(path);
if (len > PATH_MAX)
{
MXS_WARNING("Pathname too long: %s", path);
}
while (*data != '\0')
{
if (*data == '/')
{
if (*(data + 1) == '/')
{
memmove(data, data + 1, len);
len--;
}
else if (*(data + 1) == '\0' && data != path)
{
*data = '\0';
}
else
{
data++;
len--;
}
}
else
{
data++;
len--;
}
}
}
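The effect of the new helper can be shown with the function reproduced in miniature so the example is self-contained (the PATH_MAX length warning is elided; the in-place compaction logic is the same):

```c
#include <string.h>

/* Remove duplicate and trailing forward slashes from a path, in place. */
static void clean_up_pathname(char *path)
{
    char *data = path;
    size_t len = strlen(path);

    while (*data != '\0')
    {
        if (*data == '/' && *(data + 1) == '/')
        {
            /* Shift the rest of the string (incl. NUL) left by one */
            memmove(data, data + 1, len--);
        }
        else if (*data == '/' && *(data + 1) == '\0' && data != path)
        {
            *data = '\0'; /* drop a trailing slash, but keep a lone "/" */
        }
        else
        {
            data++;
            len--;
        }
    }
}
```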

View File

@ -89,4 +89,5 @@ int parse_bindconfig(char *, unsigned short, struct sockaddr_in *);
int setipaddress(struct in_addr *, char *);
char* get_libdir();
long get_processor_count();
void clean_up_pathname(char *path);
#endif

View File

@ -41,6 +41,7 @@
#define DEFAULT_POLLSLEEP 1000 /**< Default poll wait time (milliseconds) */
#define _SYSNAME_STR_LENGTH 256 /**< sysname len */
#define _RELEASE_STR_LENGTH 256 /**< release len */
#define DEFAULT_NTHREADS 1 /**< Default number of polling threads */
/**
* Maximum length for configuration parameter value.
*/

View File

@ -3,12 +3,11 @@
# Global parameters
#
# Number of threads is autodetected, uncomment for manual configuration
# Complete list of configuration options:
# https://github.com/mariadb-corporation/MaxScale/blob/master/Documentation/Getting-Started/Configuration-Guide.md
[maxscale]
#threads=8
threads=1
# Server definitions
#

View File

@ -11,7 +11,7 @@ if(BUILD_RABBITMQ)
endif()
add_library(regexfilter SHARED regexfilter.c)
target_link_libraries(regexfilter log_manager ${PCRE2_LIBRARIES})
target_link_libraries(regexfilter log_manager)
add_dependencies(regexfilter pcre2)
set_target_properties(regexfilter PROPERTIES VERSION "1.1.0")
install(TARGETS regexfilter DESTINATION ${MAXSCALE_LIBDIR})

View File

@ -998,7 +998,7 @@ bool parse_rule_definition(FW_INSTANCE* instance, RULE* ruledef, char* rule, cha
{
bool escaped = false;
regex_t *re;
char* start, *str;
char* start;
tok = strtok_r(NULL, " ", saveptr);
char delim = '\'';
int n_char = 0;
@ -1052,20 +1052,13 @@ bool parse_rule_definition(FW_INSTANCE* instance, RULE* ruledef, char* rule, cha
goto retblock;
}
str = calloc(((tok - start) + 1), sizeof(char));
if (str == NULL)
{
MXS_ERROR("Fatal Error: malloc returned NULL.");
rval = false;
goto retblock;
}
char str[(tok - start) + 1];
re = (regex_t*) malloc(sizeof(regex_t));
if (re == NULL)
{
MXS_ERROR("Fatal Error: malloc returned NULL.");
rval = false;
free(str);
goto retblock;
}
@ -1083,7 +1076,6 @@ bool parse_rule_definition(FW_INSTANCE* instance, RULE* ruledef, char* rule, cha
ruledef->type = RT_REGEX;
ruledef->data = (void*) re;
}
free(str);
}
else if (strcmp(tok, "limit_queries") == 0)
@ -1277,11 +1269,8 @@ createInstance(char **options, FILTER_PARAMETER **params)
{
if (strcmp(params[i]->name, "rules") == 0)
{
if (filename)
{
free(filename);
}
filename = strdup(params[i]->value);
filename = params[i]->value;
break;
}
}
@ -1292,6 +1281,7 @@ createInstance(char **options, FILTER_PARAMETER **params)
if (strcmp(options[i], "ignorecase") == 0)
{
my_instance->regflags |= REG_ICASE;
break;
}
}
}
@ -1310,7 +1300,6 @@ createInstance(char **options, FILTER_PARAMETER **params)
MXS_ERROR("Error while opening rule file for firewall filter.");
hashtable_free(my_instance->htable);
free(my_instance);
free(filename);
return NULL;
}
@ -1360,16 +1349,15 @@ createInstance(char **options, FILTER_PARAMETER **params)
if (file_empty)
{
MXS_ERROR("dbfwfilter: File is empty: %s", filename);
free(filename);
err = true;
goto retblock;
}
fclose(file);
free(filename);
/**Apply the rules to users*/
ptr = my_instance->userstrings;
my_instance->userstrings = NULL;
if (ptr == NULL)
{
@ -2205,6 +2193,10 @@ bool parse_at_times(const char** tok, char** saveptr, RULE* ruledef)
{
ruledef->active = tr;
}
else
{
free(tr);
}
return success;
}

View File

@ -502,7 +502,7 @@ createInstance(char **options, FILTER_PARAMETER **params)
MQ_INSTANCE *my_instance;
int paramcount = 0, parammax = 64, i = 0, x = 0, arrsize = 0;
FILTER_PARAMETER** paramlist;
char** arr;
char** arr = NULL;
char taskname[512];
if ((my_instance = calloc(1, sizeof(MQ_INSTANCE))))
@ -514,6 +514,8 @@ createInstance(char **options, FILTER_PARAMETER **params)
if ((my_instance->conn = amqp_new_connection()) == NULL)
{
free(paramlist);
free(my_instance);
return NULL;
}
my_instance->channel = 1;
@ -610,6 +612,10 @@ createInstance(char **options, FILTER_PARAMETER **params)
if (arrsize > 0)
{
for (int x = 0; x < arrsize; x++)
{
free(arr[x]);
}
free(arr);
}
arrsize = 0;
@ -777,7 +783,11 @@ createInstance(char **options, FILTER_PARAMETER **params)
snprintf(taskname, 511, "mqtask%d", atomic_add(&hktask_id, 1));
hktask_add(taskname, sendMessage, (void*) my_instance, 5);
for (int x = 0; x < arrsize; x++)
{
free(arr[x]);
}
free(arr);
}
return(FILTER *) my_instance;
}
@ -834,7 +844,7 @@ void sendMessage(void* data)
{
MQ_INSTANCE *instance = (MQ_INSTANCE*) data;
mqmessage *tmp;
int err_num;
int err_num = AMQP_STATUS_OK;
spinlock_acquire(&instance->rconn_lock);
if (instance->conn_stat != AMQP_STATUS_OK)

View File

@ -391,7 +391,7 @@ routeQuery(FILTER *instance, void *session, GWBUF *queue)
REGEX_SESSION *my_session = (REGEX_SESSION *) session;
char *sql, *newsql;
if (modutil_is_SQL(queue))
if (my_session->active && modutil_is_SQL(queue))
{
if (queue->next != NULL)
{

View File

@ -611,7 +611,7 @@ monitorMain(void *arg)
evtype = mon_get_event_type(ptr);
if (isGaleraEvent(evtype))
{
MXS_INFO("Server changed state: %s[%s:%u]: %s",
MXS_NOTICE("Server changed state: %s[%s:%u]: %s",
ptr->server->unique_name,
ptr->server->name, ptr->server->port,
mon_get_event_name(ptr));

View File

@ -652,7 +652,7 @@ monitorMain(void *arg)
evtype = mon_get_event_type(ptr);
if (isMySQLEvent(evtype))
{
MXS_INFO("Server changed state: %s[%s:%u]: %s",
MXS_NOTICE("Server changed state: %s[%s:%u]: %s",
ptr->server->unique_name,
ptr->server->name, ptr->server->port,
mon_get_event_name(ptr));

View File

@ -946,7 +946,7 @@ monitorMain(void *arg)
evtype = mon_get_event_type(ptr);
if (isMySQLEvent(evtype))
{
MXS_INFO("Server changed state: %s[%s:%u]: %s",
MXS_NOTICE("Server changed state: %s[%s:%u]: %s",
ptr->server->unique_name,
ptr->server->name, ptr->server->port,
mon_get_event_name(ptr));

View File

@ -417,7 +417,7 @@ monitorMain(void *arg)
evtype = mon_get_event_type(ptr);
if (isNdbEvent(evtype))
{
MXS_INFO("Server changed state: %s[%s:%u]: %s",
MXS_NOTICE("Server changed state: %s[%s:%u]: %s",
ptr->server->unique_name,
ptr->server->name, ptr->server->port,
mon_get_event_name(ptr));

View File

@ -360,6 +360,10 @@ int n_connect = 0;
}
n_connect++;
}
else
{
close(so);
}
}
}

View File

@ -353,9 +353,12 @@ static int gw_read_backend_event(DCB *dcb) {
}
spinlock_release(&dcb->delayqlock);
/* try reload users' table for next connection */
if (backend_protocol->protocol_auth_state ==
MYSQL_AUTH_FAILED)
/* Only reload the users table if authentication failed and the
* client session is not stopping. It is possible that authentication
* fails because the client has closed the connection before all
* backends have done authentication. */
if (backend_protocol->protocol_auth_state == MYSQL_AUTH_FAILED &&
dcb->session->state != SESSION_STATE_STOPPING)
{
service_refresh_users(dcb->session->service);
}
@ -694,12 +697,15 @@ gw_MySQLWrite_backend(DCB *dcb, GWBUF *queue)
switch (backend_protocol->protocol_auth_state) {
case MYSQL_HANDSHAKE_FAILED:
case MYSQL_AUTH_FAILED:
MXS_ERROR("Unable to write to backend '%s' due to "
"%s failure. Server in state %s.",
dcb->server->unique_name,
backend_protocol->protocol_auth_state == MYSQL_HANDSHAKE_FAILED ?
"handshake" : "authentication",
STRSRVSTATUS(dcb->server));
if (dcb->session->state != SESSION_STATE_STOPPING)
{
MXS_ERROR("Unable to write to backend '%s' due to "
"%s failure. Server in state %s.",
dcb->server->unique_name,
backend_protocol->protocol_auth_state == MYSQL_HANDSHAKE_FAILED ?
"handshake" : "authentication",
STRSRVSTATUS(dcb->server));
}
/** Consume query buffer */
while ((queue = gwbuf_consume(
queue,

View File

@ -20,7 +20,7 @@ add_executable(maxbinlogcheck maxbinlogcheck.c blr_file.c blr_cache.c blr_master
${CMAKE_SOURCE_DIR}/log_manager/log_manager.cc ${CMAKE_SOURCE_DIR}/server/core/externcmd.c)
target_link_libraries(maxbinlogcheck utils ssl pthread ${LZMA_LINK_FLAGS} ${EMBEDDED_LIB} ${PCRE_LINK_FLAGS} aio rt crypt dl crypto inih z m stdc++ ${CURL_LIBRARIES})
target_link_libraries(maxbinlogcheck utils ssl pthread ${LZMA_LINK_FLAGS} ${EMBEDDED_LIB} ${PCRE_LINK_FLAGS} aio rt crypt dl crypto inih z m stdc++ ${CURL_LIBRARIES} ${PCRE2_LIBRARIES})
install(TARGETS maxbinlogcheck DESTINATION bin)

View File

@ -156,6 +156,7 @@ MAXINFO_TREE *col, *table;
{
/** Unknown token after RESTART MONITOR|SERVICE */
*parse_error = PARSE_SYNTAX_ERROR;
free(text);
free_tree(tree);
return NULL;
}
@ -376,7 +377,10 @@ int i;
}
if (s1 == s2)
{
*text = NULL;
return NULL;
}
*text = strndup(s1, s2 - s1);
for (i = 0; keywords[i].text; i++)
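The fix above plugs a leak of `text` on the syntax-error path and sets the out-parameter to NULL for an empty span instead of leaving it untouched. The extraction itself, sketched as a hypothetical helper:

```c
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <string.h>

/* Copy the characters in [s1, s2) into a fresh heap string, writing
 * the result through the out-parameter. An empty span now yields an
 * explicit NULL rather than stale contents. */
static char *span_dup(const char *s1, const char *s2, char **text)
{
    if (s1 == s2)
    {
        *text = NULL;
        return NULL;
    }

    *text = strndup(s1, s2 - s1);
    return *text;
}
```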

View File

@ -569,8 +569,8 @@ char* get_shard_target_name(ROUTER_INSTANCE* router, ROUTER_CLIENT_SES* client,
query = modutil_get_SQL(buffer);
if((tmp = strcasestr(query,"from")))
{
char* tok = strtok(tmp, " ;");
tok = strtok(NULL," ;");
char *saved, *tok = strtok_r(tmp, " ;", &saved);
tok = strtok_r(NULL, " ;", &saved);
ss_dassert(tok != NULL);
tmp = (char*) hashtable_fetch(ht, tok);
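`strtok` keeps its scan position in hidden static state, so concurrent router sessions could corrupt each other's tokenizing; `strtok_r` stores that state in a caller-owned pointer. A minimal illustration of the reentrant form:

```c
#define _POSIX_C_SOURCE 200809L
#include <string.h>

/* Count delimiter-separated tokens with strtok_r; the saved pointer
 * keeps all tokenizer state on this thread's stack. */
static int count_tokens(char *str, const char *delim)
{
    int n = 0;
    char *saved;

    for (char *tok = strtok_r(str, delim, &saved); tok != NULL;
         tok = strtok_r(NULL, delim, &saved))
    {
        n++;
    }

    return n;
}
```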
@ -652,49 +652,6 @@ bool check_shard_status(ROUTER_INSTANCE* router, char* shard)
return rval;
}
/**
* Turn a string into an array of strings. The last element in the list is a NULL
* pointer.
* @param str String to tokenize
* @return Pointer to an array of strings.
*/
char** tokenize_string(char* str)
{
char *tok;
char **list = NULL;
int sz = 2, count = 0;
tok = strtok(str,", ");
if(tok == NULL)
return NULL;
list = (char**)malloc(sizeof(char*)*(sz));
while(tok)
{
if(count + 1 >= sz)
{
char** tmp = realloc(list,sizeof(char*)*(sz*2));
if(tmp == NULL)
{
char errbuf[STRERROR_BUFLEN];
MXS_ERROR("realloc returned NULL: %s.",
strerror_r(errno, errbuf, sizeof(errbuf)));
free(list);
return NULL;
}
list = tmp;
sz *= 2;
}
list[count] = strdup(tok);
count++;
tok = strtok(NULL,", ");
}
list[count] = NULL;
return list;
}
/**
* A fake DCB read function used to forward queued queries.
* @param dcb Internal DCB used by the router session
@ -1254,7 +1211,7 @@ static void* newSession(
if(db[0] != 0x0)
{
/* Store the database the client is connecting to */
strncpy(client_rses->connect_db,db,MYSQL_DATABASE_MAXLEN+1);
snprintf(client_rses->connect_db, MYSQL_DATABASE_MAXLEN + 1, "%s", db);
}
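`strncpy` leaves the destination unterminated when the source is at least as long as the buffer, which is why the change above switches to `snprintf`: it always writes a terminating NUL and truncates safely. A minimal sketch of the safer copy:

```c
#include <stdio.h>

/* Truncation-safe string copy: unlike strncpy, snprintf guarantees
 * a terminating NUL even when src does not fit in dst. */
static void safe_copy(char *dst, size_t size, const char *src)
{
    snprintf(dst, size, "%s", src);
}
```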
@ -3797,7 +3754,7 @@ static bool route_session_write(
unsigned char packet_type,
skygw_query_type_t qtype)
{
bool succp;
bool succp = false;
rses_property_t* prop;
backend_ref_t* backend_ref;
int i;

View File

@ -482,8 +482,8 @@ get_shard_target_name(ROUTER_INSTANCE* router, ROUTER_CLIENT_SES* client, GWBUF*
query = modutil_get_SQL(buffer);
if((tmp = strcasestr(query,"from")))
{
char* tok = strtok(tmp, " ;");
tok = strtok(NULL," ;");
char *saved, *tok = strtok_r(tmp, " ;", &saved);
tok = strtok_r(NULL, " ;", &saved);
ss_dassert(tok != NULL);
tmp = (char*) hashtable_fetch(ht, tok);
if(tmp)
@ -542,44 +542,6 @@ get_shard_target_name(ROUTER_INSTANCE* router, ROUTER_CLIENT_SES* client, GWBUF*
return rval;
}
char**
tokenize_string(char* str)
{
char *tok;
char **list = NULL;
int sz = 2, count = 0;
tok = strtok(str, ", ");
if(tok == NULL)
return NULL;
list = (char**) malloc(sizeof(char*)*(sz));
while(tok)
{
if(count + 1 >= sz)
{
char** tmp = realloc(list, sizeof(char*)*(sz * 2));
if(tmp == NULL)
{
char errbuf[STRERROR_BUFLEN];
MXS_ERROR("realloc returned NULL: %s.",
strerror_r(errno, errbuf, sizeof(errbuf)));
free(list);
return NULL;
}
list = tmp;
sz *= 2;
}
list[count] = strdup(tok);
count++;
tok = strtok(NULL, ", ");
}
list[count] = NULL;
return list;
}
/**
* This is the function used to channel replies from a subservice up to the client.
* The values passed are set in the newSession function.
@ -1497,7 +1459,7 @@ gen_show_dbs_response(ROUTER_INSTANCE* router, ROUTER_CLIENT_SES* client)
rval = gwbuf_append(rval, last_packet);
rval = gwbuf_make_contiguous(rval);
hashtable_iterator_free(iter);
return rval;
}

View File

@ -1,3 +1,3 @@
add_library(utils skygw_utils.cc ../server/core/atomic.c)
target_link_libraries(utils stdc++ ${PCRE2_LIBRARIES})
target_link_libraries(utils stdc++)
add_dependencies(utils pcre2)