Merge branch '2.2' into develop

This commit is contained in:
Markus Mäkelä 2018-07-26 11:27:09 +03:00
commit 6c59da77fb
No known key found for this signature in database
GPG Key ID: 72D48FCE664F7B19
29 changed files with 409 additions and 417 deletions

View File

@ -38,6 +38,7 @@ For more details, please refer to:
the master. There is also limited capability for rejoining nodes.
For more details, please refer to:
* [MariaDB MaxScale 2.2.12 Release Notes](Release-Notes/MaxScale-2.2.12-Release-Notes.md)
* [MariaDB MaxScale 2.2.11 Release Notes](Release-Notes/MaxScale-2.2.11-Release-Notes.md)
* [MariaDB MaxScale 2.2.10 Release Notes](Release-Notes/MaxScale-2.2.10-Release-Notes.md)
* [MariaDB MaxScale 2.2.9 Release Notes](Release-Notes/MaxScale-2.2.9-Release-Notes.md)

View File

@ -43,6 +43,7 @@ listener | A listener is the network endpoint that is used to listen
connection failover | When a connection currently being used between MariaDB MaxScale and the database server fails, a replacement will be automatically created to another server by MariaDB MaxScale without client intervention
backend database | A term used to refer to a database that sits behind MariaDB MaxScale and is accessed by applications via MariaDB MaxScale.
filter | A module that can be placed between the client and the MariaDB MaxScale router module. All client data passes through the filter module and may be examined or modified by the filter modules. Filters may be chained together to form processing pipelines.
REST API | HTTP administrative interface
## Configuration
@ -746,17 +747,16 @@ configuration file.
#### `admin_host`
The network interface where the HTTP admin interface listens on. The default
value is the IPv4 address `127.0.0.1` which only listens for local connections.
The network interface where the REST API listens. The default value is the
IPv4 address `127.0.0.1`, which only listens for local connections.
#### `admin_port`
The port where the HTTP admin interface listens on. The default value is port
8989.
The port where the REST API listens. The default value is port 8989.
#### `admin_auth`
Enable HTTP admin interface authentication using HTTP Basic Access
Enable REST API authentication using HTTP Basic Access
authentication. This is not a secure method of authentication without HTTPS, but
it does add a small layer of security. This option is enabled by default.
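
As a minimal sketch of how these settings fit together, the three parameters can
be placed in the global `[maxscale]` section of the configuration file; the
values shown here are simply the documented defaults:

```
[maxscale]
admin_host=127.0.0.1
admin_port=8989
admin_auth=true
```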

View File

@ -229,7 +229,7 @@ master), _switchover_ (swapping a slave with a running master) and _rejoin_
(joining a standalone server to the cluster). The features and the parameters
controlling them are presented in this section.
These features require that the monitor user (`user`) has the SUPER privilege.
These features require that the monitor user (`user`) has the SUPER and RELOAD privileges.
In addition, the monitor needs to know which username and password a slave
should use when starting replication. These are given in `replication_user` and
`replication_password`.
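
For reference, granting these privileges to a monitor user named `monitor_user`
(an illustrative name; the full user creation statements appear in the MariaDB
Monitor tutorial updated in this same commit) looks like this:

```sql
GRANT SUPER, RELOAD ON *.* TO 'monitor_user'@'%';
```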
@ -357,7 +357,7 @@ error is logged and automatic failover is disabled. If this happens, the cluster
must be fixed manually and the failover needs to be re-enabled via the REST API
or MaxAdmin.
The monitor user must have the SUPER privilege for failover to work.
The monitor user must have the SUPER and RELOAD privileges for failover to work.
#### `auto_rejoin`

View File

@ -692,9 +692,10 @@ _shutdown service_ command. This will not affect the connections that are
already in place for a service, but will stop any new connections from being
accepted.
Stopping a service will not cause new connections to be rejected. All new
connections that were created while the service was stopped will be processed
normally once the service is restarted.
Connection requests are not processed while a service is stopped. New connection
requests will remain in a queue that is processed once the service is
restarted. A client application will see old connections work normally, but new
connections will remain unresponsive as long as the service is stopped.
```
MaxScale> shutdown service RWSplit

View File

@ -0,0 +1,44 @@
# MariaDB MaxScale 2.2.12 Release Notes
Release 2.2.12 is a GA release.
This document describes the changes in release 2.2.12, when compared to
release 2.2.11.
For any problems you encounter, please consider submitting a bug
report on [our Jira](https://jira.mariadb.org/projects/MXS).
## New Features
### Configuration Exporting
The runtime configuration can now be dumped into a file with the
`--export-config` command line option. This allows changes done at runtime to be
collected into a single file for easier exporting.
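
A hedged usage sketch (check `maxscale --help` for the exact option syntax; the
target file name below is arbitrary):

```
maxscale --export-config=/tmp/maxscale-runtime.cnf
```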
## Bug fixes
* [MXS-1985](https://jira.mariadb.org/browse/MXS-1985) Concurrent KILL commands cause deadlock
* [MXS-1977](https://jira.mariadb.org/browse/MXS-1977) Maxscale 2.2.6 memory leak
* [MXS-1949](https://jira.mariadb.org/browse/MXS-1949) Warning for user load failure logged even when service has no users
* [MXS-1942](https://jira.mariadb.org/browse/MXS-1942) maxctrl --version is not helpful
## Known Issues and Limitations
There are some limitations and known issues within this version of MaxScale.
For more information, please refer to the [Limitations](../About/Limitations.md) document.
## Packaging
RPM and Debian packages are provided for the supported Linux distributions.
Packages can be downloaded [here](https://mariadb.com/downloads/mariadb-tx/maxscale).
## Source Code
The source code of MaxScale is tagged at GitHub with a tag that is identical
to the version of MaxScale. For instance, the tag of version X.Y.Z of MaxScale
is `maxscale-X.Y.Z`. Further, the default branch is always the latest GA version
of MaxScale.
The source code is available [here](https://github.com/mariadb-corporation/MaxScale).
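
For example, a checkout of this release's sources would look roughly like this:

```
git clone https://github.com/mariadb-corporation/MaxScale
cd MaxScale
git checkout maxscale-2.2.12
```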

View File

@ -27,7 +27,13 @@ the monitor waits between each monitoring loop.
The monitor user requires the REPLICATION CLIENT privilege to do basic
monitoring. To create a user with the proper grants, execute the following SQL.
```
```sql
CREATE USER 'monitor_user'@'%' IDENTIFIED BY 'my_password';
GRANT REPLICATION CLIENT on *.* to 'monitor_user'@'%';
```
**Note:** If the automatic failover of the MariaDB Monitor will be used, the user
will require additional grants. Execute the following SQL to grant them.
```sql
GRANT SUPER, RELOAD on *.* to 'monitor_user'@'%';
```

View File

@ -25,7 +25,8 @@ when you select the distribution you are downloading from.
After installation, we need to create a database user. We do this as we need to
connect to the backend databases to retrieve the user authentication
information. To create this user, execute the following SQL commands.
information. To create this user, execute the following SQL commands on
the master server of your database cluster.
```
CREATE USER 'maxscale'@'%' IDENTIFIED BY 'maxscale_pw';

View File

@ -363,16 +363,13 @@ extern char *gwbuf_get_property(GWBUF *buf, const char *name);
/**
* Convert a chain of GWBUF structures into a single GWBUF structure
*
* @param orig The chain to convert
* @param orig The chain to convert, must not be used after the function call
*
* @return NULL if @c buf is NULL or if a memory allocation fails,
* @c buf if @c buf already is contiguous, and otherwise
a contiguous copy of @c buf.
* @return A contiguous version of @c buf.
*
* @attention If a non-NULL value is returned, the @c buf should no
* longer be used as it may have been freed.
* @attention Never returns NULL, memory allocation failures abort the process
*/
extern GWBUF *gwbuf_make_contiguous(GWBUF *buf);
extern GWBUF* gwbuf_make_contiguous(GWBUF *buf);
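
A brief caller fragment illustrating the updated contract (hypothetical code, not
part of this commit): the function never returns NULL, so the only thing a caller
must remember is to stop using the original chain and use the returned pointer.

```
#include <maxscale/buffer.h>

// Flatten a possibly chained GWBUF; allocation failure aborts the process.
static GWBUF* flatten(GWBUF* chain)
{
    // The chain passed in must not be used after this call; only the
    // returned, contiguous buffer remains valid.
    return gwbuf_make_contiguous(chain);
}
```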
/**
* Add a buffer object to GWBUF buffer.
@ -401,11 +398,12 @@ extern void dprintAllBuffers(void *pdcb);
#endif
/**
* Debug function for dumping buffer contents to INFO log
* Debug function for dumping buffer contents to log
*
* @param buffer Buffer to dump
* @param buffer Buffer to dump
* @param log_level Log priority where the message is written
*/
void gwbuf_hexdump(GWBUF* buffer);
void gwbuf_hexdump(GWBUF* buffer, int log_level);
/**
* Return pointer of the byte at offset from start of chained buffer

View File

@ -357,6 +357,8 @@ static inline void dcb_readq_set(DCB *dcb, GWBUF *buffer)
*
* @deprecated You should not use this function, use dcb_foreach_parallel instead
*
* @warning This must only be called from the main thread, otherwise deadlocks occur
*
* @param func Function to call. The function should return @c true to continue iteration
* and @c false to stop iteration earlier. The first parameter is a DCB and the second
* is the value of @c data that the user provided.
@ -366,21 +368,15 @@ static inline void dcb_readq_set(DCB *dcb, GWBUF *buffer)
bool dcb_foreach(bool (*func)(DCB *dcb, void *data), void *data);
/**
* @brief Call a function for each connected DCB
* @brief Call a function for each connected DCB on the current worker
*
* @note This function can call @c func from multiple threads at one time.
* @param func Function to call. The function should return @c true to continue
* iteration and @c false to stop iteration earlier. The first parameter
* is the current DCB.
*
* @param func Function to call. The function should return @c true to continue iteration
* and @c false to stop iteration earlier. The first is a DCB and
* the second is this thread's value in the @c data array that
* the user provided.
*
* @param data Array of user provided data passed as the second parameter to @c func.
The array must have more space for pointers than the return
* value of `config_threadcount()`. The value passed to @c func will
* be the value of the array at the index of the current thread's ID.
* @param data User provided data passed as the second parameter to @c func
*/
void dcb_foreach_parallel(bool (*func)(DCB *dcb, void *data), void **data);
void dcb_foreach_local(bool (*func)(DCB *dcb, void *data), void *data);
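
A hedged sketch of how the new per-worker variant might be called (the callback
and counter below are hypothetical and only illustrate the calling convention):

```
#include <maxscale/dcb.h>

// Count the DCBs owned by the current worker; `data` points to an int counter.
static bool count_dcb(DCB* dcb, void* data)
{
    int* count = static_cast<int*>(data);
    ++(*count);
    return true; // keep iterating
}

// Must be called on a routing worker thread.
static int count_local_dcbs()
{
    int n = 0;
    dcb_foreach_local(count_dcb, &n);
    return n;
}
```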
/**
* @brief Return the port number this DCB is connected to

View File

@ -37,8 +37,8 @@ public:
*
* @return New virtual client or NULL on error
*/
static LocalClient* create(MXS_SESSION* session, SERVICE* service);
static LocalClient* create(MXS_SESSION* session, SERVER* server);
static LocalClient* create(MYSQL_session* session, MySQLProtocol* proto, SERVICE* service);
static LocalClient* create(MYSQL_session* session, MySQLProtocol* proto, SERVER* server);
/**
* Queue a new query for execution
@ -57,8 +57,8 @@ public:
void self_destruct();
private:
static LocalClient* create(MXS_SESSION* session, const char* ip, uint64_t port);
LocalClient(MXS_SESSION* session, int fd);
static LocalClient* create(MYSQL_session* session, MySQLProtocol* proto, const char* ip, uint64_t port);
LocalClient(MYSQL_session* session, MySQLProtocol* proto, int fd);
static uint32_t poll_handler(struct mxs_poll_data* data, void* worker, uint32_t events);
void process(uint32_t events);
GWBUF* read_complete_packet();

View File

@ -4,7 +4,7 @@ if (BUILD_MAXCTRL)
if (NPM_FOUND AND NODEJS_FOUND AND NODEJS_VERSION VERSION_GREATER "6.0.0")
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/lib/version.js.in ${CMAKE_CURRENT_BINARY_DIR}/lib/version.js @ONLY)
include(configure_version.cmake)
add_custom_command(OUTPUT ${CMAKE_BINARY_DIR}/maxctrl/maxctrl
COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/build.sh ${CMAKE_SOURCE_DIR}

View File

@ -12,6 +12,7 @@ if [ "$PWD" != "$src" ]
then
# Copy sources to working directory
cp -r -t $PWD/maxctrl $src/maxctrl/*
cp -r -t $PWD/ $src/VERSION*.cmake
fi
cd $PWD/maxctrl

View File

@ -0,0 +1,2 @@
include(../VERSION22.cmake)
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/lib/version.js.in ${CMAKE_CURRENT_BINARY_DIR}/lib/version.js @ONLY)

View File

@ -5,7 +5,8 @@
"repository": "https://github.com/mariadb-corporation/MaxScale",
"main": "maxctrl.js",
"scripts": {
"test": "nyc mocha --timeout 15000 --slow 10000"
"test": "nyc mocha --timeout 15000 --slow 10000",
"preinstall": "cmake -P configure_version.cmake"
},
"keywords": [
"maxscale"

View File

@ -1040,6 +1040,10 @@ add_test_executable(mxs1849_table_sharding.cpp mxs1849_table_sharding mxs1849_ta
# https://jira.mariadb.org/browse/MXS-1961
add_test_executable(mxs1961_standalone_rejoin.cpp mxs1961_standalone_rejoin mxs1961_standalone_rejoin LABELS REPL_BACKEND)
# MXS-1985: MaxScale hangs on concurrent KILL processing
# https://jira.mariadb.org/browse/MXS-1985
add_test_executable(mxs1985_kill_hang.cpp mxs1985_kill_hang replication LABELS REPL_BACKEND)
configure_file(templates.h.in templates.h @ONLY)
include(CTest)

View File

@ -3,9 +3,7 @@
## Pre-release Checklist
* Create new release notes and add all fixed bugs; use a previous one as a template
* Update the link to the latest release notes in Documentation-Contents.md
* Add link to release notes and document major changes in Changelog.md
* Add link to release notes in the Upgrading guide
## 1. Tag

View File

@ -0,0 +1,55 @@
/**
* MXS-1985: MaxScale hangs on concurrent KILL processing
*/
#include "testconnections.h"
#include <atomic>
#include <thread>
#include <iostream>
using namespace std;
static atomic<bool> running{true};
int main(int argc, char *argv[])
{
TestConnections test(argc, argv);
vector<thread> threads;
for (int i = 0; i < 20; i++)
{
threads.emplace_back([&, i]()
{
while (running)
{
MYSQL* c = test.maxscales->open_rwsplit_connection();
// It doesn't really matter if the connection ID exists, this is just a
// handy way of generating cross-thread communication.
for (auto&& a: get_result(c, "SELECT id FROM information_schema.processlist"
" WHERE user like '%skysql%'"))
{
if (execute_query_silent(c, std::string("KILL " + a[0]).c_str()))
{
break;
}
}
mysql_close(c);
}
});
}
sleep(10);
running = false;
// If MaxScale hangs, at least one thread will not return in time
test.set_timeout(30);
for (auto&& a: threads)
{
a.join();
}
return test.global_result;
}

View File

@ -9,228 +9,89 @@
* - check Maxscale is alive
*/
#include "testconnections.h"
#include "sql_t1.h"
//#include "get_com_select_insert.h"
typedef struct
#include <atomic>
#include <thread>
#include <vector>
static std::atomic<bool> running{true};
void query_thread(TestConnections* t)
{
int exit_flag;
int thread_id;
long i;
int rwsplit_only;
TestConnections * Test;
MYSQL * conn1;
MYSQL * conn2;
MYSQL * conn3;
} openclose_thread_data;
void *query_thread1( void *ptr );
TestConnections& test = *t; // For some reason CentOS 7 doesn't like passing references to std::thread
std::string sql(1000000, '\0');
create_insert_string(&sql[0], 1000, 2);
MYSQL* conn1 = test.maxscales->open_rwsplit_connection();
MYSQL* conn2 = test.maxscales->open_readconn_master_connection();
test.add_result(mysql_errno(conn1), "Error connecting to readwritesplit: %s", mysql_error(conn1));
test.add_result(mysql_errno(conn2), "Error connecting to readconnroute: %s", mysql_error(conn2));
test.try_query(conn1, "SET SESSION SQL_LOG_BIN=0");
test.try_query(conn2, "SET SESSION SQL_LOG_BIN=0");
while (running)
{
test.try_query(conn1, "%s", sql.c_str());
test.try_query(conn2, "%s", sql.c_str());
}
mysql_close(conn1);
mysql_close(conn2);
}
int main(int argc, char *argv[])
{
TestConnections * Test = new TestConnections(argc, argv);
Test->stop_timeout();
TestConnections test(argc, argv);
int threads_num = 4;
openclose_thread_data data[threads_num];
int master = test.maxscales->find_master_maxadmin(test.galera);
test.tprintf("Master: %d", master);
std::set<int> slaves{0, 1, 2, 3};
slaves.erase(master);
int i;
int run_time = 100;
test.maxscales->connect();
test.try_query(test.maxscales->conn_rwsplit[0], "DROP TABLE IF EXISTS t1");
test.try_query(test.maxscales->conn_rwsplit[0], "CREATE TABLE t1 (x1 int, fl int)");
test.maxscales->disconnect();
if (Test->smoke)
std::vector<std::thread> threads;
for (int i = 0; i < 4; i++)
{
run_time = 10;
threads.emplace_back(query_thread, &test);
}
for (i = 0; i < threads_num; i++)
for (auto&& i : slaves)
{
data[i].i = 0;
data[i].exit_flag = 0;
data[i].Test = Test;
data[i].rwsplit_only = 1;
data[i].thread_id = i;
test.tprintf("Blocking node %d", i);
test.galera->block_node(i);
test.maxscales->wait_for_monitor();
}
test.tprintf("Unblocking nodes\n");
pthread_t thread1[threads_num];
//Test->repl->flush_hosts();
Test->set_timeout(20);
int master = Test->maxscales->find_master_maxadmin(Test->galera);
Test->stop_timeout();
Test->tprintf(("Master is %d\n"), master);
int k = 0;
int x = 0;
int slaves[2];
while (k < 2 )
for (auto&& i : slaves)
{
if (x != master)
{
slaves[k] = x;
k++;
x++;
}
else
{
x++;
}
}
Test->tprintf(("Slave1 is %d\n"), slaves[0]);
Test->tprintf(("Slave2 is %d\n"), slaves[1]);
Test->set_timeout(20);
Test->repl->connect();
Test->maxscales->connect_maxscale(0);
Test->set_timeout(20);
create_t1(Test->maxscales->conn_rwsplit[0]);
Test->repl->execute_query_all_nodes((char *) "set global max_connections = 2000;");
Test->set_timeout(20);
Test->try_query(Test->maxscales->conn_rwsplit[0], (char *) "DROP TABLE IF EXISTS t1");
Test->try_query(Test->maxscales->conn_rwsplit[0], (char *) "CREATE TABLE t1 (x1 int, fl int)");
for (i = 0; i < threads_num; i++)
{
data[i].rwsplit_only = 1;
}
/* Create independent threads each of them will execute function */
for (i = 0; i < threads_num; i++)
{
pthread_create(&thread1[i], NULL, query_thread1, &data[i]);
}
Test->tprintf("Threads are running %d seconds \n", run_time);
Test->set_timeout(3 * run_time + 60);
sleep(20);
sleep(run_time);
Test->tprintf("Blocking slave %d\n", slaves[0]);
Test->galera->block_node(slaves[0]);
sleep(run_time);
Test->galera->block_node(slaves[1]);
Test->tprintf("Blocking slave %d\n", slaves[1]);
sleep(run_time);
Test->tprintf("Unblocking slaves\n");
Test->galera->unblock_node(slaves[0]);
Test->galera->unblock_node(slaves[1]);
Test->set_timeout(120);
Test->tprintf("Waiting for all threads exit\n");
for (i = 0; i < threads_num; i++)
{
data[i].exit_flag = 1;
pthread_join(thread1[i], NULL);
Test->tprintf("exit %d\n", i);
test.galera->unblock_node(i);
}
Test->tprintf("all maxscales->routers[0] are involved, threads are running %d seconds more\n", run_time);
test.maxscales->wait_for_monitor();
for (i = 0; i < threads_num; i++)
running = false;
test.set_timeout(120);
test.tprintf("Waiting for all threads to exit");
for (auto&& a : threads)
{
data[i].rwsplit_only = 0;
}
for (i = 0; i < threads_num; i++)
{
pthread_create(&thread1[i], NULL, query_thread1, &data[i]);
a.join();
}
Test->set_timeout(3 * run_time + 60);
sleep(20);
sleep(run_time);
Test->tprintf("Blocking node %d\n", slaves[0]);
Test->galera->block_node(slaves[0]);
sleep(run_time);
Test->tprintf("Blocking node %d\n", slaves[1]);
Test->galera->block_node(slaves[1]);
sleep(run_time);
Test->tprintf("Unblocking nodes\n");
Test->galera->unblock_node(slaves[0]);
Test->galera->unblock_node(slaves[1]);
test.maxscales->connect();
execute_query(test.maxscales->conn_rwsplit[0], "DROP TABLE t1");
test.maxscales->disconnect();
Test->set_timeout(120);
Test->tprintf("Waiting for all threads exit\n");
for (i = 0; i < threads_num; i++)
{
data[i].exit_flag = 1;
pthread_join(thread1[i], NULL);
}
sleep(5);
Test->set_timeout(60);
Test->tprintf("set global max_connections = 100 for all backends\n");
Test->repl->execute_query_all_nodes((char *) "set global max_connections = 100;");
Test->tprintf("Drop t1\n");
Test->try_query(Test->maxscales->conn_rwsplit[0], (char *) "DROP TABLE IF EXISTS t1;");
Test->maxscales->close_maxscale_connections(0);
Test->tprintf("Checking if Maxscale alive\n");
Test->check_maxscale_alive(0);
//Test->tprintf("Checking log for unwanted errors\n");
//Test->check_log_err(0, (char *) "due to authentication failure", false);
//Test->check_log_err(0, (char *) "fatal signal 11", false);
//Test->check_log_err(0, (char *) "due to handshake failure", false);
//Test->check_log_err(0, (char *) "Refresh rate limit exceeded for load of users' table", false);
int rval = Test->global_result;
delete Test;
return rval;
}
void *query_thread1( void *ptr )
{
openclose_thread_data * data = (openclose_thread_data *) ptr;
char sql[1000000];
sleep(data->thread_id);
create_insert_string(sql, 1000, 2);
data->conn1 = data->Test->maxscales->open_rwsplit_connection(0);
if ((data->conn1 == NULL) || (mysql_errno(data->conn1) != 0 ))
{
data->Test->add_result(1, "Error connecting to RWSplit\n");
return NULL;
}
data->Test->try_query(data->conn1, (char *) "SET SESSION SQL_LOG_BIN=0;");
if (data->rwsplit_only == 0)
{
data->conn2 = data->Test->maxscales->open_readconn_master_connection(0);
if ((data->conn2 == NULL) || (mysql_errno(data->conn2) != 0 ))
{
data->Test->add_result(1, "Error connecting to ReadConn Master\n");
return NULL;
}
data->Test->try_query(data->conn2, (char *) "SET SESSION SQL_LOG_BIN=0;");
}
while (data->exit_flag == 0)
{
if (data->Test->try_query(data->conn1, "%s", sql))
{
data->Test->add_result(1, "Query to ReadConn Master failed\n");
return NULL;
}
if (data->rwsplit_only == 0)
{
if (data->Test->try_query(data->conn2, "%s", sql))
{
data->Test->add_result(1, "Query to RWSplit failed\n");
return NULL;
}
}
data->i++;
}
if (data->conn1 != NULL)
{
mysql_close(data->conn1);
}
if (data->rwsplit_only == 0)
{
if (data->conn2 != NULL)
{
mysql_close(data->conn2);
}
}
return NULL;
return test.global_result;
}

View File

@ -106,8 +106,10 @@ int Client::process(string url, string method, const char* upload_data, size_t *
if (m_data.length() &&
(json = json_loadb(m_data.c_str(), m_data.size(), 0, &err)) == NULL)
{
MHD_Response *response =
MHD_create_response_from_buffer(0, NULL, MHD_RESPMEM_PERSISTENT);
string msg = string("{\"errors\": [ { \"detail\": \"Invalid JSON in request: ")
+ err.text + "\" } ] }";
MHD_Response *response = MHD_create_response_from_buffer(msg.size(), &msg[0],
MHD_RESPMEM_MUST_COPY);
MHD_queue_response(m_connection, MHD_HTTP_BAD_REQUEST, response);
MHD_destroy_response(response);
return MHD_YES;
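
For illustration, a REST API client that sends malformed JSON now receives an
HTTP 400 response whose body looks roughly like this (the exact `detail` text
comes from the JSON parser, so the message below is only a placeholder):

```
{"errors": [ { "detail": "Invalid JSON in request: <parser error message>" } ] }
```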

View File

@ -761,15 +761,11 @@ gwbuf_get_property(GWBUF *buf, const char *name)
return prop ? prop->value : NULL;
}
GWBUF *
gwbuf_make_contiguous(GWBUF *orig)
GWBUF* gwbuf_make_contiguous(GWBUF *orig)
{
GWBUF *newbuf;
uint8_t *ptr;
int len;
if (orig == NULL)
{
ss_info_dassert(!true, "gwbuf_make_contiguous: NULL buffer");
return NULL;
}
if (orig->next == NULL)
@ -777,20 +773,21 @@ gwbuf_make_contiguous(GWBUF *orig)
return orig;
}
if ((newbuf = gwbuf_alloc(gwbuf_length(orig))) != NULL)
{
newbuf->gwbuf_type = orig->gwbuf_type;
newbuf->hint = hint_dup(orig->hint);
ptr = GWBUF_DATA(newbuf);
GWBUF* newbuf = gwbuf_alloc(gwbuf_length(orig));
MXS_ABORT_IF_NULL(newbuf);
while (orig)
{
len = GWBUF_LENGTH(orig);
memcpy(ptr, GWBUF_DATA(orig), len);
ptr += len;
orig = gwbuf_consume(orig, len);
}
newbuf->gwbuf_type = orig->gwbuf_type;
newbuf->hint = hint_dup(orig->hint);
uint8_t* ptr = GWBUF_DATA(newbuf);
while (orig)
{
int len = GWBUF_LENGTH(orig);
memcpy(ptr, GWBUF_DATA(orig), len);
ptr += len;
orig = gwbuf_consume(orig, len);
}
return newbuf;
}
@ -888,7 +885,7 @@ static std::string dump_one_buffer(GWBUF* buffer)
return rval;
}
void gwbuf_hexdump(GWBUF* buffer)
void gwbuf_hexdump(GWBUF* buffer, int log_level)
{
std::stringstream ss;
@ -906,5 +903,5 @@ void gwbuf_hexdump(GWBUF* buffer)
n = 1024;
}
MXS_INFO("%.*s", n, ss.str().c_str());
MXS_LOG_MESSAGE(log_level, "%.*s", n, ss.str().c_str());
}

View File

@ -2953,45 +2953,23 @@ private:
bool dcb_foreach(bool(*func)(DCB *dcb, void *data), void *data)
{
ss_dassert(RoutingWorker::get_current() == RoutingWorker::get(RoutingWorker::MAIN));
SerialDcbTask task(func, data);
RoutingWorker::execute_serially(task);
return task.more();
}
/** Helper class for parallel iteration over all DCBs */
class ParallelDcbTask : public WorkerTask
void dcb_foreach_local(bool(*func)(DCB *dcb, void *data), void *data)
{
public:
int thread_id = RoutingWorker::get_current_id();
ParallelDcbTask(bool(*func)(DCB *, void *), void **data):
m_func(func),
m_data(data)
for (DCB *dcb = this_unit.all_dcbs[thread_id]; dcb; dcb = dcb->thread.next)
{
}
void execute(Worker& worker)
{
RoutingWorker& rworker = static_cast<RoutingWorker&>(worker);
int thread_id = rworker.id();
for (DCB *dcb = this_unit.all_dcbs[thread_id]; dcb; dcb = dcb->thread.next)
if (!func(dcb, data))
{
if (!m_func(dcb, m_data[thread_id]))
{
break;
}
break;
}
}
private:
bool(*m_func)(DCB *dcb, void *data);
void** m_data;
};
void dcb_foreach_parallel(bool(*func)(DCB *dcb, void *data), void **data)
{
ParallelDcbTask task(func, data);
RoutingWorker::execute_concurrently(task);
}
int dcb_get_port(const DCB *dcb)

View File

@ -488,6 +488,17 @@ int modutil_send_mysql_err_packet(DCB *dcb,
return dcb->func.write(dcb, buf);
}
// Helper function for debug assertions
static bool only_one_packet(GWBUF* buffer)
{
ss_dassert(buffer);
uint8_t header[4] = {};
gwbuf_copy_data(buffer, 0, MYSQL_HEADER_LEN, header);
size_t packet_len = gw_mysql_get_byte3(header);
size_t buffer_len = gwbuf_length(buffer);
return packet_len + MYSQL_HEADER_LEN == buffer_len;
}
/**
* Return the first packet from a buffer.
*
@ -534,6 +545,7 @@ GWBUF* modutil_get_next_MySQL_packet(GWBUF** p_readbuf)
}
}
ss_dassert(!packet || only_one_packet(packet));
return packet;
}

View File

@ -182,6 +182,16 @@ bool foreach_table(QueryClassifier& qc,
}
}
if (tables)
{
for (int i = 0; i < n_tables; i++)
{
MXS_FREE(tables[i]);
}
MXS_FREE(tables);
}
return rval;
}

View File

@ -101,7 +101,9 @@ TeeSession* TeeSession::create(Tee* my_instance, MXS_SESSION* session)
return NULL;
}
if ((client = LocalClient::create(session, my_instance->get_service())) == NULL)
if ((client = LocalClient::create((MYSQL_session*)session->client_dcb->data,
(MySQLProtocol*)session->client_dcb->protocol,
my_instance->get_service())) == NULL)
{
return NULL;
}

View File

@ -25,20 +25,15 @@
static const uint32_t poll_events = EPOLLIN | EPOLLOUT | EPOLLET | ERROR_EVENTS;
LocalClient::LocalClient(MXS_SESSION* session, int fd):
LocalClient::LocalClient(MYSQL_session* session, MySQLProtocol* proto, int fd):
m_state(VC_WAITING_HANDSHAKE),
m_sock(fd),
m_expected_bytes(0),
m_client({}),
m_protocol({}),
m_client(*session),
m_protocol(*proto),
m_self_destruct(false)
{
MXS_POLL_DATA::handler = LocalClient::poll_handler;
MySQLProtocol* client = (MySQLProtocol*)session->client_dcb->protocol;
m_protocol.charset = client->charset;
m_protocol.client_capabilities = client->client_capabilities;
m_protocol.extra_capabilities = client->extra_capabilities;
gw_get_shared_session_auth_info(session->client_dcb, &m_client);
}
LocalClient::~LocalClient()
@ -237,7 +232,7 @@ uint32_t LocalClient::poll_handler(struct mxs_poll_data* data, void* worker, uin
return 0;
}
LocalClient* LocalClient::create(MXS_SESSION* session, const char* ip, uint64_t port)
LocalClient* LocalClient::create(MYSQL_session* session, MySQLProtocol* proto, const char* ip, uint64_t port)
{
LocalClient* rval = NULL;
sockaddr_storage addr;
@ -245,7 +240,7 @@ LocalClient* LocalClient::create(MXS_SESSION* session, const char* ip, uint64_t
if (fd > 0 && (connect(fd, (struct sockaddr*)&addr, sizeof(addr)) == 0 || errno == EINPROGRESS))
{
LocalClient* relay = new (std::nothrow) LocalClient(session, fd);
LocalClient* relay = new (std::nothrow) LocalClient(session, proto, fd);
if (relay)
{
@ -271,7 +266,7 @@ LocalClient* LocalClient::create(MXS_SESSION* session, const char* ip, uint64_t
return rval;
}
LocalClient* LocalClient::create(MXS_SESSION* session, SERVICE* service)
LocalClient* LocalClient::create(MYSQL_session* session, MySQLProtocol* proto, SERVICE* service)
{
LocalClient* rval = NULL;
LISTENER_ITERATOR iter;
@ -282,7 +277,7 @@ LocalClient* LocalClient::create(MXS_SESSION* session, SERVICE* service)
if (listener->port > 0)
{
/** Pick the first network listener */
rval = create(session, "127.0.0.1", service->ports->port);
rval = create(session, proto, "127.0.0.1", service->ports->port);
break;
}
}
@ -290,7 +285,7 @@ LocalClient* LocalClient::create(MXS_SESSION* session, SERVICE* service)
return rval;
}
LocalClient* LocalClient::create(MXS_SESSION* session, SERVER* server)
LocalClient* LocalClient::create(MYSQL_session* session, MySQLProtocol* proto, SERVER* server)
{
return create(session, server->address, server->port);
return create(session, proto, server->address, server->port);
}

View File

@ -1619,17 +1619,6 @@ static bool reauthenticate_client(MXS_SESSION* session, GWBUF* packetbuf)
return rval;
}
// Helper function for debug assertions
static bool only_one_packet(GWBUF* buffer)
{
ss_dassert(buffer);
uint8_t header[4] = {};
gwbuf_copy_data(buffer, 0, MYSQL_HEADER_LEN, header);
size_t packet_len = gw_mysql_get_byte3(header);
size_t buffer_len = gwbuf_length(buffer);
return packet_len + MYSQL_HEADER_LEN == buffer_len;
}
/**
* Detect if buffer includes partial mysql packet or multiple packets.
* Store partial packet to dcb_readqueue. Send complete packets one by one
@ -1654,12 +1643,11 @@ static int route_by_statement(MXS_SESSION* session, uint64_t capabilities, GWBUF
// Process client request one packet at a time
packetbuf = modutil_get_next_MySQL_packet(p_readbuf);
// TODO: Do this only when RCAP_TYPE_CONTIGUOUS_INPUT is requested
packetbuf = gwbuf_make_contiguous(packetbuf);
if (packetbuf != NULL)
{
ss_dassert(only_one_packet(packetbuf));
// TODO: Do this only when RCAP_TYPE_CONTIGUOUS_INPUT is requested
packetbuf = gwbuf_make_contiguous(packetbuf);
CHK_GWBUF(packetbuf);
MySQLProtocol* proto = (MySQLProtocol*)session->client_dcb->protocol;
@ -1760,6 +1748,9 @@ static int route_by_statement(MXS_SESSION* session, uint64_t capabilities, GWBUF
rc = 0;
gwbuf_free(packetbuf);
packetbuf = NULL;
MXS_ERROR("User reauthentication failed for '%s'@'%s'",
session->client_dcb->user,
session->client_dcb->remote);
}
}

View File

@ -156,7 +156,10 @@ public:
bypass_whitespace();
if (is_set(m_pI))
// Check that there are enough characters to contain a SET keyword
bool long_enough = m_pEnd - m_pI > 3;
if (long_enough && is_set(m_pI))
{
rv = parse(pSql_mode);
}

View File

@ -19,7 +19,7 @@
#include <set>
#include <sstream>
#include <vector>
#include <map>
#include <maxscale/alloc.h>
#include <maxscale/clock.h>
@ -30,6 +30,7 @@
#include <maxscale/utils.h>
#include <maxscale/protocol/mariadb_client.hh>
#include <maxscale/poll.h>
#include <maxscale/routingworker.h>
uint8_t null_client_sha1[MYSQL_SCRAMBLE_LEN] = "";
@ -1309,121 +1310,106 @@ bool mxs_mysql_command_will_respond(uint8_t cmd)
cmd != MXS_COM_STMT_CLOSE;
}
typedef std::vector< std::pair<SERVER*, uint64_t> > TargetList;
namespace
{
// Servers and queries to execute on them
typedef std::map<SERVER*, std::string> TargetList;
struct KillInfo
{
uint64_t target_id;
typedef bool (*DcbCallback)(DCB *dcb, void *data);
KillInfo(std::string query, MXS_SESSION* ses, DcbCallback callback):
origin(mxs_rworker_get_current_id()),
query_base(query),
protocol(*(MySQLProtocol*)ses->client_dcb->protocol),
cb(callback)
{
gw_get_shared_session_auth_info(ses->client_dcb, &session);
}
int origin;
std::string query_base;
MYSQL_session session;
MySQLProtocol protocol;
DcbCallback cb;
TargetList targets;
};
static bool kill_func(DCB *dcb, void *data);
struct ConnKillInfo: public KillInfo
{
ConnKillInfo(uint64_t id, std::string query, MXS_SESSION* ses):
KillInfo(query, ses, kill_func),
target_id(id)
{}
uint64_t target_id;
};
static bool kill_user_func(DCB *dcb, void *data);
struct UserKillInfo: public KillInfo
{
UserKillInfo(std::string name, std::string query, MXS_SESSION* ses):
KillInfo(query, ses, kill_user_func),
user(name)
{}
std::string user;
};
static bool kill_func(DCB *dcb, void *data)
{
bool rval = true;
KillInfo* info = (KillInfo*)data;
ConnKillInfo* info = static_cast<ConnKillInfo*>(data);
if (dcb->session->ses_id == info->target_id)
{
for (auto it = dcb->session->dcb_set->begin(); it != dcb->session->dcb_set->end(); it++)
MySQLProtocol* proto = (MySQLProtocol*)dcb->protocol;
if (proto->thread_id)
{
MySQLProtocol* proto = (MySQLProtocol*)(*it)->protocol;
if (proto->thread_id)
{
// DCB is connected and we know the thread ID so we can kill it
info->targets.push_back(std::make_pair((*it)->server, proto->thread_id));
}
else
{
// DCB is not yet connected, send a hangup to forcibly close it
dcb->session->close_reason = SESSION_CLOSE_KILLED;
poll_fake_hangup_event(*it);
}
}
// Found the session, stop iterating over DCBs
rval = false;
}
return rval;
}
void mxs_mysql_execute_kill(MXS_SESSION* issuer, uint64_t target_id, kill_type_t type)
{
// Gather a list of servers and connection IDs to kill
KillInfo info = {target_id};
dcb_foreach(kill_func, &info);
if (info.targets.empty())
{
// No session found, send an error
std::stringstream err;
err << "Unknown thread id: " << target_id;
mysql_send_standard_error(issuer->client_dcb, 1, 1094, err.str().c_str());
}
else
{
// Execute the KILL on all of the servers
for (TargetList::iterator it = info.targets.begin();
it != info.targets.end(); it++)
{
LocalClient* client = LocalClient::create(issuer, it->first);
const char* hard = (type & KT_HARD) ? "HARD " :
(type & KT_SOFT) ? "SOFT " :
"";
const char* query = (type & KT_QUERY) ? "QUERY " : "";
// DCB is connected and we know the thread ID so we can kill it
std::stringstream ss;
ss << "KILL " << hard << query << it->second;
GWBUF* buffer = modutil_create_query(ss.str().c_str());
client->queue_query(buffer);
gwbuf_free(buffer);
// The LocalClient needs to delete itself once the queries are done
client->self_destruct();
ss << info->query_base << proto->thread_id;
info->targets[dcb->server] = ss.str();
}
else
{
// DCB is not yet connected, send a hangup to forcibly close it
dcb->session->close_reason = SESSION_CLOSE_KILLED;
poll_fake_hangup_event(dcb);
}
mxs_mysql_send_ok(issuer->client_dcb, 1, 0, NULL);
}
}
typedef std::set<SERVER*> ServerSet;
struct KillUserInfo
{
std::string user;
ServerSet targets;
};
static bool kill_user_func(DCB *dcb, void *data)
{
KillUserInfo* info = (KillUserInfo*)data;
if (dcb->dcb_role == DCB_ROLE_BACKEND_HANDLER &&
strcasecmp(dcb->session->client_dcb->user, info->user.c_str()) == 0)
{
info->targets.insert(dcb->server);
}
return true;
}
void mxs_mysql_execute_kill_user(MXS_SESSION* issuer, const char* user, kill_type_t type)
static bool kill_user_func(DCB *dcb, void *data)
{
// Gather a list of servers and connection IDs to kill
KillUserInfo info = {user};
dcb_foreach(kill_user_func, &info);
UserKillInfo* info = (UserKillInfo*)data;
// Execute the KILL on all of the servers
for (ServerSet::iterator it = info.targets.begin();
it != info.targets.end(); it++)
if (dcb->dcb_role == DCB_ROLE_BACKEND_HANDLER &&
strcasecmp(dcb->session->client_dcb->user, info->user.c_str()) == 0)
{
LocalClient* client = LocalClient::create(issuer, *it);
const char* hard = (type & KT_HARD) ? "HARD " :
(type & KT_SOFT) ? "SOFT " : "";
const char* query = (type & KT_QUERY) ? "QUERY " : "";
std::stringstream ss;
ss << "KILL " << hard << query << "USER " << user;
GWBUF* buffer = modutil_create_query(ss.str().c_str());
info->targets[dcb->server] = info->query_base;
}
return true;
}
static void worker_func(int thread_id, void* data)
{
KillInfo* info = static_cast<KillInfo*>(data);
dcb_foreach_local(info->cb, info);
for (TargetList::iterator it = info->targets.begin();
it != info->targets.end(); it++)
{
LocalClient* client = LocalClient::create(&info->session, &info->protocol, it->first);
GWBUF* buffer = modutil_create_query(it->second.c_str());
client->queue_query(buffer);
gwbuf_free(buffer);
@ -1431,7 +1417,45 @@ void mxs_mysql_execute_kill_user(MXS_SESSION* issuer, const char* user, kill_typ
client->self_destruct();
}
mxs_mysql_send_ok(issuer->client_dcb, info.targets.size(), 0, NULL);
delete info;
}
}
void mxs_mysql_execute_kill(MXS_SESSION* issuer, uint64_t target_id, kill_type_t type)
{
const char* hard = (type & KT_HARD) ? "HARD " : (type & KT_SOFT) ? "SOFT " : "";
const char* query = (type & KT_QUERY) ? "QUERY " : "";
std::stringstream ss;
ss << "KILL " << hard << query;
for (int i = 0; i < config_threadcount(); i++)
{
MXS_WORKER* worker = mxs_rworker_get(i);
ss_dassert(worker);
mxs_worker_post_message(worker, MXS_WORKER_MSG_CALL, (intptr_t)worker_func,
(intptr_t)new ConnKillInfo(target_id, ss.str(), issuer));
}
mxs_mysql_send_ok(issuer->client_dcb, 1, 0, NULL);
}
void mxs_mysql_execute_kill_user(MXS_SESSION* issuer, const char* user, kill_type_t type)
{
const char* hard = (type & KT_HARD) ? "HARD " : (type & KT_SOFT) ? "SOFT " : "";
const char* query = (type & KT_QUERY) ? "QUERY " : "";
std::stringstream ss;
ss << "KILL " << hard << query << "USER " << user;
for (int i = 0; i < config_threadcount(); i++)
{
MXS_WORKER* worker = mxs_rworker_get(i);
ss_dassert(worker);
mxs_worker_post_message(worker, MXS_WORKER_MSG_CALL, (intptr_t)worker_func,
(intptr_t)new UserKillInfo(user, ss.str(), issuer));
}
mxs_mysql_send_ok(issuer->client_dcb, 1, 0, NULL);
}
/**

View File

@ -185,6 +185,15 @@ bool RWSplitSession::route_stored_query()
MXS_INFO("Routing stored queries");
GWBUF* query_queue = modutil_get_next_MySQL_packet(&m_query_queue);
query_queue = gwbuf_make_contiguous(query_queue);
ss_dassert(query_queue);
if (query_queue == NULL)
{
MXS_ALERT("Queued query unexpectedly empty. Bytes queued: %d Hexdump: ",
gwbuf_length(m_query_queue));
gwbuf_hexdump(m_query_queue, LOG_ALERT);
return true;
}
/** Store the query queue locally for the duration of the routeQuery call.
* This prevents recursive calls into this function. */