Merge branch '2.3' of github.com:mariadb-corporation/MaxScale into 2.3

This commit is contained in:
Timofey Turenko 2019-08-08 23:11:44 +03:00
commit 284e5db68b
47 changed files with 624 additions and 379 deletions

View File

@ -28,10 +28,17 @@ if [ "$box_type" == "RPM" ] ; then
cd $path_prefix/$platform
ln -s $platform_version "$platform_version"server
ln -s $platform_version "$platform_version"Server
cd ..
if [ "$platform" == "centos" ] ; then
cd ..
ln -s centos rhel
fi
if [ "$platform" == "opensuse" ] ; then
mkdir -p sles
cd sles
ln -s ../opensuse/$platform_version $platform_version
cd ..
fi
eval "cat <<EOF
$(<${script_dir}/templates/repository-config/rpm.json.template)

View File

@ -31,6 +31,8 @@
For more details, please refer to:
* [MariaDB MaxScale 2.3.11 Release Notes](Release-Notes/MaxScale-2.3.11-Release-Notes.md)
* [MariaDB MaxScale 2.3.10 Release Notes](Release-Notes/MaxScale-2.3.10-Release-Notes.md)
* [MariaDB MaxScale 2.3.9 Release Notes](Release-Notes/MaxScale-2.3.9-Release-Notes.md)
* [MariaDB MaxScale 2.3.8 Release Notes](Release-Notes/MaxScale-2.3.8-Release-Notes.md)
* [MariaDB MaxScale 2.3.7 Release Notes](Release-Notes/MaxScale-2.3.7-Release-Notes.md)

View File

@ -867,13 +867,17 @@ MaxScale will at startup load the users from the backend server, but if
the authentication of a user fails, MaxScale assumes it is because a new
user has been created and will thus refresh the users. By default, MaxScale
will do that at most once per 30 seconds and with this configuration option
that can be changed. The minimum allowed value is 10 seconds. A negative
that can be changed. A value of 0 allows infinite refreshes and a negative
value disables the refreshing entirely. Note that using `maxadmin` it is
possible to explicitly cause the users of a service to be reloaded.
```
users_refresh_time=120
```
In MaxScale 2.3.9 and older versions, the minimum allowed value was 10 seconds
but, due to a bug, the default value was 0, which allowed infinite refreshes.
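The refresh throttling described above amounts to a simple rate-limit check. A minimal C++ sketch of the rule, assuming a hypothetical helper name (this is an illustration, not the actual MaxScale implementation):

```cpp
#include <cassert>
#include <ctime>

// Sketch of the users_refresh_time rule described above: a negative value
// disables refreshing, 0 permits a refresh on every failed authentication,
// and a positive value rate-limits refreshes to one per that many seconds.
// may_refresh_users() is a hypothetical name, not a MaxScale function.
bool may_refresh_users(int users_refresh_time, time_t last_refresh, time_t now)
{
    if (users_refresh_time < 0)
    {
        return false;   // refreshing disabled entirely
    }

    // With a value of 0 this is always true, i.e. unlimited refreshes
    return now - last_refresh >= users_refresh_time;
}
```

With the default of 30, a second authentication failure within 30 seconds of the previous refresh would not trigger another user reload.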
### `retain_last_statements`
How many statements MaxScale should store for each session. This is for

View File

@ -0,0 +1,49 @@
# MariaDB MaxScale 2.3.10 Release Notes -- 2019-08-01
Release 2.3.10 is a GA release.
This document describes the changes in release 2.3.10, when compared to the
previous release in the same series.
For any problems you encounter, please consider submitting a bug
report on [our Jira](https://jira.mariadb.org/projects/MXS).
## Bug fixes
* [MXS-2613](https://jira.mariadb.org/browse/MXS-2613) Fix cachefilter diagnostics
* [MXS-2607](https://jira.mariadb.org/browse/MXS-2607) Unexpected trailing spaces with --tsv option in MaxCtrl
* [MXS-2606](https://jira.mariadb.org/browse/MXS-2606) Users are loaded from the first available server
* [MXS-2605](https://jira.mariadb.org/browse/MXS-2605) debug assert at readwritesplit.cc:418 failed: a.second.total == a.second.read + a.second.write
* [MXS-2598](https://jira.mariadb.org/browse/MXS-2598) memory leak on handling COM_CHANGE_USER
* [MXS-2597](https://jira.mariadb.org/browse/MXS-2597) MaxScale doesn't handle errors from microhttpd
* [MXS-2594](https://jira.mariadb.org/browse/MXS-2594) Enabling use_priority for not set priority on server level triggers an election
* [MXS-2587](https://jira.mariadb.org/browse/MXS-2587) mxs1507_trx_replay: debug assert in routeQuery
* [MXS-2586](https://jira.mariadb.org/browse/MXS-2586) user_refresh_time default value is wrong
* [MXS-2559](https://jira.mariadb.org/browse/MXS-2559) Log doesn't tell from which server users are loaded from
* [MXS-2520](https://jira.mariadb.org/browse/MXS-2520) Readwritesplit won't connect to master for reads
* [MXS-2502](https://jira.mariadb.org/browse/MXS-2502) Specifying 'information_schema' as default schema upon connection gives 'access denied'
* [MXS-2490](https://jira.mariadb.org/browse/MXS-2490) Unknown prepared statement handler (0) given to mysqld_stmt_execute
* [MXS-2486](https://jira.mariadb.org/browse/MXS-2486) MaxScale 2.3.6 received fatal signal 11
* [MXS-2449](https://jira.mariadb.org/browse/MXS-2449) Maxadmin shows wrong monitor status
* [MXS-2261](https://jira.mariadb.org/browse/MXS-2261) maxkeys overwrites existing key without warning
* [MXS-1901](https://jira.mariadb.org/browse/MXS-1901) Multi continues COM_STMT_SEND_LONG_DATA route to different backends
## Known Issues and Limitations
There are some limitations and known issues within this version of MaxScale.
For more information, please refer to the [Limitations](../About/Limitations.md) document.
## Packaging
RPM and Debian packages are provided for the supported Linux distributions.
Packages can be downloaded [here](https://mariadb.com/downloads/#mariadb_platform-mariadb_maxscale).
## Source Code
The source code of MaxScale is tagged at GitHub with a tag that is identical
to the version of MaxScale. For instance, the tag of version X.Y.Z of MaxScale
is `maxscale-X.Y.Z`. Further, the default branch is always the latest GA version
of MaxScale.
The source code is available [here](https://github.com/mariadb-corporation/MaxScale).

View File

@ -0,0 +1,33 @@
# MariaDB MaxScale 2.3.11 Release Notes -- 2019-08-02
Release 2.3.11 is a GA release.
This document describes the changes in release 2.3.11, when compared to the
previous release in the same series.
For any problems you encounter, please consider submitting a bug
report on [our Jira](https://jira.mariadb.org/projects/MXS).
## Bug fixes
* [MXS-2621](https://jira.mariadb.org/browse/MXS-2621) Incorrect SQL if lower_case_table_names is used.
## Known Issues and Limitations
There are some limitations and known issues within this version of MaxScale.
For more information, please refer to the [Limitations](../About/Limitations.md) document.
## Packaging
RPM and Debian packages are provided for the supported Linux distributions.
Packages can be downloaded [here](https://mariadb.com/downloads/#mariadb_platform-mariadb_maxscale).
## Source Code
The source code of MaxScale is tagged at GitHub with a tag that is identical
to the version of MaxScale. For instance, the tag of version X.Y.Z of MaxScale
is `maxscale-X.Y.Z`. Further, the default branch is always the latest GA version
of MaxScale.
The source code is available [here](https://github.com/mariadb-corporation/MaxScale).

View File

@ -5,7 +5,7 @@
set(MAXSCALE_VERSION_MAJOR "2" CACHE STRING "Major version")
set(MAXSCALE_VERSION_MINOR "3" CACHE STRING "Minor version")
set(MAXSCALE_VERSION_PATCH "10" CACHE STRING "Patch version")
set(MAXSCALE_VERSION_PATCH "12" CACHE STRING "Patch version")
# This should only be incremented if a package is rebuilt
set(MAXSCALE_BUILD_NUMBER 1 CACHE STRING "Release number")

View File

@ -83,7 +83,6 @@ typedef struct server_ref_t
/* Refresh rate limits for load users from database */
#define USERS_REFRESH_TIME_DEFAULT 30 /* Allowed time interval (in seconds) after last update*/
#define USERS_REFRESH_TIME_MIN 10 /* Minimum allowed time interval (in seconds)*/
/** Default timeout values used by the connections which fetch user authentication data */
#define DEFAULT_AUTH_CONNECT_TIMEOUT 3

View File

@ -679,6 +679,17 @@ session_dump_statements_t session_get_dump_statements();
*/
const char* session_get_dump_statements_str();
void session_set_session_trace(uint32_t value);
uint32_t session_get_session_trace();
const char* session_get_session_log(MXS_SESSION* pSession);
void session_append_log(MXS_SESSION* pSession, const char* log);
void session_dump_log(MXS_SESSION* pSession);
/**
* @brief Route the query again after a delay
*

View File

@ -142,6 +142,9 @@ module.exports = function() {
if (this.argv.tsv) {
// Based on the regex found in: https://github.com/jonschlinkert/strip-color
str = str.replace( /\x1B\[[(?);]{0,2}(;?\d)*./g, '')
// Trim trailing whitespace that cli-table generates
str = str.split(os.EOL).map(s => s.split('\t').map(s => s.trim()).join('\t')).join(os.EOL)
}
return str
}

View File

@ -59,7 +59,8 @@ const session_fields = [
{'Idle': 'attributes.idle'},
{'Connections': 'attributes.connections[].server'},
{'Connection IDs': 'attributes.connections[].protocol_diagnostics.connection_id'},
{'Queries': 'attributes.queries[].statement'}
{'Queries': 'attributes.queries[].statement'},
{'Log': 'attributes.log'}
]
const filter_fields = [

View File

@ -580,7 +580,7 @@
},
"commander": {
"version": "2.15.1",
"resolved": "https://registry.npmjs.org/commander/-/commander-2.15.1.tgz",
"resolved": "http://registry.npmjs.org/commander/-/commander-2.15.1.tgz",
"integrity": "sha512-VlfT9F3V0v+jr4yxPc5gg9s62/fIVWsd2Bk2iD435um1NlGMYdVCq+MjcXnhYq2icNOizHr1kK+5TI6H0Hy0ag==",
"dev": true
},
@ -1527,9 +1527,9 @@
}
},
"lodash": {
"version": "4.17.11",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz",
"integrity": "sha512-cQKh8igo5QUhZ7lg38DYWAxMvjSAKG0A8wGSVimP07SIUEK2UO+arSRKbRZWtelMtN5V0Hkwh5ryOto/SshYIg=="
"version": "4.17.14",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.14.tgz",
"integrity": "sha512-mmKYbW3GLuJeX+iGP+Y7Gp1AiGHGbXHCOh/jZmrawMmsE7MS4znI3RL2FsjbqOyMayHInjOeykW7PEajUk1/xw=="
},
"lodash-getpath": {
"version": "0.2.4",
@ -1628,14 +1628,14 @@
},
"minimist": {
"version": "0.0.8",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz",
"resolved": "http://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz",
"integrity": "sha1-hX/Kv8M5fSYluCKCYuhqp6ARsF0=",
"dev": true
},
"mixin-deep": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz",
"integrity": "sha512-8ZItLHeEgaqEvd5lYBXfm4EZSFCX29Jb9K+lAHhDKzReKBQKj3R+7NOF6tjqYi9t4oI8VUfaWITJQm86wnXGNQ==",
"version": "1.3.2",
"resolved": "https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.2.tgz",
"integrity": "sha512-WRoDn//mXBiJ1H40rqa3vH0toePwSsGb45iInWlTySa+Uu4k3tYUSxa2v1KqAiLtvlrSzaExqS1gtk96A9zvEA==",
"requires": {
"for-in": "^1.0.2",
"is-extendable": "^1.0.1"
@ -1653,7 +1653,7 @@
},
"mkdirp": {
"version": "0.5.1",
"resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-0.5.1.tgz",
"resolved": "http://registry.npmjs.org/mkdirp/-/mkdirp-0.5.1.tgz",
"integrity": "sha1-MAV0OOrGz3+MR2fzhkjWaX11yQM=",
"dev": true,
"requires": {
@ -2926,8 +2926,7 @@
},
"mixin-deep": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz",
"integrity": "sha512-8ZItLHeEgaqEvd5lYBXfm4EZSFCX29Jb9K+lAHhDKzReKBQKj3R+7NOF6tjqYi9t4oI8VUfaWITJQm86wnXGNQ==",
"resolved": "",
"dev": true,
"requires": {
"for-in": "^1.0.2",
@ -3369,8 +3368,7 @@
},
"set-value": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz",
"integrity": "sha512-hw0yxk9GT/Hr5yJEYnHNKYXkIA8mVJgd9ditYZCe16ZczcaELYYcfvaXesNACk2O8O0nTiPQcQhGUQj8JLzeeg==",
"resolved": "",
"dev": true,
"requires": {
"extend-shallow": "^2.0.1",
@ -4010,8 +4008,7 @@
},
"union-value": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/union-value/-/union-value-1.0.0.tgz",
"integrity": "sha1-XHHDTLW61dzr4+oM0IIHulqhrqQ=",
"resolved": "",
"dev": true,
"requires": {
"arr-union": "^3.1.0",
@ -4677,9 +4674,9 @@
"integrity": "sha1-BF+XgtARrppoA93TgrJDkrPYkPc="
},
"set-value": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz",
"integrity": "sha512-hw0yxk9GT/Hr5yJEYnHNKYXkIA8mVJgd9ditYZCe16ZczcaELYYcfvaXesNACk2O8O0nTiPQcQhGUQj8JLzeeg==",
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz",
"integrity": "sha512-JxHc1weCN68wRY0fhCoXpyK55m/XPHafOmK4UWD7m2CI14GMcFypt4w/0+NV5f/ZMby2F6S2wwA7fgynh9gWSw==",
"requires": {
"extend-shallow": "^2.0.1",
"is-extendable": "^0.1.1",
@ -5106,35 +5103,14 @@
}
},
"union-value": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/union-value/-/union-value-1.0.0.tgz",
"integrity": "sha1-XHHDTLW61dzr4+oM0IIHulqhrqQ=",
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/union-value/-/union-value-1.0.1.tgz",
"integrity": "sha512-tJfXmxMeWYnczCVs7XAEvIV7ieppALdyepWMkHkwciRpZraG/xwT+s2JN8+pr1+8jCRf80FFzvr+MpQeeoF4Xg==",
"requires": {
"arr-union": "^3.1.0",
"get-value": "^2.0.6",
"is-extendable": "^0.1.1",
"set-value": "^0.4.3"
},
"dependencies": {
"extend-shallow": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz",
"integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=",
"requires": {
"is-extendable": "^0.1.0"
}
},
"set-value": {
"version": "0.4.3",
"resolved": "https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz",
"integrity": "sha1-fbCPnT0i3H945Trzw79GZuzfzPE=",
"requires": {
"extend-shallow": "^2.0.1",
"is-extendable": "^0.1.1",
"is-plain-object": "^2.0.1",
"to-object-path": "^0.3.0"
}
}
"set-value": "^2.0.1"
}
},
"unset-value": {

View File

@ -20,7 +20,7 @@
"cli-table": "^0.3.1",
"globby": "^8.0.2",
"inquirer": "^6.2.1",
"lodash": "^4.17.11",
"lodash": "^4.17.14",
"lodash-getpath": "^0.2.4",
"readline-sync": "^1.4.9",
"request": "^2.88.0",

View File

@ -389,6 +389,10 @@ add_test_executable(csmon_test.cpp csmon_test csmon_test LABELS csmon CS_BACKEND
# END: ColumnStore tests #
############################################
############################################
# BEGIN: Normal tests #
############################################
# Test monitor state change events when manually clearing server bits
add_test_executable(false_monitor_state_change.cpp false_monitor_state_change replication LABELS mysqlmon REPL_BACKEND)
@ -972,6 +976,13 @@ add_test_executable(mxs2521_double_exec.cpp mxs2521_double_exec mxs2521_double_e
# MXS-2490: Direct execution doesn't work with MaxScale
add_test_executable(mxs2490_ps_execute_direct.cpp mxs2490_ps_execute_direct replication LABELS REPL_BACKEND readwritesplit)
# MXS-2621: Incorrect SQL if lower_case_table_names is used.
add_test_executable(mxs2621_lower_case_tables.cpp mxs2621_lower_case_tables mxs2621_lower_case_tables LABELS REPL_BACKEND)
############################################
# END: Normal tests #
############################################
############################################
# BEGIN: binlogrouter and avrorouter tests #
############################################

View File

@ -0,0 +1,26 @@
[maxscale]
threads=###threads###
###server###
[MySQL-Monitor]
type=monitor
module=mysqlmon
servers=###server_line###
user=maxskysql
password=skysql
monitor_interval=2000
[RW-Split-Router]
type=service
router=readwritesplit
servers=###server_line###
user=maxskysql
password=skysql
[RW-Split-Listener]
type=listener
service=RW-Split-Router
protocol=MySQLClient
port=4006
authenticator_options=lower_case_table_names=true

View File

@ -1,5 +1,6 @@
[maxscale]
threads=###threads###
users_refresh_time=0
[rwsplit-service]
type=service

View File

@ -0,0 +1,15 @@
/**
* MXS-2621: Incorrect SQL if lower_case_table_names is used.
* https://jira.mariadb.org/browse/MXS-2621
*/
#include "testconnections.h"
int main(int argc, char* argv[])
{
TestConnections test(argc, argv);
test.maxscales->connect();
test.try_query(test.maxscales->conn_rwsplit[0], "SELECT 123");
test.maxscales->disconnect();
return test.global_result;
}

View File

@ -56,7 +56,7 @@ json_t* get_json_data(TestConnections& test, const char* query)
char* output = test.maxscales->ssh_node_output(0, query, true, &exit_code);
if (output == NULL)
{
test.add_result(1, "Query '%s' execution error, no output.\ni", output);
test.add_result(1, "Query '%s' execution error, no output.", query);
}
else
{

View File

@ -6,113 +6,79 @@
#include "testconnections.h"
typedef struct
{
int exit_flag;
int thread_id;
long i;
int rwsplit_only;
TestConnections* Test;
} openclose_thread_data;
#include <atomic>
#include <vector>
void* query_thread1(void* ptr);
int threads_num = 20;
std::atomic<bool> run {true};
void query_thread(TestConnections& test, int thread_id)
{
uint64_t i = 0;
auto validate = [&](MYSQL* conn){
unsigned int port = 0;
const char* host = "<none>";
mariadb_get_infov(conn, MARIADB_CONNECTION_PORT, &port);
mariadb_get_infov(conn, MARIADB_CONNECTION_HOST, &host);
test.expect(mysql_errno(conn) == 0 || errno == EADDRNOTAVAIL,
"Error opening conn to %s:%u, thread num is %d, iteration %ld, error is: %s\n",
host, port, thread_id, i, mysql_error(conn));
if (conn && mysql_errno(conn) == 0)
{
test.try_query(conn, "USE test");
mysql_close(conn);
}
};
// Keep running the test until we exhaust all available ports
while (run && test.global_result == 0 && errno != EADDRNOTAVAIL)
{
validate(test.maxscales->open_rwsplit_connection(0));
validate(test.maxscales->open_readconn_master_connection(0));
validate(test.maxscales->open_readconn_slave_connection(0));
i++;
}
}
int main(int argc, char* argv[])
{
TestConnections* Test = new TestConnections(argc, argv);
int run_time = Test->smoke ? 10 : 300;
openclose_thread_data data[threads_num];
for (int i = 0; i < threads_num; i++)
{
data[i].i = 0;
data[i].exit_flag = 0;
data[i].Test = Test;
data[i].thread_id = i;
}
TestConnections test(argc, argv);
// Tuning these kernel parameters removes any system limitations on how many
// connections can be created within a short period
Test->maxscales->ssh_node_f(0,
test.maxscales->ssh_node_f(0,
true,
"sysctl net.ipv4.tcp_tw_reuse=1 net.ipv4.tcp_tw_recycle=1 "
"net.core.somaxconn=10000 net.ipv4.tcp_max_syn_backlog=10000");
Test->repl->execute_query_all_nodes((char*) "set global max_connections = 50000;");
Test->repl->sync_slaves();
test.repl->execute_query_all_nodes((char*) "set global max_connections = 50000;");
test.repl->sync_slaves();
pthread_t thread1[threads_num];
std::vector<std::thread> threads;
constexpr int threads_num = 20;
/* Create independent threads each of them will execute function */
for (int i = 0; i < threads_num; i++)
{
pthread_create(&thread1[i], NULL, query_thread1, &data[i]);
threads.emplace_back(query_thread, std::ref(test), i);
}
Test->tprintf("Threads are running %d seconds \n", run_time);
constexpr int run_time = 10;
test.tprintf("Threads are running for %d seconds", run_time);
for (int i = 0; i < run_time && Test->global_result == 0; i++)
for (int i = 0; i < run_time && test.global_result == 0; i++)
{
sleep(1);
}
for (int i = 0; i < threads_num; i++)
run = false;
for (auto& a : threads)
{
data[i].exit_flag = 1;
pthread_join(thread1[i], NULL);
a.join();
}
Test->check_maxscale_alive(0);
int rval = Test->global_result;
delete Test;
return rval;
}
void* query_thread1(void* ptr)
{
openclose_thread_data* data = (openclose_thread_data*) ptr;
while (data->exit_flag == 0 && data->Test->global_result == 0)
{
MYSQL* conn1 = data->Test->maxscales->open_rwsplit_connection(0);
data->Test->add_result(mysql_errno(conn1),
"Error opening RWsplit conn, thread num is %d, iteration %li, error is: %s\n",
data->thread_id, data->i, mysql_error(conn1));
MYSQL* conn2 = data->Test->maxscales->open_readconn_master_connection(0);
data->Test->add_result(mysql_errno(
conn2),
"Error opening ReadConn master conn, thread num is %d, iteration %li, error is: %s\n",
data->thread_id,
data->i,
mysql_error(conn2));
MYSQL* conn3 = data->Test->maxscales->open_readconn_slave_connection(0);
data->Test->add_result(mysql_errno(
conn3),
"Error opening ReadConn master conn, thread num is %d, iteration %li, error is: %s\n",
data->thread_id,
data->i,
mysql_error(conn3));
// USE test here is a hack to prevent Maxscale from failure; should be removed when fixed
if (conn1 != NULL)
{
data->Test->try_query(conn1, (char*) "USE test");
mysql_close(conn1);
}
if (conn2 != NULL)
{
data->Test->try_query(conn2, (char*) "USE test");
mysql_close(conn2);
}
if (conn3 != NULL)
{
data->Test->try_query(conn3, (char*) "USE test");
mysql_close(conn3);
}
data->i++;
}
return NULL;
test.check_maxscale_alive(0);
return test.global_result;
}

View File

@ -85,6 +85,8 @@ typedef struct MXB_LOG_THROTTLING
*/
typedef size_t (* mxb_log_context_provider_t)(char* buffer, size_t len);
typedef void (* mxb_in_memory_log_t)(const char* buffer, size_t len);
/**
* @brief Initialize the log
*
@ -105,7 +107,8 @@ bool mxb_log_init(const char* ident,
const char* logdir,
const char* filename,
mxb_log_target_t target,
mxb_log_context_provider_t context_provider);
mxb_log_context_provider_t context_provider,
mxb_in_memory_log_t in_memory_log);
/**
* @brief Finalize the log
@ -150,6 +153,8 @@ const char* mxb_log_get_filename();
*/
bool mxb_log_set_priority_enabled(int priority, bool enabled);
bool mxb_log_get_session_trace();
/**
* Query whether a particular syslog priority is enabled.
*
@ -233,6 +238,14 @@ void mxb_log_get_throttling(MXB_LOG_THROTTLING* throttling);
*/
void mxs_log_redirect_stdout(bool redirect);
/**
* Set session specific in-memory log
*
* @param enabled True or false to enable or disable session in-memory logging
*/
void mxb_log_set_session_trace(bool enabled);
/**
* Log a message of a particular priority.
*
@ -278,7 +291,7 @@ int mxb_log_oom(const char* message);
* MXB_ERROR, MXB_WARNING, etc. macros instead.
*/
#define MXB_LOG_MESSAGE(priority, format, ...) \
(mxb_log_is_priority_enabled(priority) \
(mxb_log_is_priority_enabled(priority) || mxb_log_get_session_trace() \
? mxb_log_message(priority, MXB_MODULE_NAME, __FILE__, __LINE__, __func__, format, ##__VA_ARGS__) \
: 0)

View File

@ -30,7 +30,7 @@
*/
inline bool mxb_log_init(mxb_log_target_t target = MXB_LOG_TARGET_FS)
{
return mxb_log_init(nullptr, ".", nullptr, target, nullptr);
return mxb_log_init(nullptr, ".", nullptr, target, nullptr, nullptr);
}
namespace maxbase
@ -52,16 +52,17 @@ public:
const char* logdir,
const char* filename,
mxb_log_target_t target,
mxb_log_context_provider_t context_provider)
mxb_log_context_provider_t context_provider,
mxb_in_memory_log_t in_memory_log)
{
if (!mxb_log_init(ident, logdir, filename, target, context_provider))
if (!mxb_log_init(ident, logdir, filename, target, context_provider, in_memory_log))
{
throw std::runtime_error("Failed to initialize the log.");
}
}
Log(mxb_log_target_t target = MXB_LOG_TARGET_FS)
: Log(nullptr, ".", nullptr, target, nullptr)
: Log(nullptr, ".", nullptr, target, nullptr, nullptr)
{
}

View File

@ -78,7 +78,8 @@ public:
const char* zLogdir,
const char* zFilename,
mxb_log_target_t target,
mxb_log_context_provider_t context_provider);
mxb_log_context_provider_t context_provider,
mxb_in_memory_log_t in_memory_log);
/**
* @brief Initializes MaxBase and the MaxBase log.
@ -88,7 +89,7 @@ public:
* @throws std::runtime_error if the initialization failed.
*/
MaxBase(mxb_log_target_t target)
: MaxBase(nullptr, ".", nullptr, target, nullptr)
: MaxBase(nullptr, ".", nullptr, target, nullptr, nullptr)
{
}

View File

@ -386,10 +386,12 @@ struct this_unit
bool do_syslog; // Can change during the lifetime of log_manager.
bool do_maxlog; // Can change during the lifetime of log_manager.
bool redirect_stdout;
bool session_trace;
MXB_LOG_THROTTLING throttling; // Can change during the lifetime of log_manager.
std::unique_ptr<mxb::Logger> sLogger;
std::unique_ptr<MessageRegistry> sMessage_registry;
size_t (* context_provider)(char* buffer, size_t len);
void (* in_memory_log)(const char* buffer, size_t len);
} this_unit =
{
DEFAULT_LOG_AUGMENTATION, // augmentation
@ -397,6 +399,7 @@ struct this_unit
true, // do_syslog
true, // do_maxlog
false, // redirect_stdout
false, // session_trace
DEFAULT_LOG_THROTTLING, // throttling
};
@ -449,7 +452,8 @@ bool mxb_log_init(const char* ident,
const char* logdir,
const char* filename,
mxb_log_target_t target,
mxb_log_context_provider_t context_provider)
mxb_log_context_provider_t context_provider,
mxb_in_memory_log_t in_memory_log)
{
assert(!this_unit.sLogger && !this_unit.sMessage_registry);
@ -511,6 +515,7 @@ bool mxb_log_init(const char* ident,
if (this_unit.sLogger && this_unit.sMessage_registry)
{
this_unit.context_provider = context_provider;
this_unit.in_memory_log = in_memory_log;
openlog(ident, LOG_PID | LOG_ODELAY, LOG_USER);
}
@ -614,6 +619,16 @@ void mxs_log_redirect_stdout(bool redirect)
this_unit.redirect_stdout = redirect;
}
void mxb_log_set_session_trace(bool enabled)
{
this_unit.session_trace = enabled;
}
bool mxb_log_get_session_trace()
{
return this_unit.session_trace;
}
bool mxb_log_rotate()
{
bool rval = this_unit.sLogger->rotate();
@ -874,7 +889,19 @@ int mxb_log_message(int priority,
// Add a final newline into the message
msg.push_back('\n');
err = this_unit.sLogger->write(msg.c_str(), msg.length()) ? 0 : -1;
if (this_unit.session_trace)
{
this_unit.in_memory_log(msg.c_str(), msg.length());
}
if (mxb_log_is_priority_enabled(level))
{
err = this_unit.sLogger->write(msg.c_str(), msg.length()) ? 0 : -1;
}
else
{
err = 0;
}
}
}
}
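The changed control flow above can be modelled compactly. A simplified sketch of the decision, with hypothetical names (the real code uses `this_unit` and the registered `in_memory_log` callback):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified model of the logging decision after this change: when session
// tracing is enabled, every formatted message is appended to the in-memory
// log, but it is written to the log file only if its priority is enabled.
// When neither flag applies, MXB_LOG_MESSAGE short-circuits and nothing is
// formatted at all. LogModel is a hypothetical stand-in, not MaxScale code.
struct LogModel
{
    bool session_trace = false;
    bool priority_enabled = true;
    std::vector<std::string> in_memory;
    std::vector<std::string> file;

    void message(const std::string& msg)
    {
        if (!priority_enabled && !session_trace)
        {
            return;     // neither destination wants the message
        }

        if (session_trace)
        {
            in_memory.push_back(msg);   // session in-memory log
        }

        if (priority_enabled)
        {
            file.push_back(msg);        // regular log file
        }
    }
};
```

This mirrors why the macro now also fires when only session tracing is on: a disabled priority no longer suppresses the in-memory copy.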

View File

@ -92,14 +92,15 @@ MaxBase::MaxBase(const char* zIdent,
const char* zLogdir,
const char* zFilename,
mxb_log_target_t target,
mxb_log_context_provider_t context_provider)
mxb_log_context_provider_t context_provider,
mxb_in_memory_log_t in_memory_log)
: m_log_inited(false)
{
const char* zMessage = nullptr;
if (maxbase::init())
{
m_log_inited = mxb_log_init(zIdent, zLogdir, zFilename, target, context_provider);
m_log_inited = mxb_log_init(zIdent, zLogdir, zFilename, target, context_provider, in_memory_log);
if (!m_log_inited)
{

View File

@ -71,7 +71,7 @@ static void extract_file_and_line(char* symbols, char* cmd, size_t size)
const char* symname_start = filename_end + 1;
if (*symname_start != '+')
if (*symname_start != '+' && symname_start != symname_end)
{
// We have a string form symbol name and an offset, we need to
// extract the symbol address
@ -111,6 +111,17 @@ static void extract_file_and_line(char* symbols, char* cmd, size_t size)
}
else
{
if (symname_start == symname_end)
{
// Symbol is of the format `./executable [0xdeadbeef]`
if (!(symname_start = strchr(symname_start, '['))
|| !(symname_end = strchr(symname_start, ']')))
{
snprintf(cmd, size, "Unexpected symbol format");
return;
}
}
// Raw offset into library
symname_start++;
snprintf(symname, sizeof(symname), "%.*s", (int)(symname_end - symname_start), symname_start);
@ -126,13 +137,6 @@ static void extract_file_and_line(char* symbols, char* cmd, size_t size)
memmove(cmd, str, strlen(cmd) - (str - cmd) + 1);
}
// Strip the directory name from the symbols (could this be useful?)
if (char* str = strrchr(symbols, '/'))
{
++str;
memmove(symbols, str, strlen(symbols) - (str - symbols) + 1);
}
// Remove the address where the symbol is in memory (i.e. the [0xdeadbeef] that follows the
// (main+0xa1) part), we're only interested where it is in the library.
if (char* str = strchr(symbols, '['))
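The new branch above handles backtrace symbols rendered without a name/offset pair, e.g. `./executable [0xdeadbeef]`. The bracket extraction it performs can be sketched as follows, assuming a hypothetical helper (a simplification of the code shown, not the actual implementation):

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Sketch of the fallback added above: when a backtrace symbol carries no
// symbol name, e.g. "./maxscale [0x55d1c2]", take the raw address between
// the brackets instead. extract_address() is a hypothetical helper name.
std::string extract_address(const char* symbol)
{
    const char* start = strchr(symbol, '[');
    const char* end = start ? strchr(start, ']') : nullptr;

    if (start == nullptr || end == nullptr)
    {
        return "";      // corresponds to "Unexpected symbol format"
    }

    return std::string(start + 1, end);     // contents between the brackets
}
```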

View File

@ -375,6 +375,19 @@ static bool load_ssl_certificates()
return rval;
}
static bool log_daemon_errors = true;
void admin_log_error(void* arg, const char* fmt, va_list ap)
{
if (log_daemon_errors)
{
char buf[1024];
vsnprintf(buf, sizeof(buf), fmt, ap);
trim(buf);
MXS_ERROR("HTTP daemon error: %s\n", buf);
}
}
bool mxs_admin_init()
{
struct sockaddr_storage addr;
@ -383,7 +396,7 @@ bool mxs_admin_init()
config_get_global_options()->admin_port,
&addr))
{
int options = MHD_USE_EPOLL_INTERNALLY_LINUX_ONLY;
int options = MHD_USE_EPOLL_INTERNALLY_LINUX_ONLY | MHD_USE_DEBUG;
if (addr.ss_family == AF_INET6)
{
@ -397,27 +410,20 @@ bool mxs_admin_init()
}
// The port argument is ignored and the port in the struct sockaddr is used instead
http_daemon = MHD_start_daemon(options,
0,
NULL,
NULL,
handle_client,
NULL,
MHD_OPTION_NOTIFY_COMPLETED,
close_client,
NULL,
MHD_OPTION_SOCK_ADDR,
&addr,
http_daemon = MHD_start_daemon(options, 0, NULL, NULL, handle_client, NULL,
MHD_OPTION_EXTERNAL_LOGGER, admin_log_error, NULL,
MHD_OPTION_NOTIFY_COMPLETED, close_client, NULL,
MHD_OPTION_SOCK_ADDR, &addr,
!using_ssl ? MHD_OPTION_END :
MHD_OPTION_HTTPS_MEM_KEY,
admin_ssl_key,
MHD_OPTION_HTTPS_MEM_CERT,
admin_ssl_cert,
MHD_OPTION_HTTPS_MEM_TRUST,
admin_ssl_cert,
MHD_OPTION_HTTPS_MEM_KEY, admin_ssl_key,
MHD_OPTION_HTTPS_MEM_CERT, admin_ssl_cert,
MHD_OPTION_HTTPS_MEM_TRUST, admin_ssl_cert,
MHD_OPTION_END);
}
// Silence all other errors to prevent malformed requests from flooding the log
log_daemon_errors = false;
return http_daemon != NULL;
}

View File

@ -160,6 +160,7 @@ const char CN_SERVER[] = "server";
const char CN_SERVICES[] = "services";
const char CN_SERVICE[] = "service";
const char CN_SESSIONS[] = "sessions";
const char CN_SESSION_TRACE[] = "session_trace";
const char CN_SESSION_TRACK_TRX_STATE[] = "session_track_trx_state";
const char CN_SKIP_PERMISSION_CHECKS[] = "skip_permission_checks";
const char CN_SOCKET[] = "socket";
@ -317,6 +318,7 @@ const MXS_MODULE_PARAM config_service_params[] =
{CN_RETRY_ON_FAILURE, MXS_MODULE_PARAM_BOOL, "true"},
{CN_SESSION_TRACK_TRX_STATE, MXS_MODULE_PARAM_BOOL, "false"},
{CN_RETAIN_LAST_STATEMENTS, MXS_MODULE_PARAM_INT, "-1"},
{CN_SESSION_TRACE, MXS_MODULE_PARAM_INT, "0"},
{NULL}
};
@ -2337,7 +2339,7 @@ static int handle_global_item(const char* name, const char* value)
return 0;
}
decltype(gateway.qc_cache_properties.max_size)max_size = int_value;
decltype(gateway.qc_cache_properties.max_size) max_size = int_value;
if (max_size >= 0)
{
@ -2515,15 +2517,6 @@ static int handle_global_item(const char* name, const char* value)
// but I just don't believe the uptime will be that long.
users_refresh_time = INT32_MAX;
}
else if (users_refresh_time < USERS_REFRESH_TIME_MIN)
{
MXS_WARNING("%s is less than the allowed minimum value of %d for the "
"configuration option '%s', using the minimum value.",
value,
USERS_REFRESH_TIME_MIN,
CN_USERS_REFRESH_TIME);
users_refresh_time = USERS_REFRESH_TIME_MIN;
}
if (users_refresh_time > INT32_MAX)
{
@ -2611,6 +2604,21 @@ static int handle_global_item(const char* name, const char* value)
return 0;
}
}
else if (strcmp(name, CN_SESSION_TRACE) == 0)
{
char* endptr;
int intval = strtol(value, &endptr, 0);
if (*endptr == '\0' && intval >= 0)
{
session_set_session_trace(intval);
mxb_log_set_session_trace(true);
}
else
{
MXS_ERROR("Invalid value for '%s': %s", CN_SESSION_TRACE, value);
return 0;
}
}
else if (strcmp(name, CN_LOAD_PERSISTED_CONFIGS) == 0)
{
int b = config_truth_value(value);
@ -3743,12 +3751,11 @@ int create_new_service(CONFIG_CONTEXT* obj)
config_add_defaults(obj, config_service_params);
config_add_defaults(obj, module->parameters);
int error_count = 0;
Service* service = service_alloc(obj->object, router, obj->parameters);
if (service)
{
int error_count = 0;
for (auto& a : mxs::strtok(config_get_string(obj->parameters, CN_SERVERS), ","))
{
fix_object_name(a);
@ -3780,9 +3787,10 @@ int create_new_service(CONFIG_CONTEXT* obj)
else
{
MXS_ERROR("Service '%s' creation failed.", obj->object);
error_count++;
}
return service ? 0 : 1;
return error_count;
}
/**

View File

@ -427,6 +427,15 @@ static void sigfatal_handler(int i)
cnf->sysname,
cnf->release_string);
if (DCB* dcb = dcb_get_current())
{
if (dcb->session)
{
session_dump_statements(dcb->session);
session_dump_log(dcb->session);
}
}
auto cb = [](const char* symbol, const char* cmd) {
MXS_ALERT(" %s: %s", symbol, cmd);
};

View File

@ -97,7 +97,7 @@ public:
};
typedef std::deque<QueryInfo> QueryInfos;
using Log = std::deque<std::string>;
using FilterList = std::vector<SessionFilter>;
Session(SERVICE* service);
@ -121,8 +121,11 @@ public:
void book_server_response(SERVER* pServer, bool final_response);
void book_last_as_complete();
void reset_server_bookkeeping();
void append_session_log(std::string);
void dump_session_log();
json_t* queries_as_json() const;
json_t* log_as_json() const;
void link_backend_dcb(DCB* dcb)
{
@ -148,6 +151,7 @@ private:
int m_current_query = -1; /*< The index of the current query */
DCBSet m_dcb_set; /*< Set of associated backend DCBs */
uint32_t m_retain_last_statements; /*< How many statements be retained */
Log m_log; /*< Session specific in-memory log */
};
}

View File

@ -47,13 +47,22 @@ size_t mxs_get_context(char* buffer, size_t len)
return len;
}
void mxs_log_in_memory(const char* msg, size_t len)
{
MXS_SESSION* session = session_get_current();
if (session)
{
session_append_log(session, msg);
}
}
}
bool mxs_log_init(const char* ident, const char* logdir, mxs_log_target_t target)
{
mxb::Logger::set_ident("MariaDB MaxScale");
return mxb_log_init(ident, logdir, LOGFILE_NAME, target, mxs_get_context);
return mxb_log_init(ident, logdir, LOGFILE_NAME, target, mxs_get_context, mxs_log_in_memory);
}
namespace

View File

@ -743,7 +743,7 @@ std::unique_ptr<ResultSet> monitor_get_list()
for (MXS_MONITOR* ptr = allMonitors; ptr; ptr = ptr->next)
{
const char* state = ptr->state & MONITOR_STATE_RUNNING ? "Running" : "Stopped";
const char* state = ptr->state == MONITOR_STATE_RUNNING ? "Running" : "Stopped";
set->add_row({ptr->name, state});
}
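
The one-line change above replaces a bitwise test with an equality test: `state & MONITOR_STATE_RUNNING` is true for any state value that merely has the running bit set, while `state == MONITOR_STATE_RUNNING` matches only the exact running state. A self-contained illustration of the difference, using hypothetical flag values rather than MaxScale's real `MONITOR_STATE_*` constants:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical flag values for illustration only; the real
// MONITOR_STATE_* constants are defined elsewhere in MaxScale.
const uint32_t STATE_RUNNING  = 0x1;
const uint32_t STATE_STOPPING = 0x2;

// Pre-fix behaviour: any state with the running bit set reports "Running".
const char* describe_bitwise(uint32_t state)
{
    return (state & STATE_RUNNING) ? "Running" : "Stopped";
}

// Post-fix behaviour: only the exact running state reports "Running".
const char* describe_exact(uint32_t state)
{
    return (state == STATE_RUNNING) ? "Running" : "Stopped";
}
```

For a composite state such as "running but shutting down", the two tests disagree, which is what the fix corrects.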

View File

@ -62,11 +62,13 @@ struct
uint64_t next_session_id;
uint32_t retain_last_statements;
session_dump_statements_t dump_statements;
uint32_t session_trace;
} this_unit =
{
1,
0,
SESSION_DUMP_STATEMENTS_NEVER
SESSION_DUMP_STATEMENTS_NEVER,
0
};
static struct session dummy_session()
@ -909,6 +911,9 @@ json_t* session_json_data(const Session* session, const char* host)
json_t* queries = session->queries_as_json();
json_object_set_new(attr, "queries", queries);
json_t* log = session->log_as_json();
json_object_set_new(attr, "log", log);
json_object_set_new(data, CN_ATTRIBUTES, attr);
json_object_set_new(data, CN_LINKS, mxs_json_self_link(host, CN_SESSIONS, ss.str().c_str()));
@ -1111,6 +1116,32 @@ void session_dump_statements(MXS_SESSION* session)
pSession->dump_statements();
}
void session_set_session_trace(uint32_t value)
{
this_unit.session_trace = value;
}
uint32_t session_get_session_trace()
{
return this_unit.session_trace;
}
void session_append_log(MXS_SESSION* pSession, const char* log)
{
// Ignore dummy and listener sessions
if (pSession->state != SESSION_STATE_DUMMY
&& pSession->state != SESSION_STATE_LISTENER
&& pSession->state != SESSION_STATE_LISTENER_STOPPED)
{
static_cast<Session*>(pSession)->append_session_log(std::string(log));
}
}
void session_dump_log(MXS_SESSION* pSession)
{
static_cast<Session*>(pSession)->dump_session_log();
}
class DelayedRoutingTask
{
DelayedRoutingTask(const DelayedRoutingTask&) = delete;
@ -1365,6 +1396,18 @@ json_t* Session::queries_as_json() const
return pQueries;
}
json_t* Session::log_as_json() const
{
json_t* pLog = json_array();
for (const auto& i : m_log)
{
json_array_append_new(pLog, json_string(i.c_str()));
}
return pLog;
}
bool Session::setup_filters(Service* service)
{
for (const auto& a : service->get_filters())
@ -1724,3 +1767,28 @@ void Session::QueryInfo::reset_server_bookkeeping()
m_completed.tv_nsec = 0;
m_complete = false;
}
void Session::append_session_log(std::string log)
{
m_log.push_front(log);
if (m_log.size() >= this_unit.session_trace)
{
m_log.pop_back();
}
}
void Session::dump_session_log()
{
if (!(m_log.empty()))
{
std::string log;
for (const auto& s : m_log)
{
log += s;
}
MXS_NOTICE("Session log for session (%" PRIu64"): \n%s ", ses_id, log.c_str());
}
}
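
The session trace added above keeps a bounded `std::deque`: new entries are pushed to the front and the oldest is dropped once the configured limit is hit. A minimal standalone sketch of that ring-buffer behaviour (the names are illustrative, not MaxScale's API; note the sketch drops an entry only when the limit is exceeded, whereas the diff's `>=` check appears to retain one entry fewer than `session_trace`):

```cpp
#include <cassert>
#include <deque>
#include <string>

// Bounded in-memory log: newest entry first, oldest dropped at capacity.
// `capacity` plays the role of this_unit.session_trace in the diff.
class BoundedLog
{
public:
    explicit BoundedLog(size_t capacity) : m_capacity(capacity) {}

    void append(std::string entry)
    {
        m_log.push_front(std::move(entry));
        if (m_log.size() > m_capacity)
        {
            m_log.pop_back();   // discard the oldest entry
        }
    }

    size_t size() const { return m_log.size(); }
    const std::string& newest() const { return m_log.front(); }
    const std::string& oldest() const { return m_log.back(); }

private:
    size_t m_capacity;
    std::deque<std::string> m_log;
};
```

Storing newest-first matches how `dump_session_log()` concatenates the deque for output.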

View File

@ -365,9 +365,9 @@
}
},
"lodash": {
"version": "4.17.11",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz",
"integrity": "sha512-cQKh8igo5QUhZ7lg38DYWAxMvjSAKG0A8wGSVimP07SIUEK2UO+arSRKbRZWtelMtN5V0Hkwh5ryOto/SshYIg=="
"version": "4.17.14",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.14.tgz",
"integrity": "sha512-mmKYbW3GLuJeX+iGP+Y7Gp1AiGHGbXHCOh/jZmrawMmsE7MS4znI3RL2FsjbqOyMayHInjOeykW7PEajUk1/xw=="
},
"mime-db": {
"version": "1.37.0",

View File

@ -21,6 +21,9 @@
#include <netdb.h>
#include <stdio.h>
#include <algorithm>
#include <vector>
#include <maxscale/alloc.h>
#include <maxscale/dcb.h>
#include <maxscale/log.h>
@ -136,7 +139,7 @@ const char* mariadb_users_query
// We only care about users that have a default role assigned
"WHERE t.default_role = u.user %s;";
static int get_users(SERV_LISTENER* listener, bool skip_local);
static int get_users(SERV_LISTENER* listener, bool skip_local, SERVER** srv);
static MYSQL* gw_mysql_init(void);
static int gw_mysql_set_timeouts(MYSQL* handle);
static char* mysql_format_user_entry(void* data);
@ -192,9 +195,9 @@ static char* get_users_query(const char* server_version, int version, bool inclu
return rval;
}
int replace_mysql_users(SERV_LISTENER* listener, bool skip_local)
int replace_mysql_users(SERV_LISTENER* listener, bool skip_local, SERVER** srv)
{
int i = get_users(listener, skip_local);
int i = get_users(listener, skip_local, srv);
return i;
}
@ -1025,17 +1028,17 @@ bool query_and_process_users(const char* query, MYSQL* con, sqlite3* handle, SER
return rval;
}
int get_users_from_server(MYSQL* con, SERVER_REF* server_ref, SERVICE* service, SERV_LISTENER* listener)
int get_users_from_server(MYSQL* con, SERVER* server, SERVICE* service, SERV_LISTENER* listener)
{
if (server_ref->server->version_string[0] == 0)
if (server->version_string[0] == 0)
{
mxs_mysql_update_server_version(con, server_ref->server);
mxs_mysql_update_server_version(con, server);
}
char* query = get_users_query(server_ref->server->version_string,
server_ref->server->version,
char* query = get_users_query(server->version_string,
server->version,
service->enable_root,
roles_are_available(con, service, server_ref->server));
roles_are_available(con, service, server));
MYSQL_AUTH* instance = (MYSQL_AUTH*)listener->auth_instance;
sqlite3* handle = get_handle(instance);
@ -1043,20 +1046,20 @@ int get_users_from_server(MYSQL* con, SERVER_REF* server_ref, SERVICE* service,
bool rv = query_and_process_users(query, con, handle, service, &users);
if (!rv && have_mdev13453_problem(con, server_ref->server))
if (!rv && have_mdev13453_problem(con, server))
{
/**
* Try to work around MDEV-13453 by using a query without CTEs. Masquerading as
* a 10.1.10 server makes sure CTEs aren't used.
*/
MXS_FREE(query);
query = get_users_query(server_ref->server->version_string, 100110, service->enable_root, true);
query = get_users_query(server->version_string, 100110, service->enable_root, true);
rv = query_and_process_users(query, con, handle, service, &users);
}
if (!rv)
{
MXS_ERROR("Failed to load users from server '%s': %s", server_ref->server->name, mysql_error(con));
MXS_ERROR("Failed to load users from server '%s': %s", server->name, mysql_error(con));
}
MXS_FREE(query);
@ -1084,6 +1087,28 @@ int get_users_from_server(MYSQL* con, SERVER_REF* server_ref, SERVICE* service,
return users;
}
// Sorts candidate servers so that masters come before slaves, which come before merely running servers
static std::vector<SERVER*> get_candidates(SERVICE* service, bool skip_local)
{
std::vector<SERVER*> candidates;
for (auto server = service->dbref; server; server = server->next)
{
if (SERVER_REF_IS_ACTIVE(server) && server_is_running(server->server)
&& (!skip_local || !server_is_mxs_service(server->server)))
{
candidates.push_back(server->server);
}
}
std::sort(candidates.begin(), candidates.end(), [](SERVER* a, SERVER* b) {
return (server_is_master(a) && !server_is_master(b))
|| (server_is_slave(a) && !server_is_slave(b) && !server_is_master(b));
});
return candidates;
}
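
The comparator in `get_candidates()` above ranks masters ahead of slaves, and slaves ahead of servers that are merely running, so user loading prefers the most authoritative source. Its ordering logic can be checked in isolation; the sketch below substitutes a plain enum for the real `server_is_master`/`server_is_slave` status checks:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Stand-in for the server status bits used by the real comparator.
enum class Role { Master, Slave, Running };

// Masters first, then slaves, then other running servers
// (mirrors the lambda passed to std::sort in get_candidates()).
void sort_candidates(std::vector<Role>& servers)
{
    std::sort(servers.begin(), servers.end(), [](Role a, Role b) {
        auto is_master = [](Role r) { return r == Role::Master; };
        auto is_slave  = [](Role r) { return r == Role::Slave; };
        return (is_master(a) && !is_master(b))
               || (is_slave(a) && !is_slave(b) && !is_master(b));
    });
}
```

The two-clause comparator is equivalent to ranking master < slave < running, and equal roles compare false both ways, satisfying `std::sort`'s strict-weak-ordering requirement.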
/**
* Load the user/passwd form mysql.user table into the service users' hashtable
* environment.
@ -1092,7 +1117,7 @@ int get_users_from_server(MYSQL* con, SERVER_REF* server_ref, SERVICE* service,
* @param users The users table into which to load the users
* @return -1 on any error or the number of users inserted
*/
static int get_users(SERV_LISTENER* listener, bool skip_local)
static int get_users(SERV_LISTENER* listener, bool skip_local, SERVER** srv)
{
const char* service_user = NULL;
const char* service_passwd = NULL;
@ -1112,33 +1137,18 @@ static int get_users(SERV_LISTENER* listener, bool skip_local)
sqlite3* handle = get_handle(instance);
delete_mysql_users(handle);
SERVER_REF* server = service->dbref;
int total_users = -1;
bool no_active_servers = true;
auto candidates = get_candidates(service, skip_local);
for (server = service->dbref; !maxscale_is_shutting_down() && server; server = server->next)
for (auto server : candidates)
{
if (!SERVER_REF_IS_ACTIVE(server) || !server_is_active(server->server)
|| (skip_local && server_is_mxs_service(server->server))
|| !server_is_running(server->server))
if (MYSQL* con = gw_mysql_init())
{
continue;
}
no_active_servers = false;
MYSQL* con = gw_mysql_init();
if (con)
{
if (mxs_mysql_real_connect(con, server->server, service_user, dpwd) == NULL)
if (mxs_mysql_real_connect(con, server, service_user, dpwd) == NULL)
{
MXS_ERROR("Failure loading users data from backend "
"[%s:%i] for service [%s]. MySQL error %i, %s",
server->server->address,
server->server->port,
service->name,
mysql_errno(con),
mysql_error(con));
MXS_ERROR("Failure loading users data from backend [%s:%i] for service [%s]. "
"MySQL error %i, %s", server->address, server->port, service->name,
mysql_errno(con), mysql_error(con));
mysql_close(con);
}
else
@ -1148,6 +1158,7 @@ static int get_users(SERV_LISTENER* listener, bool skip_local)
if (users > total_users)
{
*srv = server;
total_users = users;
}
@ -1163,12 +1174,12 @@ static int get_users(SERV_LISTENER* listener, bool skip_local)
MXS_FREE(dpwd);
if (no_active_servers)
if (candidates.empty())
{
// This service has no servers or all servers are local MaxScale services
total_users = 0;
}
else if (server == NULL && total_users == -1)
else if (*srv == nullptr && total_users == -1)
{
MXS_ERROR("Unable to get user data from backend database for service [%s]."
" Failed to connect to any of the backend databases.",

View File

@ -787,7 +787,8 @@ static int mysql_auth_load_users(SERV_LISTENER* port)
first_load = true;
}
int loaded = replace_mysql_users(port, first_load);
SERVER* srv = nullptr;
int loaded = replace_mysql_users(port, first_load, &srv);
bool injected = false;
if (loaded <= 0)
@ -834,7 +835,9 @@ static int mysql_auth_load_users(SERV_LISTENER* port)
}
else if (loaded > 0 && first_load)
{
MXS_NOTICE("[%s] Loaded %d MySQL users for listener %s.", service->name, loaded, port->name);
mxb_assert(srv);
MXS_NOTICE("[%s] Loaded %d MySQL users for listener %s from server %s.",
service->name, loaded, port->name, srv->name);
}
return rc;

View File

@ -71,7 +71,7 @@ static const char mysqlauth_validate_user_query[] =
static const char mysqlauth_validate_user_query_lower[] =
"SELECT password FROM " MYSQLAUTH_USERS_TABLE_NAME
" WHERE user = '%s' AND ( '%s' = host OR '%s' LIKE host)"
" AND (anydb = '1' OR LOWER('%s') IN ('', 'information_schema') OR LOWER('%s') LIKE LOWER(db)"
" AND (anydb = '1' OR LOWER('%s') IN ('', 'information_schema') OR LOWER('%s') LIKE LOWER(db))"
" LIMIT 1";
/** Query that only checks if there's a matching user */
@ -198,10 +198,11 @@ bool dbusers_save(sqlite3* src, const char* filename);
*
* @param service The current service
* @param skip_local Skip loading of users on local MaxScale services
* @param srv Server where the users were loaded from (output)
*
* @return -1 on any error or the number of users inserted (0 means no users at all)
*/
int replace_mysql_users(SERV_LISTENER* listener, bool skip_local);
int replace_mysql_users(SERV_LISTENER* listener, bool skip_local, SERVER** srv);
/**
* @brief Verify the user has access to the database

View File

@ -94,9 +94,9 @@ void cache_config_reset(CACHE_CONFIG& config)
bool cache_command_show(const MODULECMD_ARG* pArgs, json_t** output)
{
mxb_assert(pArgs->argc == 1);
mxb_assert(MODULECMD_GET_TYPE(&pArgs->argv[1].type) == MODULECMD_ARG_FILTER);
mxb_assert(MODULECMD_GET_TYPE(&pArgs->argv[0].type) == MODULECMD_ARG_FILTER);
const MXS_FILTER_DEF* pFilterDef = pArgs->argv[1].value.filter;
const MXS_FILTER_DEF* pFilterDef = pArgs->argv[0].value.filter;
mxb_assert(pFilterDef);
CacheFilter* pFilter = reinterpret_cast<CacheFilter*>(filter_def_get_instance(pFilterDef));

View File

@ -903,6 +903,7 @@ static int gw_read_and_write(DCB* dcb)
if (auth_change_requested(read_buffer)
&& handle_auth_change_response(read_buffer, proto, dcb))
{
gwbuf_free(read_buffer);
return 0;
}
else

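The added `gwbuf_free(read_buffer)` above plugs a leak on the early-return path: before the fix, the buffer was only freed on the `else` branch. In modern C++ a scope guard removes this whole class of bug; a generic sketch with stand-in types (not MaxScale's GWBUF API):

```cpp
#include <cassert>
#include <memory>

// Stand-ins for GWBUF and gwbuf_free(); names are illustrative only.
struct Buffer { int payload = 0; };

int frees = 0;
void buffer_free(Buffer* b)
{
    ++frees;    // count releases so the behaviour is observable
    delete b;
}

// RAII wrapper: the buffer is released on every return path, so an
// early return cannot leak it the way the pre-fix code did.
using BufferPtr = std::unique_ptr<Buffer, void (*)(Buffer*)>;

int process(bool auth_change)
{
    BufferPtr buf(new Buffer, buffer_free);
    if (auth_change)
    {
        return 0;   // buf freed automatically here
    }
    return 1;       // ...and on this path too
}
```

With manual `gwbuf_free()` calls, every new early return needs its own cleanup, which is exactly how this leak slipped in.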
View File

@ -1585,6 +1585,11 @@ static int gw_client_hangup_event(DCB* dcb)
session_dump_statements(session);
}
if (session_get_session_trace())
{
session_dump_log(session);
}
// The client did not send a COM_QUIT packet
std::string errmsg {"Connection killed by MaxScale"};
std::string extra {session_get_close_reason(dcb->session)};

View File

@ -294,6 +294,7 @@ void RWBackend::process_packets(GWBUF* result)
auto it = buffer.begin();
MXB_AT_DEBUG(size_t total_len = buffer.length());
MXB_AT_DEBUG(size_t used_len = 0);
mxb_assert(dcb()->session->service->capabilities & (RCAP_TYPE_PACKET_OUTPUT | RCAP_TYPE_STMT_OUTPUT));
while (it != buffer.end())
{

View File

@ -415,8 +415,6 @@ json_t* RWSplit::diagnostics_json() const
for (const auto& a : all_server_stats())
{
mxb_assert(a.second.total == a.second.read + a.second.write);
ServerStats::CurrentStats stats = a.second.current_stats();
json_t* obj = json_object();

View File

@ -210,6 +210,17 @@ bool RWSplitSession::route_single_stmt(GWBUF* querybuf)
{
update_trx_statistics();
auto next_master = get_target_backend(BE_MASTER, NULL, MXS_RLAG_UNDEFINED);
if (should_replace_master(next_master))
{
MXS_INFO("Replacing old master '%s' with new master '%s'",
m_current_master ?
m_current_master->name() : "<no previous master>",
next_master->name());
replace_master(next_master);
}
if (m_qc.is_trx_starting() // A transaction is starting
&& !session_trx_is_read_only(m_client->session) // Not explicitly read-only
&& should_try_trx_on_slave(route_target)) // Qualifies for speculative routing
@ -350,7 +361,7 @@ bool RWSplitSession::route_single_stmt(GWBUF* querybuf)
succp = true;
MXS_INFO("Delaying routing: %s", extract_sql(querybuf).c_str());
}
else
else if (m_config.master_failure_mode != RW_ERROR_ON_WRITE)
{
MXS_ERROR("Could not find valid server for target type %s, closing "
"connection.", route_target_to_string(route_target));
@ -1052,15 +1063,6 @@ bool RWSplitSession::handle_master_is_target(SRWBackend* dest)
SRWBackend target = get_target_backend(BE_MASTER, NULL, MXS_RLAG_UNDEFINED);
bool succp = true;
if (should_replace_master(target))
{
MXS_INFO("Replacing old master '%s' with new master '%s'",
m_current_master ?
m_current_master->name() : "<no previous master>",
target->name());
replace_master(target);
}
if (target && target == m_current_master)
{
mxb::atomic::add(&m_router->stats().n_master, 1, mxb::atomic::RELAXED);

View File

@ -149,8 +149,8 @@ void RWSplitSession::process_sescmd_response(SRWBackend& backend, GWBUF** ppPack
{
if (cmd == MYSQL_REPLY_ERR && m_sescmd_responses[id] != MYSQL_REPLY_ERR)
{
MXS_INFO("Session command failed on slave '%s': %s",
backend->name(), extract_error(*ppPacket).c_str());
MXS_WARNING("Session command failed on slave '%s': %s",
backend->name(), extract_error(*ppPacket).c_str());
}
discard_if_response_differs(backend, m_sescmd_responses[id], cmd, sescmd);

View File

@ -395,6 +395,7 @@ static void log_unexpected_response(SRWBackend& backend, GWBUF* buffer, GWBUF* c
backend->current_command(),
sql.c_str());
session_dump_statements(backend->dcb()->session);
session_dump_log(backend->dcb()->session);
mxb_assert(false);
}
}
@ -963,7 +964,9 @@ void RWSplitSession::handleError(GWBUF* errmsgbuf,
MXS_INFO("Master '%s' failed: %s", backend->name(), extract_error(errmsgbuf).c_str());
/** The connection to the master has failed */
if (!backend->is_waiting_result())
bool expected_response = backend->is_waiting_result();
if (!expected_response)
{
/** The failure of a master is not considered a critical
* failure as partial functionality still remains. If
@ -985,7 +988,6 @@ void RWSplitSession::handleError(GWBUF* errmsgbuf,
{
// We were expecting a response but we aren't going to get one
mxb_assert(m_expected_responses > 0);
m_expected_responses--;
errmsg += " Lost connection to master server while waiting for a result.";
if (can_retry_query())
@ -1010,21 +1012,19 @@ void RWSplitSession::handleError(GWBUF* errmsgbuf,
if (!can_continue)
{
if (!backend->is_master() && !backend->server()->master_err_is_logged)
{
MXS_ERROR("Server %s (%s) lost the master status while waiting"
" for a result. Client sessions will be closed.",
backend->name(),
backend->uri());
backend->server()->master_err_is_logged = true;
}
else
{
int64_t idle = mxs_clock() - backend->dcb()->last_read;
MXS_ERROR("Lost connection to the master server, closing session.%s "
"Connection has been idle for %.1f seconds. Error caused by: %s",
errmsg.c_str(), (float)idle / 10.f, extract_error(errmsgbuf).c_str());
}
int64_t idle = mxs_clock() - backend->dcb()->last_read;
MXS_ERROR("Lost connection to the master server '%s', closing session.%s "
"Connection has been idle for %.1f seconds. Error caused by: %s",
backend->name(), errmsg.c_str(), (float)idle / 10.f,
extract_error(errmsgbuf).c_str());
}
// Decrement the expected response count only if we know we can continue the session.
// This keeps the internal logic sound even if another query is routed before the session
// is closed.
if (can_continue && expected_response)
{
m_expected_responses--;
}
backend->close();

View File

@ -265,9 +265,12 @@ json_t* SchemaRouter::diagnostics_json() const
return rval;
}
static const uint64_t CAPABILITIES = RCAP_TYPE_CONTIGUOUS_INPUT | RCAP_TYPE_PACKET_OUTPUT
| RCAP_TYPE_RUNTIME_CONFIG;
uint64_t SchemaRouter::getCapabilities()
{
return RCAP_TYPE_CONTIGUOUS_INPUT | RCAP_TYPE_RUNTIME_CONFIG;
return schemarouter::CAPABILITIES;
}
}
@ -288,21 +291,21 @@ extern "C" MXS_MODULE* MXS_CREATE_MODULE()
MXS_ROUTER_VERSION,
"A database sharding router for simple sharding",
"V1.0.0",
RCAP_TYPE_CONTIGUOUS_INPUT | RCAP_TYPE_RUNTIME_CONFIG,
schemarouter::CAPABILITIES,
&schemarouter::SchemaRouter::s_object,
NULL, /* Process init. */
NULL, /* Process finish. */
NULL, /* Thread init. */
NULL, /* Thread finish. */
NULL,
NULL,
NULL,
NULL,
{
{"ignore_databases", MXS_MODULE_PARAM_STRING },
{"ignore_databases_regex", MXS_MODULE_PARAM_STRING },
{"max_sescmd_history", MXS_MODULE_PARAM_COUNT, "0"},
{"disable_sescmd_history", MXS_MODULE_PARAM_BOOL, "false"},
{"refresh_databases", MXS_MODULE_PARAM_BOOL, "true"},
{"refresh_interval", MXS_MODULE_PARAM_COUNT, DEFAULT_REFRESH_INTERVAL},
{"debug", MXS_MODULE_PARAM_BOOL, "false"},
{"preferred_server", MXS_MODULE_PARAM_SERVER },
{"ignore_databases", MXS_MODULE_PARAM_STRING },
{"ignore_databases_regex", MXS_MODULE_PARAM_STRING },
{"max_sescmd_history", MXS_MODULE_PARAM_COUNT, "0"},
{"disable_sescmd_history", MXS_MODULE_PARAM_BOOL, "false"},
{"refresh_databases", MXS_MODULE_PARAM_BOOL, "true"},
{"refresh_interval", MXS_MODULE_PARAM_COUNT, DEFAULT_REFRESH_INTERVAL},
{"debug", MXS_MODULE_PARAM_BOOL, "false"},
{"preferred_server", MXS_MODULE_PARAM_SERVER },
{MXS_END_MODULE_PARAMS}
}
};

View File

@ -1192,11 +1192,13 @@ char* get_lenenc_str(void* data)
return rval;
}
static const std::set<std::string> always_ignore = {"mysql", "information_schema", "performance_schema"};
bool SchemaRouterSession::ignore_duplicate_database(const char* data)
{
bool rval = false;
if (m_config->ignored_dbs.find(data) != m_config->ignored_dbs.end())
if (m_config->ignored_dbs.count(data) || always_ignore.count(data))
{
rval = true;
}
@ -1379,8 +1381,7 @@ void SchemaRouterSession::query_databases()
"LEFT JOIN information_schema.tables AS t ON s.schema_name = t.table_schema "
"WHERE t.table_name IS NULL "
"UNION "
"SELECT CONCAT (table_schema, '.', table_name) FROM information_schema.tables "
"WHERE table_schema NOT IN ('information_schema', 'performance_schema', 'mysql');");
"SELECT CONCAT (table_schema, '.', table_name) FROM information_schema.tables");
gwbuf_set_type(buffer, GWBUF_TYPE_COLLECT_RESULT);
for (SSRBackendList::iterator it = m_backends.begin(); it != m_backends.end(); it++)

View File

@ -1,5 +1,5 @@
[maxscale]
threads=4
threads=auto
libdir=@CMAKE_INSTALL_PREFIX@/@MAXSCALE_LIBDIR@
logdir=@CMAKE_INSTALL_PREFIX@/log/maxscale/
datadir=@CMAKE_INSTALL_PREFIX@/lib/maxscale
@ -8,13 +8,37 @@ language=@CMAKE_INSTALL_PREFIX@/lib/maxscale/
piddir=@CMAKE_INSTALL_PREFIX@/run/maxscale/
admin_auth=false
[server1]
type=server
address=127.0.0.1
port=3000
protocol=MariaDBBackend
[server2]
type=server
address=127.0.0.1
port=3001
protocol=MariaDBBackend
[server3]
type=server
address=127.0.0.1
port=3002
protocol=MariaDBBackend
[server4]
type=server
address=127.0.0.1
port=3003
protocol=MariaDBBackend
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2,server3,server4
user=maxuser
password=maxpwd
monitor_interval=10000
monitor_interval=5000
[RW-Split-Router]
type=service
@ -22,7 +46,6 @@ router=readwritesplit
servers=server1,server2,server3,server4
user=maxuser
password=maxpwd
max_slave_connections=100%
[SchemaRouter-Router]
type=service
@ -30,7 +53,6 @@ router=schemarouter
servers=server1,server2,server3,server4
user=maxuser
password=maxpwd
auth_all_servers=1
[RW-Split-Hint-Router]
type=service
@ -38,7 +60,6 @@ router=readwritesplit
servers=server1,server2,server3,server4
user=maxuser
password=maxpwd
max_slave_connections=100%
filters=Hint
[Read-Connection-Router]
@ -54,21 +75,6 @@ filters=QLA
type=filter
module=hintfilter
[recurse3]
type=filter
module=tee
service=RW-Split-Router
[recurse2]
type=filter
module=tee
service=Read-Connection-Router
[recurse1]
type=filter
module=tee
service=RW-Split-Hint-Router
[QLA]
type=filter
module=qlafilter
@ -77,10 +83,6 @@ append=false
flush=true
filebase=/tmp/qla.log
[CLI]
type=service
router=cli
[Read-Connection-Listener]
type=listener
service=Read-Connection-Router
@ -105,32 +107,12 @@ service=RW-Split-Hint-Router
protocol=MariaDBClient
port=4009
[CLI]
type=service
router=cli
[CLI-Listener]
type=listener
service=CLI
protocol=maxscaled
socket=default
[server1]
type=server
address=127.0.0.1
port=3000
protocol=MariaDBBackend
[server2]
type=server
address=127.0.0.1
port=3001
protocol=MariaDBBackend
[server3]
type=server
address=127.0.0.1
port=3002
protocol=MariaDBBackend
[server4]
type=server
address=127.0.0.1
port=3003
protocol=MariaDBBackend

View File

@ -1,5 +1,5 @@
[maxscale]
threads=4
threads=auto
libdir=@CMAKE_INSTALL_PREFIX@/@MAXSCALE_LIBDIR@
logdir=@CMAKE_INSTALL_PREFIX@/secondary/log/maxscale/
datadir=@CMAKE_INSTALL_PREFIX@/secondary/lib/maxscale
@ -10,13 +10,37 @@ persistdir=@CMAKE_INSTALL_PREFIX@/secondary/lib/maxscale/maxscale.cnf.d/
admin_auth=false
admin_port=8990
[server1]
type=server
address=127.0.0.1
port=3000
protocol=MariaDBBackend
[server2]
type=server
address=127.0.0.1
port=3001
protocol=MariaDBBackend
[server3]
type=server
address=127.0.0.1
port=3002
protocol=MariaDBBackend
[server4]
type=server
address=127.0.0.1
port=3003
protocol=MariaDBBackend
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2,server3,server4
user=maxuser
password=maxpwd
monitor_interval=10000
monitor_interval=5000
[RW-Split-Router]
type=service
@ -24,7 +48,6 @@ router=readwritesplit
servers=server1,server2,server3,server4
user=maxuser
password=maxpwd
max_slave_connections=100%
[SchemaRouter-Router]
type=service
@ -32,7 +55,6 @@ router=schemarouter
servers=server1,server2,server3,server4
user=maxuser
password=maxpwd
auth_all_servers=1
[RW-Split-Hint-Router]
type=service
@ -40,7 +62,6 @@ router=readwritesplit
servers=server1,server2,server3,server4
user=maxuser
password=maxpwd
max_slave_connections=100%
filters=Hint
[Read-Connection-Router]
@ -56,21 +77,6 @@ filters=QLA
type=filter
module=hintfilter
[recurse3]
type=filter
module=tee
service=RW-Split-Router
[recurse2]
type=filter
module=tee
service=Read-Connection-Router
[recurse1]
type=filter
module=tee
service=RW-Split-Hint-Router
[QLA]
type=filter
module=qlafilter
@ -79,10 +85,6 @@ append=false
flush=true
filebase=/tmp/qla2.log
[CLI]
type=service
router=cli
[Read-Connection-Listener]
type=listener
service=Read-Connection-Router
@ -107,32 +109,12 @@ service=RW-Split-Hint-Router
protocol=MariaDBClient
port=5009
[CLI]
type=service
router=cli
[CLI-Listener]
type=listener
service=CLI
protocol=maxscaled
socket=/tmp/maxadmin2.sock
[server1]
type=server
address=127.0.0.1
port=3000
protocol=MariaDBBackend
[server2]
type=server
address=127.0.0.1
port=3001
protocol=MariaDBBackend
[server3]
type=server
address=127.0.0.1
port=3002
protocol=MariaDBBackend
[server4]
type=server
address=127.0.0.1
port=3003
protocol=MariaDBBackend