Merge branch '2.0' into develop

Markus Mäkelä 2017-02-01 09:35:13 +02:00
commit e64a641bcd
14 changed files with 314 additions and 78 deletions

View File

@ -237,6 +237,12 @@ and routed. Here is a list of the current limitations.
the query will be routed to the first available server. This may return an
error about database access rights instead of an unknown database error.
* The preparation of a prepared statement is routed to all servers. The
execution of a prepared statement is routed to the first available server or
to the server pointed to by a routing hint attached to the query. As text
protocol prepared statements are relatively rare, prepared statements cannot
be considered supported in the schemarouter.
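For illustration, a routing hint of the kind mentioned above is written as a
comment on the query (hint syntax as in the MaxScale hintfilter documentation;
the server name is a placeholder):

```sql
-- Prepared with the text protocol, then executed on an explicitly chosen server.
PREPARE stmt FROM 'SELECT id FROM t1 WHERE id = ?';
EXECUTE stmt USING @id; -- maxscale route to server server1
```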
## Avrorouter limitations (avrorouter)
The avrorouter does not support the following data types and conversions.

View File

@ -11,6 +11,7 @@
as JSON objects (beta level functionality).
For more details, please refer to:
* [MariaDB MaxScale 2.0.4 Release Notes](Release-Notes/MaxScale-2.0.4-Release-Notes.md)
* [MariaDB MaxScale 2.0.3 Release Notes](Release-Notes/MaxScale-2.0.3-Release-Notes.md)
* [MariaDB MaxScale 2.0.2 Release Notes](Release-Notes/MaxScale-2.0.2-Release-Notes.md)
* [MariaDB MaxScale 2.0.1 Release Notes](Release-Notes/MaxScale-2.0.1-Release-Notes.md)

View File

@ -35,7 +35,21 @@ configuration directory is _/etc/maxscale.modules.d_.
#### `action`
This parameter is optional and determines what action is taken when a query
matches a rule. The value can be either `allow`, which allows all matching
queries to proceed but blocks those that don't match, or `block`, which blocks
all matching queries, or `ignore` which allows all queries to proceed.
The following statement types will always be allowed through when `action` is
set to `allow`:
- COM_QUIT: Client closes connection
- COM_PING: Server is pinged
- COM_CHANGE_USER: The user is changed for an active connection
- COM_SET_OPTION: Client multi-statements are being configured
- COM_FIELD_LIST: Alias for the `SHOW COLUMNS FROM <table>;` query
- COM_PROCESS_KILL: Alias for `KILL <id>;` query
- COM_PROCESS_INFO: Alias for `SHOW PROCESSLIST;`
You can have both blacklist and whitelist functionality by configuring one filter
with `action=allow` and another one with `action=block`. You can then use
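To make the whitelist/blacklist pairing concrete, a minimal `maxscale.cnf`
sketch could look as follows (the filter names and rule-file paths here are
hypothetical, not part of this commit):

```
[WhitelistFilter]
type=filter
module=dbfwfilter
action=allow
rules=/etc/maxscale.modules.d/whitelist.rules

[BlacklistFilter]
type=filter
module=dbfwfilter
action=block
rules=/etc/maxscale.modules.d/blacklist.rules
```

Both filters would then be chained into a service with
`filters=WhitelistFilter|BlacklistFilter`.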

View File

@ -0,0 +1,57 @@
# MariaDB MaxScale 2.0.4 Release Notes
Release 2.0.4 is a GA release.
This document describes the changes in release 2.0.4, when compared to
release [2.0.3](MaxScale-2.0.3-Release-Notes.md).
If you are upgrading from release 1.4, please also read the release
notes of release [2.0.3](./MaxScale-2.0.3-Release-Notes.md),
release [2.0.2](./MaxScale-2.0.2-Release-Notes.md),
release [2.0.1](./MaxScale-2.0.1-Release-Notes.md) and
[2.0.0](./MaxScale-2.0.0-Release-Notes.md).
For any problems you encounter, please submit a bug report at
[Jira](https://jira.mariadb.org).
## Changed Features
- The dbfwfilter now rejects all prepared statements instead of ignoring
them. This affects _wildcard_, _columns_, _on_queries_ and _no_where_clause_
type rules which previously ignored prepared statements.
- The dbfwfilter now allows COM_PING and other commands through when
  `action=allow`. See the
  [documentation](../Filters/Database-Firewall-Filter.md) for more details.
- The MariaDB Connector-C was upgraded to a preliminary release of version 2.3.3 (fixes MXS-951).
## Bug fixes
[Here](https://jira.mariadb.org/issues/?jql=project%20%3D%20MXS%20AND%20issuetype%20%3D%20Bug%20AND%20status%20%3D%20Closed%20AND%20fixVersion%20%3D%202.0.4)
is a list of bugs fixed since the release of MaxScale 2.0.3.
* [MXS-1111](https://jira.mariadb.org/browse/MXS-1111): Request Ping not allowed
* [MXS-1082](https://jira.mariadb.org/browse/MXS-1082): Block prepared statements
* [MXS-1080](https://jira.mariadb.org/browse/MXS-1080): Readwritesplit (documentation of max_slave_replication_lag)
* [MXS-951](https://jira.mariadb.org/browse/MXS-951): Using utf8mb4 on galera hosts stops maxscale connections
## Known Issues and Limitations
There are some limitations and known issues within this version of MaxScale.
For more information, please refer to the [Limitations](../About/Limitations.md) document.
## Packaging
RPM and Debian packages are provided for the Linux distributions supported
by MariaDB Enterprise.
Packages can be downloaded [here](https://mariadb.com/resources/downloads).
## Source Code
The source code of MaxScale is tagged at GitHub with a tag derived from the
version of MaxScale. For instance, the tag of version `X.Y.Z` of MaxScale
is `maxscale-X.Y.Z`.
The source code is available [here](https://github.com/mariadb-corporation/MaxScale).
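For example, to fetch and check out this release by its tag:

```
git clone https://github.com/mariadb-corporation/MaxScale
cd MaxScale
git checkout maxscale-2.0.4
```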

View File

@ -32,6 +32,9 @@ This feature is disabled by default.
This applies to Master/Slave replication with the MySQL monitor and the
`detect_replication_lag=1` option set. Please note that
`max_slave_replication_lag` must be greater than the monitor interval.
This option only affects Master-Slave clusters. Galera clusters do not have a
concept of slave lag, even though the application of write sets may lag.
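As an illustrative sketch (values and object names are examples only), the
constraint means the two settings should be paired like this, with
`max_slave_replication_lag` in seconds and `monitor_interval` in milliseconds:

```
[MySQL-Monitor]
type=monitor
module=mysqlmon
monitor_interval=10000
detect_replication_lag=1

[RW-Split-Router]
type=service
router=readwritesplit
max_slave_replication_lag=30
```

Here the lag limit of 30 seconds is comfortably greater than the 10-second
monitoring interval.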
### `use_sql_variables_in`

View File

@ -16,10 +16,31 @@
#include <string.h>
#include <maxscale/log_manager.h>
static bool maxavro_read_sync(FILE *file, uint8_t* sync)
{
    bool rval = true;

    if (fread(sync, 1, SYNC_MARKER_SIZE, file) != SYNC_MARKER_SIZE)
    {
        rval = false;

        if (ferror(file))
        {
            char err[STRERROR_BUFLEN];
            MXS_ERROR("Failed to read file sync marker: %d, %s", errno,
                      strerror_r(errno, err, sizeof(err)));
        }
        else if (feof(file))
        {
            MXS_ERROR("Short read when reading file sync marker.");
        }
        else
        {
            MXS_ERROR("Unspecified error when reading file sync marker.");
        }
    }

    return rval;
}
bool maxavro_verify_block(MAXAVRO_FILE *file)
@ -72,12 +93,24 @@ bool maxavro_read_datablock_start(MAXAVRO_FILE* file)
    if (rval)
    {
        long pos = ftell(file->file);

        if (pos == -1)
        {
            rval = false;
            char err[STRERROR_BUFLEN];
            MXS_ERROR("Failed to read datablock start: %d, %s", errno,
                      strerror_r(errno, err, sizeof(err)));
        }
        else
        {
            file->block_size = bytes;
            file->records_in_block = records;
            file->records_read_from_block = 0;
            file->data_start_pos = pos;
            ss_dassert(file->data_start_pos > file->block_start_pos);
            file->metadata_read = true;
        }
    }
    else if (maxavro_get_error(file) != MAXAVRO_ERR_NONE)
    {
@ -153,32 +186,52 @@ MAXAVRO_FILE* maxavro_file_open(const char* filename)
return NULL;
}
    MAXAVRO_FILE* avrofile = calloc(1, sizeof(MAXAVRO_FILE));
    char *my_filename = strdup(filename);
    bool error = false;

    if (avrofile && my_filename)
    {
        avrofile->file = file;
        avrofile->filename = my_filename;
        avrofile->last_error = MAXAVRO_ERR_NONE;

        char *schema = read_schema(avrofile);

        if (schema)
        {
            avrofile->schema = maxavro_schema_alloc(schema);

            if (avrofile->schema &&
                maxavro_read_sync(file, avrofile->sync) &&
                maxavro_read_datablock_start(avrofile))
            {
                avrofile->header_end_pos = avrofile->block_start_pos;
            }
            else
            {
                MXS_ERROR("Failed to initialize avrofile.");
                maxavro_schema_free(avrofile->schema);
                error = true;
            }

            free(schema);
        }
        else
        {
            error = true;
        }
    }
    else
    {
        error = true;
    }

    if (error)
    {
        fclose(file);
        free(avrofile);
        free(my_filename);
        avrofile = NULL;
    }
@ -248,19 +301,43 @@ void maxavro_file_close(MAXAVRO_FILE *file)
GWBUF* maxavro_file_binary_header(MAXAVRO_FILE *file)
{
    long pos = file->header_end_pos;
    GWBUF *rval = NULL;

    if (fseek(file->file, 0, SEEK_SET) == 0)
    {
        if ((rval = gwbuf_alloc(pos)))
        {
            if (fread(GWBUF_DATA(rval), 1, pos, file->file) != pos)
            {
                if (ferror(file->file))
                {
                    char err[STRERROR_BUFLEN];
                    MXS_ERROR("Failed to read binary header: %d, %s", errno,
                              strerror_r(errno, err, sizeof(err)));
                }
                else if (feof(file->file))
                {
                    MXS_ERROR("Short read when reading binary header.");
                }
                else
                {
                    MXS_ERROR("Unspecified error when reading binary header.");
                }

                gwbuf_free(rval);
                rval = NULL;
            }
        }
        else
        {
            MXS_ERROR("Memory allocation failed when allocating %ld bytes.", pos);
        }
    }
    else
    {
        char err[STRERROR_BUFLEN];
        MXS_ERROR("Failed to read binary header: %d, %s", errno,
                  strerror_r(errno, err, sizeof(err)));
    }

    return rval;
}

View File

@ -120,26 +120,48 @@ MAXAVRO_SCHEMA* maxavro_schema_alloc(const char* json)
    if (rval)
    {
        bool error = false;
        json_error_t err;
        json_t *schema = json_loads(json, 0, &err);

        if (schema)
        {
            json_t *field_arr = NULL;

            if (json_unpack(schema, "{s:o}", "fields", &field_arr) == 0)
            {
                size_t arr_size = json_array_size(field_arr);
                rval->fields = malloc(sizeof(MAXAVRO_SCHEMA_FIELD) * arr_size);
                rval->num_fields = arr_size;

                for (int i = 0; i < arr_size; i++)
                {
                    json_t *object = json_array_get(field_arr, i);
                    char *key;
                    json_t *value_obj;

                    if (object && json_unpack(object, "{s:s s:o}", "name", &key, "type", &value_obj) == 0)
                    {
                        rval->fields[i].name = strdup(key);
                        rval->fields[i].type = unpack_to_type(value_obj, &rval->fields[i]);
                    }
                    else
                    {
                        MXS_ERROR("Failed to unpack JSON Object \"name\": %s", json);
                        error = true;

                        for (int j = 0; j < i; j++)
                        {
                            free(rval->fields[j].name);
                        }
                        break;
                    }
                }
            }
            else
            {
                MXS_ERROR("Failed to unpack JSON Object \"fields\": %s", json);
                error = true;
            }

            json_decref(schema);
@ -147,12 +169,20 @@ MAXAVRO_SCHEMA* maxavro_schema_alloc(const char* json)
        else
        {
            MXS_ERROR("Failed to read JSON schema: %s", json);
            error = true;
        }

        if (error)
        {
            free(rval);
            rval = NULL;
        }
    }
    else
    {
        MXS_ERROR("Memory allocation failed.");
    }

    return rval;
}
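For reference, the minimal JSON shape this parser accepts, reconstructed from
the `json_unpack` format strings above (the field names are illustrative):

```json
{
    "fields": [
        {"name": "id",   "type": "int"},
        {"name": "data", "type": "string"}
    ]
}
```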

View File

@ -8,8 +8,8 @@
set(MARIADB_CONNECTOR_C_REPO "https://github.com/MariaDB/mariadb-connector-c.git"
CACHE STRING "MariaDB Connector-C Git repository")
# Release 2.3.3 (preliminary) of the Connector-C
set(MARIADB_CONNECTOR_C_TAG "v2.3.3_pre"
CACHE STRING "MariaDB Connector-C Git tag")
ExternalProject_Add(connector-c

View File

@ -291,7 +291,9 @@ static void unpack_datetime2(uint8_t *ptr, uint8_t decimals, struct tm *dest)
    dest->tm_hour = time >> 12;
    dest->tm_mday = date % (1 << 5);
    dest->tm_mon = yearmonth % 13;

    /** struct tm stores the year as: Year - 1900 */
    dest->tm_year = (yearmonth / 13) - 1900;
}
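A standalone sketch of the `struct tm` convention the fix relies on (not part
of the patch):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* struct tm counts years from 1900, so 2017 is stored as 117 */
    struct tm moment = { .tm_year = 2017 - 1900 };
    printf("calendar year: %d\n", moment.tm_year + 1900); /* prints 2017 */
    return 0;
}
```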
/** Unpack a "reverse" byte order value */

View File

@ -2304,6 +2304,24 @@ DBFW_USER* find_user_data(HASHTABLE *hash, const char *name, const char *remote)
    return user;
}

static bool command_is_mandatory(const GWBUF *buffer)
{
    switch (MYSQL_GET_COMMAND((uint8_t*)GWBUF_DATA(buffer)))
    {
    case MYSQL_COM_QUIT:
    case MYSQL_COM_PING:
    case MYSQL_COM_CHANGE_USER:
    case MYSQL_COM_SET_OPTION:
    case MYSQL_COM_FIELD_LIST:
    case MYSQL_COM_PROCESS_KILL:
    case MYSQL_COM_PROCESS_INFO:
        return true;

    default:
        return false;
    }
}
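The commands above are matched via `MYSQL_GET_COMMAND`, which reads the command
byte of the client packet. A standalone sketch of the underlying protocol
detail (the `packet_command` helper is hypothetical, not MaxScale code):

```c
#include <stdint.h>
#include <stdio.h>

#define MYSQL_COM_PING 0x0e

/* The command byte is the first payload byte, i.e. offset 4 after
 * the 3-byte payload length and the 1-byte sequence number. */
static uint8_t packet_command(const uint8_t *packet)
{
    return packet[4];
}

int main(void)
{
    const uint8_t ping_frame[] = { 0x01, 0x00, 0x00, 0x00, 0x0e };
    printf("mandatory: %s\n",
           packet_command(ping_frame) == MYSQL_COM_PING ? "yes" : "no");
    return 0;
}
```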
/**
* The routeQuery entry point. This is passed the query buffer
* to which the filter should be applied. Once processed the
@ -2340,6 +2358,13 @@ routeQuery(MXS_FILTER *instance, MXS_FILTER_SESSION *session, GWBUF *queue)
    uint32_t type = 0;

    if (modutil_is_SQL(queue) || modutil_is_SQL_prepare(queue))
    {
        type = qc_get_type(queue);
    }
    if (modutil_is_SQL(queue) && modutil_count_statements(queue) > 1)
    {
        GWBUF* err = gen_dummy_error(my_session, "This filter does not support "
@ -2349,6 +2374,17 @@ routeQuery(MXS_FILTER *instance, MXS_FILTER_SESSION *session, GWBUF *queue)
        my_session->errmsg = NULL;
        rval = dcb->func.write(dcb, err);
    }
    else if (QUERY_IS_TYPE(type, QUERY_TYPE_PREPARE_STMT) ||
             QUERY_IS_TYPE(type, QUERY_TYPE_PREPARE_NAMED_STMT) ||
             modutil_is_SQL_prepare(queue))
    {
        GWBUF* err = gen_dummy_error(my_session, "This filter does not support "
                                     "prepared statements.");
        gwbuf_free(queue);
        free(my_session->errmsg);
        my_session->errmsg = NULL;
        rval = dcb->func.write(dcb, err);
    }
    else
    {
        GWBUF* analyzed_queue = queue;

View File

@ -36,7 +36,7 @@ producer = KafkaProducer(bootstrap_servers=[opts.kafka_broker])
while True:
    try:
        buf = sys.stdin.readline()

        if len(buf) == 0:
            break
@ -48,6 +48,7 @@ while True:
        data = decoder.raw_decode(rbuf.decode('ascii'))
        rbuf = rbuf[data[1]:]
        producer.send(topic=opts.kafka_topic, value=json.dumps(data[0]).encode())
        producer.flush()

    # JSONDecoder will return a ValueError if a partial JSON object is read
    except ValueError as err:
@ -57,5 +58,3 @@ while True:
    except Exception as ex:
        print(ex)
        break
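The switch to `readline()` works together with the `raw_decode` buffering:
`raw_decode` returns both the parsed object and the offset where parsing
stopped, which is what lets the script peel complete JSON objects off the
front of a growing buffer. A minimal sketch of that mechanism (sample data
made up):

```python
import json

decoder = json.JSONDecoder()
rbuf = '{"a": 1}{"b": 2}'  # two concatenated JSON objects

while rbuf:
    obj, end = decoder.raw_decode(rbuf)
    print(obj)                # {'a': 1}, then {'b': 2}
    rbuf = rbuf[end:].lstrip()
```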

View File

@ -774,48 +774,54 @@ static bool avro_client_stream_data(AVRO_CLIENT *client)
    char filename[PATH_MAX + 1];
    snprintf(filename, PATH_MAX, "%s/%s", router->avrodir, client->avro_binfile);

    bool ok = true;

    spinlock_acquire(&client->file_lock);
    if (client->file_handle == NULL &&
        (client->file_handle = maxavro_file_open(filename)) == NULL)
    {
        ok = false;
    }
    spinlock_release(&client->file_lock);

    if (ok)
    {
        switch (client->format)
        {
        case AVRO_FORMAT_JSON:
            /** Currently only JSON format supports seeking to a GTID */
            if (client->requested_gtid &&
                seek_to_index_pos(client, client->file_handle) &&
                seek_to_gtid(client, client->file_handle))
            {
                client->requested_gtid = false;
            }

            read_more = stream_json(client);
            break;

        case AVRO_FORMAT_AVRO:
            read_more = stream_binary(client);
            break;

        default:
            MXS_ERROR("Unexpected format: %d", client->format);
            break;
        }

        if (maxavro_get_error(client->file_handle) != MAXAVRO_ERR_NONE)
        {
            MXS_ERROR("Reading Avro file failed with error '%s'.",
                      maxavro_get_error_string(client->file_handle));
        }

        /* update client struct */
        memcpy(&client->avro_file, client->file_handle, sizeof(client->avro_file));

        /* may be just use client->avro_file->records_read and remove this var */
        client->last_sent_pos = client->avro_file.records_read;
    }
    else
    {

View File

@ -73,6 +73,7 @@ int index_query_cb(void *data, int rows, char** values, char** names)
void avro_index_file(AVRO_INSTANCE *router, const char* filename)
{
    MAXAVRO_FILE *file = maxavro_file_open(filename);

    if (file)
    {
        char *name = strrchr(filename, '/');
@ -166,6 +167,10 @@ void avro_index_file(AVRO_INSTANCE *router, const char* filename)
        maxavro_file_close(file);
    }
    else
    {
        MXS_ERROR("Failed to open file '%s' when generating file index.", filename);
    }
}
/**

View File

@ -1,5 +1,5 @@
if(BUILD_TESTS)
  add_executable(testbinlogrouter testbinlog.c ../blr.c ../blr_slave.c ../blr_master.c ../blr_file.c ../blr_cache.c)
  target_link_libraries(testbinlogrouter maxscale-common ${PCRE_LINK_FLAGS} uuid)
  add_test(NAME TestBinlogRouter COMMAND ./testbinlogrouter WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
endif()
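The keyword form of `add_test` used above lets the test declare an explicit
working directory. A minimal sketch of the same pattern (target and test names
hypothetical):

```cmake
add_executable(mytest mytest.c)
add_test(NAME MyTest COMMAND ./mytest WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
```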