The DECIMAL type was not correctly converted to positive integers. The
0x80 bit was set only for negative numbers when it needed to be XOR-ed for
all values.
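A minimal sketch of the corrected handling, assuming MySQL's packed DECIMAL encoding in which the sign bit of the first byte is stored inverted (the function name is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* In the packed DECIMAL format the high bit of the first byte encodes
 * the sign in inverted form. It must be XOR-ed back for every value;
 * doing it only for negative values corrupts positive numbers. */
static bool decimal_restore_first_byte(uint8_t *data)
{
    bool negative = (data[0] & 0x80) == 0; /* inverted sign bit */
    data[0] ^= 0x80;                       /* restore for all values */
    return negative;
}
```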
The DECIMAL value type is now properly handled in Avrorouter. It is
processed into an Avro double value; previously it was ignored and
replaced with a zero integer.
Backported to the 2.0 branch.
In a configuration with multiple services, where one had connection_timeout
set and the others did not, connections to the services without
connection_timeout would be closed immediately due to an integer overflow.
The backtick was copied to the field name and converted to an underscore
when the name was transformed into a valid Avro identifier. This caused
one extra character to appear in the field name in the Avro schema files.
As the cdc_kafka_producer script is an example, it should flush the
producer after every new record. This should make it easier to see that
events from MaxScale are sent to Kafka.
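The shipped script is Python; as a hedged sketch of the same flush-per-record idea in C with librdkafka:

```c
#include <stddef.h>
#include <librdkafka/rdkafka.h>

/* Flushing after every produced record sacrifices throughput, but it
 * makes each MaxScale event visible in Kafka immediately, which is the
 * behaviour an example script should demonstrate. */
static void produce_and_flush(rd_kafka_t *rk, rd_kafka_topic_t *rkt,
                              const char *record, size_t len)
{
    rd_kafka_produce(rkt, RD_KAFKA_PARTITION_UA, RD_KAFKA_MSG_F_COPY,
                     (void *)record, len, NULL, 0, NULL);
    rd_kafka_flush(rk, 1000); /* wait up to 1s for delivery */
}
```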
The firewall filter should allow COM_PING and other similar commands to
pass through, as they are mainly used to check the status of the backend
server or to display statistics. COM_PROCESS_KILL is the exception, as it
affects the state of the backend server; this is better controlled with
permissions on the server than in the firewall filter.
Commands that require special grants are not allowed to pass, as they are
mainly intended for maintenance and such operations should not be
performed through the firewall filter.
There is no need to process the JSON twice, as the Kafka producer is
expected to be used with the Python CDC client, which already splits the
JSON records with newlines.
Some JSON errors were not handled, which could cause problems when a
malformed schema definition is read.
More error messages were also added for situations where opening the
files fails.
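As an illustration, a minimal sketch of the kind of check that was missing, written against the Jansson API (assuming that is the JSON library in use):

```c
#include <stdio.h>
#include <jansson.h>

/* Every parse result must be checked: a malformed schema definition
 * makes json_loads() return NULL and fill in the error details. */
static json_t *load_schema(const char *text)
{
    json_error_t err;
    json_t *schema = json_loads(text, 0, &err);

    if (schema == NULL)
    {
        fprintf(stderr, "Failed to parse schema: %s (line %d, column %d)\n",
                err.text, err.line, err.column);
    }

    return schema;
}
```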
When a DCB error occurs, the handleError entry point of the routers is
called. The caller of this entry point expects the error handler to mark
the DCB as handled. This behavior is wrong, as the error handler should
not be the one keeping track of whether it has already been called.
Closing the DCB and the backend reference that uses it at the same time
makes the error handling code clearer and removes some of the assumptions
the code made. It causes the DCB to be closed in multiple places, but the
logic of why a DCB is being closed is more visible in the code. This
change should remove all cases where a DCB is closed without a tightly
coupled backend reference.
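A hypothetical sketch of the coupled close (the type, field, and flag names are illustrative; dcb_close() is the real MaxScale call):

```c
#include <stddef.h>

struct dcb;                             /* opaque here */
extern void dcb_close(struct dcb *dcb); /* real MaxScale entry point */

typedef struct backend_ref              /* illustrative shape */
{
    struct dcb *dcb;
    int state;
} backend_ref_t;

#define BREF_CLOSED 0x04                /* illustrative state flag */

/* The backend reference and its DCB are invalidated together, so no
 * code path can close one and forget the other. */
static void close_backend_ref(backend_ref_t *bref)
{
    if (bref->dcb != NULL)
    {
        dcb_close(bref->dcb);
        bref->dcb = NULL;
    }

    bref->state |= BREF_CLOSED;
}
```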
If the `error_on_write` mode is used and the master loses its master
state, the backend would not get closed. This would allow a master that
reappears to be used again, which is not intended.
Added a check for the validity of the backend DCBs before they are
closed. This should guarantee that only valid DCBs are closed by
readwritesplit.
However, this is not the correct solution to the problem. The DCB should
not be in an invalid state in the first place, and this fix only removes
the bad side effects of the double close.
With the added logging in the readwritesplit error handler, more detailed
information should become available.
If a DCB is passed to the error handler for which we cannot find the
corresponding backend reference, it should not be closed.
Added extra logging for situations where the backend reference can't be
found or is in the wrong state.
When the client connections were listed, the DCB state was not
inspected. Only DCBs in the polling state should be printed as they are
guaranteed to be in a valid state.
Previously, negative values were allowed for persistpoolmax and
persistmaxtime. Now they cause an error. Also, monitor_interval
allowed negative (or zero) values, which were then implicitly cast to
unsigned, causing unintended behaviour. Now this causes a warning
and the default value is used.
If the user running MaxScale could open the .secrets file but its
permissions were anything other than owner:read, secrets_readkeys()
would fail with the error message
"Ignoring secrets file <path>, invalid permissions." The message now
states the expected permissions more accurately.
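A minimal sketch of the intended check, assuming the expected mode is owner-read only (0400):

```c
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* Require mode 0400 exactly and name the expected permissions in the
 * message instead of a generic "invalid permissions". */
static bool check_secrets_permissions(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
    {
        perror(path);
        return false;
    }

    if ((st.st_mode & 0777) != S_IRUSR)
    {
        fprintf(stderr, "Ignoring secrets file %s, expecting"
                " permissions 0400 (owner read only).\n", path);
        return false;
    }

    return true;
}
```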
MXS-1009. This commit adds a gwbuf_free after maxinfo_execute() to
free the buffer holding an SQL query once it has been processed. The
parse tree in maxinfo_execute_query() is now also freed. The tree_free
function was renamed to maxinfo_tree_free, since it is now globally
available.
This commit has additional changes (in relation to the 1.4.4 branch)
to remove errors caused by differences between the HTML and SQL sides
of MaxInfo.
This is not the optimal way to do error handling, but it should solve all
problems that could arise from the multi-threaded model of MaxScale.
By taking a lock at the start of handleError, we can modify the DCB error
handling flag in a thread-safe manner. This should prevent double error
handling for all DCBs.
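A sketch of the pattern with a pthread mutex (the structure is a stand-in for the real DCB; the field names are hypothetical). The flag is tested and set under the lock, so only the first caller proceeds with the actual error handling:

```c
#include <pthread.h>
#include <stdbool.h>

struct dcb_like                    /* stand-in for the real DCB */
{
    pthread_mutex_t lock;
    bool errhandle_called;
};

/* Test-and-set under the lock: returns true only for the first caller,
 * so handleError cannot run twice for the same DCB. */
static bool claim_error_handling(struct dcb_like *dcb)
{
    pthread_mutex_lock(&dcb->lock);
    bool first = !dcb->errhandle_called;
    dcb->errhandle_called = true;
    pthread_mutex_unlock(&dcb->lock);
    return first;
}
```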
JSON does not have a concept of streams, and a common way to stream JSON
is to separate each JSON object with a newline. Adding a newline after
each object makes the stream easier to parse, as compactly serialized
JSON values do not contain literal newlines.
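A minimal sketch with Jansson (illustrative): each object is dumped in compact form and terminated with '\n', so a reader can split the stream on newlines.

```c
#include <stdio.h>
#include <stdlib.h>
#include <jansson.h>

/* Compactly dumped JSON never contains a literal newline, which makes
 * '\n' a safe record separator for streaming one object at a time. */
static int write_json_line(FILE *out, json_t *obj)
{
    char *text = json_dumps(obj, JSON_COMPACT);

    if (text == NULL)
    {
        return -1;
    }

    int rc = fprintf(out, "%s\n", text) < 0 ? -1 : 0;
    free(text);
    return rc;
}
```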
The prepared statements were routed according to their real type instead
of being routed to the master. This was caused by the change in the route
target function.
This commit adds a free() to null_auth_free_client_data, which plugs
the memory leak in maxinfo.
Also, this commit fixes some segfaults that occur when multiple threads
run status_row() or variable_row(). The functions used statically
allocated index variables, which often go out of bounds in concurrent
use. This fix changes the indexes to thread-specific variables, with the
required allocation and deallocation. This does slow the functions down
somewhat.
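A sketch of the thread-specific variable pattern with pthread keys (names illustrative): each thread allocates its own index and the destructor frees it at thread exit, which is also where the extra cost comes from.

```c
#include <pthread.h>
#include <stdlib.h>

static pthread_key_t index_key;
static pthread_once_t index_once = PTHREAD_ONCE_INIT;

static void create_index_key(void)
{
    pthread_key_create(&index_key, free); /* free() runs at thread exit */
}

/* Each thread gets its own heap-allocated index instead of sharing one
 * static variable, so concurrent calls can no longer corrupt it. */
static int *get_row_index(void)
{
    pthread_once(&index_once, create_index_key);
    int *idx = pthread_getspecific(index_key);

    if (idx == NULL)
    {
        idx = calloc(1, sizeof(*idx));
        pthread_setspecific(index_key, idx);
    }

    return idx;
}
```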
If a master once had slaves and is in the stale status, it will not retain
this status after a restart. Without storing on-disk information, the
stale master status cannot be deduced by looking at the master
alone. Because of this, the user should be able to manually enable the
stale master status.
With the use_sql_variables_in=master option, readwritesplit should route
all user variable modifications and reads with user variables to the
master.
Previously, the modification of user variables was grouped together with
generic system variables, which caused all modifications to system
variables to go to the master only. The router requires a finer-grained
distinction between normal system variable modifications and user
variable modifications.
With the improvements to the query classifier, readwritesplit now properly
routes all user variable operations to the master and other system
variable modifications to all servers. For example, `SET @id := 42` and
`SELECT @id` go to the master, while `SET autocommit = 1` is still routed
to all servers.
The listen() backlog is now set to INT_MAX which should guarantee that the
internal limit is always higher than the system limit. This means that the
length of the queue always follows /proc/sys/net/ipv4/tcp_max_syn_backlog.
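Illustrative snippet: the kernel treats the backlog argument as an upper bound and clamps it, so requesting INT_MAX defers the effective queue length to the system configuration.

```c
#include <limits.h>
#include <sys/socket.h>

/* Linux clamps the backlog to net.core.somaxconn, and the SYN queue
 * follows net.ipv4.tcp_max_syn_backlog, so INT_MAX guarantees the
 * requested value is never lower than the system limit. */
static int start_listening(int listener_fd)
{
    return listen(listener_fd, INT_MAX);
}
```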
When a connection to the master fails, readwritesplit should always treat
it the same way. Previously, if the connection to the master was lost but
the server had not lost its master status, the failure was treated like a
slave server failure.