As the cdc_kafka_producer script is an example, it should flush the
producer after every new record. This should make it easier to see that
events from MaxScale are sent to Kafka.
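The script itself is Python, but the pattern is not tied to the
language. A minimal sketch of the same produce-then-flush pattern in C
with librdkafka; the function is illustrative, not part of MaxScale:

    #include <librdkafka/rdkafka.h>

    /* Produce one CDC record and flush immediately so the event is
     * visible in Kafka right away. Flushing after every record is
     * inefficient, but ideal for a demonstration. */
    static void produce_record(rd_kafka_t *rk, rd_kafka_topic_t *rkt,
                               char *record, size_t len)
    {
        rd_kafka_produce(rkt, RD_KAFKA_PARTITION_UA, RD_KAFKA_MSG_F_COPY,
                         record, len, NULL, 0, NULL);
        rd_kafka_flush(rk, 1000); /* wait up to one second for delivery */
    }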
The firewall filter should allow COM_PING and other similar commands to
pass through, as they are mainly used to check the status of the
backend server or to display statistics. COM_PROCESS_KILL is the
exception, as it affects the state of the backend server; this is
better controlled with permissions on the server than in the firewall
filter.
Commands that require special grants are not allowed to pass, as they
are mainly intended for maintenance and should not be performed through
the firewall.
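A sketch of the kind of check involved; the command constants come from
the MySQL protocol, while the function name and the exact set of
passthrough commands are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    /* MySQL protocol command bytes (first byte of the packet payload). */
    #define MYSQL_COM_STATISTICS   0x09
    #define MYSQL_COM_PROCESS_KILL 0x0c
    #define MYSQL_COM_PING         0x0e

    /* Status and statistics commands pass through, but COM_PROCESS_KILL
     * is treated like any other filtered command since it changes the
     * state of the backend server. */
    static bool command_is_passthrough(uint8_t command)
    {
        switch (command)
        {
        case MYSQL_COM_PING:
        case MYSQL_COM_STATISTICS:
            return true;

        case MYSQL_COM_PROCESS_KILL:
        default:
            return false;
        }
    }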
There's no need to process the JSON twice, as the Kafka producer is
expected to be used with the Python CDC client, which already splits
the JSON with newlines.
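For reference, reading the stream one line at a time is then enough,
since every line is a complete JSON object. A sketch using Jansson; the
stream handling around it is assumed:

    #include <stdio.h>
    #include <jansson.h>

    /* Each line of the CDC stream is one complete JSON object, so a
     * line-based read followed by a single json_loads() suffices;
     * no second parsing pass is needed. */
    static void read_events(FILE *stream)
    {
        char line[65536];

        while (fgets(line, sizeof(line), stream))
        {
            json_error_t err;
            json_t *event = json_loads(line, 0, &err);

            if (event)
            {
                /* process the event */
                json_decref(event);
            }
        }
    }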
If the creation of the Avro schema failed for a file that was being
opened, the errors were not handled properly. All allocated memory is
now also freed on failure.
All errors that set errno are now properly logged with the error number
and message.
Some of the JSON errors were not handled, which could cause problems
when a malformed schema definition is read. More error messages were
also added for situations where opening a file fails.
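The pattern is roughly the following; a sketch with illustrative
names, not the actual avrorouter code:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Open and read a schema file, logging both the error number and
     * the message on failure and freeing everything that was
     * allocated before returning. */
    static char *read_schema_file(const char *path, size_t size)
    {
        char *buffer = calloc(1, size + 1);
        FILE *file = fopen(path, "rb");

        if (file == NULL)
        {
            fprintf(stderr, "Failed to open file '%s': %d, %s\n",
                    path, errno, strerror(errno));
            free(buffer); /* free all allocated memory on failure */
            return NULL;
        }

        if (buffer == NULL || fread(buffer, 1, size, file) != size)
        {
            fprintf(stderr, "Failed to read file '%s': %d, %s\n",
                    path, errno, strerror(errno));
            free(buffer);
            buffer = NULL;
        }

        fclose(file);
        return buffer;
    }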
The max_slave_replication_lag parameter for readwritesplit only works for
monitors that detect replication lag. As the MySQL monitor is the only one
that implements this functionality, the parameter only has meaning when
used with master-slave clusters.
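For example, a readwritesplit service in front of a master-slave
cluster could limit lag like this; the section and server names are
illustrative and the credentials are placeholders:

    [RW-Split-Service]
    type=service
    router=readwritesplit
    servers=server1,server2,server3
    user=maxscale
    passwd=maxscale_pw
    max_slave_replication_lag=30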
Converting the README into Markdown format makes it a lot easier to
comprehend. The formatting was also cleaned up.
The 2.0 branch had a broken Travis configuration. Fixed it and changed
links to point to the 2.0 branch.
When a DCB error occurs, the handleError entry point of the routers is
called. The caller of this entry point expects the error handler to
mark the DCB as handled. This behavior is wrong: the error handler
should not be responsible for keeping track of whether it has already
been called.
Closing the DCB and the backend reference that uses it at the same
time makes the error handling code clearer and removes some of the
assumptions the code made. It causes the DCB to be closed in multiple
places, but the reason why a DCB is being closed is more visible in
the code. This change should remove all cases where a DCB is closed
without a tightly coupled backend reference.
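In outline, the intended pattern is the following; a sketch with
simplified types, not the actual readwritesplit code:

    #include <stdbool.h>

    struct dcb;
    extern void dcb_close(struct dcb *dcb); /* provided by the MaxScale core */

    typedef struct backend_ref
    {
        struct dcb *bref_dcb;   /* the backend connection */
        bool        bref_in_use;
    } backend_ref_t;

    /* The backend reference and the DCB it uses are closed together,
     * so a closed DCB can never be left behind a reference that
     * still looks usable. */
    static void close_backend_ref(backend_ref_t *bref)
    {
        if (bref->bref_in_use)
        {
            bref->bref_in_use = false;
            dcb_close(bref->bref_dcb);
            bref->bref_dcb = NULL;
        }
    }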
If the `error_on_write` mode was used when a master lost the master
state, the backend would not get closed. This would allow masters that
reappear to be used, which is not intended.
The documentation listed the rules as a comma-separated list when
they were actually parsed as a whitespace-separated list. The match
specifiers were also documented as optional when in fact they are
mandatory.
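A valid definition therefore looks like the following, with the rules
separated by whitespace and the match specifier always present; the
rule and user names are illustrative:

    rule block_salary deny columns salary at_times 17:00:00-09:00:00
    rule no_wildcard deny wildcard
    users john@% match any rules block_salary no_wildcard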
Added a check for the validity of the backend DCBs before they are
closed. This should guarantee that only valid DCBs are closed by
readwritesplit.
However, this is not the correct solution for the problem. The DCB should
not be in an invalid state in the first place and this fix just removes
the bad side effects of the double closing.
With the added logging in the readwritesplit error handler, more detailed
information should become available.
If a DCB is passed to the error handler and the corresponding backend
reference cannot be found, the DCB should not be closed. Extra logging
was added for situations where the backend reference cannot be found
or is in the wrong state.
When the client connections were listed, the DCB state was not
inspected. Only DCBs in the polling state should be printed as they are
guaranteed to be in a valid state.
Previously, negative values were allowed for persistpoolmax and
persistmaxtime. Now they cause an error. Also, monitor_interval
allowed negative (or zero) values, which were then implicitly cast to
unsigned, causing unintended behaviour. Now this causes a warning
and the default value is used.
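A sketch of the monitor_interval check; the names and the default
value are illustrative:

    #include <stdio.h>

    /* Zero and negative values previously wrapped around when cast
     * to an unsigned type; now they produce a warning and the
     * default is used instead. */
    static unsigned long monitor_interval_or_default(long value)
    {
        if (value <= 0)
        {
            fprintf(stderr, "Warning: invalid monitor_interval %ld, "
                    "using the default value instead.\n", value);
            return 10000; /* illustrative default, in milliseconds */
        }

        return (unsigned long)value;
    }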
If the user running MaxScale could open the .secrets file and the
file permissions were anything other than owner:read,
secrets_readkeys() would fail with the error message "Ignoring secrets
file <path>, invalid permissions." The message now states the expected
permissions more accurately.
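The check is along these lines; a sketch, not the actual secrets code:

    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* The .secrets file must be accessible by its owner only; any
     * group or other permission bits make the file unusable. */
    static bool secrets_permissions_ok(const char *path)
    {
        struct stat st;

        if (stat(path, &st) != 0)
        {
            return false;
        }

        if ((st.st_mode & (S_IRWXG | S_IRWXO)) != 0)
        {
            fprintf(stderr, "Ignoring secrets file %s, invalid "
                    "permissions. The file must be readable by the "
                    "owner only.\n", path);
            return false;
        }

        return true;
    }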
MXS-1009. This commit adds a gwbuf_free after maxinfo_execute() to
free a buffer containing an SQL query after it has been processed. The
parse tree in maxinfo_execute_query() is also now freed. The tree_free
function was renamed to maxinfo_tree_free, since it is now globally
available.
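In outline, the fix follows this pattern; a sketch in which the
declarations stand in for the real MaxScale headers and the argument
list of the execute function is simplified:

    struct gwbuf;
    struct session;

    extern void gwbuf_free(struct gwbuf *buf);
    extern int maxinfo_execute_buffer(struct session *session,
                                      struct gwbuf *queue);

    /* The GWBUF carrying the SQL query is released once it has been
     * processed; previously it was leaked. */
    static int execute_and_free(struct session *session, struct gwbuf *queue)
    {
        int rc = maxinfo_execute_buffer(session, queue);
        gwbuf_free(queue);
        return rc;
    }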
This commit has additional changes (in relation to the 1.4.4 branch)
to remove errors caused by differences between the HTML and SQL sides
of MaxInfo.
This is not the optimal way to do error handling, but it should solve
all problems that could arise from the multi-threaded model of
MaxScale.
By taking a lock at the start of handleError, we can modify the DCB
error handling flag in a thread-safe manner. This should prevent
double error handling for all DCBs.
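Roughly as follows; a sketch with simplified types in which a pthread
mutex stands in for the spinlock used by the code base:

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct dcb
    {
        pthread_mutex_t lock;             /* stands in for the spinlock */
        bool            errhandle_called; /* the error handling flag */
    } DCB;

    /* The flag is read and set under the lock, so only one thread
     * ever runs the error handling for a given DCB. */
    static void handle_error(DCB *dcb)
    {
        bool already_handled;

        pthread_mutex_lock(&dcb->lock);
        already_handled = dcb->errhandle_called;
        dcb->errhandle_called = true;
        pthread_mutex_unlock(&dcb->lock);

        if (!already_handled)
        {
            /* run the actual error handling exactly once */
        }
    }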
JSON does not have a concept of streams, and a common way to stream
JSON is to separate each JSON object with a newline. Adding a newline
makes the stream easier to parse, as JSON values do not natively
contain newlines.
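With Jansson, for example, the separator is simply appended after the
serialized object; a sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <jansson.h>

    /* Serialize one event and terminate it with a newline. The
     * compact JSON text contains no raw newlines, so the receiver
     * can split the stream on '\n' to recover individual objects. */
    static void write_event(FILE *out, json_t *event)
    {
        char *text = json_dumps(event, JSON_COMPACT);

        if (text)
        {
            fprintf(out, "%s\n", text);
            free(text);
        }
    }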
Various older systems use Upstart to control services. MaxScale should
provide both init.d scripts and Upstart configurations for systems that
don't support systemd.
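A minimal Upstart job for MaxScale might look like this; the path and
options are illustrative:

    description "MariaDB MaxScale"

    start on runlevel [2345]
    stop on runlevel [016]

    respawn
    exec /usr/bin/maxscale --user=maxscale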
The prepared statements were routed according to their real type
instead of being routed to the master. This was caused by the change
in the route target function.