The Avro C API fails to write bytes of size zero. A workaround is to write
a single zero byte for each NULL field of type bytes.
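A minimal sketch of the workaround, using the Avro C API's avro_value_set_bytes() call; the wrapper function name is illustrative:

```c
#include <avro.h>

/* Sketch of the workaround: avro_value_set_bytes() fails for a
 * zero-length buffer, so a single zero byte is substituted whenever
 * the field has no data. The function name is illustrative. */
static int set_bytes_workaround(avro_value_t *field, void *data, size_t len)
{
    char dummy = '\0';

    if (len == 0)
    {
        /* Write one zero byte instead of an empty value */
        return avro_value_set_bytes(field, &dummy, 1);
    }

    return avro_value_set_bytes(field, data, len);
}
```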
Added an option to configure the Avro block size for cases where very large
records are written.
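For illustration, the block size would be set as a service parameter. The service layout below is an example and assumes the parameter is named block_size:

```
[avro-service]
type=service
router=avrorouter
source=replication-service
block_size=16777216
```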
Binlog rotations weren't detected because the file names weren't compared.
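An illustrative sketch of the check, not the actual code; the names are examples:

```c
#include <string.h>

/* A rotate event only counts as a real rotation when the file name it
 * carries differs from the binlog that is currently being processed. */
if (strcmp(event_filename, current_filename) != 0)
{
    /* The file name changed, so a new binlog file was started */
    handle_binlog_rotation(event_filename);
}
```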
Moved the indexing of the binlogs to the end of the binlog
processing. This way the files can be flushed multiple times before they
are indexed.
The old DATETIME format wasn't processed properly, which corrupted the
events that followed it.
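For context, the old format here is the pre-MySQL-5.6 DATETIME, which packs the whole timestamp into a single 8-byte integer as YYYYMMDDhhmmss. A minimal decoding sketch; the function name is illustrative:

```c
#include <stdint.h>
#include <time.h>

/* Decode the old packed DATETIME integer into broken-down time */
static void decode_old_datetime(uint64_t value, struct tm *tm)
{
    uint64_t date_part = value / 1000000; /* YYYYMMDD */
    uint64_t time_part = value % 1000000; /* hhmmss   */

    tm->tm_year = (int)(date_part / 10000) - 1900;  /* struct tm years start at 1900 */
    tm->tm_mon  = (int)(date_part / 100 % 100) - 1; /* and months at zero */
    tm->tm_mday = (int)(date_part % 100);
    tm->tm_hour = (int)(time_part / 10000);
    tm->tm_min  = (int)(time_part / 100 % 100);
    tm->tm_sec  = (int)(time_part % 100);
}
```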
A BLOB type value could be non-NULL but still have no data. In this case,
the value should be stored as a null Avro value.
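A sketch of the intended behaviour, assuming a nullable ["null", "bytes"] union field with null as branch 0; the function name is illustrative:

```c
#include <avro.h>

/* Store a BLOB value, treating both NULL and empty data as Avro null */
static void store_blob(avro_value_t *field, const void *data, size_t len)
{
    avro_value_t branch;

    if (data == NULL || len == 0)
    {
        /* Non-NULL but empty BLOBs are also stored as Avro nulls */
        avro_value_set_branch(field, 0, &branch);
        avro_value_set_null(&branch);
    }
    else
    {
        avro_value_set_branch(field, 1, &branch);
        avro_value_set_bytes(&branch, (void *)data, len);
    }
}
```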
Positive DECIMAL values were not converted correctly: the 0x80 sign bit was
set only for negative numbers when it needed to be XOR-ed for all values.
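A sketch of the corrected sign handling; the function name is illustrative:

```c
#include <stdint.h>

/* MySQL's packed DECIMAL stores the sign by flipping the most
 * significant bit of the first byte for every value so that the binary
 * form sorts correctly. Decoding must therefore XOR the 0x80 bit back
 * unconditionally, not only for negative numbers. */
static int unpack_decimal_sign(uint8_t *data)
{
    int negative = (data[0] & 0x80) == 0; /* a clear MSB means negative */
    data[0] ^= 0x80;                      /* flip it back for all values */
    return negative;
}
```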
The DECIMAL value type is now properly handled in Avrorouter. It is
processed into an Avro double value, where before it was ignored and
replaced with a zero integer.
This change was backported to the 2.0 branch.
An error was logged whenever the end of file was reached. An error should
only be logged when a partial sync marker is read before the end of the
file.
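An illustration of the intended logic, not the actual code; Avro sync markers are 16 bytes:

```c
uint8_t marker[16]; /* Avro sync markers are 16 bytes; file is an open FILE* */
size_t n = fread(marker, 1, sizeof(marker), file);

if (n != 0 && n != sizeof(marker))
{
    /* A zero-byte read is a normal end of file; only a short,
     * non-empty read means a truncated sync marker. */
    MXS_ERROR("Truncated sync marker: read %lu of %lu bytes.",
              (unsigned long)n, (unsigned long)sizeof(marker));
}
```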
In a configuration with multiple services, one with connection_timeout and
others without it, the connections to non-connection_timeout services
would be closed immediately due to an integer overflow.
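An illustration of the guard that avoids this class of bug, not MaxScale's actual code; the structure and field names are hypothetical:

```c
/* Skipping the check entirely for services that never configured
 * connection_timeout keeps a zero timeout from making the arithmetic
 * below treat every connection as expired. */
if (service->conn_idle_timeout > 0
    && time(NULL) - session->last_activity > service->conn_idle_timeout)
{
    close_idle_session(session); /* hypothetical helper */
}
```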
The backtick quotes were copied into the field name and then converted to
underscores when the name was transformed into a valid Avro identifier.
This added an extra character to the field names in the Avro schema files.
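A sketch of the corrected transformation, with backticks dropped instead of mapped to underscores; the function name is illustrative:

```c
#include <ctype.h>
#include <stddef.h>

/* Copy src into dest, producing a valid Avro identifier: backticks are
 * dropped entirely, other invalid characters become underscores. */
static void make_avro_identifier(const char *src, char *dest, size_t size)
{
    size_t i = 0;

    if (size == 0)
    {
        return;
    }

    for (; *src && i < size - 1; src++)
    {
        if (*src == '`')
        {
            continue; /* drop the quote, don't replace it */
        }

        dest[i++] = (isalnum((unsigned char)*src) || *src == '_') ? *src : '_';
    }

    dest[i] = '\0';
}
```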
As the cdc_kafka_producer script is an example, it now flushes the
producer after every new record. This makes it easier to see that events
from MaxScale are sent to Kafka.
The firewall filter should allow COM_PING and other similar commands to
pass through, as they are mainly used to check the status of the backend
server or to display statistics. COM_PROCESS_KILL is the exception, as it
affects the state of the backend server; that is better controlled with
permissions on the server than in the firewall filter.
Commands that require special grants aren't allowed to pass, as they are
mainly for maintenance purposes and such maintenance should not be done
through the firewall.
There's no need to process the JSON twice, as the Kafka producer is
expected to be used with the Python CDC client, which already separates
the JSON records with newlines.
When Avro schema creation failed for a file that was being opened, the
errors weren't handled properly. All allocated memory is now also freed
on failure.
All errors that set errno are now properly logged with the error number
and message.
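An example of the resulting logging pattern; the failing call and file name are illustrative, and MXS_ERROR is MaxScale's logging macro:

```c
#include <errno.h>
#include <string.h>
#include <unistd.h>

/* Log both the error number and the error message when a call fails */
if (unlink(filename) == -1)
{
    MXS_ERROR("Failed to remove file '%s': %d, %s",
              filename, errno, strerror(errno));
}
```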
Some of the JSON errors weren't handled, which could cause problems when a
malformed schema definition was read.
More error messages were also added for situations where opening a file
fails.
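A sketch of the kind of check that was added, assuming the schema definitions are parsed with Jansson; the variable names are illustrative:

```c
#include <jansson.h>

/* Report a malformed schema instead of dereferencing a NULL result */
json_error_t err;
json_t *schema = json_loads(schema_text, 0, &err);

if (schema == NULL)
{
    MXS_ERROR("Malformed Avro schema at line %d: %s", err.line, err.text);
}
```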
The max_slave_replication_lag parameter for readwritesplit only works with
monitors that detect replication lag. As the MySQL monitor is the only one
that implements this functionality, the parameter is only meaningful with
master-slave clusters.
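For example, with illustrative service and server names, the lag limit is given in seconds:

```
[Splitter-Service]
type=service
router=readwritesplit
servers=server1,server2
max_slave_replication_lag=30
```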
Converting the README into Markdown format makes it a lot easier to
comprehend. The formatting was also cleaned up.
The 2.0 branch had a broken Travis configuration. It was fixed, and the
links were changed to point to the 2.0 branch.