The detailed output of the new help command was a bit too densely packed
for some commands.
Added missing values for the `alter server` command help output.
The binlog server option `encryption_key_file` can now use the same key
file that a MariaDB 10.1 server may already have configured in my.cnf as
`file_key_management_filename`.
Note: the file content must be in clear text; encrypted key files are not supported.
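A minimal sketch of how the two sides could share the key file; the paths, section names and the other binlog router options shown here are illustrative assumptions:

```
# MariaDB 10.1 server, my.cnf: key handled by the file_key_management plugin
[mariadb]
plugin_load_add = file_key_management
file_key_management_filename = /etc/mysql/encryption/keyfile

# MaxScale binlog server, maxscale.cnf: reuse the same clear-text key file
[Binlog-Service]
type=service
router=binlogrouter
router_options=server-id=4000,encrypt_binlog=1,encryption_key_file=/etc/mysql/encryption/keyfile
```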
If a slave fails while a non-critical read is being executed, the read is
retried on a different server. This is controlled by the new
`retry_failed_reads` option.
Only SELECTs that are executed outside of a transaction and with
autocommit enabled are retried.
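For example, the option might be enabled in a readwritesplit service definition like this (the service definition itself is illustrative):

```
[Split-Service]
type=service
router=readwritesplit
servers=server1,server2,server3
retry_failed_reads=true
```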
The binlogrouter needs to manipulate the protocol structures in order for
the resultset buffering to work correctly. If the state isn't manipulated
for COM_QUERY statements, the resultsets aren't buffered and will be
routed in separate buffers.
The backend MySQL protocol module now supports a new routing capability
which allows result sets to be gathered into one buffer before they are
routed onward. This should not be used by modules that expect large
result sets as the result set is buffered in memory.
Adding a limit on how large a result set can be buffered would allow
relatively safe use of this routing capability without compromising the
stability of the system.
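A rough sketch of how a router could advertise this capability through its getCapabilities entry point; the flag name used here (RCAP_TYPE_RESULTSET_OUTPUT), the header path and the simplified signature are assumptions and should be checked against the actual capability definitions:

```
#include <maxscale/routing.h>  /* capability flags; header path assumed */

/* The router tells the core what kind of data it wants routed to it.
 * OR-ing in the result set buffering flag asks the backend protocol to
 * gather a complete result set into one buffer before routing it. */
static uint64_t getCapabilities(void)
{
    /* Not suitable for routers that expect very large result sets,
     * since the whole result set is buffered in memory. */
    return RCAP_TYPE_STMT_INPUT | RCAP_TYPE_RESULTSET_OUTPUT;
}
```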
Adding a server to multiple monitors is forbidden. This should be detected
and reported to the end user.
The information provided by the config_runtime system to the client isn't
as detailed as it could be. An error message stack should be added so that
client-facing interfaces can properly report the reason for a failure.
Currently the only way to find out why something failed is to parse the
log files.
Doing batch inserts through readwritesplit would stall because pending
session commands were stored instead of being executed immediately.
Session command responses that weren't complete also discarded the partial
event instead of storing it for later use.
Non-replication events were implicitly ignored but this was removed in a
recent change. The code that wasn't previously used didn't break the
replication event handling loop.
Events larger than 16 MB are now encrypted when being saved.
Some changes were made to the binlog event details report, and
maxbinlogcheck now supports the -H option for displaying the replication
header.
Storing the large events in memory allows checksum calculations to be done
in one step. This also makes the encryption of events easier, as it
requires the complete event to be in memory.
The backend protocol module can be requested to provide complete and
contiguous packets to the router module. This removes the need to process
the packets in binlogrouter.
Added a check for the validity of the backend DCBs before they are
closed. This should guarantee that only valid DCBs are closed by
readwritesplit.
However, this is not the correct solution for the problem. The DCB should
not be in an invalid state in the first place and this fix just removes
the bad side effects of the double closing.
With the added logging in the readwritesplit error handler, more detailed
information should become available.
The errors were detected but the code proceeded to call various functions
with bad pointers. This led to a crash if a bad server name was given to
'show server'.
If a DCB is passed to the error handler for which we cannot find the
corresponding backend reference, it should not be closed.
Added extra logging for situations where the backend reference can't be
found or it is in the wrong state.
The module command operations are now listed as `commands` instead of
`functions`. The output was also formatted, and optional filtering was
added to the `list commands` call.
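For example (the optional module name filter is shown as an assumption):

```
maxadmin list commands             # list every module command
maxadmin list commands avrorouter  # list only the avrorouter commands
```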
Maxadmin can now create and destroy monitors. The created monitors are not
started as they would be useless without added servers and configuration
parameters.
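A hypothetical maxadmin session; the monitor name is arbitrary and the exact argument order is assumed:

```
maxadmin create monitor cluster-monitor mysqlmon
# ... add servers and credentials before the monitor is of any use ...
maxadmin destroy monitor cluster-monitor
```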
The code prevented scaling by imposing global spinlock-protected lists for
the DCBs and SESSIONs. Removing these lists means that thread-local lists
must be taken into use to replace them.
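A minimal, generic sketch of the thread-local list idea (illustrative names only, not MaxScale's actual structures): each worker thread owns its own list, so adding and removing entries needs no global lock.

```
#include <stddef.h>

/* Illustrative DCB stand-in with just the list link used below. */
typedef struct dcb
{
    struct dcb *thr_next;   /* link in the owning thread's list */
    /* ... the rest of the DCB ... */
} DCB;

/* One list head per thread; only the owning thread ever touches it,
 * so no spinlock is needed on this path. */
static __thread DCB *this_thread_dcbs = NULL;

static void thread_list_add(DCB *dcb)
{
    dcb->thr_next = this_thread_dcbs;
    this_thread_dcbs = dcb;
}
```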
Using internal DCBs for query routing wasn't needed as the client DCB
could be used. This could also be done by simply routing the query again
with routeQuery.
The binlog file is checked at MaxScale startup if encryption is enabled.
The check might fail while calculating the next position or verifying the
event type, in which case a message is reported.
For encrypted events, blr_read_binlog can now check the replication header
after decryption.
Added a small fix for a slave server requesting the position of
START_ENCRYPTION_EVENT: the new position points to the first encrypted
event.
The modulecmd functionality allows the avrorouter to easily control the
conversion process with one command. The conversion can now be started and
stopped by the user.
This also fixes a bug where the conversion would stop if there were no
binlog files present when the service was started.
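A hypothetical invocation through maxadmin; the command name and its arguments are assumptions based on the description above:

```
maxadmin call command avrorouter convert start avro-service
maxadmin call command avrorouter convert stop avro-service
```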
MXS-1009. This commit adds a gwbuf_free after maxinfo_execute() to
free the buffer holding the SQL query after it has been processed. Also,
the parse tree in maxinfo_execute_query() is now freed. The tree_free
function was renamed to maxinfo_tree_free, since it is now globally
available.
This commit has additional changes (in relation to the 1.4.4 branch)
to remove errors caused by differences in the HTML and SQL sides of
MaxInfo.
AES_CBC can now be used for binlog file encryption.
AES_CBC may leave some unhandled trailing bytes in the buffer, and those
need special handling (ECB and XOR).
This way the output buffer of the whole AES_CBC encryption has the same
size as the input (AES_CTR achieves this without any extra step).
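A conceptual sketch of the size-preserving CBC idea described above (this is not MaxScale's actual implementation; in particular, what exactly is fed to the ECB step is an assumption): whole 16-byte blocks are encrypted with AES-CBC, and the trailing bytes are XOR-ed with an ECB-generated keystream block so that the ciphertext length equals the plaintext length.

```
#include <openssl/evp.h>
#include <stdint.h>

int encrypt_same_size(const uint8_t *in, int len, uint8_t *out,
                      const uint8_t *key, const uint8_t *iv)
{
    const int bs = 16;
    int full = (len / bs) * bs;     /* bytes covered by whole CBC blocks */
    int rest = len - full;          /* trailing bytes, 0..15 */
    int outlen = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);
    EVP_CIPHER_CTX_set_padding(ctx, 0);             /* no PKCS#7 padding */
    EVP_EncryptUpdate(ctx, out, &outlen, in, full); /* whole blocks only */

    if (rest > 0)
    {
        /* Build a keystream block with ECB from the last ciphertext block
         * (or the IV when there were no full blocks), then XOR it over the
         * remaining plaintext bytes. */
        const uint8_t *prev = full > 0 ? out + full - bs : iv;
        uint8_t stream[16];
        int tmp = 0;

        EVP_EncryptInit_ex(ctx, EVP_aes_256_ecb(), NULL, key, NULL);
        EVP_CIPHER_CTX_set_padding(ctx, 0);
        EVP_EncryptUpdate(ctx, stream, &tmp, prev, bs);

        for (int i = 0; i < rest; i++)
        {
            out[full + i] = in[full + i] ^ stream[i];
        }
    }

    EVP_CIPHER_CTX_free(ctx);
    return len; /* ciphertext length equals plaintext length */
}
```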
This is not the optimal way to do error handling, but it should solve all
problems that could arise from the multi-threaded model of MaxScale.
By taking a lock at the start of handleError, we'll be able to modify the
dcb error handling flag in a thread-safe manner. This should prevent
double error handling for all DCBs.
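A generic, self-contained sketch of the check-and-set pattern described here; the field and function names are illustrative stand-ins, not the real DCB members:

```
#include <pthread.h>
#include <stdbool.h>

/* Illustrative stand-in for a DCB: only the fields this pattern needs. */
typedef struct
{
    pthread_mutex_t lock;
    bool errhandle_called;   /* true once error handling has been claimed */
} dcb_t;

/* Only the first thread to arrive performs the error handling;
 * any later caller sees the flag and returns, preventing double handling. */
static void handle_error(dcb_t *dcb)
{
    pthread_mutex_lock(&dcb->lock);

    if (dcb->errhandle_called)
    {
        pthread_mutex_unlock(&dcb->lock);
        return;
    }

    dcb->errhandle_called = true;
    pthread_mutex_unlock(&dcb->lock);

    /* ... close and clean up the backend connection exactly once ... */
}
```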
JSON does not have a concept of streams and a common way to stream JSON is
to separate each JSON object with a newline. Adding a newline makes it
easier to parse as JSON values do not natively contain newlines.
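For example, two objects streamed as newline-delimited JSON (the field names are purely illustrative):

```
{"server": "server1", "status": "Master, Running"}
{"server": "server2", "status": "Slave, Running"}
```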