After an upgrade, it is useful to see which version of MaxScale is actually
running. A new command shows the version of the running daemon process
instead of the version of the installed binary.
The routeQuery() entry point in readconnroute now checks for maintenance
mode. If the server is in maintenance, the session is closed, since this
router has no capability to swap backends.
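A minimal sketch of what the added check could look like, assuming the
SERVER_IN_MAINT macro from MaxScale's server header; the session and field
names (ROUTER_CLIENT_SES, backend, client_dcb) are illustrative rather than
the exact readconnroute symbols:

```c
static int routeQuery(ROUTER *instance, void *router_session, GWBUF *queue)
{
    ROUTER_CLIENT_SES *rses = (ROUTER_CLIENT_SES *)router_session;

    if (SERVER_IN_MAINT(rses->backend->server))
    {
        /* The single backend is in maintenance and readconnroute cannot
         * swap to another one, so free the query and end the session. */
        gwbuf_free(queue);
        dcb_close(rses->client_dcb);
        return 0;
    }

    /* Normal case: forward the query to the backend DCB. */
    return rses->backend_dcb->func.write(rses->backend_dcb, queue);
}
```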
gwbuf_clone used to clone only the first buffer of a chain of buffers,
which can never be the desired outcome, while gwbuf_clone_all cloned the
whole chain.
Now gwbuf_clone behaves the way gwbuf_clone_all used to behave,
and gwbuf_clone_all has been removed.
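For illustration, the new semantics can be modeled as cloning every link of
the chain. The helper below is a hypothetical model built on the public
GWBUF API, not the actual MaxScale implementation, which shares the
underlying data instead of copying it:

```c
#include <maxscale/buffer.h>

static GWBUF *clone_whole_chain(GWBUF *head)
{
    GWBUF *clone = NULL;

    for (GWBUF *buf = head; buf != NULL; buf = buf->next)
    {
        /* Copy each link and append it to the end of the new chain. */
        clone = gwbuf_append(clone,
                             gwbuf_alloc_and_load(GWBUF_LENGTH(buf),
                                                  GWBUF_DATA(buf)));
    }

    return clone;
}
```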
The DECIMAL value type is now properly handled in Avrorouter. It is
converted into an Avro double value, whereas previously it was ignored and
replaced with a zero integer.
Readwritesplit should route all queries from cloned sessions to the
master. This allows batch statements to be safely routed.
Native readwritesplit sessions only support batched writes as batched
reads aren't very common. Once readwritesplit supports batched reads, the
special handling for cloned DCBs can be removed.
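A hypothetical sketch of the routing decision, assuming the client DCB
carries a DCBF_CLONE flag; the names below are illustrative and not the
exact readwritesplit symbols:

```c
/* Route everything from a cloned session to the master so that batched
 * writes stay on one server; normal sessions keep read/write splitting. */
static route_target_t get_target_for_session(ROUTER_CLIENT_SES *rses,
                                             GWBUF *query)
{
    if (rses->client_dcb->flags & DCBF_CLONE)
    {
        return TARGET_MASTER;
    }

    return get_route_target(rses, query); /* the usual classification */
}
```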
The detailed output of the new help command was a bit too densely packed
for some commands.
Added missing values for the `alter server` command help output.
The binlog server option `encryption_key_file=` can now use the same key
file that a MariaDB 10.1 server might have in my.cnf as
`file_key_management_filename=`.
Note: the file content must be in plain text; an encrypted key file is not
supported.
If a slave fails while a non-critical read is being executed, the read is
retried on a different server. This is controlled by the new
`retry_failed_reads` option.
Only SELECT statements that are executed outside of a transaction and with
autocommit enabled are retried.
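A simplified sketch of the retry rule, assuming the session helpers
session_trx_is_active() and session_is_autocommit(); is_read_only_select()
is a hypothetical stand-in for the query classifier check that
readwritesplit performs:

```c
#include <maxscale/session.h>

static bool can_retry_read(SESSION *session, GWBUF *query, bool retry_enabled)
{
    return retry_enabled                        /* retry_failed_reads=true  */
           && is_read_only_select(query)        /* hypothetical helper      */
           && !session_trx_is_active(session)   /* not inside a transaction */
           && session_is_autocommit(session);   /* autocommit is enabled    */
}
```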
The binlogrouter needs to manipulate the protocol structures in order for
the resultset buffering to work correctly. If the state isn't manipulated
for COM_QUERY statements, the resultsets aren't buffered and will be
routed in separate buffers.
The backend MySQL protocol module now supports a new routing capability
which allows result sets to be gathered into one buffer before they are
routed onward. This should not be used by modules that expect large
result sets as the result set is buffered in memory.
Adding a limit on how large a result set can be buffered would allow
relatively safe use of this routing capability without compromising the
stability of the system.
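A minimal sketch of how a router could request this behavior, assuming the
capability flag is named RCAP_TYPE_RESULTSET_OUTPUT; the exact flag name and
the getCapabilities signature vary between MaxScale versions:

```c
#include <maxscale/routing.h>

/* Ask the backend protocol to deliver each result set as a single buffer. */
static uint64_t getCapabilities(void)
{
    return RCAP_TYPE_RESULTSET_OUTPUT;
}
```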
Adding a server to multiple monitors is forbidden. This should be detected
and reported to the end user.
The information that the config_runtime system provides to the client isn't
as detailed as it could be. Some sort of error message stack should be
added so that client-facing interfaces can properly report the reason for a
failure. Currently the only way to find out why an operation failed is to
parse the log files.
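A purely hypothetical sketch of the kind of error stack this refers to;
none of the names below exist in MaxScale:

```c
#include <stdarg.h>
#include <stdio.h>

#define RUNTIME_ERRMSG_MAX 512

/* One error message slot per thread; a real stack could keep several. */
static __thread char runtime_errmsg[RUNTIME_ERRMSG_MAX];

/* Record the reason for a config_runtime failure. */
static void runtime_error(const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    vsnprintf(runtime_errmsg, sizeof(runtime_errmsg), fmt, args);
    va_end(args);
}

/* Client-facing interfaces could then fetch the latest error message. */
static const char *runtime_get_error(void)
{
    return runtime_errmsg;
}
```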
When a DCB error occurs, the handleError entry point of the router is
called. The caller of this entry point expects the error handler to mark
the DCB as handled. This behavior is wrong, as the error handler should not
need to keep track of whether it has already been called.
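For reference, a sketch of the handleError entry point as it appears in
MaxScale 2.x (the exact types vary between versions); the point is that
success is reported back to the caller through `succp`, and
can_continue_session() is a hypothetical helper:

```c
static void handleError(ROUTER *instance, void *router_session,
                        GWBUF *errmsgbuf, DCB *problem_dcb,
                        error_action_t action, bool *succp)
{
    /* ... try to replace or close the failed backend ... */

    /* Report to the caller whether the session can continue. */
    *succp = can_continue_session(router_session);
}
```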
Doing batch inserts through readwritesplit would stall because pending
session commands were stored instead of being executed immediately.
Session command responses that weren't complete also discarded the partial
event instead of storing it for later use.
Non-replication events were implicitly ignored, but this was removed in a
recent change. The code that had not previously been exercised did not
break the replication event handling loop.
Events larger than 16MBytes are now encrypted when being saved.
Some changes were made to the binlog event details report, and
maxbinlogcheck now supports the -H option for displaying the replication
header.
Closing the DCB and the backend reference that uses it at the same time
makes the error handling code clearer and removes some of the assumptions
that the code made. It will cause the DCB to be closed in multiple places
but the logic of why a DCB is being closed is more visible from the code.
This change should remove all cases where a DCB is closed without a
tightly coupled backend reference.
If the `error_on_write` mode is used and a master loses its master state,
the backend connection would not get closed. This would allow a master that
reappears to be used again, which is not intended.
Storing the large events in memory allows checksum calculations to be done
in one step. This also makes the encryption of events easier as they
require the complete event in memory.
The backend protocol module can be requested to provide complete and
contiguous packets to the router module. This removes the need to process
the packets in binlogrouter.
Added a check for the validity of the backend DCBs before they are
closed. This should guarantee that only valid DCBs are closed by
readwritesplit.
However, this is not the correct solution for the problem. The DCB should
not be in an invalid state in the first place and this fix just removes
the bad side effects of the double closing.
With the added logging in the readwritesplit error handler, more detailed
information should become available.
The errors were detected but the code proceeded to call various functions
with bad pointers. This led to a crash if a bad server name was given to
`show server`.
If a DCB is passed to the error handler for which we cannot find the
corresponding backend reference, it should not be closed.
Added extra logging for situations where the backend reference can't be
found or it is in the wrong state.
Module commands are now listed as `commands` instead of `functions`. The
output was also reformatted, and optional filtering was added to the
`list commands` call.
Maxadmin can now create and destroy monitors. The created monitors are not
started as they would be useless without added servers and configuration
parameters.
The code prevented scaling by imposing global spinlocks on the DCB and
SESSION lists. Removing these global lists means that thread-local lists
must be taken into use to replace them.
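A hypothetical sketch of the replacement, keeping a per-thread list so that
no global lock is needed; the names below are illustrative and not
MaxScale's:

```c
#include <stddef.h>

/* Stand-in for MaxScale's DCB with an added per-thread list link. */
typedef struct my_dcb
{
    struct my_dcb *thread_next;
    /* ... other DCB members ... */
} MY_DCB;

/* Each worker thread owns its own list, so no global spinlock is needed. */
static __thread MY_DCB *this_thread_dcbs = NULL;

static void dcb_add_to_thread_list(MY_DCB *dcb)
{
    dcb->thread_next = this_thread_dcbs;
    this_thread_dcbs = dcb;
}
```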