When listeners were created twice, MaxScale would crash when the users
were loaded. This was caused by the fact that the DCB of a failed
listener is NULL.
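The fix presumably amounts to a defensive check before the users are
loaded; a minimal sketch of the idea, with hypothetical structure and
function names rather than the actual MaxScale code:

```c++
#include <cstddef>

// Hypothetical stand-ins for the real listener structures.
struct DCB;

struct Listener
{
    DCB* dcb;       // NULL when the listener failed to start
    Listener* next;
};

void load_users(DCB* dcb); // hypothetical helper

// Skip failed listeners instead of dereferencing their NULL DCB.
void load_users_for_listeners(Listener* listeners)
{
    for (Listener* l = listeners; l; l = l->next)
    {
        if (l->dcb == NULL)
        {
            continue; // failed listener: nothing to load users through
        }

        load_users(l->dcb);
    }
}
```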
If a slave fails while a non-critical read is being executed, the read is
retried on a different server. This is controlled by the new
`retry_failed_reads` option.
Only SELECT statements that are executed outside of a transaction and
with autocommit enabled are retried.
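As a rough sketch, the retry decision implied by the text above could
look like this; the types and names are hypothetical, only the
conditions come from the description:

```c++
// Hypothetical types standing in for readwritesplit's real ones.
struct Statement
{
    bool is_select;
};

struct Session
{
    bool retry_failed_reads; // the new option
    bool in_transaction;
    bool autocommit;
};

// A read may be retried on another server only when the option is
// enabled, the statement is a SELECT, and the session is outside a
// transaction with autocommit enabled.
bool can_retry_read(const Session& s, const Statement& stmt)
{
    return s.retry_failed_reads
        && stmt.is_select
        && !s.in_transaction
        && s.autocommit;
}
```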
It is possible for a session to be in the dummy state (a transient
state) when a backend write occurs. In addition to checking that the
client protocol is not NULL, the check should cover the client DCB
itself.
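A minimal sketch of the extended check; the names approximate the
session structures of the time and this is not the literal MaxScale
code:

```c++
#include <maxscale/session.h> // assumed header for the session and DCB

// Check the client DCB itself in addition to its protocol pointer
// before using either of them.
bool client_is_usable(MXS_SESSION* session)
{
    return session->client_dcb != NULL
        && session->client_dcb->protocol != NULL;
}
```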
1. MaxAdmin now defaults to Emacs mode. To activate Vim mode, pass
the option 'i' when starting.
2. The "history" and "source" commands are now parsed correctly.
3. Added a check to DoSource() so that empty commands are not sent
(sketched below). MaxScale doesn't respond to empty commands, which
would cause MaxAdmin to wait forever.
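A sketch of the check described in item 3; the function and helper
names are hypothetical:

```c++
// Hypothetical helpers standing in for MaxAdmin's real ones.
const char* skip_whitespace(const char* s);
void send_command(int so, const char* cmd);

// Skip empty commands, since MaxScale does not respond to them and
// MaxAdmin would wait forever for a reply.
void do_source_line(int so, const char* line)
{
    const char* p = skip_whitespace(line);

    if (*p == '\0')
    {
        return; // empty command: do not send it
    }

    send_command(so, p);
}
```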
MXS-873 To prevent monitors and MaxAdmin from interfering with each other,
changes to the server status flags now happen under a lock. To avoid
interfering with the monitor logic, the monitors now acquire locks on all
of their servers at the start of the monitor loop and release them
before sleeping.
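A sketch of that locking pattern; the spinlock functions are MaxScale's
real API, but the monitor and server structures here are simplified
stand-ins:

```c++
#include <maxscale/spinlock.h> // SPINLOCK, spinlock_acquire/release

// Hypothetical, simplified monitor and server structures.
struct Server
{
    SPINLOCK lock;
    Server* next;
};

struct Monitor
{
    bool running;
    Server* servers;
};

void probe_servers(Monitor* m); // hypothetical: updates status flags
void monitor_sleep(Monitor* m); // hypothetical

// Lock every monitored server for the duration of one monitor pass
// and release the locks before sleeping, so that MaxAdmin cannot
// change the status flags while the monitor logic runs.
void monitor_loop(Monitor* monitor)
{
    while (monitor->running)
    {
        for (Server* s = monitor->servers; s; s = s->next)
        {
            spinlock_acquire(&s->lock);
        }

        probe_servers(monitor);

        for (Server* s = monitor->servers; s; s = s->next)
        {
            spinlock_release(&s->lock);
        }

        monitor_sleep(monitor);
    }
}
```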
The binlogrouter needs to manipulate the protocol structures in order for
the result set buffering to work correctly. If the state isn't manipulated
for COM_QUERY statements, the result sets aren't buffered and will be
routed in separate buffers.
The backend MySQL protocol module now supports a new routing capability
which allows result sets to be gathered into one buffer before they are
routed onward. This should not be used by modules that expect large
result sets, as the whole result set is buffered in memory.
Adding a limit on how large a result set can be buffered would allow
relatively safe use of this routing capability without compromising the
stability of the system.
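As a sketch, a router would opt in to the buffering through its
getCapabilities entry point; the flag below follows the RCAP_TYPE_*
naming convention of MaxScale's routing capabilities, but the exact
name and signature should be verified against the routing headers:

```c++
#include <maxscale/router.h> // MXS_ROUTER and the RCAP_TYPE_* flags

// Illustrative only: advertise that this router wants each result set
// delivered as a single buffer instead of as individual packets.
static uint64_t getCapabilities(MXS_ROUTER* instance)
{
    return RCAP_TYPE_RESULTSET_OUTPUT;
}
```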
A destroyed monitor should no longer have any servers assigned to it.
This allows the servers that were once monitored by a destroyed monitor
to be added to new monitors.
Adding a server to multiple monitors is forbidden. This should be detected
and reported to the end user.
The information provided by the config_runtime system to the client isn't
as detailed as it could be. Some sort of error message stack should be
added so that client-facing interfaces can properly report the reason for
a failure. Currently the only way to find out why something failed is to
parse the log files.
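Purely as an illustration of what such a stack could look like (nothing
like this exists yet in config_runtime):

```c++
#include <string>
#include <vector>

// Hypothetical error message stack: runtime configuration code pushes
// human-readable errors, and client-facing interfaces drain them
// instead of pointing the user at the log files.
class RuntimeErrorStack
{
public:
    void push(const std::string& msg)
    {
        m_errors.push_back(msg);
    }

    std::vector<std::string> drain()
    {
        std::vector<std::string> rv;
        rv.swap(m_errors); // return the errors and clear the stack
        return rv;
    }

private:
    std::vector<std::string> m_errors;
};
```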
All changes made to the configuration should be traceable. This makes the
process of understanding the life-cycle of a specific MaxScale
installation easier.
Aligning the statistics object indices to cache line size reduces the CPU
overhead of gathering the statistics. This allows the statistics to more
accurately measure the polling system without the measurement affecting
the outcome.
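The idea, roughly, is to give each thread's statistics slot its own
cache line so that updates from different threads never share a line
(false sharing); the 64-byte line size, the thread limit and the field
names below are assumptions:

```c++
#include <cstdint>

#define CACHE_LINE_SIZE 64   // assumed cache line size
#define MAX_THREADS     256  // hypothetical thread limit

// Each slot is padded to a full cache line, so a thread updating its
// own counters does not invalidate the lines of other threads.
struct alignas(CACHE_LINE_SIZE) ThreadStats
{
    uint64_t n_polls;
    uint64_t n_reads;
    /* ... */
};

static ThreadStats stats[MAX_THREADS]; // one cache line per thread
```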
Now everything needed to cleanly transfer control between the C filter
API of MaxScale and a C++ filter implementation is handled
automatically. None of the earlier boilerplate code is needed.
cache_storage_api.h contains pure C declarations, while
cache_storage_api.hh contains C++ declarations. Functions dealing
with CACHE_KEY have been moved to these headers.
C++ header files use the .hh suffix to distinguish them from C header
files, and also to allow a C++ header file (e.g. xyz.hh) to exist
alongside a C header file of the same name (e.g. xyz.h).
- cpp.hh : Basic C++ stuff. Introduces the namespace "maxscale"
and its synonym "mxs".
- filter.[hh|cc]: Template class Filter and companion class FilterSession
  to be used when implementing filters in C++ (sketched after this list).
- spinlock.h : Wrapper and lock guard class for SPINLOCK.
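A rough sketch of what a filter built on these templates might look
like; the method names and signatures below are illustrative and should
be checked against filter.hh:

```c++
#include <maxscale/filter.hh> // assumed location of the new header

class MyFilterSession;

// The Filter template provides the C filter API entry points and
// forwards them to the C++ methods, so no boilerplate is written
// by hand.
class MyFilter : public maxscale::Filter<MyFilter, MyFilterSession>
{
public:
    static MyFilter* create(const char* name)
    {
        return new MyFilter;
    }
};

class MyFilterSession : public maxscale::FilterSession
{
public:
    MyFilterSession(MXS_SESSION* session)
        : maxscale::FilterSession(session)
    {
    }

    int routeQuery(GWBUF* packet)
    {
        // Inspect or rewrite the packet, then pass it downstream.
        return maxscale::FilterSession::routeQuery(packet);
    }
};
```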
qc_get_table_names() and qc_get_database_names() now always set the
number of items, also in case of an error. That is, it is now always
safe to start iterating using the number of items without first
checking the returned pointer.
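In practice that means a loop can be driven directly off the returned
count; a sketch, with the exact signature and header to be checked
against the query classifier sources:

```c++
#include <maxscale/query_classifier.h> // assumed header for qc_*()

void use_table_name(const char* name); // hypothetical consumer

void list_tables(GWBUF* querybuf)
{
    int n = 0;
    char** names = qc_get_table_names(querybuf, &n, false);

    // n is always set, and is 0 on error, so the loop is simply a
    // no-op when the call failed.
    for (int i = 0; i < n; i++)
    {
        use_table_name(names[i]);
    }
}
```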
Router and filter instances cannot be destroyed before all worker
threads have exited. Otherwise there is a risk that data gets deleted
while it is still being accessed. Further, since all router and filter
instances are created by the main thread, it is better that they are
deleted by that thread as well (and not by whichever thread happens to
execute service_shutdown()). That reduces the risk of violating some
unknown assumptions.
Listeners are added to the list multiple times due to how DCBs are removed
from the list. This requires an additional check to make sure that a DCB
is not added to the list twice.
Doing batch inserts through readwritesplit would stall because pending
session commands were stored instead of being executed immediately.
Session command responses that weren't complete also discarded the partial
event instead of storing it for later use.
When a session command was received, any trailing data was lost even
though an attempt was made to split it off. With this change, each
session command reply is routed individually and any trailing data is
routed separately.
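A sketch of the split; gwbuf_split() is a real MaxScale buffer utility,
but the surrounding helpers are hypothetical:

```c++
#include <maxscale/buffer.h> // GWBUF and gwbuf_split()

size_t session_command_length(GWBUF* buf); // hypothetical
void route_session_command(GWBUF* buf);    // hypothetical
void route_trailing_data(GWBUF* buf);      // hypothetical

// Cut the read buffer at the end of the session command so that the
// command and any trailing data can be routed separately.
void split_and_route(GWBUF* readbuf)
{
    GWBUF* command = gwbuf_split(&readbuf, session_command_length(readbuf));

    route_session_command(command);

    if (readbuf) // anything left over is trailing data
    {
        route_trailing_data(readbuf);
    }
}
```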
Non-replication events were implicitly ignored, but this was removed in
a recent change. The previously unused code did not break out of the
replication event handling loop.
Initial POC implementation that is capable of outputting the rules of
the cache. To do that conveniently, the JSON object is retained for the
lifetime of the CACHE_RULES object.
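A sketch of that retention, assuming the jansson library that MaxScale
uses for JSON; the structure layout here is illustrative:

```c++
#include <jansson.h>
#include <stdlib.h>

// The parsed JSON document is kept alongside the compiled rules and
// released only when the rules themselves are freed.
typedef struct cache_rules
{
    json_t* root; // the JSON the rules were created from
    /* ... compiled rules ... */
} CACHE_RULES;

void cache_rules_free(CACHE_RULES* rules)
{
    json_decref(rules->root); // release the retained JSON document
    free(rules);
}
```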
There was no real need for two separate functions for getting the stored
buffer and the target server. Combining them into one allows the task to
be handled in a nicer way.
Events larger than 16 MB are now encrypted when being saved.
Some changes were made to the binlog event details report, and
maxbinlogcheck now supports the -H option for displaying the
replication header.