Added extra debugging to query_classifier to assist in resolving issues with optimized MaxScale builds and pthread_mutex_lock in sql/sql_class.h
Each monitor loops 10 times per second (sleeping 100 ms) and performs monitoring checks only when the monitor's interval has elapsed. Monitors notice the shutdown flag sooner, so the overall shutdown is faster.
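A minimal sketch of this polling pattern, under assumed names (monitor_t, run_checks and the millisecond bookkeeping are illustrative, not MaxScale's actual symbols):

    #include <stdbool.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Illustrative monitor state; not the real MaxScale structures. */
    typedef struct
    {
        volatile bool shutdown;    /* set when MaxScale is shutting down   */
        size_t        interval_ms; /* user-configured monitoring interval  */
    } monitor_t;

    static void run_checks(monitor_t *mon)
    {
        (void)mon; /* query the backend servers here */
    }

    static void monitor_main(monitor_t *mon)
    {
        size_t elapsed_ms = 0;

        while (!mon->shutdown)
        {
            usleep(100 * 1000);      /* wake up 10 times per second        */
            elapsed_ms += 100;

            if (elapsed_ms < mon->interval_ms)
            {
                continue;            /* interval not yet spent; only the
                                        shutdown flag was polled           */
            }

            elapsed_ms = 0;
            run_checks(mon);         /* the actual monitoring work         */
        }
    }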
hint.c: added missing header
Changed interval from unsigned long to size_t, which is guaranteed to have the same size on Windows as well (where possible).
Fix for broken replication has been added to mysql_monitor.
Both the Slave_IO and Slave_SQL threads must be running in order to assign
the SERVER_SLAVE status, but if only Slave_IO is running the master_id is
still assigned to the current server and the replication tree is still
built; if there are no slaves at all, the master remains available.
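Roughly, the classification follows this sketch (the field names, the helper and the SERVER_SLAVE bit value are simplified stand-ins, not the monitor's real definitions):

    #define SERVER_SLAVE 0x02        /* illustrative bit value */

    /* Simplified per-server bookkeeping used by the sketch. */
    typedef struct
    {
        int      slave_io_running;   /* Slave_IO state from SHOW SLAVE STATUS  */
        int      slave_sql_running;  /* Slave_SQL state                        */
        long     master_id;          /* id of the master this server points to */
        unsigned status;             /* server status bit field                */
    } monitored_server_t;

    static void classify_slave(monitored_server_t *srv, long reported_master_id)
    {
        if (srv->slave_io_running && srv->slave_sql_running)
        {
            /* Healthy replication: mark as slave and link into the tree. */
            srv->status |= SERVER_SLAVE;
            srv->master_id = reported_master_id;
        }
        else if (srv->slave_io_running)
        {
            /* Only the IO thread runs: no SERVER_SLAVE bit, but keep the
             * master_id so the replication tree can still be built and the
             * master stays available even without working slaves. */
            srv->master_id = reported_master_id;
        }
    }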
The “detect_stale_master” option has been added; its default is 0.
If set to 1, the monitor keeps the last detected master even if the
replication setup is completely broken, i.e. neither the Slave_IO nor
the Slave_SQL thread is running: this applies only to the server that
was the master before.
After the monitor or MaxScale is restarted while replication is still
stopped or not configured, there will be no master because it is not
possible to compute the replication topology tree.
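As a rough illustration of the option's effect (the names and the SERVER_MASTER bit value below are hypothetical, not the monitor's real symbols):

    #include <stdbool.h>

    #define SERVER_MASTER 0x01       /* illustrative bit value */

    typedef struct
    {
        unsigned status;      /* current status bits                        */
        bool     was_master;  /* this server held SERVER_MASTER previously  */
    } monitored_server_t;

    /* detect_stale_master defaults to 0; when set to 1 the last known
     * master keeps its MASTER bit even though replication is broken. */
    static void apply_stale_master(monitored_server_t *srv,
                                   bool replication_working,
                                   int detect_stale_master)
    {
        if (!replication_working && detect_stale_master && srv->was_master)
        {
            srv->status |= SERVER_MASTER;   /* keep the stale master */
        }
        /* After a restart was_master is false for every server, so no
         * master can be chosen until the topology can be computed again. */
    }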
* ignore typical backup files created by common editors
* move general ignore rules like "*.o" or "depend.mk" to the top-level gitignore
* ignore executables and test directories in the target dir gitignore
as these are local and there's no general catch-all pattern for them
Added depth level 0 for each cluster node; this way the algorithm for
root master selection is the same as in mysql replication:
the root master is the server at the lowest replication depth with the
MASTER bit set.
Here for Galera we assume all the servers are at the same level, that is
0.
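A minimal sketch of that selection rule (the node array and names are illustrative, not the actual monitor structures):

    #include <stddef.h>

    #define SERVER_MASTER 0x01       /* illustrative bit value */

    typedef struct
    {
        unsigned status;  /* status bit field                           */
        int      depth;   /* replication depth; 0 for every Galera node */
    } node_t;

    /* Return the root master: the node with the MASTER bit set at the
     * lowest replication depth, or NULL if there is no candidate. */
    static node_t *select_root_master(node_t *nodes, size_t n)
    {
        node_t *root = NULL;

        for (size_t i = 0; i < n; i++)
        {
            if ((nodes[i].status & SERVER_MASTER)
                && (root == NULL || nodes[i].depth < root->depth))
            {
                root = &nodes[i];
            }
        }
        return root;
    }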
Rwsplit handles ERRACT_NEW_CONNECTION by clearing the backend reference, removing callbacks, and associating the backend reference with a new backend server. If this succeeds and the router session can continue, handleError returns true; otherwise it returns false. Whenever false is returned, the session must be closed.
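In outline, the error-handling path behaves like this sketch (the helper functions and field names are placeholders, not the router's real API):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct backend backend_t;      /* opaque backend server handle */

    typedef struct
    {
        backend_t *backend_ref;            /* reference to the failed backend */
    } router_session_t;

    /* Placeholder helpers standing in for the router's internal calls. */
    static void       clear_backend_ref(router_session_t *rses) { rses->backend_ref = NULL; }
    static void       remove_callbacks(router_session_t *rses)  { (void)rses; }
    static backend_t *find_new_backend(void)                    { return NULL; }

    /* Returns true if the session can continue with a new backend,
     * false if the caller must close the session. */
    static bool handle_error_new_connection(router_session_t *rses)
    {
        clear_backend_ref(rses);
        remove_callbacks(rses);

        backend_t *replacement = find_new_backend();
        if (replacement == NULL)
        {
            return false;   /* no usable backend: session must be closed */
        }

        rses->backend_ref = replacement;   /* associate with the new backend */
        return true;        /* router session can continue */
    }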
Rwsplit now tolerates backend failures by searching for new backends whenever a monitor, backend, or client operation fails because of a backend failure.