If master_failure_mode was set to error_on_write, a reconnection to the
old master would happen after the following events:
- Master server fails and the connection is closed
- The master server recovers
- A slave fails and the connection is closed
- A replacement for the slave is searched for
If these events took place, the master would be taken back into use with
an inconsistent session state.
This function returns more detailed information about the fields
of a statement. It supersedes qc_get_affected_fields(), which will
be deprecated and removed.
Note that this function introduces a new kind of behaviour; the
returned data belongs to the GWBUF and remains valid for as long as
the GWBUF is alive. That means unnecessary copying can be avoided.
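A minimal usage sketch, assuming a signature along the lines of
void qc_get_field_info(GWBUF* stmt, const QC_FIELD_INFO** infos,
size_t* n_infos) and a QC_FIELD_INFO with a column member:

#include <stdio.h>
#include <maxscale/query_classifier.h>

void report_fields(GWBUF* stmt)
{
    const QC_FIELD_INFO* infos;
    size_t n_infos;

    qc_get_field_info(stmt, &infos, &n_infos);

    for (size_t i = 0; i < n_infos; i++)
    {
        /* The array is owned by the GWBUF; nothing is copied or freed. */
        printf("field: %s\n", infos[i].column);
    }

    /* infos must not be used after stmt has been freed. */
}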
When a persistent connection is taken from the pool, the state is reset
with a COM_CHANGE_USER on the next write. This allows reuse of persistent
connections without having to worry about the state of the MySQL session.
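The mechanism can be pictured roughly like this (hypothetical types and
helpers, not the actual DCB/protocol code):

#include <stdbool.h>

typedef struct pooled_conn
{
    bool taken_from_pool;   /* set when the connection leaves the pool */
} pooled_conn;

void send_com_change_user(pooled_conn* conn);             /* hypothetical */
void send_packet(pooled_conn* conn, const void* packet);  /* hypothetical */

void write_to_backend(pooled_conn* conn, const void* packet)
{
    if (conn->taken_from_pool)
    {
        /* Reset the backend session (user variables, temporary tables,
         * charset) before the first statement of the new client. */
        send_com_change_user(conn);
        conn->taken_from_pool = false;
    }

    send_packet(conn, packet);
}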
If the binlog server is already configured and there is no pending
transaction, a new binlog file is created after CHANGE MASTER.
If START SLAVE is issued, replication starts as usual.
If MaxScale is restarted, replication starts using the newly created
file.
When configuring the binlog server for the first time (master.ini does
not exist yet), the specified MASTER_LOG_FILE is created in the
$binlogdir.
If the START SLAVE command is not issued, replication can still start
after restarting MaxScale, as the binlog file already exists.
When a COM_CHANGE_USER statement was executed, the new user credentials
were copied only after the authentication message had been sent. This
caused the COM_CHANGE_USER to always succeed on the first attempt, as it
was authenticated with the current (old) credentials; the user
credentials would always lag behind by one.
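Put simply, the fix is a matter of ordering. A simplified sketch (the
helper functions are hypothetical and the MYSQL_session/header names are
assumptions, not the real protocol code):

#include <stdbool.h>
#include <maxscale/protocol/mysql.h>

void copy_change_user_credentials(GWBUF* packet, MYSQL_session* session);  /* hypothetical */
bool authenticate_and_reply(MYSQL_session* session);                       /* hypothetical */

bool handle_com_change_user(MYSQL_session* session, GWBUF* packet)
{
    /* Copy the new credentials from the packet first... */
    copy_change_user_credentials(packet, session);

    /* ...and only then authenticate and send the reply, so the check no
     * longer runs against the credentials of the previous request. */
    return authenticate_and_reply(session);
}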
The operation of the statement to be prepared is no longer
reported as the operation of the PREPARE statement.
Instead, when the type of the statement is
QUERY_TYPE_PREPARE_NAMED_STMT, the operation can be obtained
using qc_get_prepare_operation().
The qc_mysqlembedded implementation will be provided in a
subsequent commit.
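A sketch of the intended use, assuming qc_get_prepare_operation() takes
the statement's GWBUF and returns a qc_query_op_t, and that the type mask
is obtained with qc_get_type():

#include <maxscale/query_classifier.h>

qc_query_op_t preparable_operation(GWBUF* stmt)
{
    qc_query_op_t op = QUERY_OP_UNDEFINED;

    if (qc_get_type(stmt) & QUERY_TYPE_PREPARE_NAMED_STMT)
    {
        /* E.g. QUERY_OP_UPDATE for PREPARE ps FROM 'UPDATE t SET a = 1'. */
        op = qc_get_prepare_operation(stmt);
    }

    return op;
}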
Given a config file "config.cnf", we look for the directory
"config.cnf.d" and recursively load all files in that hierarchy
whose suffix is ".cnf"; other files are ignored.
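The lookup can be pictured roughly as follows (an illustrative sketch,
not the actual implementation; it omits error handling and relies on the
glibc-specific d_type field):

#include <dirent.h>
#include <stdio.h>
#include <string.h>

static int has_cnf_suffix(const char* name)
{
    size_t len = strlen(name);
    return len > 4 && strcmp(name + len - 4, ".cnf") == 0;
}

static void load_dir(const char* dir)
{
    DIR* d = opendir(dir);

    if (d)
    {
        struct dirent* entry;

        while ((entry = readdir(d)) != NULL)
        {
            if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
            {
                continue;
            }

            char path[4096];
            snprintf(path, sizeof(path), "%s/%s", dir, entry->d_name);

            if (entry->d_type == DT_DIR)
            {
                load_dir(path);                   /* recurse into subdirectories */
            }
            else if (has_cnf_suffix(entry->d_name))
            {
                printf("would load: %s\n", path); /* other suffixes are ignored */
            }
        }

        closedir(d);
    }
}

For "config.cnf", the entry point would be load_dir("config.cnf.d").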
Currently, duplicate sections are checked on a file-by-file basis.
That will be changed so that duplicate sections are not allowed
across all the files.
MXS-961: When checksums are in use and there is an error in the
replication stream from the master connection,
blr_terminate_master_replication() has no effect: the checksum detection
calls blr_master_delayed_connect(router); and the connection is scheduled
again.
The fix breaks out of the main loop as soon as the error indicator byte
is seen, so no other computation (such as the checksum) is done.
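In outline the fix looks roughly like this (a heavily simplified sketch;
the helper functions and the argument list of
blr_terminate_master_replication() are assumptions, not the real
blr_master.c code):

while (more_packets(reply))                     /* hypothetical helper */
{
    uint8_t* data = packet_payload(reply);      /* hypothetical helper */

    if (data[0] == 0xff)    /* error indicator byte from the master */
    {
        blr_terminate_master_replication(router, data, packet_length(reply));
        break;              /* stop immediately, no checksum handling */
    }

    /* ... normal event handling, including checksum verification ... */
}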
- First <maxscale/cdefs.h>.
- Then all system, C runtime and OS include files, in alphabetical order.
- Then include files for "3rd-party" software in a loose order of
importance.
- Then MaxScale's own <maxscale/...> headers, ordered alphabetically (see
the example below).
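For example, a source file following this order would begin like this;
every header name below except <maxscale/cdefs.h> is merely illustrative:

#include <maxscale/cdefs.h>

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#include <mysql.h>
#include <pcre2.h>

#include <maxscale/alloc.h>
#include <maxscale/buffer.h>
#include <maxscale/log_manager.h>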
The MAXROWS_DISCARDING_RESPONSE state is handled differently: the OK
packet is sent only after an EOF is seen in the reply, even when the
backend transmits the result in multiple packets.
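Schematically, the discarding branch of the reply handler behaves like
this (apart from MAXROWS_DISCARDING_RESPONSE itself, the state, variable
and helper names are hypothetical):

if (csdata->state == MAXROWS_DISCARDING_RESPONSE)
{
    if (contains_result_eof(buffer))   /* hypothetical helper */
    {
        /* The whole reply has now been consumed, so it is safe to answer
         * the client; an earlier OK could race with the rest of a
         * multi-packet transmission from the backend. */
        send_ok_to_client(csdata);     /* hypothetical helper */
    }

    gwbuf_free(buffer);                /* the discarded data is dropped */
}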
The operation of a PREPARE statement will be that of the preparable
statement. That will make it possible to know whether an EXECUTEd
prepared statement is, for example, a SELECT or an UPDATE.
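For example, assuming qc_get_operation() is given the GWBUF of
PREPARE ps FROM 'SELECT * FROM t':

qc_query_op_t op = qc_get_operation(stmt);
/* op is QUERY_OP_SELECT: the PREPARE reports the operation of its
 * preparable statement, so a later EXECUTE ps can be treated as a read. */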
First implementation of the maxrows filter.
Result sets with more than max_resultset_rows rows will be skipped and
an empty result set is returned to the client.
Not yet tested with multi_statements
[MaxRows]
type=filter
module=maxrows
max_resultset_rows=10
debug=15