When dcb_close() is called, the DCB is only marked for closing
and the actual closing takes place only after all event handlers
have been called. That way, the state of the DCB will not change
during event processing but only after.
From a handler perspective this should now be just like it was
when the zombie queue was present.
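As a rough sketch of the idea (the names below are illustrative, not
the actual DCB API): dcb_close() only raises a flag, and the flagged
DCBs are torn down once the event loop has finished dispatching.

    #include <vector>

    struct DCB
    {
        bool close_requested = false;   // set by dcb_close(), acted on later
    };

    void dispatch_event_handlers(DCB*);   // hypothetical: runs the handlers
    void dcb_final_close(DCB*);           // hypothetical: frees the DCB

    // dcb_close() only marks the DCB; nothing is freed while handlers run.
    void dcb_close(DCB* dcb)
    {
        dcb->close_requested = true;
    }

    void process_events(std::vector<DCB*>& dcbs)
    {
        for (DCB* dcb : dcbs)
        {
            dispatch_event_handlers(dcb);   // may call dcb_close() on any DCB
        }

        // Every handler has returned; only now are the marked DCBs closed,
        // so the DCB state never changes during event processing.
        for (DCB* dcb : dcbs)
        {
            if (dcb->close_requested)
            {
                dcb_final_close(dcb);
            }
        }
    }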
TODO: There are far too many state variables or variables akin to
state variables - dcb_role, state, persistentstart, n_close -
in DCB. A cleanup is warranted.
As COM_QUIT would terminate the connection, there's no need to initiate
the session reset process. Also make sure all buffers are empty before
putting the DCB into the pool.
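A minimal sketch of that precondition (the buffer field names are
illustrative, not the exact DCB members):

    struct GWBUF;   // MaxScale's buffer chain type, kept opaque here

    struct DCB
    {
        GWBUF* readq;    // buffered incoming data
        GWBUF* writeq;   // pending outgoing data
        GWBUF* delayq;   // writes waiting to be flushed
    };

    // All buffers must be drained before the DCB may enter the pool;
    // leftover data would leak into whichever session reuses it next.
    bool dcb_may_be_pooled(const DCB* dcb)
    {
        return dcb->readq == nullptr
            && dcb->writeq == nullptr
            && dcb->delayq == nullptr;
    }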
Added extra debug assertions for parts of the code that are related to the
COM_CHANGE_USER processing.
The function that printed all sessions assumed that all client DCBs had
valid, non-dummy sessions. It is possible that a client with a dummy
session is in the list. These sessions should be ignored.
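The fix boils down to a guard of roughly this shape (the type, state
and helper names are assumptions, not the exact MaxScale code):

    #include <vector>

    enum session_state_t { SESSION_STATE_READY, SESSION_STATE_DUMMY };

    struct MXS_SESSION { session_state_t state; };
    struct DCB { MXS_SESSION* session; };

    void print_session(const MXS_SESSION*);   // hypothetical printer

    void print_all_sessions(const std::vector<DCB*>& clients)
    {
        for (const DCB* dcb : clients)
        {
            // A client may still carry a placeholder "dummy" session;
            // skip it instead of printing invalid data.
            if (dcb->session == nullptr
                || dcb->session->state == SESSION_STATE_DUMMY)
            {
                continue;
            }
            print_session(dcb->session);
        }
    }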
The EOF packet calculation function in modutil.cc didn't handle the case
where the payload exceeded the maximum packet size and could mistake
binary data for an ERR packet.
The state of a multi-packet payload is now exposed by the
modutil_count_signal_packets function. This allows proper handling of
large multi-packet payloads.
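A sketch of the corrected accounting (not the actual
modutil_count_signal_packets signature): a payload length of 0xffffff
means the logical packet continues in the next protocol packet, so the
first byte of that continuation is user data and must never be read as
an ERR (0xff) or EOF (0xfe) marker.

    #include <cstddef>
    #include <cstdint>

    // Counts EOF/ERR packets in `data` while carrying the multi-packet
    // state across calls in *in_large_payload.
    int count_signal_packets(const uint8_t* data, size_t len,
                             bool* in_large_payload)
    {
        int count = 0;
        size_t offset = 0;

        while (offset + 4 <= len)
        {
            uint32_t payload = data[offset]
                | (uint32_t(data[offset + 1]) << 8)
                | (uint32_t(data[offset + 2]) << 16);

            if (!*in_large_payload && payload > 0 && offset + 4 < len)
            {
                uint8_t first = data[offset + 4];

                // The real code applies further checks (e.g. the payload
                // size of an EOF packet); only the continuation handling
                // is shown here.
                if (first == 0xfe || first == 0xff)
                {
                    ++count;
                }
            }

            // A maximum-size payload continues in the next packet.
            *in_large_payload = (payload == 0xffffff);
            offset += 4 + payload;
        }

        return count;
    }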
Added minor improvements to mxs1110_16mb to handle testing of this change.
As each client/server connection will be handled by a specific
thread, all closing activity can take place directly when the
connection is closed and not later when the zombie queue is
processed.
In a subsequent commit the zombie queue will be removed.
The default users are now inserted into the admin users files if no
existing files are found. This removes the hard-coded checks for admin
user names and simplifies the admin user logic.
The default address for the admin interface is the IPv6 address '::',
which corresponds to the IPv4 address '0.0.0.0'. If the system doesn't
support IPv6, an attempt to bind on IPv4 should be made.
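The standard fallback pattern looks like this (a sketch, not the actual
MaxScale admin code):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>

    int open_admin_listener(uint16_t port)
    {
        int fd = socket(AF_INET6, SOCK_STREAM, 0);

        if (fd != -1)
        {
            sockaddr_in6 sa6 = {};
            sa6.sin6_family = AF_INET6;
            sa6.sin6_addr = in6addr_any;              // '::'
            sa6.sin6_port = htons(port);

            if (bind(fd, (sockaddr*)&sa6, sizeof(sa6)) == 0)
            {
                return fd;
            }
            close(fd);
        }

        fd = socket(AF_INET, SOCK_STREAM, 0);         // no IPv6: try IPv4

        if (fd != -1)
        {
            sockaddr_in sa4 = {};
            sa4.sin_family = AF_INET;
            sa4.sin_addr.s_addr = htonl(INADDR_ANY);  // '0.0.0.0'
            sa4.sin_port = htons(port);

            if (bind(fd, (sockaddr*)&sa4, sizeof(sa4)) != 0)
            {
                close(fd);
                fd = -1;
            }
        }

        return fd;
    }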
The uses_function type rule matches when any of the columns given as
values is used with a function. With this, individual columns can be
denied from being used with a function.
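For illustration, a rule file entry along these lines would deny any
statement that applies a function to the listed columns (the rule and
column names are made up):

    rule protected_columns deny uses_function ssn salary
    users %@% match any rules protected_columns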
When a persistent connection is reused, a COM_CHANGE_USER command is
executed to reset the session state. If the reused connection was closed
before the response to the COM_CHANGE_USER was received and taken into use
by another connection, another COM_CHANGE_USER would be sent to reset
the session state again. Because the first response is still on its
way, it appears as if two responses are generated for a single
COM_CHANGE_USER.
The way to fix this is to avoid putting connections that haven't been
successfully reset into the connection pool.
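Conceptually the fix is a gate like the following (hypothetical names;
the real logic lives in the MySQL backend protocol module):

    // Outcome of the COM_CHANGE_USER that resets a reused connection.
    enum class ResetState
    {
        NOT_STARTED,
        PENDING,    // COM_CHANGE_USER sent, response not yet received
        DONE        // OK packet received, session state is clean
    };

    struct BackendConn
    {
        ResetState reset = ResetState::NOT_STARTED;
    };

    // Called when a session releases its backend connection.
    bool try_pool_connection(const BackendConn* conn)
    {
        if (conn->reset == ResetState::PENDING)
        {
            // The reset response is still in flight; pooling the
            // connection now would make the next user see two responses
            // to its own COM_CHANGE_USER. Close it instead.
            return false;
        }
        return true;   // safe to put into the pool
    }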
If a MaxScale Binlog Server is detected by the MySQL monitor, its
server status is 'Relay Master, Running' without 'Slave': this way the
Binlog Server cannot be used by any statement routing. Additionally,
the SELECT for replication lag is skipped.
MXS-1344: The binlog server reports the real master id in SHOW SLAVE
STATUS | SHOW ALL SLAVES STATUS, regardless of the value of the
'master_id' identity parameter.
The binlog server reports its own server id, or the identity value of
'master_id', in the MySQL monitor query SELECT @@global.server_id,
@@read_only;
Note: SELECT @@global.server_id (no other fields) still reports the
real master server id or the value set in 'master_id'.
In qc_sqlite the fields that a particular function refers to are now
collected and reported. qc_mysqlembedded and the compare utility need
to be updated accordingly; that will be done in subsequent commits.
The internal connections of the binlogrouter should be closed in the same
thread that created them. This should be the "main" thread, i.e. thread 0,
that starts the original binlogrouter service.
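A generic sketch of the pattern (MaxScale's worker API is not
reproduced here): the closing work is posted to the thread that owns
the connection instead of being performed in place.

    #include <functional>
    #include <mutex>
    #include <queue>

    // Minimal per-thread task queue; thread 0 would be the "main" worker
    // that created the binlogrouter's internal connections.
    class Worker
    {
    public:
        void post(std::function<void()> task)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_tasks.push(std::move(task));
        }

        // Executed by the owning thread as part of its event loop.
        void run_pending()
        {
            std::queue<std::function<void()>> tasks;
            {
                std::lock_guard<std::mutex> guard(m_lock);
                std::swap(tasks, m_tasks);
            }
            while (!tasks.empty())
            {
                tasks.front()();
                tasks.pop();
            }
        }

    private:
        std::mutex m_lock;
        std::queue<std::function<void()>> m_tasks;
    };

    // Instead of closing directly from an arbitrary thread:
    //     workers[0].post([dcb]() { dcb_close(dcb); });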
When a session is being closed in a controlled manner, i.e. a COM_QUIT
is received from the client, it can be deduced that the backend
connections are very likely to be idle. This can be used as an
additional qualification that must be met by all connections before
they can be candidates for connection pooling.
This assumption will not hold with batched and asynchronous queries. In
this case it is possible that the COM_QUIT is received from the client
before even the first result from the backend is read. For this to work,
the protocol module would need to track the number and state of expected
responses.
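As a sketch (all names invented), the qualification is a simple
conjunction, and the caveat above is visible in it: without response
tracking in the protocol module, response_pending cannot be known
reliably for batched queries.

    struct ClientSession
    {
        bool quit_received = false;      // set when COM_QUIT arrives
    };

    struct BackendConn
    {
        bool response_pending = false;   // needs protocol-level tracking
    };

    // A backend connection is a pooling candidate only if the client
    // ended the session cleanly, implying the backend is very likely
    // idle.
    bool is_pool_candidate(const ClientSession& client,
                           const BackendConn& backend)
    {
        return client.quit_received && !backend.response_pending;
    }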