Before this change, only db2.t2 was reported as the table name for a
select like

select * from db1.t1 union select * from db2.t2

With this change, both db1.t1 and db2.t2 are reported.
The response to the COM_CHANGE_USER should always be turned into a
contiguous buffer of complete packets. This guarantees that the code that
processes it functions properly.
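A minimal sketch of what this could look like, assuming the MaxScale 2.x
buffer helpers modutil_get_complete_packets and gwbuf_make_contiguous
(the surrounding protocol code is omitted):

#include <maxscale/buffer.h>
#include <maxscale/modutil.h>

/* Hedged sketch: take only the complete packets out of the read buffer
 * and flatten them into one contiguous allocation before the
 * COM_CHANGE_USER response is processed. */
static GWBUF* prepare_change_user_response(GWBUF** readbuf)
{
    GWBUF* complete = modutil_get_complete_packets(readbuf);

    if (complete)
    {
        /* The processing code can now use plain pointer arithmetic
         * without worrying about packets split across buffer links. */
        complete = gwbuf_make_contiguous(complete);
    }

    return complete;
}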
As COM_QUIT would terminate the connection, there's no need to initiate
the session reset process. Also make sure all buffers are empty before
putting the DCB into the pool.
Added extra debug assertions to the parts of the code that are related
to COM_CHANGE_USER processing.
The function that printed all sessions assumed that all client DCBs had
valid, non-dummy sessions. It is possible that a client with a dummy
session is in the list; these sessions should be ignored.
When a persistent connection is reused, a COM_CHANGE_USER command is
executed to reset the session state. If the reused connection was
returned to the pool before the response to that COM_CHANGE_USER arrived
and was then taken into use by another session, a second COM_CHANGE_USER
would be sent to, again, reset the session state. Because the response
to the first one is still on its way, it appears as if two responses are
generated for a single COM_CHANGE_USER.

The fix is to avoid putting connections that haven't been successfully
reset into the connection pool.
When a session is being closed in a controlled manner, i.e. a COM_QUIT is
received from the client, it is possible to deduce from this fact that the
backend connections are very likely to be idle. This can be used as an
additional qualification that must be met by all connections before they
can be candidates for connection pooling.
This assumption will not hold with batched and asynchronous queries. In
this case it is possible that the COM_QUIT is received from the client
before even the first result from the backend is read. For this to work,
the protocol module would need to track the number and state of expected
responses.
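Taken together, the qualifications described in the notes above might
look like the following sketch; the struct and flag names are
illustrative stand-ins, not the actual MaxScale DCB fields:

#include <stdbool.h>

/* Illustrative stand-in for the state tracked on a backend connection;
 * the real fields live in MaxScale's DCB and protocol structures. */
typedef struct backend_conn
{
    bool reset_complete; /* response to the COM_CHANGE_USER received */
    bool buffers_empty;  /* no partial reads or pending writes */
    bool client_quit;    /* session ended with a COM_QUIT */
} backend_conn;

/* A connection goes into the pool only if it was successfully reset,
 * its buffers are drained and the session was closed in a controlled
 * manner, which implies the backend is very likely idle. */
static bool pool_candidate(const backend_conn* conn)
{
    return conn->reset_complete
           && conn->buffers_empty
           && conn->client_quit;
}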
The buffer used to store the hexadecimal string was one byte too
short. This caused the trailing null terminator to be written into
unallocated memory.
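The shape of the bug, as a hedged stand-alone example (the real function
in MaxScale differs):

#include <stdio.h>
#include <stdlib.h>

/* Converting len bytes to hex needs 2 * len characters plus one byte
 * for the terminating null. The buggy version allocated only 2 * len,
 * so the terminator was written past the end of the allocation. */
char* bytes_to_hex(const unsigned char* data, size_t len)
{
    char* hex = malloc(2 * len + 1); /* was: malloc(2 * len) */

    if (hex)
    {
        for (size_t i = 0; i < len; i++)
        {
            sprintf(hex + 2 * i, "%02x", data[i]);
        }

        hex[2 * len] = '\0';
    }

    return hex;
}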
The pointer manipulation in modutil_count_statements assumed that if a
semicolon is found, it is not the last character in the buffer. It also
assumed that the buffer contained at least one readable character.
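A hedged sketch of the hardened loop; the real function operates on a
GWBUF, this version takes a plain pointer and length for illustration:

#include <stddef.h>

int count_statements(const char* sql, size_t len)
{
    int count = 1;
    const char* ptr = sql;
    const char* end = sql + len;

    /* The bounds check also covers an empty buffer: the loop body is
     * never entered and no byte is read. */
    while (ptr < end)
    {
        if (*ptr == ';' && ptr + 1 < end)
        {
            /* Only a semicolon with something after it starts a new
             * statement; a trailing semicolon must not cause a read
             * past the end of the buffer. */
            count++;
        }

        ptr++;
    }

    return count;
}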
If binlog checksums are enabled, the extra checksum bytes are removed
from the end of each event. The avrorouter assumes that whatever caused
the binlogs to appear in the first place has already verified the
checksums.

Also removed an extra byte that was being added to the length of all
query events.
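In code the trimming amounts to dropping the 4-byte CRC32 from the event
length; a minimal sketch (the constant name is illustrative):

#include <stdbool.h>
#include <stdint.h>

/* A CRC32 binlog checksum appends 4 bytes to every event. */
#define BINLOG_EVENT_CRC_SIZE 4

/* Hedged sketch: the checksum bytes are simply cut off the end of the
 * event, since the avrorouter trusts that whoever wrote the binlogs
 * already verified them. */
static inline uint32_t usable_event_length(uint32_t event_len,
                                           bool checksums_enabled)
{
    return checksums_enabled ? event_len - BINLOG_EVENT_CRC_SIZE
                             : event_len;
}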
Having bugs and issues listed separately has its benefits over listing
everything. The same output can still be achieved by running both scripts
and concatenating their output.
A new option 'slave_hostname' allows the hostname sent in
COM_REGISTER_SLAVE to be set.

SHOW SLAVE HOSTS on the master server then shows the hostname set in the
binlog router:

MariaDB [(none)]> SHOW SLAVE HOSTS;
+-----------+-----------------------------+------+-----------+
| Server_id | Host                        | Port | Master_id |
+-----------+-----------------------------+------+-----------+
| 93        | maxscale-blr-1.mydomain.net | 8808 | 10124     |
+-----------+-----------------------------+------+-----------+
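A hedged example of how the option might be set in the binlog router
service, assuming the usual router_options syntax (the values match the
output above):

[Replication]
type=service
router=binlogrouter
router_options=server_id=93,slave_hostname=maxscale-blr-1.mydomain.net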
If a connection has not been fully established (i.e. authentication has
not yet been completed), it should not be considered a connection pool
candidate.
When a buffer is cloned and the original buffer is then parsed and
freed, freeing the cloned buffer will not release the memory that was
allocated when the original buffer was parsed.

This is a side-effect of storing the buffer objects in the buffer itself
instead of in the shared memory buffer. A buffer object created after
cloning is lost to the clone, as the clone has no pointer to an object
that was created later.

Moving the buffer objects into the shared memory buffer fixes the leak.
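A simplified sketch of the layout change; the real definitions are in
MaxScale's buffer.h and the field names here are illustrative:

typedef struct buffer_object
{
    struct buffer_object* next;
    void* data;
    void (*free_fn)(void*);
} buffer_object_t;

typedef struct shared_buf
{
    unsigned char* data;
    int refcount;
    buffer_object_t* objects; /* after the fix: shared by every clone,
                               * freed when the last reference is gone */
} SHARED_BUF;

typedef struct gwbuf
{
    SHARED_BUF* sbuf;
    /* before the fix the buffer objects hung off the GWBUF itself, so
     * an object created after cloning was invisible to the clone and
     * leaked when the clone was the last buffer to be freed */
} GWBUF;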
Replication needs to be stopped before the binlogrouter is started. If
replication is stopped only afterwards, two servers with the same
server_id value may attempt to register as slaves, which causes the
later of the two to fail.
Enabling the option hinders the use of maintenance mode with the root
master node in most use-cases.

This happens because maintenance mode causes a server to be treated as
if it were down. The Galera monitor then waits for the cluster to
reorganize before assigning a new master node. This is correct (but very
unexpected) behavior for single-instance use-cases.
The replication_manager is only designed for systems that have yum
installed, which means it will always fail on non-RHEL/CentOS systems.
The query threads in mxs1323_stress were not checking whether the test
had ended while they were executing queries. This caused test timeouts,
as the queries can take a relatively long time.
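The fix boils down to re-checking a shared flag between queries; a
minimal sketch (run_one_query is a hypothetical placeholder for the
test's query code):

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-in for the test's actual query code. */
void run_one_query(void* data);

static atomic_bool test_running = true;

/* Hedged sketch: the worker re-checks the flag after every query, so
 * it stops promptly when the main thread ends the test instead of
 * grinding through the remaining (slow) queries. */
static void* query_thread(void* data)
{
    while (atomic_load(&test_running))
    {
        run_one_query(data); /* one potentially slow query */
    }

    return NULL;
}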