The qlafilter now has an option to log all messages to a single
file instead of session-specific files. In this mode, the session id
is printed at the beginning of each line. The documentation has been
updated to match. An option to flush after every write was also
added.
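A configuration enabling both new options might look like this (a
sketch: the parameter names `log_type`, `flush` and `filebase` are
assumptions, not taken from this change):

[QLA]
type=filter
module=qlafilter
filebase=/var/log/maxscale/qla/query
log_type=unified
flush=true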
When persistent connections were used, it was possible that the injection
of COM_CHANGE_USER statements caused a crash when a DCB in the wrong state
was accessed.
For MySQL protocol modules, the `data` member of the client DCB points to
the shared session data, a MYSQL_session struct, but for sessions in the
persistent pool, it points to NULL. The boolean `was_persistent` tells
whether a DCB was just taken from the pool or whether it has been in
continuous use.
The `was_persistent` status wasn't properly reset for connections that
were put into the pool. This caused a COM_CHANGE_USER statement to be
injected for stale connections in the pool, which crashed when the
NULL `data` member was accessed.
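A minimal sketch of the corrected logic, with stand-in types and
names (`dcb_to_pool`, `inject_change_user` and the struct layout are
illustrative, not the actual MaxScale code):

#include <stdbool.h>
#include <stddef.h>

typedef struct { void *data; bool was_persistent; } DCB;

void inject_change_user(DCB *dcb); /* COM_CHANGE_USER from dcb->data */

/* Returning a DCB to the pool: clear the flag so a stale pool entry
 * is not mistaken for one just handed out (the flag is set to true
 * again when the DCB is taken from the pool). */
void dcb_to_pool(DCB *dcb)
{
    dcb->data = NULL;            /* pooled DCBs carry no session data */
    dcb->was_persistent = false; /* the fix: reset the status here */
}

/* Write path: only a DCB just taken from the pool needs its state
 * reset, and dcb->data is non-NULL in that case. */
void mysql_write(DCB *dcb)
{
    if (dcb->was_persistent)
    {
        dcb->was_persistent = false;
        inject_change_user(dcb);
    }
    /* ... normal write path ... */
}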
Previously, the session_id was incremented only after the filters had
been created, so every filter saw a session_id of zero. The increment
now happens before filter creation.
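A sketch of the corrected order of operations (the names are
illustrative, not the actual MaxScale symbols):

/* Assign the id first; filter instances capture it at creation. */
session->ses_id = atomic_fetch_add(&next_session_id, 1) + 1;
session_setup_filters(session); /* filters now see the real id */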
This allows safer lock-free reads to be done on lists that never shrink in
size. The main use-case for this is to allow servers to be added to a
service without locking the service each time a new session is created.
Synchronizing the memory before adding new components into a list
guarantees that if a session reads from the list and sees the new list
item, the memory pointed to by the item is valid.
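The publication pattern can be illustrated with C11 atomics (a
sketch, not the actual MaxScale list code):

#include <stdatomic.h>
#include <stdlib.h>

typedef struct node
{
    void *item;
    struct node *_Atomic next;
} node_t;

/* Writer: fully initialise the node, then publish it with a release
 * store, so a reader that sees the new pointer also sees the memory
 * it points to. */
void list_append(node_t *_Atomic *link, void *item)
{
    node_t *n = malloc(sizeof(node_t));
    n->item = item;
    atomic_init(&n->next, NULL);
    atomic_store_explicit(link, n, memory_order_release);
}

/* Reader: an acquire load pairs with the writer's release store, so
 * no lock is needed as long as the list never shrinks. */
void *list_first(node_t *_Atomic *head)
{
    node_t *n = atomic_load_explicit(head, memory_order_acquire);
    return n ? n->item : NULL;
}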
The function is no longer used and was quite suboptimal, so it has
been removed.
At the same time, qc_get_prepare_name, qc_get_prepare_operation and
qc_get_field_info, which were missing from qc_dummy, were added.
The filter can detect SERVER_MORE_RESULTS_EXIST, which means the
server is sending more result sets. Example:
DROP PROCEDURE IF EXISTS multi;
DELIMITER $$
CREATE PROCEDURE multi() BEGIN
SELECT 1;
SELECT id FROM t2 limit 40;
set @a=4;
SELECT 2;
END$$
DELIMITER ;
MySQL> call multi();
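On the wire, the flag lives in the status bits of the EOF (and OK)
packets between result sets. A sketch of the check (offsets per the
MySQL client/server protocol; the helper name is illustrative):

#include <stdbool.h>
#include <stdint.h>

#define SERVER_MORE_RESULTS_EXIST 0x0008

/* EOF packet layout: 3-byte length, 1-byte sequence, 0xFE marker,
 * 2-byte warning count, 2-byte status flags (little-endian). */
static bool eof_has_more_results(const uint8_t *packet)
{
    uint16_t status = packet[7] | (uint16_t)(packet[8] << 8);
    return (status & SERVER_MORE_RESULTS_EXIST) != 0;
}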
The check for the success of the configuration file processing always
returned a successful value, even if loading had failed.
In addition, a log message referred to the active configuration even
though the active configuration was set only after processing was
complete. Since configuration failures are always fatal, there's no harm
in preemptively setting the active configuration to the one currently
being processed.
If a Galera node has a nonpositive priority, the node will never be chosen
as the master. This gives the user more control over how the master is
chosen.
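For example (assuming the galeramon `use_priority` option and the
server-level `priority` parameter):

[Galera Monitor]
type=monitor
module=galeramon
servers=node1,node2
use_priority=true

[node2]
type=server
address=192.168.0.102
port=3306
# priority <= 0: node2 is never chosen as the master
priority=0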
The luafilter didn't use a format string with dcb_printf, which can
lead to unexpected results if the returned string contains printf
conversion specifiers.
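The fix follows the standard printf discipline: the untrusted string
must be passed as an argument, never as the format (a minimal
illustration):

/* Wrong: special characters in str are interpreted as conversion
 * specifiers, reading through non-existent varargs. */
dcb_printf(dcb, str);

/* Right: a constant format string; str is only ever data. */
dcb_printf(dcb, "%s", str);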
This function returns more detailed information about the fields of a
statement. It supersedes qc_get_affected_fields(), which will be
deprecated and removed.
Note that this function introduces a new kind of behaviour: the
returned data belongs to the GWBUF and remains valid for as long as
the GWBUF is alive. That means no unnecessary copying needs to be
done.
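Usage might look roughly like this (a sketch: the exact signature and
the QC_FIELD_INFO fields are assumptions based on the description
above):

const QC_FIELD_INFO *infos;
size_t n_infos;

/* infos points into data owned by buf and stays valid only as long
 * as buf is alive; nothing needs to be copied. */
qc_get_field_info(buf, &infos, &n_infos);

for (size_t i = 0; i < n_infos; ++i)
{
    report_field(infos[i].database, infos[i].table, infos[i].column);
}

/* After gwbuf_free(buf), infos must no longer be dereferenced. */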
When a persistent connection is taken from the pool, the state is reset
with a COM_CHANGE_USER on the next write. This allows reuse of persistent
connections without having to worry about the state of the MySQL session.
Given a config file "config.cnf", we look for the directory
"config.cnf.d" and recursively load from that hierarchy all files
whose suffix is ".cnf"; other files are ignored.
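For example, with the layout below, config.cnf, extra.cnf and
servers/nodes.cnf are all loaded, while README.txt is ignored:

config.cnf
config.cnf.d/
    extra.cnf
    README.txt
    servers/
        nodes.cnf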
Currently, duplicate sections are checked on a file-by-file basis.
This will be changed so that duplicate sections are not allowed
across all the files.
- First <maxscale/cdefs.h>.
- Then all system, C runtime and OS include files, in alphabetical
  order.
- Then include files for third-party software, in a loose order of
  importance.
- Then MaxScale headers, ordered alphabetically.
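For example, the top of a hypothetical source file would look like:

#include <maxscale/cdefs.h>

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#include <mysql.h>
#include <pcre2.h>

#include <maxscale/alloc.h>
#include <maxscale/dcb.h>
#include <maxscale/log_manager.h>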
The MAXROWS_DISCARDING_RESPONSE state is handled differently: the OK
packet is sent only after an EOF is seen in the reply, even when the
backend transmits the response in multiple packets.
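A sketch of the intended handling (the state and helper names here
are illustrative except for MAXROWS_DISCARDING_RESPONSE itself):

case MAXROWS_DISCARDING_RESPONSE:
    /* Discard every packet of the result set; only the terminating
     * EOF triggers the reply to the client, no matter how many
     * packets the backend used. */
    if (packet_is_final_eof(packet))
    {
        send_ok_to_client(session);
        state = MAXROWS_EXPECTING_NOTHING;
    }
    break;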
First implementation of the maxrows filter.
Result sets with more than max_resultset_rows rows are discarded and
an empty result set is returned to the client.
Not yet tested with multi-statements.
[MaxRows]
type=filter
module=maxrows
max_resultset_rows=10
debug=15
Now that a filter can express that the transaction state is tracked,
the cache implementation can be simplified. We do not need to cater
for the case where a "too short" or "too long" packet is delivered.
Further, since the autocommit mode and transaction state of the session
are tracked, the filter can cache data when it is safe to do so. In
practice that means when either AUTOCOMMIT is ON and no explicit
transaction is active or when a READ ONLY transaction is active,
irrespective of the autocommit state.
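The resulting rule fits in a single predicate (a sketch; the inputs
stand for whatever the tracked session state exposes):

#include <stdbool.h>

/* Caching is safe when no statement can observe uncommitted changes:
 * either autocommit is on and no explicit transaction is active, or
 * the active transaction is read-only. */
static bool can_cache(bool autocommit, bool trx_active, bool trx_read_only)
{
    return (autocommit && !trx_active) || (trx_active && trx_read_only);
}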
In principle it would be possible to tentatively cache data during
a transaction, and if the transaction is committed successfully
flush the tentatively cached data to the actual cache, but that will
be for another day.