The qlafilter now has an option to log all messages to a single
file instead of session-specific files. Session IDs are printed
at the beginning of each line when this mode is used. The
documentation has been updated to match. An option to flush after
every write was also added.
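A minimal configuration sketch of the single-file mode; the parameter
names `log_type` and `flush` and the value `unified` are assumptions
based on the description above:

[QLA]
type=filter
module=qlafilter
filebase=/var/log/maxscale/qla/qla.log
log_type=unified
flush=true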
When persistent connections were used, it was possible that the injection
of COM_CHANGE_USER statements caused a crash when a DCB in the wrong state
was accessed.
For MySQL protocol modules, the `data` member of the client DCB points to
the shared session data, a MYSQL_session struct, but for sessions in the
persistent pool, it points to NULL. The boolean `was_persistent` tells
whether a DCB was just taken from the pool or whether it has been in use.
The `was_persistent` status wasn't properly reset for connections that
were put into the pool. This caused a COM_CHANGE_USER statement to be
injected for stale connections taken from the pool, which in turn caused
a crash when the NULL `data` member was accessed.
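A rough sketch in C of the mechanism and the fix; apart from
`was_persistent` and `data`, all names are illustrative assumptions:

#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for the real DCB type. */
typedef struct
{
    void* data;          /* MYSQL_session for client DCBs, NULL in the pool */
    bool was_persistent; /* set when the DCB is taken from the pool */
} DCB;

/* The protocol module injects a COM_CHANGE_USER on the first write
 * after pool reuse; the injection path dereferences dcb->data, so it
 * must never run for a DCB that is still in the pool. */
bool needs_change_user(const DCB* dcb)
{
    return dcb->was_persistent;
}

/* The fix: reset the flag when a connection is put into the pool so
 * that a stale pooled DCB, whose data member is NULL, cannot trigger
 * the injection. */
void pool_add(DCB* dcb)
{
    dcb->was_persistent = false;
    dcb->data = NULL; /* sessions in the pool point to NULL */
}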
The filter can detect SERVER_MORE_RESULTS_EXIST, which means the server
is sending more result sets. Example:
DROP PROCEDURE IF EXISTS multi;
DELIMITER $$
CREATE PROCEDURE multi() BEGIN
SELECT 1;
SELECT id FROM t2 LIMIT 40;
SET @a=4;
SELECT 2;
END$$
DELIMITER ;
mysql> CALL multi();
If a Galera node has a nonpositive priority, the node will never be chosen
as the master. This gives the user more control over how the master is
chosen.
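A configuration sketch of how this can be used: with the settings below,
server2 has a nonpositive priority and would never be chosen as the
master. The `priority` server parameter follows the description above,
while `use_priority` on the monitor is an assumption:

[server1]
type=server
address=192.168.0.11
port=3306
priority=1

[server2]
type=server
address=192.168.0.12
port=3306
priority=0

[Galera Monitor]
type=monitor
module=galeramon
servers=server1,server2
use_priority=true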
The luafilter didn't use a format string with dcb_printf, which could
lead to unexpected results if the returned string contained printf
special characters.
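The fix for this class of bug is to pass the untrusted string as an
argument to a constant format string; a sketch of the pattern, with
the surrounding luafilter code assumed:

/* Vulnerable: if lua_str contains e.g. "%s" or "%n", dcb_printf
 * interprets it as a conversion specifier. */
dcb_printf(dcb, lua_str);

/* Fixed: the string is passed as data, not as a format string. */
dcb_printf(dcb, "%s", lua_str);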
When a persistent connection is taken from the pool, the state is reset
with a COM_CHANGE_USER on the next write. This allows reuse of persistent
connections without having to worry about the state of the MySQL session.
The MAXROWS_DISCARDING_RESPONSE state is handled differently: the OK
packet is sent only after an EOF is seen in the reply, even when the
backend sends the result set in multiple packets.
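For context, the terminating EOF can be recognized from the packet
header alone; a minimal sketch of the check, independent of the
filter's actual code:

#include <stdbool.h>
#include <stdint.h>

/* A MySQL EOF packet has a payload of fewer than 9 bytes starting
 * with the 0xfe marker; the first three bytes of the 4-byte header
 * hold the payload length. */
bool packet_is_eof(const uint8_t* packet)
{
    uint32_t len = packet[0] | (packet[1] << 8) | ((uint32_t)packet[2] << 16);
    return len < 9 && packet[4] == 0xfe;
}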
First implementation of the maxrows filter.
Result sets with more than max_resultset_rows rows will be skipped
and an empty result set is returned to the client.
Not yet tested with multi_statements
[MaxRows]
type=filter
module=maxrows
max_resultset_rows=10
debug=15
Now that a filter can express that the transaction state is tracked,
the cache implementation can be simplified. We do not need to cater
for the case where a "too short" or a "too long" packet is delivered.
Further, since the autocommit mode and transaction state of the session
are tracked, the filter can cache data when it is safe to do so. In
practice, that means when either AUTOCOMMIT is ON and no explicit
transaction is active, or when a READ ONLY transaction is active,
irrespective of the autocommit mode.
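In code, the safety condition boils down to a predicate like the
following; this is a sketch of the rule stated above, not the cache
filter's actual API, and the names are assumptions:

#include <stdbool.h>

/* Illustrative session state, tracked as described above. */
typedef struct
{
    bool autocommit;    /* current autocommit mode */
    bool trx_active;    /* an explicit transaction is open */
    bool trx_read_only; /* the open transaction is READ ONLY */
} SESSION_STATE;

/* Caching is safe when autocommit is on and no explicit transaction
 * is active, or when the active transaction is READ ONLY,
 * irrespective of the autocommit mode. */
bool cache_is_safe(const SESSION_STATE* s)
{
    return (s->autocommit && !s->trx_active)
        || (s->trx_active && s->trx_read_only);
}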
In principle it would be possible to tentatively cache data during
a transaction, and if the transaction is committed successfully
flush the tentatively cached data to the actual cache, but that will
be for another day.
The transaction state only reflects explicitly started transactions.
Thus, by looking at the autocommit mode and the transaction state a
component can figure out whether the current statement will be committed
or not.
The transaction state must be updated after a buffer has been split
into buffers containing individual packets.
NOTE: The actual updating of the transaction state and the autocommit
mode is currently wrong, but will be updated in a subsequent change.
The authentication checks make sure that a user has all the required
grants to access the database. This prevents the creation of unnecessary
backend connections, reducing the overall load on the database.
Doing preliminary authentication in MaxScale enables the creation of more
informative error messages.
Some of the tests depended on a working installation where modules are all
located at the default paths. These tests now explicitly set the module
directory which fixes the immediate problem.
Disabled the starting of services in the service test as this will fail
with real modules. The dummy internal modules aren't built and should be
removed in a later commit. In general, it might be better to do
service-level testing outside the internal test suite.
The HTTPD protocol mistakenly assumed that the `authenticator` parameter
of a listener would be NULL if the default authenticator is used.
Recent changes modified it so that the value is never NULL;
`NullAuthDeny` is now used for protocols that do not implement the
auth_default entry point.
Common capabilities are now defined in routing.h, using bits 0-15.
Router capabilities are defined using bits 16-31 and filter
capabilities (should there ever be such) using bits 32-47.
So, to find out the capabilities of a service you only need to
OR the capabilities of the router and all filters together.
For instance, if a single filter needs statement-based routing, then
that is what is done.
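A sketch of the idea with illustrative types; only the bit ranges come
from the text above, the function and parameter names are assumptions:

#include <stdint.h>

/* Bits 0-15: common capabilities, bits 16-31: router capabilities,
 * bits 32-47: filter capabilities (should there ever be such). */
typedef uint64_t capability_t;

/* The capabilities of a service are the OR of the router's
 * capabilities and those of every filter in the chain; if a single
 * filter sets a bit, the combined value demands it. */
capability_t service_capabilities(capability_t router_caps,
                                  const capability_t* filter_caps,
                                  int n_filters)
{
    capability_t caps = router_caps;

    for (int i = 0; i < n_filters; i++)
    {
        caps |= filter_caps[i];
    }

    return caps;
}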
Doing the checksum matching after memory is allocated and all the work is
done is not very efficient. A simpler solution is to always replace the
users when we reload them.
Replacing the users every time the service users are reloaded will not
cause a degradation in performance because the previous implementation
already does all the extra work but then just discards it.
A faster solution would be to first query the server and request some
sort of checksum based on the result set that the users query would
create. Currently, this can be done inside a stored procedure but it is
not very convenient for the average user. Another option would be to
generate a long string with GROUP_CONCAT but it is highly likely that some
internal buffer limit is hit before the complete value is calculated.
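For illustration, the GROUP_CONCAT approach could look roughly like the
statement below; this is a sketch of the idea rather than anything the
code does, the column names vary between server versions, and a large
user table would overflow group_concat_max_len, which is the internal
buffer limit referred to above:

SELECT MD5(GROUP_CONCAT(user, host, password ORDER BY user, host))
FROM mysql.user;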