It is now possible to control the soft and hard TTL of the cache
on a per-session basis. That is, it is possible to use different
TTLs for different SELECTs.
As the TTL is checked at lookup time, it need not be hardwired
when the storage instance is created. With this change it is
possible to introduce @maxscale.cache.(soft|hard)_ttl user
variables with which a client can control what TTL should be
applied to a particular kind of data, as requested by MXS-1475.
If the entry in the cache cannot be used because the hard TTL
has kicked in, we fetch the data and update the cache
irrespective of the value of @maxscale.cache.populate. That way
an entry that was once put into the cache will remain in the
cache (as long as there is space).
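Roughly, the lookup-time decision works as in the following
sketch (CacheEntry, LookupResult and check_entry are illustrative
names, not the actual implementation):

    #include <chrono>

    enum class LookupResult {Hit, StaleHit, Miss};

    struct CacheEntry
    {
        std::chrono::steady_clock::time_point stored_at;
    };

    // Classify an entry using the TTLs that are in effect for this
    // session at the time of the lookup.
    LookupResult check_entry(const CacheEntry& entry,
                             std::chrono::seconds soft_ttl,
                             std::chrono::seconds hard_ttl)
    {
        auto age = std::chrono::steady_clock::now() - entry.stored_at;

        if (age < soft_ttl)
        {
            return LookupResult::Hit;      // Fresh, serve from the cache.
        }
        else if (age < hard_ttl)
        {
            return LookupResult::StaleHit; // Stale, but still usable.
        }
        else
        {
            // Hard TTL exceeded; the data is fetched and the entry
            // updated, irrespective of @maxscale.cache.populate.
            return LookupResult::Miss;
        }
    }
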
The earlier @maxscale.cache.enabled has now been replaced with
@maxscale.cache.populate and @maxscale.cache.use, which provide
more flexibility.
With the former it is possible to control under what
circumstances the cache is populated, and with the latter when it
is used. Together they can be used to implement completely
client-driven caching.
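For example, a client could drive the caching entirely by itself
along the following lines; this is a sketch using the MariaDB C
API, with host, port, credentials and the queried table as
placeholders:

    #include <mysql.h>
    #include <stdio.h>

    // Execute a statement and discard any result set.
    static void run(MYSQL* conn, const char* sql)
    {
        if (mysql_query(conn, sql) == 0)
        {
            MYSQL_RES* res = mysql_store_result(conn);

            if (res)
            {
                mysql_free_result(res);
            }
        }
    }

    int main()
    {
        MYSQL* conn = mysql_init(nullptr);

        if (!mysql_real_connect(conn, "maxscale-host", "user", "password",
                                "test", 4006, nullptr, 0))
        {
            fprintf(stderr, "Connect failed: %s\n", mysql_error(conn));
            return 1;
        }

        // Populate the cache with fresh results, but do not read from it.
        run(conn, "SET @maxscale.cache.populate=true");
        run(conn, "SET @maxscale.cache.use=false");
        run(conn, "SELECT * FROM t");

        // From now on, read from the cache but do not repopulate it.
        run(conn, "SET @maxscale.cache.populate=false");
        run(conn, "SET @maxscale.cache.use=true");
        run(conn, "SELECT * FROM t"); // Served from the cache, subject to TTLs.

        mysql_close(conn);
        return 0;
    }
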
With 'enabled' it can be specified whether the cache should initially
be enabled or disabled. Useful as it is now possible to enable/disable
the cache dynamically.
With the changes in this commit it is possible to add and remove
MaxScale-specific user variables. A MaxScale-specific user
variable is a user variable that is interpreted by MaxScale and
that potentially changes the behaviour of MaxScale.
MaxScale-specific user variables are of the format
"@maxscale.x.y", where "@maxscale" is a mandatory prefix, x is a
scope identifying the component that handles the variable, and y
is the component-specific variable. So, a variable might be
called e.g. "@maxscale.cache.enabled". The scope "core" is
reserved (although not yet enforced) for MaxScale itself.
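A minimal sketch of splitting such a name into its parts;
parse_maxscale_variable is a hypothetical helper, not the actual
implementation:

    #include <string>

    bool parse_maxscale_variable(const std::string& name,
                                 std::string* scope,
                                 std::string* variable)
    {
        const char PREFIX[] = "@maxscale.";

        if (name.compare(0, sizeof(PREFIX) - 1, PREFIX) != 0)
        {
            return false;  // Not a MaxScale-specific variable.
        }

        size_t dot = name.find('.', sizeof(PREFIX) - 1);

        if (dot == std::string::npos)
        {
            return false;  // No "x.y" part after the prefix.
        }

        *scope = name.substr(sizeof(PREFIX) - 1, dot - (sizeof(PREFIX) - 1));
        *variable = name.substr(dot + 1);

        // E.g. "@maxscale.cache.enabled" => scope "cache",
        // variable "enabled".
        return true;
    }
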
The idea is that although MaxScale catches these variables, they
are still passed through to the server. The benefit of this is
that we do not need to detect e.g. "SELECT @maxscale.cache.enabled"
ourselves, but can let the result be returned by the server.
The interpretation of a provided value is handled by the component that
adds the variable. In a subsequent commit, it will be possible for a
component to reject a value, which will then cause an error to be
returned to the client.
There are 3 new functions:
- session_add_variable(), with which a variable is added,
- session_remove_variable(), with which a variable is removed, and
- session_set_variable_value().
The first two are to be called by components, the last one by the
protocol that catches the "set @maxscale..." statements.
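Assuming a registration and callback interface along the
following lines (the exact signatures are declared in session.h;
CacheSessionData and enabled_handler are made up for this
sketch), a component might use the API like this:

    #include <strings.h>
    #include <maxscale/session.h>

    // Hypothetical per-session state owned by the component.
    struct CacheSessionData
    {
        bool enabled = true;
    };

    // Invoked when the client executes "SET @maxscale.cache.enabled=...".
    // Returning NULL accepts the value.
    static char* enabled_handler(void* context, const char* name,
                                 const char* value_begin,
                                 const char* value_end)
    {
        CacheSessionData* data = static_cast<CacheSessionData*>(context);
        size_t len = value_end - value_begin;

        data->enabled = (len == 4) && (strncasecmp(value_begin, "true", 4) == 0);
        return NULL;
    }

    // At session creation, the component registers its variable.
    void register_variables(MXS_SESSION* session, CacheSessionData* data)
    {
        session_add_variable(session, "@maxscale.cache.enabled",
                             enabled_handler, data);
    }
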
Given the value in a statement like "SET SQL_MODE=...", this
parser is capable of deducing whether SQL_MODE is set to DEFAULT,
to ORACLE or to something else.
SetParser is capable of returning the exact variable and value
of a "SET X=Y" statement, in the cases where X belongs to a
specific set of variables; currently "SQL_MODE" and
"@MAXSCALE...".
The actual value of the SET statement also needs to be parsed in
the case of SQL_MODE, but it becomes unnecessarily convoluted if
that information should somehow be conditionally expressible in a
return value.
So, the value will be parsed separately.
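A sketch of that separate value parsing (the names are
illustrative, not the actual classes):

    #include <string.h>
    #include <strings.h>

    enum class SqlMode {DEFAULT, ORACLE, SOMETHING_ELSE};

    // Interpret the raw value of "SET SQL_MODE=...". ORACLE may appear
    // alone or as part of a list, e.g. "ORACLE,PIPES_AS_CONCAT".
    SqlMode parse_sql_mode_value(const char* value)
    {
        if (strcasecmp(value, "DEFAULT") == 0)
        {
            return SqlMode::DEFAULT;
        }
        else if (strcasestr(value, "ORACLE") != NULL)
        {
            return SqlMode::ORACLE;
        }

        return SqlMode::SOMETHING_ELSE;
    }
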
If two services referred to the same filter instance, the filter
would be deleted twice at MaxScale shutdown, with a crash as the
result.
Now, when the services are deleted, we collect the unique filter
instances and delete them only after all services have been
deleted.
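In essence (Service and Filter are simplified placeholders):

    #include <set>
    #include <vector>

    struct Filter
    {
    };

    struct Service
    {
        std::vector<Filter*> filters;
    };

    void delete_services(std::vector<Service*>& services)
    {
        std::set<Filter*> filters;

        for (Service* service : services)
        {
            // A filter shared by several services ends up in the set once.
            filters.insert(service->filters.begin(), service->filters.end());
            delete service;
        }

        // Each unique filter instance is deleted exactly once.
        for (Filter* filter : filters)
        {
            delete filter;
        }
    }
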
When the master changes mid-session, the temporary tables are inevitably
lost. This could be avoided by routing temporary table creation to all
servers.
If the connection to the backend where a read-only transaction
is being performed fails, the Backend object for it should be
closed. This fixes a debug assertion in
readwritesplit.cc:check_and_log_backend_state which asserts that
the failed connection must not be in use after the error handling
is done.
Also reordered the failing assertion and the accompanying error message so
that the error is logged first.
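That is, roughly (Backend here is a placeholder; MXS_ERROR and
ss_dassert are the usual logging and debug-assert macros):

    #include <maxscale/debug.h>
    #include <maxscale/log_manager.h>

    // Placeholder for the readwritesplit backend abstraction.
    struct Backend
    {
        bool in_use() const;
        const char* name() const;
    };

    void check_backend(const Backend& backend)
    {
        if (backend.in_use())
        {
            // Log first, so that the message reaches the log even if
            // the debug assertion below aborts the process.
            MXS_ERROR("Backend %s is still in use after error handling.",
                      backend.name());
        }
        ss_dassert(!backend.in_use());
    }
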
Earlier, if a service had multiple listeners you would have had

  MaxScale> show dbusers MyService
  User names: alice@% ...
  User names: bob@% ...

That is, no indication of which listener is reporting what. With
this commit the result will be

  User names (MyListener1): alice@% ...
  User names (MyListener2): bob@% ...

Further, the diagnostics function of an authenticator is now expected
to write the list of users to the provided DCB, without performing any
other formatting. The formatting (printing "User names" and appending
a line-feed) is now handled by the handler for the MaxAdmin command
"show dbusers".
This commit introduces changes that fix the relay master detection that
was broken by the merge from 2.1 into 2.2 by commit
1ecd791887994209eb29e56e1271f8c407cd0cdf.
In 2.2, the master server ID is used to detect whether a slave is actually
replicating from a master. The value is still displayed even if the slave
is not actively replicating from a master. The commit in 2.1 causes this
value to be stored unconditionally if it is available. By checking the
value of Last_IO_Errno and comparing it to a list of known error codes, we
know whether the slave is replicating properly.
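In outline, the check amounts to something like the following;
the error codes listed are illustrative examples, not necessarily
the exact list used by the monitor:

    // A slave counts as replicating properly only when Last_IO_Errno
    // is zero or a known, transient network error.
    bool slave_replicating_properly(int last_io_errno)
    {
        static const int acceptable_errors[] =
        {
            0,    // No error.
            1158, // ER_NET_READ_ERROR
            1159, // ER_NET_READ_INTERRUPTED
            1160, // ER_NET_ERROR_ON_WRITE
            1161, // ER_NET_WRITE_INTERRUPTED
        };

        for (int code : acceptable_errors)
        {
            if (last_io_errno == code)
            {
                return true;
            }
        }

        return false;
    }
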
The slave detection in 2.2 correctly identifies a broken slave with a
stopped IO thread. Due to this, the test case must be modified to check
that the relay master is not a slave if the IO thread is stopped.
If a local address has been specified, then all connections
created using mxs_mysql_real_connect() will use that same local
address as well.
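For illustration, the MariaDB/MySQL C API supports this through
the MYSQL_OPT_BIND option; a connect wrapper might apply it
roughly as follows (this is a sketch, not the actual
mxs_mysql_real_connect):

    #include <mysql.h>

    MYSQL* connect_with_local_address(const char* host, int port,
                                      const char* user, const char* password,
                                      const char* local_address)
    {
        MYSQL* conn = mysql_init(nullptr);

        if (conn && local_address)
        {
            // All outgoing connections are made from this local address.
            mysql_options(conn, MYSQL_OPT_BIND, local_address);
        }

        if (conn && !mysql_real_connect(conn, host, user, password,
                                        nullptr, port, nullptr, 0))
        {
            mysql_close(conn);
            conn = nullptr;
        }

        return conn;
    }
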
A system test has not been created as our VMs do not have more
than one usable IP address. Locally the feature has been verified
to work as expected.
When MaxScale thinks that the master has failed, it tries to verify it by
seeing if the slave server is receiving events. There was a missing IO
thread status check in the slave_receiving_events function which caused
the failover to wait until the verification timed out.
The relay master detection logic also lacked a check for the slave SQL
thread status. The code should check the state of the SQL thread to
determine whether the server is actually a functional slave to a master.
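Both fixes boil down to checking the two replication threads,
roughly as below; SlaveStatus is an illustrative struct mirroring
the relevant "SHOW SLAVE STATUS" fields:

    #include <string>

    struct SlaveStatus
    {
        std::string slave_io_running;   // "Yes", "No" or "Connecting"
        std::string slave_sql_running;  // "Yes" or "No"
    };

    bool is_functional_slave(const SlaveStatus& status)
    {
        // The IO thread check was missing from slave_receiving_events,
        // which made failover wait for the verification timeout.
        bool io_running = (status.slave_io_running == "Yes");

        // The SQL thread check was missing from the relay master
        // detection.
        bool sql_running = (status.slave_sql_running == "Yes");

        return io_running && sql_running;
    }
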