The two operations return different types of results and must be
treated differently for 2.2 to handle them correctly.
This fixes the unexpected internal state errors that occurred in all 2.2
versions due to a wrong assumption made by readwritesplit. The fix is not
needed in newer versions, as they process LOAD DATA LOCAL INFILE with a
simpler and more robust method.
Fixed a single spot where an existing hint pointer was overwritten. Removed
gwbuf_add_hint(), because it added hints at the opposite end compared to the
functions in hint.h. Added hint_splice() to replace it.
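A minimal sketch of the intended behavior, assuming HINT is the singly
linked list declared in hint.h; the actual signature and member names
may differ:

    /* Append 'hint' at the tail of the list headed by 'head', i.e. at
     * the same end at which the other hint.h functions operate. */
    HINT* hint_splice(HINT* head, HINT* hint)
    {
        if (head == NULL)
        {
            return hint;
        }

        HINT* p = head;
        while (p->next)
        {
            p = p->next;
        }

        p->next = hint;
        return head;
    }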
testrules.cc had a signed-to-unsigned comparison and used lambda
functions, which the default compiler on CentOS 6 does not support.
The keywords struct in hintparser.cc needed to be declared static in order
for it to compile.
By storing a link to the backend DCBs in the session object itself, we can
reach all related objects from the session. This removes the need to
iterate over all DCBs to find the set of related DCBs.
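A rough sketch of the idea, with hypothetical member names:

    #include <set>

    // The session owns links to its backend DCBs, so all related
    // objects are reachable from the session without scanning every DCB.
    class Session
    {
    public:
        void link_backend_dcb(DCB* dcb)
        {
            m_backend_dcbs.insert(dcb);
        }

        void unlink_backend_dcb(DCB* dcb)
        {
            m_backend_dcbs.erase(dcb);
        }

        const std::set<DCB*>& backend_dcbs() const
        {
            return m_backend_dcbs;
        }

    private:
        std::set<DCB*> m_backend_dcbs;
    };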
Using delayed_call() rather than usleep(). This caused a fair number of
changes to the timing (or delaying) aspects. Also some other small changes:
more configuration, and all durations are now in milliseconds.
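For illustration, a sketch of the pattern, assuming a Worker::delayed_call
API where the delay is given in milliseconds and the callback's return
value decides whether the call repeats:

    // Instead of blocking the thread with usleep(), schedule the retry.
    m_worker->delayed_call(100, [this](Worker::Call::action_t action) {
        if (action == Worker::Call::EXECUTE)
        {
            retry();      // hypothetical helper
        }

        return false;     // false => one-shot, true => call again
    });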
Multi-statement SELECTs were properly detected and handled,
but e.g. multi-statement UPDATEs were not, with the result
that erroneous warnings were logged.
Now the responses are detected and handled properly.
The possibility of having multiple cache rules in a cache
configuration file is now handled throughout the cache
filter.
The major difference is that where you earlier queried the
Cache directly both whether data should be stored to the
cache and whether data in the cache should be used, you
now query the Cache only whether data should be stored to
the cache and, if so, get a CacheRules object from which you
subsequently query whether data from the cache should
be used, as sketched below.
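A sketch of the new flow, with approximate names:

    // Ask the Cache whether the result of this query should be stored.
    // If so, the returned CacheRules object answers the second
    // question: whether data already in the cache may be used.
    const CacheRules* pRules = cache.should_store(zDefault_db, pQuery);

    if (pRules && pRules->should_use(pSession))
    {
        // ... attempt to serve the response from the cache ...
    }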
It's now possible to have a rules file with an array of rule
objects, e.g.

    [
        {
            "store": [ ... ],
            "use": [ ... ]
        },
        {
            "store": [ ... ],
            "use": [ ... ]
        }
    ]
This commit only contains the low-level modifications for
supporting that; the upper-level modifications are made in
another commit.
The `error` variable was never used, so it has been removed. Also added a
more convenient typedef for both the downstream and upstream functions, and
updated the filter API version.
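Something along these lines (name and signature hypothetical):

    /* One shared typedef for the downstream and upstream routing entry
     * points, instead of spelling the function type out twice. */
    typedef int32_t (*MXS_FILTER_ROUTE_FUNC)(MXS_FILTER* instance,
                                             MXS_FILTER_SESSION* session,
                                             GWBUF* buffer);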
The tasks themselves now control whether they are executed again. Compared
to the old system, one-shot tasks now return `false` and repeating tasks
return `true`, as sketched below.
Letting the housekeeper remove the tasks makes the code simpler and
removes the possibility of the task being removed while it is being
executed. It does introduce a deadlock possibility if a housekeeper
function is called inside a housekeeper task.
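A sketch of the new contract (task signature approximate):

    // The return value of the task decides whether it runs again.
    bool oneshot_task(void* pData)
    {
        do_the_work(pData);   // hypothetical
        return false;         // not scheduled again; housekeeper removes it
    }

    bool repeating_task(void* pData)
    {
        do_the_work(pData);
        return true;          // keep executing at every interval
    }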
- session_set_response() made const-correct
- set_response() function added to mxs::FilterSession; it calls
  session_set_response().
- The Cache uses set_response() for delivering the cache result
  to the client.
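Roughly, in a filter session (names approximate, the cache lookup is a
hypothetical helper):

    // Instead of routing the query onwards, hand a ready-made response
    // to the session; it will be delivered to the client.
    int CacheFilterSession::routeQuery(GWBUF* pPacket)
    {
        GWBUF* pResponse = lookup_cached_response(pPacket); // hypothetical

        if (pResponse)
        {
            set_response(pResponse); // ends up in session_set_response()
            return 1;
        }

        return FilterSession::routeQuery(pPacket);
    }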
If a table/database rule has been provided and the resultset
does not contain table/database names, then we consider it a match
(subject to the column matching, obviously).
Otherwise a rule like

    {
        "replace": {
            "table": "info",
            "column": "email"
        },
        "with": {
            "fill": "*"
        }
    }

could be bypassed with a statement like

    SELECT * FROM info UNION SELECT * FROM info

as the resultset in that case will not indicate that the column email
is from the table info, which it will if the statement is

    SELECT * FROM info;
Exposing the canonicalization code in the luafilter allows it to be used
on the Lua side of things. This should enable some pretty cool stuff to be
done with it.
Now it is possible to control the soft and hard TTL of the
cache on a per-session basis. That is, it is possible to use
different TTLs for different SELECTs.
As the TTL is checked at lookup time, it need not be hardwired
when the storage instance is created. With this change it is
possible to introduce @maxscale.cache.(soft|hard)_ttl user
variables with which a client can control what TTL should be
applied to a particular kind of data, as requested by
MXS-1475.
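For example, a client could then request different freshness guarantees
per query (statements illustrative, TTL values in seconds):

    SET @maxscale.cache.soft_ttl=600;  -- after this, refresh the data if possible
    SET @maxscale.cache.hard_ttl=610;  -- after this, never use the cached data
    SELECT * FROM stock_prices;        -- hypothetical table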
In case the entry in the cache cannot be used because the hard
TTL has kicked in, we fetch the data and update the cache
irrespective of the value of @maxscale.cache.populate. That way
an entry that once was put in the cache will remain in the cache
(as long as there is space).
The earlier @maxscale.cache.enabled has now been replaced with
@maxscale.cache.populate and @maxscale.cache.use, which provide
more flexibility.
With the former it is possible to control under what circumstances
the cache is populated, and with the latter when it is used.
Together they can be used for completely client-driven caching,
as illustrated below.
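For instance, a client could drive the caching entirely itself along
these lines (statement sequence illustrative):

    SET @maxscale.cache.populate=true;   -- store results of subsequent SELECTs
    SELECT * FROM t;                     -- resultset goes into the cache
    SET @maxscale.cache.populate=false;
    SET @maxscale.cache.use=true;        -- serve subsequent SELECTs from the cache
    SELECT * FROM t;                     -- now answered from the cache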
With 'enabled' it can be specified whether the cache should initially
be enabled or disabled. This is useful now that it is possible to
enable and disable the cache dynamically.
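In the cache filter section of the MaxScale configuration, the parameter
would be used along these lines (section name illustrative):

    [Cache]
    type=filter
    module=cache
    enabled=false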