The testing framework extended the public struct, not the private
one. Also moved the internal Session class inside the mxs namespace to
prevent conflicts with the mock Session class used by the testing framework.
The Session class now contains all of the C++ objects that were previously
in the MXS_SESSION struct. It is also allocated with new but all
initialization is still done outside of the Session in session_alloc_body.
This commit will not compile on its own, as it is part of a set of commits
that make parts of the session private.
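A minimal sketch of the arrangement this describes; only the mxs::Session
name and session_alloc_body come from the commit itself, the members and
the allocation helper are illustrative:

    #include <deque>
    #include <new>
    #include <string>

    // The C-visible part of the session; the C++ members now live in
    // mxs::Session instead.
    struct MXS_SESSION
    {
        // plain-old-data fields ...
    };

    namespace mxs
    {
    // Placing the class in the mxs namespace avoids a clash with the
    // mock Session class used by the testing framework.
    class Session : public MXS_SESSION
    {
    public:
        std::deque<std::string> last_queries; // example C++ member
    };
    }

    // Initialization still happens outside the class; trivial stub here.
    static bool session_alloc_body(MXS_SESSION* session)
    {
        return session != nullptr;
    }

    MXS_SESSION* session_alloc()
    {
        mxs::Session* session = new (std::nothrow) mxs::Session;

        if (session && !session_alloc_body(session))
        {
            delete session;
            session = nullptr;
        }

        return session;
    }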
The service now uses a std::vector<SFilterDef> to store the filters it
uses. Most internal parts deal with SFilterDefs, but debugcmd.cc still
moves raw pointers around (this needs to be changed).
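A sketch of the assumed shape, treating SFilterDef as a shared-pointer
typedef (an assumption based on the 'S' prefix); the Service members are
illustrative:

    #include <memory>
    #include <string>
    #include <vector>

    struct FilterDef
    {
        std::string name; // name, module, parameters, ...
    };

    // Assumption: the 'S' prefix denotes a shared pointer.
    using SFilterDef = std::shared_ptr<FilterDef>;

    class Service
    {
    public:
        void set_filters(std::vector<SFilterDef> filters)
        {
            // shared ownership instead of raw pointers
            m_filters = std::move(filters);
        }

    private:
        std::vector<SFilterDef> m_filters;
    };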
Replaced the previous RESULTSET with the new implementation. As the new
ResultSet doesn't have a JSON streaming capability, the MaxInfo JSON
interface has been removed. This should not be a big problem as the REST
API offers the same information in a more secure and structured way.
The LocalClient micro-client required a reference to the session that was
valid at construction time. This is the reason why the previous
implementation used dcb_foreach to first gather the targets and then
execute queries on them. By replacing this reference with pointers to the
raw data it requires, we lift the requirement of the originating session
being alive at construction time.
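A sketch of the constructor change; the member types are hypothetical
stand-ins for the raw authentication and protocol data the micro-client
needs:

    #include <cstdint>
    #include <string>

    // Stand-ins for the raw data a LocalClient actually uses.
    struct MYSQL_session
    {
        std::string user;
        std::string db;
    };

    struct MySQLProtocol
    {
        uint32_t client_capabilities;
    };

    class LocalClient
    {
    public:
        // Before: LocalClient(MXS_SESSION* session) required the
        // originating session to be alive at construction time.
        // After: the raw data is copied, so no such requirement exists.
        LocalClient(const MYSQL_session& auth, const MySQLProtocol& proto)
            : m_auth(auth)
            , m_proto(proto)
        {
        }

    private:
        MYSQL_session m_auth;   // copied, not referenced
        MySQLProtocol m_proto;  // copied, not referenced
    };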
Now that the LocalClient no longer holds a reference to the session, the
killing of the connection does not have to be done on the same thread that
started the process. This prevents the deadlock that occurred when
concurrent dcb_foreach calls were made.
Replaced the unused dcb_foreach_parallel with a version of dcb_foreach
that allows iteration of DCBs local to this worker. The dcb_foreach_local
function is the basis upon which all DCB access outside of administrative
tasks should be built.
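A sketch of how the new iterator might be used; the callback signature is
assumed to mirror dcb_foreach, and the stub stands in for the real
implementation:

    struct DCB
    {
        DCB* next; // ...
    };

    // Stub standing in for the real implementation: visits only the DCBs
    // owned by the calling worker, so no cross-thread locking is needed.
    static void dcb_foreach_local(bool (*func)(DCB* dcb, void* data), void* data)
    {
        (void)func;
        (void)data; // the real version walks this worker's DCB list
    }

    // Example callback: count the DCBs local to this worker. Returning
    // true continues the iteration.
    static bool count_cb(DCB* dcb, void* data)
    {
        (void)dcb;
        ++*static_cast<int*>(data);
        return true;
    }

    int count_local_dcbs()
    {
        int count = 0;
        dcb_foreach_local(count_cb, &count);
        return count;
    }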
This change will introduce a regression in functionality: The client will
no longer receive an error if no connections match the KILL query
criteria. This is done to avoid having to synchronize the workers after
they have performed the killing of their own connections.
Now takes a structure that, if present, enables the query
classification caching and specifies the properties of the
cache.
For the time being, no actual properties are available yet.
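A sketch under assumed names; the structure is empty precisely because no
properties exist yet, and the setup function is a hypothetical stand-in:

    #include <cstdio>

    // Hypothetical: the mere presence of the structure enables the query
    // classification cache; properties will be added here later.
    struct QC_CACHE_PROPERTIES
    {
        // intentionally empty for now
    };

    // Hypothetical setup function: a null pointer disables caching.
    static bool qc_setup(const QC_CACHE_PROPERTIES* cache_properties)
    {
        printf("query classification caching: %s\n",
               cache_properties ? "enabled" : "disabled");
        return true;
    }

    void init()
    {
        QC_CACHE_PROPERTIES properties;
        qc_setup(&properties); // caching enabled
        qc_setup(nullptr);     // caching disabled
    }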
It is now possible to prevent the masking filter from rejecting
statements that use functions in conjunction with fields to be
masked. That way, the masking filter's blanket rejection can be
turned off and replaced with more detailed firewall rules.
The masking filter works only on the result-set. However, if
functions are used, the column names will not be available in
the result-set, and hence masking will not take place.
Now the statement is checked, and if functions are used in
conjunction with columns that should be masked, the statement
is rejected. Thus, functions can no longer be used to bypass
the masking. Preventing that was possible earlier as well, but
it required manually setting up the firewall filter.
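A sketch of the described check with hypothetical types; the real filter
gets the field and function information from the query classifier:

    #include <set>
    #include <string>
    #include <vector>

    // A statement reduced to the functions it uses and the columns each
    // function takes as arguments (hypothetical representation).
    struct FunctionUsage
    {
        std::string function;
        std::vector<std::string> columns;
    };

    // Reject the statement if any function is applied to a column that
    // should be masked: the column name would not appear in the
    // result-set, so the masking could otherwise be bypassed, e.g. with
    // SELECT CONCAT(ssn) FROM people.
    bool should_reject(const std::vector<FunctionUsage>& usages,
                       const std::set<std::string>& masked_columns)
    {
        for (const auto& usage : usages)
        {
            for (const auto& column : usage.columns)
            {
                if (masked_columns.count(column) != 0)
                {
                    return true;
                }
            }
        }

        return false;
    }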
Parameters that accept whitespace-only values need to have their default
values quoted if they contain only whitespace characters. In 2.2 the
qlafilter is the only module that did not do this.
Also change the following defaults:
- "selects": Was "verify_cacheable", is now "assume_cacheable"
- "cached_data": Was "shared", is now "thread_specific"
In case an array of cache rules is provided, we will only store
references to the objects in the array. Consequently, the reference
counts of the borrowed objects must be increased, and the reference
count of the array itself decreased.
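Assuming the Jansson JSON library, whose reference semantics match the
description, the bookkeeping looks roughly like this (json_array_get()
returns a borrowed reference):

    #include <jansson.h>

    #include <cstddef>
    #include <vector>

    // Keep references to the objects in a rules array. Each stored
    // element is incref'd because json_array_get() only borrows it; the
    // array itself is no longer needed afterwards, so its count is
    // decreased.
    std::vector<json_t*> take_rule_objects(json_t* array)
    {
        std::vector<json_t*> rules;

        for (size_t i = 0; i < json_array_size(array); ++i)
        {
            json_t* rule = json_array_get(array, i); // borrowed
            json_incref(rule);                       // now owned by the vector
            rules.push_back(rule);
        }

        json_decref(array); // drop our reference to the array itself
        return rules;
    }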
The only way to cleanly separate the maxutils library from the MaxScale
CMake project is to make it a standalone CMake project. With the help of
ExternalProject, it should be relatively easy to use.
The cache filter walks through the resultset in order to detect
when the resultset ends. That is, it reads each packet header as
it arrives.
In case the resultset is large, the cache will have to read several
packet headers, which it does using gwbuf_copy_data(). However, as that
was done using the first received GWBUF as the starting point, the
buffer chain was walked in gwbuf_copy_data() over and over and over
again, with a significant performance hit as a result.
Now we separately store the last buffer received and the starting
offset within it. That way there will be no buffer chain walking.
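A sketch of the bookkeeping with simplified types; gwbuf_copy_data() is
the real function, everything else here is illustrative:

    #include <cstddef>

    // Simplified stand-in for MaxScale's chained buffer.
    struct GWBUF
    {
        GWBUF* next;
        size_t length;
    };

    class ResultsetTracker
    {
    public:
        // Called for each buffer as it arrives.
        void append(GWBUF* buffer)
        {
            if (m_last == nullptr)
            {
                m_last = buffer;
                m_last_offset = 0;
            }
            // Packet headers are now read relative to m_last and
            // m_last_offset, e.g. gwbuf_copy_data(m_last, m_last_offset,
            // ...), instead of starting from the first received GWBUF,
            // so the chain is not walked from the beginning every time.
        }

        // Once a packet of `size` bytes has been consumed, advance the
        // starting point past it, moving to later buffers as needed.
        void consume(size_t size)
        {
            m_last_offset += size;

            while (m_last != nullptr && m_last_offset >= m_last->length)
            {
                m_last_offset -= m_last->length;
                m_last = m_last->next;
            }
        }

    private:
        GWBUF* m_last = nullptr;   // last buffer received
        size_t m_last_offset = 0;  // starting offset within m_last
    };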
As this is a common problem, GWBUF could cache the offset of the tail,
thus removing the performance penalty if you read from an offset that
happens to be in the tail. However, it's better to do that as a part
of a general overhaul of GWBUF.
The two operations return different types of results and need to be
treated differently in order for them to be handled correctly in 2.2.
This fixes the unexpected internal state errors that happened in all 2.2
versions due to a wrong assumption made by readwritesplit. This fix is not
necessary for newer versions as the LOAD DATA LOCAL INFILE processing is
done with a simpler, and more robust, method.
Fixed a single spot where an existing hint pointer was overwritten.
Removed gwbuf_add_hint() because it was adding hints at the opposite end
compared to the functions in hint.h. Added hint_splice() to replace it.
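A sketch of a tail-appending splice over the singly linked hint list;
only the hint_splice name comes from the commit, the HINT layout is
simplified:

    // Singly linked hint, as in hint.h (simplified).
    struct HINT
    {
        HINT* next;
        // type, data, value ...
    };

    // Append `hint` to the end of the list starting at `head` and return
    // the new head. This matches the direction used by the functions in
    // hint.h, unlike the removed gwbuf_add_hint(), and never overwrites
    // an existing hint pointer.
    HINT* hint_splice(HINT* head, HINT* hint)
    {
        if (head == nullptr)
        {
            return hint;
        }

        HINT* tail = head;

        while (tail->next)
        {
            tail = tail->next;
        }

        tail->next = hint;
        return head;
    }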
testrules.cc had a signed to unsigned comparison and it used lambda
functions (which are not supported in CentOS 6).
The keywords struct in hintparser.cc needed to be declared static in order
for it to compile.
By storing a link to the backend DCBs in the session object itself, we can
reach all related objects from the session. This removes the need to
iterate over all DCBs to find the set of related DCBs.
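A sketch of the idea with hypothetical member names:

    #include <set>

    struct DCB;

    class Session
    {
    public:
        // Backend DCBs register themselves with the session they serve,
        // so all related objects are reachable from the session itself.
        void link_backend_dcb(DCB* dcb)
        {
            m_backend_dcbs.insert(dcb);
        }

        void unlink_backend_dcb(DCB* dcb)
        {
            m_backend_dcbs.erase(dcb);
        }

        const std::set<DCB*>& backend_dcbs() const
        {
            return m_backend_dcbs;
        }

    private:
        std::set<DCB*> m_backend_dcbs;
    };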
Now using delayed_call rather than usleep. This caused a fair amount of
changes to the timing aspects (or delaying). Also some other small
changes: more configurability, and all durations are now in milliseconds.
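A sketch of the pattern with an assumed delayed_call signature; the real
mechanism lives in the worker, and the stub here only illustrates the
contract:

    #include <functional>

    // Assumed contract: run `cb` on the worker after `delay_ms`
    // milliseconds and keep re-scheduling it for as long as it returns
    // true. Trivial stub; the real version uses the worker's timer.
    static void delayed_call(int delay_ms, std::function<bool()> cb)
    {
        (void)delay_ms;
        cb();
    }

    void schedule_retry()
    {
        // Instead of blocking the thread with usleep(100000), re-check
        // every 100 ms until the work is done.
        delayed_call(100, []() {
            bool done = true; // attempt the operation here
            return !done;     // true => call again after another 100 ms
        });
    }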
Multi-statement SELECTs were properly detected and handled,
but e.g. multi-statement UPDATEs were not, with the result
that erroneous warnings were logged.
Now the responses are detected and handled properly.
The possibility to have multiple cache rules in a cache
configuration file is now handled throughout the cache
filter.
The major difference is that while you earlier asked the Cache
directly whether data should be stored to the cache and whether
data in the cache should be used, you now ask the Cache whether
data should be stored to the cache and, if so, get a CacheRules
object from which you subsequently ask whether data from the
cache should be used.
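A sketch of that flow with hypothetical signatures; only the Cache and
CacheRules names come from the text above:

    #include <memory>
    #include <string>

    class CacheRules
    {
    public:
        // Should data in the cache be used for this session/query?
        bool should_use(const std::string& user) const
        {
            (void)user;
            return true;
        }
    };

    class Cache
    {
    public:
        // Returns the CacheRules object whose store-rules matched, or
        // null if the data should not be stored at all.
        std::shared_ptr<CacheRules> should_store(const std::string& query)
        {
            (void)query;
            return m_rules; // first matching rules object
        }

    private:
        std::shared_ptr<CacheRules> m_rules = std::make_shared<CacheRules>();
    };

    void handle(Cache& cache)
    {
        if (auto rules = cache.should_store("SELECT * FROM t"))
        {
            if (rules->should_use("alice"))
            {
                // ... look the result up in the cache ...
            }
        }
    }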
It's now possible to have a rules file with an array of rule
objects, e.g.
[
    {
        store: [ ... ],
        use: [ ... ]
    },
    {
        store: [ ... ],
        use: [ ... ]
    }
]
This commit only contains the low-level modifications for
supporting that; the upper-level modifications are made in
another commit.