By default, only the essentials of a statement - its type and its
operation - are collected; fields, tables, functions and databases are
collected only if they are explicitly asked for.
However, a statement is parsed at most twice; if a second parsing is
needed, then all information is collected.
If it is known up front that some particular information will be needed,
then qc_parse() can be called explicitly to ensure it is collected
during the first parsing.
It is now possible to specify what information the caller is interested
in. With this, the cost of collecting information during query parsing
that nobody is interested in can be avoided.
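As an illustration, a caller that knows it will need everything could ask
for it up front. A minimal sketch, assuming a collect flag along the lines
of QC_COLLECT_ALL (the flag name is an assumption based on the description
above; check the query classifier header for the actual names):

/* stmt is a GWBUF* containing the statement to classify. */
qc_parse(stmt, QC_COLLECT_ALL); /* parse once, collect everything */
/* Subsequent accessors, e.g. qc_get_table_names(), no longer trigger
 * a second parsing. */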
The match data needs to be unique for each thread, so for the time
being it is created whenever it is needed. A more performant (although
possibly by a negligible amount) solution would be to have separate
match data for each thread, but that will have to wait for 2.2.
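Assuming the match data in question is PCRE2's, the current approach
amounts to the following sketch, with `re` an already compiled
pcre2_code*:

pcre2_match_data* md = pcre2_match_data_create_from_pattern(re, NULL);
int rc = pcre2_match(re, (PCRE2_SPTR)sql, sql_len, 0, 0, md, NULL);
/* ... use the results ... */
pcre2_match_data_free(md);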
- The alternative implementation for compilers without the GCC intrinsics
has been removed. Let's worry about the absence of the intrinsics
if and when that becomes relevant.
- Spinlock release is now performed using __sync_lock_release, as per
svoj's advice; see the sketch after this list.
- The while-looping on the variable used as the lock has been removed,
so it no longer needs to be volatile.
- Boolean function returns bool.
- Size of profiling counters increased.
- Risk of division by zero removed.
- Documentation moved from implementation to header.
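A minimal sketch of the resulting acquire/release pattern (illustrative
names, not the actual MaxScale code):

static void spin_acquire(int* lock)
{
    /* __sync_lock_test_and_set() atomically writes 1 and returns the
     * previous value; looping until that value is 0 means we took the
     * lock. The intrinsic performs the read itself, so the lock
     * variable does not need to be volatile. */
    while (__sync_lock_test_and_set(lock, 1))
    {
    }
}

static void spin_release(int* lock)
{
    /* Writes 0 with release semantics. */
    __sync_lock_release(lock);
}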
When MaxScale is being started and the users are loaded, the MySQL
authenticator should not load the database users for internal services
abstracted as servers.
The loading of users at startup for internal services is avoided because
the startup is done in a single thread context and the internal services
have not yet been started.
The delayed loading of users will cause the authentication to fail when
the first client connects. This triggers a reloading of the users, and the
second attempt at authentication will succeed. All of this is hidden from
the end user.
If a server points to a local MaxScale listener, the permission checks for
that server are skipped. This allows permission checks to be used with a
mix of external servers and internal services.
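As a hypothetical configuration sketch, an internal service abstracted as
a server is simply a server whose address points back at a local MaxScale
listener (the name and port are examples):

[Internal-Service]
type=server
address=127.0.0.1
port=4006
protocol=MySQLBackend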
- Selects are picked out using custom parsing, so if a statement is
anything else but a SELECT, the cache will never cause the statement
to be parsed.
- The setting of the cache parameter `selects` is taken into account.
If it is `assume_cacheable`, then not even a SELECT will be parsed;
see the configuration sketch below.
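A configuration sketch of a cache filter using the parameter (the storage
module is just an example):

[Cache]
type=filter
module=cache
storage=storage_inmemory
selects=assume_cacheable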
The original approach was designed for RocksDB, where it is beneficial
to keep the keys of related items close to each other.
However, as RocksDB is no longer the primary focus, the approach only
adds the cost of digging out the table names.
The key is a 64-bit integer, but crc32 only gives us a 32-bit one.
We create another 32-bit value by running crc32 over the same SQL,
using the first CRC value as the seed. I think that further reduces
the chance of collisions:
#include <zlib.h>

uint32_t crc0 = crc32(0, Z_NULL, 0);
uint32_t crc1;
uint32_t crc2;

crc1 = crc32(crc0, (const Bytef*)"codding", 7); /* => 1774765869 */
crc2 = crc32(crc1, (const Bytef*)"codding", 7); /* => 1409592046 */

crc1 = crc32(crc0, (const Bytef*)"gnu", 3);     /* => 1774765869 */
crc2 = crc32(crc1, (const Bytef*)"gnu", 3);     /* => 1213798908 */
Note that the first value is the same, but the second is not.
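The two halves could then be combined into the 64-bit key, for instance
like this (the exact combination is an assumption, not necessarily what
the cache actually does):

uint64_t key = ((uint64_t)crc2 << 32) | crc1;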
When a standalone master server is detected, it should receive the stale
status to prevent it from losing the master status if another server is
started and allow_cluster_recovery is enabled.
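For reference, a monitor configuration sketch with the parameter enabled
(server and credential names are hypothetical):

[MySQL-Monitor]
type=monitor
module=mysqlmon
servers=server1,server2
user=maxuser
password=maxpwd
allow_cluster_recovery=true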
The json_stringn function should be used instead of json_string to
allow both null characters and non-null-terminated strings to be
embedded in the JSON values.
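A minimal sketch using jansson's API (the data is just an example):

#include <jansson.h>

/* Three bytes with an embedded null. json_string() requires a
 * null-terminated string and would stop at the first null;
 * json_stringn() takes an explicit length instead. */
const char data[] = {'a', '\0', 'b'};
json_t* value = json_stringn(data, sizeof(data));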
The CDC example Python programs now decode the data as UTF-8 instead of
ASCII.
libaio is not required by MaxScale, so the check for it is no longer
needed.
Updated documentation to match the current requirements to build MaxScale.
For the general case, regex matching simply will not do. The regex
becomes so hairy that it turns write-only, i.e. unmaintainable. Regex
matching is also slower than a handwritten custom parser. For instance,
SELECT/*comment*/ * FROM t is a valid SELECT that a naive pattern such
as ^\s*SELECT\s will never match.
The BoundaryMatcher is not updated, as it is likely that it will be
removed altogether. A regex that accepts comments in all relevant places
becomes so hairy that it is unmaintainable. It seems that the only
working solution would be to first strip all comments and then apply
the regex.