Since it's a cache, we do not need to retain the data from one
MaxScale invocation to the next. Consequently, we can turn off
write-ahead logging completely if we simply wipe the RocksDB
database at each startup. In addition, the location of the
cache directory can now be specified explicitly, so it can be
placed, for instance, on a RAM disk.
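As a rough illustration, wiping the database at startup and turning off
the WAL could look like this with the RocksDB C++ API (the path, the
function name and the error handling are illustrative, not the actual
cache code):

    #include <rocksdb/db.h>
    #include <rocksdb/options.h>

    rocksdb::DB* open_cache_db(const std::string& path)
    {
        rocksdb::Options options;
        options.create_if_missing = true;

        // The cache does not need to survive a restart, so wipe any
        // previous database before opening a fresh one.
        rocksdb::DestroyDB(path, options);

        rocksdb::DB* db = NULL;

        if (rocksdb::DB::Open(options, path, &db).ok())
        {
            // Writes can then skip the write-ahead log entirely.
            rocksdb::WriteOptions write_options;
            write_options.disableWAL = true;
            db->Put(write_options, "key", "value");
        }

        return db;
    }

With the cache directory placed on a RAM disk (for instance under
/dev/shm), no disk I/O is involved at all.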
The code that checked whether a server was already added to a service did
not check whether the server reference was active. This caused problems
when an old server was added back to a service that had once used it.
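As a sketch of the fixed check (the type and field names below are
illustrative, not the exact declarations in the source):

    struct SERVER;              // opaque server object

    struct SERVER_REF           // illustrative subset of a server reference
    {
        SERVER_REF* next;
        SERVER*     server;
        bool        active;     // false once the server has been removed
    };

    // A server counts as being in use by the service only if an active
    // reference to it exists; stale references left over from an earlier
    // removal are ignored.
    bool server_in_use(const SERVER_REF* refs, const SERVER* server)
    {
        for (const SERVER_REF* ref = refs; ref; ref = ref->next)
        {
            if (ref->active && ref->server == server)
            {
                return true;
            }
        }

        return false;
    }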
This commit adds a free() to null_auth_free_client_data, which plugs
the memory leak in maxinfo.
Also, this commit fixes some segfaults when multiple threads are
running status_row() or variable_row(). The functions use
statically allocated index variables, which often go out-of-bounds
in concurrent use. This fix changes the indexes to thread-specific
variables, which are allocated and deallocated per thread. This does
seem to slow the functions down somewhat.
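A minimal sketch of the thread-specific approach, assuming plain POSIX
thread-specific data (the variable and function names are illustrative):

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_key_t  index_key;
    static pthread_once_t index_once = PTHREAD_ONCE_INIT;

    static void create_index_key(void)
    {
        // free() is registered as the destructor, so the per-thread
        // index is released automatically when the thread exits.
        pthread_key_create(&index_key, free);
    }

    static int* get_thread_index(void)
    {
        pthread_once(&index_once, create_index_key);

        int* index = (int*)pthread_getspecific(index_key);

        if (index == NULL)
        {
            index = (int*)calloc(1, sizeof(int));
            pthread_setspecific(index_key, index);
        }

        return index;
    }

The extra key lookup and the per-thread allocation are where the slight
slowdown comes from.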
Sqlite3 performs some lazy initialization, during which it internally
parses some SQL statements of its own. Earlier there was detection code
for noticing that, but it was costly and error-prone.
Now, sqlite3 is forced to perform the initialization at startup so that
we no longer need any detection code.
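As a sketch, forcing the one-time setup at startup can be as simple as:

    #include <sqlite3.h>

    // Run sqlite3's lazy one-time setup now, so that no internally
    // generated statements show up later during classification.
    bool init_sqlite(void)
    {
        return sqlite3_initialize() == SQLITE_OK;
    }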
With 2.0.1 or earlier, if a statement contains a trailing NULL,
qc_sqlite.c incorrectly assumes that the statement is not the one to
be classified, which indirectly results in a crash.
The code for the utility is now stored in a separate file. This also
removes the need to include testing headers from other directories.
Also added a function to reload rules that uses the newly modified rule
parsing mechanism. This can be used later on to update the rules at
runtime.
The maxadmin interface to add servers to objects now allows a maximum of
11 objects to be listed. This will make it simpler to add a server to both
a monitor and a service in one command.
The object names in the dbfwfilter weren't very descriptive and were not
that easy to comprehend. The same goes for the functions, which had
somewhat cryptic names.
If a master once had slaves and is in the stale status, it will not retain
this status after a restart. Without storing on-disk information, the
stale master status cannot be deduced by looking at the master
alone. Because of this, the user should be able to manually enable the
stale master status.
This fix allows the gap detection and the writing of an IGNORABLE event
only if master_event_state == BLR_EVENT_DONE.
Note: the hole is not created if the event is larger than 16 MB.
With the use_sql_variables_in=master option, readwritesplit should route
all user variable modifications and reads with user variables to the
master.
Previously, the modification of user variables was grouped into generic
system variables which caused all modifications to system variables to go
to the master only. The router requires a finer-grained distinction
between normal system variable modifications and user variable
modifications.
With the improvements to the query classifier, readwritesplit now properly
routes all user variable operations to the master and other system
variable modifications to all servers.
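For example, with a readwritesplit service configured along these lines
(service and server names are illustrative):

    [RW-Split-Router]
    type=service
    router=readwritesplit
    servers=server1,server2
    user=maxuser
    password=maxpwd
    use_sql_variables_in=master

a statement such as SET @id := 5 and a subsequent SELECT @id are both
sent to the master, while modifications of ordinary system variables are
still routed to all servers.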
The listen() backlog is now set to INT_MAX which should guarantee that the
internal limit is always higher than the system limit. This means that the
length of the queue always follows /proc/sys/net/ipv4/tcp_max_syn_backlog.
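As a sketch (error handling elided):

    #include <limits.h>
    #include <sys/socket.h>

    bool start_listening(int so)
    {
        // Request the largest possible backlog; the kernel silently caps
        // it to its own limit, so the effective queue length is governed
        // by the system configuration rather than a hard-coded value.
        return listen(so, INT_MAX) == 0;
    }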
With the advent of qc_get_field_info, columns can now be matched.
However, there is still some indeterminacy caused by the table
information not containing contextual information (exactly where
the table is used).
Further, suppose table X contains the column a and table Z contains
the column b; then, given a statement like
SELECT a, b from X, Z;
we cannot know whether a is in X or Z, or b is in X or Z, without being
aware of the schema, which we currently are not.
Consequently, as long as MaxScale is not aware of the schema, some
heuristics must be applied. For instance, if exactly one table is
referred to, then we can assume that columns that are not explicitly
qualified are from that table.
The rule tests are currently rather rudimentary and need to be
expanded.
When a connection to the master fails, readwritesplit should always treat
it the same way. Previously, if the connection to the master was lost
but the server had not lost its master status, the failure was treated
like a slave server failure.
When a server is created in server_create, it sets the port to the default
of 3306 if no explicit port is defined. The code that called this function
still expected a minimum of three arguments: name, address and port.
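For instance (the server names and addresses are illustrative, and the
exact maxadmin argument list may differ between versions):

    maxadmin create server db-serv-1 192.168.0.101
    maxadmin create server db-serv-2 192.168.0.102 3307

The first command falls back to the default port of 3306.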
When a server is added to or removed from a service, a supplementary
configuration file is created. This allows MaxScale to survive restarts and
unexpected downtime even if runtime changes to the servers of a service
have been made.
With these changes, it is possible to start MaxScale without any servers,
create servers, add the created servers to services and monitors and
restart MaxScale without losing the runtime configuration changes.
When a server is added to a monitor, a supplementary configuration file
is generated to persist this information. This will allow dynamic
modifications to server lists which will survive restarts and unexpected
downtime.
The monitor will only add new servers to its list of monitored
servers. This prevents duplicate entries in the list and makes it safe to
persist all used servers to the supplementary configuration file
instead of only the ones that are not listed in the main configuration.
Servers created at runtime can now be configured to use SSL. The
configuration is only possible if the server is not in use.
The `alter server` command in maxadmin now takes a list of `key=value`
strings. This allows the user to define multiple alter operations with one
command.
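For example (the server name and the parameter values are illustrative;
the keys correspond to the parameters of a server section in the
configuration file):

    maxadmin alter server db-serv-1 address=192.168.0.101 port=3307
    maxadmin alter server db-serv-1 ssl=required ssl_cert=/certs/server-cert.pem ssl_key=/certs/server-key.pem ssl_ca_cert=/certs/ca.pem

The second command only succeeds if the server is not in use, as noted
above.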
The functions allow simple operations on configuration context
objects. This makes it easier to understand what the code does and allows
reuse of the configuration processing code.
The backend references now use a common closing function so that all
variables are reset to proper states. The stored queries weren't always
freed and they would leak memory if left open.
Together with the field names, now qc_get_field_info also returns
field usage information, that is, in what context a field is used.
This allows, for instance, the cache to take action if a particular
field is selected (SELECT a FROM ...), but not if it is used in a
GROUP BY clause (...GROUP BY a).
This caused significant modifications to qc_mysqlembedded, which
earlier did not walk the parse tree but instead looped over a list
of st_select_lex instances that, the name notwithstanding, also
contain information about things other than SELECTs. The former
approach lost all contextual information, so it was not possible
to know where a particular field was used.
Now the parse tree is walked, which means that the contextual
information is known, and thus the field usage can be updated.
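A rough sketch of how a module could act on this information; the
structure and flag names below are assumptions made for illustration,
not the exact declarations of the query classifier API:

    #include <stdint.h>
    #include <string.h>

    // Assumed shape of one entry returned by qc_get_field_info(): the
    // field name plus a bitmask telling in what context it appears.
    struct FIELD_INFO
    {
        const char* column;
        uint32_t    usage;
    };

    // Hypothetical usage bits.
    static const uint32_t USED_IN_SELECT   = 0x01;
    static const uint32_t USED_IN_GROUP_BY = 0x02;

    // Act only when the named field appears in the select list; a field
    // that is referenced only in a GROUP BY clause does not match.
    bool field_is_selected(const FIELD_INFO& info, const char* column)
    {
        return (info.usage & USED_IN_SELECT) != 0
            && strcmp(info.column, column) == 0;
    }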