By logging the connection ID for each created connection, failures can be
traced back from the backend server all the way up to the client
application.
If a transaction replay has to be executed twice due to a failure of the
original candidate master, the query queue could contain replayed
queries. The replayed queries would be placed into the queue if a new
connection needed to be created before the transaction replay could start.
Backported the changes that convert the query queue in readwritesplit into
a proper queue. This change combines both
5e3198f8313b7bb33df386eb35986bfae1db94a3 and
6042a53cb31046b1100743723567906c5d8208e2 into one commit.
By passing the raw password deeper into the authentication code, it can be
used to verify that the user can access certain systems. Right now, this is
not required by the simple salted password comparison done in MaxScale.
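For reference, the salted comparison only needs the client's 20-byte token
and the stored double-SHA1 hash, never the plaintext. A rough sketch of the
check using OpenSSL (illustrative names, not the actual MaxScale code):

    #include <openssl/sha.h>
    #include <cstring>
    #include <cstdint>

    // mysql_native_password check: the server stores SHA1(SHA1(password))
    // and the client sends token = SHA1(password) XOR SHA1(scramble || stored).
    bool check_native_password(const uint8_t* token,    // 20 bytes from client
                               const uint8_t* scramble, // 20-byte handshake nonce
                               const uint8_t* stored)   // 20-byte SHA1(SHA1(pw))
    {
        // step1 = SHA1(scramble || stored)
        uint8_t step1[SHA_DIGEST_LENGTH];
        SHA_CTX ctx;
        SHA1_Init(&ctx);
        SHA1_Update(&ctx, scramble, SHA_DIGEST_LENGTH);
        SHA1_Update(&ctx, stored, SHA_DIGEST_LENGTH);
        SHA1_Final(step1, &ctx);

        // XORing the token with step1 recovers SHA1(password).
        uint8_t sha1_pw[SHA_DIGEST_LENGTH];
        for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
        {
            sha1_pw[i] = token[i] ^ step1[i];
        }

        // If the password was correct, hashing it once more matches the stored hash.
        uint8_t candidate[SHA_DIGEST_LENGTH];
        SHA1(sha1_pw, SHA_DIGEST_LENGTH, candidate);
        return memcmp(candidate, stored, SHA_DIGEST_LENGTH) == 0;
    }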
When debugging you occasionally want to find out what a packet
contains (e.g. one delivered to clientReply). Manually looking into
the packet works, but is tedious. With this function you can, once
execution has been stopped in GDB, examine a protocol packet.
E.g.
Thread 3 "maxscale" hit Breakpoint 6, RWSplitSession::clientReply (this=0x7fffe401ed20, writebuf=0x7fffe401e910, backend_dcb=0x7fffe401dbe0) at /home/wikman/MariaDB/MaxScale/server/modules/routing/readwritesplit/rwsplitsession.cc:567
567 DCB* client_dcb = backend_dcb->session->client_dcb;
(gdb) p dbg_decode_response(writebuf)
$30 = 0x7ffff0d40d54 "Packet no: 1, Payload len: 44, Command : ERR, Code: 1146, Message : Table 'test.blahasdf' doesn't exist"
The load_persisted_configs parameter now controls whether persisted
runtime changes are loaded on startup. The changes are still generated, as
persisting the current state of MaxScale makes problem analysis easier.
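For example, loading of the persisted files can be disabled in the main
configuration (the files are still written, just not read back on startup):

    [maxscale]
    load_persisted_configs=false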
By storing the queries in the query queue and routing them once the
transaction replay is done, we prevent two problems (a sketch of the
pattern follows the list):
* Multiple transaction replays would overwrite the m_interrupted_query
buffer that was used to store any queries executed during the
transaction replay.
* Incorrect ordering of queries when the query queue is not empty and a
new query is executed during transaction replay.
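A minimal sketch of the pattern, with hypothetical names standing in for
the actual RWSplitSession code:

    #include <deque>

    struct GWBUF;  // MaxScale's buffer type, opaque here

    class ReplaySession
    {
    public:
        // Called for every query received from the client.
        void route_query(GWBUF* query)
        {
            if (m_replaying)
            {
                // Queue instead of overwriting a single interrupted-query
                // buffer: this survives repeated replays and keeps ordering.
                m_query_queue.push_back(query);
            }
            else
            {
                route_to_backend(query);
            }
        }

        // Called once the transaction replay has completed.
        void on_replay_done()
        {
            m_replaying = false;

            // Drain in FIFO order so queries run in the order they arrived.
            while (!m_query_queue.empty())
            {
                route_to_backend(m_query_queue.front());
                m_query_queue.pop_front();
            }
        }

    private:
        void route_to_backend(GWBUF* /*query*/)
        {
            // Hand the buffer to the routing layer (elided).
        }

        bool m_replaying = false;
        std::deque<GWBUF*> m_query_queue;
    };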
If the session started with no master but one later became available, the
code would unconditionally use the master's name in a log message when a
transaction was started.
Allowing transactions to the master to end even if the server is in
maintenance mode makes it possible to terminate connections at a known
point. This helps prevent interrupted transactions, which can reduce the
errors that are visible to the clients.
The new `force=yes` option closes all connections to the server that is
being put into maintenance mode. The connections are closed immediately,
without allowing results to return.
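Assuming the MaxCtrl counterpart of the REST API parameter, the option
would be used along these lines:

    maxctrl set server server1 maintenance --force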
In this context, master should be interpreted as "can be read
and written to".
Marking them as master requires fewer changes in RWS to make it
usable with a Clustrix cluster.
If set to true and if any of the other blocking-related parameters
is true, then a statement that cannot be fully parsed will be blocked.
The default is true.
That URL will now return information about the statements in
the query classifier cache. The information is collected using
the same map in a serial manner from all routing workers (each
of which has its own cache). Since all caches will contain the
same statements, collecting the information serially means that
the overall memory consumption will be lower than it would be if
the information was collected in parallel.
With the addition of SO_REUSEPORT support, it is no longer possible to
rely on the network stack to prevent multiple listeners from listening on
the same port. Without explicitly checking the ports, it would be possible
for two listeners from two different services to listen on the same port,
in which case the service would be chosen almost at random.
If SO_REUSEPORT is available and the kernel supports it, listeners will
now listen on separate file descriptors. This removes the need for
cross-worker communication during normal operation, which should make
MaxScale scale better.
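Per-worker sockets of this kind are created with the standard SO_REUSEPORT
socket option, roughly like this (an illustrative sketch, not the actual
listener code):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>

    // Creates a listening socket that can share its port with sockets
    // opened the same way by other workers; the kernel load-balances
    // incoming connections between them. Returns -1 on error.
    int create_shared_listener(uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1)
        {
            return -1;
        }

        int one = 1;
        // Each worker binds to the same port; without SO_REUSEPORT the
        // second bind() would fail with EADDRINUSE.
        if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) == -1)
        {
            close(fd);
            return -1;
        }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == -1
            || listen(fd, SOMAXCONN) == -1)
        {
            close(fd);
            return -1;
        }

        return fd;
    }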
By storing the file descriptor inside a worker-local variable, it is
possible to handle both unique file descriptors (created with
SO_REUSEPORT) and shared file descriptors with the same code. The way in
which the file descriptor is stored in the rworker_local object determines
the way the listener behaves.
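A toy version of the idea, using a simplified stand-in for rworker_local
that is keyed by thread instead of by routing worker:

    #include <thread>
    #include <mutex>
    #include <unordered_map>

    // Each thread (worker) sees its own copy of the stored value.
    template <typename T>
    class WorkerLocal
    {
    public:
        T& operator*()
        {
            std::lock_guard<std::mutex> guard(m_lock);
            return m_values[std::this_thread::get_id()];
        }

    private:
        std::mutex m_lock;
        std::unordered_map<std::thread::id, T> m_values;
    };

    // With SO_REUSEPORT each worker stores its own fd in the variable;
    // without it every worker stores the same shared fd. Code that reads
    // *listener_fd does not need to know which mode is in use.
    WorkerLocal<int> listener_fd;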
The function allocated a constant-sized chunk of memory for all messages,
which was excessive as well as potentially dangerous when used with large
strings.
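The usual remedy, and presumably the shape of the fix here, is to measure
the formatted length first and allocate exactly that much:

    #include <cstdarg>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Formats a message into a buffer sized to fit, instead of into a
    // fixed-size chunk that wastes memory and truncates long strings.
    std::string format_message(const char* format, ...)
    {
        va_list args;
        va_start(args, format);

        // First pass: a null buffer makes vsnprintf return the length needed.
        va_list args_copy;
        va_copy(args_copy, args);
        int len = vsnprintf(nullptr, 0, format, args_copy);
        va_end(args_copy);

        std::string result;
        if (len > 0)
        {
            // Second pass: format into an exactly-sized buffer.
            std::vector<char> buf(len + 1);
            vsnprintf(buf.data(), buf.size(), format, args);
            result.assign(buf.data(), len);
        }

        va_end(args);
        return result;
    }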
The creation of a monitor from JSON relied on the non-JSON version for the
addition of default parameters, but it checked the validity of the
parameters before the defaults were added. Whenever parameters are
checked, the default parameters should be present.
If the last server was removed, the parameter would be rejected due to it
being empty. To remove the parameter,
MonitorManager::reconfigure_monitor should be used. Also fixed the
unnecessary serialization after a failure to remove a server from a
monitor, as well as the fact that some errors were logged instead of being
written to the caller of the command.
If a server with zero weight was chosen as the only candidate, it was
possible that the starting minimum value was smaller than the server
score. This would mean that a candidate wouldn't be chosen if the score
was too high. To prevent this, the scores are capped to a value smaller
than the initial minimum score.
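An illustrative version of the capping, with hypothetical names and a
lower-score-wins selection:

    #include <algorithm>
    #include <limits>
    #include <vector>

    struct Server
    {
        double base_score;
        double weight;
    };

    Server* choose_candidate(std::vector<Server>& servers)
    {
        constexpr double START_MIN = std::numeric_limits<double>::max();
        // Cap strictly below the starting minimum so a lone zero-weight
        // candidate can still win the comparison.
        constexpr double CAP = START_MIN * 0.5;

        double best = START_MIN;
        Server* candidate = nullptr;

        for (Server& s : servers)
        {
            // A zero weight inflates the score towards infinity.
            double score = s.weight > 0
                ? s.base_score / s.weight
                : std::numeric_limits<double>::infinity();
            score = std::min(score, CAP);  // never exceeds the initial minimum

            if (score < best)
            {
                best = score;
                candidate = &s;
            }
        }

        return candidate;
    }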
This way the state is encapsulated in the object and the required changes
are done in one place. This makes the code reusable across all functions,
making it easier to implement better monitor alteration code.
Queries such as SHOW TABLES FROM db1 are now routed to the backend that
has db1. This gives the correct result as long as db1 is not sharded
across multiple backends.