The TARBALL variable controls whether a special tar.gz package is built
when packages are generated. This package has a different directory
structure than the RPM/DEB packages.
If RPM/DEB packages are built, tar.gz packages are not built. This makes
RPM/DEB generation faster and allows tarballs to be built separately with
a proper directory structure.
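As an illustrative invocation (assuming TARBALL is a plain boolean CMake
variable; the exact flags may differ):

    cmake .. -DTARBALL=Y   # enable the special tar.gz package
    make package           # builds the tarball layout instead of RPM/DEB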
Session command responses with multiple packets could be spread across
multiple, non-contiguous buffers. If a buffer contained a complete packet
and some extra data but the data was not contiguous, the debug assertion
in gwbuf_clone_portion would fail. With release builds, it would
eventually cause out-of-bounds memory access when the response was sent
to the client.
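For illustration, a simplified chained buffer along these lines (a
hypothetical sketch, not the real GWBUF definition) shows why length
checks must walk the whole chain rather than a single link:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical chained buffer; field names illustrative. */
    struct chainbuf
    {
        uint8_t         *start; /* first byte of data              */
        uint8_t         *end;   /* one past the last byte          */
        struct chainbuf *next;  /* continuation, NULL if last link */
    };

    /* A packet's length must be summed over the whole chain; looking
     * only at the first link undercounts when the packet continues,
     * together with extra data, in the next buffer. */
    size_t chain_length(const struct chainbuf *b)
    {
        size_t len = 0;
        for (; b != NULL; b = b->next)
        {
            len += (size_t)(b->end - b->start);
        }
        return len;
    }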
When a backend caused an error that should be sent to the client, the
backend reference was closed but the waiting results state was not
cleared. This caused a debug assertion to be hit.
The active operation counters are now cleared every time a backend
reference is taken out of use. This should fix a few debug assertions
that were hit in tests.
The `detect_stale_slave` functionality used to work only when MaxScale
knew that a master server had existed and that replication was working at
some point in time. This might be a "safe" way to handle staleness of the
data, but in practice it is preferable to always allow slaves to be used
for reads.
This change adds the missing functionality to the monitor by assigning
slave status to all servers which are configured as replication slaves
when no master can be found.
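A minimal sketch of that fallback, with hypothetical types and flag names
rather than the monitor's real ones:

    #include <stddef.h>

    typedef struct server
    {
        struct server *next;
        int            configured_slave; /* replicates in the config */
        unsigned       status;           /* bitmask of state flags   */
    } server;

    #define SLAVE_BIT 0x01 /* stand-in for the real slave status flag */

    /* With no master in sight, still mark the configured replication
     * slaves as usable slaves. */
    void assign_slave_status(server *list, const server *master)
    {
        if (master == NULL)
        {
            for (server *s = list; s != NULL; s = s->next)
            {
                if (s->configured_slave)
                {
                    s->status |= SLAVE_BIT;
                }
            }
        }
    }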
The new member variable that was added to the SERVER type should be
removed in 2.1, where server_info offers the same functionality without
"polluting" the SERVER type.
The CHK_LOGFILE macro first asserts that the values being checked are
valid and then proceeds to evaluate them again. The result of this second
evaluation was not assigned to anything, which caused GCC 6.1.1 to
produce a warning.
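A minimal reduction of the pattern (a hypothetical macro, not the real
CHK_LOGFILE definition): the value of the second evaluation is discarded,
which trips GCC's -Wunused-value; casting the result to void, or
assigning it, is one way to silence the warning.

    #include <assert.h>

    /* Evaluates the check once inside assert() and again outside. */
    #define CHK_VALID(x) do {                             \
            assert((x) != 0);                             \
            (x) != 0; /* value unused: GCC 6.1.1 warns */ \
        } while (0)

    /* Fixed variant: the discarded evaluation is made explicit. */
    #define CHK_VALID_OK(x) do {                          \
            assert((x) != 0);                             \
            (void)((x) != 0);                             \
        } while (0)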
When a client executes commands which do not return results (for example,
inserting BLOB data via the C API), readwritesplit expects a result for
each sent packet. This is somewhat of a false assumption, but it clears
itself up when the session is closed normally. If the session is closed
due to an error, the counter is not decremented.
Each session should only increase the number of active operations on a
server by one. By checking that the session is not already executing an
operation before incrementing the active operation count, the runtime
operation count stays correct.
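A hedged sketch of that invariant, using hypothetical counters rather
than readwritesplit's actual structures:

    /* Illustrative per-server and per-session state. */
    typedef struct { int active_ops; } srv_stats;
    typedef struct { int executing;  } ses_state;

    /* Increment only on the idle -> executing transition, so each
     * session contributes at most one active operation. */
    void op_started(ses_state *ses, srv_stats *srv)
    {
        if (!ses->executing)
        {
            ses->executing = 1;
            srv->active_ops++;
        }
    }

    void op_finished(ses_state *ses, srv_stats *srv)
    {
        if (ses->executing)
        {
            ses->executing = 0;
            srv->active_ops--;
        }
    }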
If systemd restarts MaxScale when an abnormal exit is detected, the
abnormal exit is likely to happen again. This leads to a loop which
causes multiple maxscale processes to run on the same machine. One
example of this behavior is when systemd times MaxScale out while it is
starting.
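As a hedged illustration of how the loop arises (unit file contents
illustrative, not the shipped service file):

    [Service]
    ExecStart=/usr/bin/maxscale
    # Restart=on-failure would re-spawn MaxScale after every abnormal
    # exit, including startup timeouts, producing the restart loop
    # described above.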
If the installation directory is something other than /usr,
then the directories
<install-dir>/var/cache/maxscale
<install-dir>/var/log/maxscale
<install-dir>/var/run/maxscale
will be created at installation time.
The document is now split into sections by module type. Added
documentation on the limitations of multiple monitors monitoring the same
servers and of filters not receiving complete packets when used with
readconnroute.
Although claimed in the output of "--help", the long option
"--execdir" was not supported. Support for it has now been added.
The long options have now also been sorted in the same order
as the options are displayed by the help, to make it easy to
check that everything is there.
Further, the description column of the output of --help has
been aligned.
In the configuration section of services and monitors, the
password to be used can now be specified using 'password'
in addition to 'passwd'.
If both are provided, then the value of 'passwd' is used. That
way there cannot be any surprises, should someone for whatever
reason currently have a 'password' entry in their config file
(in 1.4.3 an invalid parameter will not prevent MaxScale from
starting).
In the next release 'passwd' can be deprecated, and in the
release after that it can be removed.
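For example, a service section can now be written as follows (names and
values illustrative):

    [Read-Service]
    type=service
    router=readconnroute
    servers=server1
    user=maxuser
    # 'password' is accepted as an alias for 'passwd'
    password=maxpwd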
The service start retry mechanism mistakenly returned an error when a
service failed to start but a retry was queued. This caused MaxScale to
stop whenever a service failed to start.
dcb_count_by_usage did not iterate the list properly and would get stuck
on the first inactive DCB. Since this function is only called by maxinfo,
maxinfo was the only component affected.
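A hedged reduction of the bug, with a hypothetical DCB list rather than
the real implementation:

    #include <stddef.h>

    typedef struct dcb
    {
        struct dcb *next;
        int         active;
    } dcb;

    /* Broken: the cursor advances only for active entries, so the
     * scan spins forever on the first inactive DCB. */
    int count_active_broken(dcb *ptr)
    {
        int count = 0;
        while (ptr != NULL)
        {
            if (ptr->active)
            {
                count++;
                ptr = ptr->next;
            }
        }
        return count;
    }

    /* Fixed: advance the cursor unconditionally. */
    int count_active(dcb *ptr)
    {
        int count = 0;
        for (; ptr != NULL; ptr = ptr->next)
        {
            if (ptr->active)
            {
                count++;
            }
        }
        return count;
    }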
It's now possible to use both a Unix domain socket and host/port
when connecting with MaxAdmin to MaxScale.
By default MaxAdmin will attempt to use the default Unix domain
socket, but if host and/or port has been specified, then an inet
socket will be used.
maxscaled will authenticate the connection attempt differently
depending on whether a Unix domain socket is used or not. If
a Unix domain socket is used, then the Linux user id will be
used for authorization; otherwise the 1.4.3 username/password
handshake will be performed.
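For example (option letters as in 1.4.3's inet mode; treat the exact
values as illustrative):

    maxadmin list servers        # local: default Unix domain socket
    maxadmin -h mxs-host -P 6603 -u admin -p mariadb list servers
                                 # remote: inet socket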
adminusers has now been extended so that there is one set of
functions for local users (connecting locally over a Unix socket)
and one set of functions for remote users (connecting locally
or remotely over an inet socket).
The local users are stored in the new .../maxscale-users and the
remote users in .../passwd. That is, the old users of a 1.4
installation will work as such in 2.0.
One difference is that there will be *no* default remote user.
That is, remote users will always have to be added manually using
a local user.
The implementation is shared; the local and remote alternatives
use common functions to which the hashtable and filename to be
used are forwarded.
The commands "[add|remove] user" now behave exactly like they did
in 1.4.3, and all existing users work out of the box.
In addition, there are now the commands "[enable|disable] account",
with which Linux accounts can be enabled for MaxAdmin usage.
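For example (the account name is illustrative):

    maxadmin enable account alice    # let Linux user 'alice' use MaxAdmin
    maxadmin disable account alice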
The change in readwritesplit routing priorities, where hints have the
highest priority, gives users more options to control how readwritesplit
acts.
For example, this allows read-only stored procedures to be routed to
slaves by adding a hint to the query:
CALL myproc(); -- maxscale route to slave
The readwritesplit documentation also warns the user not to use routing
hints unless they can be absolutely sure that no damage will be done.
Local admins are the ones accessing MaxScale on the same host
over a Unix domain socket, and who are strongly identified;
optional remote admins are the ones accessing MaxScale over a
TCP socket (potentially over the network), and who are weakly
identified.
These are completely separate and a different set of functions
will be needed for managing them. This initial change merely
renames the functions.
With this change, if two master servers both have equal depths but
different weights, the one with the higher weight is used. If the depths
and weights are equal, the first master listed in the configuration is
used.
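A hedged sketch of the selection order, with hypothetical field names:
lower replication depth wins, then higher weight, then configuration
order.

    typedef struct
    {
        int depth;  /* replication depth below the top-level master */
        int weight; /* configured server weight                     */
    } candidate;

    /* 'a' appears earlier in the configuration, so ties keep 'a'. */
    const candidate *prefer(const candidate *a, const candidate *b)
    {
        if (b->depth != a->depth)
        {
            return (b->depth < a->depth) ? b : a;
        }
        if (b->weight != a->weight)
        {
            return (b->weight > a->weight) ? b : a;
        }
        return a; /* equal depth and weight: first one listed wins */
    }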
When a server goes into maintenance, the current state is set to
Maintenance and the previous state is left unmodified. The function which
checks for state changes uses the current and previous values and simply
compares them. Since servers in maintenance mode aren't monitored, the
function always returned true when servers were in maintenance mode.
By ignoring state changes to or from maintenance, the state change
function now works correctly. With this fix, users can safely put servers
into maintenance without having to worry about the scripts being
executed. This also allows the scripts themselves to put servers into
maintenance.
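For instance, a monitor script can itself put a server into maintenance
(server name illustrative):

    maxadmin set server db-server-1 maintenance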
Currently, if the address/port information of a maxscaled protocol
listener is not updated to a socket when MaxScale is upgraded to 2.0,
maxscaled would not start, with the effect of a user losing maxadmin
after an upgrade.
After this change, if address/port information is detected, a warning
is logged and the default socket path is used. That way, maxadmin will
still be usable after an upgrade, even if the address/port information
is not updated.
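An updated listener section would look roughly like this (section,
service and socket values illustrative):

    [MaxAdmin-Listener]
    type=listener
    service=MaxAdmin-Service
    protocol=maxscaled
    socket=default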
The CDC_users.md document was not linked to and was in the wrong place. It
should reside in the protocol directory since it relates to the CDC protocol.