Replaced router_options with configuration parameters in the createInstance
router entry point. The same needs to be done for the filter API, as barely
any filters use the feature.
Some routers (binlogrouter) still support router_options, but using it is
deprecated. Support had to be retained because its use was not deprecated
in 2.2.
Readconnroute can now be configured at runtime. The changes to
configuration processing allow the removal of router_options now that the
parameters are parsed inside the router.
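A minimal sketch of the new approach, assuming the 2.2-era module API;
the ROUTER_INSTANCE fields and the accepted option values are
illustrative, not the actual readconnroute code:

    #include <maxscale/router.h>
    #include <maxscale/config.h>
    #include <maxscale/server.h>
    #include <maxscale/alloc.h>

    typedef struct
    {
        uint64_t bitmask; /* required server status bits */
    } ROUTER_INSTANCE;

    static const MXS_ENUM_VALUE option_values[] =
    {
        {"master",  SERVER_MASTER},
        {"slave",   SERVER_SLAVE},
        {"running", SERVER_RUNNING},
        {NULL, 0}
    };

    static MXS_ROUTER* createInstance(SERVICE* service, MXS_CONFIG_PARAMETER* params)
    {
        ROUTER_INSTANCE* inst = MXS_CALLOC(1, sizeof(ROUTER_INSTANCE));

        if (inst)
        {
            /* Declared parameters are validated by the core before
             * createInstance is called, so the router only reads the
             * parsed value instead of tokenizing a free-form string. */
            inst->bitmask = config_get_enum(params, "router_options", option_values);
        }

        return (MXS_ROUTER*)inst;
    }

Because the values come from declared parameters, the same code path also
serves runtime reconfiguration.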
The code that selects the candidate backend always returned the root
master if the server bitmask contained the master bit. This should only be
done if the master bit is the only bit in the bitmask; when other bits are
set, the normal candidate selection code should be used.
Also added a query to the expanded test case to make sure the connection
actually works.
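A sketch of the corrected selection logic described above; the function
and field names are illustrative, and select_candidate stands in for the
normal selection code:

    static SERVER_REF* pick_backend(ROUTER_INSTANCE* inst, SERVER_REF* root_master)
    {
        if (inst->bitmask == SERVER_MASTER)
        {
            /* The master bit is the only requirement: short-circuit
             * to the root master. */
            return root_master;
        }

        /* Other bits are set: use the normal candidate selection. */
        return select_candidate(inst);
    }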
Up until 2.1.12, if the configuration file said 'router_options=slave',
the master was used if there were no slaves at session creation time.
That broke in 2.1.13 as a side effect of MXS-1516, which checks at
routing time whether the server initially selected as the master still is
the master.
Now the required server status is stored separately for each
session, so that if the master was chosen, even though we have
'router_options=slave', we can turn on the SERVER_MASTER bit.
That allows us to handle the case correctly in connection_is_valid().
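A rough sketch of the per-session handling; the struct and helper names
are assumptions, not the actual readconnroute code:

    static MXS_ROUTER_SESSION* newSession(MXS_ROUTER* instance, MXS_SESSION* session)
    {
        ROUTER_INSTANCE* inst = (ROUTER_INSTANCE*)instance;
        ROUTER_CLIENT_SES* rses = MXS_CALLOC(1, sizeof(ROUTER_CLIENT_SES));

        if (rses)
        {
            /* Each session gets its own copy of the required status bits. */
            rses->bitmask = inst->bitmask;
            SERVER_REF* candidate = select_candidate(inst);

            if ((rses->bitmask & SERVER_SLAVE) && SERVER_IS_MASTER(candidate->server))
            {
                /* The master was chosen as a fallback for
                 * 'router_options=slave': accept SERVER_MASTER for this
                 * session so that connection_is_valid() does not later
                 * reject the connection. */
                rses->bitmask |= SERVER_MASTER;
            }

            rses->backend = candidate;
        }

        return (MXS_ROUTER_SESSION*)rses;
    }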
The individual servers were missing a statistic that would give an
estimated query count. As there is no simple way to count queries for all
modules, counting the number of routed protocol packets is a suitable
substitute.
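For example (a sketch; the counter location is illustrative and a GCC
atomic builtin stands in for MaxScale's own atomic helpers):

    #include <stdint.h>

    /* Called once for every protocol packet routed to a server. */
    static inline void count_routed_packet(uint64_t* packet_count)
    {
        __atomic_fetch_add(packet_count, 1, __ATOMIC_RELAXED);
    }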
The use of `router_options=master,slave` was not working as expected. This
was mostly caused by the master bit checks using a bitwise AND instead of
an equality comparison. In addition, the master would not be considered a
valid candidate if both slaves and masters were available.
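The difference, sketched (not the actual readconnroute code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Broken: any overlapping bit passes, so e.g. a server with both
     * master and slave bits set satisfies a master-only check. */
    static bool matches_broken(uint64_t status, uint64_t bitmask)
    {
        return (status & bitmask) != 0;
    }

    /* Fixed: all required bits must be set. */
    static bool matches_fixed(uint64_t status, uint64_t bitmask)
    {
        return (status & bitmask) == bitmask;
    }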
If a server is removed from a service, readconnroute no longer verifies
that the server it is connected to is still the same root master. This
fixes the MXS-1418 regression.
A subset of the checks done at connection creation time need to be done at
query routing time. This guarantees that the connection is closed if the
server no longer qualifies as a valid candidate.
Added a test case that checks that a change in the replication topology
correctly breaks the connection.
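A sketch of the routing-time check; the session fields and the write call
are assumptions based on the 2.2-era DCB API:

    static int routeQuery(MXS_ROUTER* instance, MXS_ROUTER_SESSION* router_session,
                          GWBUF* queue)
    {
        ROUTER_INSTANCE* inst = (ROUTER_INSTANCE*)instance;
        ROUTER_CLIENT_SES* rses = (ROUTER_CLIENT_SES*)router_session;

        if (!connection_is_valid(inst, rses))
        {
            /* The backend no longer qualifies as a valid candidate,
             * e.g. it lost the status bits the session requires, so
             * fail the query and let the session be closed. */
            gwbuf_free(queue);
            return 0;
        }

        return rses->backend_dcb->func.write(rses->backend_dcb, queue);
    }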
The removal of a server from a service is intended to affect only new
sessions.
Added a test that checks that the connections are kept open even if the
server is removed from the service.
The enums exposed by the connector are not intended to be used by the
users of the library. The fact that the protocol and other modules used
them was in violation of how the library is intended to be used.
Adding an internal mapping into MaxScale also removes some of the
dependencies that the core has on the connector.
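A sketch of such a mapping; the type name is illustrative, while the
numeric values are the MySQL protocol command bytes, which are fixed by
the protocol itself:

    typedef enum mxs_mysql_cmd
    {
        MXS_COM_QUIT         = 0x01,
        MXS_COM_INIT_DB      = 0x02,
        MXS_COM_QUERY        = 0x03,
        MXS_COM_PING         = 0x0e,
        MXS_COM_STMT_PREPARE = 0x16,
        MXS_COM_STMT_EXECUTE = 0x17
    } mxs_mysql_cmd_t;

Modules then compare against the internal constants and never need the
connector's headers for them.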
The `user`, `password`, `version_string` and `weightby` values should be
allocated as a part of the service structure. This allows them to be
modified at runtime without having to worry about memory allocation
problems.
Although this removes the problem of reallocation, it still does not make
the updating of the strings thread-safe. This can cause invalid values to
be read from the service strings.
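Sketched (buffer sizes illustrative):

    #include <stdio.h>

    typedef struct service
    {
        char user[256];
        char password[256];
        char version_string[256];
        char weightby[256];
        /* ... rest of the service ... */
    } SERVICE;

    /* No free/alloc needed on update, but the copy is not atomic: a
     * concurrent reader can still observe a torn value. */
    static void service_set_user(SERVICE* service, const char* user)
    {
        snprintf(service->user, sizeof(service->user), "%s", user);
    }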
All routers except the binlogrouter now fully implement the JSON
diagnostic entry point. The binlogrouter needs to be handled in a separate
commit as it produces a large amount of diagnostic output.
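A minimal sketch of such an entry point, assuming the jansson-based API
used in 2.2; the keys and statistics fields are illustrative:

    #include <jansson.h>

    static json_t* diagnostics_json(const MXS_ROUTER* instance)
    {
        const ROUTER_INSTANCE* inst = (const ROUTER_INSTANCE*)instance;
        json_t* rval = json_object();

        json_object_set_new(rval, "connections", json_integer(inst->stats.n_sessions));
        json_object_set_new(rval, "queries", json_integer(inst->stats.n_queries));

        return rval;
    }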
The static capabilities declared in getCapabilities allow certain
capabilities to be queried before instances are created. The intended use
of this capability is to remove the need for the `is_internal_service`
function.
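Sketched (the exact RCAP_TYPE_* flags are illustrative):

    static uint64_t getCapabilities(MXS_ROUTER* instance)
    {
        /* instance may be NULL: the capabilities are static and can be
         * queried before any instance exists. */
        (void)instance;
        return RCAP_TYPE_STMT_INPUT | RCAP_TYPE_RESULTSET_OUTPUT;
    }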
The routers no longer need to track the number of errors each DCB
receives. This is now done by the protocol modules.
The type of the DCB no longer needs to be checked in the handleError
implementation as the function is only called when a backend DCB fails.
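A sketch, assuming the 2.2-era handleError signature; the session fields
are illustrative:

    static void handleError(MXS_ROUTER* instance, MXS_ROUTER_SESSION* router_session,
                            GWBUF* errmsgbuf, DCB* problem_dcb,
                            mxs_error_action_t action, bool* succp)
    {
        /* problem_dcb is always a backend DCB: no dcb_role check. */
        ROUTER_CLIENT_SES* rses = (ROUTER_CLIENT_SES*)router_session;
        dcb_close(problem_dcb);
        rses->backend_dcb = NULL;
        *succp = false; /* ask the core to close the session */
    }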
The highwater and lowwater callbacks were never registered for the client
DCBs in the binlogrouter.
The DCB hangup callbacks were never called by the core and were replaced
with fake hangup events in an earlier version.