Merge branch '2.3' into 2.4

This commit is contained in:
Esa Korhonen
2020-07-28 16:00:02 +03:00
943 changed files with 12 additions and 11 deletions

system-test/CMakeLists.txt Normal file

File diff suppressed because it is too large

@@ -0,0 +1,15 @@
## This file should be placed in the root directory of your project.
## Then modify the CMakeLists.txt file in the root directory of your
## project to incorporate the testing dashboard.
##
## # The following are required to submit to the CDash dashboard:
## ENABLE_TESTING()
## INCLUDE(CTest)
set(CTEST_PROJECT_NAME "MaxScale")
set(CTEST_NIGHTLY_START_TIME "01:00:00 UTC")
set(CTEST_DROP_METHOD "http")
set(CTEST_DROP_SITE "maxscale-jenkins.mariadb.com")
set(CTEST_DROP_LOCATION "/CDash/submit.php?project=MaxScale")
set(CTEST_DROP_SITE_CDASH TRUE)

@@ -0,0 +1,312 @@
# Maxscale Continuous Integration and Test Automation
## Basics
Jenkins at http://max-tst-01.mariadb.com:8089/
is the main tool for providing build and test services for Maxscale,
but note that it _cannot_ be accessed directly (see further down).
A set of virtual machines (VMs) is in use for all building and testing.
The VMs are controlled by [Vagrant](https://www.vagrantup.com/)
and [MDBCI](https://github.com/OSLL/mdbci/).
3 types of VMs are supported:
* Libvirt/qemu
* AWS
* Docker containers
All Jenkins jobs are in
[Jenkins job Builder](https://docs.openstack.org/infra/jenkins-job-builder/)
YAML format and are available
[here](https://github.com/mariadb-corporation/maxscale-jenkins-jobs).
VMs for every build or test run are created from a clean Vagrant box.
To speed up regular builds and tests, there is a set of constantly running VMs:
* Build machine _centos\_7\_libvirt\_build_
* Test VMs set _centos\_7\_libvirt-mariadb-10.0-permanent_
Build and test Jenkins jobs send a result report by e-mail.
In addition, test run logs are stored [here](http://max-tst-01.mariadb.com/LOGS/).
## Accessing _max-tst-01_
Jenkins running on _max-tst-01_ can only be accessed from localhost. In order
to access it from your computer, an ssh tunnel needs to be set up.
* Ensure that your public key is available on _max-tst-01_.
* Then setup the ssh tunnel:
```
$ ssh -f -N -L 8089:127.0.0.1:8089 vagrant@max-tst-01.mariadb.com
```
After that, Jenkins on _max-tst-01_ can be accessed by browsing to
http://localhost:8089.
In the following it is assumed that the above has been done.
## Build
* [Build](http://localhost:8089/job/build/) builds Maxscale
and creates a binary repository for one Linux distribution.
* [Build_all](http://localhost:8089/job/build_all/) builds
Maxscale and creates binary repository for _all_ supported Linux
distributions.
The list of supported distributions (Vagrant boxes) is
[here](https://github.com/mariadb-corporation/maxscale-jenkins-jobs/blob/master/maxscale_jobs/include/boxes.yaml).
### Main _Build_ and _Build_all_ jobs parameters
To start a build, click on _Build with Parameters_ to the left in
the Jenkins panel.
#### scm_source
A reference to a place in the MaxScale source code:
* for a branch - put the branch name here
* for a commit - put the Git commit ID here
* for a tag - put `refs/tags/<tagName>` here
#### box
The name of the Vagrant box used to create the VM.
A box name consists of the distribution name, the distribution version and the name of the VM provider.
The whole list of supported boxes can be found [here](https://github.com/mariadb-corporation/maxscale-jenkins-jobs/blob/master/maxscale_jobs/include/boxes_all.yaml).
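As an illustrative sketch of this naming convention (the underscore separator is inferred from box names such as _centos\_7\_libvirt_ that appear elsewhere in this document):

```shell
# Illustrative sketch of the box naming convention described above; the
# separator is inferred from box names used elsewhere in this document.
distribution="centos"
dist_version="7"
provider="libvirt"
box="${distribution}_${dist_version}_${provider}"
echo "$box"
```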
#### target
The name of the binary repository. Ensure that the name is reasonably
unique so that there can be no unintended overwrites. If you are
building a personal branch, then using the same value for _target_
as for _scm_source_ should be fine.
Binaries will go to:
`http://max-tst-01.mariadb.com/ci-repository/<target>/mariadb-maxscale/<distribution_name>/<version>`
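For example, the URL is assembled from the build parameters like this (the target value below is made up for illustration):

```shell
# Hypothetical example of how the repository URL is assembled from the
# build parameters; the target value is illustrative only.
target="my-2.4-branch"
distribution_name="centos"
version="7"
url="http://max-tst-01.mariadb.com/ci-repository/${target}/mariadb-maxscale/${distribution_name}/${version}"
echo "$url"
```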
#### cmake_flags
Debug build:
```
-DBUILD_TESTS=Y -DCMAKE_BUILD_TYPE=Debug -DFAKE_CODE=Y -DBUILD_AVRO=Y -DBUILD_CDC=Y
```
Release build:
```
-DBUILD_TESTS=N -DFAKE_CODE=N -DBUILD_AVRO=Y -DBUILD_CDC=Y
```
Build scripts automatically add all other necessary flags (e.g. `-DPACKAGE=Y`).
#### do_not_destroy_vm
If `yes`, then the VM will not be destroyed after the build. Choose that **if** you,
for some specific reason (e.g. debugging), intend to log into the VM later.
#### repo_path
The location for binary repositories on _max-tst-01_. Usually no need to change.
#### try_alrady_running
If `yes`, a constantly running VM will be used for the build instead of bringing up a clean
machine from scratch. The name of the VM is `<box_name>_build`.
If this VM is not running, it will be created from a clean box. If a new build
dependency is introduced and a new package has to be installed before the build
(and the build script is modified to do it), it is necessary to destroy the
constantly running VM - in this case a new VM will be created and everything
will be installed.
#### run_upgrade_test
If `yes`, an upgrade test will be executed after the build.
The upgrade test is executed on a separate VM.
* First, the old version of MaxScale is installed (parameter **old_target**),
* then an upgrade to the recently built version is executed,
* then MaxScale is started with a minimum configuration (only the CLI service is
configured, no backends at all), and
* finally, _maxadmin_ is executed to check that MaxScale is running and available.
### Regular automatic builds
The file `~/jobs_branches/build_all_branches.list` on _max-tst-02.mariadb.com_ contains
a list of regexes. If a branch name matches one of the regexes from this file,
the _build_all_ job will be executed for this branch every night.
Regular builds are executed with _run\_upgrade\_test=yes_.
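The branch selection can be sketched as follows, assuming the list file simply holds one extended regex per line; the file contents and branch names below are made up for illustration:

```shell
# Sketch of the nightly branch selection: match the branch name against
# each regex in the list file. Contents here are illustrative only.
list=$(mktemp)
cat > "$list" <<'EOF'
^develop$
^2\.[0-9]+$
EOF

branch="2.4"
if echo "$branch" | grep -qE -f "$list"; then
    echo "build_all will run for $branch"
fi
```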
## Test execution
* [run_test](http://localhost:8089/view/test/job/run_test/) creates a set of
VMs: 1 VM for MaxScale, 4 VMs for a Master/Slave setup and 4 VMs for a Galera cluster.
### Main _Run_test_ parameters
Before this job can be run, a
[build](#build)
job must have been executed first.
#### target
The name of the MaxScale binary repository.
The value used here should be the same as the value used in [target](#target).
#### box
The name of the Vagrant box used to create the VMs.
The value used here must be the same as the value used in
[box](#box)
when the **target** was built.
#### product
MariaDB or MySQL to be installed on all backend machines.
#### version
The version of MariaDB or MySQL to be installed on all backend machines.
The list of versions contains all MariaDB and MySQL major versions.
Selecting a wrong version (e.g. 5.7 for MariaDB or 10.0 for MySQL) causes
a VM creation error and job failure.
#### do_not_destroy_vm
If `yes`, all VMs will keep running until manually destroyed by the **Destroy** job.
This can be used for debugging. **Do not forget to destroy** the VMs after debugging
is finished.
#### name
The name of the test run. The names of the VMs will be:
* `<name>_maxscale`,
* `<name>_node_000`, ..., `<name>_node_003` and
* `<name>_galera_000`, ..., `<name>_galera_003`.
The **name** can be used to access VMs:
```bash
. ~/build-scripts/test/set_env_vagrant.sh <name>
ssh -i $<vm_name>_keyfile $<vm_name>_whoami@$<vm_name>_network
```
where `<vm_name>` can be 'maxscale', 'node_XXX' or 'galera_XXX'.
#### test_branch
The name of _MaxScale_ repository branch to be used in the test run.
It is usually the same as the one used in [scm_source](#scm_source).
#### slave_name
The host server for the VMs and _maxscale-system-test_ execution:
|Name|Server|
|-------|:---------------------|
|master |localhost|
|maxtst2|max-tst-02.mariadb.com|
|maxtst3|max-tst-03.mariadb.com|
#### test_set
Defines tests to be run. See ctest documentation
https://cmake.org/cmake/help/v3.7/manual/ctest.1.html
for details.
Most common cases:
|Arguments|Description|
|-------------|:--------------------------------------------------|
|-I 1,5,,45,77| Execute tests from 1 to 5 and tests 45 and 77|
|-L HEAVY|Execute all tests with 'HEAVY' label|
|-LE UNSTABLE|Execute all tests except tests with 'UNSTABLE' label|
### Run_test_snapshot
This job uses already running VMs. The set of VMs has to have a snapshot
(by default the snapshot name is 'clean').
Instead of bringing up all VMs from scratch, this job only reverts the VMs to
the 'clean' state.
The name of the running VMs set should be:
`<box>-<product>-<version>-permanent`
If there is no VMs set with such a name, it will be created automatically.
In case of broken VMs, the set can be destroyed with the help of the **Destroy** job,
and **Run_test_snapshot** should be restarted after that to create a new
VMs set.
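The set name is just the three parameters joined with dashes; a minimal sketch, using example values that appear elsewhere in this document:

```shell
# Assemble the permanent VM set name from box, product and version,
# following the <box>-<product>-<version>-permanent pattern above.
box="centos_7_libvirt"
product="mariadb"
version="10.0"
vm_set_name="${box}-${product}-${version}-permanent"
echo "$vm_set_name"
```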
Only one VMs set per server can be started for a particular **box**,
**product** and **version** combination.
If two **Run_test_snapshot** jobs are running for the same
**box**, **product** and **version**,
the second one will wait until the first job run is finished
(the job sets a 'snapshot_lock').
In case the lock is not removed automatically (for example
due to a Jenkins crash), it can be removed manually:
```bash
rm ~/mdbci/centos_7_libvirt-mariadb-10.0-permanent_snapshot_lock
```
### Run_test_labels and Run_test_snapshot_labels
The only difference from **Run_test** and
**Run_test_snapshot** is the way the set of tests to execute is defined.
**\*_labels** jobs use a checkbox list of test labels.
The labels list has to be maintained manually.
It can be created by the
https://github.com/mariadb-corporation/maxscale-jenkins-jobs/blob/master/create_labels_jobs.sh
script.
### Regular test runs
#### Test runs by timer
File
```
~/jobs_branches/run_test_branches.list
```
on max-tst-02.mariadb.com
contains a list of regexes. If a branch name matches one of the regexes,
tests are executed for this branch every day.
The test set is also defined in this file.
The job **print_branches_which_matches_regex_in_file**
http://localhost:8089/view/All/job/print\_branches\_which\_matches\_regex\_in\_file/build
can be used to check the list of branches that match the regexes.
The job **weekly_smoke_run_test_matrix** triggers **build_regular**
and **run_test_matrix**
http://localhost:8089/view/test/job/run_test_matrix/
every week for the 'develop' branch
with the test set '-L LIGHT' (all tests with the label 'LIGHT').
#### Test runs by GIT push
File
```
~/jobs_branches/on_push_maxscale_branches.list
```
on max-tst-02.mariadb.com
contains a list of regexes. If a branch name matches one of the regexes,
tests are executed for this branch after every push.
The job **print_branches_which_matches_regex_in_file**
http://localhost:8089/view/All/job/print\_branches\_which\_matches\_regex\_in\_file/build
can be used to check the list of branches that match the
regexes (select _on\_push\_maxscale\_branches.list_
as the _branches\_list_ parameter).
## Debugging
For regular debugging, a constantly running set of VMs is recommended.
See [documentation here](DEBUG_ENVIRONMENT.md).
Another way is to use the **do_not_destroy=yes** parameter of the **run_test** job.
After the **run_test** job has finished, the VMs stay running and
can be accessed from the host server. See [here](LOCAL_DEPLOYMENT.md#accessing-vms)
for details.

@@ -0,0 +1,113 @@
# Debug environment
## Create ssh tunnel to Jenkins server
```bash
ssh -f -N -L 8089:127.0.0.1:8089 vagrant@max-tst-01.mariadb.com
```
## Create environment for debugging
To create virtual machines for debugging, please
use the Jenkins job 'create_env'
http://127.0.0.1:8089/view/env/job/create_env/build
This Jenkins job creates the backend VMs
(4 Master/Slave and 4 Galera) and
a MaxScale development machine.
The MaxScale development machine contains all
build tools and build dependencies, as well as
a Git clone of the MaxScale source.
The source is located in:
```
~/MaxScale/
```
## Environmental variables setup
```bash
. ~/build-scripts/test/set_env_vagrant.sh <name>
```
Example:
```bash
. ~/build-scripts/test/set_env_vagrant.sh debug_env
```
## Access to Maxscale VM
```bash
ssh -i $maxscale_sshkey $maxscale_whoami@$maxscale_network
```
```bash
scp -i $maxscale_sshkey <stuff_to_copy> $maxscale_whoami@$maxscale_network:/home/$maxscale_whoami/
```
```bash
scp -i $maxscale_sshkey $maxscale_whoami@$maxscale_network:/home/$maxscale_whoami/<stuff_to_copy> .
```
## Executing tests
Clone https://github.com/mariadb-corporation/MaxScale
and build the tests:
```bash
cd MaxScale/maxscale-system-test
cmake .
make
```
and then run
```bash
ctest -VV
```
or run any test executable from _maxscale-system-test_ manually.
It is recommended to run
```bash
./check_backend
```
before manual testing, to be sure that the Master/Slave and Galera setups are
in order (_check_backend_ also fixes broken replication or a broken Galera cluster).
## Restoring broken setup
Just use http://127.0.0.1:8089/view/snapshot/job/restore_snapshot/build
Manual snapshot reverting:
```bash
~/mdbci/mdbci snapshot revert --path-to-nodes debug_env --snapshot-name clean
```
## Destroying
Use http://127.0.0.1:8089/view/axilary/job/destroy/build
with _name=debug_env_
or _clean_vms.sh_ script
```bash
cd ~/mdbci/scripts
./clean_vms.sh debug_env
```
## Notes
Please check the _slave_name_ parameter when executing any Jenkins job.
All jobs are executed only on the defined slave (or on the master),
i.e. VM sets with the same name can be running on different slaves at the same time.

@@ -0,0 +1,166 @@
# Local deployment
## Prepare MDBCI environment
_Libvirt_, _Docker_ and _Vagrant_, with a set of plugins, are needed for MDBCI.
Please follow the instructions [here](https://github.com/mariadb-corporation/mdbci/blob/integration/PREPARATION_FOR_MDBCI.md).
## Creating VMs for local tests
[test/create_local_config.sh](test/create_local_config.sh) script creates a set of virtual machines
(1 maxscale VM, 4 Master/Slave and 4 Galera).
Direct execution of create_local_config.sh requires setting the parameters manually.
It is easier to use create_local_config_libvirt.sh or create_local_config_docker.sh.
Script usage:
```bash
. ~/build-scripts/test/create_local_config.sh <target> <name>
```
where
* target - MaxScale binary repository name
* name - name of the virtual machines set
Note: the '.' before the command allows the script to load all environmental variables (needed for 'maxscale-system-test').
All other parameters have to be defined as environmental variables before executing the script.
Examples of parameters definition can be found in the following scripts:
[test/create_local_config_libvirt.sh](test/create_local_config_libvirt.sh)
[test/create_local_config_docker.sh](test/create_local_config_docker.sh)
```bash
. ~/build-scripts/test/create_local_config_libvirt.sh <target> <name>
```
```bash
. ~/build-scripts/test/create_local_config_docker.sh <target> <name>
```
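As a quick illustration of why these scripts must be sourced with a leading '.' rather than executed (the demo file and variable value below are made up):

```shell
# A sourced script exports variables into the current shell, while a
# normally executed script only affects its own subshell. The demo file
# is a hypothetical stand-in for set_env_vagrant.sh.
demo=$(mktemp)
echo 'export maxscale_network=192.168.100.10' > "$demo"

sh "$demo"                      # runs in a subshell: variable is lost
echo "after sh: ${maxscale_network:-<unset>}"

. "$demo"                       # sourced: variable persists
echo "after source: ${maxscale_network}"
```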
## Execute test
Clone https://github.com/mariadb-corporation/MaxScale
and build the tests:
```bash
cd MaxScale/maxscale-system-test
cmake .
make
```
If the environmental variables are not set:
```bash
. ~/build-scripts/test/set_env_vagrant.sh <name>
```
Execute a single test by starting its binary, or use ctest to run all or selected tests (see https://cmake.org/cmake/help/v3.8/manual/ctest.1.html).
To see test output:
```bash
ctest -VV
```
## Destroying VMs
```bash
cd ~/mdbci/scripts
./clean_vms.sh <name>
```
## Reverting default snapshot
The create_local_config.sh script takes one snapshot of the recently created set of VMs. The snapshot name is 'clean'.
If the VMs are damaged during the testing process, it is easy to restore them:
```bash
~/mdbci/mdbci snapshot revert --path-to-nodes <name> --snapshot-name clean
```
If needed, more snapshots can be created:
```bash
cd ~/mdbci
./mdbci snapshot take --path-to-nodes <name> --snapshot-name <snapshot_name>
```
## Accessing VMs
```bash
cd ~/mdbci/<name>
vagrant ssh <vm_name>
```
where <vm_name> can be 'maxscale', 'node_XXX' or 'galera_XXX'.
```bash
. ~/build-scripts/test/set_env_vagrant.sh <name>
ssh -i $<vm_name>_keyfile $<vm_name>_whoami@$<vm_name>_network
```
examples:
```bash
ssh -i $node_002_keyfile $node_002_whoami@$node_002_network
ssh -i $maxscale_keyfile $maxscale_whoami@$maxscale_network
```
### Own VM configuration template
By default the scripts use
~/build-scripts/test/template.libvirt.json
and
~/build-scripts/test/template.docker.json
These templates can be used as examples for creating your own templates.
To use your own template,
put the template file into ~/build-scripts/test/templates/
and define the 'template_name' variable:
```bash
export template_name=<your_template_filename>
. ~/build-scripts/test/create_local_config_libvirt.sh <target> <name>
```
## Troubleshooting
### More info about libvirt and vagrant-libvirt plugin
https://help.ubuntu.com/lts/serverguide/libvirt.html
https://github.com/vagrant-libvirt/vagrant-libvirt#installation
### Random VM creation failures
Please check the amount of free memory and the number of running VMs:
```bash
virsh list
docker ps
```
and remove all VMs and containers you do not need.
### Wrong time on host server
If the server time is wrong, it can cause random problems. Please sync the time: use ntp or
```bash
sudo date -s "$(curl -s --head http://google.com | grep ^Date: | sed 's/Date: //g')"
```
### Info from OSLL wiki
#### Libvirt DNS resolving problem quick fix
https://dev.osll.ru/projects/mdbci/wiki/Libvirt_DNS_resolving_problem_quick_fix

@@ -0,0 +1,91 @@
# build-scripts-vagrant
Build and test scripts for working with Vagrant-controlled VMs. They do the following:
* create a VM for the MaxScale build
* create a set of VMs (a test environment) for running MaxScale tests
The test environment consists of:
* 'maxscale' machine
* 'nodeN' machines for Master/Slave setup (node0, node1, node2, node3)
* 'galeraN' machines for Galera cluster (galera0, galera1, galera2, galera3)
## Main files
File|Description
----|-----------
[prepare_and_build.sh](prepare_and_build.sh)|Creates a VM for the build, executes the build and publishes the resulting repository
build.\<provider\>.template.json|Templates of the MDBCI configuration description (build environment description) of build machines
[test-setup-scripts/setup_repl.sh](test-setup-scripts/setup_repl.sh)|Prepares repl_XXX machines to be configured into a Master/Slave setup
[test-setup-scripts/galera/setup_galera.sh](test-setup-scripts/galera/setup_galera.sh)|Prepares galera_XXX machines to be configured into a Galera cluster
[test-setup-scripts/cnf/](test-setup-scripts/cnf/)|my.cnf files for all backend machines
test/template.\<provider\>.json|Templates of the MDBCI configuration description (test environment description) of test machines
[test/run_test.sh](test/run_test.sh)|Creates the test environment, builds maxscale-system-test from source and executes the tests using ctest
[test/set_env_vagrant.sh](test/set_env_vagrant.sh)|Sets all environment variables for existing test machines, using MDBCI to get the values
[test/create_env.sh](test/create_env.sh)|Creates the test environment, copies the MaxScale source code to the 'maxscale' machine in this environment and builds MaxScale
## [prepare_and_build.sh](prepare_and_build.sh)
The following variables have to be defined before executing prepare_and_build.sh:
|Variable|Meaning|
|--------|--------|
|$MDBCI_VM_PATH|Path to the directory to store VM info and Vagrant files (by default $HOME/vms)|
|$box|name of the MDBCI box (see [MDBCI docs](https://github.com/OSLL/mdbci#terminology))|
|$target|name of the repository for the build results|
|$scm_source|reference to the place in the MaxScale source Git repo|
|$cmake_flags|additional cmake flags|
|$do_not_destroy_vm|if 'yes' the build VM won't be destroyed after the build. NOTE: do not forget to destroy it manually|
|$no_repo|if 'yes' the repository won't be built|
The script creates the MDBCI configuration build_$box-<current date and time>.json and brings it up (the directory build_$box-<current date and time> is created).
The resulting DEB or RPM packages first go to ~/pre-repo/$target/$box.
NOTE: if ~/pre-repo/$target/$box contains older versions, they will also go to the repository.
The resulting repository goes to ~/repository/$target/mariadb-maxscale/.
It is recommended to publish the ~/repository/ directory on a web server.
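The generated configuration name can be sketched like this; the exact timestamp format used by the real script is an assumption:

```shell
# Hypothetical sketch of how the build configuration name could be
# formed; the real script may use a different date format.
box="centos_7_libvirt"
config_name="build_${box}-$(date +%Y%m%d-%H%M%S)"
echo "${config_name}.json"
```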
## [test/run_test.sh](test/run_test.sh)
The following variables have to be defined before executing run_test.sh:
|Variable|Meaning|
|--------|--------|
|$box|name of MDBCI box for Maxscale machine (see [MDBCI docs](https://github.com/OSLL/mdbci#terminology))|
|$name|unique name of test setup|
|$product|'mariadb' or 'mysql'|
|$version|version of backend DB|
|$target|name of Maxscale repository|
|$ci_url|URL of the repository web site; MaxScale will be installed from $ci_url/$target/mariadb-maxscale/|
|$do_not_destroy_vm|if 'yes' the VMs won't be destroyed after the test run. NOTE: do not forget to destroy them manually|
|$smoke|if 'yes' all tests are executed in 'quick' mode (fewer iterations, heavy operations skipped)|
|$test_set|definition of the set of tests to run, in ctest notation|
|$test_set_labels|list of ctest labels. If it is not defined, 'test_set' will be used as a plain string|
## Test environment operations
### Accessing nodes
<pre>
cd $MDBCI_VM_PATH/$name/
vagrant ssh $node_name
</pre>
where $node_name is 'maxscale', 'node0', ..., 'node3', ..., 'nodeN', 'galera0', ..., 'galera3', ..., 'galeraN'
### Getting IP address and access keys
<pre>
~/mdbci/mdbci show network $name
~/mdbci/mdbci show network $name/$node_name
~/mdbci/mdbci show keyfile $name/$node_name
</pre>
### Destroying the environment
<pre>
cd $MDBCI_VM_PATH/$name/
vagrant destroy -f
</pre>
### Set variables by [test/set_env_vagrant.sh](test/set_env_vagrant.sh)
<pre>
. ../build-scripts/test/set_env_vagrant.sh $name
</pre>

@@ -0,0 +1,263 @@
# Release process
## Pre-release Checklist
* Make sure all bugs that have been fixed are also closed on Jira and have the
correct fixVersion
For major releases:
* Create new release notes and add all fixed bugs, use a previous one as a
template
For bug fix releases:
* Run the `Documentation/Release-Notes/generate_release_notes.sh` script to
auto-generate release notes
Finally:
* Add link to release notes and document major changes in Changelog.md
## 1. Tag
Release builds are always made using a tag and a separate branch. However, the
used tag is a _tentative_ tag, to ensure that there never is a need to _move_
any tag, should the release have to be modified after it has been tagged. All
that is needed is to create a new tentative tag.
The source for release `x.y.z` is tagged with `maxscale-x.y.z-ttN` where `N` is
1 for the first attempt and incremented in case the `x.y.z` source still needs
to be modified and the release rebuilt.
The final tag `maxscale-x.y.z` is created _after_ the packages have been
published and we are certain they will not be rebuilt, without also the version
number being changed.
To create the tag and the branch from the main _x.y_ branch:
```
git checkout x.y
git checkout -b x.y.z
git tag maxscale-x.y.z-ttN
git push -u origin x.y.z
git push origin refs/tags/maxscale-x.y.z-ttN
```
**A note on fixing bugs while doing a release:**
A separate branch is used to guarantee that no commits are added once the
release proceedings have started. If any fixes to code or documentation need to
be done, do them on the _x.y.z_ branch. If a fix has been made, create a new tag
by incrementing the `-ttN` suffix by one and push both the branch and the new
tag to the repo.
**NOTE** The tentative suffix - `-ttN` - is **only** used when
specifying the tag for the build, it must **not** be present in
any other names or paths.
## 2. Build and upgrade test
The BuildBot [build_for_release](https://maxscale-ci.mariadb.com/#/builders/38)
job should be used for building the packages. Use your GitHub account to log in
to actually see the job. Click the blue _Build for release_ button in the top
right corner to start it.
### Parameters to define
#### `branch`
This is the tag that is used to build the release.
```
refs/tags/maxscale-x.y.z-ttN
```
#### `The version number of this release in x.y.z format`
The version number of this release in x.y.z format. This will create two packages: maxscale-x.y.z-release and maxscale-x.y.z-debug.
```
x.y.z
```
#### `Old target`
The previous released version, used by upgrade tests. Set it to the previous
release e.g. for 2.2.19 set it to 2.2.18. For GA releases, set it to the latest
release of the previous major release e.g. for 2.3.0 set it to 2.2.19.
```
x.y.z
```
## 3. Copying to code.mariadb.com
ssh to `code.mariadb.com` with your LDAP credentials.
Create the directories and copy the repository files. Replace `x.y.z` with the
correct version.
```bash
cd /home/mariadb-repos/mariadb-maxscale/
mkdir x.y.z
mkdir x.y.z-debug
cd x.y.z
rsync -avz --progress --delete -e ssh vagrant@max-tst-01.mariadb.com:/home/vagrant/repository/maxscale-x.y.z-release/mariadb-maxscale/ .
cd ../x.y.z-debug
rsync -avz --progress --delete -e ssh vagrant@max-tst-01.mariadb.com:/home/vagrant/repository/maxscale-x.y.z-debug/mariadb-maxscale/ .
```
Once the code has been uploaded, update the symlink for the current major
release.
```bash
cd /home/mariadb-repos/mariadb-maxscale/
rm x.y
ln -s x.y.z x.y
```
If this is the GA release of a new major version, update the `latest` symlink to
point to `x.y`.
## 4. Email webops-requests@mariadb.com
Email example:
Subject: `MaxScale x.y.z release`
```
Hi,
Please publish Maxscale x.y.z binaries on web page.
Repos are on code.mariadb.com at /home/mariadb-repos/mariadb-maxscale/x.y.z
Br,
YOUR NAME HERE
```
Replace `x.y.z` with the correct version.
**NOTE** Sometimes - especially at _big_ releases when the exact release
date is fixed in advance - the following steps 5, 6 and 7 are done right
after the packages have been uploaded, to ensure that the steps 4 and 8
can be done at the same time.
## 5. Create the final tag
Once the packages have been made available for download, create
the final tag
```bash
git checkout maxscale-x.y.z-ttN
git tag -a -m "Tag for MaxScale x.y.z" maxscale-x.y.z
git push origin refs/tags/maxscale-x.y.z
```
and remove the tentative tag(s)
```bash
git tag -d maxscale-x.y.z-ttN
git push origin :refs/tags/maxscale-x.y.z-ttN
```
## 7. Update the release date
Once the branch `x.y.z` has been created and the actual release
date of the release is known, update the release date in the
release notes.
```bash
git checkout x.y.z
# Update release date in .../MaxScale-x.y.z-Release-Notes.md
git add .../MaxScale-x.y.z-Release-Notes.md
git commit -m "Update release date"
git push origin x.y.z
```
**NOTE** The `maxscale-x.y.z` tag is **not** moved. That is, the
release date is _not_ available in the documentation marked with
the tag `maxscale-x.y.z` but _is_ available in the branch marked
with `x.y.z`.
Merge `x.y.z` to `x.y`.
```
git checkout x.y
git merge x.y.z
```
At this point, the last commits on branches `x.y` and `x.y.z`
should be the same and the commit should be the one containing the
update of the release date. Further, that commit should be the only
difference between the branches and the tag `maxscale-x.y.z`.
**Check that indeed is the case**.
## 8. Update documentation
Email webops-requests@mariadb.com with a mail containing the following. Replace
`x.y.z` with the correct version, also in the links.
Subject: `Please update MaxScale x.y knowledge base`
```
Hi,
Please update https://mariadb.com/kb/en/mariadb-enterprise/mariadb-maxscale-XY/
from https://github.com/mariadb-corporation/MaxScale/tree/x.y.z/Documentation
Br,
YOUR NAME HERE
```
## 9. Send release email to mailing list
After the KB has been updated and the binaries are visible on the downloads
page, email maxscale@googlegroups.com with a mail containing the
following. Replace `x.y.z` with the correct version.
Subject: `MariaDB MaxScale x.y.z available for download`
```
Hi,
We are happy to announce that MariaDB MaxScale x.y.z GA is now available for download. This is a bugfix release.
The Jira list of fixed issues can be found here(ADD LINK HERE).
* [MXS-XYZ] BUG FIX DESCRIPTION HERE
Binaries:
https://mariadb.com/downloads/mariadb-tx/maxscale
Documentation:
https://mariadb.com/kb/en/mariadb-enterprise/maxscale/
Release notes:
KB LINK TO RELEASE NOTES HERE
Source code:
https://github.com/mariadb-corporation/MaxScale/releases/tag/maxscale-x.y.z
Please report any issues on Jira:
https://jira.mariadb.org/projects/MXS/issues
On behalf of the entire MaxScale team,
YOUR NAME HERE
```
## 10. Update the version number for the next release
Increment the `MAXSCALE_VERSION_PATCH` value in the `VERSIONxx.cmake` file
in the source root where `xx` is the major and minor release number. For
example, with 2.2 releases, update the `VERSION22.cmake` file.
If the `MAXSCALE_BUILD_NUMBER` is not 1, set it to 1. This is only
incremented if the packages have to be rebuilt after a release has been
made.
Make sure the `VERSION.cmake` points to the latest `VERSIONxx.cmake` file
so that updates in older releases won't affect newer releases.
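As a sketch, the relevant part of a `VERSIONxx.cmake` file could look like the following; the exact variable set and values are assumptions based on the names mentioned above, not the actual file contents:

```
# Hypothetical VERSION22.cmake fragment; variable names follow the text above.
set(MAXSCALE_VERSION_MAJOR "2")   # major release number
set(MAXSCALE_VERSION_MINOR "2")   # minor release number
set(MAXSCALE_VERSION_PATCH "20")  # incremented for the next x.y.z release
set(MAXSCALE_BUILD_NUMBER "1")    # reset to 1; bumped only if packages are rebuilt
```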

@@ -0,0 +1,105 @@
# How to run tests
## Prerequisites
[MDBCI](https://github.com/mariadb-corporation/mdbci) must be installed (with dependencies, see
[MDBCI doc](https://github.com/mariadb-corporation/mdbci#mariadb-continuous-integration-infrastructure-mdbci)),
as well as [build-scripts](https://github.com/mariadb-corporation/build-scripts-vagrant).
The components should be in the following directories:
[build-scripts](https://github.com/mariadb-corporation/build-scripts-vagrant) - in ~/build-scripts/
[mdbci](https://github.com/mariadb-corporation/mdbci) - in ~/mdbci/
## Creating test environment and running tests
[run_test.sh](test/run_test.sh) generates the MDBCI description of the configuration, brings all VMs up, sets up the DB on all backends,
prepares the DBs for creating the Master/Slave and Galera setups, builds the [maxscale-system-test](https://github.com/mariadb-corporation/MaxScale/tree/develop#maxscale-system-test)
package and executes ctest. The source code of
[maxscale-system-test](https://github.com/mariadb-corporation/MaxScale/tree/develop/maxscale-system-test#maxscale-system-test)
has to be in the current directory before executing [run_test.sh](test/run_test.sh).
Environmental variables have to be defined before executing [run_test.sh](test/run_test.sh).
For details see the [description](README.md#run_testsh)
Example:
<pre>
export MDBCI_VM_PATH=$HOME/vms
export name="my-centos7-release-1.3.0-test"
export box="centos7"
export product="mariadb"
export version="5.5"
export target="develop"
export ci_url="http://max-tst-01.mariadb.com/ci-repository/"
export do_not_destroy_vm="yes"
export test_set="1,10,,20,30,95"
~/build-scripts/test/run_test.sh
</pre>
After the test, all machines can be accessed:
<pre>
cd $MDBCI_VM_PATH/$name
vagrant ssh \<machine_name\>
</pre>
where \<machine_name\> is 'maxscale', 'node0', ..., 'node3', ..., 'nodeN', 'galera0', ..., 'galera3', ..., 'galeraN'
http://max-tst-01.mariadb.com/ci-repository/develop/mariadb-maxscale/ has to contain a MaxScale repository
## Running tests with existing test environment
[set_env_vagrant.sh](test/set_env_vagrant.sh) script sets all needed environmental variables for
[maxscale-system-test](https://github.com/mariadb-corporation/MaxScale/tree/develop/maxscale-system-test)
See [maxscale-system-test documentation](https://github.com/mariadb-corporation/MaxScale/tree/develop/maxscale-system-test#environmental-variables) for details regarding variables.
Example:
<pre>
export name="running_conf_name"
. ../build-scripts/test/set_env_vagrant.sh $name
set +x
git clone https://github.com/mariadb-corporation/MaxScale.git
cd MaxScale/maxscale-system-test
cmake .
make
./test_executable_name
</pre>
or use ctest to run several tests
## Creating environment for Maxscale debugging
The [create_env.sh](test/create_env.sh) script generates the MDBCI description of the configuration, brings all VMs up,
sets up the DB on all backends, prepares the DBs for creating the Master/Slave and Galera setups, and copies the source code of
[MaxScale](https://github.com/mariadb-corporation/MaxScale) to the 'maxscale' VM and builds it.
Note: the script does not install MaxScale; this has to be done manually.
The following variables have to be defined:
'name', 'box', 'product', 'version'
(see [run_test.sh documentation](https://github.com/mariadb-corporation/build-scripts-vagrant/blob/master/README.md#run_testsh))
'source', 'value'
(see
[prepare_and_build.sh documentation](https://github.com/mariadb-corporation/build-scripts-vagrant/blob/master/README.md#prepare_and_buildsh))
Example:
<pre>
export MDBCI_VM_PATH=$HOME/vms
export name="my-centos7-release-1.3.0-test"
export box="centos7"
export product="mariadb"
export version="5.5"
export source="BRANCH"
export value="develop"
~/build-scripts/test/create_env.sh
</pre>
**Note**: do not forget to destroy the test environment with 'vagrant destroy':
<pre>
cd $MDBCI_VM_PATH/$name/
vagrant destroy -f
</pre>

system-test/ENV_SETUP.md Normal file

@ -0,0 +1,289 @@
# Build and test environment setup
### Full build and test environment setup
<pre>
# install ruby
sudo apt-get install ruby
# install all needed libraries
sudo apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev
# install vagrant
# it is also possible to install Vagrant from distribution repository, but in case of problems please use 1.7.2
wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.2_x86_64.deb
sudo dpkg -i vagrant_1.7.2_x86_64.deb
# install Vagrant plugins
vagrant plugin install vagrant-aws vagrant-libvirt vagrant-mutate
# get MDBCI, build scripts, descriptions of MDBCI boxes and keys from GitHub
git clone https://github.com/OSLL/mdbci.git
git clone git@github.com:mariadb-corporation/mdbci-repository-config.git
git clone git@github.com:mariadb-corporation/build-scripts-vagrant.git
git clone git@github.com:mariadb-corporation/mdbci-boxes
# Copy scripts and boxes to proper places
mv build-scripts-vagrant build-scripts
scp -r mdbci-boxes/* mdbci/
# set proper access rights for ssh keys (for ppc64 machines)
chmod 400 mdbci/KEYS/*
# install all the stuff for test package build
sudo apt-get install cmake gcc g++ libssl-dev
sudo apt-get install mariadb-client shellcheck
# install MariaDB development library
sudo apt-get install libmariadbclient-dev
# Ubuntu repos can contain the same package under a different name, 'libmariadb-client-lgpl-dev',
# but it can not be used to build maxscale-system-test; please use the mariadb.org repositories
# https://downloads.mariadb.org/mariadb/repositories/
# Do not forget to remove all other MariaDB and MySQL packages!
# install qemu (more info https://en.wikibooks.org/wiki/QEMU/Installing_QEMU)
sudo apt-get install qemu qemu-kvm libvirt-bin
# install virt-manager (if you prefer UI)
sudo apt-get install virt-manager
# install docker (if needed) - see https://docs.docker.com/engine/installation/
# if cmake from distribution repository is too old it is recommended to build it from latest sources
wget https://cmake.org/files/v3.4/cmake-3.4.1.tar.gz # replace 3.4.1 to latest version
tar xzvf cmake-3.4.1.tar.gz
cd cmake-3.4.1
./bootstrap
make
sudo make install
cd
# sysbench 0.5 should be in sysbench_deb7 directory; it can be built from source:
git clone https://github.com/akopytov/sysbench.git
cd sysbench
./autogen.sh
./configure
make
cd ..
mv sysbench sysbench_deb7
# on OVH servers the 'docker' and 'libvirt' working directories need to be moved to /home
# (replace 'vagrant' with your home directory name)
cd /var/lib/
sudo mv docker /home/vagrant/
sudo ln -s /home/vagrant/docker docker
cd libvirt
sudo mv images /home/vagrant/
sudo ln -s /home/vagrant/images images
cd
# (HACK) in case of problem with building sysbench:
scp -r vagrant@maxscale-jenkins.mariadb.com:/home/vagrant/sysbench_deb7 .
# (HACK) in case of problem with 'dummy' box (problem is caused by MDBCI bug):
scp -r vagrant@maxscale-jenkins.mariadb.com:/home/vagrant/.vagrant.d/boxes/dummy .vagrant.d/boxes/
# put the MariaDBManager-GPG* files (needed for Maxscale builds) in the home directory
# put AWS keys to aws-config.yml (see https://github.com/OSLL/mdbci/blob/master/aws-config.yml.template)
# add the current user to the 'libvirtd' group (replace 'user_name' with your user name)
sudo usermod -a -G libvirtd user_name
# start libvirt default pool
virsh pool-start default
</pre>
### Setup VMs manually
#### Empty virtual machine
The following template can be used to create an empty VM (for qemu machines):
<pre>
{
"cookbook_path" : "../recipes/cookbooks/",
"build" :
{
"hostname" : "default",
"box" : "###box###",
"product" : {
"name" : "packages"
}
}
}
</pre>
for AWS machines:
<pre>
{
"cookbook_path" : "../recipes/cookbooks/",
"aws_config" : "../aws-config.yml",
"build" :
{
"hostname" : "build",
"box" : "###box###"
}
}
</pre>
The following boxes are available:
* qemu: debian_7.5_libvirt, ubuntu_trusty_libvirt, centos_7.0_libvirt, centos_6.5_libvirt
* AWS: rhel5, rhel6, rhel7, sles11, sles12, fedora20, fedora21, fedora22, ubuntu_wily, ubuntu_vivid, centos7, deb_jessie
#### Maxscale and backend machines creation
* Generation of the Maxscale repository description
It is necessary to generate descriptions of the MariaDB and Maxscale repositories before bringing up the Maxscale machine with Vagrant:
<pre>
export ci_url="http://my_repository_site.com/repostory/"
~/mdbci-repository-config/generate_all.sh $repo_dir
~/mdbci-repository-config/maxscale-ci.sh $target $repo_dir
</pre>
where
<pre>
$repo_dir - directory where repository descriptions will be created
$target - directory with MaxScale packages in the repository
</pre>
example:
<pre>
export ci_url="http://max-tst-01.mariadb.com/ci-repository/"
~/mdbci-repository-config/generate_all.sh repo.d
~/mdbci-repository-config/maxscale-ci.sh develop repo.d
</pre>
More information can be found in the [MDBCI documentation](https://github.com/OSLL/mdbci#repod-files) and in the [mdbci-repository-config documentation](https://github.com/mariadb-corporation/mdbci-repository-config#mdbci-repository-config)
* Preparing configuration description
Virtual machines should be described in JSON format. Example template can be found in the [build-scripts package](https://github.com/mariadb-corporation/build-scripts-vagrant/blob/master/test/template.libvirt.json).
MariaDB machine description example:
<pre>
"node0" :
{
"hostname" : "node0",
"box" : "centos_7.0_libvirt",
"product" : {
"name": "mariadb",
"version": "10.0",
"cnf_template" : "server1.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
}
</pre>
"cnf_template" defines the .cnf file which will be placed on the MariaDB machine. The [build-scripts package](https://github.com/mariadb-corporation/build-scripts-vagrant/tree/master/test-setup-scripts/cnf) contains examples of .cnf files.
MariaDB Galera machine description example:
<pre>
"galera0" :
{
"hostname" : "galera0",
"box" : "centos_7.0_libvirt",
"product" : {
"name": "galera",
"version": "10.0",
"cnf_template" : "galera_server1.cnf",
"cnf_template_path": "~/build-scripts/test-setup-scripts/cnf"
}
}
</pre>
For Galera machines MDBCI automatically substitutes the following placeholders in the .cnf file:
|field|description|
|------|----|
|###NODE-ADDRESS###|IP address of the node (for AWS - private IP)|
|###NODE-NAME###|Replaced by the node name ("node0" in this example)|
|###GALERA-LIB-PATH###|Path to the Galera library file (.so file)|
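The substitution itself is a plain text replacement. A minimal sketch of what MDBCI does with these placeholders (the address and node name values below are made up for illustration):

```shell
# A two-line stand-in for a galera .cnf template:
printf 'wsrep_node_address=###NODE-ADDRESS###\nwsrep_node_name=###NODE-NAME###\n' \
    > /tmp/galera_demo.cnf

# Replace the placeholders the way MDBCI would (illustrative values):
sed 's/###NODE-ADDRESS###/192.168.0.10/; s/###NODE-NAME###/galera0/' \
    /tmp/galera_demo.cnf > /tmp/galera_demo.out
cat /tmp/galera_demo.out
```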
Example of Maxscale machine description:
<pre>
"maxscale" :
{
"hostname" : "maxscale",
"box" : "centos_7.0_libvirt",
"product" : {
"name": "maxscale"
}
}
</pre>
#### Generating the configuration and bringing machines up
After creating the machine description JSON, two steps are needed.
1. Generate configuration
<pre>
./mdbci --override --template $template_name.json --repo-dir $repo_dir generate $name
</pre>
where
|variable|description|
|----|----|
|$template_name|name of the machine description JSON file|
|$repo_dir|directory with repositories description generated by mdbci-repository-config (repo.d)|
|$name|name of test configuration; will be used as directory name for Vagrant files|
2. Bringing machines up
<pre>
./mdbci up $name
</pre>
#### Configuring DB users
Automatic DB user creation is not implemented yet, so it has to be done manually. See [setup_repl.sh](https://github.com/mariadb-corporation/build-scripts-vagrant/blob/master/test-setup-scripts/setup_repl.sh) and [setup_galera.sh](https://github.com/mariadb-corporation/build-scripts-vagrant/blob/master/test-setup-scripts/galera/setup_galera.sh) for details.
Any test from 'maxscale-system-test' checks the Master/Slave and Galera configurations and restores them if they are broken, but this works only if the DB users have been created.
TODO: add it into 'maxscale-system-test'
### Access VMs
MDBCI provides a number of commands to get information about running virtual machines. See the [MDBCI documentation](https://github.com/OSLL/mdbci#mdbci-syntax) for details.
The [set_env_vagrant.sh script](https://github.com/mariadb-corporation/build-scripts-vagrant/blob/master/test/set_env_vagrant.sh) defines the environment variables needed by 'maxscale-system-test'. The same variables can be used to access the VMs manually.
The script has to be executed from the 'mdbci' directory. Do not forget the leading '.':
<pre>
cd ~/mdbci/
. ../build-scripts/test/set_env_vagrant.sh $name
</pre>
After that, virtual machines can be accessed via ssh, for example:
<pre>
ssh -i $maxscale_sshkey $maxscale_access_user@$maxscale_IP
</pre>
Another way is to use 'vagrant ssh':
<pre>
cd ~/mdbci/$name/
vagrant ssh &lt;node_name&gt;
</pre>
MDBCI can also give the IP address and the path to the ssh key:
<pre>
./mdbci show network &lt;configuration_name&gt;/&lt;node_name&gt; --silent
./mdbci show keyfile &lt;configuration_name&gt;/&lt;node_name&gt; --silent
./mdbci ssh --command 'whoami' &lt;configuration_name&gt;/&lt;node_name&gt; --silent
</pre>
The node name for the build machine is 'build'.
Node names for a typical test setup are node0, ..., node3, galera0, ..., galera3 and maxscale.
Example:
<pre>
./mdbci show network centos6_vm01/build --silent
./mdbci show keyfile centos6_vm01/build --silent
./mdbci ssh --command 'whoami' centos6_vm01/build --silent
</pre>
### Destroying configuration
<pre>
cd ~/mdbci/$name
vagrant destroy -f
</pre>


@ -0,0 +1,204 @@
# Creating a test case
This document describes the basic principles of test case creation and provides a list of the most useful functions and properties.
For a detailed description and the full list of functions and properties, please see the documentation generated by Doxygen.
## Test case basics
For every test case the following should be created:
- test executable
- record in the 'templates' file
- Maxscale configuration template (if test requires special Maxscale configuration)
- [CMakeLists.txt](CMakeLists.txt) record:
- add_test_executable(<source.cpp> <binary_name> <cnf_template_name>)
- a 'set_tests_properties' record if the test should be added to a group or a bigger timeout should be defined (the default is 1800s)
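For example, a hypothetical CMakeLists.txt record for a new test, following the add_test_executable() signature above (the source, binary, template suffix and label names are placeholders):

```cmake
# add_test_executable(<source.cpp> <binary_name> <cnf_template_name>)
add_test_executable(my_new_test.cpp my_new_test replication)
# optional: a longer timeout and a test group label
set_tests_properties(my_new_test PROPERTIES TIMEOUT 3600 LABELS "HEAVY")
```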
## 'templates' file
The 'templates' file contains, in plain text format, the Maxscale configuration template suffix for every test:
\<test_executable_name\> \<suffix_of_cnf_template\>
Template itself should be:
cnf/maxscale.cnf.template.\<suffix_of_cnf_template\>
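As a quick sketch of how this lookup works, the following resolves the template suffix for a test name (the file contents and test name are made-up examples, not real entries):

```shell
# Build a throwaway 'templates' file in the format described above:
#   <test_executable_name> <suffix_of_cnf_template>
printf 'bug123 repl\nsharding_test shard\n' > /tmp/templates_demo

# Resolve the template used by one test (name is hypothetical):
test_name="sharding_test"
suffix=$(awk -v t="$test_name" '$1 == t { print $2 }' /tmp/templates_demo)
echo "cnf/maxscale.cnf.template.$suffix"
```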
## Maxscale configuration template
All templates are in cnf/ directory:
cnf/maxscale.cnf.template.\<suffix_of_cnf_template\>
A template can contain the following variables:
|Variable|Meaning|
|--------|--------|
|###threads###|Number of Maxscale threads|
|###node_server_IP_N###|IP of Master/Slave node N|
|###node_server_port_N###|port of Master/Slave node N|
|###galera_server_IP_N###|IP of Galera node N|
|###galera_server_port_N###|port of Galera node N|
## Test creation principles
* start by initializing an object of the TestConnections class
* set a timeout before every operation which can get stuck; do not forget to disable the timeout before a long sleep()
* use TestConnections::tprintf function to print test log
* use TestConnections::add_result() to indicate a test failure and print an explanation message
* execute TestConnections::copy_all_logs at the end of test
* return TestConnections::global_result value
* do not leave any node blocked by firewall
## Class TestConnections
This class contains all information about the Maxscale node and about all backend nodes, as well as a set of functions
to handle Maxscale and the backends and to interact with the Maxscale routers and Maxadmin.
Only the main functions are listed here; for all details see the Doxygen comments in [testconnections.h](testconnections.h)
Currently two backend sets are supported (represented by Mariadb_nodes class objects): 'repl' and 'galera'
- each contains all info and operations for the Master/Slave and Galera setups respectively
(see the Doxygen comments in [mariadb_nodes.h](mariadb_nodes.h) )
It is assumed that the following routers and listeners are configured:
|Router|Port|
|------|----|
|RWSplit|4006|
|ReadConn master|4008|
|ReadConn Slave|4009|
|binlog|5306|
|test case -specific|4016|
### Most important functions and variables
Please check Doxygen comments for details
#### TestConnections(int argc, char *argv[]);
* reads all information from environment variables
* checks the backends; if broken, makes one attempt to restore them
* creates maxscale.cnf from the template and copies it to the Maxscale node
* creates needed directories, sets access rights for them, cleans up logs and coredumps
* starts Maxscale
* initializes internal structures
#### Timeout functions
int set_timeout(int timeout_seconds)
stop_timeout()
If set_timeout() is not followed by another set_timeout() or stop_timeout() call before the timeout expires, the test execution is terminated and
the logs from Maxscale are copied to the host.
#### Open connection functions
|Function|Short description|
|----|---|
| int connect_maxscale();<br> int connect_rwsplit();<br> int connect_readconn_master();<br> int connect_maxscale_slave();|store a MYSQL handler in the TestConnections object (only one connection can be created by them; a second call leads to a MYSQL handler leak)|
|MYSQL * open_rwsplit_connection() <br> MYSQL * open_readconn_master_connection() <br> MYSQL * open_readconn_slave_connection() |returns MYSQL handler (can be used to create a number of connections to each router)|
| int create_connections(int conn_N) |open and then close N connections to each router|
A number of useful wrappers for mysql_real_connect() are not included in the TestConnections class, but
they are available from [mariadb_func.h](mariadb_func.h)
#### Backend check and setup functions
|Function|Short description|
|----|---|
|start_replication()|Configure nodes from 'repl' object as Master/Slave|
|start_galera()|Configure nodes from 'galera'|
|start_binlog()|Configure nodes from 'repl' in following way: node0 - Master, node1 - slave of node0, all others - slaves of Maxscale binlog router|
|start_mm()|Configure nodes from 'repl' in a multi-master setup|
#### Result reporting functions
|Function|Short description|
|----|---|
|add_result()|failure printing, increase global_result|
|tprintf()|printing with a timestamp|
|copy_logs()|copy Maxscale log, maxscale.cnf file and core dump from Maxscale machine to current directory|
#### Different checks functions
|Function|Short description|
|----|---|
|try_query()|try SQL query and print error message in case of failure, increase global_result|
|check_t1_table()|check if t1 is present in the given DB|
|test_maxscale_connections|check status of connections to RWSplit, ReadConn master, ReadConn slave routers|
|check_maxscale_alive()|Check if RWSplit, ReadConn master, ReadConn slave routers are alive|
|check_log_err()|Check the Maxscale log for the presence or absence of a specific string|
|find_connected_slave|find the first slave that has connections from Maxscale|
#### Maxscale machine control functions
|Function|Short description|
|----|---|
|start_maxscale()||
|stop_maxscale()||
|restart_maxscale()||
|execute_ssh_maxscale()|execute command on Maxscale node via ssh|
#### Properties
|Name|Short description|Corresponding env variable|
|----|-----|----|
|global_result|0 if there is not a single failure during the test| - |
|repl|Mariadb_nodes object for Master/Slave nodes| - |
|galera|Mariadb_nodes object for Galera nodes| - |
|smoke|do short tests if TRUE|smoke|
|maxscale_IP|IP address of Maxscale machine|maxscale_IP|
|maxscale_user|DB user name to access via Maxscale|maxscale_user|
|maxscale_password|password for MaxscaleDB user|maxscale_password|
|maxadmin_password|password for MaxAdmin interface (user name is hard coded 'admin')|maxadmin_password|
|conn_rwsplit|MYSQL handler of connections to RWSplit router| - |
|conn_master|MYSQL handler of connections to ReadConn router in master mode| - |
|conn_slave|MYSQL handler of connections to ReadConn router in slave mode| - |
### Mariadb_nodes class
#### Master/Slave and Galera setup and check
|Function|Short description|
|----|---|
|start_replication()|Configure nodes from 'repl' object as Master/Slave|
|start_galera()|Configure nodes from 'galera'|
|set_slave()|execute CHANGE MASTER TO against the node|
|check_replication()|Check if 'repl' nodes are properly configured as Master/Slave|
|check_galera()|Check if 'galera' nodes are properly configured as a Galera cluster|
|change_master|Make another node the master|
#### Connections functions
|Function|Short description|
|----|---|
|connect()|open connections to all nodes and store the MYSQL handlers in an internal variable; a second call leads to a handler leak|
|close_connections()|close connections to all nodes|
#### Nodes control functions
|Function|Short description|
|----|---|
|block_node()|block the MariaDB server on the node with the firewall|
|unblock_node()|unblock the MariaDB server on the node with the firewall|
|unblock_all_nodes()|unblock the MariaDB server on all nodes with the firewall|
|stop_node()|stop MariaDB server on the node|
|start_node()|start MariaDB server on the node|
|restart_node()|stop and restart MariaDB server on the node|
|check_node()|check if MariaDB server on the node is alive|
|check_and_restart_node()|check if MariaDB server on the node is alive and restart it if it is not alive|
|stop_nodes()|stop MariaDB server on all nodes|
|ssh_node()|Execute command on the node via ssh, return error code|
|ssh_node_output()|Same as ssh_node(), but returns the command output|
|flush_hosts()|Execute 'mysqladmin flush-hosts' on all nodes|
|execute_query_all_nodes()|Execute same query on all nodes|
#### Properties
|Name|Short description|Corresponding env variable|
|----|-----|----|
|N|Number of nodes|node_N <br> galera_N|
|user_name|DB user name|node_user <br> galera_user|
|password|password for DB user|node_password <br> galera_password|
|IP[ ]|IP address of the node|node_XXX <br> galera_XXX|
|IP_private[ ]|private IP of the node (for AWS nodes)|node_private_XXX <br> galera_private_XXX|
|port[ ]|MariaDB port for the node|node_port_XXX <br> galera_port_XXX|
|nodes[ ]|MYSQL handler| - |
### Maxadmin operations functions
[maxadmin_operations.h](maxadmin_operations.h) contains functions to communicate with Maxscale via the MaxAdmin interface
|Function|Short description|
|----|---|
|execute_maxadmin_command()|send MaxAdmin command to Maxscale|
|execute_maxadmin_command_print()|send MaxAdmin command to Maxscale and print reply|
|get_maxadmin_param()|send a MaxAdmin command to Maxscale and try to find the value of a given parameter in the output|


@ -0,0 +1,6 @@
BEGIN;
SELECT (@@server_id) INTO @a;
SELECT @a;
@a
####server_id####
COMMIT;


@ -0,0 +1,8 @@
USE test;
drop table if exists t1;
create table t1 (id integer);
set autocommit=0;
begin;
insert into t1 values(1);
commit;
drop table t1;


@ -0,0 +1,4 @@
USE test;
SELECT IF(@@server_id <> @TMASTER_ID,'OK (slave)','FAIL (master)') AS result;
result
OK (slave)


@ -0,0 +1,9 @@
USE test;
drop table if exists t1;
create table t1 (id integer);
set autocommit=0;
insert into t1 values(1);
select count(*) from t1;
count(*)
1
drop table t1;


@ -0,0 +1,9 @@
USE test;
drop table if exists t1;
create table t1 (id integer);
set autocommit=OFF;
insert into t1 values(1);
select count(*) from t1;
count(*)
1
drop table t1;


@ -0,0 +1,11 @@
USE test;
drop table if exists t1;
create table t1 (id integer);
set autocommit=0;
begin;
insert into t1 values(1);
commit;
select count(*) from t1;
count(*)
1
drop table t1;


@ -0,0 +1,11 @@
USE test;
drop table if exists t1;
create table t1 (id integer);
set autocommit=0;
begin;
insert into t1 values(1);
commit;
select count(*) from t1;
count(*)
1
drop table t1;


@ -0,0 +1,11 @@
USE test;
DROP DATABASE If EXISTS FOO;
SET autocommit=1;
BEGIN;
CREATE DATABASE FOO;
SELECT (@@server_id) INTO @a;
SELECT IF(@a <> @TMASTER_ID,'OK (slave)','FAIL (master)') AS result;
result
OK (slave)
DROP DATABASE FOO;
COMMIT;


@ -0,0 +1,17 @@
USE test;
DROP TABLE IF EXISTS T1;
DROP EVENT IF EXISTS myevent;
SET autocommit=1;
BEGIN;
CREATE TABLE T1 (id integer);
CREATE EVENT myevent
ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 HOUR
DO
UPDATE t1 SET id = id + 1;
SELECT (@@server_id) INTO @a;
SELECT IF(@a <> @TMASTER_ID,'OK (slave)','FAIL (master)') AS result;
result
OK (slave)
DROP TABLE T1;
DROP EVENT myevent;
COMMIT;


@ -0,0 +1,11 @@
USE test;
DROP TABLE IF EXISTS T1;
SET autocommit=1;
BEGIN;
CREATE TABLE T1 (id integer);
SELECT (@@server_id) INTO @a;
SELECT IF(@a <> @TMASTER_ID,'OK (slave)','FAIL (master)') AS result;
result
OK (slave)
DROP TABLE T1;
COMMIT;


@ -0,0 +1,14 @@
USE test;
DROP PROCEDURE IF EXISTS simpleproc;
SET autocommit=1;
BEGIN;
CREATE PROCEDURE simpleproc (OUT param1 INT)
BEGIN
SELECT COUNT(*) INTO param1 FROM t;
END //
SELECT (@@server_id) INTO @a;
SELECT IF(@a <> @TMASTER_ID,'OK (slave)','FAIL (master)') AS result;
result
OK (slave)
DROP PROCEDURE simpleproc;
COMMIT;


@ -0,0 +1,13 @@
USE test;
DROP FUNCTION IF EXISTS hello;
SET autocommit=1;
BEGIN;
CREATE FUNCTION hello (s CHAR(20))
RETURNS CHAR(50) DETERMINISTIC
RETURN CONCAT('Hello, ',s,'!');
SELECT (@@server_id) INTO @a;
SELECT IF(@a <> @TMASTER_ID,'OK (slave)','FAIL (master)') AS result;
result
OK (slave)
DROP FUNCTION hello;
COMMIT;


@ -0,0 +1,12 @@
USE test;
DROP TABLE IF EXISTS T1;
CREATE TABLE T1 (id integer);
SET autocommit=1;
BEGIN;
CREATE INDEX foo_t1 on T1 (id);
SELECT (@@server_id) INTO @a;
SELECT IF(@a <> @TMASTER_ID,'OK (slave)','FAIL (master)') AS result;
result
OK (slave)
DROP TABLE T1;
COMMIT;


@ -0,0 +1,6 @@
use test;
set autocommit=1;
use mysql;
select count(*) from user where user='skysql';
count(*)
2


@ -0,0 +1,15 @@
USE test;
DROP TABLE IF EXISTS myCity;
SET autocommit = 0;
START TRANSACTION;
CREATE TABLE myCity (a int, b char(20));
INSERT INTO myCity VALUES (1, 'Milan');
INSERT INTO myCity VALUES (2, 'London');
COMMIT;
START TRANSACTION;
DELETE FROM myCity;
SELECT COUNT(*) FROM myCity;
COUNT(*)
0
COMMIT;
DROP TABLE myCity;


@ -0,0 +1,15 @@
USE test;
DROP TABLE IF EXISTS myCity;
SET autocommit = Off;
START TRANSACTION;
CREATE TABLE myCity (a int, b char(20));
INSERT INTO myCity VALUES (1, 'Milan');
INSERT INTO myCity VALUES (2, 'London');
COMMIT;
START TRANSACTION;
DELETE FROM myCity;
SELECT COUNT(*) FROM myCity;
COUNT(*)
0
COMMIT;
DROP TABLE myCity;


@ -0,0 +1,13 @@
USE test;
DROP TABLE IF EXISTS myCity;
SET autocommit = 0;
CREATE TABLE myCity (a int, b char(20));
INSERT INTO myCity VALUES (1, 'Milan');
INSERT INTO myCity VALUES (2, 'London');
COMMIT;
DELETE FROM myCity;
SELECT COUNT(*) FROM myCity;
COUNT(*)
0
COMMIT;
DROP TABLE myCity;


@ -0,0 +1,13 @@
USE test;
DROP TABLE IF EXISTS myCity;
SET autocommit = oFf;
CREATE TABLE myCity (a int, b char(20));
INSERT INTO myCity VALUES (1, 'Milan');
INSERT INTO myCity VALUES (2, 'London');
COMMIT;
DELETE FROM myCity;
SELECT COUNT(*) FROM myCity;
COUNT(*)
0
COMMIT;
DROP TABLE myCity;


@ -0,0 +1,5 @@
--disable_query_log
--disable_result_log
SELECT SLEEP(5);
--enable_result_log
--enable_query_log


@ -0,0 +1,11 @@
--source testconf.inc
USE test;
--disable_warnings
drop table if exists t1;
--enable_warnings
create table t1 (id integer);
set autocommit=0; # open transaction
begin;
insert into t1 values(1); # write to master
commit;
drop table t1;


@ -0,0 +1,3 @@
--source testconf.inc
USE test;
SELECT IF(@@server_id <> @TMASTER_ID,'OK (slave)','FAIL (master)') AS result;


@ -0,0 +1,11 @@
--source testconf.inc
USE test;
--disable_warnings
drop table if exists t1;
--enable_warnings
create table t1 (id integer);
set autocommit=0; # open transaction
insert into t1 values(1); # write to master
select count(*) from t1; # read from master
drop table t1;


@ -0,0 +1,11 @@
--source testconf.inc
USE test;
--disable_warnings
drop table if exists t1;
--enable_warnings
create table t1 (id integer);
set autocommit=OFF; # open transaction
insert into t1 values(1); # write to master
select count(*) from t1; # read from master
drop table t1;


@ -0,0 +1,13 @@
--source testconf.inc
USE test;
--disable_warnings
drop table if exists t1;
--enable_warnings
create table t1 (id integer);
set autocommit=0; # open transaction
begin;
insert into t1 values(1); # write to master
commit;
select count(*) from t1; # read from master since autocommit is disabled
drop table t1;


@ -0,0 +1,13 @@
--source testconf.inc
USE test;
--disable_warnings
drop table if exists t1;
--enable_warnings
create table t1 (id integer);
set autocommit=0; # open transaction
begin;
insert into t1 values(1); # write to master
commit;
select count(*) from t1; # read from master since autocommit is disabled
drop table t1;


@ -0,0 +1,4 @@
use test;
set autocommit=1;
use mysql;
select count(*) from user where user='skysql';


@ -0,0 +1,16 @@
--source testconf.inc
USE test;
--disable_warnings
DROP TABLE IF EXISTS myCity;
--enable_warnings
SET autocommit = 0;
START TRANSACTION;
CREATE TABLE myCity (a int, b char(20));
INSERT INTO myCity VALUES (1, 'Milan');
INSERT INTO myCity VALUES (2, 'London');
COMMIT;
START TRANSACTION;
DELETE FROM myCity;
SELECT COUNT(*) FROM myCity; # read transaction's modifications from master
COMMIT;
DROP TABLE myCity;


@ -0,0 +1,16 @@
--source testconf.inc
USE test;
--disable_warnings
DROP TABLE IF EXISTS myCity;
--enable_warnings
SET autocommit = Off;
START TRANSACTION;
CREATE TABLE myCity (a int, b char(20));
INSERT INTO myCity VALUES (1, 'Milan');
INSERT INTO myCity VALUES (2, 'London');
COMMIT;
START TRANSACTION;
DELETE FROM myCity;
SELECT COUNT(*) FROM myCity; # read transaction's modifications from master
COMMIT;
DROP TABLE myCity;

View File

@ -0,0 +1,16 @@
--source testconf.inc
USE test;
--disable_warnings
DROP TABLE IF EXISTS myCity;
--enable_warnings
SET autocommit = 0;
CREATE TABLE myCity (a int, b char(20));
INSERT INTO myCity VALUES (1, 'Milan');
INSERT INTO myCity VALUES (2, 'London');
COMMIT;
DELETE FROM myCity; # implicit transaction started
SELECT COUNT(*) FROM myCity; # read transaction's modifications from master
COMMIT;
DROP TABLE myCity;


@ -0,0 +1,16 @@
--source testconf.inc
USE test;
--disable_warnings
DROP TABLE IF EXISTS myCity;
--enable_warnings
SET autocommit = oFf;
CREATE TABLE myCity (a int, b char(20));
INSERT INTO myCity VALUES (1, 'Milan');
INSERT INTO myCity VALUES (2, 'London');
COMMIT;
DELETE FROM myCity; # implicit transaction started
SELECT COUNT(*) FROM myCity; # read transaction's modifications from master
COMMIT;
DROP TABLE myCity;

system-test/JENKINS.md Normal file

@ -0,0 +1,123 @@
# Jenkins
## List of Jenkins installations
| URL | Description |
|----|----|
|[max-tst-01.mariadb.com:8089](http://max-tst-01.mariadb.com:8089)|AWS, qemu; Regular testing for different MariaDB versions, different Linux distributions, Developers testing|
|[maxscale-jenkins.mariadb.com:8089/](http://maxscale-jenkins.mariadb.com:8089/)|AWS, VBox; Regular builds for all distributions, build for Coverity, regular test VBox+CentOS6+MariaDB5.5|
|[maxscale-jenkins.mariadb.com:8090](http://maxscale-jenkins.mariadb.com:8090/)|MDBCI testing and debugging, Jenkins experiments|
## Basic Jenkins jobs
### [max-tst-01.mariadb.com:8089](http://max-tst-01.mariadb.com:8089)
| Job | Description |
|----|----|
|[build_and_test](http://max-tst-01.mariadb.com:8089/view/test/job/build_and_test/)|Build Maxscale and run systems tests|
|[run_test](http://max-tst-01.mariadb.com:8089/view/test/job/run_test/)|Run system tests, Maxscale package should be in the repository|
|[build](http://max-tst-01.mariadb.com:8089/job/build/build)|Build Maxscale, create repository and publish it to [http://max-tst-01.mariadb.com/ci-repository/](http://max-tst-01.mariadb.com/ci-repository/)|
|[run_test_no_env_rebuild](http://max-tst-01.mariadb.com:8089/view/test/job/run_test_no_env_rebuild/)|Run system tests without creating a new set of VMs|
|[create_env](http://max-tst-01.mariadb.com:8089/view/env/job/create_env/)|Create VMs, install build environment to Maxscale machine, build Maxscale on Maxscale machine|
|[destroy](http://max-tst-01.mariadb.com:8089/view/axilary/job/destroy/)|Destroy VMs created by [run_test](http://max-tst-01.mariadb.com:8089/view/test/job/run_test/) or [create_env](http://max-tst-01.mariadb.com:8089/view/env/job/create_env/)|
|[remove_lock](http://max-tst-01.mariadb.com:8089/view/axilary/job/remove_lock/)|Remove Vagrant lock set by [run_test](http://max-tst-01.mariadb.com:8089/view/test/job/run_test/) or [create_env](http://max-tst-01.mariadb.com:8089/view/env/job/create_env/)|
Every test run should have a unique name (parameter 'name'). This name is used as the name of the MDBCI configuration.
If the parameter 'do_not_destroy' is set to 'yes', the virtual machines (VMs) are not destroyed after test execution and can later be used
for debugging or new test runs (see [run_test_no_env_rebuild](http://max-tst-01.mariadb.com:8089/view/test/job/run_test_no_env_rebuild/)).
VMs can be accessed from the vagrant@max-tst-01.mariadb.com machine using 'mdbci ssh' or 'vagrant ssh', as well as by direct ssh
access using the environment variables provided by the
[set_env_vagrant.sh](https://github.com/mariadb-corporation/maxscale-system-test/blob/master/ENV_SETUP.md#access-vms)
script.
The parameter 'box' defines the type of VM and the Linux distribution to be used for tests.
Test results go to [CDash](http://jenkins.engskysql.com/CDash/index.php?project=MaxScale), logs and core dumps are
stored [here](http://max-tst-01.mariadb.com/LOGS/).
The [create_env](http://max-tst-01.mariadb.com:8089/view/env/job/create_env/) job creates a set of VMs
(for the backends and Maxscale) and builds Maxscale on the Maxscale VM. After this job has executed, the Maxscale machine
contains the Maxscale source and binaries. *NOTE:* to properly configure the Maxscale init scripts it is necessary to
use the rpm/dpkg tool to install the Maxscale package (the package can be found in the Maxscale build directory).
[run_test](http://max-tst-01.mariadb.com:8089/view/test/job/run_test/) and
[create_env](http://max-tst-01.mariadb.com:8089/view/env/job/create_env/)
jobs create a Vagrant lock which prevents running two Vagrant instances in parallel (such parallel execution can
cause Vagrant or VM provider failures). If a job crashes or is interrupted by the user, the Vagrant lock stays in the locked state
and prevents any new VM creation. To remove the lock, the
[remove_lock](http://max-tst-01.mariadb.com:8089/view/axilary/job/remove_lock/)
job should be used.
## Process examples
### Running regression test against a branch
Execute [build_and_test](http://max-tst-01.mariadb.com:8089/view/test/job/build_and_test/)
Recommendations regarding parameters:
* 'name' - a unique name: it can be any text string, but as a good practice 'name' should refer to the branch,
Linux distribution, date/time of testing and MariaDB version
* 'box' - most recommended boxes are 'centos_7.0_libvirt' (QEMU box) and 'centos7' (Amazon Web Services box)
* 'source' - which type of source to use. BRANCH for git branch, TAG for a git tag and COMMIT for a commit ID.
* 'value' - name of the branch (if 'source' is BRANCH), name of the git tag (if 'source' is TAG) or commit ID (if 'source' is COMMIT)
### Build MaxScale
Execute [build](http://max-tst-01.mariadb.com:8089/job/build/build) job.
The parameter 'target' is the name of the repository to which packages are published:
e.g. if 'target' is 'develop' packages are going to
[http://max-tst-01.mariadb.com/ci-repository/develop/](http://max-tst-01.mariadb.com/ci-repository/develop)
NOTE: the build is executed only for the selected distribution ('box' parameter). Be careful with other distributions: if the build is not executed for a distribution, an old version may remain in the repository (from some old build). Later tests have to be executed against the same distribution, otherwise they can run against an old version of MaxScale. It is recommended to use a unique name for 'target'.
To debug failed build:
* set 'do_not_destroy_vm' parameter to 'yes'
* after the build:
<pre>
ssh -i vagrant.pem vagrant@max-tst-01.mariadb.com
cd ~/mdbci/build-&lt;box&gt;-&lt;date&gt;&lt;time&gt;
vagrant ssh
</pre>
For example:
<pre>
ssh -i vagrant.pem vagrant@max-tst-01.mariadb.com
cd ~/mdbci/build_centos6-20160119-0935
vagrant ssh
</pre>
### Create a set of Master/Slave and Galera nodes and set up a MaxScale build environment on one more node
Execute [create_env](http://max-tst-01.mariadb.com:8089/view/env/job/create_env/) job.
Login to the MaxScale machine (see [environment documentation](ENV_SETUP.md#access-vms)).
MaxScale source code, binaries and packages can be found in the ~/workspace/ directory.
All build tools are installed. Git can be used to go through the source code.
It is not recommended to commit anything from a virtual machine to GitHub.
Please use 'rpm' or 'dpkg' to properly install the MaxScale package (the /etc/init.d/maxscale script will not be
installed without executing 'rpm' or 'dpkg').
### Running tests against an existing version of MaxScale
Execute [run_test](http://max-tst-01.mariadb.com:8089/view/test/job/run_test/) job.
Make sure the MaxScale binary repository is present on the
[http://max-tst-01.mariadb.com/ci-repository/](http://max-tst-01.mariadb.com/ci-repository/)
server. Please check:
* there is a directory with a name equal to the 'target' parameter
* there is a sub-directory for the selected distribution ('box' parameter)
e.g. if 'target' is 'develop' and the distribution is CentOS 7 (boxes 'centos7' or 'centos_7.0_libvirt'), the directory [http://max-tst-01.mariadb.com/ci-repository/develop/mariadb-maxscale/centos/7/x86_64/](http://max-tst-01.mariadb.com/ci-repository/develop/mariadb-maxscale/centos/7/x86_64/) has to contain MaxScale RPM packages.
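The repository layout check above can be sketched as a small script. This is a minimal sketch under stated assumptions: the path template generalizes the single 'develop' + CentOS 7 example from the text, and the function names are invented for illustration.

```shell
# Sketch of the pre-test repository check: given a repository root, a
# 'target' name and a distribution sub-path, verify the directory exists
# and contains at least one RPM package. The layout template is assumed
# from the 'develop' + CentOS 7 example in the documentation.
repo_path() {
    # $1 = repo root, $2 = target, $3 = distribution path (e.g. centos/7/x86_64)
    echo "$1/$2/mariadb-maxscale/$3"
}

check_repo() {
    dir=$(repo_path "$@")
    if [ -d "$dir" ] && ls "$dir"/*.rpm >/dev/null 2>&1; then
        echo "OK: $dir"
    else
        echo "MISSING: $dir"
        return 1
    fi
}
```

Running `check_repo /srv/repo develop centos/7/x86_64` before launching run_test catches the "old or missing packages" situation described in the NOTE above.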
If the 'do_not_destroy' parameter is set to 'yes', the virtual machines will not be destroyed after the test and
can be used for debugging. See the [environment documentation](ENV_SETUP.md#access-vms) to learn how to access the virtual machines.
### Maintenance operations
If a test run was executed with the 'do_not_destroy' parameter set to 'yes', please do not forget to execute the
[destroy](http://max-tst-01.mariadb.com:8089/view/axilary/job/destroy/) job against your 'target'.
This job also has to be executed if the test run job crashed or was interrupted.

0
system-test/README Normal file
66
system-test/README.md Normal file
@ -0,0 +1,66 @@
# maxscale-system-test
System level tests for MaxScale
## Basics
- every test is a separate executable file
- backend for a test:
  - 1 machine for MaxScale
  - >= 4 machines for Master/Slave
  - >= 4 machines for Galera cluster
- environment variables contain all information about the backend: IPs, user names, passwords, paths to tools, etc.
- the backend can be created with the help of the [MDBCI tool](https://github.com/mariadb-corporation/mdbci)
- configuration of Master/Slave and Galera can be done with the help of the [maxscale-system-test/mdbci/run_test.sh script](mdbci/run_test.sh)
## Manuals
[MDBCI and running tests basics](mdbci/README.md)
[Hints: How to write a test](HOW_TO_WRITE_TEST.md)
[Jenkins instructions](JENKINS.md)
## Environment variables
|variable|meaning|
|--------|-------|
|node_N|Number of machines for Master/Slave|
|node_XXX_network|IP address of Master/Slave machine number XXX|
|node_XXX_private_ip|private IP address of Master/Slave machine XXX for AWS machines (for everything else - same as node_XXX_network)|
|node_XXX_port|MariaDB port of Master/Slave machine XXX|
|node_XXX_whoami|user name to access Master/Slave machine XXX via ssh|
|node_XXX_access_sudo|'sudo ' if node_XXX_whoami does not have root rights, empty string if it has root rights|
|node_XXX_keyfile|full name of secret key to access Master/Slave machine XXX via ssh|
|node_XXX_start_db_command|bash command to start DB server on Master/Slave machine XXX|
|node_XXX_stop_db_command|bash command to stop DB server on Master/Slave machine XXX|
|node_user|DB user name to access Master/Slave nodes (has to have all privileges with GRANT option)|
|node_password|password for node_user|
|galera_N|Number of machines for Galera|
|galera_XXX_network|IP address of Galera machine number XXX|
|galera_XXX_private|private IP address of Galera machine XXX for AWS machines (for everything else - same as galera_XXX_network)|
|galera_XXX_port|MariaDB port of Galera machine XXX|
|galera_XXX_whoami|user name to access Galera machine XXX via ssh|
|galera_XXX_access|'sudo ' if galera_XXX_whoami does not have root rights, empty string if it has root rights|
|galera_XXX_keyfile|full name of secret key to access Galera machine XXX via ssh|
|galera_XXX_start_db_command|bash command to start DB server on Galera machine XXX|
|galera_XXX_stop_db_command|bash command to stop DB server on Galera machine XXX|
|galera_user|DB user name to access Galera nodes (has to have all privileges with GRANT option)|
|galera_password|password for galera_user|
|maxscale_cnf|full name of Maxscale configuration file (maxscale.cnf)|
|maxscale_log_dir|directory for Maxscale log files|
|maxscale_IP|IP address of Maxscale machine|
|maxscale_sshkey|full name of secret key to access Maxscale machine via ssh|
|maxscale_access_user|user name to access Maxscale machine via ssh|
|maxscale_access_sudo|'sudo ' if maxscale_access_user does not have root rights, empty string if maxscale_access_user has root rights|
|maxscale_user|DB user to access via Maxscale|
|maxscale_password|password for maxscale_user|
|maxscale_hostname|hostname of Maxscale machine|
|sysbench_dir|directory where Sysbench is installed|
|ssl|'yes' if tests should try to use ssl to connect to Maxscale and to backends (obsolete, now should be 'yes' in all cases)|
|smoke|if 'yes' all tests are executed in 'quick' mode (less iterations, skip heavy operations)|
|backend_ssl|if 'yes' ssl config will be added to all servers definition in maxscale.cnf|
|use_snapshots|if TRUE every test tries to revert a snapshot of all VMs before running|
|take_snapshot_command|Command line to take a snapshot of all VMs|
|revert_snapshot_command|Command line to revert a snapshot of all VMs|
|no_nodes_check|if yes backend checks are not executed (needed in case of RDS or similar backend)|
|no_backend_log_copy|if yes logs from backend nodes are not copied (needed in case of RDS or similar backend)|
|no_maxscale_start|Do not start Maxscale automatically|
|no_vm_revert|If true tests do not revert VMs after the test even if test failed|
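To make the naming convention concrete, here is a minimal sketch of how a harness could build a client command from these variables. The variable values and the `mysql_cmd` helper are hypothetical; only the variable names come from the table above.

```shell
# Illustrative use of the backend environment variables for node 000.
# The values below are invented sample data, not real test credentials.
node_000_network=192.168.0.11   # hypothetical IP
node_000_port=3306
node_user=skysql                # hypothetical DB user
node_password=skysql

mysql_cmd() {
    # Builds (but does not run) a client command line for node $1,
    # resolving node_<N>_network and node_<N>_port indirectly
    eval ip=\$node_${1}_network
    eval port=\$node_${1}_port
    echo "mysql -h $ip -P $port -u $node_user -p$node_password"
}
```

The same indirect-expansion pattern works for the `galera_XXX_*` variables.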

@ -0,0 +1,32 @@
# Results locations
| Location | Description |
|----------|-------------|
|[run_test](http://max-tst-01.mariadb.com:8089/view/test/job/run_test/) Jenkins job log|Vagrant and test application outputs|
|[CDash](http://jenkins.engskysql.com/CDash/index.php?project=MaxScale)|CTest reports|
|[http://max-tst-01.mariadb.com/LOGS/](http://max-tst-01.mariadb.com/LOGS/)|MaxScale logs and core dumps|
|/home/vagrant/LOGS|Same as [http://max-tst-01.mariadb.com/LOGS/](http://max-tst-01.mariadb.com/LOGS/)|
|Maxscale VM /var/log/maxscale|MaxScale log from latest test case|
|Maxscale VM /tmp/core*|Core dump from latest test case|
|Maxscale VM home directory|QLA filter files (if enabled in MaxScale test configuration)|
|nodeN, galeraN VMs|MariaDB/MySQL logs (see MariaDB/MySQL documentation for details)|
For access to VMs see [environment documentation](ENV_SETUP.md#access-vms)
The Jenkins job log consists of the following parts:
* Vagrant output: VM creation process, MariaDB Master/Slave and MariaDB Galera installation, MaxScale installation
* [set_env_vagrant.sh](https://github.com/mariadb-corporation/build-scripts-vagrant/blob/master/test/set_env_vagrant.sh) output: retrieval of all VM parameters
* setup scripts output: MariaDB initialisation on backend nodes, DB users setup, enabling core dumps on the MaxScale VM
* test application output for all tests: every line starts with the test case number and ':' (can be grepped)
* CTest final printing: N of M tests passed, CTest warnings, email sending logs
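The "test case number and ':'" convention makes it easy to isolate one test's output from the combined log. A minimal sketch (the helper name and sample log lines are invented for illustration):

```shell
# Filter the combined Jenkins job log down to one test case, relying on
# the documented convention that every test output line starts with the
# test case number followed by ':'.
filter_test() {
    # $1 = test case number, stdin = job log
    grep "^$1:"
}
```

Usage: `filter_test 12 < job.log` prints only the lines produced by test case 12.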
To check presence of core dumps:
<pre>
find /home/vagrant/LOGS/&lt;last_test_results_dir&gt; | grep core
</pre>
where 'last_test_results_dir' is the automatically generated name of the logs directory (based on the date and time of the test run).
To understand the test case output, please see the test case description in the Doxygen comments in every test case source file.
VMs stay alive after the test run only if the run was done with the 'do_not_destroy' parameter.

@ -0,0 +1,140 @@
/**
* Test runtime modification of router options
*/
#include <maxtest/testconnections.hh>
#include <vector>
#include <iostream>
#include <functional>
#define TEST(a) {#a, a}
void alter_readwritesplit(TestConnections& test)
{
test.maxscales->wait_for_monitor();
// Open a connection before and after setting master_failure_mode to fail_on_write
Connection first = test.maxscales->rwsplit();
Connection second = test.maxscales->rwsplit();
Connection third = test.maxscales->rwsplit();
test.maxscales->wait_for_monitor();
first.connect();
test.check_maxctrl("alter service RW-Split-Router master_failure_mode fail_on_write");
second.connect();
// Check that writes work for both connections
test.expect(first.query("SELECT @@last_insert_id"),
"Write to first connection should work: %s",
first.error());
test.expect(second.query("SELECT @@last_insert_id"),
"Write to second connection should work: %s",
second.error());
// Block the master
test.repl->block_node(0);
test.maxscales->wait_for_monitor();
// Check that reads work for the newer connection and fail for the older one
test.expect(!first.query("SELECT 1"),
"Read to first connection should fail.");
test.expect(second.query("SELECT 1"),
"Read to second connection should work: %s",
second.error());
// Unblock the master, restart Maxscale and check that changes are persisted
test.repl->unblock_node(0);
test.maxscales->wait_for_monitor();
test.maxscales->restart();
third.connect();
test.expect(third.query("SELECT @@last_insert_id"),
"Write to third connection should work: %s",
third.error());
test.repl->block_node(0);
test.maxscales->wait_for_monitor();
test.expect(third.query("SELECT 1"),
"Read to third connection should work: %s",
third.error());
test.repl->unblock_node(0);
test.maxscales->wait_for_monitor();
}
void alter_readconnroute(TestConnections& test)
{
test.repl->connect();
std::string master_id = test.repl->get_server_id_str(0);
test.repl->disconnect();
Connection conn = test.maxscales->readconn_master();
for (int i = 0; i < 5; i++)
{
conn.connect();
Row row = conn.row("SELECT @@server_id");
conn.disconnect();
test.expect(row[0] == master_id,
"First connection should use master: %s != %s",
row[0].c_str(),
master_id.c_str());
}
test.check_maxctrl("alter service Read-Connection-Router-Master router_options slave");
for (int i = 0; i < 5; i++)
{
conn.connect();
Row row = conn.row("SELECT @@server_id");
conn.disconnect();
test.expect(row[0] != master_id,
"Second connection should not use master: %s == %s",
row[0].c_str(),
master_id.c_str());
}
}
void alter_schemarouter(TestConnections& test)
{
Connection conn = test.maxscales->readconn_slave();
conn.connect();
test.expect(!conn.query("SELECT 1"), "Query before reconfiguration should fail");
conn.disconnect();
test.check_maxctrl("alter service SchemaRouter ignore_databases_regex \".*\"");
conn.connect();
test.expect(conn.query("SELECT 1"), "Query after reconfiguration should work: %s", conn.error());
conn.disconnect();
}
void alter_unsupported(TestConnections& test)
{
int rc = test.maxscales->ssh_node_f(0, true, "maxctrl alter service RW-Split-Router unknown parameter");
test.expect(rc != 0, "Unknown router parameter should be detected");
rc = test.maxscales->ssh_node_f(0, true, "maxctrl alter service RW-Split-Router filters Regex");
test.expect(rc != 0, "Unsupported router parameter should be detected");
}
int main(int argc, char** argv)
{
TestConnections test(argc, argv);
std::vector<std::pair<const char*, std::function<void(TestConnections&)>>> tests =
{
TEST(alter_readwritesplit),
TEST(alter_readconnroute),
TEST(alter_schemarouter),
TEST(alter_unsupported)
};
for (auto& a : tests)
{
std::cout << a.first << std::endl;
a.second(test);
}
return test.global_result;
}

14
system-test/astylerc Normal file
@ -0,0 +1,14 @@
--style=allman
--indent=spaces=4
--indent-switches
--indent-labels
--min-conditional-indent=0
--pad-oper
--pad-header
--add-brackets
--convert-tabs
--max-code-length=110
--break-after-logical
--mode=c
--suffix=none
--max-instatement-indent=110

@ -0,0 +1,148 @@
/**
* @file bug601.cpp regression case for bug 601 ("COM_CHANGE_USER fails with correct user/pwd if executed
* during authentication")
* - configure Maxscale.cnf to use only one thread
* - in 100 parallel threads start to open/close session
* - do change_user 2000 times
* - check all change_user are ok
* - check MaxScale is alive
*/
/*
* Vilho Raatikka 2014-10-30 14:30:57 UTC
* If COM_CHANGE_USER is executed while backend protocol's state is not yet MYSQL_AUTH_RECV it will fail in
* the backend.
*
* If MaxScale uses multiple worked threads this occurs rarely and it would be possible to easily suspend
* execution of COM_CHANGE_USER.
*
* If MaxScale uses one worker thread then there's currently no way to suspend execution. It would require
* thread to put current task on hold, complete authentication task and return to COM_CHANGE_USER execution.
*
* In theory it is possible to add an event to client's DCB and let it become notified in the same way than
* events that occur in sockets. It would have to be added first (not last) and ensure that no other command
* is executed before it.
*
* Since this is the only case known where this would be necessary, it could be enough to add a "pending auth
* change" pointer in client's protocol object which would be checked before thread returns to epoll_wait
* after completing the authentication.
* Comment 1 Massimiliano 2014-11-07 17:01:29 UTC
* Current code in develop branch let COM_CHANGE_USER work fine.
*
* I noticed sometime a failed authentication issue using only.
* This because backend protocol's state is not yet MYSQL_AUTH_RECV and necessary data for succesfull backend
* change user (such as scramble data from handshake) may be not available.
*
*
* I put a query before change_user and the issue doesn't appear: that's another proof.
* Comment 2 Vilho Raatikka 2014-11-13 15:54:15 UTC
* In gw_change_user->gw_send_change_user_to_backend authentication message was sent to backend server before
* backend had its scramble data. That caused authentication to fail.
* Comment 3 Vilho Raatikka 2014-11-13 15:58:34 UTC
* if (func.auth ==)gw_change_user->gw_send_change_user_to_backend is called before backend has its scramble,
* auth packet is set to backend's delauqueue instead of writing it to backend. When backend_write_delayqueue
* is called COM_CHANGE_USER packets are rewritten with backend's current data.
*/
#include <iostream>
#include <maxtest/testconnections.hh>
using namespace std;
pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
int exit_flag = 0;
TestConnections* Test;
void* parall_traffic(void* ptr);
int main(int argc, char* argv[])
{
int iterations = 1000;
Test = new TestConnections(argc, argv);
if (Test->smoke)
{
iterations = 100;
}
pthread_t parall_traffic1[100];
Test->set_timeout(60);
Test->repl->connect();
Test->repl->execute_query_all_nodes((char*) "set global max_connect_errors=1000;");
Test->repl->execute_query_all_nodes((char*) "set global max_connections=1000;");
Test->maxscales->connect_maxscale(0);
Test->tprintf("Creating one user 'user@%%'");
execute_query_silent(Test->maxscales->conn_rwsplit[0], (char*) "DROP USER user@'%'");
Test->try_query(Test->maxscales->conn_rwsplit[0], (char*) "CREATE USER user@'%%' identified by 'pass2'");
Test->try_query(Test->maxscales->conn_rwsplit[0], (char*) "GRANT SELECT ON test.* TO user@'%%';");
Test->try_query(Test->maxscales->conn_rwsplit[0], (char*) "FLUSH PRIVILEGES;");
Test->tprintf("Starting parallel thread which opens/closes session in the loop");
for (int j = 0; j < 25; j++)
{
pthread_create(&parall_traffic1[j], NULL, parall_traffic, NULL);
}
Test->tprintf("Doing change_user in the loop");
for (int i = 0; i < iterations; i++)
{
Test->set_timeout(15);
Test->add_result(mysql_change_user(Test->maxscales->conn_rwsplit[0], "user", "pass2", (char*) "test"),
"change_user failed! %s", mysql_error(Test->maxscales->conn_rwsplit[0]));
Test->add_result(mysql_change_user(Test->maxscales->conn_rwsplit[0], Test->maxscales->user_name,
Test->maxscales->password,
(char*) "test"), "change_user failed! %s",
mysql_error(Test->maxscales->conn_rwsplit[0]));
}
Test->tprintf("Waiting for all threads to finish");
exit_flag = 1;
for (int j = 0; j < 25; j++)
{
Test->set_timeout(30);
pthread_join(parall_traffic1[j], NULL);
}
Test->tprintf("All threads are finished");
Test->repl->flush_hosts();
Test->tprintf("Change user to '%s' in order to be able to DROP user", Test->maxscales->user_name);
Test->set_timeout(30);
mysql_change_user(Test->maxscales->conn_rwsplit[0],
Test->maxscales->user_name,
Test->maxscales->password,
NULL);
Test->tprintf("Dropping user 'user'@'%%'");
Test->try_query(Test->maxscales->conn_rwsplit[0], (char*) "DROP USER user@'%%';");
Test->maxscales->verbose = true;
Test->check_maxscale_alive(0);
Test->maxscales->verbose = false;
int rval = Test->global_result;
delete Test;
return rval;
}
void* parall_traffic(void* ptr)
{
while (exit_flag == 0)
{
MYSQL* conn = Test->maxscales->open_rwsplit_connection(0);
while (exit_flag == 0 && mysql_query(conn, "DO 1") == 0)
{
sleep(1);
}
mysql_close(conn);
}
return NULL;
}

@ -0,0 +1,296 @@
/**
* @file bug620.cpp bug620 regression case ("enable_root_user=true generates errors to error log")
*
* - Maxscale.cnf contains RWSplit router definition with enable_root_user=true
* - GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'skysqlroot';
* - try to connect using 'root' user and execute some query
* - errors are not expected in the log. All Maxscale services should be alive.
*/
/*
* Vilho Raatikka 2014-11-14 09:03:59 UTC
* Enabling use of root user in MaxScale causes the following being printed to error log. Disabling the
* setting enable_root_user prevents these errors.
*
* 2014-11-14 11:02:47 Error : getaddrinfo failed for [linux-yxkl.site] due [Name or service not known]
* 2014-11-14 11:02:47 140635119954176 [mysql_users_add()] Failed adding user root@linux-yxkl.site for
* service [RW Split Router]
* 2014-11-14 11:02:47 Error : getaddrinfo failed for [::1] due [Address family for hostname not supported]
* 2014-11-14 11:02:47 140635119954176 [mysql_users_add()] Failed adding user root@::1 for service [RW
* Split Router]
* 2014-11-14 11:02:47 140635119954176 [mysql_users_add()] Failed adding user root@127.0.0.1 for service
* [RW Split Router]
* Comment 1 Vilho Raatikka 2014-11-14 09:04:40 UTC
* This appears with binary built from develop branch 14.11.14
* Comment 2 Massimiliano 2014-11-14 09:15:27 UTC
* The messages appear in the log because root user has an invalid hostname: linux-yxkl.site
*
*
* The second message root@127.0.0.1 may be related to a previous root@localhost entry.
*
*
* Names are resolved to IPs and added into maxscale hashtable: localhost and 127.0.0.1 result in a
* duplicated entry
*
*
*
* A standard root@localhost only entry doesn't cause any logged message
* Comment 3 Vilho Raatikka 2014-11-14 09:24:56 UTC
* Problem is that they seem critical errors but MaxScale still works like nothing had happened. If the
* default hostname of the server host is not good, what does it mean for MaxScale? Doest it still accept root
* user or not? Why it only causes trouble for root user but not for others?
*
* If the error has no effect in practice, then log entries could be better in debug log.
*
* Thread ids are suitable in debug log but not anywhere else.
* Comment 4 Massimiliano 2014-11-14 09:32:27 UTC
* The 'enable_root_user' option only allows selecting 'root' user from backend databases.
*
*
* The error messages are printed for all users and report that
*
* specific_user@specific_host is not loaded but
*
*
* Example:
*
* 2014-11-14 11:02:47 Error : getaddrinfo failed for [linux-yxkl.site] due [Name or service not known]
* 2014-11-14 11:02:47 140635119954176 [mysql_users_add()] Failed adding user root@linux-yxkl.site for
* service [RW Split Router]
*
* 2014-11-14 04:19:23 Error : getaddrinfo failed for [ftp.*.fi] due [Name or service not known]
* 2014-11-14 04:19:23 67322400 [mysql_users_add()] Failed adding user foo@ftp.*.fi for service [Master
* Service]
*
*
*
* In the examples foo@%.funet.fi and root@linux-yxkl.site are not loaded.
*
*
* foo@localhost and root@localhost are loaded
* Comment 5 Vilho Raatikka 2014-11-14 10:55:35 UTC
* (In reply to comment #4)
* > The 'enable_root_user' option only allows selecting 'root' user from backend
* > databases.
*
* I think that enable_root_user means : MaxScale user can use her 'root' account also with MaxScale.
*
* Technically your explanation may be correct and I'm not against that. What I mean is that the user may not
* want to worry about what is 'loaded' or 'selected' under the cover.
* She simply wants to use root user. If it is not possible then that should be written to error log clearly.
* For example, "Use of 'root' disabled due to unreachable hostname" or something equally clear.
*
* Reporting several lines of errors may confuse the user especially if the root account still works
* perfectly.
*
* >
* >
* > The error messages are printed for all users and report that
* >
* > specific_user@specific_host is not loaded but
* >
* >
* > Example:
* >
* > 2014-11-14 11:02:47 Error : getaddrinfo failed for [linux-yxkl.site] due
* > [Name or service not known]
* > 2014-11-14 11:02:47 140635119954176 [mysql_users_add()] Failed adding user
* > root@linux-yxkl.site for service [RW Split Router]
* >
* > 2014-11-14 04:19:23 Error : getaddrinfo failed for [ftp.*.fi] due [Name or
* > service not known]
* > 2014-11-14 04:19:23 67322400 [mysql_users_add()] Failed adding user
* > foo@ftp.*.fi for service [Master Service]
* >
* >
* >
* > In the examples foo@%.funet.fi and root@linux-yxkl.site are not loaded.
* >
* >
* > foo@localhost and root@localhost are loaded
* Comment 6 Massimiliano 2014-11-14 11:00:04 UTC
* MaxScale MySQL authentication is based on user@host
*
*
* You may have such situation:
*
* foo@localhost [OK]
* foo@x-y-z.notexists [KO]
*
* user 'foo@localhost' is loaded the latter isn't
*
*
* For root user is the same.
*
* Allowing selection of root user means selecting all the rows from mysql.user table where user='root'
*
*
* if there is any row (root@xxxx) that cannot be loaded the message appears.
*
* In a standard setup we don't expect any log messages
* Comment 7 Timofey Turenko 2014-12-10 16:02:36 UTC
* I tried following:
*
* via RWSplit:
*
* GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'skysqlroot';
*
* and try to connect to RWSplit using 'root' and 'skysqlroot' and try some simple query:
*
* 2014-12-10 17:35:43 Error : getaddrinfo failed for [::1] due [Address family for hostname not supported]
* 2014-12-10 17:35:43 Warning: Failed to add user root@::1 for service [RW Split Router]. This user will
* be unavailable via MaxScale.
* 2014-12-10 17:35:43 Warning: Failed to add user root@127.0.0.1 for service [RW Split Router]. This user
* will be unavailable via MaxScale.
* 2014-12-10 17:35:43 Error : Failed to start router for service 'HTTPD Router'.
* 2014-12-10 17:35:43 Error : Failed to start service 'HTTPD Router'.
* 2014-12-10 17:36:08 Error : getaddrinfo failed for [::1] due [Address family for hostname not supported]
* 2014-12-10 17:36:08 Warning: Failed to add user root@::1 for service [RW Split Router]. This user will
* be unavailable via MaxScale.
* 2014-12-10 17:36:08 Warning: Failed to add user root@127.0.0.1 for service [RW Split Router]. This user
* will be unavailable via MaxScale.
*
*
* Is it expected?
* Comment 8 Massimiliano 2014-12-10 16:09:23 UTC
* root@::1 could not be loaded because it's for IPv6
*
* root@127.0.0.1 may be not loaded if root@localhost was found before
*
* As names are translated to IPv4 addresses localhost->127.0.0.1 and that'a duplicate
*
*
* 2014-12-10 17:35:43 Error : Failed to start router for service 'HTTPD Router'.
* 2014-12-10 17:35:43 Error : Failed to start service 'HTTPD Router'.
*
* Those messages are not part of mysql users load phase.
*
* when you have auth errors users are reload (in the allowed time window) and you see the messages again
*
*
* With admin interface you can check:
*
*
* show dbusers RW Split Router
*
* and you should see root@% you added with the grant
* Comment 9 Timofey Turenko 2014-12-12 21:59:30 UTC
* Following is present in the error log just after MaxScale start:
*
*
* 2014-12-12 23:49:07 Error : getaddrinfo failed for [::1] due [Address family for hostname not supported]
* 2014-12-12 23:49:07 Warning: Failed to add user root@::1 for service [RW Split Router]. This user will
* be unavailable via MaxScale.
* 2014-12-12 23:49:07 Warning: Failed to add user root@127.0.0.1 for service [RW Split Router]. This user
* will be unavailable via MaxScale.
*
*
* first two line are clear: no support for IPv6, but would it be better to print 'warning' instead of
* 'error'?
*
* "Failed to add user root@127.0.0.1" - is it correct?
*
* direct connection to backend gives:
* MariaDB [(none)]> select User, host from mysql.user;
+---------+-----------+
| User | host |
+---------+-----------+
| maxuser | % |
| repl | % |
| skysql | % |
| root | 127.0.0.1 |
| root | ::1 |
| | localhost |
| maxuser | localhost |
| root | localhost |
| skysql | localhost |
| | node1 |
| root | node1 |
+---------+-----------+
 *
 * admin interface gives:
 *
 * MaxScale> show dbusers "RW Split Router"
 * Users table data
 * Hashtable: 0x7f6b64000c30, size 52
 * No. of entries: 7
 * Average chain length: 0.1
 * Longest chain length: 1
 * User names: root@192.168.122.106, repl@%, skysql@%, maxuser@127.0.0.1, skysql@127.0.0.1, root@127.0.0.1,
 * maxuser@%
 *
 *
 * So, root@127.0.0.1 is present in the list.
 * Comment 10 Mark Riddoch 2015-01-05 13:03:34 UTC
 * The message "Failed to add user root@127.0.0.1" is because the two entries root@localhost and
 * root@127.0.0.1 are seen as duplicates in MaxScale. This is a result of MaxScale resolving hostnames at the
 * time it reads the database rather than at connect time. So a duplicate is detected and the second one causes
 * the error to be displayed.
 * Comment 11 Timofey Turenko 2015-01-09 19:26:35 UTC
 * works as expected, closing.
 * Check for lack of "Error : getaddrinfo failed" added (just in case) and for warning about 'skysql'
 */
#include <iostream>
#include <unistd.h>
#include <maxtest/testconnections.hh>
using namespace std;
int main(int argc, char* argv[])
{
TestConnections* Test = new TestConnections(argc, argv);
Test->set_timeout(30);
Test->maxscales->connect_maxscale(0);
Test->tprintf("Creating 'root'@'%%'\n");
// global_result += execute_query(Test->maxscales->conn_rwsplit[0], (char *) "CREATE USER 'root'@'%'; SET
// PASSWORD FOR 'root'@'%' = PASSWORD('skysqlroot');");
Test->try_query(Test->maxscales->conn_rwsplit[0],
(char*) "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%%' IDENTIFIED BY 'skysqlroot';");
Test->try_query(Test->maxscales->conn_rwsplit[0],
(char*) "GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'skysqlroot';");
sleep(10);
MYSQL* conn;
Test->tprintf("Connecting using 'root'@'%%'\n");
conn = open_conn(Test->maxscales->rwsplit_port[0],
Test->maxscales->IP[0],
(char*) "root",
(char*) "skysqlroot",
Test->ssl);
if (mysql_errno(conn) != 0)
{
Test->add_result(1, "Connection using 'root' user failed, error: %s\n", mysql_error(conn));
}
else
{
Test->tprintf("Simple query...\n");
Test->try_query(conn, (char*) "SELECT * from mysql.user");
Test->try_query(conn,
(char*) "set password for 'root'@'localhost' = PASSWORD('');");
}
if (conn != NULL)
{
mysql_close(conn);
}
Test->tprintf("Dropping 'root'@'%%'\n");
Test->try_query(Test->maxscales->conn_rwsplit[0], (char*) "DROP USER 'root'@'%%';");
Test->maxscales->close_maxscale_connections(0);
Test->log_excludes(0, "Failed to add user skysql");
Test->log_excludes(0, "getaddrinfo failed");
Test->log_excludes(0, "Couldn't find suitable Master");
Test->check_maxscale_alive(0);
int rval = Test->global_result;
delete Test;
return rval;
}

@ -0,0 +1,48 @@
/**
* @file bug572.cpp regression case for bug 572 ( " If reading a user from users table fails, MaxScale fails"
*)
*
* - try GRANT with wrong IP using all Maxscale services:
* + GRANT ALL PRIVILEGES ON *.* TO 'foo'@'*.foo.notexists' IDENTIFIED BY 'foo';
* + GRANT ALL PRIVILEGES ON *.* TO 'bar'@'127.0.0.*' IDENTIFIED BY 'bar'
* + DROP USER 'foo'@'*.foo.notexists'
* + DROP USER 'bar'@'127.0.0.*'
 * - do "select * from mysql.user" using RWSplit to check if MaxScale crashed
*/
#include <iostream>
#include <unistd.h>
#include <maxtest/testconnections.hh>
using namespace std;
void create_drop_bad_user(MYSQL* conn, TestConnections* Test)
{
Test->try_query(conn,
(char*)
"GRANT ALL PRIVILEGES ON *.* TO 'foo'@'*.foo.notexists' IDENTIFIED BY 'foo';");
Test->try_query(conn, (char*) "GRANT ALL PRIVILEGES ON *.* TO 'bar'@'127.0.0.*' IDENTIFIED BY 'bar'");
Test->try_query(conn, (char*) "DROP USER 'foo'@'*.foo.notexists'");
Test->try_query(conn, (char*) "DROP USER 'bar'@'127.0.0.*'");
}
int main(int argc, char* argv[])
{
TestConnections* Test = new TestConnections(argc, argv);
Test->set_timeout(10);
Test->repl->connect();
Test->maxscales->connect_maxscale(0);
Test->tprintf("Trying GRANT for with bad IP: RWSplit\n");
create_drop_bad_user(Test->maxscales->conn_rwsplit[0], Test);
Test->tprintf("Trying SELECT to check if Maxscale hangs\n");
Test->try_query(Test->maxscales->conn_rwsplit[0], (char*) "select * from mysql.user");
int rval = Test->global_result;
delete Test;
return rval;
}

@ -0,0 +1,84 @@
/**
* @file bug143.cpp bug143 regression case (MaxScale ignores host in user authentication)
*
* - create user@'non_existing_host1', user@'%', user@'non_existing_host2' identified by different passwords.
* - try to connect using RWSplit. First and third are expected to fail, second should succeed.
*/
#include <maxtest/testconnections.hh>
int main(int argc, char* argv[])
{
TestConnections* Test = new TestConnections(argc, argv);
Test->tprintf("Creating user 'user' with 3 different passwords for different hosts\n");
Test->maxscales->connect_maxscale(0);
execute_query(Test->maxscales->conn_rwsplit[0],
"CREATE USER 'user'@'non_existing_host1' IDENTIFIED BY 'pass1'");
execute_query(Test->maxscales->conn_rwsplit[0], "CREATE USER 'user'@'%%' IDENTIFIED BY 'pass2'");
execute_query(Test->maxscales->conn_rwsplit[0],
"CREATE USER 'user'@'non_existing_host2' IDENTIFIED BY 'pass3'");
execute_query(Test->maxscales->conn_rwsplit[0],
"GRANT ALL PRIVILEGES ON *.* TO 'user'@'non_existing_host1'");
execute_query(Test->maxscales->conn_rwsplit[0], "GRANT ALL PRIVILEGES ON *.* TO 'user'@'%%'");
execute_query(Test->maxscales->conn_rwsplit[0],
"GRANT ALL PRIVILEGES ON *.* TO 'user'@'non_existing_host2'");
Test->tprintf("Synchronizing slaves");
Test->set_timeout(50);
Test->repl->sync_slaves();
Test->tprintf("Trying first hostname, expecting failure");
Test->set_timeout(15);
MYSQL* conn = open_conn(Test->maxscales->rwsplit_port[0],
Test->maxscales->IP[0],
(char*) "user",
(char*) "pass1",
Test->ssl);
if (mysql_errno(conn) == 0)
{
Test->add_result(1, "MaxScale ignores host in authentication\n");
}
if (conn != NULL)
{
mysql_close(conn);
}
Test->tprintf("Trying second hostname, expecting success");
Test->set_timeout(15);
conn = open_conn(Test->maxscales->rwsplit_port[0],
Test->maxscales->IP[0],
(char*) "user",
(char*) "pass2",
Test->ssl);
Test->add_result(mysql_errno(conn), "MaxScale can't connect: %s\n", mysql_error(conn));
if (conn != NULL)
{
mysql_close(conn);
}
Test->tprintf("Trying third hostname, expecting failure");
Test->set_timeout(15);
conn = open_conn(Test->maxscales->rwsplit_port[0],
Test->maxscales->IP[0],
(char*) "user",
(char*) "pass3",
Test->ssl);
if (mysql_errno(conn) == 0)
{
Test->add_result(1, "MaxScale ignores host in authentication\n");
}
if (conn != NULL)
{
mysql_close(conn);
}
execute_query(Test->maxscales->conn_rwsplit[0], "DROP USER 'user'@'non_existing_host1'");
execute_query(Test->maxscales->conn_rwsplit[0], "DROP USER 'user'@'%%'");
execute_query(Test->maxscales->conn_rwsplit[0], "DROP USER 'user'@'non_existing_host2'");
Test->maxscales->close_maxscale_connections(0);
int rval = Test->global_result;
delete Test;
return rval;
}


@ -0,0 +1,93 @@
/**
* @file bug653.cpp regression case for bug 653 ("Memory corruption when users with long hostnames that can
* not be resolved are loaded into MaxScale")
*
* - CREATE USER
*'user_with_very_long_hostname'@'very_long_hostname_that_can_not_be_resolved_and_it_probably_caused_crash.com.net.org'
* IDENTIFIED BY 'old';
* - try to connect using user 'user_with_very_long_hostname'
* - DROP USER
*'user_with_very_long_hostname'@'very_long_hostname_that_can_not_be_resolved_and_it_probably_caused_crash.com.net.org'
* - check MaxScale is alive
*/
/*
* Mark Riddoch 2014-12-16 13:17:25 UTC
* Program received signal SIGSEGV, Segmentation fault.
* 0x00007ffff49385ac in free () from /lib64/libc.so.6
* Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.47.el6_2.12.x86_64
* keyutils-libs-1.4-4.el6.x86_64 krb5-libs-1.10.3-10.el6_4.2.x86_64 libaio-0.3.107-10.el6.x86_64
* libcom_err-1.41.12-14.el6.x86_64 libgcc-4.4.7-4.el6.x86_64 libselinux-2.0.94-5.3.el6_4.1.x86_64
* libstdc++-4.4.7-4.el6.x86_64 nss-pam-ldapd-0.7.5-14.el6_2.1.x86_64
* nss-softokn-freebl-3.14.3-10.el6_5.x86_64 openssl-1.0.1e-16.el6_5.15.x86_64 zlib-1.2.3-29.el6.x86_64
* (gdb) where
#0 0x00007ffff49385ac in free () from /lib64/libc.so.6
#1 0x000000000041d421 in add_mysql_users_with_host_ipv4 (users=0x72c4c0,
* user=0x739030 "u3", host=0x739033 "aver.log.hostname.to.overflow.the.buffer",
* passwd=0x73905c "", anydb=0x739089 "Y", db=0x0) at dbusers.c:291
#2 0x000000000041e302 in getUsers (service=0x728ef0, users=0x72c4c0)
* at dbusers.c:742
#3 0x000000000041cf97 in load_mysql_users (service=0x728ef0) at dbusers.c:99
#4 0x00000000004128c7 in serviceStartPort (service=0x728ef0, port=0x729b70)
* at service.c:227
#5 0x0000000000412e27 in serviceStart (service=0x728ef0) at service.c:365
#6 0x0000000000412f00 in serviceStartAll () at service.c:413
#7 0x000000000040b592 in main (argc=2, argv=0x7fffffffe108) at gateway.c:1750
* Comment 1 Mark Riddoch 2014-12-16 13:18:09 UTC
* The problem is a buffer overrun in normalise_hostname. Fix underway.
* Comment 2 Mark Riddoch 2014-12-16 15:45:59 UTC
* Increased buffer size to prevent overrun issue
* Comment 3 Timofey Turenko 2014-12-22 15:39:32 UTC
* I'm not sure I understand the bug correctly.
* But a 60-char long host name does not cause the problem (longer is not possible: "String
* 'very_long_hostname_that_can_not_be_resolved_and_it_probably_caused_cra' is too long for host name (should
* be no longer than 60)")
*/
#include <iostream>
#include <maxtest/testconnections.hh>
int main(int argc, char* argv[])
{
TestConnections* Test = new TestConnections(argc, argv);
Test->set_timeout(50);
Test->maxscales->connect_maxscale(0);
Test->tprintf("Creating user with old style password\n");
Test->try_query(Test->maxscales->conn_rwsplit[0],
(char*) "CREATE USER 'user_long_host11'@'very_long_hostname_that_probably_caused_crashhh.com.net.org' IDENTIFIED BY 'old'");
Test->try_query(Test->maxscales->conn_rwsplit[0],
(char*) "GRANT ALL PRIVILEGES ON *.* TO 'user_long_host11'@'very_long_hostname_that_probably_caused_crashhh.com.net.org' WITH GRANT OPTION");
sleep(10);
Test->tprintf("Trying to connect using user with old style password\n");
MYSQL* conn = open_conn(Test->maxscales->rwsplit_port[0],
Test->maxscales->IP[0],
(char*) "user_long_host11",
(char*) "old",
Test->ssl);
if (mysql_errno(conn) != 0)
{
Test->tprintf("Connection is not open, as expected\n");
}
else
{
Test->add_result(1, "Connection is open for the user with bad host\n");
}
if (conn != NULL)
{
mysql_close(conn);
}
Test->try_query(Test->maxscales->conn_rwsplit[0],
(char*) "DROP USER 'user_long_host11'@'very_long_hostname_that_probably_caused_crashhh.com.net.org'");
Test->maxscales->close_maxscale_connections(0);
Test->check_maxscale_alive(0);
int rval = Test->global_result;
delete Test;
return rval;
}


@ -0,0 +1,31 @@
/**
* Check that old-style passwords are detected
*/
#include <maxtest/testconnections.hh>
int main(int argc, char* argv[])
{
TestConnections test(argc, argv);
test.repl->connect();
execute_query(test.repl->nodes[0], "CREATE USER 'old'@'%%' IDENTIFIED BY 'old';");
execute_query(test.repl->nodes[0], "SET PASSWORD FOR 'old'@'%%' = OLD_PASSWORD('old')");
execute_query(test.repl->nodes[0], "FLUSH PRIVILEGES");
test.repl->sync_slaves();
test.set_timeout(20);
test.tprintf("Trying to connect using user with old style password");
MYSQL* conn = open_conn(test.maxscales->rwsplit_port[0],
test.maxscales->IP[0],
(char*) "old",
(char*) "old",
test.ssl);
test.add_result(mysql_errno(conn) == 0, "Connection is open for the user with old style password.\n");
mysql_close(conn);
execute_query(test.repl->nodes[0], "DROP USER 'old'@'%%'");
return test.global_result;
}


@ -0,0 +1,305 @@
/**
* @file bug705.cpp regression case for bug 705 ("Authentication fails when the user connects to a database
* when the SQL mode includes ANSI_QUOTES")
*
* - use only one backend
* - directly on the backend: SET GLOBAL sql_mode="ANSI"
* - restart MaxScale
* - check log for "Error : Loading database names for service RW_Split encountered error: Unknown column"
*/
/*
* ivan.stoykov@skysql.com 2015-01-26 14:01:11 UTC
* When the sql_mode includes ANSI_QUOTES, maxscale fails to execute the SQL at LOAD_MYSQL_DATABASE_NAMES
* string
*
* https://github.com/mariadb-corporation/MaxScale/blob/master/server/core/dbusers.c
* line 90:
#define LOAD_MYSQL_DATABASE_NAMES "SELECT * FROM ( (SELECT COUNT(1) AS ndbs FROM INFORMATION_SCHEMA.SCHEMATA)
* AS tbl1, (SELECT GRANTEE,PRIVILEGE_TYPE from INFORMATION_SCHEMA.USER_PRIVILEGES WHERE privilege_type='SHOW
* DATABASES' AND REPLACE(GRANTEE, \"\'\",\"\")=CURRENT_USER()) AS tbl2)"
*
* the error log outputs that string:
* "Error : Loading database names for service galera_bs_router encountered error: Unknown column ''' in
* 'where clause'"
*
* I think the quotes in LOAD_MYSQL_DATABASE_NAMES and all the SQL used by MaxScale should be adjusted
* according to the recent sql_mode at the backend server.
*
* How to repeat:
* mysql root@centos-7-minimal:[Mon Jan 26 15:00:48 2015][(none)]> SET SESSION sql_mode = "ORACLE"; select
* @@sql_mode;
* Query OK, 0 rows affected (0.00 sec)
*
+----------------------------------------------------------------------------------------------------------------------+
| @@sql_mode
| |
+----------------------------------------------------------------------------------------------------------------------+
| PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,ORACLE,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,NO_AUTO_CREATE_USER
||
+----------------------------------------------------------------------------------------------------------------------+
| 1 row in set (0.00 sec)
|
| mysql root@centos-7-minimal:[Mon Jan 26 15:00:55 2015][(none)]> SELECT
|@@innodb_version,@@version,@@version_comment, GRANTEE,PRIVILEGE_TYPE from INFORMATION_SCHEMA.USER_PRIVILEGES
|WHERE privilege_type='SHOW DATABASES' AND REPLACE(GRANTEE, "\'","")=CURRENT_USER();
| ERROR 1054 (42S22): Unknown column '\'' in 'where clause'
| mysql root@centos-7-minimal:[Mon Jan 26 15:00:57 2015][(none)]>
| Comment 1 ivan.stoykov@skysql.com 2015-01-26 14:02:42 UTC
| Work around: set the sql_mode without ANSI_QUOTES:
|
| mysql root@centos-7-minimal:[Mon Jan 26 15:00:57 2015][(none)]> SET SESSION sql_mode = "MYSQL323"; select
|@@sql_mode;
| Query OK, 0 rows affected (0.00 sec)
|
+------------------------------+
| @@sql_mode |
+------------------------------+
| MYSQL323,HIGH_NOT_PRECEDENCE |
+------------------------------+
| 1 row in set (0.00 sec)
|
| mysql root@centos-7-minimal:[Mon Jan 26 15:01:52 2015][(none)]> SELECT
|@@innodb_version,@@version,@@version_comment, GRANTEE,PRIVILEGE_TYPE from INFORMATION_SCHEMA.USER_PRIVILEGES
|WHERE privilege_type='SHOW DATABASES' AND REPLACE(GRANTEE, "\'","")=CURRENT_USER();
+------------------+-----------------------+-----------------------------------+--------------------+----------------+
| @@innodb_version | @@version | @@version_comment | GRANTEE |
|PRIVILEGE_TYPE |
+------------------+-----------------------+-----------------------------------+--------------------+----------------+
| 5.6.21-70.0 | 10.0.15-MariaDB-wsrep | MariaDB Server, wsrep_25.10.r4144 | 'root'@'localhost' | SHOW
|DATABASES |
+------------------+-----------------------+-----------------------------------+--------------------+----------------+
| 1 row in set (0.00 sec)
| Comment 2 Massimiliano 2015-01-26 14:19:45 UTC
| More informations needed for "the recent sql_mode at the backend server"
|
| Is that an issue with a particular mysql/mariadb backend version?
| Comment 3 ivan.stoykov@skysql.com 2015-01-26 14:30:08 UTC
| No, it is not related to any particular version.
|
| I tested on Percona, MySQL , MariaDB 5.5, MariaDB 10.0.15 with the same result:
|
| [Mon Jan 26 16:24:34 2015][mysql]> SET SESSION sql_mode = ""; SELECT @@sql_mode;
| Query OK, 0 rows affected (0.00 sec)
|
+------------+
| @@sql_mode |
+------------+
| |
+------------+
| 1 row in set (0.00 sec)
| [Mon Jan 26 16:24:53 2015][mysql]> SELECT @@innodb_version,@@version,@@version_comment,
|GRANTEE,PRIVILEGE_TYPE from INFORMATION_SCHEMA.USER_PRIVILEGES WHERE privilege_type='SHOW DATABASES' AND
|REPLACE(GRANTEE, "\'","")=CURRENT_USER();
+------------------+-----------------+--------------------------------------------------+--------------------+----------------+
| @@innodb_version | @@version | @@version_comment | GRANTEE
| | PRIVILEGE_TYPE |
+------------------+-----------------+--------------------------------------------------+--------------------+----------------+
| 5.5.41-37.0 | 5.5.41-37.0-log | Percona Server (GPL), Release 37.0, Revision 727 | 'seik'@'localhost'
|| SHOW DATABASES |
+------------------+-----------------+--------------------------------------------------+--------------------+----------------+
| 1 row in set (0.00 sec)
|
| [Mon Jan 26 16:24:57 2015][mysql]> SET SESSION sql_mode = "DB2";SELECT @@sql_mode;
| Query OK, 0 rows affected (0.00 sec)
|
+-----------------------------------------------------------------------------------------------+
| @@sql_mode |
+-----------------------------------------------------------------------------------------------+
| PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,DB2,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS |
+-----------------------------------------------------------------------------------------------+
| 1 row in set (0.00 sec)
|
| :[Mon Jan 26 16:26:19 2015][mysql]> SELECT @@innodb_version,@@version,@@version_comment,
|GRANTEE,PRIVILEGE_TYPE from INFORMATION_SCHEMA.USER_PRIVILEGES WHERE privilege_type='SHOW DATABASES' AND
|REPLACE(GRANTEE, "\'","")=CURRENT_USER();
| ERROR 1054 (42S22): Unknown column '\'' in 'where clause'
|
| mysql root@centos-7-minimal:[Mon Jan 26 14:27:33 2015][(none)]> SET SESSION sql_mode = "POSTGRESQL";
|select @@sql_mode;
|
|
|
|
|
| Query
|OK, 0 rows affected (0.00 sec)
|
+------------------------------------------------------------------------------------------------------+
| @@sql_mode |
+------------------------------------------------------------------------------------------------------+
| PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,POSTGRESQL,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS |
+------------------------------------------------------------------------------------------------------+
| 1 row in set (0.01 sec)
|
| mysql root@centos-7-minimal:[Mon Jan 26 14:42:23 2015][(none)]> SELECT
|@@innodb_version,@@version,@@version_comment, GRANTEE,PRIVILEGE_TYPE from INFORMATION_SCHEMA.USER_PRIVILEGES
|WHERE privilege_type='SHOW DATABASES' AND REPLACE(GRANTEE, "\'","")=CURRENT_USER();
| ERROR 1054 (42S22): Unknown column '\'' in 'where clause'
| mysql root@centos-7-minimal:[Mon Jan 26 14:58:57 2015][(none)]> SET SESSION sql_mode = ""; select
|@@sql_mode;
| Query OK, 0 rows affected (0.00 sec)
|
+------------+
| @@sql_mode |
+------------+
| |
+------------+
| 1 row in set (0.00 sec)
|
| mysql root@centos-7-minimal:[Mon Jan 26 14:59:03 2015][(none)]> SELECT
|@@innodb_version,@@version,@@version_comment, GRANTEE,PRIVILEGE_TYPE from INFORMATION_SCHEMA.USER_PRIVILEGES
|WHERE privilege_type='SHOW DATABASES' AND REPLACE(GRANTEE, "\'","")=CURRENT_USER();
+------------------+-----------+------------------------------+--------------------+----------------+
| @@innodb_version | @@version | @@version_comment | GRANTEE | PRIVILEGE_TYPE |
+------------------+-----------+------------------------------+--------------------+----------------+
| 5.6.22 | 5.6.22 | MySQL Community Server (GPL) | 'root'@'localhost' | SHOW DATABASES |
+------------------+-----------+------------------------------+--------------------+----------------+
| 1 row in set (0.00 sec)
|
| mysql root@istoykov.skysql.com:[Mon Jan 26 15:28:12 2015][(none)]> SET SESSION sql_mode = "DB2"; SELECT
|@@sql_mode;
| Query OK, 0 rows affected (0.00 sec)
|
+-----------------------------------------------------------------------------------------------+
| @@sql_mode |
+-----------------------------------------------------------------------------------------------+
| PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,DB2,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS |
+-----------------------------------------------------------------------------------------------+
| 1 row in set (0.00 sec)
|
| mysql root@istoykov.skysql.com:[Mon Jan 26 15:28:19 2015][(none)]> SELECT
|@@innodb_version,@@version,@@version_comment, GRANTEE,PRIVILEGE_TYPE from INFORMATION_SCHEMA.USER_PRIVILEGES
|WHERE privilege_type='SHOW DATABASES' AND REPLACE(GRANTEE, "\'","")=CURRENT_USER();
| ERROR 1054 (42S22): Unknown column '\'' in 'where clause'
| mysql root@istoykov.skysql.com:[Mon Jan 26 15:28:32 2015][(none)]> SET SESSION sql_mode = "MYSQL40";
| SELECT @@sql_mode;
| Query OK, 0 rows affected (0.00 sec)
|
+-----------------------------+
| @@sql_mode |
+-----------------------------+
| MYSQL40,HIGH_NOT_PRECEDENCE |
+-----------------------------+
| 1 row in set (0.00 sec)
|
| mysql root@istoykov.skysql.com:[Mon Jan 26 15:29:09 2015][(none)]> SELECT
|@@innodb_version,@@version,@@version_comment, GRANTEE,PRIVILEGE_TYPE from INFORMATION_SCHEMA.USER_PRIVILEGES
|WHERE privilege_type='SHOW DATABASES' AND REPLACE(GRANTEE, "\'","")=CURRENT_USER();
+---------------------+--------------------+-------------------+--------------------+----------------+
| @@innodb_version | @@version | @@version_comment | GRANTEE | PRIVILEGE_TYPE |
+---------------------+--------------------+-------------------+--------------------+----------------+
| 5.5.38-MariaDB-35.2 | 5.5.39-MariaDB-log | MariaDB Server | 'root'@'localhost' | SHOW DATABASES |
+---------------------+--------------------+-------------------+--------------------+----------------+
| 1 row in set (0.00 sec)
| Comment 4 Massimiliano 2015-01-26 14:48:04 UTC
| It's still not clear if the issue is related to MaxScale or it's spotted only when you send the statements
|via mysql client
| Comment 5 ivan.stoykov@skysql.com 2015-01-26 16:37:32 UTC
| There is at least one case that after setting the sql_mode to string :
|
|"REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,ONLY_FULL_GROUP_BY,ANSI,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
|at 10.0.15-MariaDB-wsrep-log , using maxscale in this way returned an error.
|
| $ mysql --host max-scale-host --user=test --password=xxx --port 4449 mysqlslap
| ERROR 1045 (28000): Access denied for user 'test'@'IP (using password: YES) to database 'mysqlslap'
|
| error at the maxscale log:
| Error : Loading database names for service galera_bs_router encountered error: Unknown column ''' in
|'where clause'.
|
| the following test was OK:
| $ mysql --host max-scale-host --user=test --password=xxx --port 4449
|
| After switch sql_mode to '' as "mysql> set global sql_mode='';",
| the connection of the user to a database seems to work OK:
| $ mysql --host max-scale-host --user=test --password=xxx -D mysqlslap
| Reading table information for completion of table and column names
| You can turn off this feature to get a quicker startup with -A
|
| Welcome to the MySQL monitor. Commands end with ; or \g.
| Your MySQL connection id is 2532
| Server version: 5.5.41-MariaDB MariaDB Server, wsrep_25.10.r4144
|
| Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
|
| Oracle is a registered trademark of Oracle Corporation and/or its
| affiliates. Other names may be trademarks of their respective
| owners.
|
| Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
|
| mysql> Bye
|
|
| If needed, I will prepare other test cases?
| Comment 6 Massimiliano 2015-01-26 16:40:45 UTC
| Yes, please provide us other test cases and we will try to reproduce it
| Comment 7 Markus Mäkelä 2015-01-26 18:23:25 UTC
| Changed the double quotation marks to single quotation marks because the MySQL client manual says that
|ANSI_QUOTES still accepts single quotes.
|
| This can be verified by first setting sql_mode to ANSI:
|
| set global sql_mode="ANSI";
|
| after that, start MaxScale and the error log contains:
|
| MariaDB Corporation MaxScale /home/markus/build/log/skygw_err1.log Mon Jan 26 20:16:17 2015
| -----------------------------------------------------------------------
| --- Logging is enabled.
| 2015-01-26 20:16:17 Error : Loading database names for service RW Split Router encountered error:
|Unknown column ''' in 'where clause'.
| 2015-01-26 20:16:17 Error : Loading database names for service RW Split Hint Router encountered error:
|Unknown column ''' in 'where clause'.
| 2015-01-26 20:16:17 Error : Loading database names for service Read Connection Router encountered error:
|Unknown column ''' in 'where clause'.
|
| After the change the error is gone.
| Comment 8 Massimiliano 2015-01-26 21:16:03 UTC
| I managed to reproduce it in my environment:
|
| - created a setup with 1 server in a service named "RW_Split"
|
| - issued SET GLOBAL sql_mode="ANSI" via mysql client to that server
|
| - started MaxScale and found an error in the log:
|
|
| 2015-01-26 16:10:52 Error : Loading database names for service RW_Split encountered error: Unknown
|column ''' in 'where clause'.
|
*/
#include <iostream>
#include <maxtest/testconnections.hh>
#include <maxtest/maxadmin_operations.hh>
int main(int argc, char* argv[])
{
TestConnections* Test = new TestConnections(argc, argv);
Test->set_timeout(20);
printf("Connecting to backend %s\n", Test->repl->IP[0]);
fflush(stdout);
Test->repl->connect();
Test->tprintf("Sending SET GLOBAL sql_mode=\"ANSI\" to backend %s\n", Test->repl->IP[0]);
execute_query(Test->repl->nodes[0], "SET GLOBAL sql_mode=\"ANSI\"");
Test->repl->close_connections();
Test->tprintf("Restarting MaxScale\n");
Test->set_timeout(120);
Test->maxscales->restart_maxscale(0);
Test->log_excludes(0, "Loading database names");
Test->log_excludes(0, "Unknown column");
int rval = Test->global_result;
delete Test;
return rval;
}

View File

@ -0,0 +1,133 @@
/**
* @file bug592.cpp regression case for bug 592 ( "slave in "Running" state breaks authorization" ) MXS-326
*
* - stop all slaves: "stop slave;" directly to every node (now they are in "Running" state, not in "Running,
* Slave")
* - via RWSplit "CREATE USER 'test_user'@'%' IDENTIFIED BY 'pass'"
* - try to connect using 'test_user' (expecting success)
* - start all slaves: "start slave;" directly to every node
* - via RWSplit: "DROP USER 'test_user'@'%'"
*/
/*
* Timofey Turenko 2014-10-24 09:35:35 UTC
* 1. setup: Master/Slave replication
* 2. reboot slaves
* 3. create user using connection to RWSplit
* 4. try to use this user to connect to Maxscale
*
* expected result:
* Authentication is ok
*
* actual result:
* Access denied for user 'user'@'192.168.122.1' (using password: YES)
*
* The issue was discovered with the following setup state:
*
* MaxScale> show servers
* Server 0x3428260 (server1)
* Server: 192.168.122.106
* Status: Master, Running
* Protocol: MySQLBackend
* Port: 3306
* Server Version: 5.5.40-MariaDB-log
* Node Id: 106
* Master Id: -1
* Slave Ids: 107, 108 , 109
* Repl Depth: 0
* Number of connections: 0
* Current no. of conns: 0
* Current no. of operations: 0
* Server 0x3428160 (server2)
* Server: 192.168.122.107
* Status: Slave, Running
* Protocol: MySQLBackend
* Port: 3306
* Server Version: 5.5.40-MariaDB-log
* Node Id: 107
* Master Id: 106
* Slave Ids:
* Repl Depth: 1
* Number of connections: 0
* Current no. of conns: 0
* Current no. of operations: 0
* Server 0x3428060 (server3)
* Server: 192.168.122.108
* Status: Running
* Protocol: MySQLBackend
* Port: 3306
* Server Version: 5.5.40-MariaDB-log
* Node Id: 108
* Master Id: 106
* Slave Ids:
* Repl Depth: 1
* Number of connections: 0
* Current no. of conns: 0
* Current no. of operations: 0
* Server 0x338c3f0 (server4)
* Server: 192.168.122.109
* Status: Running
* Protocol: MySQLBackend
* Port: 3306
* Server Version: 5.5.40-MariaDB-log
* Node Id: 109
* Master Id: 106
* Slave Ids:
* Repl Depth: 1
* Number of connections: 0
* Current no. of conns: 0
* Current no. of operations: 0
*
*
* MaxScale read the mysql.user table from server4, which was not properly replicated
* Comment 1 Mark Riddoch 2014-11-05 09:55:07 UTC
* In the reload users routine, if there is a master available then use that rather than the first.
*/
#include <iostream>
#include <maxtest/testconnections.hh>
int main(int argc, char* argv[])
{
TestConnections* Test = new TestConnections(argc, argv);
Test->set_timeout(10);
int i;
Test->repl->connect();
Test->maxscales->connect_maxscale(0);
for (i = 1; i < Test->repl->N; i++)
{
execute_query(Test->repl->nodes[i], (char*) "stop slave;");
}
execute_query(Test->maxscales->conn_rwsplit[0],
(char*) "CREATE USER 'test_user'@'%%' IDENTIFIED BY 'pass'");
MYSQL* conn = open_conn_no_db(Test->maxscales->rwsplit_port[0],
Test->maxscales->IP[0],
(char*) "test_user",
(char*) "pass",
Test->ssl);
if (conn == NULL)
{
Test->add_result(1, "Connection error\n");
}
for (i = 1; i < Test->repl->N; i++)
{
execute_query(Test->repl->nodes[i], (char*) "start slave;");
}
execute_query(Test->maxscales->conn_rwsplit[0], (char*) "DROP USER 'test_user'@'%%'");
Test->repl->close_connections();
Test->maxscales->close_maxscale_connections(0);
int rval = Test->global_result;
delete Test;
return rval;
}


@ -0,0 +1,76 @@
/**
* @file bug448.cpp bug448 regression case ("Wildcard in host column of mysql.user table don't work properly")
*
* Test creates user1@xxx.%.%.% and tries to use it to connect
*/
#include <iostream>
#include <maxtest/get_my_ip.hh>
#include <maxtest/testconnections.hh>
int main(int argc, char* argv[])
{
char my_ip[1024];
char my_ip_db[1024];
char* first_dot;
TestConnections* Test = new TestConnections(argc, argv);
Test->set_timeout(20);
Test->repl->connect();
get_my_ip(Test->maxscales->IP[0], my_ip);
Test->tprintf("Test machine IP (got via network request) %s\n", my_ip);
Test->add_result(Test->get_client_ip(0, my_ip_db), "Unable to get IP using connection to DB\n");
Test->tprintf("Test machine IP (got via SHOW PROCESSLIST) %s\n", my_ip_db);
first_dot = strstr(my_ip, ".");
strcpy(first_dot, ".%.%.%");
Test->tprintf("Test machine IP with %% %s\n", my_ip);
Test->tprintf("Connecting to Maxscale\n");
Test->add_result(Test->maxscales->connect_maxscale(0), "Error connecting to Maxscale\n");
Test->tprintf("Creating user 'user1' for %s host\n", my_ip);
Test->set_timeout(30);
Test->add_result(execute_query(Test->maxscales->conn_rwsplit[0], "CREATE USER user1@'%s';", my_ip),
"Failed to create user");
Test->add_result(execute_query(Test->maxscales->conn_rwsplit[0],
"GRANT ALL PRIVILEGES ON *.* TO user1@'%s' identified by 'pass1'; FLUSH PRIVILEGES;",
my_ip),
"Failed to grant privileges.");
Test->tprintf("Trying to open connection using user1\n");
MYSQL* conn = open_conn(Test->maxscales->rwsplit_port[0],
Test->maxscales->IP[0],
(char*) "user1",
(char*) "pass1",
Test->ssl);
if (mysql_errno(conn) != 0)
{
Test->add_result(1, "TEST_FAILED! Authentication failed! error: %s\n", mysql_error(conn));
}
else
{
Test->tprintf("Authentication for user1@'%s' is ok", my_ip);
if (conn != NULL)
{
mysql_close(conn);
}
}
Test->add_result(execute_query(Test->maxscales->conn_rwsplit[0],
"DROP USER user1@'%s'; FLUSH PRIVILEGES;",
my_ip),
"Query Failed\n");
Test->maxscales->close_maxscale_connections(0);
Test->check_maxscale_alive(0);
int rval = Test->global_result;
delete Test;
return rval;
}

system-test/avro.cpp

@ -0,0 +1,96 @@
/**
* @file avro.cpp test of avro
* - setup binlog and avro
* - put some data to t1
* - check avro file with "maxavrocheck -vv /var/lib/maxscale/avro/test.t1.000001.avro"
* - check that data in avro file is correct
*/
#include <sstream>
#include <iostream>
#include <maxtest/maxadmin_operations.hh>
#include <maxtest/maxinfo_func.hh>
#include <maxtest/sql_t1.hh>
#include <maxtest/testconnections.hh>
using std::cout;
using std::endl;
int main(int argc, char* argv[])
{
int exit_code;
TestConnections::skip_maxscale_start(true);
TestConnections test(argc, argv);
test.set_timeout(600);
test.maxscales->ssh_node(0, (char*) "rm -rf /var/lib/maxscale/avro", true);
/** Start master to binlogrouter replication */
test.replicate_from_master();
test.set_timeout(120);
test.repl->connect();
// MXS-2095: Crash on GRANT CREATE TABLE
execute_query(test.repl->nodes[0], "CREATE USER test IDENTIFIED BY 'test'");
execute_query(test.repl->nodes[0], "GRANT CREATE TEMPORARY TABLES ON *.* TO test");
execute_query(test.repl->nodes[0], "DROP USER test");
create_t1(test.repl->nodes[0]);
insert_into_t1(test.repl->nodes[0], 3);
execute_query(test.repl->nodes[0], "FLUSH LOGS");
test.repl->close_connections();
/** Give avrorouter some time to process the events */
test.stop_timeout();
sleep(10);
test.set_timeout(120);
char* output = test.maxscales->ssh_node_output(0,
"maxavrocheck -d /var/lib/maxscale/avro/test.t1.000001.avro",
true,
&exit_code);
std::istringstream iss;
iss.str(output);
int x1_exp = 0;
int fl_exp = 0;
int x = 16;
for (std::string line; std::getline(iss, line);)
{
long long int x1, fl;
test.set_timeout(20);
get_x_fl_from_json((char*)line.c_str(), &x1, &fl);
if (x1 != x1_exp || fl != fl_exp)
{
test.add_result(1,
"Output:x1 %lld, fl %lld, Expected: x1 %d, fl %d",
x1,
fl,
x1_exp,
fl_exp);
break;
}
if ((++x1_exp) >= x)
{
x1_exp = 0;
x = x * 16;
fl_exp++;
test.tprintf("fl = %d", fl_exp);
}
}
if (fl_exp != 3)
{
test.add_result(1, "not enough lines in avrocheck output");
}
execute_query(test.repl->nodes[0], "DROP TABLE test.t1");
test.stop_timeout();
test.revert_replicate_from_master();
return test.global_result;
}


@ -0,0 +1,84 @@
/**
* @file avro_alter.cpp Test ALTER TABLE handling of avrorouter
*/
#include <maxtest/testconnections.hh>
#include <jansson.h>
#include <sstream>
#include <iostream>
int main(int argc, char* argv[])
{
int exit_code;
TestConnections::skip_maxscale_start(true);
TestConnections test(argc, argv);
test.set_timeout(600);
test.maxscales->ssh_node(0, (char*) "rm -rf /var/lib/maxscale/avro", true);
/** Start master to binlogrouter replication */
test.replicate_from_master();
test.set_timeout(120);
test.repl->connect();
// Execute two events for each version of the schema
execute_query_silent(test.repl->nodes[0], "DROP TABLE test.t1");
execute_query(test.repl->nodes[0], "CREATE TABLE test.t1(id INT)");
execute_query(test.repl->nodes[0], "INSERT INTO test.t1 VALUES (1)");
execute_query(test.repl->nodes[0], "DELETE FROM test.t1");
execute_query(test.repl->nodes[0], "ALTER TABLE test.t1 ADD COLUMN a VARCHAR(100)");
execute_query(test.repl->nodes[0], "INSERT INTO test.t1 VALUES (2, \"a\")");
execute_query(test.repl->nodes[0], "DELETE FROM test.t1");
execute_query(test.repl->nodes[0], "ALTER TABLE test.t1 ADD COLUMN b FLOAT");
execute_query(test.repl->nodes[0], "INSERT INTO test.t1 VALUES (3, \"b\", 3.0)");
execute_query(test.repl->nodes[0], "DELETE FROM test.t1");
execute_query(test.repl->nodes[0], "ALTER TABLE test.t1 CHANGE COLUMN b c DATETIME(3)");
execute_query(test.repl->nodes[0], "INSERT INTO test.t1 VALUES (4, \"c\", NOW())");
execute_query(test.repl->nodes[0], "DELETE FROM test.t1");
execute_query(test.repl->nodes[0], "ALTER TABLE test.t1 DROP COLUMN c");
execute_query(test.repl->nodes[0], "INSERT INTO test.t1 VALUES (5, \"d\")");
execute_query(test.repl->nodes[0], "DELETE FROM test.t1");
test.repl->close_connections();
/** Give avrorouter some time to process the events */
test.stop_timeout();
sleep(10);
test.set_timeout(120);
for (int i = 1; i <= 5; i++)
{
std::stringstream cmd;
cmd << "maxavrocheck -d /var/lib/maxscale/avro/test.t1.00000" << i << ".avro";
char* rows = test.maxscales->ssh_node_output(0, cmd.str().c_str(), true, &exit_code);
int nrows = 0;
std::istringstream iss;
iss.str(rows);
for (std::string line; std::getline(iss, line);)
{
json_error_t err;
json_t* json = json_loads(line.c_str(), 0, &err);
test.tprintf("%s", line.c_str());
test.add_result(json == NULL, "Failed to parse JSON: %s", line.c_str());
json_decref(json);
nrows++;
}
// The number of changes that are present in each version of the schema
const int nchanges = 2;
test.add_result(nrows != nchanges,
"Expected %d line in file number %d, got %d: %s",
nchanges,
i,
nrows,
rows);
free(rows);
}
test.stop_timeout();
execute_query(test.repl->nodes[0], "DROP TABLE test.t1;RESET MASTER");
test.revert_replicate_from_master();
return test.global_result;
}

system-test/avro_long.cpp

@ -0,0 +1,59 @@
/**
* @file avro_long.cpp test of avro
* - setup binlog and avro
* - put some data to t1 in the loop
*/
#include <iostream>
#include <maxtest/maxadmin_operations.hh>
#include <maxtest/sql_t1.hh>
#include <maxtest/test_binlog_fnc.hh>
#include <maxtest/testconnections.hh>
int main(int argc, char* argv[])
{
TestConnections* Test = new TestConnections(argc, argv);
Test->set_timeout(600);
Test->maxscales->stop_maxscale(0);
Test->maxscales->ssh_node(0, (char*) "rm -rf /var/lib/maxscale/avro", true);
// Test->maxscales->ssh_node(0, (char *) "mkdir /var/lib/maxscale/avro; chown -R maxscale:maxscale
// /var/lib/maxscale/avro", true);
Test->repl->connect();
execute_query(Test->repl->nodes[0], (char*) "DROP TABLE IF EXISTS t1;");
Test->repl->close_connections();
sleep(5);
Test->start_binlog(0);
Test->set_timeout(120);
Test->maxscales->stop_maxscale(0);
Test->maxscales->ssh_node(0, (char*) "rm -rf /var/lib/maxscale/avro", true);
Test->set_timeout(120);
Test->maxscales->start_maxscale(0);
Test->set_timeout(60);
Test->repl->connect();
create_t1(Test->repl->nodes[0]);
for (int i = 0; i < 1000000; i++)
{
Test->set_timeout(60);
insert_into_t1(Test->repl->nodes[0], 3);
Test->tprintf("i=%d\n", i);
}
Test->repl->close_connections();
int rval = Test->global_result;
delete Test;
return rval;
}


@ -0,0 +1,42 @@
/**
* @file backend_auth_fail.cpp Repeatedly connect to MaxScale while the backends reject all connections
*
* MaxScale should not crash
*/
#include <maxtest/testconnections.hh>
int main(int argc, char** argv)
{
MYSQL* mysql[1000];
TestConnections* Test = new TestConnections(argc, argv);
Test->repl->execute_query_all_nodes((char*) "set global max_connections = 10;");
for (int x = 0; x < 3; x++)
{
Test->tprintf("Creating 100 connections...\n");
for (int i = 0; i < 100; i++)
{
Test->set_timeout(30);
mysql[i] = Test->maxscales->open_readconn_master_connection(0);
execute_query_silent(mysql[i], "select 1");
}
Test->stop_timeout();
for (int i = 0; i < 100; i++)
{
Test->set_timeout(30);
mysql_close(mysql[i]);
}
}
// Wait for the connections to clean up
Test->stop_timeout();
sleep(2 * Test->repl->N);
Test->check_maxscale_alive(0);
int rval = Test->global_result;
delete Test;
return rval;
}

system-test/binary_ps.cpp Normal file

@ -0,0 +1,74 @@
/**
* Test binary protocol prepared statement routing
*/
#include <maxtest/testconnections.hh>
int main(int argc, char** argv)
{
TestConnections test(argc, argv);
char server_id[test.repl->N][1024];
test.repl->connect();
/** Get server_id for each node */
for (int i = 0; i < test.repl->N; i++)
{
sprintf(server_id[i], "%d", test.repl->get_server_id(i));
}
test.maxscales->connect_maxscale(0);
test.set_timeout(20);
MYSQL_STMT* stmt = mysql_stmt_init(test.maxscales->conn_rwsplit[0]);
const char* write_query = "SELECT @@server_id, @@last_insert_id";
const char* read_query = "SELECT @@server_id";
char buffer[100] = "";
char buffer2[100] = "";
my_bool err = false;
my_bool isnull = false;
MYSQL_BIND bind[2] = {};
bind[0].buffer_length = sizeof(buffer);
bind[0].buffer = buffer;
bind[0].error = &err;
bind[0].is_null = &isnull;
bind[1].buffer_length = sizeof(buffer2);
bind[1].buffer = buffer2;
bind[1].error = &err;
bind[1].is_null = &isnull;
// Execute a write, should return the master's server ID
test.add_result(mysql_stmt_prepare(stmt, write_query, strlen(write_query)), "Failed to prepare");
test.add_result(mysql_stmt_execute(stmt), "Failed to execute");
test.add_result(mysql_stmt_bind_result(stmt, bind), "Failed to bind result");
test.add_result(mysql_stmt_fetch(stmt), "Failed to fetch result");
test.add_result(strcmp(buffer, server_id[0]), "Expected server_id '%s', got '%s'", server_id[0], buffer);
mysql_stmt_close(stmt);
stmt = mysql_stmt_init(test.maxscales->conn_rwsplit[0]);
// Execute read, should return a slave server ID
test.add_result(mysql_stmt_prepare(stmt, read_query, strlen(read_query)), "Failed to prepare");
test.add_result(mysql_stmt_execute(stmt), "Failed to execute");
test.add_result(mysql_stmt_bind_result(stmt, bind), "Failed to bind result");
test.add_result(mysql_stmt_fetch(stmt), "Failed to fetch result");
test.add_result(strcmp(buffer,
server_id[1]) && strcmp(buffer, server_id[2]) && strcmp(buffer, server_id[3]),
"Expected one of the slave server IDs (%s, %s or %s), not '%s'",
server_id[1],
server_id[2],
server_id[3],
buffer);
mysql_stmt_close(stmt);
test.maxscales->close_maxscale_connections(0);
// MXS-2266: COM_STMT_CLOSE causes a warning to be logged
test.log_excludes(0, "Closing unknown prepared statement");
return test.global_result;
}


@ -0,0 +1,199 @@
/**
* Test that binary protocol cursors work as expected
*/
#include <maxtest/testconnections.hh>
#include <iostream>
using std::cout;
using std::endl;
void test1(TestConnections& test)
{
test.maxscales->connect_maxscale(0);
test.set_timeout(20);
MYSQL_STMT* stmt = mysql_stmt_init(test.maxscales->conn_rwsplit[0]);
const char* query = "SELECT @@server_id";
char buffer[100] = "";
my_bool err = false;
my_bool isnull = false;
MYSQL_BIND bind[1] = {};
bind[0].buffer_length = sizeof(buffer);
bind[0].buffer = buffer;
bind[0].error = &err;
bind[0].is_null = &isnull;
cout << "Prepare" << endl;
test.add_result(mysql_stmt_prepare(stmt, query, strlen(query)), "Failed to prepare");
unsigned long cursor_type = CURSOR_TYPE_READ_ONLY;
unsigned long rows = 0;
test.add_result(mysql_stmt_attr_set(stmt, STMT_ATTR_CURSOR_TYPE, &cursor_type),
"Failed to set attributes");
test.add_result(mysql_stmt_attr_set(stmt, STMT_ATTR_PREFETCH_ROWS, &rows), "Failed to set attributes");
cout << "Execute" << endl;
test.add_result(mysql_stmt_execute(stmt), "Failed to execute");
cout << "Bind result" << endl;
test.add_result(mysql_stmt_bind_result(stmt, bind), "Failed to bind result");
cout << "Fetch row" << endl;
test.add_result(mysql_stmt_fetch(stmt), "Failed to fetch result");
test.add_result(strlen(buffer) == 0, "Expected result buffer to not be empty");
cout << "Close statement" << endl;
mysql_stmt_close(stmt);
test.maxscales->close_maxscale_connections(0);
}
void test2(TestConnections& test)
{
test.set_timeout(20);
MYSQL* conn = open_conn_db_timeout(test.maxscales->rwsplit_port[0],
test.maxscales->ip(0),
"test",
test.maxscales->user_name,
test.maxscales->password,
1,
false);
MYSQL_STMT* stmt1 = mysql_stmt_init(conn);
MYSQL_STMT* stmt2 = mysql_stmt_init(conn);
const char* query1 = "SELECT @@server_id";
const char* query2 = "SELECT @@server_id, @@last_insert_id";
char buffer1[100] = "";
char buffer2[100] = "";
char buffer2_2[100] = "";
my_bool err = false;
my_bool isnull = false;
MYSQL_BIND bind1[1] = {};
MYSQL_BIND bind2[2] = {};
bind1[0].buffer_length = sizeof(buffer1);
bind1[0].buffer = buffer1;
bind1[0].error = &err;
bind1[0].is_null = &isnull;
bind2[0].buffer_length = sizeof(buffer2);
bind2[0].buffer = buffer2;
bind2[0].error = &err;
bind2[0].is_null = &isnull;
bind2[1].buffer_length = sizeof(buffer2);
bind2[1].buffer = buffer2_2;
bind2[1].error = &err;
bind2[1].is_null = &isnull;
cout << "First prepare, should go to slave" << endl;
test.add_result(mysql_stmt_prepare(stmt1, query1, strlen(query1)), "Failed to prepare");
unsigned long cursor_type = CURSOR_TYPE_READ_ONLY;
unsigned long rows = 0;
test.add_result(mysql_stmt_attr_set(stmt1, STMT_ATTR_CURSOR_TYPE, &cursor_type),
"Failed to set attributes");
test.add_result(mysql_stmt_attr_set(stmt1, STMT_ATTR_PREFETCH_ROWS, &rows), "Failed to set attributes");
test.add_result(mysql_stmt_execute(stmt1), "Failed to execute");
test.add_result(mysql_stmt_bind_result(stmt1, bind1), "Failed to bind result");
int rc1 = mysql_stmt_fetch(stmt1);
test.add_result(rc1, "Failed to fetch result: %d %s %s", rc1, mysql_stmt_error(stmt1), mysql_error(conn));
mysql_stmt_close(stmt1);
cout << "Second prepare, should go to master" << endl;
test.add_result(mysql_stmt_prepare(stmt2, query2, strlen(query2)), "Failed to prepare");
test.add_result(mysql_stmt_attr_set(stmt2, STMT_ATTR_CURSOR_TYPE, &cursor_type),
"Failed to set attributes");
test.add_result(mysql_stmt_attr_set(stmt2, STMT_ATTR_PREFETCH_ROWS, &rows), "Failed to set attributes");
test.add_result(mysql_stmt_execute(stmt2), "Failed to execute");
test.add_result(mysql_stmt_bind_result(stmt2, bind2), "Failed to bind result");
int rc2 = mysql_stmt_fetch(stmt2);
test.add_result(rc2, "Failed to fetch result: %d %s %s", rc2, mysql_stmt_error(stmt2), mysql_error(conn));
mysql_stmt_close(stmt2);
/** Get the master's server_id */
char server_id[1024];
test.repl->connect();
sprintf(server_id, "%d", test.repl->get_server_id(0));
test.add_result(strcmp(buffer1, buffer2) == 0, "Expected results to differ");
test.add_result(strcmp(buffer2, server_id) != 0,
"Expected prepare 2 to go to the master (%s) but it's %s",
server_id, buffer2);
}
void test3(TestConnections& test)
{
test.maxscales->connect_maxscale(0);
test.set_timeout(20);
MYSQL_STMT* stmt = mysql_stmt_init(test.maxscales->conn_rwsplit[0]);
const char* query = "SELECT @@server_id";
char buffer[100] = "";
my_bool err = false;
my_bool isnull = false;
MYSQL_BIND bind[1] = {};
bind[0].buffer_length = sizeof(buffer);
bind[0].buffer = buffer;
bind[0].error = &err;
bind[0].is_null = &isnull;
test.add_result(mysql_stmt_prepare(stmt, query, strlen(query)), "Failed to prepare");
cout << "Start transaction" << endl;
test.add_result(mysql_query(test.maxscales->conn_rwsplit[0], "START TRANSACTION"),
"START TRANSACTION should succeed: %s",
mysql_error(test.maxscales->conn_rwsplit[0]));
unsigned long cursor_type = CURSOR_TYPE_READ_ONLY;
unsigned long rows = 0;
test.add_result(mysql_stmt_attr_set(stmt, STMT_ATTR_CURSOR_TYPE, &cursor_type),
"Failed to set attributes");
test.add_result(mysql_stmt_attr_set(stmt, STMT_ATTR_PREFETCH_ROWS, &rows), "Failed to set attributes");
cout << "Execute" << endl;
test.add_result(mysql_stmt_execute(stmt), "Failed to execute");
test.add_result(mysql_stmt_bind_result(stmt, bind), "Failed to bind result");
test.add_result(mysql_stmt_fetch(stmt), "Failed to fetch result");
test.add_result(strlen(buffer) == 0, "Expected result buffer to not be empty");
cout << "Commit" << endl;
test.add_result(mysql_query(test.maxscales->conn_rwsplit[0], "COMMIT"),
"COMMIT should succeed: %s",
mysql_error(test.maxscales->conn_rwsplit[0]));
mysql_stmt_close(stmt);
test.maxscales->close_maxscale_connections(0);
char server_id[1024];
test.repl->connect();
sprintf(server_id, "%d", test.repl->get_server_id(0));
test.add_result(strcmp(buffer, server_id) != 0,
"Expected the execute inside a transaction to go to the master (%s) but it's %s",
server_id, buffer);
}
int main(int argc, char** argv)
{
TestConnections test(argc, argv);
cout << "Test 1: Testing simple cursor usage" << endl;
test1(test);
cout << "Done" << endl << endl;
cout << "Test 2: Testing read-write splitting with cursors" << endl;
test2(test);
cout << "Done" << endl << endl;
cout << "Test 3: Testing transactions with cursors" << endl;
test3(test);
cout << "Done" << endl << endl;
return test.global_result;
}


@ -0,0 +1,46 @@
/**
 * @file binlog_change_master.cpp In the binlog router setup, stop the master and promote one of the
 * slaves to be the new master
 * - set up binlog
 * - start a thread which executes transactions
 * - block the master
 * - the transaction thread tries to elect a new master and continues with it
 * - continue transactions with the new master
 * - stop transactions
 * - wait
 * - check data on all nodes
*/
#include <maxtest/testconnections.hh>
#include "binlog_change_master_common.cpp"
int main(int argc, char* argv[])
{
TestConnections test(argc, argv);
auto cb = [&](MYSQL* blr) {
// Get the name of the current binlog
std::string file = get_row(test.repl->nodes[0], "SHOW MASTER STATUS")[0];
std::string target = get_row(test.repl->nodes[2], "SHOW MASTER STATUS")[0];
// Flush logs until the candidate master has a higher binlog sequence number
while (target.back() <= file.back())
{
execute_query(test.repl->nodes[2], "FLUSH LOGS");
target = get_row(test.repl->nodes[2], "SHOW MASTER STATUS")[0];
}
// Promote the candidate master by pointing the binlogrouter at it
test.try_query(blr, "STOP SLAVE");
test.try_query(blr, "CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d,"
"MASTER_LOG_FILE='%s', MASTER_LOG_POS=4",
test.repl->IP[2], test.repl->port[2], target.c_str());
test.try_query(blr, "START SLAVE");
};
run_test(test, cb);
return test.global_result;
}


@ -0,0 +1,47 @@
#include <maxtest/testconnections.hh>
#include <functional>
void run_test(TestConnections& test, std::function<void(MYSQL*)> cb)
{
test.set_timeout(120);
test.start_binlog(0);
test.repl->connect();
// Create a table and insert some data
execute_query(test.repl->nodes[0], "CREATE OR REPLACE TABLE test.t1 (id INT)");
for (int i = 0; i < 25; i++)
{
test.try_query(test.repl->nodes[0], "INSERT INTO test.t1 VALUES (%d)", i);
}
// Sync the candidate master
std::string binlog_pos = get_row(test.repl->nodes[0], "SELECT @@gtid_binlog_pos")[0];
execute_query(test.repl->nodes[2], "SELECT MASTER_GTID_WAIT('%s', 120)", binlog_pos.c_str());
execute_query(test.repl->nodes[2], "STOP SLAVE");
MYSQL* blr = open_conn_no_db(test.maxscales->binlog_port[0],
test.maxscales->IP[0],
test.repl->user_name,
test.repl->password,
test.ssl);
// Call the callback that switches the master
cb(blr);
mysql_close(blr);
// Do another batch of inserts
for (int i = 0; i < 25; i++)
{
test.try_query(test.repl->nodes[2], "INSERT INTO test.t1 VALUES (%d)", i);
}
// Sync a slave and verify all of the data is replicated
binlog_pos = get_row(test.repl->nodes[2], "SELECT @@gtid_binlog_pos")[0];
execute_query(test.repl->nodes[3], "SELECT MASTER_GTID_WAIT('%s', 120)", binlog_pos.c_str());
std::string sum = get_row(test.repl->nodes[3], "SELECT COUNT(*) FROM test.t1")[0];
test.expect(sum == "50", "Inserted 50 rows but only %s were replicated", sum.c_str());
execute_query(test.repl->nodes[0], "DROP TABLE test.t1");
}


@ -0,0 +1,25 @@
/**
* The GTID version of binlog_change_master
*/
#include <maxtest/testconnections.hh>
#include "binlog_change_master_common.cpp"
int main(int argc, char* argv[])
{
TestConnections test(argc, argv);
test.binlog_master_gtid = true;
test.binlog_slave_gtid = true;
auto cb = [&](MYSQL* blr) {
test.try_query(blr, "STOP SLAVE");
test.try_query(blr, "CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d,"
"MASTER_USE_GTID=SLAVE_POS",
test.repl->IP[2], test.repl->port[2]);
test.try_query(blr, "START SLAVE");
};
run_test(test, cb);
return test.global_result;
}


@ -0,0 +1,12 @@
[mysqld]
plugin-load-add=file_key_management.so
file_key_management_encryption_algorithm=aes_cbc
file_key_management_filename = /etc/mariadb_binlog_keys.txt
encrypt-binlog=1
# Enable checksum
binlog_checksum=CRC32
# Enable large packet handling
max_allowed_packet=1042M
innodb_log_file_size=142M


@ -0,0 +1,12 @@
[mysqld]
plugin-load-add=file_key_management.so
file_key_management_encryption_algorithm=aes_ctr
file_key_management_filename = /etc/mariadb_binlog_keys.txt
encrypt-binlog=1
# Enable checksum
binlog_checksum=CRC32
# Enable large packet handling
max_allowed_packet=1042M
innodb_log_file_size=142M


@ -0,0 +1,21 @@
/**
 * @file setup_incompl Try to start the binlog setup with an incomplete MaxScale.cnf
 * - check for a crash
*/
#include <iostream>
#include <maxtest/testconnections.hh>
int main(int argc, char* argv[])
{
TestConnections* Test = new TestConnections(argc, argv);
Test->set_timeout(60);
Test->maxscales->connect_maxscale(0);
Test->maxscales->close_maxscale_connections(0);
int rval = Test->global_result;
delete Test;
return rval;
}


@ -0,0 +1,54 @@
/**
* @file binlog_semisync.cpp Same test as setup_binlog, but with semisync enabled
*/
#include <iostream>
#include <maxtest/test_binlog_fnc.hh>
#include <maxtest/testconnections.hh>
int main(int argc, char* argv[])
{
TestConnections test(argc, argv);
test.repl->connect();
test.binlog_cmd_option = 1;
test.start_binlog(0);
test.repl->connect();
test.tprintf("install semisync plugin");
execute_query(test.repl->nodes[0],
(char*) "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';");
test.tprintf("Reconnect");
test.repl->close_connections();
test.repl->connect();
test.tprintf("SET GLOBAL rpl_semi_sync_master_enabled = 1;");
execute_query(test.repl->nodes[0], (char*) "SET GLOBAL rpl_semi_sync_master_enabled = 1;");
test.repl->close_connections();
test_binlog(&test);
test.repl->connect();
test.tprintf("SET GLOBAL rpl_semi_sync_master_enabled = 0;");
execute_query(test.repl->nodes[0], (char*) "SET GLOBAL rpl_semi_sync_master_enabled = 0;");
test.repl->close_connections();
test_binlog(&test);
test.repl->connect();
test.tprintf("uninstall semisync plugin");
execute_query(test.repl->nodes[0], (char*) "UNINSTALL PLUGIN rpl_semi_sync_master;");
test.tprintf("Reconnect");
test.repl->close_connections();
test.repl->connect();
test.tprintf("SET GLOBAL rpl_semi_sync_master_enabled = 1;");
execute_query(test.repl->nodes[0], (char*) "SET GLOBAL rpl_semi_sync_master_enabled = 1;");
test.repl->close_connections();
test_binlog(&test);
test.repl->connect();
test.tprintf("SET GLOBAL rpl_semi_sync_master_enabled = 0;");
execute_query(test.repl->nodes[0], (char*) "SET GLOBAL rpl_semi_sync_master_enabled = 0;");
test.repl->sync_slaves();
test.repl->close_connections();
test_binlog(&test);
return test.global_result;
}

system-test/bug562.sh Executable file

@ -0,0 +1,31 @@
#!/bin/bash
###
## @file bug562.sh Regression case for the bug "Wrong error message for Access denied error"
## - try to connect with bad credentials directly to the MariaDB server and via MaxScale
## - compare error messages
export ssl_options="--ssl-cert=$src_dir/ssl-cert/client-cert.pem --ssl-key=$src_dir/ssl-cert/client-key.pem"
mariadb_err=`mysql -u no_such_user -psome_pwd -h $node_001_network $ssl_options $node_001_socket_cmd test 2>&1`
maxscale_err=`mysql -u no_such_user -psome_pwd -h ${maxscale_000_network} -P 4006 $ssl_options test 2>&1`
echo "MariaDB message"
echo "$mariadb_err"
echo " "
echo "Maxscale message"
echo "$maxscale_err"
res=0
#echo "$maxscale_err" | grep "$mariadb_err"
echo "$maxscale_err" |grep "ERROR 1045 (28000): Access denied for user 'no_such_user'@'"
if [ "$?" != 0 ]; then
echo "Maxscale message is not ok!"
echo "Message: $maxscale_err"
res=1
else
echo "Messages are the same"
res=0
fi
exit $res

system-test/bug564.sh Executable file

@ -0,0 +1,32 @@
#!/bin/bash
###
## @file bug564.sh Regression case for the bug "Wrong charset settings"
## - call MariaDB client with different --default-character-set= settings
## - check output of SHOW VARIABLES LIKE 'char%'
export ssl_options="--ssl-cert=$src_dir/ssl-cert/client-cert.pem --ssl-key=$src_dir/ssl-cert/client-key.pem"
for char_set in "latin1" "latin2"
do
line1=`mysql -u$node_user -p$node_password -h ${maxscale_000_network} -P 4006 $ssl_options --default-character-set="$char_set" -e "SHOW VARIABLES LIKE 'char%'" | grep "character_set_client"`
line2=`mysql -u$node_user -p$node_password -h ${maxscale_000_network} -P 4006 $ssl_options --default-character-set="$char_set" -e "SHOW VARIABLES LIKE 'char%'" | grep "character_set_connection"`
line3=`mysql -u$node_user -p$node_password -h ${maxscale_000_network} -P 4006 $ssl_options --default-character-set="$char_set" -e "SHOW VARIABLES LIKE 'char%'" | grep "character_set_results"`
echo $line1 | grep "$char_set"
res1=$?
echo $line2 | grep "$char_set"
res2=$?
echo $line3 | grep "$char_set"
res3=$?
if [[ $res1 != 0 ]] || [[ $res2 != 0 ]] || [[ $res3 != 0 ]] ; then
echo "charset is ignored"
mysql -u$node_user -p$node_password -h ${maxscale_000_network} -P 4006 $ssl_options --default-character-set="latin2" -e "SHOW VARIABLES LIKE 'char%'"
exit 1
fi
done
exit 0

system-test/bug567.sh Executable file

@ -0,0 +1,24 @@
#!/bin/bash
###
## @file bug567.sh Regression case for the bug "Crash if files from /dev/shm/ removed"
## - try to remove everything from /dev/shm/$maxscale_pid
## - check if MaxScale is alive
export ssl_options="--ssl-cert=$src_dir/ssl-cert/client-cert.pem --ssl-key=$src_dir/ssl-cert/client-key.pem"
#pid=`ssh -i $maxscale_sshkey -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${maxscale_000_whoami}@${maxscale_000_network} "pgrep maxscale"`
#echo "Maxscale pid is $pid"
echo "removing log directory from /dev/shm/"
if [ ${maxscale_000_network} != "127.0.0.1" ] ; then
ssh -i ${maxscale_000_keyfile} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${maxscale_000_whoami}@${maxscale_000_network} "sudo rm -rf /dev/shm/maxscale/*"
else
sudo rm -rf /dev/shm/maxscale/*
fi
sleep 1
echo "checking if Maxscale is alive"
echo "show databases;" | mysql -u$node_user -p$node_password -h ${maxscale_000_network} -P 4006 $ssl_options
res=$?
exit $res

system-test/bug587.cpp Normal file

@ -0,0 +1,114 @@
/**
* @file bug587.cpp regression case for the bug 587 ( "Hint filter don't work if listed before regex filter
* in configuration file" )
*
* - Maxscale.cnf
* @verbatim
* [hints]
* type=filter
* module=hintfilter
*
* [regex]
* type=filter
* module=regexfilter
* match=fetch
* replace=select
*
* [RW Split Router]
* type=service
* router= readwritesplit
* servers=server1, server2, server3,server4
* user=skysql
* passwd=skysql
* max_slave_connections=100%
* use_sql_variables_in=all
* router_options=slave_selection_criteria=LEAST_BEHIND_MASTER
* filters=hints|regex
* @endverbatim
 * - second test (bug587_1) is executed with "filters=regex|hints" (different order of filters)
 * - check if the hint filter is working by executing and comparing results:
* + via RWSPLIT: "select @@server_id; -- maxscale route to server server%d" (%d - node number)
* + directly to backend node "select @@server_id;"
 * - repeat the test with "filters=regex|hints" instead of "filters=hints|regex"
*/
/*
* Vilho Raatikka 2014-10-21 19:12:33 UTC
* If filters and rwsplit are configured as follows, hints don't work.
*
* [hints]
* type=filter
* module=hintfilter
*
* [regex]
* type=filter
* module=regexfilter
* match=fetch
* replace=select
*
* [RW Split Router]
* type=service
* router=readwritesplit
* servers=server1,server2,server3,server4
* max_slave_connections=100%
* use_sql_variables_in=all
* user=maxuser
* passwd=maxpwd
* filters=hints|regex
*
* Changing filters=regex|hints makes it work. This is due to processing order. Regex filter drops hint off.
* Comment 1 Vilho Raatikka 2014-10-23 18:08:07 UTC
* buffer.c:gwbuf_make_contiguous: hint wasn't duplicated to new GWBUF struct. As a result hints were lost if
* query rewriting resulted in longer query than the original.
*/
#include <iostream>
#include <unistd.h>
#include <maxtest/testconnections.hh>
using namespace std;
int main(int argc, char* argv[])
{
TestConnections* Test = new TestConnections(argc, argv);
Test->repl->limit_nodes(4);
Test->set_timeout(10);
Test->repl->connect();
Test->maxscales->connect_maxscale(0);
char server_id[256];
char server_id_d[256];
char hint_sql[64];
for (int i = 1; i < 25; i++)
{
for (int j = 0; j < Test->repl->N; j++)
{
Test->set_timeout(10);
sprintf(hint_sql, "select @@server_id; -- maxscale route to server server%d", j + 1);
Test->tprintf("%s\n", hint_sql);
find_field(Test->maxscales->conn_rwsplit[0], hint_sql, (char*) "@@server_id", &server_id[0]);
find_field(Test->repl->nodes[j],
(char*) "select @@server_id;",
(char*) "@@server_id",
&server_id_d[0]);
Test->tprintf("server%d ID from Maxscale: \t%s\n", j + 1, server_id);
Test->tprintf("server%d ID directly from node: \t%s\n", j + 1, server_id_d);
Test->add_result(strcmp(server_id, server_id_d), "Hints do not work!\n");
}
}
Test->maxscales->close_maxscale_connections(0);
Test->repl->close_connections();
Test->check_maxscale_alive(0);
int rval = Test->global_result;
delete Test;
return rval;
}

system-test/bug670_sql.h Normal file

@ -0,0 +1,41 @@
#pragma once
const char* bug670_sql
=
"set autocommit=0;\
use mysql;\
set autocommit=1;\
use test;\
set autocommit=0;\
use mysql;\
set autocommit=1;\
select user,host from user;\
set autocommit=0;\
use fakedb;\
use test;\
use mysql;\
use dontuse;\
use mysql;\
drop table if exists t1;\
commit;\
use test;\
use mysql;\
set autocommit=1;\
create table t1(id integer primary key);\
insert into t1 values(5);\
use test;\
use mysql;\
select user from user;\
set autocommit=0;\
set autocommit=1;\
set autocommit=0;\
insert into mysql.t1 values(7);\
use mysql;\
rollback work;\
commit;\
delete from mysql.t1 where id=7;\
insert into mysql.t1 values(7);\
select host,user from mysql.user;\
set autocommit=1;\
delete from mysql.t1 where id = 7; \
select 1 as \"endof cycle\" from dual;\n";

system-test/bug676.cpp Normal file

@ -0,0 +1,43 @@
/**
 * @file bug676.cpp Attempt to reproduce bug 676
* - connect to RWSplit
* - stop node0
* - sleep 20 seconds
* - reconnect
* - check if 'USE test' is ok
* - check MaxScale is alive
*/
#include <iostream>
#include <maxtest/testconnections.hh>
int main(int argc, char* argv[])
{
TestConnections::require_galera(true);
TestConnections test(argc, argv);
test.set_timeout(30);
test.maxscales->connect_maxscale(0);
test.tprintf("Stopping node 0");
test.galera->block_node(0);
test.maxscales->close_maxscale_connections(0);
test.stop_timeout();
test.tprintf("Waiting until the monitor picks a new master");
test.maxscales->wait_for_monitor();
test.set_timeout(30);
test.maxscales->connect_maxscale(0);
test.try_query(test.maxscales->conn_rwsplit[0], "USE test");
test.try_query(test.maxscales->conn_rwsplit[0], "show processlist;");
test.maxscales->close_maxscale_connections(0);
test.stop_timeout();
test.galera->unblock_node(0);
return test.global_result;
}

system-test/bulk_insert.cpp Normal file

@ -0,0 +1,233 @@
/**
* MXS-1121: MariaDB 10.2 Bulk Insert test
*
* This test is a copy of one of the examples for bulk inserts:
* https://mariadb.com/kb/en/mariadb/bulk-insert-column-wise-binding/
*/
#include <maxtest/testconnections.hh>
static int show_mysql_error(MYSQL* mysql)
{
printf("Error(%d) [%s] \"%s\"\n",
mysql_errno(mysql),
mysql_sqlstate(mysql),
mysql_error(mysql));
return 1;
}
static int show_stmt_error(MYSQL_STMT* stmt)
{
printf("Error(%d) [%s] \"%s\"\n",
mysql_stmt_errno(stmt),
mysql_stmt_sqlstate(stmt),
mysql_stmt_error(stmt));
return 1;
}
int bind_by_column(MYSQL* mysql)
{
MYSQL_STMT* stmt;
MYSQL_BIND bind[3];
/* Data for insert */
const char* surnames[] = {"Widenius", "Axmark", "N.N."};
unsigned long surnames_length[] = {8, 6, 4};
const char* forenames[] = {"Monty", "David", "will be replaced by default value"};
char forename_ind[] = {STMT_INDICATOR_NTS, STMT_INDICATOR_NTS, STMT_INDICATOR_DEFAULT};
char id_ind[] = {STMT_INDICATOR_NULL, STMT_INDICATOR_NULL, STMT_INDICATOR_NULL};
unsigned int array_size = 3;
if (mysql_query(mysql, "DROP TABLE IF EXISTS test.bulk_example1"))
{
return show_mysql_error(mysql);
}
if (mysql_query(mysql,
"CREATE TABLE test.bulk_example1 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY," \
"forename CHAR(30) NOT NULL DEFAULT 'unknown', surname CHAR(30))"))
{
return show_mysql_error(mysql);
}
stmt = mysql_stmt_init(mysql);
if (mysql_stmt_prepare(stmt, "INSERT INTO test.bulk_example1 VALUES (?,?,?)", -1))
{
return show_stmt_error(stmt);
}
memset(bind, 0, sizeof(MYSQL_BIND) * 3);
/* We autogenerate id's, so all indicators are STMT_INDICATOR_NULL */
bind[0].u.indicator = id_ind;
bind[0].buffer_type = MYSQL_TYPE_LONG;
bind[1].buffer = forenames;
bind[1].buffer_type = MYSQL_TYPE_STRING;
bind[1].u.indicator = forename_ind;
bind[2].buffer_type = MYSQL_TYPE_STRING;
bind[2].buffer = surnames;
bind[2].length = surnames_length;
/* set array size */
mysql_stmt_attr_set(stmt, STMT_ATTR_ARRAY_SIZE, &array_size);
/* bind parameter */
mysql_stmt_bind_param(stmt, bind);
/* execute */
if (mysql_stmt_execute(stmt))
{
return show_stmt_error(stmt);
}
mysql_stmt_close(stmt);
/* Check that the rows were inserted */
if (mysql_query(mysql, "SELECT * FROM test.bulk_example1"))
{
return show_mysql_error(mysql);
}
MYSQL_RES* res = mysql_store_result(mysql);
if (res == NULL || mysql_num_rows(res) != 3)
{
printf("Expected 3 rows but got %d (%s)\n", res ? (int)mysql_num_rows(res) : 0, mysql_error(mysql));
return 1;
}
if (mysql_query(mysql, "DROP TABLE test.bulk_example1"))
{
return show_mysql_error(mysql);
}
return 0;
}
int bind_by_row(MYSQL* mysql)
{
MYSQL_STMT* stmt;
MYSQL_BIND bind[3];
struct st_data
{
unsigned long id;
char id_ind;
char forename[30];
char forename_ind;
char surname[30];
char surname_ind;
};
struct st_data data[] =
{
{0, STMT_INDICATOR_NULL, "Monty", STMT_INDICATOR_NTS, "Widenius", STMT_INDICATOR_NTS},
{0, STMT_INDICATOR_NULL, "David", STMT_INDICATOR_NTS, "Axmark", STMT_INDICATOR_NTS},
{0, STMT_INDICATOR_NULL, "default", STMT_INDICATOR_DEFAULT, "N.N.", STMT_INDICATOR_NTS},
};
unsigned int array_size = 3;
size_t row_size = sizeof(struct st_data);
if (mysql_query(mysql, "DROP TABLE IF EXISTS bulk_example2"))
{
return show_mysql_error(mysql);
}
if (mysql_query(mysql,
"CREATE TABLE bulk_example2 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY," \
"forename CHAR(30) NOT NULL DEFAULT 'unknown', surname CHAR(30))"))
{
return show_mysql_error(mysql);
}
stmt = mysql_stmt_init(mysql);
if (mysql_stmt_prepare(stmt, "INSERT INTO bulk_example2 VALUES (?,?,?)", -1))
{
return show_stmt_error(stmt);
}
memset(bind, 0, sizeof(MYSQL_BIND) * 3);
/* We autogenerate id's, so all indicators are STMT_INDICATOR_NULL */
bind[0].u.indicator = &data[0].id_ind;
bind[0].buffer_type = MYSQL_TYPE_LONG;
bind[1].buffer = &data[0].forename;
bind[1].buffer_type = MYSQL_TYPE_STRING;
bind[1].u.indicator = &data[0].forename_ind;
bind[2].buffer_type = MYSQL_TYPE_STRING;
bind[2].buffer = &data[0].surname;
bind[2].u.indicator = &data[0].surname_ind;
/* set array size */
mysql_stmt_attr_set(stmt, STMT_ATTR_ARRAY_SIZE, &array_size);
/* set row size */
mysql_stmt_attr_set(stmt, STMT_ATTR_ROW_SIZE, &row_size);
/* bind parameter */
mysql_stmt_bind_param(stmt, bind);
/* execute */
if (mysql_stmt_execute(stmt))
{
return show_stmt_error(stmt);
}
mysql_stmt_close(stmt);
/* Check that the rows were inserted */
if (mysql_query(mysql, "SELECT * FROM test.bulk_example2"))
{
return show_mysql_error(mysql);
}
MYSQL_RES* res = mysql_store_result(mysql);
if (res == NULL || mysql_num_rows(res) != 3)
{
printf("Expected 3 rows but got %d (%s)\n", res ? (int)mysql_num_rows(res) : 0, mysql_error(mysql));
return 1;
}
if (mysql_query(mysql, "DROP TABLE test.bulk_example2"))
{
return show_mysql_error(mysql);
}
return 0;
}
int main(int argc, char** argv)
{
TestConnections::require_repl_version("10.2");
TestConnections test(argc, argv);
test.maxscales->connect_maxscale(0);
test.repl->connect();
test.tprintf("Testing column-wise binding with a direct connection");
test.add_result(bind_by_column(test.repl->nodes[0]), "Bulk inserts with a direct connection should work");
test.tprintf("Testing column-wise binding with readwritesplit");
test.add_result(bind_by_column(test.maxscales->conn_rwsplit[0]),
"Bulk inserts with readwritesplit should work");
test.tprintf("Testing column-wise binding with readconnroute");
test.add_result(bind_by_column(test.maxscales->conn_master[0]),
"Bulk inserts with readconnroute should work");
test.tprintf("Testing row-wise binding with a direct connection");
test.add_result(bind_by_row(test.repl->nodes[0]), "Bulk inserts with a direct connection should work");
test.tprintf("Testing row-wise binding with readwritesplit");
test.add_result(bind_by_row(test.maxscales->conn_rwsplit[0]),
"Bulk inserts with readwritesplit should work");
test.tprintf("Testing row-wise binding with readconnroute");
test.add_result(bind_by_row(test.maxscales->conn_master[0]),
"Bulk inserts with readconnroute should work");
test.maxscales->close_maxscale_connections(0);
return test.global_result;
}


@ -0,0 +1,9 @@
{
"store": [
{
"attribute": "table",
"op": "=",
"value": "caching"
}
]
}


@ -0,0 +1,4 @@
drop database if exists cachingdb;
create database cachingdb;
use cachingdb;
create table caching (a INT, b TEXT, c FLOAT);


@ -0,0 +1,2 @@
USE cachingdb;
DELETE FROM caching;


@ -0,0 +1 @@
drop database if exists cachingdb;


@ -0,0 +1,2 @@
USE cachingdb;
INSERT INTO caching VALUES (42, 'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis.', 3.14);


@ -0,0 +1,3 @@
USE cachingdb;
SELECT * FROM caching;
a b c


@ -0,0 +1,4 @@
USE cachingdb;
SELECT * FROM caching;
a b c
42 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis. 3.14


@ -0,0 +1,6 @@
USE cachingdb;
SELECT * FROM caching;
a b c
84 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula. 6.28

@@ -0,0 +1,8 @@
USE cachingdb;
SELECT * FROM caching;
a b c
126 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula. 9.42

@@ -0,0 +1,4 @@
USE cachingdb;
UPDATE caching SET a=84, b='Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula.', c=6.28;

@@ -0,0 +1,4 @@
USE cachingdb;
UPDATE caching SET a=84, b='Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula.', c=6.28;

@@ -0,0 +1,6 @@
USE cachingdb;
UPDATE caching SET a=126, b='Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula.', c=9.42;

@@ -0,0 +1,14 @@
#
# Cache basic
#
# See ../cache_rules.
--disable_warnings
drop database if exists cachingdb;
--enable_warnings
create database cachingdb;
use cachingdb;
create table caching (a INT, b TEXT, c FLOAT);

@@ -0,0 +1,11 @@
#
# Cache basic
#
# See ../cache_rules.
#
# This script assumes t/create.test has been successfully executed.
#
USE cachingdb;
DELETE FROM caching;

@@ -0,0 +1,6 @@
#
# Cache basic
#
# See ../cache_rules.
drop database if exists cachingdb;

@@ -0,0 +1,11 @@
#
# Cache basic
#
# See ../cache_rules.
#
# This script assumes t/create.test has been successfully executed.
#
USE cachingdb;
INSERT INTO caching VALUES (42, 'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis.', 3.14);

@@ -0,0 +1,11 @@
#
# Cache basic
#
# See ../cache_rules.
#
# This script assumes t/create.test has been successfully executed.
#
USE cachingdb;
SELECT * FROM caching;

@@ -0,0 +1,11 @@
#
# Cache basic
#
# See ../cache_rules.
#
# This script assumes t/insert1.test has been successfully executed.
#
USE cachingdb;
SELECT * FROM caching;

@@ -0,0 +1,11 @@
#
# Cache basic
#
# See ../cache_rules.
#
# This script assumes t/update2.test has been successfully executed.
#
USE cachingdb;
SELECT * FROM caching;

@@ -0,0 +1,11 @@
#
# Cache basic
#
# See ../cache_rules.
#
# This script assumes t/update3.test has been successfully executed.
#
USE cachingdb;
SELECT * FROM caching;

@@ -0,0 +1,13 @@
#
# Cache basic
#
# See ../cache_rules.
#
# This script assumes t/insert.test has been successfully executed.
#
USE cachingdb;
UPDATE caching SET a=84, b='Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula.', c=6.28;

@@ -0,0 +1,13 @@
#
# Cache basic
#
# See ../cache_rules.
#
# This script assumes t/insert.test has been successfully executed.
#
USE cachingdb;
UPDATE caching SET a=84, b='Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula.', c=6.28;

@@ -0,0 +1,15 @@
#
# Cache basic
#
# See ../cache_rules.
#
# This script assumes t/insert.test has been successfully executed.
#
USE cachingdb;
UPDATE caching SET a=126, b='Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi eget turpis massa. Duis sit amet commodo ante. Aenean eleifend ipsum sed enim fermentum, eget efficitur risus pulvinar. Maecenas tellus augue, laoreet eget risus porta, porta volutpat tellus. Mauris aliquam vitae velit id faucibus. Aenean euismod, mi nec luctus lacinia, ligula eros commodo velit, ac sagittis ipsum magna scelerisque mi. Aliquam sed sapien sit amet mi convallis pharetra. Sed facilisis, felis ac eleifend fringilla, mauris augue egestas dui, aliquet mollis tellus enim eget erat. Vivamus rhoncus neque nec feugiat mollis.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula.
Ut feugiat facilisis urna, ac mollis purus iaculis eget. Fusce egestas est quis mauris euismod, et laoreet nunc commodo. Nullam vehicula tellus in sapien viverra vulputate. Etiam eu libero ultrices mi auctor laoreet. Curabitur lacus nisi, ullamcorper eu quam a, auctor malesuada est. Sed feugiat sagittis augue, non semper ante commodo at. Donec lobortis dapibus nunc sit amet interdum. Quisque egestas elementum enim, nec malesuada nisl tincidunt eget. Suspendisse nulla purus, ullamcorper ut ultricies et, pharetra sed metus. Donec eleifend neque vitae lorem dignissim mattis. Donec gravida dui et ultricies feugiat. Aliquam est lectus, consectetur eu est at, finibus ullamcorper ex. Etiam sit amet erat quis dolor commodo facilisis sed finibus enim. Etiam iaculis ultrices vehicula.', c=9.42;

system-test/cache_basic.sh Executable file
@@ -0,0 +1,99 @@
#!/bin/bash
user=$maxscale_user
password=$maxscale_password
# See cnf/maxscale.cnf.template.cache_basic
port=4008
# Ensure that these are EXACTLY like the corresponding values
# in cnf/maxscale.cnf.template.cache_basic
soft_ttl=5
hard_ttl=10
function run_test
{
    local test_name=$1

    echo $test_name

    logdir=log_$test_name
    mkdir -p $logdir
    mysqltest --host=${maxscale_000_network} --port=$port \
              --user=$user --password=$password \
              --logdir=$logdir \
              --test-file=$dir/t/$test_name.test \
              --result-file=$dir/r/$test_name.result \
              --silent

    if [ $? -eq 0 ]
    then
        echo " OK"
        rc=0
    else
        echo " FAILED"
        rc=1
    fi

    return $rc
}
export dir="$src_dir/cache/$1"
source=$src_dir/cache/$1/cache_rules.json
target=${maxscale_000_whoami}@${maxscale_000_network}:/home/${maxscale_000_whoami}/cache_rules.json
if [ ${maxscale_000_network} != "127.0.0.1" ] ; then
    scp -i ${maxscale_000_keyfile} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null $source $target
    ssh -i $maxscale_000_keyfile -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${maxscale_000_whoami}@${maxscale_000_network} chmod a+r /home/${maxscale_000_whoami}/cache_rules.json
else
    cp $source /home/${maxscale_000_whoami}/cache_rules.json
fi

if [ $? -ne 0 ]
then
    echo "error: Could not copy rules file to maxscale host."
    exit 1
fi
echo $source copied to $target, restarting Maxscale
ssh -i $maxscale_000_keyfile -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${maxscale_000_whoami}@${maxscale_000_network} 'sudo service maxscale restart'
# We sleep slightly longer than the TTL to ensure that the TTL mechanism
# kicks in.
let seconds=$soft_ttl+2
run_test create || exit 1
run_test insert1 || exit 1
# We should now get result 1, as this is the first select.
run_test select1 || exit 1
run_test update2 || exit 1
# We should still get result 1, as soft ttl has NOT passed.
run_test select1 || exit 1
echo "Sleeping $seconds"
sleep $seconds
# We should now get result 2, as soft ttl has PASSED.
run_test select2 || exit 1
run_test update3 || exit 1
# We should still get result 2, as soft ttl has NOT passed.
run_test select2 || exit 1
echo "Sleeping $seconds"
sleep $seconds
# We should now get result 3, as soft ttl has PASSED.
run_test select3 || exit 1
run_test delete || exit 1
# We should now get result 3, as soft ttl has NOT passed.
run_test select3 || exit 1
echo "Sleeping $seconds"
sleep $seconds
# We should now get result 0, as soft ttl has PASSED.
run_test select0 || exit 1
# Cleanup
run_test drop || exit 1
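The sleeps in the script are tuned to the cache filter's two TTLs: once soft ttl has passed a stale entry is refreshed from the server, and once hard ttl has passed it may no longer be used at all. As a rough sketch of that decision (not part of the test suite; `cache_action` and its outputs are illustrative names only):

```shell
#!/bin/bash
# Illustrative sketch of the soft/hard TTL decision exercised above.
# Values mirror cnf/maxscale.cnf.template.cache_basic.
soft_ttl=5
hard_ttl=10

# cache_action AGE: print what the cache does with an entry that is
# AGE seconds old.
cache_action() {
    local age=$1
    if [ "$age" -ge "$hard_ttl" ]; then
        echo "discard"    # too old to use at all; fetch from server
    elif [ "$age" -ge "$soft_ttl" ]; then
        echo "refresh"    # stale; serve fresh data and repopulate
    else
        echo "serve"      # fresh; serve straight from the cache
    fi
}

cache_action 2     # serve
cache_action 7     # refresh
cache_action 12    # discard
```

This is why the script re-runs the same select before and after sleeping `soft_ttl + 2` seconds: the first run must still see the cached result, the second must see the updated one.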

@@ -0,0 +1,212 @@
/*
 * Copyright (c) 2016 MariaDB Corporation Ab
 *
 * Use of this software is governed by the Business Source License included
 * in the LICENSE.TXT file and at www.mariadb.com/bsl11.
 *
 * Change Date: 2024-07-07
 *
 * On the date above, in accordance with the Business Source License, use
 * of this software will be governed by version 2 or later of the General
 * Public License.
 */
#include <iostream>
#include <string>
#include <vector>
#include <maxtest/testconnections.hh>
using namespace std;
namespace
{
void drop(TestConnections& test)
{
    MYSQL* pMysql = test.maxscales->conn_rwsplit[0];
    string stmt("DROP TABLE IF EXISTS cache_test");
    cout << stmt << endl;
    test.try_query(pMysql, "%s", stmt.c_str());
}

void create(TestConnections& test)
{
    drop(test);
    MYSQL* pMysql = test.maxscales->conn_rwsplit[0];
    string stmt("CREATE TABLE cache_test (a INT)");
    cout << stmt << endl;
    test.try_query(pMysql, "%s", stmt.c_str());
}

void insert(TestConnections& test)
{
    MYSQL* pMysql = test.maxscales->conn_rwsplit[0];
    string stmt("INSERT INTO cache_test VALUES (1)");
    cout << stmt << endl;
    test.try_query(pMysql, "%s", stmt.c_str());
}

void update(TestConnections& test, int value)
{
    MYSQL* pMysql = test.maxscales->conn_rwsplit[0];
    string stmt("UPDATE cache_test SET a=");
    stmt += std::to_string(value);
    cout << stmt << endl;
    test.try_query(pMysql, "%s", stmt.c_str());
}
void select(TestConnections& test, int* pValue)
{
    MYSQL* pMysql = test.maxscales->conn_rwsplit[0];
    string stmt("SELECT * FROM cache_test");
    cout << stmt << endl;

    if (mysql_query(pMysql, stmt.c_str()) == 0)
    {
        if (mysql_field_count(pMysql) != 0)
        {
            size_t nRows = 0;

            do
            {
                MYSQL_RES* pRes = mysql_store_result(pMysql);
                MYSQL_ROW row = mysql_fetch_row(pRes);

                if (row)    // Guard against an empty result set.
                {
                    *pValue = atoi(row[0]);
                }

                mysql_free_result(pRes);
                ++nRows;
            }
            while (mysql_next_result(pMysql) == 0);

            test.expect(nRows == 1, "Unexpected number of rows: %lu", nRows);
        }
    }
    else
    {
        test.expect(false, "SELECT failed.");
    }
}
namespace Cache
{
enum What
{
    POPULATE,
    USE
};
}

void set(TestConnections& test, Cache::What what, bool value)
{
    MYSQL* pMysql = test.maxscales->conn_rwsplit[0];
    string stmt("SET @maxscale.cache.");
    stmt += ((what == Cache::POPULATE) ? "populate" : "use");
    stmt += "=";
    stmt += (value ? "true" : "false");
    cout << stmt << endl;
    test.try_query(pMysql, "%s", stmt.c_str());
}
}
namespace
{
void init(TestConnections& test)
{
    create(test);
    insert(test);
}

void run(TestConnections& test)
{
    init(test);

    int value;

    // Let's populate the cache.
    set(test, Cache::POPULATE, true);
    set(test, Cache::USE, false);
    select(test, &value);
    test.expect(value == 1, "Initial value was not 1.");

    // And update the real value.
    update(test, 2);    // Now the cache contains 1 and the db 2.

    // With @maxscale.cache.use==false we should get the updated value.
    set(test, Cache::POPULATE, false);
    set(test, Cache::USE, false);
    select(test, &value);
    test.expect(value == 2, "The value received was not the latest one.");

    // With @maxscale.cache.use==true we should get the old one, since
    // it was not updated above as @maxscale.cache.populate==false.
    set(test, Cache::POPULATE, false);
    set(test, Cache::USE, true);
    select(test, &value);
    test.expect(value == 1, "The value received was not the populated one.");

    // The hard_ttl is 8, so we sleep(10) seconds to ensure that TTL has passed.
    cout << "Sleeping 10 seconds." << endl;
    sleep(10);

    // With @maxscale.cache.use==true we should now get the latest value.
    // The value in the cache is stale, so it will be updated even if
    // @maxscale.cache.populate==false.
    set(test, Cache::POPULATE, false);
    set(test, Cache::USE, true);
    select(test, &value);
    test.expect(value == 2, "The cache was not updated even if TTL was passed.");

    // Let's update again
    update(test, 3);

    // And fetch again. Should still be 2, as the item in the cache is not stale.
    set(test, Cache::POPULATE, false);
    set(test, Cache::USE, true);
    select(test, &value);
    test.expect(value == 2, "New value %d, although the value in the cache is not stale.", value);

    // Force an update.
    set(test, Cache::POPULATE, true);
    set(test, Cache::USE, false);
    select(test, &value);
    test.expect(value == 3, "Did not get new value.");

    // And check that the cache indeed was updated, but update the DB first.
    update(test, 4);
    set(test, Cache::POPULATE, false);
    set(test, Cache::USE, true);
    select(test, &value);
    test.expect(value == 3, "Got a newer value than expected.");
}
}
int main(int argc, char* argv[])
{
    TestConnections test(argc, argv);

    if (test.maxscales->connect_rwsplit() == 0)
    {
        run(test);
    }

    test.maxscales->connect();
    drop(test);
    test.maxscales->disconnect();

    return test.global_result;
}

Some files were not shown because too many files have changed in this diff.