As of version 1.1.5, Slony-I has a common test bed framework intended to better support running a comprehensive set of tests at least somewhat automatically. Older tests used pgbench (not a bad thing in itself), but they were troublesome to automate because they spawned each slon in an xterm for the user to watch.
The new test framework is mostly written in Bourne shell, and is intended to be portable to both Bash (widely used on Linux) and Korn shell (widely found on commercial UNIX systems). The code lives in the source tree under the tests directory.
At present, nearly all of the tests make use of only two databases that, by default, are on a single PostgreSQL postmaster on one host. This is perfectly fine for those tests that involve verifying that Slony-I functions properly on various sorts of data. Those tests do things like varying date styles and creating tables and sequences with unusual names to verify that quoting is handled properly.
It is also possible to configure environment variables so that the replicated nodes will be placed on different database backends, optionally on remote hosts, running varying versions of PostgreSQL.
Here are some of the vital files...
- run_test.sh
This is the central script for running tests. Typical usage is thus:
  ./run_test.sh testname
You need to specify the subdirectory name of the test set to be run; each such set is stored in a subdirectory of tests.
You may need to set one or more of the following environment variables to reflect your local configuration. For instance, the writer runs "test1" against PostgreSQL 8.0.x using the following command line:
PGBINDIR=/opt/OXRS/dbs/pgsql8/bin PGPORT=5532 PGUSER=cbbrowne ./run_test.sh test1
- PGBINDIR
This determines where the test scripts look for PostgreSQL and Slony-I binaries. The default is /usr/local/pgsql/bin.
There are also variables PGBINDIR1 thru PGBINDIR13, which allow you to specify a separate path for each database instance. That will be particularly useful when testing interoperability of Slony-I across different versions of PostgreSQL on different platforms. In order to create a database of each respective version, you need to point to an initdb of the appropriate version.
- PGPORT
This indicates what port the backend is on. By default, 5432 is used.
There are also variables PORT1 thru PORT13 which allow you to specify a separate port number for each database instance. That will be particularly useful when testing interoperability of Slony-I across different versions of PostgreSQL.
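The fallback from the per-node variables to the global defaults can be sketched in portable shell. The variable names (PGPORT, PGBINDIR, PORT2, and so on) are the framework's; the resolution logic shown is an illustrative assumption, not the framework's actual code:

```shell
#!/bin/sh
# Sketch (assumed logic): resolve settings for each node, preferring a
# per-node variable such as PORT2 and falling back to the global default.
unset PORT1 PGBINDIR1 PGBINDIR2      # keep the demonstration deterministic

PGPORT=5432                          # global default port
PGBINDIR=/usr/local/pgsql/bin        # global default binary directory
PORT2=5533                           # per-node override for node 2

for node in 1 2; do
    eval port=\${PORT${node}:-\$PGPORT}
    eval bindir=\${PGBINDIR${node}:-\$PGBINDIR}
    echo "node $node: port=$port bindir=$bindir"
done
```

Node 1 falls back to the global PGPORT, while node 2 picks up its override; `eval` is used because POSIX shell has no arrays, which also keeps the sketch portable to Korn shell.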
- PGUSER
By default, the user postgres is used; this is taken as the default user ID to use for all of the databases.
There are also variables USER1 thru USER13, which allow specifying a separate user name for each database instance. As always with Slony-I, this needs to be a PostgreSQL "superuser."
- WEAKUSER
By default, the user postgres is used; this is taken as the default user ID to use for the SLONIK STORE PATH connections to all of the databases.
There are also variables WEAKUSER1 thru WEAKUSER13, which allow specifying a separate user name for each database instance. This user does not need to be a PostgreSQL "superuser." The user can start out with no permissions; it ends up being granted read permission on the tables that the test uses, read access throughout the Slony-I schema, and write access to one table and sequence used to manage node locks.
- HOST
By default, localhost is used.
There are also variables HOST1 thru HOST13 which allow specifying a separate host for each database instance.
- DB1 thru DB13
By default, slonyregress1 thru slonyregress13 are used.
You may override these from the environment if you have some reason to use different names.
By default, the UNICODE encoding is used, so that tests can create UTF8 tables and exercise the multibyte capabilities.
If your version of Linux uses a variation of mktemp that does not generate a full path to the location of the desired temporary file or directory, then set this variable.
By default, the tests will generate their output in /tmp, /usr/tmp, or /var/tmp, unless you set your own value for this environment variable.
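The portability issue can be illustrated with a small sketch; prefixing the temporary-directory path yourself guarantees a fully qualified result regardless of the local mktemp variant (the file name prefix slonytest is illustrative, not the framework's):

```shell
#!/bin/sh
# Sketch: create a temporary file with a fully qualified path even if
# the local mktemp variant would otherwise return only a bare file name.
TMPDIR=${TMPDIR:-/tmp}
tmpfile=$(mktemp "$TMPDIR/slonytest.XXXXXX")
echo "created: $tmpfile"
rm -f "$tmpfile"
```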
Where to look for Slony-I tools such as slony1_dump.sh.
If set to "true" for a particular node (normally configured out of human sight in the per-test settings file settings.ik), then that node will be used as a data source for log shipping (Section 16), and the test tools will set up a directory for the archive_dir option.
If set to "true" for a particular node (again, normally configured in settings.ik for a particular test), this indicates that the node is being created via log shipping (Section 16), so no slon is required for it.
If set to "true" for a particular node (typically handled in settings.ik for a given test), then that node's configuration will be set up in a per-node slon.conf runtime configuration file.
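For illustration, a per-node runtime configuration file generated this way might contain entries like the following (all values here are hypothetical; cluster_name, conn_info, and sync_interval are standard slon configuration options):

```
# Hypothetical slon.conf for node 1
cluster_name='slonyregress'
conn_info='dbname=slonyregress1 host=localhost port=5432 user=postgres'
sync_interval=1000
```

A file like this would be passed to slon via its -f option instead of supplying the same settings on the command line.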
Email address of the person who might be contacted about the test results. This is stored in the SLONYTESTFILE, and may eventually be aggregated in some sort of buildfarm-like registry.
- SLONYTESTFILE
File in which to store summary results from tests. Eventually, this may be used to construct a buildfarm-like repository of aggregated test results.
- random_number and random_string
If you run make in the tests directory, the C programs random_number and random_string will be built; they are then used when generating random data, in lieu of shell/SQL capabilities that are much slower than the C programs.
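As a rough illustration of why the C helpers exist, here is what a shell-level equivalent of random_number might look like. This awk-based sketch is an illustrative assumption, not the actual helper; the real C program avoids paying a process startup cost on every call, which matters when generating large volumes of test data:

```shell
#!/bin/sh
# Sketch: print a random integer in [0, 100), roughly what the
# random_number helper provides, but far slower per invocation.
n=$(awk 'BEGIN { srand(); print int(rand() * 100) }')
echo "$n"
```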
Within each test, you will find the following files:
This file contains a description of the test, and is displayed to the reader when the test is invoked.
This contains script code that generates SQL to perform updates.
This is a slonik script for adding the tables for the test to replication.
A slonik script to initialize the cluster for the test.
A slonik script to initialize additional nodes to be used in the test.
An SQL script to create the tables and sequences required at the start of the test.
An SQL script to initialize the schema with whatever state is required for the "master" node.
A slonik script to set up subscriptions.
- settings.ik
A shell script that is used to control the size of the cluster, how many nodes are to be created, and where the origin is.
A series of SQL queries, one per line, that are to be used to validate that the data matches across all the nodes. Note that in order to avoid spurious failures, the queries must use unambiguous ORDER BY clauses.
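For illustration, a few lines of such a file might look like the following (table and column names are hypothetical); the here-document below simply prints the sample lines:

```shell
#!/bin/sh
# Hypothetical sample of a check-queries file: one SQL statement per
# line, each with an ORDER BY that fully determines the row order, so
# identical data cannot compare unequal merely because of row ordering.
queries=$(cat <<'EOF'
select id, data from table1 order by id;
select id, name, created_on from table2 order by id;
EOF
)
printf '%s\n' "$queries"
```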