19. Slony-I Administration Scripts

A number of tools have grown over the course of the history of Slony-I to help users manage their clusters. This section, along with the ones in Section 5 and Section 6, discusses them.

19.1. altperl Scripts

There is a set of scripts to simplify administration of a set of Slony-I instances. The scripts support having arbitrary numbers of nodes. They may be installed as part of the installation process:

./configure --with-perltools

This will produce a number of scripts with the prefix slonik_. They eliminate tedium by always referring to a central configuration file for the details of your site configuration. A documented sample of this file is provided in altperl/slon_tools.conf-sample. Most also include some command line help with the "--help" option, making them easier to learn and use.

Most generate Slonik scripts that are printed to STDOUT. At one time, the commands were passed directly to slonik for execution. Unfortunately, this turned out to be a pretty large calibre "foot gun", as minor typos on the command line led, on a couple of occasions, to pretty calamitous actions. The savvy administrator should review the script before piping it to slonik.
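For example, a minimal sketch of that review-then-apply workflow (the output file name is arbitrary, and slonik_init_cluster is assumed here to need nothing beyond the slon_tools.conf configuration):

slonik_init_cluster > init_cluster.slonik
less init_cluster.slonik       # review what will actually be executed
slonik < init_cluster.slonik   # apply it once satisfied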

19.1.1. Support for Multiple Clusters

The UNIX environment variable SLONYNODES is used to determine what Perl configuration file will be used to control the shape of the nodes in a Slony-I cluster. If it is not provided, a default slon_tools.conf location will be referenced.
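For instance (the path shown is only an assumption about where you keep your configuration):

export SLONYNODES=/etc/slony1/slon_tools.conf
slonik_print_preamble    # any of the slonik_* scripts will now read that file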

The following variables are set up:

  • $CLUSTER_NAME=orglogs; # What is the name of the replication cluster?

  • $LOGDIR='/opt/OXRS/log/LOGDBS'; # What is the base directory for logs?

  • $APACHE_ROTATOR="/opt/twcsds004/OXRS/apache/rotatelogs"; # If set, where to find Apache log rotator

  • foldCase # If set to 1, object names (including schema names) will be folded to lower case. By default, your object names will be left alone. Note that PostgreSQL itself folds unquoted object names to lower case; if you create a table via the command CREATE TABLE SOME_THING (Id INTEGER, STudlYName text);, all of those identifiers are forced to lower case, equivalent to create table some_thing (id integer, studlyname text);, so the name of the table and, in this case, the fields will all, in fact, be lower case.

You then define the set of nodes that are to be replicated using a set of calls to add_node().

add_node (host => '10.20.30.40', dbname => 'orglogs', port => 5437, user => 'postgres', node => 4, parent => 1);

The parameters for add_node() are as follows:

my %PARAMS =   (host=> undef,		# Host name
	   	dbname => 'template1',	# database name
		port => 5432,		# Port number
		user => 'postgres',	# user to connect as
		node => undef,		# node number
		password => undef,	# password for user
		parent => 1,		# which node is parent to this node
		noforward => undef,	# shall this node be set up to forward results?
		sslmode => undef	# SSL mode argument - determines
					# priority of SSL usage:
					# disable, allow, prefer, require
);

19.1.2. Set configuration - cluster.set1, cluster.set2

The UNIX environment variable SLONYSET is used to determine what Perl configuration file will be used to determine what objects will be contained in a particular replication set.

Unlike SLONYNODES, which is essential for all of the slonik-generating scripts, this only needs to be set when running slonik_create_set, as that is the only script used to control what tables will be in a particular replication set.

19.1.3. slonik_build_env

Queries a database and generates output, hopefully suitable for inclusion in slon_tools.conf, consisting of:

  • a set of add_node() calls to configure the cluster

  • The arrays @KEYEDTABLES, @SERIALTABLES, and @SEQUENCES

19.1.4. slonik_print_preamble

This generates just the "preamble" that is required by all slonik scripts. In effect, this provides a "skeleton" slonik script that does not do anything.

19.1.5. slonik_create_set

This requires SLONYSET to be set as well as SLONYNODES; it is used to generate the slonik script to set up a replication set consisting of a set of tables and sequences that are to be replicated.
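A sketch of typical usage; the configuration paths and the bare set number argument are assumptions to verify against the script's --help output:

export SLONYNODES=/etc/slony1/slon_tools.conf
export SLONYSET=/etc/slony1/slon_sets.conf
slonik_create_set 1 > create_set_1.slonik
less create_set_1.slonik       # review before applying
slonik < create_set_1.slonik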

19.1.6. slonik_drop_node

Generates Slonik script to drop a node from a Slony-I cluster.

19.1.7. slonik_drop_set

Generates Slonik script to drop a replication set (e.g. - set of tables and sequences) from a Slony-I cluster.

19.1.8. slonik_drop_table

Generates Slonik script to drop a table from replication. Requires, as input, the ID number of the table (available from table sl_table) that is to be dropped.

19.1.9. slonik_execute_script

Generates Slonik script to push DDL changes to a replication set.
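A hedged sketch; the argument order shown (set ID, then the path to the DDL file) is an assumption to verify with --help:

slonik_execute_script 1 /tmp/add_column.sql > execute_script.slonik
slonik < execute_script.slonik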

19.1.10. slonik_failover

Generates Slonik script to request failover from a dead node to some new origin.

19.1.11. slonik_init_cluster

Generates Slonik script to initialize a whole Slony-I cluster, including setting up the nodes, communications paths, and the listener routing.

19.1.12. slonik_merge_sets

Generates Slonik script to merge two replication sets together.

19.1.13. slonik_move_set

Generates Slonik script to move the origin of a particular set to a different node.

19.1.14. replication_test

Script to test whether Slony-I is successfully replicating data.

19.1.15. slonik_restart_node

Generates Slonik script to request the restart of a node. This was particularly useful pre-1.0.5 when nodes could get snarled up when slon daemons died.

19.1.16. slonik_restart_nodes

Generates Slonik script to restart all nodes in the cluster. Not particularly useful.

19.1.17. slony_show_configuration

Displays an overview of how the environment (e.g. - SLONYNODES) is set to configure things.

19.1.18. slon_kill

Kills the slon watchdog and all slon daemons for the specified set. It only works if those processes are running on the local host, of course!

19.1.19. slon_start

This starts a slon daemon for the specified cluster and node, and uses slon_watchdog to keep it running.
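For instance, assuming node 2 is defined in slon_tools.conf (passing the bare node number is an assumption; check --help):

slon_start 2    # starts slon for node 2, plus its watchdog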

19.1.20. slon_watchdog

Used by slon_start.

19.1.21. slon_watchdog2

This is a somewhat smarter watchdog; it monitors a particular Slony-I node, and restarts the slon process if it hasn't seen updates applied in 20 minutes or more.

This is helpful if there is an unreliable network connection such that the slon sometimes stops working without becoming aware of it.

19.1.22. slonik_store_node

Adds a node to an existing cluster.

19.1.23. slonik_subscribe_set

Generates Slonik script to subscribe a particular node to a particular replication set.
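A sketch, assuming set 1 is to be subscribed on node 2 (the argument order is an assumption; check --help):

slonik_subscribe_set 1 2 > subscribe_set_1_node_2.slonik
slonik < subscribe_set_1_node_2.slonik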

19.1.24. slonik_uninstall_nodes

This goes through and drops the Slony-I schema from each node; use this if you want to destroy replication throughout a cluster. This is a VERY unsafe script!

19.1.25. slonik_unsubscribe_set

Generates Slonik script to unsubscribe a node from a replication set.

19.1.26. slonik_update_nodes

Generates Slonik script to tell all the nodes to update the Slony-I functions. This will typically be needed when you upgrade from one version of Slony-I to another.
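The usual pattern after installing a new Slony-I version is, as always, to review the generated script before applying it; a sketch, assuming the script needs nothing beyond the slon_tools.conf configuration:

slonik_update_nodes > update_nodes.slonik
less update_nodes.slonik
slonik < update_nodes.slonik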

19.2. mkslonconf.sh

This is a shell script designed to rummage through a Slony-I cluster and generate a set of slon.conf files that slon accesses via the slon -f slon.conf option.

With all of the configuration residing in a configuration file for each slon, they can be invoked with minimal muss and fuss, with no risk of forgetting the -a option and thereby breaking a log shipping node.
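Once the per-node files exist, each slon can be started with nothing more than the following (the path is a hypothetical example; the actual layout depends on where mkslonconf.sh was told to write its output):

slon -f /etc/slony1/orglogs/node1/slon.conf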

Running it requires the following environment configuration:

For any "new" nodes that it discovers, this script will create a new slon conf file.

Warning

It is fair to say that there are several conditions to beware of; none of these should be greatly surprising...

  • The DSN is pulled from the minimum value found for each node in sl_path. You may very well need to modify this.

  • Various parameters are set to default values; you may wish to customize them by hand.

  • If you are running slon processes on multiple nodes (e.g. - as when running Slony-I across a WAN), this script will happily create fresh new config files for slons you wanted to have run on another host.

    Be sure to check out what nodes it set up before restarting slons.

    This would usually only cause some minor inconvenience due to, for instance, a slon running at a non-preferred site, and either failing due to lack of network connectivity (in which case no damage is done!) or running a bit less efficiently than it might have due to living at the wrong end of the network "pipe."

    On the other hand, if you are running a log shipping node at the remote site, accidentally introducing a slon that isn't collecting logs could ruin your whole week.

The file layout set up by mkslonconf.sh was specifically set up to allow managing slons across a multiplicity of clusters using the script in the following section...

19.3. launch_clusters.sh

This is another shell script; it uses the configuration set up by mkslonconf.sh and is intended either to be run at system boot time, as an addition to the rc.d processes, or regularly, as a cron job, to ensure that slon processes are running.

It uses the following environment variables:

In effect, you could run this every five minutes, and it would launch any missing slon processes.
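A minimal crontab sketch along those lines (the install path is an assumption):

# every five minutes, start any slon that is not already running
*/5 * * * * /usr/local/slony1/tools/launch_clusters.sh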

19.4. slony1_extract_schema.sh

You may find that you wish to create a new node some time well after creating a cluster. The script slony1_extract_schema.sh will help you with this.

A command line might look like the following:

PGPORT=5881 PGHOST=master.int.example.info ./slony1_extract_schema.sh payroll payroll temppayroll

It performs the following:

19.5. slony-cluster-analysis

If you are running a lot of replicated databases, where there are numerous Slony-I clusters, it can get painful to track and document this. The following tools may be of some assistance.

slony-cluster-analysis.sh is a shell script intended to provide some over-time analysis of the configuration of a Slony-I cluster. You pass in the usual libpq environment variables (PGHOST, PGPORT, PGDATABASE, and such) to connect to a member of a Slony-I cluster, and pass the name of the cluster as an argument.
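A sketch of an invocation, using assumed connection values (the script location is also an assumption):

PGHOST=master.int.example.info PGPORT=5432 PGDATABASE=orglogs \
    ./slony-cluster-analysis.sh orglogs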

The script then does the following:

There is a sample "wrapper" script, slony-cluster-analysis-mass.sh, which sets things up to point to a whole bunch of Slony-I clusters.

This should make it easier for a group of DBAs to keep track of two things:

19.6. Generating slonik scripts using configure-replication.sh

The tools script configure-replication.sh is intended to automate generating slonik scripts to configure replication. This script is based on the configuration approach taken in Section 22.

This script uses a number (possibly large, if your configuration needs to be particularly complex) of environment variables to determine the shape of the configuration of a cluster. It uses default values extensively, and in many cases, relatively few environment values need to be set in order to get a viable configuration.

19.6.1. Global Values

There are some values that will be used universally across a cluster:

CLUSTER

Name of Slony-I cluster

NUMNODES

Number of nodes to set up

PGUSER

name of PostgreSQL superuser controlling replication

PGPORT

default port number

PGDATABASE

default database name

TABLES

a list of fully qualified table names (e.g. - complete with namespace, such as public.my_table)

SEQUENCES

a list of fully qualified sequence names (e.g. - complete with namespace, such as public.my_sequence)

Defaults are provided for all of these values, so that if you run configure-replication.sh without setting any environment variables, you will get a set of slonik scripts. They may not correspond, of course, to any database you actually want to use...

19.6.2. Node-Specific Values

For each node, there are also four environment variables; for node 1:

DB1

database to connect to

USER1

superuser to connect as

PORT1

port

HOST1

host

It is quite likely that DB*, USER*, and PORT* should be drawn from the global PGDATABASE, PGUSER, and PGPORT values above; having the discipline of that sort of uniformity is usually a good thing.

In contrast, HOST* values should be set explicitly for HOST1, HOST2, ..., as you don't get much benefit from the redundancy replication provides if all your databases are on the same server!
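A sketch of driving the generator for a two-node cluster; every value below, including the whitespace-separated list format used for TABLES and SEQUENCES, is an illustrative assumption:

export CLUSTER=orglogs NUMNODES=2
export PGUSER=postgres PGPORT=5432 PGDATABASE=orglogs
export TABLES="public.accounts public.txns"
export SEQUENCES="public.txn_id_seq"
export HOST1=db1.example.info HOST2=db2.example.info
./configure-replication.sh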

19.6.3. Resulting slonik scripts

slonik config files are generated in a temp directory under /tmp. The usage is thus:

  • preamble.slonik is a "preamble" containing connection info used by the other scripts.

    Verify the info in this one closely; you may want to keep this permanently to use with future maintenance you may want to do on the cluster.

  • create_nodes.slonik

    This is the first script to run; it sets up the requested nodes as being Slony-I nodes, adding in some Slony-I-specific config tables and such.

    You can/should start slon processes any time after this step has run.

  • store_paths.slonik

    This is the second script to run; it indicates how the slons should intercommunicate. It assumes that all slons can talk to all nodes, which may not be a valid assumption in a complexly-firewalled environment. If that assumption is untrue, you will need to modify the script to fix the paths.

  • create_set.slonik

    This sets up the replication set consisting of the whole bunch of tables and sequences that make up your application's database schema.

    When you run this script, all that happens is that triggers are added on the origin node (node #1) that start collecting updates; replication won't start until the subscribe_set step below...

    There are two assumptions in this script that could be invalidated by circumstances:

    • That all of the tables and sequences have been included.

      This becomes invalid if new tables get added to your schema and don't get added to the TABLES list.

    • That all tables have been defined with primary keys.

      Best practice is to always have and use true primary keys. If you have tables that require choosing a candidate primary key or that require creating a surrogate key using TABLE ADD KEY, you will have to modify this script by hand to accommodate that.

  • subscribe_set_2.slonik

    And 3, and 4, and 5, if you set the number of nodes higher...

    This is the step that "fires up" replication.

    The assumption that the script generator makes is that all the subscriber nodes will want to subscribe directly to the origin node. If you plan to have "sub-clusters," perhaps where there is something of a "master" location at each data centre, you may need to revise that.

    The slon processes really ought to be running by the time you attempt running this step. To do otherwise would be rather foolish.
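Once generated, the scripts are typically applied in the order described above. A sketch, assuming each generated file already pulls in preamble.slonik (inspect them to confirm; if not, prepend it with cat):

cd /tmp/slonik-scripts.12345       # the actual temp directory name will differ
slonik create_nodes.slonik
slonik store_paths.slonik
# start the slon processes on every node at this point
slonik create_set.slonik
slonik subscribe_set_2.slonik      # then _3, _4, ... for additional nodes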

19.7. slon.in-profiles

Apache-Style profiles for FreeBSD ports/databases/slony/*

In the tools area, slon.in-profiles is a script that might be used to start up slon instances at the time of system startup. It is designed to interact with the FreeBSD Ports system.