3. System Requirements

Any platform that can run PostgreSQL should be able to run Slony-I.

The platforms that have received specific testing at the time of this release are FreeBSD-4X-i386, FreeBSD-5X-i386, FreeBSD-5X-alpha, OS-X-10.3, Linux-2.4X-i386, Linux-2.6X-i386, Linux-2.6X-amd64, Solaris™-2.8-SPARC, Solaris™-2.9-SPARC, AIX 5.1, OpenBSD-3.5-sparc64, and Windows™ 2000, XP and 2003 (32 bit).

3.1. Slony-I Software Dependencies

At present, you must be able to compile both Slony-I and PostgreSQL from source at your site.

In order to compile Slony-I, you need to have the following tools:

Also check to make sure you have sufficient disk space. You will need approximately 5MB for the source tree during build and installation.

Note: In Slony-I version 1.1, it is possible to compile Slony-I separately from PostgreSQL, making it practical for the makers of distributions of Linux and FreeBSD to include precompiled binary packages for Slony-I. If no suitable packages are available, you will need to be prepared to compile Slony-I yourself.

3.2. Getting Slony-I Source

You can get the Slony-I source from http://developer.postgresql.org/~wieck/slony1/download/

3.3. Database Encoding

PostgreSQL databases may be created in a number of language encodings, set up via the createdb --encoding=$ENCODING databasename option. Slony-I assumes that all the databases in a replication cluster use identical encodings.

If the encodings are "closely equivalent", you may be able to get away with them not being absolutely identical. For instance, if the origin system used LATIN1 and a subscriber used SQL_ASCII and another subscriber used UNICODE, and your application never challenges the boundary conditions between these variant encodings, you may never experience any problems.
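It is worth verifying, before subscribing a node, that every database really does report the same encoding. A minimal sketch; the database and host names are placeholders, and the commented commands assume running PostgreSQL instances reachable via psql:

```shell
# Placeholders for your site; adjust to taste.
DBNAME=payroll
ORIGIN=origin.example.com
SUBSCRIBER=sub1.example.com

# Create each database with an explicit, identical encoding, e.g.:
#   createdb --encoding=UNICODE $DBNAME
# and confirm that every node reports the same value:
#   psql -h $ORIGIN     -Atc "SHOW server_encoding" $DBNAME
#   psql -h $SUBSCRIBER -Atc "SHOW server_encoding" $DBNAME
```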

In PostgreSQL 8.1, validation of the UNICODE encoding was tightened, as earlier versions accepted some byte sequences that are not actually valid. Replicating such data from an origin running an older version can therefore cause problems on an 8.1 subscriber.

Note also that if the client encoding (configured variously via the postgresql.conf parameter client_encoding, the psql \encoding command, or the psql internal variable ENCODING) differs from the server encoding, this mismatch may leave Slony-I unable to replicate those characters supported by the client encoding but not by the server encoding.
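Such a mismatch can be spotted from any psql session. A sketch; the database name is a placeholder, and the commented commands assume psql can reach the database:

```shell
DBNAME=payroll   # placeholder database name

# Compare the two encodings for a session; replication of characters
# representable only in the client encoding is at risk when they differ:
#   psql $DBNAME -c "SHOW server_encoding"
#   psql $DBNAME -c "SHOW client_encoding"
# The client side can also be forced for a single session:
#   PGCLIENTENCODING=LATIN1 psql $DBNAME -c "SHOW client_encoding"
```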

3.4. Time Synchronization

All the servers used within the replication cluster need to have their Real Time Clocks in sync. This is to ensure that slon doesn't generate errors with messages indicating that a subscriber is already ahead of its provider during replication. Interpreting logs when servers have different ideas of what time it is leads to confusion and frustration. It is recommended that you run ntpd on all nodes, with subscriber nodes using the "master" provider host as their time server.
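For instance, each subscriber node's ntpd can be pointed at the provider host and its synchronization state checked along the following lines. The host name is a placeholder, and the commented commands assume the NTP tools are installed:

```shell
TIMESERVER=provider.example.com   # placeholder: the provider host

# In /etc/ntp.conf on each subscriber node:
#   server provider.example.com
# Then confirm ntpd is tracking it; an asterisk in the first column
# of the output marks the peer currently selected for synchronization:
#   ntpq -p
```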

It is possible for Slony-I itself to function even in the face of there being some time discrepancies, but having systems "in sync" is usually pretty important for distributed applications.

See www.ntp.org for more details about NTP (Network Time Protocol).

Some users have reported problems that have been traced to their locales indicating the use of some time zone that PostgreSQL did not recognize.

In any case, what commonly seems to be the "best practice" with Slony-I (and, for that matter, PostgreSQL) is for the postmaster user and/or the user under which slon runs to use TZ=UTC or TZ=GMT. Those time zones are sure to be supported on any platform, and have the merit over "local" time zones that times never wind up leaping around due to Daylight Saving Time.
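In practice that means exporting the time zone in the environment of the user that starts the postmaster and slon. A sketch; the cluster name and conninfo below are placeholders:

```shell
# Force a DST-free time zone for both the postmaster and slon.
export TZ=UTC

# Hypothetical slon invocation; cluster name and conninfo are placeholders:
#   slon mycluster "dbname=payroll host=origin.example.com user=slony"
```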

3.5. Network Connectivity

It is necessary that the hosts that are to replicate between one another have bidirectional network communications between the PostgreSQL instances. That is, if node B is replicating data from node A, there must be a path from A to B and from B to A. It is recommended that, as much as possible, all nodes in a Slony-I cluster allow this sort of bidirectional communication from any node in the cluster to any other.

For ease of configuration, network addresses should ideally be consistent across all of the nodes. STORE PATH does allow them to vary, but down this road lies madness as you try to manage the multiplicity of paths pointing to the same server.
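With consistent addressing, the paths stay symmetric and easy to audit. A sketch of the sort of STORE PATH statements involved for a two-node cluster; the cluster, host, database, and user names are all placeholders:

```
# Hypothetical two-node cluster; each node stores a path telling it
# how to reach the other, using the same host names everywhere.
cluster name = mycluster;
node 1 admin conninfo = 'dbname=payroll host=node-a.example.com user=slony';
node 2 admin conninfo = 'dbname=payroll host=node-b.example.com user=slony';

store path (server = 1, client = 2, conninfo = 'dbname=payroll host=node-a.example.com user=slony');
store path (server = 2, client = 1, conninfo = 'dbname=payroll host=node-b.example.com user=slony');
```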

A possible workaround, in environments where firewall rules are particularly difficult to implement, may be to establish an SSH tunnel on each host that allows remote access through a local IP address such as 127.0.0.1, using a different port for each destination.
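A sketch of such a tunnel, assuming SSH access exists between the hosts; the host names, port, and user are placeholders:

```shell
LOCAL_PORT=5433             # local forwarding port for the remote node
REMOTE=node-b.example.com   # placeholder remote host

# Forward 127.0.0.1:5433 to PostgreSQL (port 5432) on the remote host;
# -f backgrounds ssh after authentication, -N runs no remote command:
#   ssh -f -N -L ${LOCAL_PORT}:127.0.0.1:5432 slony@${REMOTE}
# slon and slonik can then use a conninfo such as:
#   "host=127.0.0.1 port=${LOCAL_PORT} dbname=payroll user=slony"
```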

Note that slonik and the slon instances need no special connections or protocols to communicate with one another; they merely need access to the PostgreSQL databases, connecting as a "superuser" that has the ability to update system tables.

An implication of this communications model is that the entire extended network in which a Slony-I cluster operates must be able to be treated as being secure. If there is a remote location where you cannot trust one of the databases that is a Slony-I node to be considered "secure," this represents a vulnerability that can adversely affect the security of the entire cluster. As a "peer-to-peer" system, any of the hosts is able to introduce replication events that will affect the entire cluster. Therefore, the security policies throughout the cluster can only be considered as stringent as those applied at the weakest link. Running a Slony-I node at a branch location that can't be kept secure compromises security for the cluster as a whole.

New in Slony-I version 1.1 is a feature whereby updates for a particular replication set may be serialized via a scheme called log shipping. The data stored in sl_log_1 and sl_log_2 is also written out to log files on disk. These files may then be transmitted in any manner desired, whether via scp, FTP, burning them onto DVD-ROMs and mailing them, or, at the frivolous end of the spectrum, by recording them on a USB "flash device" and attaching it to a bird, in some equivalent to the transmission of IP datagrams on avian carriers - RFC 1149. Whatever the transmission mechanism, this allows one-way communication, so that subscribers that use log shipping have no need of access to other Slony-I nodes.
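The transmission step itself can be as simple as copying the accumulated log files. A sketch, assuming slon has been told to archive them and that scp access to the standby exists; the directory, host names, and cluster name are all placeholders:

```shell
ARCHIVE_DIR=/var/spool/slony/archive   # placeholder archive directory

# slon writes serialized updates into the archive directory, e.g.:
#   slon -a $ARCHIVE_DIR mycluster "dbname=payroll host=origin.example.com"
# Ship whatever has accumulated to the log-shipped subscriber, which
# needs no connection back to any Slony-I node:
#   scp $ARCHIVE_DIR/*.sql standby.example.com:/var/spool/slony/incoming/
# and apply each file there with psql, in sequence-number order.
```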