Christopher Browne cbbrowne at afilias.info
Thu Feb 19 12:05:22 PST 2015
On Thu, Feb 19, 2015 at 11:59 AM, Mark Steben <mark.steben at drivedominion.com> wrote:

> Good morning,
>
> We are running the following on both master and slave: (a simple 1 master
> to 1 slave configuration)
>     postgresql 9.2.5
>     slony1-2.2.2
>      x86_64 GNU/Linux
>
> We currently run altperl scripts to kill / start slon daemons from the
> slave:
>    cd ...bin folder
>    ./slon_kill -c .../slon_tools.....conf
>          and
>    ./slon_start -c ../slon_tools...conf 1 (and 2)
>
>  Because we need to run maintenance on the replicated db on the master
> without slony running, I would like to run these commands on the master
> before and after the maintenance.  Since the daemons now run on the slave,
> when I attempt to run these commands on the master the daemons aren't
> found.  Is there a prescribed way to accomplish this?  I could continue
> to run them on the slave and send a flag to the master when complete, but
> I'd like to take a simpler approach if possible.
>  Any insight appreciated.  Thank you.
>

There are three things you are identifying here, and each of them is quite
independent of the others:

a) Each database participating in replication is a "cluster node"; it runs
wherever you happen to host that database

b) Each node requires a slon process that manages replicating data to that
node, as well as bookkeeping (e.g. - managing the flow of replication
events)

c) Slonik is the tool that manages configuration of the cluster; it must
have access to all of the nodes that it is to manage
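
To make c) concrete: slonik only needs libpq conninfo strings that can
reach every node from wherever slonik happens to run.  A minimal preamble
sketch (the cluster name, hostnames, and credentials below are made-up
placeholders, not taken from your configuration):

    # preamble.slonik - conninfo as seen from the host running slonik
    cluster name = replication;
    node 1 admin conninfo = 'dbname=appdb host=master.example.com user=slony';
    node 2 admin conninfo = 'dbname=appdb host=slave.example.com user=slony';

Any host that can make those two connections can run your slonik scripts,
regardless of where the slons live.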

The only one of those things that needs to run in a particular place is the
set of Postgres databases.  (And you get to pick where they run.)

There is no "prescribed way" to run the slon processes of b); you are free
to run those processes wherever you prefer.  We have found it useful to
run all the slon processes for the replicas within a given data centre on
the same host, as it is generally more convenient to manage logs and
restart processes when they are all in one place.  You are apparently
running them on the same host that is also hosting one of the replicated
databases; there is nothing wrong with that.
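
For instance, a slon only needs a libpq connection to the database of the
node it serves (the paths to the other nodes come from the stored cluster
configuration), so it does not have to live on that database's host.  A
sketch of both slons started from one admin host, with made-up hostnames
and credentials:

    # slon for node 1 (origin) and node 2 (subscriber); "replication" is a
    # placeholder cluster name, and the conninfo strings are placeholders too
    slon replication "dbname=appdb host=master.example.com user=slony" >> node1.log 2>&1 &
    slon replication "dbname=appdb host=slave.example.com user=slony" >> node2.log 2>&1 &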

If you want to run slonik on another host, that's fine, but, as noted, if
you also need to manage the slon processes (e.g. - restart them), it tends
to be convenient to run slonik on the same host, so that your shell scripts
can manage the slon processes and run the slonik scripts from one place.
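
For your maintenance window, a wrapper along these lines can stop and
restart everything from that one host (this is only a sketch: the config
path is a placeholder, and the node numbers come from your message):

    #!/bin/sh
    # Run from the single admin host; slon_kill/slon_start are the altperl
    # tools you are already using, and they read the node list from $CONF.
    CONF=/path/to/slon_tools.conf

    # stop every slon for the cluster before maintenance
    slon_kill -c "$CONF"

    # ... run the maintenance against the origin database here ...

    # bring the slons for nodes 1 and 2 back up afterwards
    slon_start -c "$CONF" 1
    slon_start -c "$CONF" 2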

The approach we have tended to take has been to define a "database
administration" host which hosts both slons and slonik.  That seems to be
most convenient.  Commonly, we have added a host (real or virtual) that is
devoted to this sort of thing, separate from the hosts supporting Postgres
backends.

That adds an extra host, so I don't think it's entirely fair to call it a
"simpler approach"; on the other hand, with a separate host we never need
to think about whether a given host is hosting an origin or a subscriber,
or about shifting database management tasks elsewhere if we reshape the
cluster.  We just assume "connect to the DB App Server and manage things
from there."