Jan Wieck JanWieck
Wed Jul 14 20:51:55 PDT 2004
On 7/14/2004 1:17 AM, George McQuade wrote:

> Hello Slony Team,
> 
> Awesome work! I have a quick question about a master node that is behind
> a dynamic IP address DSL connection.
> 
> The master db replicates fine to the slave.
> When the master's IP address changes, I'm thinking I can update the sl_path tables:

I've had pretty good experience running Slony over a WAN just this 
week. There is a 200-user order-mix TPC-W benchmark running on a server 
at my home in Philadelphia. It generates about 60,000 transactions with 
15,000 updated rows per hour. The database is replicated to one local 
box, and I have another replica in a virtual machine on my laptop here 
in Toronto.

The connection between the two systems is established through 
compressed ssh port forwarding. For example, running on foo

     ssh -C -N -L5433:localhost:5432 -R5433:localhost:5432 bar

makes each system's database reachable from the other on 
localhost:5433. The link is compressed and secure, and I don't have to 
reconfigure sl_path when the IP changes (mine never does, Comcast is 
pretty good at that). I just restart the ssh session with the changed 
IP.
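If you don't want to restart the tunnel by hand each time the link 
drops, a minimal keep-alive wrapper does the job. This is just a sketch 
using the same ports and peer name ("bar") as the example above; tools 
like autossh handle this more robustly.

```shell
#!/bin/sh
# Keep-alive sketch for the tunnel above. The peer name "bar" and the
# port numbers are taken from the example; adjust them to your setup.
# ssh exits when the connection dies (e.g. after the peer's IP changes),
# so we simply reconnect in a loop.
while true; do
    ssh -C -N -L5433:localhost:5432 -R5433:localhost:5432 bar
    sleep 10    # brief pause before redialing
done
```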

> 
> In master:
> update _test2.sl_path set pa_conninfo='dbname=winls host=mmm.mmm.mmm.mmm
> user=postgres' where _test2.sl_path.pa_server=1;
> 
> Then exact same command in slave:
> update _test2.sl_path set pa_conninfo='dbname=winls host=mmm.mmm.mmm.mmm
> user=postgres' where _test2.sl_path.pa_server=1;
> 
> and I restart the slon daemons in both master and slave with proper
> host=mmm.mmm.mmm.mmm entry.
> 
> The slave slon seems to start OK.
> The master slon complains:
> 
> CONFIG storeNode: no_id=2 no_comment='Node 2'
> DEBUG2 setNodeLastEvent: no_id=2 event_seq=866
> CONFIG storePath: pa_server=2 pa_client=1 pa_conninfo="dbname=mirror
> host=sss.sss.sss.sss user=postgres" pa_connretry=10
> CONFIG storeListen: li_origin=2 li_receiver=1 li_provider=2
> CONFIG storeSet: set_id=1 set_origin=1 set_comment='WinLS tables'
> DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
> DEBUG2 main: last local event sequence = 911
> CONFIG main: configuration complete - starting threads
> DEBUG1 localListenThread: thread starts
> FATAL  localListenThread: Another slon daemon is serving this node
> already

There is a left-over pg_listener entry because of a postmaster crash. 
What you want to do is execute a slonik script containing

     cluster name = '...';
     node 2 admin conninfo = '...';
     restart node 2;

before attempting to fire up slon. Things will improve in that area.
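If you want to confirm that a stale entry is the cause before 
restarting, you can inspect pg_listener directly. A sketch, using the 
database name "winls" from this thread; on PostgreSQL of that era, 
pg_listener holds one row per active LISTEN registration.

```shell
# Show LISTEN registrations in the node's database. A listenerpid that
# no longer matches a running backend process indicates a stale entry
# left behind by the crash.
psql -d winls -U postgres -c "SELECT relname, listenerpid FROM pg_listener;"
```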


Jan

-- 
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck at Yahoo.com #



More information about the Slony1-general mailing list