Anoop Bhat ABhat at trustwave.com
Fri Nov 7 15:07:58 PST 2008
Yup. The slon daemons are running on the slave itself.

root     18692     1  0 14:23 pts/1    00:00:00 /usr/bin//slon -s 1000 -d2 vul_repl_11062008 host=crdmaster dbname=vul user=crd port=5432
root     18694 18692  0 14:23 pts/1    00:00:00 /usr/bin//slon -s 1000 -d2 vul_repl_11062008 host=crdmaster dbname=vul user=crd port=5432
root     18702     1  0 14:23 pts/1    00:00:00 /usr/bin/perl /usr/local/bin/slon_watchdog --config=vul_slontools.conf node1 30
root     18745     1  0 14:24 pts/1    00:00:00 /usr/bin//slon -s 1000 -d2 vul_repl_11062008 host=172.31.6.70 dbname=vul_slave user=crd port=5432
root     18748 18745  0 14:24 pts/1    00:00:00 /usr/bin//slon -s 1000 -d2 vul_repl_11062008 host=172.31.6.70 dbname=vul_slave user=crd port=5432
root     18759     1  0 14:24 pts/1    00:00:00 /usr/bin/perl /usr/local/bin/slon_watchdog --config=vul_slontools.conf node2 30

The log files show output like this:

2008-11-06 14:22:55 CST DEBUG2 localListenThread: Received event 2,125 SYNC
2008-11-06 14:23:01 CST DEBUG2 remoteListenThread_1: queue event 1,125 SYNC
2008-11-06 14:23:01 CST DEBUG2 remoteWorkerThread_1: Received event 1,125 SYNC
2008-11-06 14:23:01 CST DEBUG2 remoteWorkerThread_1: SYNC 125 processing
2008-11-06 14:23:01 CST DEBUG2 remoteWorkerThread_1: syncing set 1 with 0 table(s) from provider 1
2008-11-06 14:23:01 CST DEBUG2 remoteWorkerThread_1: current local log_status is 0
2008-11-06 14:23:01 CST DEBUG2 remoteWorkerThread_1_1: current remote log_status = 0
2008-11-06 14:23:01 CST DEBUG2 remoteHelperThread_1_1: 0.048 seconds delay for first row
2008-11-06 14:23:01 CST DEBUG2 remoteHelperThread_1_1: 0.095 seconds until close cursor
2008-11-06 14:23:01 CST DEBUG2 remoteHelperThread_1_1: inserts=0 updates=0 deletes=0
2008-11-06 14:23:01 CST DEBUG2 remoteWorkerThread_1: new sl_rowid_seq value: 1000000000000000
2008-11-06 14:23:01 CST DEBUG2 remoteWorkerThread_1: SYNC 125 done in 0.193 seconds
2008-11-06 14:23:01 CST DEBUG2 remoteWorkerThread_1: forward confirm 2,125 received by 1
2008-11-06 14:23:01 CST DEBUG2 syncThread: new sl_action_seq 1 - SYNC 126
2008-11-06 14:23:05 CST DEBUG2 localListenThread: Received event 2,126 SYNC
2008-11-06 14:23:11 CST DEBUG2 syncThread: new sl_action_seq 1 - SYNC 127
2008-11-06 14:23:12 CST DEBUG2 remoteListenThread_1: queue event 1,126 SYNC
2008-11-06 14:23:12 CST DEBUG2 remoteWorkerThread_1: Received event 1,126 SYNC
2008-11-06 14:23:12 CST DEBUG2 remoteWorkerThread_1: SYNC 126 processing
2008-11-06 14:23:12 CST DEBUG2 remoteWorkerThread_1: syncing set 1 with 0 table(s) from provider 1
2008-11-06 14:23:12 CST DEBUG2 remoteWorkerThread_1: current local log_status is 0
2008-11-06 14:23:12 CST DEBUG2 remoteWorkerThread_1_1: current remote log_status = 0
2008-11-06 14:23:12 CST DEBUG2 remoteHelperThread_1_1: 0.048 seconds delay for first row
2008-11-06 14:23:12 CST DEBUG2 remoteHelperThread_1_1: 0.095 seconds until close cursor
2008-11-06 14:23:12 CST DEBUG2 remoteHelperThread_1_1: inserts=0 updates=0 deletes=0
2008-11-06 14:23:12 CST DEBUG2 remoteWorkerThread_1: new sl_rowid_seq value: 1000000000000000
2008-11-06 14:23:12 CST DEBUG2 remoteWorkerThread_1: SYNC 126 done in 0.194 seconds
2008-11-06 14:23:12 CST DEBUG2 remoteWorkerThread_1: forward confirm 2,126 received by 1
2008-11-06 14:23:15 CST DEBUG2 localListenThread: Received event 2,127 SYNC
2008-11-06 14:23:21 CST DEBUG2 syncThread: new sl_action_seq 1 - SYNC 128
2008-11-06 14:23:23 CST DEBUG2 remoteListenThread_1: queue event 1,127 SYNC
2008-11-06 14:23:23 CST DEBUG2 remoteWorkerThread_1: Received event 1,127 SYNC
2008-11-06 14:23:23 CST DEBUG2 remoteWorkerThread_1: SYNC 127 processing
2008-11-06 14:23:23 CST DEBUG2 remoteWorkerThread_1: syncing set 1 with 0 table(s) from provider 1
2008-11-06 14:23:23 CST DEBUG2 remoteWorkerThread_1: current local log_status is 0
2008-11-06 14:23:23 CST DEBUG2 remoteWorkerThread_1_1: current remote log_status = 0
2008-11-06 14:23:23 CST DEBUG2 remoteHelperThread_1_1: 0.048 seconds delay for first row
2008-11-06 14:23:23 CST DEBUG2 remoteHelperThread_1_1: 0.095 seconds until close cursor
2008-11-06 14:23:23 CST DEBUG2 remoteHelperThread_1_1: inserts=0 updates=0 deletes=0
2008-11-06 14:23:23 CST DEBUG2 remoteWorkerThread_1: new sl_rowid_seq value: 1000000000000000
2008-11-06 14:23:23 CST DEBUG2 remoteWorkerThread_1: SYNC 127 done in 0.194 seconds
2008-11-06 14:23:23 CST DEBUG2 remoteWorkerThread_1: forward confirm 2,128 received by 1
2008-11-06 14:23:25 CST DEBUG2 localListenThread: Received event 2,128 SYNC

This just keeps on going.

Currently, there are zero rows in any of the tables on the slave. I'm contemplating abandoning the scripts on the slave and running them from the master, or dropping the altperl scripts altogether and just running the slonik commands manually.
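One detail worth noticing in the log above: every SYNC reports "syncing set 1 with 0 table(s) from provider 1", which suggests no tables were ever added to replication set 1, so there is nothing to copy. A rough slonik sketch of what adding a table looks like is below -- the cluster name and conninfo strings are taken from the ps output above, the table name is a placeholder, and note that Slony-I does not allow adding tables to a set that is already subscribed (an already-subscribed empty set would need a new set plus MERGE SET):

```
cluster name = vul_repl_11062008;
node 1 admin conninfo = 'host=crdmaster dbname=vul user=crd port=5432';
node 2 admin conninfo = 'host=172.31.6.70 dbname=vul_slave user=crd port=5432';

# New set, since set 1 is already subscribed
create set (id = 2, origin = 1, comment = 'set containing actual tables');

# 'public.my_table' is a placeholder for a real replicated table
set add table (set id = 2, origin = 1, id = 1,
               fully qualified name = 'public.my_table',
               comment = 'replicate my_table');

subscribe set (id = 2, provider = 1, receiver = 2, forward = no);

# Once the new subscription is up, MERGE SET can fold set 2 into set 1
```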

________________________________
From: Glyn Astill <glynastill at yahoo.co.uk>
Reply-To: <glynastill at yahoo.co.uk>
Date: Fri, 7 Nov 2008 07:25:22 -0600
To: <slony1-general at lists.slony.info>, Anoop Bhat <ABhat at trustwave.com>
Subject: Re: [Slony1-general] Sl_status

--- On Thu, 6/11/08, Anoop Bhat <ABhat at trustwave.com> wrote:

> Hi,
>
> I learned from cbrowne that I can query sl_status to find
> out the status of the replication.
>
> Within a minute or so, the origin's sl_status for the
> cluster I created looked like this
>
>  st_origin | st_received | st_last_event |      st_last_event_ts      | st_last_received |    st_last_received_ts     | st_last_received_event_ts  | st_lag_num_events |   st_lag_time
> -----------+-------------+---------------+----------------------------+------------------+----------------------------+----------------------------+-------------------+-----------------
>          1 |           2 |            35 | 2008-11-06 20:02:24.943084 |               35 | 2008-11-06 14:07:11.002322 | 2008-11-06 20:02:24.943084 |                 0 | 00:00:06.259891
>
>
> st_last_event and st_last_received grew.
>

Are the slons running? If so, looking in the logs is a good start.

> However, I'm not sure what's being replicated and
> if it's going into the right tables in the slave db.
>
> The db's are called vul and vul_slave. On vul_slave,
> what can I check on to see if it's gotten any data.
>

Make a change to a replicated table, then go check it on the slave.
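For example (untested sketch -- hosts, users, and databases are taken from this thread, but the table name is hypothetical):

```
# Insert a marker row on the master (replace some_replicated_table
# with a table that is actually in the replication set)
psql -h crdmaster -U crd -d vul \
     -c "INSERT INTO some_replicated_table (id, note) VALUES (999, 'repl test');"

# Give the slons a few SYNC intervals to propagate it
sleep 5

# The row should now appear on the slave
psql -h 172.31.6.70 -U crd -d vul_slave \
     -c "SELECT * FROM some_replicated_table WHERE id = 999;"
```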
