Rob Brucks rob.brucks at rackspace.com
Mon Feb 29 08:00:43 PST 2016
Everything is running on the same server.  One subscriber instance has been shut down.

test_db=# select * from _sloncluster.sl_status;
-[ RECORD 1 ]-------------+------------------------------
st_origin                 | 1
st_received               | 3
st_last_event             | 5000000264
st_last_event_ts          | 2016-02-29 09:58:24.455544-06
st_last_received          | 5000000175
st_last_received_ts       | 2016-02-29 09:43:47.461217-06
st_last_received_event_ts | 2016-02-29 09:43:38.254386-06
st_lag_num_events         | 89
st_lag_time               | 00:14:55.35601
-[ RECORD 2 ]-------------+------------------------------
st_origin                 | 1
st_received               | 2
st_last_event             | 5000000264
st_last_event_ts          | 2016-02-29 09:58:24.455544-06
st_last_received          | 5000000176
st_last_received_ts       | 2016-02-29 09:43:52.284371-06
st_last_received_event_ts | 2016-02-29 09:43:48.258894-06
st_lag_num_events         | 88
st_lag_time               | 00:14:45.351502
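
For reference, one way to cross-check what sl_status reports is to look at the raw confirmations each receiver has sent back.  A sketch, assuming the cluster schema is _sloncluster as in the query above:

      -- Latest event each receiver has confirmed for events originating on node 1
      SELECT con_received,
             max(con_seqno)     AS last_confirmed,
             max(con_timestamp) AS last_confirmed_at
        FROM _sloncluster.sl_confirm
       WHERE con_origin = 1
       GROUP BY con_received;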

Thanks,
--Rob

From: Melvin Davidson <melvin6925 at yahoo.com>
Reply-To: Melvin Davidson <melvin6925 at yahoo.com>
Date: Friday, February 26, 2016 at 8:19 PM
To: Rob Brucks <rob.brucks at rackspace.com>, "slony1-general at lists.slony.info" <slony1-general at lists.slony.info>
Subject: Re: [Slony1-general] Replication Lag?




________________________________
From: Rob Brucks <rob.brucks at rackspace.com>
To: "slony1-general at lists.slony.info" <slony1-general at lists.slony.info>
Sent: Friday, February 26, 2016 3:51 PM
Subject: [Slony1-general] Replication Lag?

I have a fairly simple test cluster: a master and two slave DBs, both subscribed to the master.  Everything runs on the same server (it's a playground), so there are three PG instances (ports 5432, 5433, 5434; connecting via sockets in /tmp) and a slon daemon for each instance.

I'm running Postgres 9.3.9 and Slony-I 2.2.4 on CentOS 6.7 x86_64.  Both Postgres and Slony were installed via yum from the PGDG repo.

I used the config below to initialize a very simple replication setup of only one table and one sequence.

My problem is that if I shut down just one slave Postgres instance, sl_status on the master shows replication stalling for both slave DBs instead of just the one I stopped.

But if I insert some test data into the master DB, the data shows up on the remaining active slave.  So replication to that slave is obviously still working.
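
A round-trip check along these lines (the column name "id" is illustrative; I'm not assuming the real shape of public.test):

      -- On the master (port 5432); column name "id" is illustrative
      INSERT INTO public.test (id) VALUES (12345);

      -- On the surviving slave (port 5433 or 5434); the row appears shortly after
      SELECT * FROM public.test WHERE id = 12345;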

We use sl_status to monitor replication, so we need it to accurately report lag when there's an issue.  The Slony 1.2 version we used before did not behave this way; it accurately reported which slave was not replicating.
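
A lag check against sl_status typically looks something like this (a sketch; the schema name follows the query above and the 5-minute threshold is arbitrary):

      -- Flag any subscriber lagging by more than 5 minutes (threshold is illustrative)
      SELECT st_received, st_lag_num_events, st_lag_time
        FROM _sloncluster.sl_status
       WHERE st_lag_time > interval '5 minutes';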

Why does sl_status report lag on the active slave even though replication appears to be working fine?

Do I have a misconfiguration somewhere?

Thanks,
Rob


Here's my slony config:


      CLUSTER NAME = slony;
      NODE 1 ADMIN CONNINFO = 'dbname=test_db host=/tmp port=5432 user=slony';
      NODE 2 ADMIN CONNINFO = 'dbname=test_db host=/tmp port=5433 user=slony';
      NODE 3 ADMIN CONNINFO = 'dbname=test_db host=/tmp port=5434 user=slony';

############ CLUSTERS

      INIT CLUSTER (ID = 1, COMMENT = 'Master');


############ NODES

      STORE NODE (ID = 2, COMMENT = 'Slave1', EVENT NODE = 1);
      STORE NODE (ID = 3, COMMENT = 'Slave2', EVENT NODE = 1);


############ PATHS

      STORE PATH (SERVER = 1, CLIENT = 2, CONNINFO = 'dbname=test_db host=/tmp port=5432 user=slony');
      STORE PATH (SERVER = 1, CLIENT = 3, CONNINFO = 'dbname=test_db host=/tmp port=5432 user=slony');
      STORE PATH (SERVER = 2, CLIENT = 1, CONNINFO = 'dbname=test_db host=/tmp port=5433 user=slony');
      STORE PATH (SERVER = 2, CLIENT = 3, CONNINFO = 'dbname=test_db host=/tmp port=5433 user=slony');
      STORE PATH (SERVER = 3, CLIENT = 1, CONNINFO = 'dbname=test_db host=/tmp port=5434 user=slony');
      STORE PATH (SERVER = 3, CLIENT = 2, CONNINFO = 'dbname=test_db host=/tmp port=5434 user=slony');


############ SETS

      CREATE SET (ID = 1, ORIGIN = 1, COMMENT = 'TEST Set 1');

############ SEQUENCES

      SET ADD SEQUENCE (SET ID = 1, ORIGIN = 1, ID = 1, FULLY QUALIFIED NAME = '"public"."test_seq"');

############ TABLES

      SET ADD TABLE (SET ID = 1, ORIGIN = 1, ID = 2, FULLY QUALIFIED NAME = '"public"."test"');

############ SUBSCRIPTIONS

      SUBSCRIBE SET (ID = 1, PROVIDER = 1, RECEIVER = 2, FORWARD = YES);
      SUBSCRIBE SET (ID = 1, PROVIDER = 1, RECEIVER = 3, FORWARD = YES);
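
(For completeness: this script gets fed to slonik, and then one slon daemon is started per node, roughly like this.  The script file name and exact invocations are illustrative; the cluster name matches the config above.)

      # Initialize the cluster (file name is illustrative)
      slonik init_cluster.slonik

      # One slon per node, all on the same box
      slon slony 'dbname=test_db host=/tmp port=5432 user=slony' &
      slon slony 'dbname=test_db host=/tmp port=5433 user=slony' &
      slon slony 'dbname=test_db host=/tmp port=5434 user=slony' &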

_______________________________________________
Slony1-general mailing list
Slony1-general at lists.slony.info
http://lists.slony.info/mailman/listinfo/slony1-general


========================================================================================================
The config looks good.
On which server are you running the slon process?
What does your query for sl_status look like?

Melvin Davidson
    Cell 720-320-0155

I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
www.youtube.com/unusedhero
Folk Alley - All Folk - 24 Hours a day
www.folkalley.com

