Sat Feb 15 23:25:23 PST 2014
- Previous message: [Slony1-general] Still having issues with wide area replication. large table , copy set 2 failed
- Next message: [Slony1-general] subscriber node has wrong localnodeid after initial sync
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Sat, Feb 15, 2014 at 10:48 PM, Jeff Frost <jeff at pgexperts.com> wrote:
> It's probably a firewall timing out your PostgreSQL connection while the
> indexes are being built on the replica.
>
> Look into tcp keep alive settings.

Yes, this is what I thought it was when I first started with this, but I didn't make any progress. TCP keepalive defaults to 7200 seconds (2 hours), and this is failing in about an hour, so I'll have to look at the firewalls between us. But since I'm connected to these boxes the entire time, from the same network that originates the slon configuration, I doubt the firewalls are reaping the connections.

Looking at the TCP keepalive settings, I don't think there is any tuning there that can help:

net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 75

Well, maybe I can reduce tcp_keepalive_time just to make some traffic happen within the hour-plus that the indexes are being created. Hmmm, maybe.

Tory
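As a side note, the system-wide sysctls above don't have to be changed globally: keepalive can be enabled and tuned per connection on Linux, which is what a replication daemon's connection would need. A minimal sketch (the 300s/60s/5 values are illustrative choices, not values from this thread):

```python
import socket

def keepalive_socket(idle=300, interval=60, probes=5):
    """Create a TCP socket with per-connection keepalive overrides so an
    idle connection (e.g. while indexes build on the replica) generates
    probe traffic well inside a firewall's idle-timeout window."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific options; guard with hasattr for portability.
    if hasattr(socket, "TCP_KEEPIDLE"):
        # Seconds of idleness before the first keepalive probe.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        # Seconds between subsequent probes.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        # Unanswered probes before the connection is declared dead.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return s

s = keepalive_socket()
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
```

For PostgreSQL connections specifically, libpq exposes the same knobs without any code: the keepalives_idle, keepalives_interval, and keepalives_count connection parameters (and the server-side tcp_keepalives_idle, tcp_keepalives_interval, tcp_keepalives_count GUCs), which may be a simpler fix here than touching the sysctls.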
More information about the Slony1-general mailing list