Fri Aug 12 07:05:26 PDT 2011
- Previous message: [Slony1-general] problem with replication of the same table in multiple clusters
- Next message: [Slony1-general] upgrade from 1.2.20 to 2.0.6
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On 11/08/2011 18:05, sivakumar krishnamurthy wrote:
> Hi All,
> In one of my production environments, due to network restrictions, I
> used to have the following setup:
>
>     Node 1 ---- Node 2 ---- Node 3
>
> Node 1 and Node 2 are part of Slony cluster1; Node 2 and Node 3 are
> part of Slony cluster2. Table A is replicated from Node 1 to Node 2,
> and then on to Node 3. This means that on Node 2, table A carried
> both a log_trigger (for cluster2) and a deny_access trigger (for
> cluster1).
> This setup worked fine with PG 8.3.12 and Slony 1.2.11. However, the
> same setup does not work with PG 9.0.4 and Slony 2.0.6: any DML
> change applied on Node 1 is replicated to Node 2, but not on to
> Node 3. The sl_log_[12] tables of cluster2 contain no corresponding
> entries for the DML changes, even though I can see SYNC events
> (sl_event) being replicated from Node 2 to Node 3.
> Can you please help me?
> Thanks,
> Sivakumar.K

(...)

Hi,

It seems to me that what you want is cascading replication. Stacking
two clusters on the same table no longer works in 2.0, most likely
because Slony-I 2.x applies remote changes with
session_replication_role set to 'replica'; cluster2's ordinary log
trigger on Node 2 therefore never fires for rows applied by cluster1's
slon, which is why cluster2's sl_log_[12] tables stay empty. To get
the cascade, define a single cluster containing all three nodes. You
would have:

slonik <<_EOF_
cluster name = cluster1;
node 1 admin conninfo = 'dbname=test5432 host=localhost port=5432 user=slony';
node 2 admin conninfo = 'dbname=test5434 host=localhost port=5434 user=slony';
node 3 admin conninfo = 'dbname=test5433 host=localhost port=5433 user=slony';

init cluster (id=1, comment='Master Node');
store node (id=2, comment='Node 2', event node=1);
store node (id=3, comment='Node 3', event node=2);

create set (id=4, origin=1, comment='geo_region_table');
set add table (set id=4, origin=1, id=20, fully qualified name='public.geo_region', comment='geo_region table');

store path (server=1, client=2, conninfo='dbname=test5432 host=localhost port=5432 user=slony');
store path (server=2, client=1, conninfo='dbname=test5434 host=localhost port=5434 user=slony');
store path (server=2, client=3, conninfo='dbname=test5434 host=localhost port=5434 user=slony');
store path (server=3, client=2, conninfo='dbname=test5433 host=localhost port=5433 user=slony');
_EOF_

nohup slon cluster1 'dbname=test5432 port=5432 user=slony' > /data5432/cluster1.log 2>&1 &
nohup slon cluster1 'dbname=test5433 port=5433 user=slony' > /data5433/cluster1.log 2>&1 &
nohup slon cluster1 'dbname=test5434 port=5434 user=slony' > /data5434/cluster1.log 2>&1 &

slonik <<_EOF_
cluster name = cluster1;
node 1 admin conninfo = 'dbname=test5432 host=localhost port=5432 user=slony';
node 2 admin conninfo = 'dbname=test5434 host=localhost port=5434 user=slony';
node 3 admin conninfo = 'dbname=test5433 host=localhost port=5433 user=slony';

subscribe set (id=4, provider=1, receiver=2, forward=yes, omit copy=no);
sync (id=1);
wait for event (origin=1, confirmed=all, wait on=1);
subscribe set (id=4, provider=2, receiver=3, forward=no, omit copy=no);
_EOF_

Note that Node 2 must subscribe with forward=yes, since it provides
the set to Node 3, and that paths are stored between Nodes 1 and 2 and
between Nodes 2 and 3 (the pairs that actually talk to each other),
not between Nodes 1 and 3.

--
Stéphane Schildknecht
Loxodata
PostgreSQL regional contact
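Once the slons are running, you can check that the cascade is wired
correctly by querying the cluster's catalog schema. A minimal sketch,
assuming the node numbering above and the schema name _cluster1
(Slony derives it from the cluster name):

    -- On node 1: replication status toward each downstream node.
    -- Rows with st_received = 2 and 3 should appear, with bounded lag.
    SELECT st_origin, st_received, st_lag_num_events, st_lag_time
      FROM _cluster1.sl_status;

    -- On any node: the subscription tree for set 4. Node 2 must show
    -- sub_forward = true (it provides the set to node 3), and node 3's
    -- sub_provider must be 2.
    SELECT sub_set, sub_provider, sub_receiver, sub_forward, sub_active
      FROM _cluster1.sl_subscribe
     ORDER BY sub_receiver;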
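To see why the original two-cluster stack stopped logging under 2.0.6,
it can also help to inspect the trigger enable states on Node 2. A
sketch, assuming the replicated table is public.geo_region as in the
script above:

    -- Ordinary triggers have tgenabled = 'O' and are skipped when rows
    -- are applied under session_replication_role = 'replica', which is
    -- how the Slony 2.x remote worker applies changes; Slony sets its
    -- own triggers to 'R' (replica) or 'A' (always) as appropriate.
    SELECT tgname, tgenabled
      FROM pg_trigger
     WHERE tgrelid = 'public.geo_region'::regclass
       AND NOT tgisinternal;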