Jérémie LOPEZ jlo
Thu Nov 4 17:21:32 PST 2004
Thanks a lot, Christopher; a complete example is certainly helpful.

However, my case is a bit different, so let me be more specific:
Let's say I have 2 PostgreSQL servers - PGSRV_1 and PGSRV_2 - and 3
different databases - DB_CLIENT, DB_PRODUCT and DB_ORDER.
DB_CLIENT is hosted on PGSRV_1; DB_PRODUCT and DB_ORDER are hosted on
PGSRV_2.
I want to set up replication of the databases for backup purposes:
DB_CLIENT should be replicated on PGSRV_2, and DB_PRODUCT and DB_ORDER
should be replicated on PGSRV_1.

So I would have the DB_CLIENT master, a DB_PRODUCT slave and a
DB_ORDER slave on PGSRV_1, and the DB_PRODUCT master, the DB_ORDER
master and a DB_CLIENT slave on PGSRV_2.

So for this replication system, I suppose I must have 6 nodes, and
hence 6 slons as well (in the real project I have 10 databases rather
than only 3, so I suppose that would give me 20 slons running), but how
many clusters must/should/could I have?
One possible configuration - certainly the simplest - is to define 3
clusters of 2 nodes each, one for each database to replicate; a sketch
of that option follows below. But would it be possible to manage fewer
clusters, and wouldn't that be more practical?
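
For reference, here is roughly what one of those three 2-node clusters
would look like in slonik; the cluster name, host names, user and port
below are placeholders for my actual setup:

# Slonik sketch for the DB_CLIENT cluster (one of three)
cluster name = cluster_client;
   node 1 admin conninfo='host=pgsrv_1 dbname=db_client user=postgres port=5432';
   node 2 admin conninfo='host=pgsrv_2 dbname=db_client user=postgres port=5432';
   init cluster (id = 1, comment = 'DB_CLIENT master at PGSRV_1');
   store node (id = 2, event node = 1, comment = 'DB_CLIENT slave at PGSRV_2');
   store path (server = 1, client = 2, conninfo = 'host=pgsrv_1 dbname=db_client user=postgres port=5432');
   store path (server = 2, client = 1, conninfo = 'host=pgsrv_2 dbname=db_client user=postgres port=5432');
   store listen (origin = 1, receiver = 2, provider = 1);
   store listen (origin = 2, receiver = 1, provider = 2);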

(In fact I tried to define a single cluster of 6 nodes, but I may have
done it wrong, because I get the following error at the INIT CLUSTER
command:
loading of file /pgsql/share/xxid.v73.sql: PGRES_FATAL_ERROR ERROR:  current
transaction is aborted, queries ignored until end of transaction block)



Jérémie.


> -----Original Message-----
> From: Christopher Browne [mailto:cbbrowne at ca.afilias.info]
> Sent: Wednesday, 3 November 2004 20:18
> To: Jérémie LOPEZ
> Cc: slony1-general at gborg.postgresql.org
> Subject: Re: [Slony1-general] One cluster for all replications... or
> not?
> 
> Jérémie LOPEZ <jlo at mgn.net> writes:
> > What is the best approach for several replications running on
> > shared machines?
> >
> > Let's consider I have - for the same project - 2 PostgreSQL servers 
> > (on 2 distinct machines): I want the server 1 to be the master for 
> > database A and the slave for database B, and the server 2 to be the 
> > master for database B and the slave for database A.
> >
> > Which of the following best follows the "slony way":
> > - defining two clusters (one for each database to replicate) with 2
> >   nodes each, or
> > - defining a single cluster with 4 nodes (and making the correct
> >   subscriptions)?
> >
> > I would at first have separated into different clusters the groups
> > of nodes which have no interactions, but maybe letting Slony know
> > about all the replication in the project could be useful, or
> > simpler?
> 
> You'll have to be a little more specific to get a specific answer.
> 
> Supposing we're talking about two otherwise-much-the-same DB 
> instances, then you could set this up via one "Slony-I 
> cluster," with two replication sets that have different 
> "shapes" for their networks of subscribers.
> 
> Thus, you might do...
> # Slonik script
> cluster name = mgndata;
>    node 1 admin conninfo='host=host1 dbname=a user=postgres port=5432';
>    node 2 admin conninfo='host=host2 dbname=b user=postgres port=5432';
>    init cluster (id = 1, comment = 'Node 1 - a at host1');
>    store node (id = 2, event node = 1, comment = 'Node 2 - b at host2');
>       store path (server = 1, client = 2, conninfo = 'host=host1 dbname=a user=postgres port=5432');
>       store path (server = 2, client = 1, conninfo = 'host=host2 dbname=b user=postgres port=5432');
>       store listen (origin = 1, receiver = 2, provider = 1);
>       store listen (origin = 2, receiver = 1, provider = 2);
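> 
> With one shared cluster, each node then needs just one slon daemon
> watching it.  Assuming the same conninfo values as above, starting
> them would look something like this (a sketch, untested):
> 
> # one slon per node, pointed at that node's database
> slon mgndata 'host=host1 dbname=a user=postgres port=5432'
> slon mgndata 'host=host2 dbname=b user=postgres port=5432'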
> 
> Then you'd define two sets...
> 
> # Slonik to define set #1
> cluster name = mgndata;
>  node 1 admin conninfo='host=host1 dbname=a user=postgres port=5432';
>  node 2 admin conninfo='host=host2 dbname=b user=postgres port=5432';
> 
> table add key (
>     node id=1,
>     full qualified name='public.slony_test'
> );
> 
> try {
>       create set (id = 1, origin = 1, comment = 'Set 1 for mgndata');
> } on error {
>       exit -1;
> }
> 
> set add table (set id = 1, origin = 1, id = 1,
>     full qualified name = 'public.slony_test',
>     comment = 'Table public.slony_test without primary key',
>     key = serial);
> 
> set add table (set id = 1, origin = 1, id = 2,
>     full qualified name = 'public.table4',
>     comment = 'Table public.table4 with primary key');
> 
> set add table (set id = 1, origin = 1, id = 3,
>     full qualified name = 'public.mgn_table_22',
>     comment = 'Table public.mgn_table_22 with primary key');
> 
> set add table (set id = 1, origin = 1, id = 4,
>     full qualified name = 'public.mgn_table_8', key = 'mgn_table_8_idx_',
>     comment = 'Table public.mgn_table_8 with candidate primary key mgn_table_8_idx_');
> 
> set add table (set id = 1, origin = 1, id = 5,
>     full qualified name = 'public.mgn_table_4', key = 'mgn_table_4_idx_',
>     comment = 'Table public.mgn_table_4 with candidate primary key mgn_table_4_idx_');
> 
> set add table (set id = 1, origin = 1, id = 6,
>     full qualified name = 'public.mgn_table_3', key = 'mgn_table_3_idx_',
>     comment = 'Table public.mgn_table_3 with candidate primary key mgn_table_3_idx_');
> 
> set add table (set id = 1, origin = 1, id = 7,
>     full qualified name = 'public.mgn_table_5', key = 'mgn_table_5_idx_',
>     comment = 'Table public.mgn_table_5 with candidate primary key mgn_table_5_idx_');
> 
> set add table (set id = 1, origin = 1, id = 8,
>     full qualified name = 'public.mgn_table_2', key = 'mgn_table_2_idx_',
>     comment = 'Table public.mgn_table_2 with candidate primary key mgn_table_2_idx_');
> 
> set add table (set id = 1, origin = 1, id = 9,
>     full qualified name = 'public.mgn_table_6', key = 'mgn_table_6_idx_',
>     comment = 'Table public.mgn_table_6 with candidate primary key mgn_table_6_idx_');
> 
> set add table (set id = 1, origin = 1, id = 10,
>     full qualified name = 'public.mgn_table_9', key = 'mgn_table_9_idx_',
>     comment = 'Table public.mgn_table_9 with candidate primary key mgn_table_9_idx_');
> 
> set add table (set id = 1, origin = 1, id = 11,
>     full qualified name = 'public.mgn_table_1', key = 'mgn_table_1_idx_',
>     comment = 'Table public.mgn_table_1 with candidate primary key mgn_table_1_idx_');
> 
> set add table (set id = 1, origin = 1, id = 12,
>     full qualified name = 'public.mgn_table_7', key = 'mgn_table_7_idx_',
>     comment = 'Table public.mgn_table_7 with candidate primary key mgn_table_7_idx_');
> 
> set add sequence (set id = 1, origin = 1, id = 1,
>     full qualified name = 'public.mgn_table_seq_',
>     comment = 'Sequence public.mgn_table_seq_');
> 
> You'd subscribe node 2 to set 1 to get "B" to subscribe to 
> the tables provided on "A."
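> 
> A sketch of that subscription (forward = no is fine with only two
> nodes; use forward = yes if a node should be able to feed further
> subscribers):
> 
> # Slonik to subscribe node 2 to set 1
> cluster name = mgndata;
>  node 1 admin conninfo='host=host1 dbname=a user=postgres port=5432';
>  node 2 admin conninfo='host=host2 dbname=b user=postgres port=5432';
> subscribe set (id = 1, provider = 1, receiver = 2, forward = no);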
> 
> A similar script would set up "set 2" tables which "A" would 
> then subscribe to.
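> 
> In outline (the list of "B" tables is elided), that second script
> would amount to:
> 
> # Slonik outline for set #2, originating on node 2
> cluster name = mgndata;
>  node 1 admin conninfo='host=host1 dbname=a user=postgres port=5432';
>  node 2 admin conninfo='host=host2 dbname=b user=postgres port=5432';
> create set (id = 2, origin = 2, comment = 'Set 2 for mgndata');
> # ... set add table / set add sequence entries for the "B" tables ...
> subscribe set (id = 2, provider = 2, receiver = 1, forward = no);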
> 
> This requires that you can describe the connections to the databases thus:
>       store path (server = 1, client = 2, conninfo = 'host=host1 dbname=a user=postgres port=5432');
>       store path (server = 2, client = 1, conninfo = 'host=host2 dbname=b user=postgres port=5432');
> 
> If the database name (e.g. - dbname=a or dbname=b) needs to 
> differ between the sets, then it would be necessary to set up 
> another Slony-I cluster to point to the appropriate databases.
> 
> You could set up two Slony-I clusters instead; I don't think 
> that people have tried this sort of configuration widely 
> enough for there to be any clear and evident "best practices" 
> on this yet.  
> 
> Having fewer slon processes ought to lead to fewer 
> connections and some savings of resources, which would point 
> to trying to "share" one Slony-I cluster for both of these 
> sets.  I can't point to obvious demerits to the idea.
> --
> "cbbrowne","@","ca.afilias.info"
> <http://dev6.int.libertyrms.com/>
> Christopher Browne
> (416) 673-4124 (land)
> 