Christopher Browne cbbrowne
Wed Nov 3 19:19:00 PST 2004
Jérémie LOPEZ <jlo at mgn.net> writes:
> What is the best approach when several replication setups run on shared machines?
>
> Let's consider I have - for the same project - 2 PostgreSQL servers
> (on 2 distinct machines): I want the server 1 to be the master for
> database A and the slave for database B, and the server 2 to be the
> master for database B and the slave for database A.
>
> Which of these best follows the "Slony way":
> - defining two clusters (one for each database to replicate), with 2 nodes each, or
> - defining a single cluster with 4 nodes (and making the appropriate subscriptions)?
>
> My first instinct would be to separate into different clusters the
> groups of nodes which have no interaction, but maybe letting Slony
> know about all the replication in the project could be useful/simpler?

You'll have to be a little more specific to get a specific answer.

Supposing we're talking about two otherwise-much-the-same DB
instances, then you could set this up via one "Slony-I cluster," with
two replication sets that have different "shapes" for their networks
of subscribers.

Thus, you might do...
# Slonik script to initialize the cluster
cluster name = mgndata;
   node 1 admin conninfo='host=host1 dbname=a user=postgres port=5432';
   node 2 admin conninfo='host=host2 dbname=b user=postgres port=5432';
   init cluster (id = 1, comment = 'Node 1 - a at host1');
   store node (id = 2, event node = 1, comment = 'Node 2 - b at host2');
      # Each node needs a path telling the other node's slon how to reach it
      store path (server = 1, client = 2, conninfo = 'host=host1 dbname=a user=postgres port=5432');
      store path (server = 2, client = 1, conninfo = 'host=host2 dbname=b user=postgres port=5432');
      # ... and a listen entry saying which provider to use for each origin's events
      store listen (origin = 1, receiver = 2, provider = 1);
      store listen (origin = 2, receiver = 1, provider = 2);

Then you'd define two sets...

# Slonik to define set #1
cluster name = mgndata;
 node 1 admin conninfo='host=host1 dbname=a user=postgres port=5432';
 node 2 admin conninfo='host=host2 dbname=b user=postgres port=5432';
# Have Slony-I add a surrogate key to a table that lacks a primary key
table add key (
    node id=1,
    full qualified name='public.slony_test'
);

try {
      create set (id = 1, origin = 1, comment = 'Set 1 for mgndata');
} on error {
      exit -1;
}

set add table (set id = 1, origin = 1, id = 1, full qualified name = 'public.slony_test', comment = 'Table public.slony_test without primary key', key=serial);

set add table (set id = 1, origin = 1, id = 2, full qualified name = 'public.table4', comment = 'Table public.table4 with primary key');

set add table (set id = 1, origin = 1, id = 3, full qualified name = 'public.mgn_table_22', comment = 'Table public.mgn_table_22 with primary key');

set add table (set id = 1, origin = 1, id = 4, full qualified name = 'public.mgn_table_8', key='mgn_table_8_idx_', comment = 'Table public.mgn_table_8 with candidate primary key mgn_table_8_idx_');

set add table (set id = 1, origin = 1, id = 5, full qualified name = 'public.mgn_table_4', key='mgn_table_4_idx_', comment = 'Table public.mgn_table_4 with candidate primary key mgn_table_4_idx_');

set add table (set id = 1, origin = 1, id = 6, full qualified name = 'public.mgn_table_3', key='mgn_table_3_idx_', comment = 'Table public.mgn_table_3 with candidate primary key mgn_table_3_idx_');

set add table (set id = 1, origin = 1, id = 7, full qualified name = 'public.mgn_table_5', key='mgn_table_5_idx_', comment = 'Table public.mgn_table_5 with candidate primary key mgn_table_5_idx_');

set add table (set id = 1, origin = 1, id = 8, full qualified name = 'public.mgn_table_2', key='mgn_table_2_idx_', comment = 'Table public.mgn_table_2 with candidate primary key mgn_table_2_idx_');

set add table (set id = 1, origin = 1, id = 9, full qualified name = 'public.mgn_table_6', key='mgn_table_6_idx_', comment = 'Table public.mgn_table_6 with candidate primary key mgn_table_6_idx_');

set add table (set id = 1, origin = 1, id = 10, full qualified name = 'public.mgn_table_9', key='mgn_table_9_idx_', comment = 'Table public.mgn_table_9 with candidate primary key mgn_table_9_idx_');

set add table (set id = 1, origin = 1, id = 11, full qualified name = 'public.mgn_table_1', key='mgn_table_1_idx_', comment = 'Table public.mgn_table_1 with candidate primary key mgn_table_1_idx_');

set add table (set id = 1, origin = 1, id = 12, full qualified name = 'public.mgn_table_7', key='mgn_table_7_idx_', comment = 'Table public.mgn_table_7 with candidate primary key mgn_table_7_idx_');


set add sequence (set id = 1, origin = 1, id = 1, full qualified name = 'public.mgn_table_seq_', comment = 'Sequence public.mgn_table_seq_');

You'd subscribe node 2 to set 1 so that "B" receives the tables provided on "A."
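In slonik terms, that subscription might look like this (a sketch, reusing the admin conninfo lines from the scripts above):

# Slonik to subscribe node 2 to set 1
cluster name = mgndata;
 node 1 admin conninfo='host=host1 dbname=a user=postgres port=5432';
 node 2 admin conninfo='host=host2 dbname=b user=postgres port=5432';
 # forward = no, as there are no cascaded subscribers behind node 2
 subscribe set (id = 1, provider = 1, receiver = 2, forward = no);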

A similar script would set up "set 2" tables which "A" would then
subscribe to.
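
As a sketch (the set 2 table names here are placeholders; note that table IDs must be unique across the whole cluster, so set 2's tables should not reuse IDs 1-12):

# Slonik to define set #2, originating on node 2
cluster name = mgndata;
 node 1 admin conninfo='host=host1 dbname=a user=postgres port=5432';
 node 2 admin conninfo='host=host2 dbname=b user=postgres port=5432';
try {
      create set (id = 2, origin = 2, comment = 'Set 2 for mgndata');
} on error {
      exit -1;
}
set add table (set id = 2, origin = 2, id = 101, full qualified name = 'public.b_table_1', comment = 'hypothetical table originating on B');

# ... and, once the set is defined, node 1 subscribes:
subscribe set (id = 2, provider = 2, receiver = 1, forward = no);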

This requires that you can describe the connections to databases thus:
      store path (server = 1, client = 2, conninfo = 'host=host1 dbname=a user=postgres port=5432');
      store path (server = 2, client = 1, conninfo = 'host=host2 dbname=b user=postgres port=5432');

If the database name (e.g. - dbname=a or dbname=b) needs to differ
between the sets, then it would be necessary to set up another Slony-I
cluster to point to the appropriate databases.
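
That second cluster would get its own name and preamble, along the lines of (dbname=c and dbname=d being hypothetical placeholders):

# A hypothetical second Slony-I cluster pointing at differently-named databases
cluster name = mgndata2;
   node 1 admin conninfo='host=host1 dbname=c user=postgres port=5432';
   node 2 admin conninfo='host=host2 dbname=d user=postgres port=5432';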

You could set up two Slony-I clusters instead; I don't think people
have tried this sort of configuration widely enough yet for any clear
"best practices" to have emerged.

Having fewer slon processes ought to lead to fewer connections and
some savings of resources, which would point to trying to "share" one
Slony-I cluster for both of these sets.  I can't point to obvious
demerits to the idea.
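
For instance, sharing one cluster means just one slon per node, each handling all of that node's sets (invocations sketched with the conninfos from above):

# slon takes the cluster name plus the conninfo of the node it manages
slon mgndata 'host=host1 dbname=a user=postgres port=5432'   # node 1
slon mgndata 'host=host2 dbname=b user=postgres port=5432'   # node 2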
-- 
"cbbrowne","@","ca.afilias.info"
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 673-4124 (land)

