Fri Jul 15 10:27:12 PDT 2005
- Previous message: [Slony1-general] Pthread detection for Win32
- Next message: [Slony1-general] proposal to change the install location of "share" files
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Hello all!
This is a report on testing whether Slony-I can add a node to an existing cluster.
I have a working two-node replication cluster; the cluster and its replication set run very nicely.
Now I want to extend it to three nodes.
1. Following the script from "Replicating Your First Database", I wrote addThreeNode.sh:
#!/bin/sh
CLUSTERNAME=ts2
MASTERDBNAME=ts2
MASTERPORT=8432
SLAVEDBNAME1=ts2
SLAVEPORT1=8432
SLAVEDBNAME2=ts2
SLAVEPORT2=8432
MASTERHOST=10.10.10.67
SLAVEHOST1=10.10.10.36
SLAVEHOST2=10.10.10.83
REPLICATIONUSER=master
PGBENCHUSER=master
export CLUSTERNAME MASTERDBNAME MASTERPORT SLAVEDBNAME1 SLAVEPORT1 MASTERHOST SLAVEHOST1 REPLICATIONUSER PGBENCHUSER SLAVEDBNAME2 SLAVEPORT2 SLAVEHOST2
slonik <<_EOF_
cluster name = $CLUSTERNAME;
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME1 host=$SLAVEHOST1 user=$PGBENCHUSER';
node 3 admin conninfo = 'dbname=$SLAVEDBNAME2 host=$SLAVEHOST2 user=$PGBENCHUSER';
store node (id=3, comment = 'Slave node 2');
#store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
store path (server = 1, client = 3, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
#store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME1 host=$SLAVEHOST1 user=$PGBENCHUSER');
store path (server = 2, client = 3, conninfo='dbname=$SLAVEDBNAME1 host=$SLAVEHOST1 user=$PGBENCHUSER');
store path (server = 3, client = 1, conninfo='dbname=$SLAVEDBNAME2 host=$SLAVEHOST2 user=$PGBENCHUSER');
store path (server = 3, client = 2, conninfo='dbname=$SLAVEDBNAME2 host=$SLAVEHOST2 user=$PGBENCHUSER');
#store listen (origin = 1, receiver = 2, provider = 1);
store listen (origin = 1, receiver = 3, provider = 1);
#store listen (origin = 2, receiver = 1, provider = 2);
store listen (origin = 2, receiver = 3, provider = 1);
store listen (origin = 3, receiver = 1, provider = 3);
store listen (origin = 3, receiver = 2, provider = 1);
_EOF_
The script executed with no errors.
I checked that the slon processes for node 1 and node 2 were running nicely,
then started a slon process for node 3.
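For reference, starting the node-3 slon daemon would look roughly like this sketch, reusing the connection values from the scripts above; the log-file name is a made-up example, not something from my setup:

```shell
# Sketch only: start the slon daemon for node 3 (the new slave).
# Values match the variables in addThreeNode.sh; slon_node3.log is hypothetical.
CLUSTERNAME=ts2
SLAVEDBNAME2=ts2
SLAVEHOST2=10.10.10.83
REPLICATIONUSER=master
CONNINFO="dbname=$SLAVEDBNAME2 host=$SLAVEHOST2 user=$REPLICATIONUSER"
echo "$CONNINFO"
# slon $CLUSTERNAME "$CONNINFO" >slon_node3.log 2>&1 &
```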
2. I then ran the script SubscribeSet.sh:
#!/bin/sh
CLUSTERNAME=ts2
MASTERDBNAME=ts2
MASTERPORT=8432
SLAVEDBNAME1=ts2
SLAVEPORT1=8432
SLAVEDBNAME2=ts2
SLAVEPORT2=8432
MASTERHOST=10.10.10.67
SLAVEHOST1=10.10.10.36
SLAVEHOST2=10.10.10.83
REPLICATIONUSER=master
PGBENCHUSER=master
export CLUSTERNAME MASTERDBNAME MASTERPORT SLAVEDBNAME1 SLAVEPORT1 MASTERHOST SLAVEHOST1 REPLICATIONUSER PGBENCHUSER SLAVEDBNAME2 SLAVEPORT2 SLAVEHOST2
slonik <<_EOF_
cluster name = $CLUSTERNAME;
#--
# admin conninfo's are used by slonik to connect to the nodes one for each
# node on each side of the cluster, the syntax is that of PQconnectdb in
# the C-API
# --
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME1 host=$SLAVEHOST1 user=$PGBENCHUSER';
node 3 admin conninfo = 'dbname=$SLAVEDBNAME2 host=$SLAVEHOST2 user=$PGBENCHUSER';
# ----
# Node 2 subscribes set 1
# ----
subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
subscribe set ( id = 1, provider = 1, receiver = 3, forward = no);
_EOF_
This also executed with no errors shown.
I then read the following tables on nodes 1, 2, and 3:
select * from _ts2.sl_listen ;
li_origin | li_provider | li_receiver
-----------+-------------+-------------
1 | 1 | 2
1 | 1 | 3
2 | 2 | 1
2 | 2 | 3
3 | 3 | 1
3 | 3 | 2
(6 rows)
------==========================--------------
select * from _ts2.sl_node ;
no_id | no_active | no_comment | no_spool
-------+-----------+--------------+----------
1 | t | Master Node | f
2 | t | Slave node | f
3 | t | Slave node 2 | f
(3 rows)
------==========================--------------
select * from _ts2.sl_path ;
pa_server | pa_client | pa_conninfo | pa_connretry
-----------+-----------+-----------------------------------------+--------------
2 | 1 | dbname=ts2 host=10.10.10.36 user=master | 10
1 | 2 | dbname=ts2 host=10.10.10.67 user=master | 10
1 | 3 | dbname=ts2 host=10.10.10.67 user=master | 10
2 | 3 | dbname=ts2 host=10.10.10.36 user=master | 10
3 | 1 | dbname=ts2 host=10.10.10.83 user=master | 10
3 | 2 | dbname=ts2 host=10.10.10.83 user=master | 10
(6 rows)
------==========================--------------
select * from _ts2.sl_set ;
set_id | set_origin | set_locked | set_comment
--------+------------+------------+--------------------
1 | 1 | | All pgbench tables
(1 row)
------==========================--------------
select * from _ts2.sl_subscribe ;
sub_set | sub_provider | sub_receiver | sub_forward | sub_active
---------+--------------+--------------+-------------+------------
1 | 1 | 2 | f | t
1 | 1 | 3 | f | f
(2 rows)
------==========================--------------
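(For anyone debugging a similar case: the sub_active = f row for receiver 3 above suggests the subscription was never fully enabled on node 3, so a first check would be to scan the node-3 slon daemon's output for errors. This is only a sketch; "slon_node3.log" is a hypothetical file name, use wherever your slon output actually goes.)

```shell
# Sketch: look for recent errors in the node-3 slon output.
# slon_node3.log is an assumed name for wherever slon's stderr was redirected.
grep -i "error" slon_node3.log 2>/dev/null | tail -n 5
```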
Note the difference here.
On nodes 1 and 2:
select * from _ts2.sl_table ;
tab_id | tab_reloid | tab_relname | tab_nspname | tab_set | tab_idxname | tab_altered | tab_comment
--------+------------+-------------+-------------+---------+---------------------------------+-------------+----------------
1 | 658992 | accounts | public | 1 | accounts_pkey | t | accounts table
2 | 658988 | branches | public | 1 | branches_pkey | t | branches table
3 | 658990 | tellers | public | 1 | tellers_pkey | t | tellers table
4 | 658994 | history | public | 1 | history__Slony-I_ts2_rowID_key | t | history table
5 | 762620 | tb_no_pk | public | 1 | tb_no_pk__Slony-I_ts2_rowID_key | t | tb_no_pk table
6 | 762616 | tb_ts1 | public | 1 | pk_tb_ts1 | t | tb_ts1 table
(6 rows)
but on node 3:
select * from _ts2.sl_table ;
tab_id | tab_reloid | tab_relname | tab_nspname | tab_set | tab_idxname | tab_altered | tab_comment
--------+------------+-------------+-------------+---------+-------------+-------------+-------------
(0 rows)
I think this set (set 1) was created at the "initial origin of the set" (node 1); that origin should be node 1, right?
I created nodes 1, 2, and 3 with paths between all three of them, following "Replicating Your First Database".
So why can node 2 catch up while node 3 cannot?
Why does node 3 have no tables in the set, and what do I need to do so that node 3 runs like node 2?
When I modify data in a table on node 1 (the origin), node 2 catches up, but node 3 receives no data:
apart from the system tables, none of the user tables get any data.
If you know the answer, please help me. Thanks very much!