Sun Oct 10 20:31:16 PDT 2004
Hello list,
I am new to this list. I have been looking for a replication solution for
PostgreSQL for quite some time, and in fact I am working on one in Perl.
But after seeing Slony, I am hopeful that it will solve the problems of
many people regarding Postgres databases.
First, here is my setup:
Hardware: AMD Athlon 2000+, 256 MB RAM
Software: OS: Red Hat Linux 9.0 (kernel 2.4.20-8)
Postgres 8.0 beta3 compiled from source code in /usr/local/pgsql (default)
Slony-I 1.0.2, compiled from source code of course
Postgres 7.3.2-3 installed but administratively shut down at the moment
(not running)
Status: Postgres 8.0 (from now on I will refer to it simply as Postgres) is
running. Initial DB testing is done. Slony installed successfully. A simple
Master-Slave setup was tested successfully.
Scenario: I want to replicate 1 master (pgbench) to 2 databases
(pgbenchslave and pgbenchslave2). I want the master to replicate to both
(all) slaves because my setup looks like this:
( Network-A )------------( Network-B )-----------( Network-C )
      |                        |                        |
      |                        |                        |
  Slave-DB-1               Master-DB               Slave-DB-2
(pgbenchslave)             (pgbench)             (pgbenchslave2)
In words:
pgbenchslave is on Network-A. Network-A is connected to Network-B, which
has the master DB (pgbench). Network-B is connected to Network-C, which has
the 2nd slave (pgbenchslave2). The master will replicate to the DB on its
left and to the DB on its right.
I followed the document slony-i-basic-mstr-slv.txt for the initial
1-Master->1-Slave replication and it worked successfully.
I created the (three major) scripts mentioned in that document as:
1) testsetup.sh
2) replicate.sh
3) dumpcompare.sh (I will not discuss this one here, because I never get
that far in the multi-slave testing.)
In order to implement 1-Master->2-Slaves replication, I modified the scripts
as follows:
1) testsetup.sh
===============
#!/bin/bash
CLUSTERNAME=slony_example
MASTERDBNAME=pgbench
SLAVEDBNAME=pgbenchslave
MASTERHOST=localhost
SLAVEHOST=localhost
REPLICATIONUSER=postgres
PGBENCHUSER=pgbench
/usr/local/pgsql/bin/slonik <<_EOF_
#--
# define the namespace the replication system uses in our example it is
# slony_example
#--
cluster name = $CLUSTERNAME;
#--
# admin conninfo's are used by slonik to connect to the nodes one for each
# node on each side of the cluster, the syntax is that of PQconnectdb in
# the C-API
# --
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
# line below is for 2nd slave (kami). Note I use hardcoded name for 2nd slave DB
node 3 admin conninfo = 'dbname=pgbenchslave2 host=$SLAVEHOST user=$REPLICATIONUSER';
#--
# init the first node. Its id MUST be 1. This creates the schema
# _$CLUSTERNAME containing all replication system specific database
# objects.
#--
init cluster ( id=1, comment = 'Master Node');
#--
# Because the history table does not have a primary key or other unique
# constraint that could be used to identify a row, we need to add one.
# The following command adds a bigint column named
# _Slony-I_$CLUSTERNAME_rowID to the table. It will have a default value
# of nextval('_$CLUSTERNAME.s1_rowid_seq'), and have UNIQUE and NOT NULL
# constraints applied. All existing rows will be initialized with a
# number
#--
table add key (node id = 1, fully qualified name = 'public.history');
#--
# Slony-I organizes tables into sets. The smallest unit a node can
# subscribe is a set. The following commands create one set containing
# all 4 pgbench tables. The master or origin of the set is node 1.
# you need to have a set add table() for each table you wish to replicate
#--
create set (id=1, origin=1, comment='All pgbench tables');
set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');
set add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');
set add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');
set add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key = serial);
#--
# Create the second node (the slave) tell the 2 nodes how to connect to
# each other and how they should listen for events.
#--
store node (id=2, comment = 'Slave node');
store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
store listen (origin=1, provider = 1, receiver =2);
store listen (origin=2, provider = 2, receiver =1);
# 2nd slave database setup below (kami)
store node (id=3, comment = 'Slave node 2');
store path (server = 1, client = 3, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
store path (server = 3, client = 1, conninfo='dbname=pgbenchslave2 host=$SLAVEHOST user=$REPLICATIONUSER');
store listen (origin=1, provider = 1, receiver =3);
store listen (origin=3, provider = 3, receiver =1);
_EOF_
#================================================================
Notice the last section of the testsetup.sh file (above), where node 3 (the
2nd slave) is added.
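For reference, this is roughly how I peek at what the setup script stored on
the master (just a sketch of mine, assuming the replication schema is named
"_slony_example" after the cluster name, and that the sl_node and sl_path
catalog tables can simply be queried with psql):

pgbench=# select no_id, no_comment from "_slony_example".sl_node;
pgbench=# select pa_server, pa_client, pa_conninfo from "_slony_example".sl_path;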
2) replicate.sh
===============
#!/bin/bash
CLUSTERNAME=slony_example
MASTERDBNAME=pgbench
SLAVEDBNAME=pgbenchslave
MASTERHOST=localhost
SLAVEHOST=localhost
REPLICATIONUSER=postgres
PGBENCHUSER=pgbench
/usr/local/pgsql/bin/slonik <<_EOF_
# ----
# This defines which namespace the replication system uses
# ----
cluster name = $CLUSTERNAME;
# ----
# Admin conninfo's are used by the slonik program to connect
# to the node databases. So these are the PQconnectdb arguments
# that connect from the administrators workstation (where
# slonik is executed).
# ----
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
# below is slave db 2 (kami)
node 3 admin conninfo = 'dbname=pgbenchslave2 host=$SLAVEHOST user=$REPLICATIONUSER';
# ----
# Node 2 subscribes set 1
# ----
subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
# ---- below by kami
# Node 3 subscribes set 1
# ----
subscribe set ( id = 1, provider = 1, receiver = 3, forward = no);
_EOF_
# ==========================================
Notice the last section at the end of replicate.sh (above)
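Similarly, after replicate.sh runs I would expect the subscriptions to show
up in sl_subscribe on the master (again only a sketch, same assumption about
the "_slony_example" schema; the column names are the ones that appear in the
error messages below):

pgbench=# select sub_set, sub_provider, sub_receiver, sub_forward, sub_active
pgbench-#     from "_slony_example".sl_subscribe;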
Now the problem:
I run three slon processes in three terminals and one pgbench process in the
4th terminal.
On Terminal 1, on which the slon master process is running, I am getting:
=======================================================================
[root at homeserver bin]# ./slon $CLUSTERNAME "dbname=$MASTERDBNAME
user=$REPLICATIONUSER host=$MASTERHOST"
CONFIG main: local node id = 1
CONFIG main: loading current cluster configuration
CONFIG storeNode: no_id=2 no_comment='Slave node'
CONFIG storeNode: no_id=3 no_comment='Slave node 2'
CONFIG storePath: pa_server=2 pa_client=1 pa_conninfo="dbname=pgbenchslave
host=localhost user=postgres" pa_connretry=10
CONFIG storePath: pa_server=3 pa_client=1 pa_conninfo="dbname=pgbenchslave2
host=localhost user=postgres" pa_connretry=10
CONFIG storeListen: li_origin=2 li_receiver=1 li_provider=2
CONFIG storeListen: li_origin=3 li_receiver=1 li_provider=3
CONFIG storeSet: set_id=1 set_origin=1 set_comment='All pgbench tables'
CONFIG main: configuration complete - starting threads
CONFIG enableNode: no_id=2
CONFIG enableNode: no_id=3
On Terminal 2, on which the slon slave-1 process is running, I am getting:
=======================================================================
[root at homeserver bin]# ./slon $CLUSTERNAME "dbname=$SLAVEDBNAME
user=$REPLICATIONUSER host=$SLAVEHOST"
CONFIG main: local node id = 2
CONFIG main: loading current cluster configuration
CONFIG storeNode: no_id=1 no_comment='Master Node'
CONFIG storePath: pa_server=1 pa_client=2 pa_conninfo="dbname=pgbench
host=localhost user=postgres" pa_connretry=10
CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
CONFIG storeSet: set_id=1 set_origin=1 set_comment='All pgbench tables'
WARN remoteWorker_wakeup: node 1 - no worker thread
CONFIG main: configuration complete - starting threads
CONFIG enableNode: no_id=1
CONFIG storeNode: no_id=3 no_comment='Slave node 2'
CONFIG enableNode: no_id=3
CONFIG storeSubscribe: sub_set=1 sub_provider=1 sub_forward='f'
NOTICE: ALTER TABLE / ADD UNIQUE will create implicit index
"history__Slony-I_slony_example_rowID_key" for table "history"
CONTEXT: SQL query "alter table only public.history add unique
("_Slony-I_slony_example_rowID");"
PL/pgSQL function "determineattkindserial" line 52 at execute statement
CONFIG enableSubscription: sub_set=1
ERROR remoteWorkerThread_1: "begin transaction; set transaction isolation
level serializable; lock table "_slony_example".sl_config_lock; select
"_slony_example".enableSubscription(1, 1, 3); notify "_slony_example_Event";
notify "_slony_example_Confirm"; insert into "_slony_example".sl_event
(ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type ,
ev_data1, ev_data2, ev_data3, ev_data4 ) values ('1', '25', '2004-10-10
22:32:54.926018', '63373', '63386',
'''63383'',''63373'',''63382'',''63381'',''63380''', 'ENABLE_SUBSCRIPTION',
'1', '1', '3', 'f'); insert into "_slony_example".sl_confirm (con_origin,
con_received, con_seqno, con_timestamp) values (1, 2, '25',
CURRENT_TIMESTAMP); commit transaction;" PGRES_FATAL_ERROR ERROR: insert or
update on table "sl_subscribe" violates foreign key constraint
"sl_subscribe-sl_path-ref"
DETAIL: Key (sub_provider,sub_receiver)=(1,3) is not present in table
"sl_path".
CONTEXT: SQL query "insert into "_slony_example".sl_subscribe (sub_set,
sub_provider, sub_receiver, sub_forward, sub_active) values ( $1 , $2 , $3 ,
false, true)"
PL/pgSQL function "enablesubscription_int" line 22 at SQL statement
PL/pgSQL function "enablesubscription" line 6 at return
INFO remoteListenThread_1: disconnecting from 'dbname=pgbench host=localhost
user=postgres'
[root at homeserver bin]#
On Terminal 3, on which the slon slave-2 process is running, I am getting:
=======================================================================
[root at homeserver bin]# ./slon $CLUSTERNAME "dbname=pgbenchslave2
user=$REPLICATIONUSER host=$SLAVEHOST"
CONFIG main: local node id = 3
CONFIG main: loading current cluster configuration
CONFIG storeNode: no_id=1 no_comment='Master Node'
CONFIG storeNode: no_id=2 no_comment='Slave node'
CONFIG storePath: pa_server=1 pa_client=3 pa_conninfo="dbname=pgbench
host=localhost user=postgres" pa_connretry=10
CONFIG storeListen: li_origin=1 li_receiver=3 li_provider=1
CONFIG storeSet: set_id=1 set_origin=1 set_comment='All pgbench tables'
WARN remoteWorker_wakeup: node 1 - no worker thread
CONFIG main: configuration complete - starting threads
CONFIG enableNode: no_id=1
CONFIG enableNode: no_id=2
ERROR remoteWorkerThread_1: "begin transaction; set transaction isolation
level serializable; lock table "_slony_example".sl_config_lock; select
"_slony_example".enableSubscription(1, 1, 2); notify "_slony_example_Event";
notify "_slony_example_Confirm"; insert into "_slony_example".sl_event
(ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type ,
ev_data1, ev_data2, ev_data3, ev_data4 ) values ('1', '24', '2004-10-10
22:32:53.750662', '63367', '63369', '''63368''', 'ENABLE_SUBSCRIPTION', '1',
'1', '2', 'f'); insert into "_slony_example".sl_confirm (con_origin,
con_received, con_seqno, con_timestamp) values (1, 3, '24',
CURRENT_TIMESTAMP); commit transaction;" PGRES_FATAL_ERROR ERROR: insert or
update on table "sl_subscribe" violates foreign key constraint
"sl_subscribe-sl_path-ref"
DETAIL: Key (sub_provider,sub_receiver)=(1,2) is not present in table
"sl_path".
CONTEXT: SQL query "insert into "_slony_example".sl_subscribe (sub_set,
sub_provider, sub_receiver, sub_forward, sub_active) values ( $1 , $2 , $3 ,
false, true)"
PL/pgSQL function "enablesubscription_int" line 22 at SQL statement
PL/pgSQL function "enablesubscription" line 6 at return
INFO remoteListenThread_1: disconnecting from 'dbname=pgbench host=localhost
user=postgres'
[root at homeserver bin]#
On Terminal 4, on which the pgbench and replicate processes are run, I am
getting:
=======================================================================
[root at homeserver bin]# ./pgbench -s 1 -c 5 -t 1000 -U $PGBENCHUSER -h
$MASTERHOST $MASTERDBNAME
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 5
number of transactions per client: 1000
number of transactions actually processed: 5000/5000
tps = 54.053985 (including connections establishing)
tps = 54.075800 (excluding connections establishing)
[root at homeserver bin]# ./replicate.sh
[root at homeserver bin]#
As you can see, there are problems on Terminal 2 and Terminal 3: the slon
processes fail and return to the shell prompt, which is not the default
behaviour of Slony in 1-Master->1-Slave replication.
When I check my databases, I get these results:
==============================================
pgbench=# select count(*) from accounts;
count
--------
100000
(1 row)
pgbenchslave=# select count(*) from accounts;
count
--------
100000
(1 row)
pgbenchslave2=# select count(*) from accounts;
count
-------
0
(1 row)
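Since the DETAIL lines above say the keys (1,3) and (1,2) are missing from
sl_path on the slaves, I also look at the path entries stored on each slave
database (same assumption about the "_slony_example" schema name):

pgbenchslave=# select pa_server, pa_client from "_slony_example".sl_path;
pgbenchslave2=# select pa_server, pa_client from "_slony_example".sl_path;

If I read the errors right, each slave only knows the path between node 1
and itself, so the ENABLE_SUBSCRIPTION event for the other slave has nothing
in sl_path to reference.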
My questions are: a) Where am I going wrong?
                  b) Can someone please describe/guide the setup of 2 slaves
                     in such a way that the master replicates to both slave
                     DBs, not in the fashion that the master replicates to
                     slave 1 and slave 1 replicates to slave 2?
I am working on a step-by-step tutorial of Slony with Postgres 8. I hope it
will be helpful, but I would like to get this problem solved first.
Thanks so much for your time reading this, and thanks for any help.
Best,
Kamran