Tue Feb 24 07:07:31 PST 2009
- Previous message: [Slony1-general] slony rep error on start up: Slony-I: setAddTable_int():
- Next message: [Slony1-general] duplicate key sl_nodelock-pkey
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
I have now resolved the problem. The primary key, and hence the index, did
not exist on the subscriber node. I am not sure why this happened, as I
built the schema with:

    pg_dump -s -h host1 tam1 -U user1 -W | psql -U user1 tam1

I dropped and recreated the database and it worked fine.

Thanks.

On Tue, Feb 24, 2009 at 9:12 AM, Tam McLaughlin <tam.mclaughlin at gmail.com> wrote:

> Thanks for your reply:
>
> Index "chart_data_pk" does seem to be there!
>
> tam1=# \d "uk7501".chart_data;
>                Table "uk7501.chart_data"
>     Column    |            Type             | Modifiers
>  -------------+-----------------------------+-----------
>  chart        | character varying(20)       | not null
>  entry        | integer                     | not null
>  value        | real                        |
>  value_2      | real                        |
>  violations   | character varying(8)        |
>  violations_2 | character varying(8)        |
>  flags        | character varying(8)        |
>  lots         | text                        |
>  entity       | character varying(20)       |
>  operator     | character varying(20)       |
>  datetime     | timestamp without time zone |
>  txid         | bigint                      |
>  comments     | text                        |
> Indexes:
>     "chart_data_pk" PRIMARY KEY, btree (chart, entry)
> Foreign-key constraints:
>     "chart_data_chart_fkey" FOREIGN KEY (chart) REFERENCES uk7501.charts(chart)
> Triggers:
>     _mesrep_logtrigger AFTER INSERT OR DELETE OR UPDATE ON uk7501.chart_data
>     FOR EACH ROW EXECUTE PROCEDURE _mesrep.logtrigger('_mesrep', '1', 'kk')
> Disabled triggers:
>     _mesrep_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON uk7501.chart_data
>     FOR EACH ROW EXECUTE PROCEDURE _mesrep.denyaccess('_mesrep')
>
>
> On Mon, Feb 23, 2009 at 5:23 PM, Jeff Frost <jeff at frostconsultingllc.com> wrote:
>
>> On Mon, 23 Feb 2009, Tam McLaughlin wrote:
>>
>>> Hello,
>>>
>>> I am having trouble with slony1-2 as follows:
>>>
>>> I was testing slony a few months back and was able to get replication
>>> working.
>>> However, after upgrading slony by dropping all the replication sets and
>>> recreating the databases and configs, I keep getting a few errors. The
>>> most recent, and the first error in the logs, is as follows:
>>>
>>> GMT ERROR remoteWorkerThread_1: "select "_mesrep".setAddTable_int(1, 1,
>>> '"uk7501"."chart_data"', 'chart_data_pk', 'Table uk7501.chart_data with
>>> primary key'); " PGRES_FATAL_ERROR ERROR: Slony-I: setAddTable_int():
>>> table "uk7501"."chart_data" has no index chart_data_pk
>>
>> That sure looks like you are missing the chart_data_pk (primary key?)
>> index on chart_data. What does \d chart_data yield in psql?
>>
>>> Slony is: v2.0.0
>>> Postgres is: 8.3.3
>>> OS: CentOS 5.2
>>>
>>> I have rebuilt slony from source and relocated the binaries to a
>>> different directory. The database I have used is a copy of a live db; I
>>> have also dropped and recreated the db a few times, along with the
>>> replication. The only thing that I have done manually is remove xxid.so
>>> so that it gets rebuilt, but I saw in the docs that it is no longer used.
>>>
>>> A previous error I kept getting was this:
>>>
>>> GMT,"mes","MES",19171,"10.191.2.123:40188",48fdd267.4ae3,58,"UPDATE",2008-10-21
>>> 14:00:23 BST,5/526583,2916342,ERROR,42703,"column ""log_xid"" of relation
>>> ""sl_log_1"" does not exist",,,"INSERT INTO _mescluster.sl_log_1
>>> (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata)
>>> VALUES (1, $1, $2, nextval('_mescluster.sl_action_seq'), $3,
>>> $4);",47,,"update UK4628.entity_
>>>
>>> I am not sure what info you need in order to offer help, so I have
>>> included extracts from my slony config and log files, and the chart_data
>>> schema, below. Any help would be appreciated.
>>>
>>> Thanks
>>> Tam
>>>
>>> Slony Config
>>> ------------
>>> if ($ENV{"SLONYNODES"}) {
>>>     require $ENV{"SLONYNODES"};
>>> } else {
>>>     $CLUSTER_NAME = 'mesrep';
>>>     $LOGDIR = '/var/log/slony';
>>>     $MASTERNODE = 1;
>>>     $DEBUGLEVEL = 4;
>>>     add_node(node => 1,
>>>              host => 'uklnxmes-cl',
>>>              dbname => 'tam1',
>>>              port => 5432,
>>>              parent => 1,
>>>              user => 'xxxxxx',
>>>              password => 'xxxxxx');
>>>     add_node(node => 2,
>>>              host => 'uklnxdisp1',
>>>              dbname => 'tam1',
>>>              port => 5432,
>>>              parent => 1,
>>>              user => 'xxxxxx',
>>>              password => 'xxxxxx');
>>> }
>>> $SLONY_SETS = {
>>>     "set1" => { "set_id" => 1,
>>>                 "table_id" => 1,
>>>                 "sequence_id" => 1,
>>>                 "pkeyedtables" => [ "uk7501.chart_data",
>>>                                     "uk7501.chart_error_actions",
>>>                                     "uk7501.charts",
>>>                                     "uk7501.entities",
>>>                                     "uk7501.entities_history_data",
>>>                                     "uk7501.entity_attr_data",
>>>                                     <snip>
>>>                                     "uk7501.tables",
>>>                                     "uk4628.calendar",
>>>                                     "uk4628.chart_data",
>>>                                     "uk4628.chart_error_actions",
>>>                                     "uk4628.charts",
>>>                                     <snip>
>>>                                     "uk4628.table_data",
>>>                                     "uk4628.tables"
>>>                                   ],
>>>                 "keyedtables" => {},
>>>                 "serialtables" => [],
>>>                 "sequences" => [ "uk7501.dbkeys",
>>>                                  "uk7501.txid",
>>>                                  "uk4628.dbkeys",
>>>                                  "uk4628.txid"
>>>                                ]
>>>               }
>>> };
>>> if ($ENV{"SLONYSET"}) {
>>>     require $ENV{"SLONYSET"};
>>> }
>>> # Please do not add or change anything below this point.
>>> 1;
>>>
>>>
>>> log file for node2 (error at bottom)
>>> ------------------------------------
>>> 2009-02-23 15:34:58 GMT CONFIG main: slon version 2.0.0 starting up
>>> 2009-02-23 15:34:58 GMT INFO slon: watchdog process started
>>> 2009-02-23 15:34:58 GMT CONFIG slon: watchdog ready - pid = 28747
>>> <snip>
>>> 2009-02-23 15:34:58 GMT CONFIG main: Boolean option log_timestamp = 1
>>> 2009-02-23 15:34:58 GMT CONFIG main: Boolean option cleanup_deletelogs = 0
>>> 2009-02-23 15:34:58 GMT CONFIG main: Real option real_placeholder = 0.000000
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option cluster_name = mesrep
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option conn_info = host=uklnxdisp1 dbname=tam1 user=xxxxxx port=5432 password=xxxxxx
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option pid_file = (null)
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option log_timestamp_format = %Y-%m-%d %H:%M:%S %Z
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option archive_dir = (null)
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option sql_on_connection = (null)
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option lag_interval = (null)
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option command_on_logarchive = (null)
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option syslog_facility = LOCAL0
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option syslog_ident = slon
>>> 2009-02-23 15:34:58 GMT CONFIG main: String option cleanup_interval = 10 minutes
>>> 2009-02-23 15:34:58 GMT CONFIG slon: worker process created - pid = 28749
>>> 2009-02-23 15:34:58 GMT CONFIG main: local node id = 2
>>> 2009-02-23 15:34:58 GMT INFO main: main process started
>>> 2009-02-23 15:34:58 GMT CONFIG main: launching sched_start_mainloop
>>> 2009-02-23 15:34:58 GMT CONFIG main: loading current cluster configuration
>>> 2009-02-23 15:34:58 GMT CONFIG storeNode: no_id=1 no_comment='Node 1 - tam1 at uklnxmes-cl'
>>> 2009-02-23 15:34:58 GMT DEBUG2 setNodeLastEvent: no_id=1 event_seq=1
>>> 2009-02-23 15:34:58 GMT CONFIG storePath: pa_server=1 pa_client=2 pa_conninfo="host=uklnxmes-cl dbname=tam1 user=xxxxxx port=5432 password=xxxxxx" pa_connretry=10
>>> 2009-02-23 15:34:58 GMT CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
>>> 2009-02-23 15:34:58 GMT CONFIG main: last local event sequence = 1
>>> 2009-02-23 15:34:58 GMT CONFIG main: configuration complete - starting threads
>>> 2009-02-23 15:34:58 GMT INFO localListenThread: thread starts
>>> 2009-02-23 15:34:58 GMT CONFIG version for "host=uklnxdisp1 dbname=tam1 user=xxxxxx port=5432 password=xxxxxx" is 80303
>>> 2009-02-23 15:34:58 GMT DEBUG1 local_listen "host=uklnxdisp1 dbname=tam1 user=xx
>>> 2009-02-23 15:34:58 GMT INFO remoteWorkerThread_1: thread starts
>>> 2009-02-23 15:34:58 GMT CONFIG cleanupThread: thread starts
>>> 2009-02-23 15:34:58 GMT CONFIG cleanupThread: bias = 35383
>>> 2009-02-23 15:34:58 GMT INFO remoteListenThread_1: thread starts
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteListenThread_1: start listening for event origin 1
>>> 2009-02-23 15:34:58 GMT INFO main: running scheduler mainloop
>>> 2009-02-23 15:34:58 GMT INFO syncThread: thread starts
>>> 2009-02-23 15:34:58 GMT CONFIG version for "host=uklnxmes-cl dbname=tam1 user=xxxxxx port=5432 password=xxxxxx" is 80303
>>> 2009-02-23 15:34:58 GMT DEBUG1 node_1_listen "host=uklnxmes-cl dbname=tam1 user=xxxxxx port=5432 password=xxxxxx": backend pid = 28764
>>> 2009-02-23 15:34:58 GMT CONFIG version for "host=uklnxdisp1 dbname=tam1 user=xxxxxx port=5432 password=xxxxxx" is 80303
>>> 2009-02-23 15:34:58 GMT DEBUG1 local_sync "host=uklnxdisp1 dbname=tam1 user=xxxxxx port=5432 password=xxxxxx": backend pid = 8067
>>> 2009-02-23 15:34:58 GMT CONFIG version for "host=uklnxdisp1 dbname=tam1 user=xxxxxx port=5432 password=xxxxxx" is 80303
>>> 2009-02-23 15:34:58 GMT CONFIG version for "host=uklnxdisp1 dbname=tam1 user=xxxxxx port=5432 password=xxxxxx" is 80303
>>> 2009-02-23 15:34:58 GMT DEBUG1 local_cleanup "host=uklnxdisp1 dbname=tam1 user=xxxxxx port=5432 password=xxxxxx": backend pid = 8068
>>> 2009-02-23 15:34:58 GMT DEBUG1 remoteListenThread_1: connected to 'host=uklnxmes-cl dbname=tam1 user=xxxxxx port=5432 password=xxxxxx'
>>> 2009-02-23 15:34:58 GMT DEBUG1 remoteWorkerThread_1 "host=uklnxdisp1 dbname=tam1 user=xxxxxx port=5432 password=xxxxxx": backend pid = 8066
>>> 2009-02-23 15:34:58 GMT CONFIG remoteWorkerThread_1: update provider configuration
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteListenThread_1: queue event 1,2 STORE_NODE
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteListenThread_1: queue event 1,3 ENABLE_NODE
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteListenThread_1: queue event 1,4 STORE_PATH
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteListenThread_1: queue event 1,5 SYNC
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 2 type:STORE_NODE
>>> TODO: ********** remoteWorkerThread: node 1 - EVENT 1,2 STORE_NODE - unknown event type
>>> 2009-02-23 15:34:58 GMT CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
>>> 2009-02-23 15:34:58 GMT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
>>> 2009-02-23 15:34:58 GMT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteWorkerThread_1: forward confirm 2,1 received by 1
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 3 type:ENABLE_NODE
>>> xxxx port=5432 password=xxxxxx": backend pid = 8065
>>> TODO: ********** remoteWorkerThread: node 1 - EVENT 1,3 ENABLE_NODE - unknown event type
>>> 2009-02-23 15:34:58 GMT CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
>>> 2009-02-23 15:34:58 GMT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
>>> 2009-02-23 15:34:58 GMT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 4 type:STORE_PATH
>>> 2009-02-23 15:34:58 GMT CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
>>> 2009-02-23 15:34:58 GMT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
>>> 2009-02-23 15:34:58 GMT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 5 type:SYNC
>>> 2009-02-23 15:34:58 GMT DEBUG2 remoteWorkerThread_1: SYNC 5 processing
>>> 2009-02-23 15:34:58 GMT DEBUG1 remoteWorkerThread_1: no sets need syncing for this event
>>> 2009-02-23 15:34:58 GMT CONFIG remoteWorkerThread_1: update provider configuration
>>> 2009-02-23 15:34:59 GMT DEBUG2 syncThread: new sl_action_seq 1 - SYNC 2
>>> 2009-02-23 15:35:01 GMT DEBUG2 localListenThread: Received event 2,2 SYNC
>>> 2009-02-23 15:35:02 GMT DEBUG2 remoteWorkerThread_1: forward confirm 2,2 received by 1
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,6 STORE_SET
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,7 SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 6 type:STORE_SET
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,8 SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,9 SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,10 SET_ADD_TABLE
>>> <snip>
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,18 SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,19 SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT CONFIG storeSet: set_id=1 set_origin=1 set_comment='Set 1 for mesrep'
>>> 2009-02-23 15:35:29 GMT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,20 SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,21 SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,22 SET_ADD_TABLE
>>> <snip>
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,117 SET_ADD_SEQUENCE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,118 SET_ADD_SEQUENCE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,119 SET_ADD_SEQUENCE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteListenThread_1: queue event 1,120 SYNC
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 7 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 8 type:SET_ADD_TABLE
>>>
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 12 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 13 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 14 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 15 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 16 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 17 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 18 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT CONFIG remoteWorkerThread_1: update provider configuration
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 19 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 20 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 21 type:SET_ADD_TABLE
>>> <snip>
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 113 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 114 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 115 type:SET_ADD_TABLE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 116 type:SET_ADD_SEQUENCE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 117 type:SET_ADD_SEQUENCE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 118 type:SET_ADD_SEQUENCE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 119 type:SET_ADD_SEQUENCE
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 120 type:SYNC
>>> 2009-02-23 15:35:29 GMT DEBUG1 calc sync size - last time: 1 last length: 31050 ideal: 1 proposed size: 1
>>> 2009-02-23 15:35:29 GMT DEBUG2 remoteWorkerThread_1: SYNC 120 processing
>>> 2009-02-23 15:35:29 GMT DEBUG1 remoteWorkerThread_1: no sets need syncing for this event
>>> 2009-02-23 15:35:40 GMT DEBUG2 remoteListenThread_1: queue event 1,121 SYNC
>>> 2009-02-23 15:35:40 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 121 type:SYNC
>>> 2009-02-23 15:35:40 GMT DEBUG1 calc sync size - last time: 1 last length: 10941 ideal: 5 proposed size: 3
>>> 2009-02-23 15:35:40 GMT DEBUG2 remoteWorkerThread_1: SYNC 121 processing
>>> 2009-02-23 15:35:40 GMT DEBUG1 remoteWorkerThread_1: no sets need syncing for this event
>>> 2009-02-23 15:35:51 GMT DEBUG2 remoteListenThread_1: queue event 1,122 SYNC
>>> 2009-02-23 15:35:51 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 122 type:SYNC
>>> 2009-02-23 15:35:51 GMT DEBUG1 calc sync size - last time: 1 last length: 11002 ideal: 5 proposed size: 3
>>> 2009-02-23 15:35:51 GMT DEBUG2 remoteWorkerThread_1: SYNC 122 processing
>>> 2009-02-23 15:35:51 GMT DEBUG1 remoteWorkerThread_1: no sets need syncing for this event
>>> 2009-02-23 15:36:02 GMT DEBUG2 remoteListenThread_1: queue event 1,123 SYNC
>>> 2009-02-23 15:36:02 GMT DEBUG2 remoteListenThread_1: queue event 1,124 SUBSCRIBE_SET
>>> 2009-02-23 15:36:02 GMT DEBUG2 remoteListenThread_1: queue event 1,125 ENABLE_SUBSCRIPTION
>>> 2009-02-23 15:36:02 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 123 type:SYNC
>>> 2009-02-23 15:36:02 GMT DEBUG1 calc sync size - last time: 1 last length: 11002 ideal: 5 proposed size: 3
>>> 2009-02-23 15:36:02 GMT DEBUG2 remoteWorkerThread_1: SYNC 123 processing
>>> 2009-02-23 15:36:02 GMT DEBUG1 remoteWorkerThread_1: no sets need syncing for this event
>>> 2009-02-23 15:36:02 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 124 type:SUBSCRIBE_SET
>>> 2009-02-23 15:36:02 GMT CONFIG storeSubscribe: sub_set=1 sub_provider=1 sub_forward='t'
>>> 2009-02-23 15:36:02 GMT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
>>> 2009-02-23 15:36:02 GMT CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
>>> 2009-02-23 15:36:02 GMT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
>>> 2009-02-23 15:36:02 GMT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
>>> 2009-02-23 15:36:02 GMT DEBUG2 remoteWorkerThread_1: Received event #1 from 125 type:ENABLE_SUBSCRIPTION
>>> 2009-02-23 15:36:02 GMT INFO copy_set 1
>>> 2009-02-23 15:36:02 GMT CONFIG version for "host=uklnxmes-cl dbname=tam1 user=xxxxxx port=5432 password=xxxxxx" is 80303
>>> 2009-02-23 15:36:02 GMT DEBUG1 copy_set_1 "host=uklnxmes-cl dbname=tam1 user=xxxxxx port=5432 password=xxxxxx": backend pid = 29445
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: connected to provider DB
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: prepare to copy table "uk7501"."chart_data"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: prepare to copy table "uk7501"."chart_error_actions"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: prepare to copy table "uk7501"."charts"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: prepare to copy table "uk7501"."entities"
>>> <snip>
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: prepare to copy table "uk4628"."script_version"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: prepare to copy table "uk4628"."scripts"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: prepare to copy table "uk4628"."scriptsteps"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: prepare to copy table "uk4628"."table_data"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: prepare to copy table "uk4628"."tables"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: all tables for set 1 found on subscriber
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: copy sequence "uk7501"."dbkeys"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: copy sequence "uk7501"."txid"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: copy sequence "uk4628"."dbkeys"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: copy sequence "uk4628"."txid"
>>> 2009-02-23 15:36:02 GMT CONFIG remoteWorkerThread_1: copy table "uk7501"."chart_data"
>>> 2009-02-23 15:36:02 GMT ERROR remoteWorkerThread_1: "select
>>> "_mesrep".setAddTable_int(1, 1, '"uk7501"."chart_data"', 'chart_data_pk',
>>> 'Table uk7501.chart_data with primary key'); " PGRES_FATAL_ERROR ERROR:
>>> Slony-I: setAddTable_int(): table "uk7501"."chart_data" has no index
>>> chart_data_pk
>>> 2009-02-23 15:36:02 GMT WARN remoteWorkerThread_1: data copy for set 1 failed - sleep 15 seconds
>>> 2009-02-23 15:36:13 GMT DEBUG2 remoteListenThread_1: queue event 1,126 SYNC
>>> 2009-02-23 15:36:17 GMT INFO copy_set 1
>>>
>>>
>>> chart_data schema
>>> -----------------
>>> SET default_with_oids = false;
>>> CREATE TABLE chart_data (
>>>     chart character varying(20) NOT NULL,
>>>     entry integer NOT NULL,
>>>     value real,
>>>     value_2 real,
>>>     violations character varying(8),
>>>     violations_2 character varying(8),
>>>     flags character varying(8),
>>>     lots text,
>>>     entity character varying(20),
>>>     operator character varying(20),
>>>     datetime timestamp without time zone,
>>>     txid bigint,
>>>     comments text
>>> );
>>>
>>> ALTER TABLE uk7501.chart_data OWNER TO mes;
>>> ALTER TABLE ONLY chart_data
>>>     ADD CONSTRAINT chart_data_pk PRIMARY KEY (chart, entry);
>>>
>>> CREATE TRIGGER _mesrep_denyaccess
>>>     BEFORE INSERT OR DELETE OR UPDATE ON chart_data
>>>     FOR EACH ROW
>>>     EXECUTE PROCEDURE _mesrep.denyaccess('_mesrep');
>>> ALTER TABLE chart_data DISABLE TRIGGER _mesrep_denyaccess;
>>> CREATE TRIGGER _mesrep_logtrigger
>>>     AFTER INSERT OR DELETE OR UPDATE ON chart_data
>>>     FOR EACH ROW
>>>     EXECUTE PROCEDURE _mesrep.logtrigger('_mesrep', '1', 'kk');
>>> ALTER TABLE ONLY chart_data
>>> REVOKE ALL ON TABLE chart_data FROM PUBLIC;
>>> REVOKE ALL ON TABLE chart_data FROM mes;
>>> GRANT ALL ON TABLE chart_data TO mes;
>>> GRANT SELECT ON TABLE chart_data TO mesview;
>>
>> --
>> Jeff Frost, Owner       <jeff at frostconsultingllc.com>
>> Frost Consulting, LLC   http://www.frostconsultingllc.com/
>> Phone: 916-647-6411     FAX: 916-405-4032
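[Editor's note] For anyone hitting the same setAddTable_int() error: before (re)subscribing, it is worth confirming on the subscriber that the primary-key index the schema load was supposed to create actually exists there. The sketch below uses the schema, table, and index names from this thread (uk7501, chart_data, chart_data_pk); substitute your own.

```sql
-- Run on the subscriber node (node 2 in this setup).
-- Each query should return exactly one row if the primary key made it across.

-- 1. Is there a PRIMARY KEY constraint on the table?
SELECT conname
FROM pg_constraint
WHERE conrelid = 'uk7501.chart_data'::regclass
  AND contype = 'p';

-- 2. Does the index exist under the name Slony-I was told to use?
SELECT indexname
FROM pg_indexes
WHERE schemaname = 'uk7501'
  AND tablename  = 'chart_data'
  AND indexname  = 'chart_data_pk';
```

If either query returns no rows, the schema load silently missed the constraint; re-running the `pg_dump -s ... | psql` step with `psql -v ON_ERROR_STOP=1` (or inspecting its stderr) should surface whatever failed before you retry the subscription.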