Wed Mar 5 00:42:29 PST 2008
- Previous message: [Slony1-general] Slony 1.2.13 tests failing?
- Next message: [Slony1-general] configure-replication.sh and SEQUENCES
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
testmultiplemoves
PGBINDIR=/export/home/tmp/satya/postgres-bin/bin/ PGPORT=5432 PGUSER=nb199489 ./run_test.sh testmultiplemoves/
test: testmultiplemoves/
----------------------------------------------------
$Id: README,v 1.2.2.1 2007-06-12 18:14:32 cbbrowne Exp $
testmultiplemoves is a test that exercises MOVE SET on a 3 node
cluster with 2 replication sets.
The interesting bit is that it requests MOVE SET on all the sets...
This may be expected to "stress" things such as...
- Auto generation of partial indexes on sl_log_? tables
- Locking and unlocking sets
----------------------------------------------------
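The MOVE SET sequence this test stresses can be sketched in slonik. This is an illustrative sketch only, not the test harness's actual script; the admin conninfo preamble and the per-set flow are assumptions based on the description above:

```
# hypothetical slonik sketch of moving one set (repeated per set in the test)
cluster name = slony_regress1;
node 1 admin conninfo = 'dbname=slonyregress1 host=localhost user=nb199489 port=5432';
node 2 admin conninfo = 'dbname=slonyregress2 host=localhost user=nb199489 port=5432';

lock set (id = 1, origin = 1);                  # quiesce writes on the old origin
wait for event (origin = 1, confirmed = 2);     # let the new origin catch up
move set (id = 1, old origin = 1, new origin = 2);
```

Running this for every set back to back is what exercises the set locking/unlocking and the regeneration of the partial indexes on the sl_log_? tables.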
creating origin DB: nb199489 -h localhost -U nb199489 -p 5432 slonyregress1
add plpgsql to Origin
loading origin DB with testmultiplemoves//init_schema.sql
done
creating subscriber 2 DB: nb199489 -h localhost -U nb199489 -p 5432 slonyregress2
add plpgsql to subscriber
loading subscriber 2 DB from slonyregress1
done
creating subscriber 3 DB: nb199489 -h localhost -U nb199489 -p 5432 slonyregress3
add plpgsql to subscriber
loading subscriber 3 DB from slonyregress1
done
creating cluster
done
storing nodes
done
storing paths
done
launching originnode : /export/home/tmp/satya/postgres-bin/bin//slon -s500 -g10 -d2 slony_regress1 "dbname=slonyregress1 host=localhost user=nb199489 port=5432"
Creating log shipping directory - /tmp/slony-regress.vbdksudmh/archive_logs_2
launching: /export/home/tmp/satya/postgres-bin/bin//slon -s500 -g10 -d2 -a /tmp/slony-regress.vbdksudmh/archive_logs_2 slony_regress1 "dbname=slonyregress2 host=localhost user=nb199489 port=5432"
Creating log shipping directory - /tmp/slony-regress.vbdksudmh/archive_logs_3
launching: /export/home/tmp/satya/postgres-bin/bin//slon -s500 -g10 -d2 -a /tmp/slony-regress.vbdksudmh/archive_logs_3 slony_regress1 "dbname=slonyregress3 host=localhost user=nb199489 port=5432"
subscribing
./run_test.sh: ERROR: Slonik error see /tmp/slony-regress.vbdksudmh/slonik.log for details
------
less /tmp/slony-regress.vbdksudmh/slonik.log
<stdin>:25: timeout exceeded while waiting for event confirmation
testlogship
[/export/home/tmp/satya/slony1-1.2.13/tests] 13:30:31 $ PGBINDIR=/export/home/tmp/satya/postgres-bin/bin/ PGPORT=5432 PGUSER=nb199489 ./run_test.sh testlogship/
test: testlogship/
----------------------------------------------------
$Id: README,v 1.1.2.3 2007-06-05 14:38:06 cbbrowne Exp $
testlogship is a basic test that replication generally functions with
log shipping.

It creates three simple tables as one replication set, and replicates
them from one database to another.

The three tables are of the three interesting types:

1. table1 has a formal primary key
2. table2 lacks a formal primary key, but has a candidate primary key

It tries replicating a third table, which has an invalid candidate
primary key (columns not defined NOT NULL), which should cause it to
be rejected. That is done in a slonik TRY {} block.

It also creates...

3. table4, which has columns of all sorts of vaguely esoteric types to
   exercise that points, paths, bitmaps, mac addresses, and inet types
   replicate properly.

It then loads data into these tables.

The test then runs a DDL script which alters the schema for table4,
adding two new columns: one is populated via a default for new tuples;
the other has no default, but we assign the value 42 to all tuples
existing at the time the DDL script runs.

Surrounding that DDL script are several STORE TRIGGER requests, in
order to try to ensure that we have a series of non-SYNC events in a
row.
----------------------------------------------------
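The table shapes and the DDL step described above can be sketched in SQL. This is illustrative only; the real definitions live in testlogship/init_schema.sql, and the column names (beyond table names visible in the log) are assumptions:

```sql
-- hypothetical sketch, not the test's actual init_schema.sql
CREATE TABLE table1 (
    id serial PRIMARY KEY,       -- a formal primary key
    data text
);

CREATE TABLE table2 (
    id serial NOT NULL UNIQUE,   -- no PK, but a valid candidate key
    data text
);

CREATE TABLE no_good_candidate_pk (
    k1 integer,                  -- nullable columns cannot serve as a
    k2 integer,                  -- candidate key, so slonik rejects this
    data text
);

-- the DDL step against table4: one column with a default for new tuples,
-- one backfilled with 42 for pre-existing tuples (column names assumed)
ALTER TABLE table4 ADD COLUMN newcol integer DEFAULT 0;
ALTER TABLE table4 ADD COLUMN newcol2 integer;
UPDATE table4 SET newcol2 = 42;
```

The rejection of no_good_candidate_pk is expected to happen inside a slonik TRY {} block, so the test passes when that subscription attempt fails.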
creating origin DB: nb199489 -h localhost -U nb199489 -p 5432 slonyregress1
add plpgsql to Origin
loading origin DB with testlogship//init_schema.sql
done
creating subscriber 2 DB: nb199489 -h localhost -U nb199489 -p 5432 slonyregress2
add plpgsql to subscriber
loading subscriber 2 DB from slonyregress1
done
creating subscriber 3 DB: nb199489 -h localhost -U nb199489 -p 5432 slonyregress3
add plpgsql to subscriber
loading subscriber 3 DB from slonyregress1
done
creating subscriber 4 DB: nb199489 -h localhost -U nb199489 -p 5432 slonyregress4
add plpgsql to subscriber
loading subscriber 4 DB from slonyregress1
done
creating cluster
done
storing nodes
Node 3 is a log shipping node - no need for STORE NODE
done
storing paths
log shipping between nodes(1/3) - ls(/true) - omit STORE PATH
log shipping between nodes(2/3) - ls(/true) - omit STORE PATH
log shipping between nodes(3/1) - ls(true/) - omit STORE PATH
log shipping between nodes(3/2) - ls(true/) - omit STORE PATH
log shipping between nodes(3/4) - ls(true/) - omit STORE PATH
log shipping between nodes(4/3) - ls(/true) - omit STORE PATH
done
launching originnode : /export/home/tmp/satya/postgres-bin/bin//slon -s500 -g10 -d2 slony_regress1 "dbname=slonyregress1 host=localhost user=nb199489 port=5432"
Creating log shipping directory - /tmp/slony-regress.mqgltsdtx/archive_logs_2
launching: /export/home/tmp/satya/postgres-bin/bin//slon -s500 -g10 -d2 -a /tmp/slony-regress.mqgltsdtx/archive_logs_2 slony_regress1 "dbname=slonyregress2 host=localhost user=nb199489 port=5432"
Creating log shipping directory - /tmp/slony-regress.mqgltsdtx/archive_logs_3
do not launch slon for node 3 - it receives data via log shipping
Creating log shipping directory - /tmp/slony-regress.mqgltsdtx/archive_logs_4
launching: /export/home/tmp/satya/postgres-bin/bin//slon -s500 -g10 -d2 -a /tmp/slony-regress.mqgltsdtx/archive_logs_4 slony_regress1 "dbname=slonyregress4 host=localhost user=nb199489 port=5432"
subscribing
done
generating 146 transactions of random data
0 %
5 %
10 %
15 %
20 %
25 %
30 %
35 %
40 %
45 %
50 %
55 %
60 %
65 %
70 %
75 %
80 %
85 %
90 %
95 %
100 %
done
launching polling script
loading data
data load complete - nodes are seeded reasonably
purge archive log files up to present in order to eliminate those that cannot be used
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000001.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000002.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000003.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000004.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000005.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000006.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000007.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000008.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000009.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000010.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000011.sql
purge /tmp/slony-regress.mqgltsdtx/archive_logs_2/slony1_log_2_00000000000000000012.sql
pull log shipping dump
WARNING: nonstandard use of \\ in a string literal at character 8
HINT: Use the escape string syntax for backslashes, e.g., E'\\'.
load schema for replicated tables into node #3
psql:testlogship/init_schema.sql:4: NOTICE: CREATE TABLE will create implicit sequence "table1_id_seq1" for serial column "table1.id"
psql:testlogship/init_schema.sql:4: ERROR: relation "table1" already exists
psql:testlogship/init_schema.sql:11: NOTICE: CREATE TABLE will create implicit sequence "table2_id_seq1" for serial column "table2.id"
psql:testlogship/init_schema.sql:11: ERROR: relation "table2" already exists
psql:testlogship/init_schema.sql:16: NOTICE: CREATE TABLE will create implicit sequence "table3_id_seq1" for serial column "table3.id"
psql:testlogship/init_schema.sql:16: ERROR: relation "table3" already exists
psql:testlogship/init_schema.sql:18: ERROR: relation "no_good_candidate_pk" already exists
psql:testlogship/init_schema.sql:31: NOTICE: CREATE TABLE will create implicit sequence "table4_id_seq1" for serial column "table4.id"
psql:testlogship/init_schema.sql:31: ERROR: relation "table4" already exists
load log shipping dump into node #3
START TRANSACTION
CREATE SCHEMA
psql:/tmp/slony-regress.mqgltsdtx/logship_dump.sql:18: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "sl_sequence-pkey" for table "sl_sequence_offline"
CREATE TABLE
CREATE TABLE
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
INSERT 0 1
COMMIT
generate more data to test log shipping
generating 187 transactions of random data
0 %
5 %
10 %
15 %
20 %
25 %
30 %
14 rows behind
35 %
40 %
45 %
slony is caught up
50 %
55 %
60 %
65 %
70 %
75 %
80 %
85 %
90 %
95 %
100 %
done
launching polling script
loading data
waiting for nodes to catchup
 st_origin | st_received | st_last_event |      st_last_event_ts      | st_last_received |    st_last_received_ts     | st_last_received_event_ts  | st_lag_num_events |   st_lag_time
-----------+-------------+---------------+----------------------------+------------------+----------------------------+----------------------------+-------------------+-----------------
         1 |           2 |            34 | 2008-03-05 13:33:34.290387 |               33 | 2008-03-05 13:33:33.984572 | 2008-03-05 13:33:33.770364 |                 1 | 00:00:00.630055
         1 |           4 |            34 | 2008-03-05 13:33:34.290387 |               28 | 2008-03-05 13:33:26.620473 | 2008-03-05 13:33:26.590639 |                 6 | 00:00:07.80978
(2 rows)
1 rows behind
 st_origin | st_received | st_last_event |      st_last_event_ts      | st_last_received |    st_last_received_ts     | st_last_received_event_ts  | st_lag_num_events |   st_lag_time
-----------+-------------+---------------+----------------------------+------------------+----------------------------+----------------------------+-------------------+-----------------
         1 |           2 |            36 | 2008-03-05 13:33:45.00072  |               36 | 2008-03-05 13:33:46.052075 | 2008-03-05 13:33:45.00072  |                 0 | 00:00:09.479582
         1 |           4 |            36 | 2008-03-05 13:33:45.00072  |               36 | 2008-03-05 13:33:53.212123 | 2008-03-05 13:33:45.00072  |                 0 | 00:00:09.479582
(2 rows)
done
execute DDL script
completed DDL script
Generate some more data
generating 74 transactions of random data
0 %
5 %
10 %
8 rows behind
15 %
20 %
25 %
30 %
35 %
40 %
45 %
50 %
55 %
slony is caught up
60 %
65 %
70 %
75 %
80 %
85 %
90 %
95 %
100 %
105 %
110 %
115 %
120 %
done
loading extra data to node slonyregress1
waiting for nodes to catchup
 st_origin | st_received | st_last_event |      st_last_event_ts      | st_last_received |    st_last_received_ts     | st_last_received_event_ts  | st_lag_num_events |   st_lag_time
-----------+-------------+---------------+----------------------------+------------------+----------------------------+----------------------------+-------------------+-----------------
         1 |           4 |            47 | 2008-03-05 13:34:12.990829 |               45 | 2008-03-05 13:34:06.303909 | 2008-03-05 13:34:05.360853 |                 2 | 00:00:07.966857
         1 |           2 |            47 | 2008-03-05 13:34:12.990829 |               46 | 2008-03-05 13:34:12.663501 | 2008-03-05 13:34:12.480887 |                 1 | 00:00:00.846823
(2 rows)
 st_origin | st_received | st_last_event |      st_last_event_ts      | st_last_received |    st_last_received_ts     | st_last_received_event_ts  | st_lag_num_events |   st_lag_time
-----------+-------------+---------------+----------------------------+------------------+----------------------------+----------------------------+-------------------+-----------------
         1 |           4 |            49 | 2008-03-05 13:34:23.701003 |               49 | 2008-03-05 13:34:26.202799 | 2008-03-05 13:34:23.701003 |                 0 | 00:00:09.70844
         1 |           2 |            49 | 2008-03-05 13:34:23.701003 |               49 | 2008-03-05 13:34:30.722688 | 2008-03-05 13:34:23.701003 |                 0 | 00:00:09.70844
(2 rows)
done
move set to node 4
origin moved
generating 601 transactions of random data
0 %
5 %
10 %
15 %
20 %
25 %
30 %
35 %
40 %
45 %
50 %
55 %
60 %
65 %
70 %
75 %
80 %
85 %
90 %
95 %
100 %
done
loading extra data to node slonyregress4
waiting for nodes to catchup
 st_origin | st_received | st_last_event |      st_last_event_ts      | st_last_received |    st_last_received_ts     | st_last_received_event_ts  | st_lag_num_events |   st_lag_time
-----------+-------------+---------------+----------------------------+------------------+----------------------------+----------------------------+-------------------+-----------------
         1 |           4 |            64 | 2008-03-05 13:36:36.062688 |               64 | 2008-03-05 13:36:36.664834 | 2008-03-05 13:36:36.062688 |                 0 | 00:00:06.800465
         1 |           2 |            64 | 2008-03-05 13:36:36.062688 |               64 | 2008-03-05 13:36:36.165865 | 2008-03-05 13:36:36.062688 |                 0 | 00:00:06.800465
(2 rows)
 st_origin | st_received | st_last_event |      st_last_event_ts      | st_last_received |    st_last_received_ts     | st_last_received_event_ts  | st_lag_num_events |   st_lag_time
-----------+-------------+---------------+----------------------------+------------------+----------------------------+----------------------------+-------------------+-----------------
         1 |           2 |            66 | 2008-03-05 13:36:56.392763 |               66 | 2008-03-05 13:36:56.424345 | 2008-03-05 13:36:56.392763 |                 0 | 00:00:06.548586
         1 |           4 |            66 | 2008-03-05 13:36:56.392763 |               66 | 2008-03-05 13:36:57.084712 | 2008-03-05 13:36:56.392763 |                 0 | 00:00:06.548586
(2 rows)
done
final data load complete - now load files into log shipped node
/usr/bin/find: illegal option -- n
/usr/bin/find: [-H | -L] path-list predicate-list
/usr/bin/find: illegal option -- n
/usr/bin/find: [-H | -L] path-list predicate-list
Logs numbered from to
current sequence value: 00000000000000000000
done
getting data from origin DB for diffing
done
getting data from node 2 for diffing against origin
comparing
subscriber node 2 is the same as origin node 1
done
getting data from node 3 for diffing against origin
ERROR: column "newcol" does not exist
LINE 1: ...col,pathcol,polycol,circcol,ipcol,maccol, bitcol, newcol, ne...
^
comparing
./run_test.sh: WARNING: /tmp/slony-regress.mqgltsdtx/db_1.dmp /tmp/slony-regress.mqgltsdtx/db_3.dmp differ, see /tmp/slony-regress.mqgltsdtx/db_diff.3 for details
done
getting data from node 4 for diffing against origin
comparing
subscriber node 4 is the same as origin node 1
done
**** killing slon node 1
**** killing slon node 2
**** killing slon node 4
waiting for slons to die
done
dropping database
slonyregress1
slonyregress2
slonyregress3
slonyregress4
done
there were 1 warnings during the run of testlogship/, check the files in /tmp/slony-regress.mqgltsdtx for more details