From cvsuser Wed Jul 5 11:12:51 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Per discussion on list about FORWARDING, add more details Message-ID: <20060705181251.8FDEB11BF2C3@gborg.postgresql.org> Log Message: ----------- Per discussion on list about FORWARDING, add more details to SUBSCRIBE SET indicating why you may want FORWARDING turned to true/false. Modified Files: -------------- slony1-engine/doc/adminguide: man.sgml (r1.7 -> r1.8) slonik_ref.sgml (r1.50 -> r1.51) slony.sgml (r1.29 -> r1.30) -------------- next part -------------- Index: man.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/man.sgml,v retrieving revision 1.7 retrieving revision 1.8 diff -Ldoc/adminguide/man.sgml -Ldoc/adminguide/man.sgml -u -w -r1.7 -r1.8 --- doc/adminguide/man.sgml +++ doc/adminguide/man.sgml @@ -42,6 +42,8 @@ storetrigger(integer,name)"> subscribeset(integer,integer,integer,boolean)"> sl_node"> + sl_log_1"> + sl_log_2"> sl_confirm"> pg_listener"> Index: slonik_ref.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v retrieving revision 1.50 retrieving revision 1.51 diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.50 -r1.51 --- doc/adminguide/slonik_ref.sgml +++ doc/adminguide/slonik_ref.sgml @@ -1923,6 +1923,49 @@ ); + + Forwarding Behaviour + + The FORWARD=boolean flag indicates + whether the subscriber will store log information in tables + &sllog1; and &sllog2;. Several implications fall from + this... + + By storing the data in these tables on the subscriber, + there is some additional processing burden. If you are certain + that you would never want to or to a particular subscriber, it is worth + considering turning off forwarding on that node. 
+ + There is, however, a case where having forwarding turned + off opens up a perhaps-unexpected failure condition; a rule of + thumb should be that all nodes that connect directly to + the origin should have forwarding turned on. Supposing + one such direct subscriber has forwarding turned + off, it is possible for that node to be forcibly lost in a case of + failover. The problem comes if that node gets ahead of other + nodes. + + Let's suppose that the origin, node 1 is at SYNC number + 88901, a non-forwarding node, node 2 has processed up to SYNC + 88897, and other forwarding nodes, 3, 4, and 5, have only + processed data up to SYNC 88895. At that moment, the disk system + on the origin node catches fire. Node 2 has the + data up to SYNC 88897, but there is no + remaining node that contains, in &sllog1; or &sllog2;, the data + for SYNCs 88896 and 88897, so there is no way to bring nodes 3-5 + up to that point. + + At that point, there are only two choices: To drop node 2, + because there is no way to continue managing it, or to drop all + nodes but 2, because there is no way to bring + them up to SYNC 88897. + + That dilemma may be avoided by making sure that all nodes + directly subscribing to the origin have forwarding turned + on. 
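The rule of thumb above can be expressed directly in a slonik script. This is a sketch only; the cluster name, conninfo strings, set id, and node numbers are all hypothetical. Nodes 2 and 3 feed directly from the origin, so both get forward = yes; node 4 is a leaf fed only by node 3, so forwarding may safely be off there.

```
#!/bin/sh
# Hypothetical layout: node 1 is the origin; nodes 2 and 3 subscribe
# directly to it and therefore forward; leaf node 4 does not.
slonik <<_EOF_
cluster name = testcluster;
node 1 admin conninfo = 'dbname=testdb host=origin';
node 2 admin conninfo = 'dbname=testdb host=sub2';
node 3 admin conninfo = 'dbname=testdb host=sub3';
node 4 admin conninfo = 'dbname=testdb host=sub4';

subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
subscribe set (id = 1, provider = 1, receiver = 3, forward = yes);
subscribe set (id = 1, provider = 3, receiver = 4, forward = no);
_EOF_
```

With this shape, losing the origin still leaves nodes 2 and 3 holding &sllog1;/&sllog2; data, so no surviving node can get unreachably ahead of the others.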
+ + Locking Behaviour This operation does not require Index: slony.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v retrieving revision 1.29 retrieving revision 1.30 diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.29 -r1.30 --- doc/adminguide/slony.sgml +++ doc/adminguide/slony.sgml @@ -42,6 +42,8 @@ Best Practice"> error messages indicating missing OIDs"> "> + "> + "> "> "> "> From cvsuser Wed Jul 5 13:57:43 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Trigger unhiding change - look for cases where there will Message-ID: <20060705205743.221F411BF2CB@gborg.postgresql.org> Log Message: ----------- Trigger unhiding change - look for cases where there will be a conflict between hidden triggers and those that are on Slony-I-replicated tables, and generate an exception. This should be less mystifying than the present situation where the subsequent query, to update pg_trigger, fails with a "not-unique" complaint about the PK index on pg_trigger. Also added an FAQ entry to document what you might do if this happens... 
Modified Files: -------------- slony1-engine/doc/adminguide: faq.sgml (r1.58 -> r1.59) slony1-engine/src/backend: slony1_funcs.sql (r1.86 -> r1.87) -------------- next part -------------- Index: faq.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v retrieving revision 1.58 retrieving revision 1.59 diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.58 -r1.59 --- doc/adminguide/faq.sgml +++ doc/adminguide/faq.sgml @@ -1777,6 +1777,57 @@ + I was trying to request or , and found +messages as follows on one of the subscribers: + + +NOTICE: Slony-I: multiple instances of trigger defrazzle on table frobozz +NOTICE: Slony-I: multiple instances of trigger derez on table tron +ERROR: Slony-I: Unable to disable triggers + + + + The trouble would seem to be that you have added +triggers on tables whose names conflict with triggers that were hidden +by &slony1;. + + &slony1; hides triggers (save for those unhidden +via ) by repointing them to the +primary key of the table. In the case of foreign key triggers, or +other triggers used to do data validation, it should be quite +unnecessary to run them on a subscriber, as equivalent triggers should +have been invoked on the origin node. In contrast, triggers that do +some form of cache invalidation are ones you might want +to have run on a subscriber. + + The Right Way to handle such triggers is +normally to use , which tells +&slony1; that a trigger should not get deactivated. + + But some intrepid DBA might take matters into their +own hands and install a trigger by hand on a subscriber, and the above +condition generally has that as the cause. What to do? What to do? + + + The answer is normally fairly simple: Drop out the +extra trigger on the subscriber before the event that +tries to restore them runs. 
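Using the hypothetical trigger and table names from the NOTICE messages above (defrazzle/frobozz, derez/tron), the manual cleanup is one DROP TRIGGER per conflict, run against the offending subscriber; the database name here is a stand-in:

```
# Drop the visible, hand-installed copies of the conflicting triggers
# on the subscriber; trigger and table names are taken from the
# NOTICE messages quoted above.
psql -d subscriber_db -c 'DROP TRIGGER defrazzle ON frobozz;'
psql -d subscriber_db -c 'DROP TRIGGER derez ON tron;'
```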
Ideally, if the DBA is particularly +intrepid, and aware of this issue, that should take place +before there is ever a chance for the error +message to appear. + + If the DBA is not that intrepid, the answer is to connect to +the offending node and drop the visible version of the +trigger using the SQL DROP +TRIGGER command. That should allow the event to proceed. +If the event was , then the +not-so-intrepid DBA may need to add the trigger back, +by hand, or, if they are wise, they should consider activating it +using . + + + Behaviour - all the subscriber nodes start to fall Index: slony1_funcs.sql =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/backend/slony1_funcs.sql,v retrieving revision 1.86 retrieving revision 1.87 diff -Lsrc/backend/slony1_funcs.sql -Lsrc/backend/slony1_funcs.sql -u -w -r1.86 -r1.87 --- src/backend/slony1_funcs.sql +++ src/backend/slony1_funcs.sql @@ -3774,6 +3774,8 @@ v_tab_fqname text; v_tab_attkind text; v_n int4; + v_trec record; + v_tgbad boolean; begin -- ---- -- Grab the central configuration lock @@ -3840,6 +3842,32 @@ -- ---- + -- Check to see if there are any trigger conflicts... 
+ -- ---- + v_tgbad := ''false''; + for v_trec in + select pc.relname, tg1.tgname from + "pg_catalog".pg_trigger tg1, + "pg_catalog".pg_trigger tg2, + "pg_catalog".pg_class pc, + "pg_catalog".pg_index pi, + @NAMESPACE at .sl_table tab + where + tg1.tgname = tg2.tgname and -- Trigger names match + tg1.tgrelid = tab.tab_reloid and -- trigger 1 is on the table + pi.indexrelid = tg2.tgrelid and -- trigger 2 is on the index + pi.indrelid = tab.tab_reloid and -- indexes table is this table + pc.oid = tab.tab_reloid + loop + raise notice ''Slony-I: multiple instances of trigger % on table %'', + v_trec.tgname, v_trec.relname; + v_tgbad := ''true''; + end loop; + if v_tgbad then + raise exception ''Slony-I: Unable to disable triggers''; + end if; + + -- ---- -- Disable all existing triggers -- ---- update "pg_catalog".pg_trigger From cvsuser Wed Jul 5 14:00:20 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: In Slony-I 1.1.x, there is a problem with the Message-ID: <20060705210020.363CD11BF2C7@gborg.postgresql.org> Log Message: ----------- In Slony-I 1.1.x, there is a problem with the launch_cluster.sh script where it is easy for a thread to fail and lead to the .PID file getting removed. launch_cluster.sh now searches to see if there is a "slon -f $CONFFILE" running, and doesn't bother trying to start another slon if there is a slon still running. In 1.2, the slon should be less fragile and shouldn't be as prone to removing .PID files, but this shouldn't actually damage behaviour... 
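The guard described in this log message can be sketched in portable shell. CONFIGPATH and NODENUM are hypothetical stand-ins for the script's real variables; the [s]lon bracket trick prevents the egrep process from matching its own command line.

```shell
# Does a slon for this node's config file already appear in the
# process list?  The [s]lon pattern cannot match the egrep itself.
CONFIGPATH=${CONFIGPATH:-"/etc/slony"}
NODENUM=${NODENUM:-1}
if ps auxww | egrep "[s]lon -f $CONFIGPATH/conf/node${NODENUM}.conf" > /dev/null; then
    msg="Slon already running - but PID marked dead"
else
    msg="No matching slon found - safe to invoke a new one"
fi
echo "$msg"
```

Only in the else branch would the script go on to call invoke_slon, which is what the hunk below is aiming for.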
Modified Files: -------------- slony1-engine/tools: launch_clusters.sh (r1.1 -> r1.2) -------------- next part -------------- Index: launch_clusters.sh =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/launch_clusters.sh,v retrieving revision 1.1 retrieving revision 1.2 diff -Ltools/launch_clusters.sh -Ltools/launch_clusters.sh -u -w -r1.1 -r1.2 --- tools/launch_clusters.sh +++ tools/launch_clusters.sh @@ -79,8 +79,13 @@ echo "Slon already running - $SLONPID" fi else + + if [[ PID=ps auxww | egrep "[s]lon -f $CONFIGPATH/conf/node${NODENUM}.conf" > /dev/null ]] ; then + echo "Slon already running - but PID marked dead" + else invoke_slon $LOGHOME $NODENUM $CLUSTER $SLONCONF fi + fi } for cluster in `echo $CLUSTERS`; do From cvsuser Wed Jul 5 14:35:02 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Updating INSTALL notes for 1.2 Message-ID: <20060705213502.295B311BF2C7@gborg.postgresql.org> Log Message: ----------- Updating INSTALL notes for 1.2 Modified Files: -------------- slony1-engine: INSTALL (r1.9 -> r1.10) -------------- next part -------------- Index: INSTALL =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/INSTALL,v retrieving revision 1.9 retrieving revision 1.10 diff -LINSTALL -LINSTALL -u -w -r1.9 -r1.10 --- INSTALL +++ INSTALL @@ -3,16 +3,25 @@ $Id$ -Slony currently supports PostgreSQL 7.3.3 (and higher), 7.4.x, and 8.x. +Slony-I currently supports PostgreSQL 7.4.0 (and higher), 8.0.x, and +8.1.x. + +Note that earlier versions supported versions in the 7.3.x series; as +of Slony-I 1.2.0, 7.3 support has been dropped. + +If you require 7.3 support, please avail yourself of an earlier +Slony-I release; seeing as how 7.3 is very old, dating back to 2002, +you really should consider upgrading to a newer version of PostgreSQL. 
Important Configuration parameters ==================================== -Slony normally needs to be built and installed by the PostgreSQL Unix -user. The installation target must be identical to the existing -PostgreSQL installation particularly in view of the fact that several -Slony-I components represent libraries and SQL scripts that need to be -in the PostgreSQL lib and share directories. +Slony-I normally needs to be built and installed by the same user that +owns the PostgreSQL binaries. The installation target must be +identical to the existing PostgreSQL installation particularly in view +of the fact that several Slony-I components represent libraries and +SQL scripts that need to be in the PostgreSQL lib/ and share/ +directories. On certain platforms (AIX and Solaris are known to need this), PostgreSQL must be configured with the option --enable-thread-safety @@ -62,7 +71,7 @@ The .sql files are not fully substituted yet. And yes, both the 7.3 and the 7.4 files get installed on a system, irrespective of its version. The slonik admin utility does namespace/cluster -substitutions within the files, and loads them files when creating +substitutions within the files, and loads those files when creating replication nodes. At that point in time, the database being initialized may be remote and may run a different version of PostgreSQL than that of the local host. From cvsuser Thu Jul 6 10:00:38 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: change to setAddTable_int(): Reject the table if the Message-ID: <20060706170038.3023D11BF2DF@gborg.postgresql.org> Log Message: ----------- change to setAddTable_int(): Reject the table if the proposed PK candidate contains any nullable columns. 
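The rejection rule can be illustrated with an ordinary catalog query; this is a sketch in modern PostgreSQL syntax, with the database name hypothetical and the index name borrowed from the regression test schema. Any row returned disqualifies the index as a candidate primary key.

```
# List columns of the proposed candidate-PK index that are NOT
# declared NOT NULL; any output means setAddTable_int should reject it.
psql -d origin_db <<'_EOF_'
SELECT a.attname
  FROM pg_catalog.pg_index i
  JOIN pg_catalog.pg_class c ON c.oid = i.indexrelid
  JOIN pg_catalog.pg_attribute a
    ON a.attrelid = i.indrelid
   AND a.attnum = ANY (i.indkey)
 WHERE c.relname = 'no_good_candidate_pk'
   AND NOT a.attnotnull;
_EOF_
```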
Modified Files: -------------- slony1-engine/src/backend: slony1_funcs.sql (r1.87 -> r1.88) -------------- next part -------------- Index: slony1_funcs.sql =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/backend/slony1_funcs.sql,v retrieving revision 1.87 retrieving revision 1.88 diff -Lsrc/backend/slony1_funcs.sql -Lsrc/backend/slony1_funcs.sql -u -w -r1.87 -r1.88 --- src/backend/slony1_funcs.sql +++ src/backend/slony1_funcs.sql @@ -2648,6 +2648,8 @@ v_sub_provider int4; v_relkind char; v_tab_reloid oid; + v_pkcand_nn boolean; + v_prec record; begin -- ---- -- Grab the central configuration lock @@ -2704,6 +2706,22 @@ p_fqname, p_tab_idxname; end if; + v_pkcand_nn := ''f''; + for v_prec in select attname from "pg_catalog".pg_attribute where attrelid = + (select oid from "pg_catalog".pg_class where oid = v_tab_reloid) + and attname in (select attname from "pg_catalog".pg_attribute where + attrelid = (select oid from "pg_catalog".pg_class PGC, + "pg_catalog".pg_index PGX where + PGC.relname = p_tab_idxname and PGX.indexrelid=PGC.oid and + PGX.indrelid = v_tab_reloid)) and attnotnull <> ''t'' + loop + raise notice ''Slony-I: setAddTable_int: table % PK column % nullable'', p_fqname, v_prec.attname; + v_pkcand_nn := ''t''; + end loop; + if v_pkcand_nn then + raise exception ''Slony-I: setAddTable_int: table % not replicable!'', p_fqname; + end if; + -- ---- -- Add the table to sl_table and create the trigger on it. -- ---- From cvsuser Thu Jul 6 11:26:09 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Add a regression test to verify that tables where the Message-ID: <20060706182609.86CFE11BF2DF@gborg.postgresql.org> Log Message: ----------- Add a regression test to verify that tables where the proposed candidate primary key has nullable columns are rejected... 
Modified Files: -------------- slony1-engine/tests/test1: init_add_tables.ik (r1.4 -> r1.5) init_schema.sql (r1.3 -> r1.4) -------------- next part -------------- Index: init_add_tables.ik =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/init_add_tables.ik,v retrieving revision 1.4 retrieving revision 1.5 diff -Ltests/test1/init_add_tables.ik -Ltests/test1/init_add_tables.ik -u -w -r1.4 -r1.5 --- tests/test1/init_add_tables.ik +++ tests/test1/init_add_tables.ik @@ -2,3 +2,12 @@ set add table (id=2, set id=1, origin=1, fully qualified name = 'public.table2', key='table2_id_key'); table add key (node id = 1, fully qualified name = 'public.table3'); set add table (id=3, set id=1, origin=1, fully qualified name = 'public.table3', key = SERIAL); + +try { + set add table (id=4, set id=1, origin=1, fully qualified name = 'public.table4', key = 'no_good_candidate_pk'); +} on error { + echo 'Tried to replicate table4 with no good candidate PK - rejected'; +} on success { + echo 'Tried to replicate table4 with no good candidate PK - accepted'; + exit 1; +} Index: init_schema.sql =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/init_schema.sql,v retrieving revision 1.3 retrieving revision 1.4 diff -Ltests/test1/init_schema.sql -Ltests/test1/init_schema.sql -u -w -r1.3 -r1.4 --- tests/test1/init_schema.sql +++ tests/test1/init_schema.sql @@ -19,3 +19,9 @@ CONSTRAINT table3_date_check CHECK (mod_date <= now()) ); +create table table4 ( + id serial NOT NULL, + id2 integer +); + +create unique index no_good_candidate_pk on table4 (id, id2); \ No newline at end of file From cvsuser Thu Jul 6 11:31:28 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Add more description to test1 README, as it's testing more Message-ID: 
<20060706183128.75B9911BF2DF@gborg.postgresql.org> Log Message: ----------- Add more description to test1 README, as it's testing more things than it used to. Modified Files: -------------- slony1-engine/tests/test1: README (r1.1 -> r1.2) -------------- next part -------------- Index: README =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/README,v retrieving revision 1.1 retrieving revision 1.2 diff -Ltests/test1/README -Ltests/test1/README -u -w -r1.1 -r1.2 --- tests/test1/README +++ tests/test1/README @@ -1,5 +1,20 @@ $Id$ -test1 is a basic test that replication generally functions. It -creates three simple tables as one replication set, and replicates -them from one database to another. +test1 is a basic test that replication generally functions. + +It doesn't try to do anything too terribly fancy: It creates three +simple tables as one replication set, and replicates them from one +database to another. + +The three tables are of the three interesting types: + +1. table1 has a formal primary key + +2. table2 lacks a formal primary key, but has a candidate primary key + +3. table3 has no candidate primary key; Slony-I is expected to + generate one on its own. + +It actually tries replicating a fourth table, which has an invalid +candidate primary key (columns not defined NOT NULL), which should +cause it to be rejected. That is done in a slonik TRY {} block. 
From cvsuser Tue Jul 11 07:34:02 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Cosmetic changes to comments and such Message-ID: <20060711143402.6D65211BF0DD@gborg.postgresql.org> Log Message: ----------- Cosmetic changes to comments and such Modified Files: -------------- slony1-engine/src/backend: README.events (r1.6 -> r1.7) slony1_funcs.sql (r1.88 -> r1.89) -------------- next part -------------- Index: README.events =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/backend/README.events,v retrieving revision 1.6 retrieving revision 1.7 diff -Lsrc/backend/README.events -Lsrc/backend/README.events -u -w -r1.6 -r1.7 --- src/backend/README.events +++ src/backend/README.events @@ -1,3 +1,7 @@ +Event Documentation +--------------------- +$Id$ + STORE_NODE ev_data1 no_id ev_data2 no_comment Index: slony1_funcs.sql =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/backend/slony1_funcs.sql,v retrieving revision 1.88 retrieving revision 1.89 diff -Lsrc/backend/slony1_funcs.sql -Lsrc/backend/slony1_funcs.sql -u -w -r1.88 -r1.89 --- src/backend/slony1_funcs.sql +++ src/backend/slony1_funcs.sql @@ -661,7 +661,8 @@ -- ---------------------------------------------------------------------- -- FUNCTION registerNodeConnection (nodeid) -- --- +-- register a node connection for a slon so that only that slon services +-- the node -- ---------------------------------------------------------------------- create or replace function @NAMESPACE at .registerNodeConnection (int4) returns int4 @@ -1017,7 +1018,7 @@ if exists (select true from @NAMESPACE at .sl_subscribe where sub_provider = p_no_id) then - raise exception ''Slony-I: Node % is still configured as data provider'', + raise exception ''Slony-I: Node % is still configured as a data provider'', p_no_id; end if; @@ -1125,7 
+1126,7 @@ -- ---- -- All consistency checks first - -- Check that every system that has a path to the failed node + -- Check that every node that has a path to the failed node -- also has a path to the backup node. -- ---- for v_row in select P.pa_client @@ -1149,7 +1150,7 @@ loop -- ---- -- Check that the backup node is subscribed to all sets - -- that origin on the failed node + -- that originate on the failed node -- ---- select into v_row2 sub_forward, sub_active from @NAMESPACE at .sl_subscribe @@ -5372,7 +5373,7 @@ perform "pg_catalog".setval(''@NAMESPACE at .sl_log_status'', 3); perform @NAMESPACE at .registry_set_timestamp( ''logswitch.laststart'', now()::timestamp); - raise notice ''Logswitch to sl_log_2 initiated''; + raise notice ''Slony-I: Logswitch to sl_log_2 initiated''; return 2; end if; @@ -5384,7 +5385,7 @@ perform "pg_catalog".setval(''@NAMESPACE at .sl_log_status'', 2); perform @NAMESPACE at .registry_set_timestamp( ''logswitch.laststart'', now()::timestamp); - raise notice ''Logswitch to sl_log_1 initiated''; + raise notice ''Slony-I: Logswitch to sl_log_1 initiated''; return 1; end if; From cvsuser Tue Jul 11 07:35:19 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Barrel of changes to documentation added during Conference Message-ID: <20060711143519.2E68311BF0DD@gborg.postgresql.org> Log Message: ----------- Barrel of changes to documentation added during Conference Modified Files: -------------- slony1-engine/doc/adminguide: addthings.sgml (r1.14 -> r1.15) maintenance.sgml (r1.20 -> r1.21) slonik_ref.sgml (r1.51 -> r1.52) slony.sgml (r1.30 -> r1.31) testbed.sgml (r1.8 -> r1.9) -------------- next part -------------- Index: addthings.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v retrieving revision 1.14 retrieving revision 1.15 diff -Ldoc/adminguide/addthings.sgml 
-Ldoc/adminguide/addthings.sgml -u -w -r1.14 -r1.15 --- doc/adminguide/addthings.sgml +++ doc/adminguide/addthings.sgml @@ -4,7 +4,6 @@ adding objects to replication - You may discover that you have missed replicating things that you wish you were replicating. @@ -76,12 +75,12 @@ have to interrupt normal activity to introduce replication. - Instead, you can add the table via + Instead, you may add the table via psql on each node. - Create a new replication set + Create a new replication set Add the table to the new set @@ -110,17 +109,39 @@ You absolutely must not include transaction control commands, particularly BEGIN and COMMIT, inside these DDL scripts. &slony1; wraps DDL scripts with a BEGIN/COMMIT pair; adding extra transaction control will mean that parts of the DDL will commit outside the control of &slony1; - Avoid, if possible, having quotes in the DDL script + Before version 1.2, it was necessary to be +exceedingly restrictive about what you tried to process using +. + + You could not have anything 'quoted' in the +script, as this would not be stored and forwarded properly. As of +1.2, quoting is now handled properly. + + If you submitted a series of DDL statements, the later ones +could not make reference to objects created in the earlier ones, as +the entire set of statements was submitted as a single query, where +the query plan was based on the state of the database at +the beginning, before any modifications had been +made. As of 1.2, if there are 12 SQL statements, they are each +submitted individually, so that alter table x add column c1 +integer; may now be followed by alter table x +alter column c1 set not null; . + + How to remove replication for a node + You will want to remove the various &slony1; components connected to the database(s). - We will just consider, for now, doing this to one node. If you have multiple nodes, you will have to repeat this as many times as necessary. + We will just consider, for now, doing this to one node. 
If you +have multiple nodes, you will have to repeat this as many times as +necessary. Components to be Removed: + Log Triggers / Update Denial Triggers @@ -136,48 +157,102 @@ How To Conveniently Handle Removal - - You may use the Slonik command to remove the node from the cluster. This will lead to the triggers and everything in the cluster schema being dropped from the node. The process will automatically die off. - - - - In the case of a failed node (where you used to switch to another node), you may need to use to drop out the triggers and schema and functions. - - - If the above things work out particularly badly, you could submit the SQL command DROP SCHEMA "_ClusterName" CASCADE;, which will drop out &slony1; functions, tables, and triggers alike. + You may use the Slonik +command to remove the node from the cluster. This will lead to the +triggers and everything in the cluster schema being dropped from the +node. The process will automatically die +off. + + In the case of a failed node (where you +used to switch to another node), you may +need to use to drop out the +triggers and schema and functions. + + If the node failed due to some dramatic hardware failure +(e.g. disk drives caught fire), there may not be +a database left on the failed node; it would only be expected to +survive if the failure was one involving a network failure where +the database was fine, but you were forced to +drop it from replication due to (say) some persistent network outage. + + + If the above things work out particularly badly, you +could submit the SQL command DROP SCHEMA "_ClusterName" +CASCADE;, which will drop out &slony1; functions, tables, +and triggers alike. That is generally less suitable +than , because that command not only +drops the schema and its contents, but also removes any columns added +in using . 
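The removal paths described above boil down to a few commands; cluster name, conninfo strings, and node numbers here are hypothetical:

```
#!/bin/sh
# Normal case: remove node 2 from a live cluster.
slonik <<_EOF_
cluster name = testcluster;
node 1 admin conninfo = 'dbname=testdb host=origin';
node 2 admin conninfo = 'dbname=testdb host=sub2';
drop node (id = 2, event node = 1);
_EOF_

# Failed-node case: strip the Slony-I schema, triggers, and functions
# from node 2 directly.
slonik <<_EOF_
cluster name = testcluster;
node 2 admin conninfo = 'dbname=testdb host=sub2';
uninstall node (id = 2);
_EOF_

# Last resort, as plain SQL on the node itself (see the caveat above
# about columns added by TABLE ADD KEY):
#   psql -d testdb -c 'DROP SCHEMA "_testcluster" CASCADE;'
```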
Adding A Node To Replication -Things are not fundamentally different whether you are adding a brand new, fresh node, or if you had previously dropped a node and are recreating it. In either case, you are adding a node to replication. +Things are not fundamentally different whether you are adding a +brand new, fresh node, or if you had previously dropped a node and are +recreating it. In either case, you are adding a node to +replication. The needful steps are thus... - - Determine the node number and any relevant DSNs for the new node. Use &postgres; command createdb to create the database; add the table definitions for the tables that are to be replicated, as &slony1; does not automatically propagate that information. - - - If the node had been a failed node, you may need to issue the command in order to get rid of its vestiges in the cluster, and to drop out the schema that &slony1; creates. + + Determine the node number and any relevant DSNs for +the new node. Use &postgres; command createdb to +create the database; add the table definitions for the tables that are +to be replicated, as &slony1; does not automatically propagate that +information. + + + If you do not have a perfectly clean SQL script to add in the +tables, then run the tool slony1_extract_schema.sh +from the tools directory to get the user schema +from the origin node with all &slony1; cruft +removed. + + + If the node had been a failed node, you may need to +issue the +command in order to get rid of its +vestiges in the cluster, and to drop out the schema that &slony1; +creates. - - Issue the slonik command to establish the new node. + + Issue the slonik +command to establish the new node. - - At this point, you may start a daemon against the new node. It may not know much about the other nodes yet, so the logs for this node may be pretty quiet. + + At this point, you may start a &lslon; daemon against +the new node. 
It may not know much about the other nodes yet, so the +logs for this node may be pretty quiet. - - Issue the slonik command to indicate how processes are to communicate with the new node. In &slony1; version 1.1 and later, this will then automatically generate listen path entries; in earlier versions, you will need to use to generate them manually. + + Issue the slonik +command to indicate +how processes are to communicate with the new +node. In &slony1; version 1.1 and later, this will then automatically +generate listen path entries; in +earlier versions, you will need to +use to generate them manually. - - Issue the slonik command to subscribe the node to some replication set. + + At this point, it is an excellent idea to run +the tools +script test_slony_state-dbi.pl, which rummages +through the state of the entire cluster, pointing out any anomalies +that it finds. This includes a variety of sorts of communications +problems. + + Issue the slonik +command to subscribe the node to +some replication set. + - How do I reshape the subscriptions? + How do I reshape subscriptions? For instance, I want subscriber node 3 to draw data from node 1, when it is presently drawing data from node 2. @@ -190,6 +265,53 @@ the subscriptions. Subscriptions will not be started from scratch; they will merely be reconfigured. + How do I use + + How do I know replication is working? + + The ultimate proof is in looking at whether data added at the +origin makes it to the subscribers. That's a simply matter of +querying. + + There are several ways of examining replication status, however: + + Look in the &lslon; logs. + + They won't say too much, even at very high debugging levels, on +an origin node; at debugging level 2, you should see, on subscribers, +that SYNCs are being processed. As of version 1.2, the information +reported for SYNC processing includes counts of the numbers of tables +processed, as well as numbers of tuples inserted, deleted, and +updated. 
+ + Look in the view sl_status , on +the origin node. + + This view will tell how far behind the various subscribing +nodes are in processing events from the node where you run the query. +It will only be very informative on a node that +originates a replication set. + + Run the tools +script test_slony_state-dbi.pl, which rummages +through the state of the entire cluster, pointing out any anomalies +that it notices, as well as some information on the status of each +node. + + + + + + What happens when I fail over? + + To be written... + + How do I move master to a new node? + + Obviously, use ; more details +should be added... + + 2 --> 4 and set 2 has subscriptions 1 --> 3 --> 4 There's no reason for trouble with nodes 1, 2, or 3; the "odd" case is with node 4, which is drawing data from node 1 from two different places. Modified Files: -------------- slony1-engine/tests/test1: README (r1.2 -> r1.3) generate_dml.sh (r1.6 -> r1.7) init_add_tables.ik (r1.5 -> r1.6) init_create_set.ik (r1.1 -> r1.2) init_subscribe_set.ik (r1.1 -> r1.2) settings.ik (r1.1 -> r1.2) -------------- next part -------------- Index: settings.ik =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/settings.ik,v retrieving revision 1.1 retrieving revision 1.2 diff -Ltests/test1/settings.ik -Ltests/test1/settings.ik -u -w -r1.1 -r1.2 --- tests/test1/settings.ik +++ tests/test1/settings.ik @@ -1,4 +1,4 @@ NUMCLUSTERS=${NUMCLUSTERS:-"1"} -NUMNODES=${NUMNODES:-"2"} +NUMNODES=${NUMNODES:-"4"} ORIGINNODE=1 WORKERS=${WORKERS:-"1"} Index: init_add_tables.ik =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/init_add_tables.ik,v retrieving revision 1.5 retrieving revision 1.6 diff -Ltests/test1/init_add_tables.ik -Ltests/test1/init_add_tables.ik -u -w -r1.5 -r1.6 --- tests/test1/init_add_tables.ik +++ tests/test1/init_add_tables.ik @@ -1,5 +1,6 @@ 
set add table (id=1, set id=1, origin=1, fully qualified name = 'public.table1', comment='accounts table'); -set add table (id=2, set id=1, origin=1, fully qualified name = 'public.table2', key='table2_id_key'); +set add table (id=2, set id=2, origin=1, fully qualified name = 'public.table2', key='table2_id_key'); + table add key (node id = 1, fully qualified name = 'public.table3'); set add table (id=3, set id=1, origin=1, fully qualified name = 'public.table3', key = SERIAL); Index: init_subscribe_set.ik =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/init_subscribe_set.ik,v retrieving revision 1.1 retrieving revision 1.2 diff -Ltests/test1/init_subscribe_set.ik -Ltests/test1/init_subscribe_set.ik -u -w -r1.1 -r1.2 --- tests/test1/init_subscribe_set.ik +++ tests/test1/init_subscribe_set.ik @@ -1 +1,8 @@ subscribe set ( id = 1, provider = 1, receiver = 2, forward = no); +wait for event (origin=ALL, confirmed=ALL, wait on 1, timeout=200); +subscribe set ( id = 1, provider = 2, receiver = 4, forward = no); +wait for event (origin=ALL, confirmed=ALL, wait on 1, timeout=200); +subscribe set ( id = 2, provider = 1, receiver = 3, forward = no); +wait for event (origin=ALL, confirmed=ALL, wait on 1, timeout=200); +subscribe set ( id = 2, provider = 3, receiver = 4, forward = no); + Index: init_create_set.ik =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/init_create_set.ik,v retrieving revision 1.1 retrieving revision 1.2 diff -Ltests/test1/init_create_set.ik -Ltests/test1/init_create_set.ik -u -w -r1.1 -r1.2 --- tests/test1/init_create_set.ik +++ tests/test1/init_create_set.ik @@ -1 +1,2 @@ create set (id=1, origin=1, comment='All test1 tables'); +create set (id=2, origin=1, comment='All test2 tables'); Index: generate_dml.sh =================================================================== RCS file: 
/usr/local/cvsroot/slony1/slony1-engine/tests/test1/generate_dml.sh,v retrieving revision 1.6 retrieving revision 1.7 diff -Ltests/test1/generate_dml.sh -Ltests/test1/generate_dml.sh -u -w -r1.6 -r1.7 Index: README =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/README,v retrieving revision 1.2 retrieving revision 1.3 diff -Ltests/test1/README -Ltests/test1/README -u -w -r1.2 -r1.3 --- tests/test1/README +++ tests/test1/README @@ -1,20 +1,13 @@ $Id$ -test1 is a basic test that replication generally functions. +test-odd-subscribes creates a "multi-flow" situation that does not, at +this time, function. -It doesn't try to do anything too terribly fancy: It creates three -simple tables as one replication set, and replicates them from one -database to another. +It doesn't try to do anything too terribly fancy: It creates two +replication sets, and replicates them in two ways: -The three tables are of the three interesting types: +set 1: +1 --> 2 --> 4 -1. table1 has a formal primary key - -2. table2 lacks a formal primary key, but has a candidate primary key - -3. table3 has no candidate primary key; Slony-I is expected to - generate one on its own. - -It actually tries replicating a fourth table, which has an invalid -candidate primary key (columns not defined NOT NULL), which should -cause it to be rejected. That is done in a slonik TRY {} block. 
+set 2: +1 --> 3 --> 4 From cvsuser Tue Jul 11 09:28:43 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Retitle "addthings" as "Task Oriented Guide" Message-ID: <20060711162843.D3EE811BF0DD@gborg.postgresql.org> Log Message: ----------- Retitle "addthings" as "Task Oriented Guide" Modified Files: -------------- slony1-engine/doc/adminguide: addthings.sgml (r1.15 -> r1.16) -------------- next part -------------- Index: addthings.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v retrieving revision 1.15 retrieving revision 1.16 diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.15 -r1.16 --- doc/adminguide/addthings.sgml +++ doc/adminguide/addthings.sgml @@ -1,13 +1,17 @@ -Adding Things to Replication +A Task-Oriented View of &slony1; adding objects to replication You may discover that you have missed replicating things that you wish you were replicating. -This can generally be fairly easily remedied. +This can generally be fairly easily remedied. This section +attempts to provide a task-oriented view of how to use +&slony1;; in effect, to answer the question How do I do +X with &slony1;?, for various values of +X. 
You cannot directly use or in From cvsuser Tue Jul 11 11:33:59 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By darcyb: Move variable declarations to the top of the block as per C Message-ID: <20060711183359.3BF3211BF0DD@gborg.postgresql.org> Log Message: ----------- Move variable declarations to the top of the block as per C spec Modified Files: -------------- slony1-engine/src/parsestatements: scanner.c (r1.1 -> r1.2) test-scanner.c (r1.2 -> r1.3) slony1-engine/src/slon: remote_worker.c (r1.115 -> r1.116) slony1-engine/src/slonik: slonik.c (r1.64 -> r1.65) -------------- next part -------------- Index: scanner.c =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/parsestatements/scanner.c,v retrieving revision 1.1 retrieving revision 1.2 diff -Lsrc/parsestatements/scanner.c -Lsrc/parsestatements/scanner.c -u -w -r1.1 -r1.2 --- src/parsestatements/scanner.c +++ src/parsestatements/scanner.c @@ -75,7 +75,7 @@ state = Q_DOLLAR_QUOTING; /* Return to dollar quoting mode */ break; } - int d1stemp = d1start; + d1stemp = d1start; while (d1stemp < d1end) { if (extended_statement[d1stemp] != extended_statement[d2start]) { /* mismatch - these aren't the droids... 
*/ Index: test-scanner.c =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/parsestatements/test-scanner.c,v retrieving revision 1.2 retrieving revision 1.3 diff -Lsrc/parsestatements/test-scanner.c -Lsrc/parsestatements/test-scanner.c -u -w -r1.2 -r1.3 --- src/parsestatements/test-scanner.c +++ src/parsestatements/test-scanner.c @@ -7,13 +7,15 @@ extern int statements; int main (int argc, char *const argv[]) { + + int i, j, START; int nstatements = 0; + fread(foo, sizeof(char), 65536, stdin); printf("Input: %s\n", foo); nstatements = scan_for_statements (foo); - int i, j, START; START = 0; for (i = 0; i < nstatements; i++) { printf("\nstatement %d\n-------------------------------------------\n", i); Index: remote_worker.c =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/slon/remote_worker.c,v retrieving revision 1.115 retrieving revision 1.116 diff -Lsrc/slon/remote_worker.c -Lsrc/slon/remote_worker.c -u -w -r1.115 -r1.116 --- src/slon/remote_worker.c +++ src/slon/remote_worker.c @@ -1461,6 +1461,8 @@ int ddl_setid = (int)strtol(event->ev_data1, NULL, 10); char *ddl_script = event->ev_data2; int ddl_only_on_node = (int)strtol(event->ev_data3, NULL, 10); + int num_statements = -1, stmtno, startpos; + PGresult *res; ExecStatusType rstat; @@ -1476,7 +1478,6 @@ slon_retry(); } - int num_statements = -1, stmtno, startpos; num_statements = scan_for_statements (ddl_script); slon_log(SLON_CONFIG, "remoteWorkerThread_%d: DDL request with %d statements\n", node->no_id, num_statements); Index: slonik.c =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/slonik/slonik.c,v retrieving revision 1.64 retrieving revision 1.65 diff -Lsrc/slonik/slonik.c -Lsrc/slonik/slonik.c -u -w -r1.64 -r1.65 --- src/slonik/slonik.c +++ src/slonik/slonik.c @@ -3813,6 +3813,7 @@ 
SlonDString query; SlonDString script; int rc; + int num_statements = -1, stmtno, startpos; char buf[4096]; char rex1[256]; char rex2[256]; @@ -3821,6 +3822,12 @@ PGresult *res; ExecStatusType rstat; +#define PARMCOUNT 1 + + const char *params[PARMCOUNT]; + int paramlens[PARMCOUNT]; + int paramfmts[PARMCOUNT]; + adminfo1 = get_active_adminfo((SlonikStmt *) stmt, stmt->ev_origin); if (adminfo1 == NULL) return -1; @@ -3859,7 +3866,6 @@ /* Split the script into a series of SQL statements - each needs to be submitted separately */ - int num_statements = -1, stmtno, startpos; num_statements = scan_for_statements (dstring_data(&script)); printf("DDL script consisting of %d SQL statements\n", num_statements); @@ -3909,12 +3915,6 @@ stmt->ddl_setid, stmt->only_on_node); -#define PARMCOUNT 1 - - const char *params[PARMCOUNT]; - int paramlens[PARMCOUNT]; - int paramfmts[PARMCOUNT]; - paramlens[PARMCOUNT-1] = 0; paramfmts[PARMCOUNT-1] = 0; params[PARMCOUNT-1] = dstring_data(&script); From cvsuser Tue Jul 11 11:57:09 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By darcyb: fix warning: unused value: $2 as pointed out by mastermind Message-ID: <20060711185709.BB3B411BF0DD@gborg.postgresql.org> Log Message: ----------- fix warning: unused value: $2 as pointed out by mastermind Modified Files: -------------- slony1-engine/src/slonik: parser.y (r1.24 -> r1.25) -------------- next part -------------- Index: parser.y =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/slonik/parser.y,v retrieving revision 1.24 retrieving revision 1.25 diff -Lsrc/slonik/parser.y -Lsrc/slonik/parser.y -u -w -r1.24 -r1.25 --- src/slonik/parser.y +++ src/slonik/parser.y @@ -473,8 +473,8 @@ | stmt_ddl_script { $$ = $1; } | stmt_update_functions - | stmt_repair_config { $$ = $1; } + | stmt_repair_config { $$ = $1; } | stmt_wait_event { $$ = $1; } From cvsuser Tue Jul 11 14:22:34 2006 From: 
cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By darcyb: Add note about Datestyle fix Message-ID: <20060711212234.3D25811BF0D8@gborg.postgresql.org> Log Message: ----------- Add note about Datestyle fix Modified Files: -------------- slony1-engine: RELEASE-1.2.0 (r1.5 -> r1.6) -------------- next part -------------- Index: RELEASE-1.2.0 =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/RELEASE-1.2.0,v retrieving revision 1.5 retrieving revision 1.6 diff -LRELEASE-1.2.0 -LRELEASE-1.2.0 -u -w -r1.5 -r1.6 --- RELEASE-1.2.0 +++ RELEASE-1.2.0 @@ -128,3 +128,7 @@ Logic added to cleanupevent() to clear out old sl_event entries if there is just one node. That then allows the cleanup thread to clear sl_log_1 etc. + +- Bug 1566 - Force all replication to occure in the ISO datestyle. + This ensures that we can apply date/timestamps regardless of the datestyle + they were entered in. From cvsuser Tue Jul 11 15:22:29 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By darcyb: In older versions of PG we can not call functional output in Message-ID: <20060711222229.5BB1E11BF0E5@gborg.postgresql.org> Log Message: ----------- In older versions of PG we can not call functional output in raise notice. 
Modified Files: -------------- slony1-engine/src/backend: slony1_funcs.sql (r1.89 -> r1.90) -------------- next part -------------- Index: slony1_funcs.sql =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/backend/slony1_funcs.sql,v retrieving revision 1.89 retrieving revision 1.90 diff -Lsrc/backend/slony1_funcs.sql -Lsrc/backend/slony1_funcs.sql -u -w -r1.89 -r1.90 --- src/backend/slony1_funcs.sql +++ src/backend/slony1_funcs.sql @@ -181,10 +181,13 @@ 'Returns the compiled-in version number of the Slony-I shared object'; create or replace function @NAMESPACE@.checkmoduleversion () returns text as ' +declare + moduleversion text; begin - if @NAMESPACE@.getModuleVersion() <> ''@MODULEVERSION@'' then - raise exception ''Slonik version: % != Slony-I version in PG build %'', - ''@MODULEVERSION@'', @NAMESPACE@.getModuleVersion(); + select into moduleversion @NAMESPACE@.getModuleVersion(); + if moduleversion <> ''@MODULEVERSION@'' then + raise exception ''Slonik version: @MODULEVERSION@ != Slony-I version in PG build %'', + moduleversion; end if; return null; end;' language plpgsql. From cvsuser Wed Jul 12 01:41:18 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By xfade: Fix SGML errors in addthings.sgml Message-ID: <20060712084118.64E5A11BF0B5@gborg.postgresql.org> Log Message: ----------- Fix SGML errors in addthings.sgml Modified Files: -------------- slony1-engine/doc/adminguide: addthings.sgml (r1.16 -> r1.17) -------------- next part -------------- Index: addthings.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v retrieving revision 1.16 retrieving revision 1.17 diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.16 -r1.17 --- doc/adminguide/addthings.sgml +++ doc/adminguide/addthings.sgml @@ -129,7 
+129,7 @@ made. As of 1.2, if there are 12 SQL statements, they are each submitted individually, so that alter table x add column c1 integer; may now be followed by alter table x -alter column c1 set not null; . +alter column c1 set not null; . @@ -288,7 +288,7 @@ processed, as well as numbers of tuples inserted, deleted, and updated. - Look in the view sl_status , on + Look in the view sl_status , on the origin node. This view will tell how far behind the various subscribing From cvsuser Thu Jul 13 14:38:02 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Version checking code in slonik.c broke in the way it Message-ID: <20060713213802.D28EC11BF0BD@gborg.postgresql.org> Log Message: ----------- Version checking code in slonik.c broke in the way it looked for elderly versions; apparently a < and > got swapped... Fixed that, as well as adding to the "your version of PostgreSQL is too old" message some indication as to how bad that situation is. For instance, if you're on PG 7.3, then Slony-I 1.1.5 is still an option. But if you're on < 7.3, Slony-I never was an option (at least not for those that lack near-PG-core-level hacking-on-slon abilities...) 
Modified Files: -------------- slony1-engine/src/slonik: slonik.c (r1.65 -> r1.66) -------------- next part -------------- Index: slonik.c =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/slonik/slonik.c,v retrieving revision 1.65 retrieving revision 1.66 diff -Lsrc/slonik/slonik.c -Lsrc/slonik/slonik.c -u -w -r1.65 -r1.66 --- src/slonik/slonik.c +++ src/slonik/slonik.c @@ -1671,6 +1671,7 @@ get_active_adminfo(SlonikStmt * stmt, int no_id) { SlonikAdmInfo *adminfo; + int version; if ((adminfo = get_adminfo(stmt, no_id)) == NULL) { @@ -1684,12 +1685,14 @@ if (db_connect(stmt, adminfo) < 0) return NULL; - if (adminfo->pg_version = db_get_version(stmt, adminfo) < 0) + version = db_get_version(stmt, adminfo); + if (version < 0) { PQfinish(adminfo->dbconn); adminfo->dbconn = NULL; return NULL; } + adminfo->pg_version = version; if (db_rollback_xact(stmt, adminfo) < 0) { @@ -1849,18 +1852,17 @@ } /* determine what schema version we should load */ - - if (adminfo->pg_version > 70300) /* 7.2 and lower */ + if (adminfo->pg_version < 70300) /* 7.3 and lower */ { printf("%s:%d: unsupported PostgreSQL " - "version %d.%d\n", + "version %d.%d (versions < 7.3 were never supported by Slony-I)\n", stmt->stmt_filename, stmt->stmt_lno, (adminfo->pg_version/10000), ((adminfo->pg_version%10000)/100)); } else if ((adminfo->pg_version >= 70300) && (adminfo->pg_version<70400)) /* 7.3 */ { printf("%s:%d: unsupported PostgreSQL " - "version %d.%d\n", + "version %d.%d (try Slony-I 1.1.5)\n", stmt->stmt_filename, stmt->stmt_lno, (adminfo->pg_version/10000), ((adminfo->pg_version%10000)/100)); } From cvsuser Thu Jul 13 14:39:25 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: slony1-funcs.sql used a CONTINUE statement only supported Message-ID: <20060713213925.7710F11BF0BD@gborg.postgresql.org> Log Message: ----------- slony1-funcs.sql used a CONTINUE statement 
only supported in PG 8.0+; unrolled the logic... Modified Files: -------------- slony1-engine/src/backend: slony1_funcs.sql (r1.90 -> r1.91) -------------- next part -------------- Index: slony1_funcs.sql =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/backend/slony1_funcs.sql,v retrieving revision 1.90 retrieving revision 1.91 diff -Lsrc/backend/slony1_funcs.sql -Lsrc/backend/slony1_funcs.sql -u -w -r1.90 -r1.91 --- src/backend/slony1_funcs.sql +++ src/backend/slony1_funcs.sql @@ -341,13 +341,13 @@ while v_i <= v_l loop if substr(p_tab_fqname, v_i, 1) != ''"'' then v_i := v_i + 1; - continue; - end if; + else v_i := v_i + 1; if substr(p_tab_fqname, v_i, 1) != ''"'' then exit; end if; v_i := v_i + 1; + end if; end loop; else -- first part of ident is not quoted, search for the dot directly @@ -2710,6 +2710,10 @@ p_fqname, p_tab_idxname; end if; + -- ---- + -- Verify that the columns in the PK (or candidate) are not NULLABLE + -- ---- + v_pkcand_nn := ''f''; for v_prec in select attname from "pg_catalog".pg_attribute where attrelid = (select oid from "pg_catalog".pg_class where oid = v_tab_reloid) From cvsuser Fri Jul 14 14:29:02 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Fix up test1; it had gotten "blathered on" by a would-be Message-ID: <20060714212902.2F6E511BF0C2@gborg.postgresql.org> Log Message: ----------- Fix up test1; it had gotten "blathered on" by a would-be new test... 
Modified Files: -------------- slony1-engine/tests/test1: README (r1.3 -> r1.4) init_add_tables.ik (r1.6 -> r1.7) init_cluster.ik (r1.1 -> r1.2) init_create_set.ik (r1.2 -> r1.3) init_subscribe_set.ik (r1.2 -> r1.3) settings.ik (r1.2 -> r1.3) -------------- next part -------------- Index: settings.ik =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/settings.ik,v retrieving revision 1.2 retrieving revision 1.3 diff -Ltests/test1/settings.ik -Ltests/test1/settings.ik -u -w -r1.2 -r1.3 --- tests/test1/settings.ik +++ tests/test1/settings.ik @@ -1,4 +1,4 @@ NUMCLUSTERS=${NUMCLUSTERS:-"1"} -NUMNODES=${NUMNODES:-"4"} +NUMNODES=${NUMNODES:-"2"} ORIGINNODE=1 WORKERS=${WORKERS:-"1"} Index: init_add_tables.ik =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/init_add_tables.ik,v retrieving revision 1.6 retrieving revision 1.7 diff -Ltests/test1/init_add_tables.ik -Ltests/test1/init_add_tables.ik -u -w -r1.6 -r1.7 --- tests/test1/init_add_tables.ik +++ tests/test1/init_add_tables.ik @@ -1,5 +1,5 @@ set add table (id=1, set id=1, origin=1, fully qualified name = 'public.table1', comment='accounts table'); -set add table (id=2, set id=2, origin=1, fully qualified name = 'public.table2', key='table2_id_key'); +set add table (id=2, set id=1, origin=1, fully qualified name = 'public.table2', key='table2_id_key'); table add key (node id = 1, fully qualified name = 'public.table3'); set add table (id=3, set id=1, origin=1, fully qualified name = 'public.table3', key = SERIAL); Index: init_subscribe_set.ik =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/init_subscribe_set.ik,v retrieving revision 1.2 retrieving revision 1.3 diff -Ltests/test1/init_subscribe_set.ik -Ltests/test1/init_subscribe_set.ik -u -w -r1.2 -r1.3 --- 
tests/test1/init_subscribe_set.ik +++ tests/test1/init_subscribe_set.ik @@ -1,8 +1 @@ subscribe set ( id = 1, provider = 1, receiver = 2, forward = no); -wait for event (origin=ALL, confirmed=ALL, wait on 1, timeout=200); -subscribe set ( id = 1, provider = 2, receiver = 4, forward = no); -wait for event (origin=ALL, confirmed=ALL, wait on 1, timeout=200); -subscribe set ( id = 2, provider = 1, receiver = 3, forward = no); -wait for event (origin=ALL, confirmed=ALL, wait on 1, timeout=200); -subscribe set ( id = 2, provider = 3, receiver = 4, forward = no); - Index: init_cluster.ik =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/init_cluster.ik,v retrieving revision 1.1 retrieving revision 1.2 diff -Ltests/test1/init_cluster.ik -Ltests/test1/init_cluster.ik -u -w -r1.1 -r1.2 Index: init_create_set.ik =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/init_create_set.ik,v retrieving revision 1.2 retrieving revision 1.3 diff -Ltests/test1/init_create_set.ik -Ltests/test1/init_create_set.ik -u -w -r1.2 -r1.3 --- tests/test1/init_create_set.ik +++ tests/test1/init_create_set.ik @@ -1,2 +1,2 @@ create set (id=1, origin=1, comment='All test1 tables'); -create set (id=2, origin=1, comment='All test2 tables'); + Index: README =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/tests/test1/README,v retrieving revision 1.3 retrieving revision 1.4 diff -Ltests/test1/README -Ltests/test1/README -u -w -r1.3 -r1.4 --- tests/test1/README +++ tests/test1/README @@ -1,13 +1,20 @@ $Id$ -test-odd-subscribes creates a "multi-flow" situation that does not, at -this time, function. +test1 is a basic test that replication generally functions. 
-It doesn't try to do anything too terribly fancy: It creates two -replication sets, and replicates them in two ways: +It doesn't try to do anything too terribly fancy: It creates three +simple tables as one replication set, and replicates them from one +database to another. -set 1: -1 --> 2 --> 4 +The three tables are of the three interesting types: -set 2: -1 --> 3 --> 4 +1. table1 has a formal primary key + +2. table2 lacks a formal primary key, but has a candidate primary key + +3. table3 has no candidate primary key; Slony-I is expected to + generate one on its own. + +It actually tries replicating a fourth table, which has an invalid +candidate primary key (columns not defined NOT NULL), which should +cause it to be rejected. That is done in a slonik TRY {} block. From cvsuser Mon Jul 17 09:27:06 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By devrim: - Updated spec and cleaned up rpmlint errors and warnings Message-ID: <20060717162706.BD2AE11BF0CF@gborg.postgresql.org> Log Message: ----------- - Updated spec and cleaned up rpmlint errors and warnings Modified Files: -------------- slony1-engine: postgresql-slony1-engine.spec.in (r1.29 -> r1.30) -------------- next part -------------- Index: postgresql-slony1-engine.spec.in =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/postgresql-slony1-engine.spec.in,v retrieving revision 1.29 retrieving revision 1.30 diff -Lpostgresql-slony1-engine.spec.in -Lpostgresql-slony1-engine.spec.in -u -w -r1.29 -r1.30 --- postgresql-slony1-engine.spec.in +++ postgresql-slony1-engine.spec.in @@ -5,16 +5,16 @@ %define pg_version %(rpm -q --queryformat '%{VERSION}' postgresql-devel) -Summary: A "master to multiple slaves" replication system with cascading and failover. 
+Summary: A "master to multiple slaves" replication system with cascading and failover Name: @PACKAGE_NAME@ Version: @PACKAGE_VERSION@ -Release: 1_PG%{pg_version} -License: Berkeley/BSD +Release: 2_PG%{pg_version} +License: BSD Group: Applications/Databases URL: http://slony.info/ -Packager: Devrim Gunduz +Packager: Devrim Gunduz Source0: @PACKAGE_NAME@-%{version}.tar.gz -BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot +BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n) BuildRequires: postgresql-devel Requires: postgresql-server = %{pg_version} @@ -71,36 +71,36 @@ autoconf -make +make %{?_smp_mflags} %if %perltools - make -C tools + make %{?_smp_mflags} -C tools %endif %install rm -rf $RPM_BUILD_ROOT -install -d $RPM_BUILD_ROOT%{_bindir} install -d $RPM_BUILD_ROOT%{_sysconfdir} install -d $RPM_BUILD_ROOT%{_datadir}/pgsql/ install -d $RPM_BUILD_ROOT%{_libdir}/pgsql/ -make DESTDIR=$RPM_BUILD_ROOT install +make %{?_smp_mflags} DESTDIR=$RPM_BUILD_ROOT install install -m 0755 src/backend/slony1_funcs.so $RPM_BUILD_ROOT%{_libdir}/pgsql/slony1_funcs.so install -m 0755 src/xxid/xxid.so $RPM_BUILD_ROOT%{_libdir}/pgsql/xxid.so -install -m 0755 src/backend/*.sql $RPM_BUILD_ROOT%{_datadir}/pgsql/ -install -m 0755 src/xxid/*.sql $RPM_BUILD_ROOT%{_datadir}/pgsql/ +install -m 0644 src/backend/*.sql $RPM_BUILD_ROOT%{_datadir}/pgsql/ +install -m 0644 src/xxid/*.sql $RPM_BUILD_ROOT%{_datadir}/pgsql/ install -m 0755 tools/*.sh $RPM_BUILD_ROOT%{_bindir}/ -install -m 0755 share/slon.conf-sample $RPM_BUILD_ROOT%{_sysconfdir}/slon.conf +install -m 0644 share/slon.conf-sample $RPM_BUILD_ROOT%{_sysconfdir}/slon.conf %if %perltools cd tools -make DESTDIR=$RPM_BUILD_ROOT install +make %{?_smp_mflags} DESTDIR=$RPM_BUILD_ROOT install /bin/rm -rf altperl/*.pl altperl/ToDo altperl/README altperl/Makefile altperl/CVS -install -m 0755 altperl/slon_tools.conf-sample $RPM_BUILD_ROOT%{_sysconfdir}/slon_tools.conf +install -m 0644 altperl/slon_tools.conf-sample 
$RPM_BUILD_ROOT%{_sysconfdir}/slon_tools.conf install -m 0755 altperl/* $RPM_BUILD_ROOT%{_bindir}/ -install -m 0755 altperl/slon-tools $RPM_BUILD_ROOT%{_libdir}/pgsql/slon-tools.pm +install -m 0644 altperl/slon-tools $RPM_BUILD_ROOT%{_libdir}/pgsql/slon-tools /bin/rm -f $RPM_BUILD_ROOT%{_sysconfdir}/slon_tools.conf-sample /bin/rm -f $RPM_BUILD_ROOT%{_bindir}/slon_tools.conf-sample -/bin/rm -f $RPM_BUILD_ROOT%{_libdir}/slon-tools.pm +/bin/rm -f $RPM_BUILD_ROOT%{_libdir}/pgsql/slon-tools.pm /bin/rm -f $RPM_BUILD_ROOT%{_bindir}/slon-tools.pm +/bin/rm -f $RPM_BUILD_ROOT%{_bindir}/slon-tools %endif %clean @@ -108,23 +108,24 @@ %files %defattr(-,root,root,-) +%doc COPYRIGHT UPGRADING HISTORY-1.1 INSTALL SAMPLE RELEASE-1.1.5 +%if %docs +%doc doc/adminguide doc/concept doc/howto doc/implementation doc/support +%endif %{_bindir}/* %{_libdir}/pgsql/slony1_funcs.so %{_libdir}/pgsql/xxid.so %{_datadir}/pgsql/*.sql %config(noreplace) %{_sysconfdir}/slon.conf %if %perltools -%{_libdir}/pgsql/slon-tools.pm +%{_libdir}/pgsql/slon-tools %config(noreplace) %{_sysconfdir}/slon_tools.conf %endif -%if %docs -%files docs -%defattr(-,root,root) -%doc COPYRIGHT UPGRADING HISTORY-1.1 INSTALL SAMPLE doc/adminguide doc/concept doc/howto doc/implementation doc/support -%endif - %changelog +* Mon Jul 17 2006 Devrim Gunduz postgresql-slony1-engine +- Updated spec and cleaned up rpmlint errors and warnings + * Wed Dec 21 2005 Devrim Gunduz postgresql-slony1-engine - Added a buildrhel3 macro to fix RHEL 3 RPM builds - Added a kerbdir macro From cvsuser Mon Jul 17 09:28:42 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By devrim: Improve description, by Joshua D. Message-ID: <20060717162842.6E15711BF0CF@gborg.postgresql.org> Log Message: ----------- Improve description, by Joshua D. 
Drake Modified Files: -------------- slony1-engine: postgresql-slony1-engine.spec.in (r1.30 -> r1.31) -------------- next part -------------- Index: postgresql-slony1-engine.spec.in =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/postgresql-slony1-engine.spec.in,v retrieving revision 1.30 retrieving revision 1.31 diff -Lpostgresql-slony1-engine.spec.in -Lpostgresql-slony1-engine.spec.in -u -w -r1.30 -r1.31 --- postgresql-slony1-engine.spec.in +++ postgresql-slony1-engine.spec.in @@ -25,15 +25,15 @@ %define prefix /usr %description -Slony-I will be a "master to multiple slaves" replication -system with cascading and failover. +Slony-I is a "master to multiple slaves" replication +system for PostgreSQL with cascading and failover. The big picture for the development of Slony-I is to build a master-slave system that includes all features and capabilities needed to replicate large databases to a reasonably limited number of slave systems. 
-Slony-I is planned as a system for data centers and backup +Slony-I is a system for data centers and backup sites, where the normal mode of operation is that all nodes are available From cvsuser Mon Jul 17 14:06:50 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By darcyb: Teach autoconf about standard_conforming_strings GUC Message-ID: <20060717210650.4316211BF0C7@gborg.postgresql.org> Log Message: ----------- Teach autoconf about standard_conforming_strings GUC Modified Files: -------------- slony1-engine: config.h.in (r1.16 -> r1.17) configure (r1.67 -> r1.68) slony1-engine/config: acx_libpq.m4 (r1.21 -> r1.22) -------------- next part -------------- Index: config.h.in =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/config.h.in,v retrieving revision 1.16 retrieving revision 1.17 diff -Lconfig.h.in -Lconfig.h.in -u -w -r1.16 -r1.17 --- config.h.in +++ config.h.in @@ -87,6 +87,9 @@ /* Set to 1 if typenameTypeId() takes 2 args */ #undef HAVE_TYPENAMETYPEID_2 +/* Set to 1 if standard_conforming_strings available */ +#undef HAVE_STANDARDCONFORMINGSTRINGS + /* For PostgreSQL 8.0 and up we need to use GetTopTransactionId() */ #undef HAVE_DECL_GETTOPTRANSACTIONID #if !HAVE_DECL_GETTOPTRANSACTIONID Index: configure =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/configure,v retrieving revision 1.67 retrieving revision 1.68 diff -Lconfigure -Lconfigure -u -w -r1.67 -r1.68 --- configure +++ configure @@ -1,10 +1,19 @@ #! /bin/sh # Guess values for system-dependent variables and create Makefiles. -# Generated by GNU Autoconf 2.59 for postgresql-slony1-engine HEAD_20060621. +# Generated by GNU Autoconf 2.53 for postgresql-slony1-engine HEAD_20060717. # -# Copyright (C) 2003 Free Software Foundation, Inc. 
+# Copyright 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001, 2002 +# Free Software Foundation, Inc. # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. + +if expr a : '\(a\)' >/dev/null 2>&1; then + as_expr=expr +else + as_expr=false +fi + + ## --------------------- ## ## M4sh Initialization. ## ## --------------------- ## @@ -13,57 +22,46 @@ if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: - # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which - # is contrary to our usage. Disable this feature. - alias -g '${1+"$@"}'='"$@"' elif test -n "${BASH_VERSION+set}" && (set -o posix) >/dev/null 2>&1; then set -o posix fi -DUALCASE=1; export DUALCASE # for MKS sh +# NLS nuisances. # Support unset when possible. -if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then +if (FOO=FOO; unset FOO) >/dev/null 2>&1; then as_unset=unset else as_unset=false fi - -# Work around bugs in pre-3.0 UWIN ksh. -$as_unset ENV MAIL MAILPATH -PS1='$ ' -PS2='> ' -PS4='+ ' - -# NLS nuisances. -for as_var in \ - LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \ - LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \ - LC_TELEPHONE LC_TIME -do - if (set +x; test -z "`(eval $as_var=C; export $as_var) 2>&1`"); then - eval $as_var=C; export $as_var - else - $as_unset $as_var - fi -done - -# Required to use basename. 
-if expr a : '\(a\)' >/dev/null 2>&1; then - as_expr=expr -else - as_expr=false -fi - -if (basename /) >/dev/null 2>&1 && test "X`basename / 2>&1`" = "X/"; then - as_basename=basename -else - as_basename=false -fi +(set +x; test -n "`(LANG=C; export LANG) 2>&1`") && + { $as_unset LANG || test "${LANG+set}" != set; } || + { LANG=C; export LANG; } +(set +x; test -n "`(LC_ALL=C; export LC_ALL) 2>&1`") && + { $as_unset LC_ALL || test "${LC_ALL+set}" != set; } || + { LC_ALL=C; export LC_ALL; } +(set +x; test -n "`(LC_TIME=C; export LC_TIME) 2>&1`") && + { $as_unset LC_TIME || test "${LC_TIME+set}" != set; } || + { LC_TIME=C; export LC_TIME; } +(set +x; test -n "`(LC_CTYPE=C; export LC_CTYPE) 2>&1`") && + { $as_unset LC_CTYPE || test "${LC_CTYPE+set}" != set; } || + { LC_CTYPE=C; export LC_CTYPE; } +(set +x; test -n "`(LANGUAGE=C; export LANGUAGE) 2>&1`") && + { $as_unset LANGUAGE || test "${LANGUAGE+set}" != set; } || + { LANGUAGE=C; export LANGUAGE; } +(set +x; test -n "`(LC_COLLATE=C; export LC_COLLATE) 2>&1`") && + { $as_unset LC_COLLATE || test "${LC_COLLATE+set}" != set; } || + { LC_COLLATE=C; export LC_COLLATE; } +(set +x; test -n "`(LC_NUMERIC=C; export LC_NUMERIC) 2>&1`") && + { $as_unset LC_NUMERIC || test "${LC_NUMERIC+set}" != set; } || + { LC_NUMERIC=C; export LC_NUMERIC; } +(set +x; test -n "`(LC_MESSAGES=C; export LC_MESSAGES) 2>&1`") && + { $as_unset LC_MESSAGES || test "${LC_MESSAGES+set}" != set; } || + { LC_MESSAGES=C; export LC_MESSAGES; } # Name of the executable. -as_me=`$as_basename "$0" || +as_me=`(basename "$0") 2>/dev/null || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)$' \| \ @@ -74,7 +72,6 @@ /^X\/\(\/\).*/{ s//\1/; q; } s/.*/./; q'` - # PATH needs CR, and LINENO needs CR and PATH. # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' @@ -85,15 +82,15 @@ # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then - echo "#! 
/bin/sh" >conf$$.sh - echo "exit 0" >>conf$$.sh - chmod +x conf$$.sh - if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then + echo "#! /bin/sh" >conftest.sh + echo "exit 0" >>conftest.sh + chmod +x conftest.sh + if (PATH=".;."; conftest.sh) >/dev/null 2>&1; then PATH_SEPARATOR=';' else PATH_SEPARATOR=: fi - rm -f conf$$.sh + rm -f conftest.sh fi @@ -141,8 +138,6 @@ as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null` test "x$as_lineno_1" != "x$as_lineno_2" && test "x$as_lineno_3" = "x$as_lineno_2" ') 2>/dev/null; then - $as_unset BASH_ENV || test "${BASH_ENV+set}" != set || { BASH_ENV=; export BASH_ENV; } - $as_unset ENV || test "${ENV+set}" != set || { ENV=; export ENV; } CONFIG_SHELL=$as_dir/$as_base export CONFIG_SHELL exec "$CONFIG_SHELL" "$0" ${1+"$@"} @@ -215,20 +210,13 @@ fi rm -f conf$$ conf$$.exe conf$$.file -if mkdir -p . 2>/dev/null; then - as_mkdir_p=: -else - test -d ./-p && rmdir ./-p - as_mkdir_p=false -fi - as_executable_p="test -f" # Sed expression to map a string onto a valid CPP name. -as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" +as_tr_cpp="sed y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g" # Sed expression to map a string onto a valid variable name. -as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" +as_tr_sh="sed y%*+%pp%;s%[^_$as_cr_alnum]%_%g" # IFS @@ -238,7 +226,7 @@ IFS=" $as_nl" # CDPATH. -$as_unset CDPATH +$as_unset CDPATH || test "${CDPATH+set}" != set || { CDPATH=$PATH_SEPARATOR; export CDPATH; } # Name of the host. @@ -252,7 +240,6 @@ # Initializations. # ac_default_prefix=/usr/local -ac_config_libobj_dir=. cross_compiling=no subdirs= MFLAGS= @@ -267,8 +254,8 @@ # Identity of this package. 
PACKAGE_NAME='postgresql-slony1-engine' PACKAGE_TARNAME='postgresql-slony1-engine' -PACKAGE_VERSION='HEAD_20060621' -PACKAGE_STRING='postgresql-slony1-engine HEAD_20060621' +PACKAGE_VERSION='HEAD_20060717' +PACKAGE_STRING='postgresql-slony1-engine HEAD_20060717' PACKAGE_BUGREPORT='' ac_unique_file="src" @@ -309,8 +296,6 @@ # include #endif" -ac_subst_vars='SHELL PATH_SEPARATOR PACKAGE_NAME PACKAGE_TARNAME PACKAGE_VERSION PACKAGE_STRING PACKAGE_BUGREPORT exec_prefix prefix program_transform_name bindir sbindir libexecdir datadir sysconfdir sharedstatedir localstatedir libdir includedir oldincludedir infodir mandir build_alias host_alias target_alias DEFS ECHO_C ECHO_N ECHO_T LIBS build build_cpu build_vendor build_os host host_cpu host_vendor host_os enable_debug CC CFLAGS LDFLAGS CPPFLAGS ac_ct_CC EXEEXT OBJEXT PERL TAR LEX YACC SED LD YFLAGS LEXFLAGS HEAD_20060621 with_gnu_ld enable_rpath acx_pthread_config PTHREAD_CC PTHREAD_LIBS PTHREAD_CFLAGS CPP EGREP HAVE_POSIX_SIGNALS NLSLIB PGINCLUDEDIR PGINCLUDESERVERDIR PGLIBDIR PGPKGLIBDIR PGSHAREDIR PGBINDIR HAVE_NETSNMP NETSNMP_CFLAGS NETSNMP_AGENTLIBS TOOLSBIN SLONYPATH HOST_OS PORTNAME SLONBINDIR with_docs GROFF PS2PDF DJPEG PNMTOPS CONVERT PGAUTODOC NSGMLS SGMLSPL d2mdir JADE have_docbook DOCBOOKSTYLE COLLATEINDEX docdir perlsharedir LIBOBJS LTLIBOBJS' -ac_subst_files='' # Initialize some variables set by options. ac_init_help= @@ -734,9 +719,6 @@ { (exit 1); exit 1; }; } fi fi -(cd $srcdir && test -r ./$ac_unique_file) 2>/dev/null || - { echo "$as_me: error: sources are in $srcdir, but \`cd $srcdir' does not work" >&2 - { (exit 1); exit 1; }; } srcdir=`echo "$srcdir" | sed 's%\([^\\/]\)[\\/]*$%\1%'` ac_env_build_alias_set=${build_alias+set} ac_env_build_alias_value=$build_alias @@ -782,7 +764,7 @@ # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. 
cat <<_ACEOF -\`configure' configures postgresql-slony1-engine HEAD_20060621 to adapt to many kinds of systems. +\`configure' configures postgresql-slony1-engine HEAD_20060717 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... @@ -843,7 +825,7 @@ if test -n "$ac_init_help"; then case $ac_init_help in - short | recursive ) echo "Configuration of postgresql-slony1-engine HEAD_20060621:";; + short | recursive ) echo "Configuration of postgresql-slony1-engine HEAD_20060717:";; esac cat <<\_ACEOF @@ -919,45 +901,12 @@ ac_srcdir=$ac_top_builddir$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_builddir$srcdir ;; esac - -# Do not use `cd foo && pwd` to compute absolute paths, because -# the directories may not exist. -case `pwd` in -.) ac_abs_builddir="$ac_dir";; -*) - case "$ac_dir" in - .) ac_abs_builddir=`pwd`;; - [\\/]* | ?:[\\/]* ) ac_abs_builddir="$ac_dir";; - *) ac_abs_builddir=`pwd`/"$ac_dir";; - esac;; -esac -case $ac_abs_builddir in -.) ac_abs_top_builddir=${ac_top_builddir}.;; -*) - case ${ac_top_builddir}. in - .) ac_abs_top_builddir=$ac_abs_builddir;; - [\\/]* | ?:[\\/]* ) ac_abs_top_builddir=${ac_top_builddir}.;; - *) ac_abs_top_builddir=$ac_abs_builddir/${ac_top_builddir}.;; - esac;; -esac -case $ac_abs_builddir in -.) ac_abs_srcdir=$ac_srcdir;; -*) - case $ac_srcdir in - .) ac_abs_srcdir=$ac_abs_builddir;; - [\\/]* | ?:[\\/]* ) ac_abs_srcdir=$ac_srcdir;; - *) ac_abs_srcdir=$ac_abs_builddir/$ac_srcdir;; - esac;; -esac -case $ac_abs_builddir in -.) ac_abs_top_srcdir=$ac_top_srcdir;; -*) - case $ac_top_srcdir in - .) ac_abs_top_srcdir=$ac_abs_builddir;; - [\\/]* | ?:[\\/]* ) ac_abs_top_srcdir=$ac_top_srcdir;; - *) ac_abs_top_srcdir=$ac_abs_builddir/$ac_top_srcdir;; - esac;; -esac +# Don't blindly perform a `cd "$ac_dir"/$ac_foo && pwd` since $ac_foo can be +# absolute. 
+ac_abs_builddir=`cd "$ac_dir" && cd $ac_builddir && pwd` +ac_abs_top_builddir=`cd "$ac_dir" && cd $ac_top_builddir && pwd` +ac_abs_srcdir=`cd "$ac_dir" && cd $ac_srcdir && pwd` +ac_abs_top_srcdir=`cd "$ac_dir" && cd $ac_top_srcdir && pwd` cd $ac_dir # Check for guested configure; otherwise get Cygnus style configure. @@ -981,10 +930,11 @@ test -n "$ac_init_help" && exit 0 if $ac_init_version; then cat <<\_ACEOF -postgresql-slony1-engine configure HEAD_20060621 -generated by GNU Autoconf 2.59 +postgresql-slony1-engine configure HEAD_20060717 +generated by GNU Autoconf 2.53 -Copyright (C) 2003 Free Software Foundation, Inc. +Copyright 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001, 2002 +Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. _ACEOF @@ -995,8 +945,8 @@ This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. -It was created by postgresql-slony1-engine $as_me HEAD_20060621, which was -generated by GNU Autoconf 2.59. Invocation command line was +It was created by postgresql-slony1-engine $as_me HEAD_20060717, which was +generated by GNU Autoconf 2.53. Invocation command line was $ $0 $@ @@ -1048,54 +998,27 @@ # Keep a trace of the command line. # Strip out --no-create and --no-recursion so they do not pile up. -# Strip out --silent because we don't want to record it for future runs. # Also quote any args containing shell meta-characters. -# Make two passes to allow for proper duplicate-argument suppression. 
ac_configure_args= -ac_configure_args0= -ac_configure_args1= ac_sep= -ac_must_keep_next=false -for ac_pass in 1 2 -do for ac_arg do case $ac_arg in - -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;; - -q | -quiet | --quiet | --quie | --qui | --qu | --q \ - | -silent | --silent | --silen | --sile | --sil) + -no-create | --no-create | --no-creat | --no-crea | --no-cre \ + | --no-cr | --no-c | -n ) continue ;; + -no-recursion | --no-recursion | --no-recursio | --no-recursi \ + | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) continue ;; *" "*|*" "*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?\"\']*) ac_arg=`echo "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac - case $ac_pass in - 1) ac_configure_args0="$ac_configure_args0 '$ac_arg'" ;; - 2) - ac_configure_args1="$ac_configure_args1 '$ac_arg'" - if test $ac_must_keep_next = true; then - ac_must_keep_next=false # Got value, back to normal. - else - case $ac_arg in - *=* | --config-cache | -C | -disable-* | --disable-* \ - | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \ - | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \ - | -with-* | --with-* | -without-* | --without-* | --x) - case "$ac_configure_args0 " in - "$ac_configure_args1"*" '$ac_arg' "* ) continue ;; - esac - ;; - -* ) ac_must_keep_next=true ;; + case " $ac_configure_args " in + *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy. + *) ac_configure_args="$ac_configure_args$ac_sep'$ac_arg'" + ac_sep=" " ;; esac - fi - ac_configure_args="$ac_configure_args$ac_sep'$ac_arg'" # Get rid of the leading space. - ac_sep=" " - ;; - esac done -done -$as_unset ac_configure_args0 || test "${ac_configure_args0+set}" != set || { ac_configure_args0=; export ac_configure_args0; } -$as_unset ac_configure_args1 || test "${ac_configure_args1+set}" != set || { ac_configure_args1=; export ac_configure_args1; } # When interrupted or exit'd, cleanup temporary files, and complete # config.log. 
We remove comments because anyway the quotes in there @@ -1106,7 +1029,6 @@ # Save into config.log some information that might help in debugging. { echo - cat <<\_ASBOX ## ---------------- ## ## Cache variables. ## @@ -1129,35 +1051,6 @@ esac; } echo - - cat <<\_ASBOX -## ----------------- ## -## Output variables. ## -## ----------------- ## -_ASBOX - echo - for ac_var in $ac_subst_vars - do - eval ac_val=$`echo $ac_var` - echo "$ac_var='"'"'$ac_val'"'"'" - done | sort - echo - - if test -n "$ac_subst_files"; then - cat <<\_ASBOX -## ------------- ## -## Output files. ## -## ------------- ## -_ASBOX - echo - for ac_var in $ac_subst_files - do - eval ac_val=$`echo $ac_var` - echo "$ac_var='"'"'$ac_val'"'"'" - done | sort - echo - fi - if test -s confdefs.h; then cat <<\_ASBOX ## ----------- ## @@ -1165,14 +1058,14 @@ ## ----------- ## _ASBOX echo - sed "/^$/d" confdefs.h | sort + sed "/^$/d" confdefs.h echo fi test "$ac_signal" != 0 && echo "$as_me: caught signal $ac_signal" echo "$as_me: exit $exit_status" } >&5 - rm -f core *.core && + rm -f core core.* *.core && rm -rf conftest* confdefs* conf$$* $ac_clean_files && exit $exit_status ' 0 @@ -1330,7 +1223,6 @@ - ac_config_headers="$ac_config_headers config.h" ac_aux_dir= @@ -1715,7 +1607,9 @@ # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. shift - ac_cv_prog_CC="$as_dir/$ac_word${1+' '}$@" + set dummy "$as_dir/$ac_word" ${1+"$@"} + shift + ac_cv_prog_CC="$@" fi fi fi @@ -1820,10 +1714,8 @@ fi -test -z "$CC" && { { echo "$as_me:$LINENO: error: no acceptable C compiler found in \$PATH -See \`config.log' for more details." >&5 -echo "$as_me: error: no acceptable C compiler found in \$PATH -See \`config.log' for more details." 
>&2;} +test -z "$CC" && { { echo "$as_me:$LINENO: error: no acceptable C compiler found in \$PATH" >&5 +echo "$as_me: error: no acceptable C compiler found in \$PATH" >&2;} { (exit 1); exit 1; }; } # Provide some information about the compiler. @@ -1847,12 +1739,15 @@ (exit $ac_status); } cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -1862,12 +1757,12 @@ } _ACEOF ac_clean_files_save=$ac_clean_files -ac_clean_files="$ac_clean_files a.out a.exe b.out" +ac_clean_files="$ac_clean_files a.out a.exe" # Try to create an executable without -o first, disregard a.out. # It will help us diagnose broken compilers, and finding out an intuition # of exeext. -echo "$as_me:$LINENO: checking for C compiler default output file name" >&5 -echo $ECHO_N "checking for C compiler default output file name... $ECHO_C" >&6 +echo "$as_me:$LINENO: checking for C compiler default output" >&5 +echo $ECHO_N "checking for C compiler default output... $ECHO_C" >&6 ac_link_default=`echo "$ac_link" | sed 's/ -o *conftest[^ ]*//'` if { (eval echo "$as_me:$LINENO: \"$ac_link_default\"") >&5 (eval $ac_link_default) 2>&5 @@ -1881,39 +1776,26 @@ # Be careful to initialize this variable, since it used to be cached. # Otherwise an old cache value of `no' led to `EXEEXT = no' in a Makefile. ac_cv_exeext= -# b.out is created by i960 compilers. 
-for ac_file in a_out.exe a.exe conftest.exe a.out conftest a.* conftest.* b.out -do - test -f "$ac_file" || continue +for ac_file in `ls a_out.exe a.exe conftest.exe 2>/dev/null; + ls a.out conftest 2>/dev/null; + ls a.* conftest.* 2>/dev/null`; do case $ac_file in - *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.o | *.obj ) - ;; - conftest.$ac_ext ) - # This is the source file. - ;; - [ab].out ) - # We found the default executable, but exeext='' is most + *.$ac_ext | *.o | *.obj | *.xcoff | *.tds | *.d | *.pdb | *.xSYM ) ;; + a.out ) # We found the default executable, but exeext='' is most # certainly right. break;; - *.* ) - ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` - # FIXME: I believe we export ac_cv_exeext for Libtool, - # but it would be cool to find out if it's true. Does anybody - # maintain Libtool? --akim. + *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` + # FIXME: I believe we export ac_cv_exeext for Libtool --akim. export ac_cv_exeext break;; - * ) - break;; + * ) break;; esac done else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - -{ { echo "$as_me:$LINENO: error: C compiler cannot create executables -See \`config.log' for more details." >&5 -echo "$as_me: error: C compiler cannot create executables -See \`config.log' for more details." >&2;} +cat conftest.$ac_ext >&5 +{ { echo "$as_me:$LINENO: error: C compiler cannot create executables" >&5 +echo "$as_me: error: C compiler cannot create executables" >&2;} { (exit 77); exit 77; }; } fi @@ -1940,11 +1822,9 @@ cross_compiling=yes else { { echo "$as_me:$LINENO: error: cannot run C compiled programs. -If you meant to cross compile, use \`--host'. -See \`config.log' for more details." >&5 +If you meant to cross compile, use \`--host'." >&5 echo "$as_me: error: cannot run C compiled programs. -If you meant to cross compile, use \`--host'. -See \`config.log' for more details." >&2;} +If you meant to cross compile, use \`--host'." 
>&2;} { (exit 1); exit 1; }; } fi fi @@ -1952,7 +1832,7 @@ echo "$as_me:$LINENO: result: yes" >&5 echo "${ECHO_T}yes" >&6 -rm -f a.out a.exe conftest$ac_cv_exeext b.out +rm -f a.out a.exe conftest$ac_cv_exeext ac_clean_files=$ac_clean_files_save # Check the compiler produces executables we can run. If not, either # the compiler is broken, or we cross compile. @@ -1972,10 +1852,9 @@ # catch `conftest.exe'. For instance with Cygwin, `ls conftest' will # work properly (i.e., refer to `conftest.exe'), while it won't with # `rm'. -for ac_file in conftest.exe conftest conftest.*; do - test -f "$ac_file" || continue +for ac_file in `(ls conftest.exe; ls conftest; ls conftest.*) 2>/dev/null`; do case $ac_file in - *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.o | *.obj ) ;; + *.$ac_ext | *.o | *.obj | *.xcoff | *.tds | *.d | *.pdb ) ;; *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` export ac_cv_exeext break;; @@ -1983,10 +1862,8 @@ esac done else - { { echo "$as_me:$LINENO: error: cannot compute suffix of executables: cannot compile and link -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot compute suffix of executables: cannot compile and link -See \`config.log' for more details." >&2;} + { { echo "$as_me:$LINENO: error: cannot compute suffix of executables: cannot compile and link" >&5 +echo "$as_me: error: cannot compute suffix of executables: cannot compile and link" >&2;} { (exit 1); exit 1; }; } fi @@ -2003,12 +1880,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -2025,19 +1905,16 @@ (exit $ac_status); }; then for ac_file in `(ls conftest.o conftest.obj; ls conftest.*) 2>/dev/null`; do case $ac_file in - *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg ) ;; + *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb ) ;; *) ac_cv_objext=`expr "$ac_file" : '.*\.\(.*\)'` break;; esac done else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - -{ { echo "$as_me:$LINENO: error: cannot compute suffix of object files: cannot compile -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot compute suffix of object files: cannot compile -See \`config.log' for more details." >&2;} +cat conftest.$ac_ext >&5 +{ { echo "$as_me:$LINENO: error: cannot compute suffix of object files: cannot compile" >&5 +echo "$as_me: error: cannot compute suffix of object files: cannot compile" >&2;} { (exit 1); exit 1; }; } fi @@ -2053,12 +1930,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -2072,20 +1952,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -2095,11 +1965,10 @@ ac_compiler_gnu=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_compiler_gnu=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext ac_cv_c_compiler_gnu=$ac_compiler_gnu fi @@ -2115,12 +1984,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -2131,20 +2003,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -2154,11 +2016,10 @@ ac_cv_prog_cc_g=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_prog_cc_g=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_prog_cc_g" >&5 echo "${ECHO_T}$ac_cv_prog_cc_g" >&6 @@ -2177,121 +2038,6 @@ CFLAGS= fi fi -echo "$as_me:$LINENO: checking for $CC option to accept ANSI C" >&5 -echo $ECHO_N "checking for $CC option to accept ANSI C... $ECHO_C" >&6 -if test "${ac_cv_prog_cc_stdc+set}" = set; then - echo $ECHO_N "(cached) $ECHO_C" >&6 -else - ac_cv_prog_cc_stdc=no -ac_save_CC=$CC -cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -#include -#include -#include -#include -/* Most of the following tests are stolen from RCS 5.7's src/conf.sh. */ -struct buf { int x; }; -FILE * (*rcsopen) (struct buf *, struct stat *, int); -static char *e (p, i) - char **p; - int i; -{ - return p[i]; -} -static char *f (char * (*g) (char **, int), char **p, ...) -{ - char *s; - va_list v; - va_start (v,p); - s = g (p, va_arg (v,int)); - va_end (v); - return s; -} - -/* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has - function prototypes and stuff, but not '\xHH' hex character constants. - These don't provoke an error unfortunately, instead are silently treated - as 'x'. The following induces an error, until -std1 is added to get - proper ANSI mode. Curiously '\x00'!='x' always comes out true, for an - array size at least. It's necessary to write '\x00'==0 to get something - that's true only with -std1. */ -int osf4_cc_array ['\x00' == 0 ? 
1 : -1]; - -int test (int i, double x); -struct s1 {int (*f) (int a);}; -struct s2 {int (*f) (double a);}; -int pairnames (int, char **, FILE *(*)(struct buf *, struct stat *, int), int, int); -int argc; -char **argv; -int -main () -{ -return f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]; - ; - return 0; -} -_ACEOF -# Don't try gcc -ansi; that turns off useful extensions and -# breaks some systems' header files. -# AIX -qlanglvl=ansi -# Ultrix and OSF/1 -std1 -# HP-UX 10.20 and later -Ae -# HP-UX older versions -Aa -D_HPUX_SOURCE -# SVR4 -Xc -D__EXTENSIONS__ -for ac_arg in "" -qlanglvl=ansi -std1 -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__" -do - CC="$ac_save_CC $ac_arg" - rm -f conftest.$ac_objext -if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 - ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && - { ac_try='test -s conftest.$ac_objext' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; }; then - ac_cv_prog_cc_stdc=$ac_arg -break -else - echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - -fi -rm -f conftest.err conftest.$ac_objext -done -rm -f conftest.$ac_ext conftest.$ac_objext -CC=$ac_save_CC - -fi - -case "x$ac_cv_prog_cc_stdc" in - x|xno) - echo "$as_me:$LINENO: result: none needed" >&5 -echo "${ECHO_T}none needed" >&6 ;; - *) - echo "$as_me:$LINENO: result: $ac_cv_prog_cc_stdc" >&5 -echo "${ECHO_T}$ac_cv_prog_cc_stdc" >&6 - CC="$CC $ac_cv_prog_cc_stdc" ;; -esac - # Some people use a C++ compiler to compile C. 
Since we use `exit', # in C++ we need to declare it. In case someone uses the same compiler # for both compiling C and C++ we need to have the C++ compiler decide @@ -2303,20 +2049,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -2325,6 +2061,7 @@ (exit $ac_status); }; }; then for ac_declaration in \ '' \ + '#include ' \ 'extern "C" void std::exit (int) throw (); using std::exit;' \ 'extern "C" void std::exit (int); using std::exit;' \ 'extern "C" void exit (int) throw ();' \ @@ -2332,13 +2069,16 @@ 'void exit (int);' do cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -$ac_declaration +#line $LINENO "configure" +#include "confdefs.h" #include +$ac_declaration +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -2349,20 +2089,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! 
-s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -2372,18 +2102,20 @@ : else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 continue fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_declaration +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -2394,20 +2126,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -2417,10 +2139,9 @@ break else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done rm -f conftest* if test -n "$ac_declaration"; then @@ -2431,10 +2152,9 @@ else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' @@ -2806,11 +2526,8 @@ echo "$as_me:$LINENO: checking for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS" >&5 echo $ECHO_N "checking for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus @@ -2819,6 +2536,12 @@ /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char pthread_join (); +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -2829,20 +2552,10 @@ _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! 
-s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -2852,11 +2565,9 @@ acx_pthread_ok=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext echo "$as_me:$LINENO: result: $acx_pthread_ok" >&5 echo "${ECHO_T}$acx_pthread_ok" >&6 if test x"$acx_pthread_ok" = xno; then @@ -2998,12 +2709,15 @@ # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -3016,20 +2730,10 @@ _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -3039,11 +2743,9 @@ acx_pthread_ok=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" @@ -3072,12 +2774,15 @@ attr_name=unknown for attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -3088,20 +2793,10 @@ _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -3111,11 +2806,9 @@ attr_name=$attr; break else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext done echo "$as_me:$LINENO: result: $attr_name" >&5 echo "${ECHO_T}$attr_name" >&6 @@ -3232,34 +2925,24 @@ do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. - # Prefer to if __STDC__ is defined, since - # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -#ifdef __STDC__ -# include -#else +#line $LINENO "configure" +#include "confdefs.h" # include -#endif Syntax error _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -3270,8 +2953,7 @@ : else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 # Broken: fails on valid input. continue fi @@ -3280,24 +2962,20 @@ # OK, works on sane cases. Now check whether non-existent headers # can be detected and how. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. 
*/ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -3309,8 +2987,7 @@ continue else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 # Passes both tests. ac_preproc_ok=: break @@ -3339,34 +3016,24 @@ do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. - # Prefer to if __STDC__ is defined, since - # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -#ifdef __STDC__ -# include -#else +#line $LINENO "configure" +#include "confdefs.h" # include -#endif Syntax error _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -3377,8 +3044,7 @@ : else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 # Broken: fails on valid input. continue fi @@ -3387,24 +3053,20 @@ # OK, works on sane cases. Now check whether non-existent headers # can be detected and how. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -3416,8 +3078,7 @@ continue else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 # Passes both tests. ac_preproc_ok=: break @@ -3430,10 +3091,8 @@ if $ac_preproc_ok; then : else - { { echo "$as_me:$LINENO: error: C preprocessor \"$CPP\" fails sanity check -See \`config.log' for more details." >&5 -echo "$as_me: error: C preprocessor \"$CPP\" fails sanity check -See \`config.log' for more details." >&2;} + { { echo "$as_me:$LINENO: error: C preprocessor \"$CPP\" fails sanity check" >&5 +echo "$as_me: error: C preprocessor \"$CPP\" fails sanity check" >&2;} { (exit 1); exit 1; }; } fi @@ -3444,89 +3103,55 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu -echo "$as_me:$LINENO: checking for egrep" >&5 -echo $ECHO_N "checking for egrep... 
$ECHO_C" >&6 -if test "${ac_cv_prog_egrep+set}" = set; then - echo $ECHO_N "(cached) $ECHO_C" >&6 -else - if echo a | (grep -E '(a|b)') >/dev/null 2>&1 - then ac_cv_prog_egrep='grep -E' - else ac_cv_prog_egrep='egrep' - fi -fi -echo "$as_me:$LINENO: result: $ac_cv_prog_egrep" >&5 -echo "${ECHO_T}$ac_cv_prog_egrep" >&6 - EGREP=$ac_cv_prog_egrep - - echo "$as_me:$LINENO: checking for ANSI C header files" >&5 echo $ECHO_N "checking for ANSI C header files... $ECHO_C" >&6 if test "${ac_cv_header_stdc+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include #include #include #include -int -main () -{ - - ; - return 0; -} _ACEOF -rm -f conftest.$ac_objext -if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 +if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 + (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && - { ac_try='test -s conftest.$ac_objext' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; }; then + (exit $ac_status); } >/dev/null; then + if test -s conftest.err; then + ac_cpp_err=$ac_c_preproc_warn_flag + else + ac_cpp_err= + fi +else + ac_cpp_err=yes +fi +if test -z "$ac_cpp_err"; then ac_cv_header_stdc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_cv_header_stdc=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.err conftest.$ac_ext if test $ac_cv_header_stdc = yes; then # SunOS 4.x string.h does not declare mem*, contrary to ANSI. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | - $EGREP "memchr" >/dev/null 2>&1; then + egrep "memchr" >/dev/null 2>&1; then : else ac_cv_header_stdc=no @@ -3538,16 +3163,13 @@ if test $ac_cv_header_stdc = yes; then # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | - $EGREP "free" >/dev/null 2>&1; then + egrep "free" >/dev/null 2>&1; then : else ac_cv_header_stdc=no @@ -3562,18 +3184,14 @@ : else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include #if ((' ' & 0x0FF) == 0x020) # define ISLOWER(c) ('a' <= (c) && (c) <= 'z') # define TOUPPER(c) (ISLOWER(c) ? 
'A' + ((c) - 'a') : (c)) #else -# define ISLOWER(c) \ - (('a' <= (c) && (c) <= 'i') \ +# define ISLOWER(c) (('a' <= (c) && (c) <= 'i') \ || ('j' <= (c) && (c) <= 'r') \ || ('s' <= (c) && (c) <= 'z')) # define TOUPPER(c) (ISLOWER(c) ? ((c) | 0x40) : (c)) @@ -3606,12 +3224,11 @@ else echo "$as_me: program exited with status $ac_status" >&5 echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ( exit $ac_status ) ac_cv_header_stdc=no fi -rm -f core *.core gmon.out bb.out conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext +rm -f core core.* *.core conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext fi fi fi @@ -3645,31 +3262,18 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include <$ac_header> _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -3679,11 +3283,10 @@ eval "$as_ac_Header=yes" else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 eval "$as_ac_Header=no" fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6 @@ -3714,30 +3317,17 @@ echo "$as_me:$LINENO: checking $ac_header usability" >&5 echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include <$ac_header> _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -3747,11 +3337,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -3759,24 +3348,20 @@ echo "$as_me:$LINENO: checking $ac_header presence" >&5 echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include <$ac_header> _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -3787,8 +3372,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -3796,43 +3380,26 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" 
>&5 echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5 echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" >&5 echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. ## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for $ac_header" >&5 echo $ECHO_N "checking for $ac_header... 
$ECHO_C" >&6 if eval "test \"\${$as_ac_Header+set}\" = set"; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - eval "$as_ac_Header=\$ac_header_preproc" + eval "$as_ac_Header=$ac_header_preproc" fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6 @@ -3864,30 +3431,17 @@ echo "$as_me:$LINENO: checking $ac_header usability" >&5 echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include <$ac_header> _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -3897,11 +3451,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -3909,24 +3462,20 @@ echo "$as_me:$LINENO: checking $ac_header presence" >&5 echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. 
*/ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include <$ac_header> _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -3937,8 +3486,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -3946,43 +3494,26 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&5 echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5 echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" 
>&5 echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. ## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for $ac_header" >&5 echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6 if eval "test \"\${$as_ac_Header+set}\" = set"; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - eval "$as_ac_Header=\$ac_header_preproc" + eval "$as_ac_Header=$ac_header_preproc" fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6 @@ -4014,30 +3545,17 @@ echo "$as_me:$LINENO: checking $ac_header usability" >&5 echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include <$ac_header> _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -4047,11 +3565,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -4059,24 +3576,20 @@ echo "$as_me:$LINENO: checking $ac_header presence" >&5 echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include <$ac_header> _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -4087,8 +3600,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -4096,43 +3608,26 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&5 echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5 echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" >&5 echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" 
>&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. ## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for $ac_header" >&5 echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6 if eval "test \"\${$as_ac_Header+set}\" = set"; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - eval "$as_ac_Header=\$ac_header_preproc" + eval "$as_ac_Header=$ac_header_preproc" fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6 @@ -4164,30 +3659,17 @@ echo "$as_me:$LINENO: checking $ac_header usability" >&5 echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include <$ac_header> _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -4197,11 +3679,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -4209,24 +3690,20 @@ echo "$as_me:$LINENO: checking $ac_header presence" >&5 echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include <$ac_header> _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -4237,8 +3714,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -4246,43 +3722,26 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&5 echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5 echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" >&5 echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" 
>&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. ## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for $ac_header" >&5 echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6 if eval "test \"\${$as_ac_Header+set}\" = set"; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - eval "$as_ac_Header=\$ac_header_preproc" + eval "$as_ac_Header=$ac_header_preproc" fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6 @@ -4314,30 +3773,17 @@ echo "$as_me:$LINENO: checking $ac_header usability" >&5 echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include <$ac_header> _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -4347,11 +3793,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -4359,24 +3804,20 @@ echo "$as_me:$LINENO: checking $ac_header presence" >&5 echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include <$ac_header> _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -4387,8 +3828,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -4396,43 +3836,26 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&5 echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5 echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" >&5 echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" 
>&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. ## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for $ac_header" >&5 echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6 if eval "test \"\${$as_ac_Header+set}\" = set"; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - eval "$as_ac_Header=\$ac_header_preproc" + eval "$as_ac_Header=$ac_header_preproc" fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6 @@ -4464,30 +3887,17 @@ echo "$as_me:$LINENO: checking $ac_header usability" >&5 echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include <$ac_header> _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -4497,11 +3907,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -4509,24 +3918,20 @@ echo "$as_me:$LINENO: checking $ac_header presence" >&5 echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include <$ac_header> _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -4537,8 +3942,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -4546,43 +3950,26 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&5 echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5 echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" >&5 echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" 
>&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. ## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for $ac_header" >&5 echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6 if eval "test \"\${$as_ac_Header+set}\" = set"; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - eval "$as_ac_Header=\$ac_header_preproc" + eval "$as_ac_Header=$ac_header_preproc" fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6 @@ -4608,72 +3995,48 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -/* Define $ac_func to an innocuous variant, in case declares $ac_func. - For example, HP-UX 11i declares gettimeofday. 
*/ -#define $ac_func innocuous_$ac_func - +#line $LINENO "configure" +#include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, - which can conflict with char $ac_func (); below. - Prefer to if __STDC__ is defined, since - exists even on freestanding compilers. */ - -#ifdef __STDC__ -# include -#else + which can conflict with char $ac_func (); below. */ # include -#endif - -#undef $ac_func - /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" -{ #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func (); +char (*f) (); + +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif +int +main () +{ /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else -char (*f) () = $ac_func; -#endif -#ifdef __cplusplus -} +f = $ac_func; #endif -int -main () -{ -return f != $ac_func; ; return 0; } _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -4683,12 +4046,10 @@ eval "$as_ac_var=yes" else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 eval "$as_ac_var=no" fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_var'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_var'}'`" >&6 @@ -4710,72 +4071,48 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -/* Define $ac_func to an innocuous variant, in case declares $ac_func. - For example, HP-UX 11i declares gettimeofday. */ -#define $ac_func innocuous_$ac_func - +#line $LINENO "configure" +#include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, - which can conflict with char $ac_func (); below. - Prefer to if __STDC__ is defined, since - exists even on freestanding compilers. */ - -#ifdef __STDC__ -# include -#else + which can conflict with char $ac_func (); below. */ # include -#endif - -#undef $ac_func - /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" -{ #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func (); +char (*f) (); + +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif +int +main () +{ /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. 
*/ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else -char (*f) () = $ac_func; -#endif -#ifdef __cplusplus -} +f = $ac_func; #endif -int -main () -{ -return f != $ac_func; ; return 0; } _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -4785,12 +4122,10 @@ eval "$as_ac_var=yes" else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 eval "$as_ac_var=no" fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_var'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_var'}'`" >&6 @@ -4812,72 +4147,48 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -/* Define $ac_func to an innocuous variant, in case declares $ac_func. - For example, HP-UX 11i declares gettimeofday. */ -#define $ac_func innocuous_$ac_func - +#line $LINENO "configure" +#include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, - which can conflict with char $ac_func (); below. - Prefer to if __STDC__ is defined, since - exists even on freestanding compilers. 
*/ - -#ifdef __STDC__ -# include -#else + which can conflict with char $ac_func (); below. */ # include -#endif - -#undef $ac_func - /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" -{ #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func (); +char (*f) (); + +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif +int +main () +{ /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else -char (*f) () = $ac_func; -#endif -#ifdef __cplusplus -} +f = $ac_func; #endif -int -main () -{ -return f != $ac_func; ; return 0; } _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -4887,12 +4198,10 @@ eval "$as_ac_var=yes" else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 eval "$as_ac_var=no" fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_var'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_var'}'`" >&6 @@ -4914,72 +4223,48 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -/* Define $ac_func to an innocuous variant, in case declares $ac_func. - For example, HP-UX 11i declares gettimeofday. */ -#define $ac_func innocuous_$ac_func - +#line $LINENO "configure" +#include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, - which can conflict with char $ac_func (); below. - Prefer to if __STDC__ is defined, since - exists even on freestanding compilers. */ - -#ifdef __STDC__ -# include -#else + which can conflict with char $ac_func (); below. */ # include -#endif - -#undef $ac_func - /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" -{ #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func (); +char (*f) (); + +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif +int +main () +{ /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. 
*/ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else -char (*f) () = $ac_func; -#endif -#ifdef __cplusplus -} +f = $ac_func; #endif -int -main () -{ -return f != $ac_func; ; return 0; } _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -4989,12 +4274,10 @@ eval "$as_ac_var=yes" else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 eval "$as_ac_var=no" fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_var'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_var'}'`" >&6 @@ -5016,72 +4299,48 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -/* Define $ac_func to an innocuous variant, in case declares $ac_func. - For example, HP-UX 11i declares gettimeofday. */ -#define $ac_func innocuous_$ac_func - +#line $LINENO "configure" +#include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, - which can conflict with char $ac_func (); below. - Prefer to if __STDC__ is defined, since - exists even on freestanding compilers. 
*/ - -#ifdef __STDC__ -# include -#else + which can conflict with char $ac_func (); below. */ # include -#endif - -#undef $ac_func - /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" -{ #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func (); +char (*f) (); + +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif +int +main () +{ /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else -char (*f) () = $ac_func; -#endif -#ifdef __cplusplus -} +f = $ac_func; #endif -int -main () -{ -return f != $ac_func; ; return 0; } _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5091,12 +4350,10 @@ eval "$as_ac_var=yes" else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 eval "$as_ac_var=no" fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_var'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_var'}'`" >&6 @@ -5118,72 +4375,48 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -/* Define $ac_func to an innocuous variant, in case declares $ac_func. - For example, HP-UX 11i declares gettimeofday. */ -#define $ac_func innocuous_$ac_func - +#line $LINENO "configure" +#include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, - which can conflict with char $ac_func (); below. - Prefer to if __STDC__ is defined, since - exists even on freestanding compilers. */ - -#ifdef __STDC__ -# include -#else + which can conflict with char $ac_func (); below. */ # include -#endif - -#undef $ac_func - /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" -{ #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func (); +char (*f) (); + +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif +int +main () +{ /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. 
*/ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else -char (*f) () = $ac_func; -#endif -#ifdef __cplusplus -} +f = $ac_func; #endif -int -main () -{ -return f != $ac_func; ; return 0; } _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5193,12 +4426,10 @@ eval "$as_ac_var=yes" else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 eval "$as_ac_var=no" fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_var'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_var'}'`" >&6 @@ -5220,72 +4451,48 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -/* Define $ac_func to an innocuous variant, in case declares $ac_func. - For example, HP-UX 11i declares gettimeofday. */ -#define $ac_func innocuous_$ac_func - +#line $LINENO "configure" +#include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, - which can conflict with char $ac_func (); below. - Prefer to if __STDC__ is defined, since - exists even on freestanding compilers. 
*/ - -#ifdef __STDC__ -# include -#else + which can conflict with char $ac_func (); below. */ # include -#endif - -#undef $ac_func - /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" -{ #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func (); +char (*f) (); + +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif +int +main () +{ /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else -char (*f) () = $ac_func; -#endif -#ifdef __cplusplus -} +f = $ac_func; #endif -int -main () -{ -return f != $ac_func; ; return 0; } _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5295,12 +4502,10 @@ eval "$as_ac_var=yes" else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 eval "$as_ac_var=no" fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_var'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_var'}'`" >&6 @@ -5322,72 +4527,48 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -/* Define $ac_func to an innocuous variant, in case declares $ac_func. - For example, HP-UX 11i declares gettimeofday. */ -#define $ac_func innocuous_$ac_func - +#line $LINENO "configure" +#include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, - which can conflict with char $ac_func (); below. - Prefer to if __STDC__ is defined, since - exists even on freestanding compilers. */ - -#ifdef __STDC__ -# include -#else + which can conflict with char $ac_func (); below. */ # include -#endif - -#undef $ac_func - /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" -{ #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func (); +char (*f) (); + +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif +int +main () +{ /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. 
*/ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else -char (*f) () = $ac_func; -#endif -#ifdef __cplusplus -} +f = $ac_func; #endif -int -main () -{ -return f != $ac_func; ; return 0; } _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5397,12 +4578,10 @@ eval "$as_ac_var=yes" else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 eval "$as_ac_var=no" fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_var'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_var'}'`" >&6 @@ -5424,72 +4603,48 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ -/* Define $ac_func to an innocuous variant, in case declares $ac_func. - For example, HP-UX 11i declares gettimeofday. */ -#define $ac_func innocuous_$ac_func - +#line $LINENO "configure" +#include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, - which can conflict with char $ac_func (); below. - Prefer to if __STDC__ is defined, since - exists even on freestanding compilers. 
*/ - -#ifdef __STDC__ -# include -#else + which can conflict with char $ac_func (); below. */ # include -#endif - -#undef $ac_func - /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" -{ #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func (); +char (*f) (); + +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif +int +main () +{ /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else -char (*f) () = $ac_func; -#endif -#ifdef __cplusplus -} +f = $ac_func; #endif -int -main () -{ -return f != $ac_func; ; return 0; } _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5499,12 +4654,10 @@ eval "$as_ac_var=yes" else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 eval "$as_ac_var=no" fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_var'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_var'}'`" >&6 @@ -5523,12 +4676,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -5542,20 +4698,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5565,11 +4711,10 @@ ac_cv_type_int32_t=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_int32_t=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_int32_t" >&5 echo "${ECHO_T}$ac_cv_type_int32_t" >&6 @@ -5587,12 +4732,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -5606,20 +4754,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5629,11 +4767,10 @@ ac_cv_type_uint32_t=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_uint32_t=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_uint32_t" >&5 echo "${ECHO_T}$ac_cv_type_uint32_t" >&6 @@ -5651,12 +4788,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -5670,20 +4810,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5693,11 +4823,10 @@ ac_cv_type_u_int32_t=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_u_int32_t=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_u_int32_t" >&5 echo "${ECHO_T}$ac_cv_type_u_int32_t" >&6 @@ -5716,12 +4845,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -5735,20 +4867,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5758,11 +4880,10 @@ ac_cv_type_int64_t=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_int64_t=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_int64_t" >&5 echo "${ECHO_T}$ac_cv_type_int64_t" >&6 @@ -5780,12 +4901,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -5799,20 +4923,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5822,11 +4936,10 @@ ac_cv_type_uint64_t=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_uint64_t=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_uint64_t" >&5 echo "${ECHO_T}$ac_cv_type_uint64_t" >&6 @@ -5844,12 +4957,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -5863,20 +4979,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5886,11 +4992,10 @@ ac_cv_type_u_int64_t=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_u_int64_t=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_u_int64_t" >&5 echo "${ECHO_T}$ac_cv_type_u_int64_t" >&6 @@ -5909,12 +5014,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -5928,20 +5036,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -5951,11 +5049,10 @@ ac_cv_type_ssize_t=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_ssize_t=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_ssize_t" >&5 echo "${ECHO_T}$ac_cv_type_ssize_t" >&6 @@ -5974,12 +5071,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -5993,20 +5093,10 @@ _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -6016,12 +5106,10 @@ slonac_cv_func_posix_signals=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 slonac_cv_func_posix_signals=no fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $slonac_cv_func_posix_signals" >&5 echo "${ECHO_T}$slonac_cv_func_posix_signals" >&6 @@ -6376,11 +5464,8 @@ ac_check_lib_save_LIBS=$LIBS LIBS="-lpq $LIBS" cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus @@ -6389,6 +5474,12 @@ /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char PQunescapeBytea (); +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -6399,20 +5490,10 @@ _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -6422,12 +5503,10 @@ ac_cv_lib_pq_PQunescapeBytea=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_lib_pq_PQunescapeBytea=no fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi echo "$as_me:$LINENO: result: $ac_cv_lib_pq_PQunescapeBytea" >&5 @@ -6468,30 +5547,17 @@ echo "$as_me:$LINENO: checking libpq-fe.h usability" >&5 echo $ECHO_N "checking libpq-fe.h usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -6501,11 +5567,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -6513,24 +5578,20 @@ echo "$as_me:$LINENO: checking libpq-fe.h presence" >&5 echo $ECHO_N "checking libpq-fe.h presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -6541,8 +5602,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -6550,36 +5610,19 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: libpq-fe.h: accepted by the compiler, rejected by the preprocessor!" >&5 echo "$as_me: WARNING: libpq-fe.h: accepted by the compiler, rejected by the preprocessor!" 
>&2;} - { echo "$as_me:$LINENO: WARNING: libpq-fe.h: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: libpq-fe.h: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: libpq-fe.h: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: libpq-fe.h: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: libpq-fe.h: present but cannot be compiled" >&5 echo "$as_me: WARNING: libpq-fe.h: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: libpq-fe.h: check for missing prerequisite headers?" >&5 echo "$as_me: WARNING: libpq-fe.h: check for missing prerequisite headers?" >&2;} - { echo "$as_me:$LINENO: WARNING: libpq-fe.h: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: libpq-fe.h: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: libpq-fe.h: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: libpq-fe.h: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: libpq-fe.h: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: libpq-fe.h: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: libpq-fe.h: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: libpq-fe.h: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. ## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: libpq-fe.h: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for libpq-fe.h" >&5 echo $ECHO_N "checking for libpq-fe.h... 
$ECHO_C" >&6 @@ -6636,30 +5679,17 @@ echo "$as_me:$LINENO: checking postgres.h usability" >&5 echo $ECHO_N "checking postgres.h usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -6669,11 +5699,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -6681,24 +5710,20 @@ echo "$as_me:$LINENO: checking postgres.h presence" >&5 echo $ECHO_N "checking postgres.h presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? 
- grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -6709,8 +5734,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -6718,36 +5742,19 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: postgres.h: accepted by the compiler, rejected by the preprocessor!" >&5 echo "$as_me: WARNING: postgres.h: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: postgres.h: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: postgres.h: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: postgres.h: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: postgres.h: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: postgres.h: present but cannot be compiled" >&5 echo "$as_me: WARNING: postgres.h: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: postgres.h: check for missing prerequisite headers?" >&5 echo "$as_me: WARNING: postgres.h: check for missing prerequisite headers?" 
>&2;} - { echo "$as_me:$LINENO: WARNING: postgres.h: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: postgres.h: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: postgres.h: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: postgres.h: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: postgres.h: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: postgres.h: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: postgres.h: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: postgres.h: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. ## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: postgres.h: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for postgres.h" >&5 echo $ECHO_N "checking for postgres.h... $ECHO_C" >&6 @@ -6771,31 +5778,18 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include "postgres.h" #include _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -6805,11 +5799,10 @@ ac_cv_header_utils_typcache_h=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_header_utils_typcache_h=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_header_utils_typcache_h" >&5 echo "${ECHO_T}$ac_cv_header_utils_typcache_h" >&6 @@ -6951,11 +5944,8 @@ ac_check_lib_save_LIBS=$LIBS LIBS="-lpq $LIBS" cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus @@ -6964,6 +5954,12 @@ /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char PQputCopyData (); +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -6974,20 +5970,10 @@ _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -6997,12 +5983,10 @@ ac_cv_lib_pq_PQputCopyData=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_lib_pq_PQputCopyData=no fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi echo "$as_me:$LINENO: result: $ac_cv_lib_pq_PQputCopyData" >&5 @@ -7028,11 +6012,8 @@ ac_check_lib_save_LIBS=$LIBS LIBS="-lpq $LIBS" cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus @@ -7041,6 +6022,12 @@ /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char PQsetNoticeReceiver (); +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7051,20 +6038,10 @@ _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7074,12 +6051,10 @@ ac_cv_lib_pq_PQsetNoticeReceiver=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_lib_pq_PQsetNoticeReceiver=no fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi echo "$as_me:$LINENO: result: $ac_cv_lib_pq_PQsetNoticeReceiver" >&5 @@ -7105,11 +6080,8 @@ ac_check_lib_save_LIBS=$LIBS LIBS="-lpq $LIBS" cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus @@ -7118,6 +6090,12 @@ /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char PQfreemem (); +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7128,20 +6106,10 @@ _ACEOF rm -f conftest.$ac_objext conftest$ac_exeext if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5 - (eval $ac_link) 2>conftest.er1 + (eval $ac_link) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest$ac_exeext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7151,12 +6119,10 @@ ac_cv_lib_pq_PQfreemem=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_lib_pq_PQfreemem=no fi -rm -f conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext +rm -f conftest.$ac_objext conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi echo "$as_me:$LINENO: result: $ac_cv_lib_pq_PQfreemem" >&5 @@ -7178,13 +6144,16 @@ echo $ECHO_N "checking for typenameTypeId... $ECHO_C" >&6 if test -z "$ac_cv_typenameTypeId_args"; then cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include "postgres.h" #include "parser/parse_type.h" +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7195,20 +6164,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7218,20 +6177,22 @@ ac_cv_typenameTypeId_args=2 else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi if test -z "$ac_cv_typenameTypeId_args" ; then cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include "postgres.h" #include "parser/parse_type.h" +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7242,20 +6203,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7265,10 +6216,9 @@ ac_cv_typenameTypeId_args=1 else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi if test -z "$ac_cv_typenameTypeId_args"; then echo "$as_me:$LINENO: result: no" >&5 @@ -7289,22 +6239,51 @@ echo "${ECHO_T}yes, and it takes $ac_cv_typenameTypeId_args arguments" >&6 fi +echo "$as_me:$LINENO: checking for standard_conforming_strings" >&5 +echo $ECHO_N "checking for standard_conforming_strings... $ECHO_C" >&6 +if test -z "$ac_cv_standard_conforming_strings"; then + cat >conftest.$ac_ext <<_ACEOF +#line $LINENO "configure" +#include "confdefs.h" +#include + +_ACEOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "standard_conforming_strings" >/dev/null 2>&1; then + echo "$as_me:$LINENO: result: yes" >&5 +echo "${ECHO_T}yes" >&6 + cat >>confdefs.h <<\_ACEOF +#define HAVE_STANDARDCONFORMINGSTRINGS 1 +_ACEOF + +else + echo "$as_me:$LINENO: result: no" >&5 +echo "${ECHO_T}no" >&6 + +fi +rm -f conftest* + +fi + echo "$as_me:$LINENO: checking whether GetTopTransactionId is declared" >&5 echo $ECHO_N "checking whether GetTopTransactionId is declared... $ECHO_C" >&6 if test "${ac_cv_have_decl_GetTopTransactionId+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" #include "postgres.h" #include "access/xact.h" +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7318,20 +6297,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7341,11 +6310,10 @@ ac_cv_have_decl_GetTopTransactionId=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_have_decl_GetTopTransactionId=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_have_decl_GetTopTransactionId" >&5 echo "${ECHO_T}$ac_cv_have_decl_GetTopTransactionId" >&6 @@ -7483,30 +6451,17 @@ echo "$as_me:$LINENO: checking $ac_header usability" >&5 echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include <$ac_header> _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? 
- grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7516,11 +6471,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -7528,24 +6482,20 @@ echo "$as_me:$LINENO: checking $ac_header presence" >&5 echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include <$ac_header> _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -7556,8 +6506,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -7565,43 +6514,26 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? 
What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&5 echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5 echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" >&5 echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. 
## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for $ac_header" >&5 echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6 if eval "test \"\${$as_ac_Header+set}\" = set"; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - eval "$as_ac_Header=\$ac_header_preproc" + eval "$as_ac_Header=$ac_header_preproc" fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6 @@ -7622,12 +6554,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7641,20 +6576,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7664,11 +6589,10 @@ ac_cv_type_short=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_short=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_short" >&5 echo "${ECHO_T}$ac_cv_type_short" >&6 @@ -7686,12 +6610,15 @@ if test "$cross_compiling" = yes; then # Depending upon the size, compute the lo and hi bounds. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7704,20 +6631,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7727,12 +6644,15 @@ ac_lo=0 ac_mid=0 while :; do cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7745,20 +6665,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7768,8 +6678,7 @@ ac_hi=$ac_mid; break else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo=`expr $ac_mid + 1` if test $ac_lo -le $ac_mid; then ac_lo= ac_hi= @@ -7777,19 +6686,21 @@ fi ac_mid=`expr 2 '*' $ac_mid + 1` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7802,20 +6713,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? 
- grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7825,12 +6726,15 @@ ac_hi=-1 ac_mid=-1 while :; do cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7843,20 +6747,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7866,8 +6760,7 @@ ac_lo=$ac_mid; break else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_hi=`expr '(' $ac_mid ')' - 1` if test $ac_mid -le $ac_hi; then ac_lo= ac_hi= @@ -7875,27 +6768,29 @@ fi ac_mid=`expr 2 '*' $ac_mid` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo= ac_hi= fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext # Binary search between lo and hi bounds. while test "x$ac_lo" != "x$ac_hi"; do ac_mid=`expr '(' $ac_hi - $ac_lo ')' / 2 + $ac_lo` cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -7908,20 +6803,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -7931,39 +6816,37 @@ ac_hi=$ac_mid else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo=`expr '(' $ac_mid ')' + 1` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done case $ac_lo in ?*) ac_cv_sizeof_short=$ac_lo;; -'') { { echo "$as_me:$LINENO: error: cannot compute sizeof (short), 77 -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot compute sizeof (short), 77 -See \`config.log' for more details." >&2;} +'') { { echo "$as_me:$LINENO: error: cannot compute sizeof (short), 77" >&5 +echo "$as_me: error: cannot compute sizeof (short), 77" >&2;} { (exit 1); exit 1; }; } ;; esac else if test "$cross_compiling" = yes; then - { { echo "$as_me:$LINENO: error: cannot run test program while cross compiling -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot run test program while cross compiling -See \`config.log' for more details." >&2;} + { { echo "$as_me:$LINENO: error: cannot run test program while cross compiling" >&5 +echo "$as_me: error: cannot run test program while cross compiling" >&2;} { (exit 1); exit 1; }; } else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default long longval () { return (long) (sizeof (short)); } unsigned long ulongval () { return (long) (sizeof (short)); } #include #include +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8006,16 +6889,13 @@ else echo "$as_me: program exited with status $ac_status" >&5 echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ( exit $ac_status ) -{ { echo "$as_me:$LINENO: error: cannot compute sizeof (short), 77 -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot compute sizeof (short), 77 -See \`config.log' for more details." >&2;} +{ { echo "$as_me:$LINENO: error: cannot compute sizeof (short), 77" >&5 +echo "$as_me: error: cannot compute sizeof (short), 77" >&2;} { (exit 1); exit 1; }; } fi -rm -f core *.core gmon.out bb.out conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext +rm -f core core.* *.core conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext fi fi rm -f conftest.val @@ -8036,12 +6916,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8055,20 +6938,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! 
-s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8078,11 +6951,10 @@ ac_cv_type_int=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_int=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_int" >&5 echo "${ECHO_T}$ac_cv_type_int" >&6 @@ -8100,12 +6972,15 @@ if test "$cross_compiling" = yes; then # Depending upon the size, compute the lo and hi bounds. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8118,20 +6993,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8141,12 +7006,15 @@ ac_lo=0 ac_mid=0 while :; do cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8159,20 +7027,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8182,8 +7040,7 @@ ac_hi=$ac_mid; break else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo=`expr $ac_mid + 1` if test $ac_lo -le $ac_mid; then ac_lo= ac_hi= @@ -8191,19 +7048,21 @@ fi ac_mid=`expr 2 '*' $ac_mid + 1` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8216,20 +7075,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? 
- grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8239,12 +7088,15 @@ ac_hi=-1 ac_mid=-1 while :; do cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8257,20 +7109,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8280,8 +7122,7 @@ ac_lo=$ac_mid; break else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_hi=`expr '(' $ac_mid ')' - 1` if test $ac_mid -le $ac_hi; then ac_lo= ac_hi= @@ -8289,27 +7130,29 @@ fi ac_mid=`expr 2 '*' $ac_mid` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo= ac_hi= fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext # Binary search between lo and hi bounds. while test "x$ac_lo" != "x$ac_hi"; do ac_mid=`expr '(' $ac_hi - $ac_lo ')' / 2 + $ac_lo` cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8322,20 +7165,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8345,39 +7178,37 @@ ac_hi=$ac_mid else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo=`expr '(' $ac_mid ')' + 1` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done case $ac_lo in ?*) ac_cv_sizeof_int=$ac_lo;; -'') { { echo "$as_me:$LINENO: error: cannot compute sizeof (int), 77 -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot compute sizeof (int), 77 -See \`config.log' for more details." >&2;} +'') { { echo "$as_me:$LINENO: error: cannot compute sizeof (int), 77" >&5 +echo "$as_me: error: cannot compute sizeof (int), 77" >&2;} { (exit 1); exit 1; }; } ;; esac else if test "$cross_compiling" = yes; then - { { echo "$as_me:$LINENO: error: cannot run test program while cross compiling -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot run test program while cross compiling -See \`config.log' for more details." >&2;} + { { echo "$as_me:$LINENO: error: cannot run test program while cross compiling" >&5 +echo "$as_me: error: cannot run test program while cross compiling" >&2;} { (exit 1); exit 1; }; } else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default long longval () { return (long) (sizeof (int)); } unsigned long ulongval () { return (long) (sizeof (int)); } #include #include +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8420,16 +7251,13 @@ else echo "$as_me: program exited with status $ac_status" >&5 echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ( exit $ac_status ) -{ { echo "$as_me:$LINENO: error: cannot compute sizeof (int), 77 -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot compute sizeof (int), 77 -See \`config.log' for more details." >&2;} +{ { echo "$as_me:$LINENO: error: cannot compute sizeof (int), 77" >&5 +echo "$as_me: error: cannot compute sizeof (int), 77" >&2;} { (exit 1); exit 1; }; } fi -rm -f core *.core gmon.out bb.out conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext +rm -f core core.* *.core conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext fi fi rm -f conftest.val @@ -8450,12 +7278,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8469,20 +7300,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! 
-s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8492,11 +7313,10 @@ ac_cv_type_long=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_long=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_long" >&5 echo "${ECHO_T}$ac_cv_type_long" >&6 @@ -8514,12 +7334,15 @@ if test "$cross_compiling" = yes; then # Depending upon the size, compute the lo and hi bounds. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8532,20 +7355,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8555,12 +7368,15 @@ ac_lo=0 ac_mid=0 while :; do cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8573,20 +7389,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8596,8 +7402,7 @@ ac_hi=$ac_mid; break else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo=`expr $ac_mid + 1` if test $ac_lo -le $ac_mid; then ac_lo= ac_hi= @@ -8605,19 +7410,21 @@ fi ac_mid=`expr 2 '*' $ac_mid + 1` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8630,20 +7437,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? 
- grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8653,12 +7450,15 @@ ac_hi=-1 ac_mid=-1 while :; do cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8671,20 +7471,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8694,8 +7484,7 @@ ac_lo=$ac_mid; break else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_hi=`expr '(' $ac_mid ')' - 1` if test $ac_mid -le $ac_hi; then ac_lo= ac_hi= @@ -8703,27 +7492,29 @@ fi ac_mid=`expr 2 '*' $ac_mid` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo= ac_hi= fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext # Binary search between lo and hi bounds. while test "x$ac_lo" != "x$ac_hi"; do ac_mid=`expr '(' $ac_hi - $ac_lo ')' / 2 + $ac_lo` cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8736,20 +7527,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8759,39 +7540,37 @@ ac_hi=$ac_mid else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo=`expr '(' $ac_mid ')' + 1` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done case $ac_lo in ?*) ac_cv_sizeof_long=$ac_lo;; -'') { { echo "$as_me:$LINENO: error: cannot compute sizeof (long), 77 -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot compute sizeof (long), 77 -See \`config.log' for more details." >&2;} +'') { { echo "$as_me:$LINENO: error: cannot compute sizeof (long), 77" >&5 +echo "$as_me: error: cannot compute sizeof (long), 77" >&2;} { (exit 1); exit 1; }; } ;; esac else if test "$cross_compiling" = yes; then - { { echo "$as_me:$LINENO: error: cannot run test program while cross compiling -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot run test program while cross compiling -See \`config.log' for more details." >&2;} + { { echo "$as_me:$LINENO: error: cannot run test program while cross compiling" >&5 +echo "$as_me: error: cannot run test program while cross compiling" >&2;} { (exit 1); exit 1; }; } else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default long longval () { return (long) (sizeof (long)); } unsigned long ulongval () { return (long) (sizeof (long)); } #include #include +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8834,16 +7613,13 @@ else echo "$as_me: program exited with status $ac_status" >&5 echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ( exit $ac_status ) -{ { echo "$as_me:$LINENO: error: cannot compute sizeof (long), 77 -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot compute sizeof (long), 77 -See \`config.log' for more details." >&2;} +{ { echo "$as_me:$LINENO: error: cannot compute sizeof (long), 77" >&5 +echo "$as_me: error: cannot compute sizeof (long), 77" >&2;} { (exit 1); exit 1; }; } fi -rm -f core *.core gmon.out bb.out conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext +rm -f core core.* *.core conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext fi fi rm -f conftest.val @@ -8864,12 +7640,15 @@ echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8883,20 +7662,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! 
-s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8906,11 +7675,10 @@ ac_cv_type_long_long=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_cv_type_long_long=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi echo "$as_me:$LINENO: result: $ac_cv_type_long_long" >&5 echo "${ECHO_T}$ac_cv_type_long_long" >&6 @@ -8928,12 +7696,15 @@ if test "$cross_compiling" = yes; then # Depending upon the size, compute the lo and hi bounds. cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8946,20 +7717,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -8969,12 +7730,15 @@ ac_lo=0 ac_mid=0 while :; do cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. 
*/ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -8987,20 +7751,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -9010,8 +7764,7 @@ ac_hi=$ac_mid; break else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo=`expr $ac_mid + 1` if test $ac_lo -le $ac_mid; then ac_lo= ac_hi= @@ -9019,19 +7772,21 @@ fi ac_mid=`expr 2 '*' $ac_mid + 1` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -9044,20 +7799,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -9067,12 +7812,15 @@ ac_hi=-1 ac_mid=-1 while :; do cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -9085,20 +7833,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -9108,8 +7846,7 @@ ac_lo=$ac_mid; break else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_hi=`expr '(' $ac_mid ')' - 1` if test $ac_mid -le $ac_hi; then ac_lo= ac_hi= @@ -9117,27 +7854,29 @@ fi ac_mid=`expr 2 '*' $ac_mid` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo= ac_hi= fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext # Binary search between lo and hi bounds. while test "x$ac_lo" != "x$ac_hi"; do ac_mid=`expr '(' $ac_hi - $ac_lo ')' / 2 + $ac_lo` cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -9150,20 +7889,10 @@ _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -9173,39 +7902,37 @@ ac_hi=$ac_mid else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_lo=`expr '(' $ac_mid ')' + 1` fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext done case $ac_lo in ?*) ac_cv_sizeof_long_long=$ac_lo;; -'') { { echo "$as_me:$LINENO: error: cannot compute sizeof (long long), 77 -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot compute sizeof (long long), 77 -See \`config.log' for more details." >&2;} +'') { { echo "$as_me:$LINENO: error: cannot compute sizeof (long long), 77" >&5 +echo "$as_me: error: cannot compute sizeof (long long), 77" >&2;} { (exit 1); exit 1; }; } ;; esac else if test "$cross_compiling" = yes; then - { { echo "$as_me:$LINENO: error: cannot run test program while cross compiling -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot run test program while cross compiling -See \`config.log' for more details." >&2;} + { { echo "$as_me:$LINENO: error: cannot run test program while cross compiling" >&5 +echo "$as_me: error: cannot run test program while cross compiling" >&2;} { (exit 1); exit 1; }; } else cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. 
*/ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default long longval () { return (long) (sizeof (long long)); } unsigned long ulongval () { return (long) (sizeof (long long)); } #include #include +#ifdef F77_DUMMY_MAIN +# ifdef __cplusplus + extern "C" +# endif + int F77_DUMMY_MAIN() { return 1; } +#endif int main () { @@ -9248,16 +7975,13 @@ else echo "$as_me: program exited with status $ac_status" >&5 echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ( exit $ac_status ) -{ { echo "$as_me:$LINENO: error: cannot compute sizeof (long long), 77 -See \`config.log' for more details." >&5 -echo "$as_me: error: cannot compute sizeof (long long), 77 -See \`config.log' for more details." >&2;} +{ { echo "$as_me:$LINENO: error: cannot compute sizeof (long long), 77" >&5 +echo "$as_me: error: cannot compute sizeof (long long), 77" >&2;} { (exit 1); exit 1; }; } fi -rm -f core *.core gmon.out bb.out conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext +rm -f core core.* *.core conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext fi fi rm -f conftest.val @@ -9289,30 +8013,17 @@ echo "$as_me:$LINENO: checking $ac_header usability" >&5 echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include <$ac_header> _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? 
- echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -9322,11 +8033,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -9334,24 +8044,20 @@ echo "$as_me:$LINENO: checking $ac_header presence" >&5 echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include <$ac_header> _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -9362,8 +8068,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -9371,43 +8076,26 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" 
>&5 echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5 echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" >&5 echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. ## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for $ac_header" >&5 echo $ECHO_N "checking for $ac_header... 
$ECHO_C" >&6 if eval "test \"\${$as_ac_Header+set}\" = set"; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - eval "$as_ac_Header=\$ac_header_preproc" + eval "$as_ac_Header=$ac_header_preproc" fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6 @@ -9440,30 +8128,17 @@ echo "$as_me:$LINENO: checking $ac_header usability" >&5 echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. */ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" $ac_includes_default #include <$ac_header> _ACEOF rm -f conftest.$ac_objext if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5 - (eval $ac_compile) 2>conftest.er1 + (eval $ac_compile) 2>&5 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err - rm -f conftest.er1 - cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } && - { ac_try='test -z "$ac_c_werror_flag" - || test ! -s conftest.err' - { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 - (eval $ac_try) 2>&5 - ac_status=$? - echo "$as_me:$LINENO: \$? = $ac_status" >&5 - (exit $ac_status); }; } && { ac_try='test -s conftest.$ac_objext' { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5 (eval $ac_try) 2>&5 @@ -9473,11 +8148,10 @@ ac_header_compiler=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - +cat conftest.$ac_ext >&5 ac_header_compiler=no fi -rm -f conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f conftest.$ac_objext conftest.$ac_ext echo "$as_me:$LINENO: result: $ac_header_compiler" >&5 echo "${ECHO_T}$ac_header_compiler" >&6 @@ -9485,24 +8159,20 @@ echo "$as_me:$LINENO: checking $ac_header presence" >&5 echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6 cat >conftest.$ac_ext <<_ACEOF -/* confdefs.h. 
*/ -_ACEOF -cat confdefs.h >>conftest.$ac_ext -cat >>conftest.$ac_ext <<_ACEOF -/* end confdefs.h. */ +#line $LINENO "configure" +#include "confdefs.h" #include <$ac_header> _ACEOF if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5 (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1 ac_status=$? - grep -v '^ *+' conftest.er1 >conftest.err + egrep -v '^ *\+' conftest.er1 >conftest.err rm -f conftest.er1 cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } >/dev/null; then if test -s conftest.err; then ac_cpp_err=$ac_c_preproc_warn_flag - ac_cpp_err=$ac_cpp_err$ac_c_werror_flag else ac_cpp_err= fi @@ -9513,8 +8183,7 @@ ac_header_preproc=yes else echo "$as_me: failed program was:" >&5 -sed 's/^/| /' conftest.$ac_ext >&5 - + cat conftest.$ac_ext >&5 ac_header_preproc=no fi rm -f conftest.err conftest.$ac_ext @@ -9522,43 +8191,26 @@ echo "${ECHO_T}$ac_header_preproc" >&6 # So? What about this header? -case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in - yes:no: ) +case $ac_header_compiler:$ac_header_preproc in + yes:no ) { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&5 echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;} - ac_header_preproc=yes - ;; - no:yes:* ) + { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; + no:yes ) { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5 echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" 
>&5 echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5 -echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5 -echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;} { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5 -echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;} - { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5 -echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;} - ( - cat <<\_ASBOX -## --------------------------------------------------- ## -## Report this to the postgresql-slony1-engine lists. ## -## --------------------------------------------------- ## -_ASBOX - ) | - sed "s/^/$as_me: WARNING: /" >&2 - ;; +echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;};; esac echo "$as_me:$LINENO: checking for $ac_header" >&5 echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6 if eval "test \"\${$as_ac_Header+set}\" = set"; then echo $ECHO_N "(cached) $ECHO_C" >&6 else - eval "$as_ac_Header=\$ac_header_preproc" + eval "$as_ac_Header=$ac_header_preproc" fi echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5 echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6 @@ -10294,7 +8946,7 @@ # config.status only pays attention to the cache file if you give it # the --recheck option to rerun configure. # -# `ac_cv_env_foo' variables (set or unset) will be overridden when +# `ac_cv_env_foo' variables (set or unset) will be overriden when # loading this file, other *unset* `ac_cv_foo' will be assigned the # following values. 
@@ -10329,7 +8981,7 @@ t end /^ac_cv_env/!s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/ : end' >>confcache -if diff $cache_file confcache >/dev/null 2>&1; then :; else +if cmp -s $cache_file confcache; then :; else if test -w $cache_file; then test "x$cache_file" != "x/dev/null" && echo "updating cache $cache_file" cat confcache >$cache_file @@ -10360,21 +9012,6 @@ DEFS=-DHAVE_CONFIG_H -ac_libobjs= -ac_ltlibobjs= -for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue - # 1. Remove the extension, and $U if already installed. - ac_i=`echo "$ac_i" | - sed 's/\$U\././;s/\.o$//;s/\.obj$//'` - # 2. Add them. - ac_libobjs="$ac_libobjs $ac_i\$U.$ac_objext" - ac_ltlibobjs="$ac_ltlibobjs $ac_i"'$U.lo' -done -LIBOBJS=$ac_libobjs - -LTLIBOBJS=$ac_ltlibobjs - - : ${CONFIG_STATUS=./config.status} ac_clean_files_save=$ac_clean_files @@ -10389,12 +9026,11 @@ # configure, is in config.log if it exists. debug=false -ac_cs_recheck=false -ac_cs_silent=false SHELL=\${CONFIG_SHELL-$SHELL} _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF + ## --------------------- ## ## M4sh Initialization. ## ## --------------------- ## @@ -10403,57 +9039,46 @@ if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: - # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which - # is contrary to our usage. Disable this feature. - alias -g '${1+"$@"}'='"$@"' elif test -n "${BASH_VERSION+set}" && (set -o posix) >/dev/null 2>&1; then set -o posix fi -DUALCASE=1; export DUALCASE # for MKS sh +# NLS nuisances. # Support unset when possible. -if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then +if (FOO=FOO; unset FOO) >/dev/null 2>&1; then as_unset=unset else as_unset=false fi - -# Work around bugs in pre-3.0 UWIN ksh. -$as_unset ENV MAIL MAILPATH -PS1='$ ' -PS2='> ' -PS4='+ ' - -# NLS nuisances. 
-for as_var in \ - LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \ - LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \ - LC_TELEPHONE LC_TIME -do - if (set +x; test -z "`(eval $as_var=C; export $as_var) 2>&1`"); then - eval $as_var=C; export $as_var - else - $as_unset $as_var - fi -done - -# Required to use basename. -if expr a : '\(a\)' >/dev/null 2>&1; then - as_expr=expr -else - as_expr=false -fi - -if (basename /) >/dev/null 2>&1 && test "X`basename / 2>&1`" = "X/"; then - as_basename=basename -else - as_basename=false -fi +(set +x; test -n "`(LANG=C; export LANG) 2>&1`") && + { $as_unset LANG || test "${LANG+set}" != set; } || + { LANG=C; export LANG; } +(set +x; test -n "`(LC_ALL=C; export LC_ALL) 2>&1`") && + { $as_unset LC_ALL || test "${LC_ALL+set}" != set; } || + { LC_ALL=C; export LC_ALL; } +(set +x; test -n "`(LC_TIME=C; export LC_TIME) 2>&1`") && + { $as_unset LC_TIME || test "${LC_TIME+set}" != set; } || + { LC_TIME=C; export LC_TIME; } +(set +x; test -n "`(LC_CTYPE=C; export LC_CTYPE) 2>&1`") && + { $as_unset LC_CTYPE || test "${LC_CTYPE+set}" != set; } || + { LC_CTYPE=C; export LC_CTYPE; } +(set +x; test -n "`(LANGUAGE=C; export LANGUAGE) 2>&1`") && + { $as_unset LANGUAGE || test "${LANGUAGE+set}" != set; } || + { LANGUAGE=C; export LANGUAGE; } +(set +x; test -n "`(LC_COLLATE=C; export LC_COLLATE) 2>&1`") && + { $as_unset LC_COLLATE || test "${LC_COLLATE+set}" != set; } || + { LC_COLLATE=C; export LC_COLLATE; } +(set +x; test -n "`(LC_NUMERIC=C; export LC_NUMERIC) 2>&1`") && + { $as_unset LC_NUMERIC || test "${LC_NUMERIC+set}" != set; } || + { LC_NUMERIC=C; export LC_NUMERIC; } +(set +x; test -n "`(LC_MESSAGES=C; export LC_MESSAGES) 2>&1`") && + { $as_unset LC_MESSAGES || test "${LC_MESSAGES+set}" != set; } || + { LC_MESSAGES=C; export LC_MESSAGES; } # Name of the executable. 
-as_me=`$as_basename "$0" || +as_me=`(basename "$0") 2>/dev/null || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)$' \| \ @@ -10464,7 +9089,6 @@ /^X\/\(\/\).*/{ s//\1/; q; } s/.*/./; q'` - # PATH needs CR, and LINENO needs CR and PATH. # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' @@ -10475,15 +9099,15 @@ # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then - echo "#! /bin/sh" >conf$$.sh - echo "exit 0" >>conf$$.sh - chmod +x conf$$.sh - if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then + echo "#! /bin/sh" >conftest.sh + echo "exit 0" >>conftest.sh + chmod +x conftest.sh + if (PATH=".;."; conftest.sh) >/dev/null 2>&1; then PATH_SEPARATOR=';' else PATH_SEPARATOR=: fi - rm -f conf$$.sh + rm -f conftest.sh fi @@ -10532,8 +9156,6 @@ as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null` test "x$as_lineno_1" != "x$as_lineno_2" && test "x$as_lineno_3" = "x$as_lineno_2" ') 2>/dev/null; then - $as_unset BASH_ENV || test "${BASH_ENV+set}" != set || { BASH_ENV=; export BASH_ENV; } - $as_unset ENV || test "${ENV+set}" != set || { ENV=; export ENV; } CONFIG_SHELL=$as_dir/$as_base export CONFIG_SHELL exec "$CONFIG_SHELL" "$0" ${1+"$@"} @@ -10607,20 +9229,13 @@ fi rm -f conf$$ conf$$.exe conf$$.file -if mkdir -p . 2>/dev/null; then - as_mkdir_p=: -else - test -d ./-p && rmdir ./-p - as_mkdir_p=false -fi - as_executable_p="test -f" # Sed expression to map a string onto a valid CPP name. -as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" +as_tr_cpp="sed y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g" # Sed expression to map a string onto a valid variable name. -as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" +as_tr_sh="sed y%*+%pp%;s%[^_$as_cr_alnum]%_%g" # IFS @@ -10630,7 +9245,7 @@ IFS=" $as_nl" # CDPATH. 
-$as_unset CDPATH +$as_unset CDPATH || test "${CDPATH+set}" != set || { CDPATH=$PATH_SEPARATOR; export CDPATH; } exec 6>&1 @@ -10646,8 +9261,8 @@ } >&5 cat >&5 <<_CSEOF -This file was extended by postgresql-slony1-engine $as_me HEAD_20060621, which was -generated by GNU Autoconf 2.59. Invocation command line was +This file was extended by postgresql-slony1-engine $as_me HEAD_20060717, which was +generated by GNU Autoconf 2.53. Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS @@ -10687,7 +9302,6 @@ -h, --help print this help, then exit -V, --version print version number, then exit - -q, --quiet do not print progress messages -d, --debug don't remove temporary files --recheck update $as_me by reconfiguring in the same conditions --file=FILE[:TEMPLATE] @@ -10706,11 +9320,12 @@ cat >>$CONFIG_STATUS <<_ACEOF ac_cs_version="\\ -postgresql-slony1-engine config.status HEAD_20060621 -configured by $0, generated by GNU Autoconf 2.59, +postgresql-slony1-engine config.status HEAD_20060717 +configured by $0, generated by GNU Autoconf 2.53, with options \\"`echo "$ac_configure_args" | sed 's/[\\""\`\$]/\\\\&/g'`\\" -Copyright (C) 2003 Free Software Foundation, Inc. +Copyright 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001 +Free Software Foundation, Inc. This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." srcdir=$srcdir @@ -10726,25 +9341,25 @@ --*=*) ac_option=`expr "x$1" : 'x\([^=]*\)='` ac_optarg=`expr "x$1" : 'x[^=]*=\(.*\)'` - ac_shift=: - ;; - -*) - ac_option=$1 - ac_optarg=$2 - ac_shift=shift + shift + set dummy "$ac_option" "$ac_optarg" ${1+"$@"} + shift ;; + -*);; *) # This is not an option, so the user has probably given explicit # arguments. - ac_option=$1 ac_need_defaults=false;; esac - case $ac_option in + case $1 in # Handling of the options. 
_ACEOF -cat >>$CONFIG_STATUS <<\_ACEOF +cat >>$CONFIG_STATUS <<_ACEOF -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) - ac_cs_recheck=: ;; + echo "running $SHELL $0 " $ac_configure_args " --no-create --no-recursion" + exec $SHELL $0 $ac_configure_args --no-create --no-recursion ;; +_ACEOF +cat >>$CONFIG_STATUS <<\_ACEOF --version | --vers* | -V ) echo "$ac_cs_version"; exit 0 ;; --he | --h) @@ -10759,16 +9374,13 @@ --debug | --d* | -d ) debug=: ;; --file | --fil | --fi | --f ) - $ac_shift - CONFIG_FILES="$CONFIG_FILES $ac_optarg" + shift + CONFIG_FILES="$CONFIG_FILES $1" ac_need_defaults=false;; --header | --heade | --head | --hea ) - $ac_shift - CONFIG_HEADERS="$CONFIG_HEADERS $ac_optarg" + shift + CONFIG_HEADERS="$CONFIG_HEADERS $1" ac_need_defaults=false;; - -q | -quiet | --quiet | --quie | --qui | --qu | --q \ - | -silent | --silent | --silen | --sile | --sil | --si | --s) - ac_cs_silent=: ;; # This is an error. -*) { { echo "$as_me:$LINENO: error: unrecognized option: $1 @@ -10783,20 +9395,6 @@ shift done -ac_configure_extra_args= - -if $ac_cs_silent; then - exec 6>/dev/null - ac_configure_extra_args="$ac_configure_extra_args --silent" -fi - -_ACEOF -cat >>$CONFIG_STATUS <<_ACEOF -if \$ac_cs_recheck; then - echo "running $SHELL $0 " $ac_configure_args \$ac_configure_extra_args " --no-create --no-recursion" >&6 - exec $SHELL $0 $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion -fi - _ACEOF @@ -10828,9 +9426,6 @@ test "${CONFIG_HEADERS+set}" = set || CONFIG_HEADERS=$config_headers fi -# Have a temporary directory for convenience. Make it in the build tree -# simply because there is no reason to put it here, and in addition, -# creating and moving files from /tmp can sometimes cause problems. # Create a temporary directory, and hook for its removal unless debugging. $debug || { @@ -10839,17 +9434,17 @@ } # Create a (secure) tmp directory for tmp files. 
- +: ${TMPDIR=/tmp} { - tmp=`(umask 077 && mktemp -d -q "./confstatXXXXXX") 2>/dev/null` && + tmp=`(umask 077 && mktemp -d -q "$TMPDIR/csXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" } || { - tmp=./confstat$$-$RANDOM + tmp=$TMPDIR/cs$$-$RANDOM (umask 077 && mkdir $tmp) } || { - echo "$me: cannot create a temporary directory in ." >&2 + echo "$me: cannot create a temporary directory in $TMPDIR" >&2 { (exit 1); exit 1; } } @@ -10921,7 +9516,7 @@ s, at LD@,$LD,;t t s, at YFLAGS@,$YFLAGS,;t t s, at LEXFLAGS@,$LEXFLAGS,;t t -s, at HEAD_20060621@,$HEAD_20060621,;t t +s, at HEAD_20060717@,$HEAD_20060717,;t t s, at with_gnu_ld@,$with_gnu_ld,;t t s, at enable_rpath@,$enable_rpath,;t t s, at acx_pthread_config@,$acx_pthread_config,;t t @@ -10929,7 +9524,6 @@ s, at PTHREAD_LIBS@,$PTHREAD_LIBS,;t t s, at PTHREAD_CFLAGS@,$PTHREAD_CFLAGS,;t t s, at CPP@,$CPP,;t t -s, at EGREP@,$EGREP,;t t s, at HAVE_POSIX_SIGNALS@,$HAVE_POSIX_SIGNALS,;t t s, at NLSLIB@,$NLSLIB,;t t s, at PGINCLUDEDIR@,$PGINCLUDEDIR,;t t @@ -10962,8 +9556,6 @@ s, at COLLATEINDEX@,$COLLATEINDEX,;t t s, at docdir@,$docdir,;t t s, at perlsharedir@,$perlsharedir,;t t -s, at LIBOBJS@,$LIBOBJS,;t t -s, at LTLIBOBJS@,$LTLIBOBJS,;t t CEOF _ACEOF @@ -11034,30 +9626,25 @@ /^X\(\/\/\)$/{ s//\1/; q; } /^X\(\/\).*/{ s//\1/; q; } s/.*/./; q'` - { if $as_mkdir_p; then - mkdir -p "$ac_dir" - else - as_dir="$ac_dir" - as_dirs= - while test ! -d "$as_dir"; do - as_dirs="$as_dir $as_dirs" - as_dir=`(dirname "$as_dir") 2>/dev/null || -$as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ - X"$as_dir" : 'X\(//\)[^/]' \| \ - X"$as_dir" : 'X\(//\)$' \| \ - X"$as_dir" : 'X\(/\)' \| \ - . : '\(.\)' 2>/dev/null || -echo X"$as_dir" | - sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; } - /^X\(\/\/\)[^/].*/{ s//\1/; q; } - /^X\(\/\/\)$/{ s//\1/; q; } - /^X\(\/\).*/{ s//\1/; q; } - s/.*/./; q'` - done - test ! 
-n "$as_dirs" || mkdir $as_dirs - fi || { { echo "$as_me:$LINENO: error: cannot create directory \"$ac_dir\"" >&5 -echo "$as_me: error: cannot create directory \"$ac_dir\"" >&2;} - { (exit 1); exit 1; }; }; } + { case "$ac_dir" in + [\\/]* | ?:[\\/]* ) as_incr_dir=;; + *) as_incr_dir=.;; +esac +as_dummy="$ac_dir" +for as_mkdir_dir in `IFS='/\\'; set X $as_dummy; shift; echo "$@"`; do + case $as_mkdir_dir in + # Skip DOS drivespec + ?:) as_incr_dir=$as_mkdir_dir ;; + *) + as_incr_dir=$as_incr_dir/$as_mkdir_dir + test -d "$as_incr_dir" || + mkdir "$as_incr_dir" || + { { echo "$as_me:$LINENO: error: cannot create \"$ac_dir\"" >&5 +echo "$as_me: error: cannot create \"$ac_dir\"" >&2;} + { (exit 1); exit 1; }; } + ;; + esac +done; } ac_builddir=. @@ -11084,45 +9671,12 @@ ac_srcdir=$ac_top_builddir$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_builddir$srcdir ;; esac - -# Do not use `cd foo && pwd` to compute absolute paths, because -# the directories may not exist. -case `pwd` in -.) ac_abs_builddir="$ac_dir";; -*) - case "$ac_dir" in - .) ac_abs_builddir=`pwd`;; - [\\/]* | ?:[\\/]* ) ac_abs_builddir="$ac_dir";; - *) ac_abs_builddir=`pwd`/"$ac_dir";; - esac;; -esac -case $ac_abs_builddir in -.) ac_abs_top_builddir=${ac_top_builddir}.;; -*) - case ${ac_top_builddir}. in - .) ac_abs_top_builddir=$ac_abs_builddir;; - [\\/]* | ?:[\\/]* ) ac_abs_top_builddir=${ac_top_builddir}.;; - *) ac_abs_top_builddir=$ac_abs_builddir/${ac_top_builddir}.;; - esac;; -esac -case $ac_abs_builddir in -.) ac_abs_srcdir=$ac_srcdir;; -*) - case $ac_srcdir in - .) ac_abs_srcdir=$ac_abs_builddir;; - [\\/]* | ?:[\\/]* ) ac_abs_srcdir=$ac_srcdir;; - *) ac_abs_srcdir=$ac_abs_builddir/$ac_srcdir;; - esac;; -esac -case $ac_abs_builddir in -.) ac_abs_top_srcdir=$ac_top_srcdir;; -*) - case $ac_top_srcdir in - .) 
ac_abs_top_srcdir=$ac_abs_builddir;; - [\\/]* | ?:[\\/]* ) ac_abs_top_srcdir=$ac_top_srcdir;; - *) ac_abs_top_srcdir=$ac_abs_builddir/$ac_top_srcdir;; - esac;; -esac +# Don't blindly perform a `cd "$ac_dir"/$ac_foo && pwd` since $ac_foo can be +# absolute. +ac_abs_builddir=`cd "$ac_dir" && cd $ac_builddir && pwd` +ac_abs_top_builddir=`cd "$ac_dir" && cd $ac_top_builddir && pwd` +ac_abs_srcdir=`cd "$ac_dir" && cd $ac_srcdir && pwd` +ac_abs_top_srcdir=`cd "$ac_dir" && cd $ac_top_srcdir && pwd` @@ -11153,14 +9707,14 @@ test -f "$f" || { { echo "$as_me:$LINENO: error: cannot find input file: $f" >&5 echo "$as_me: error: cannot find input file: $f" >&2;} { (exit 1); exit 1; }; } - echo "$f";; + echo $f;; *) # Relative if test -f "$f"; then # Build tree - echo "$f" + echo $f elif test -f "$srcdir/$f"; then # Source tree - echo "$srcdir/$f" + echo $srcdir/$f else # /dev/null tree { { echo "$as_me:$LINENO: error: cannot find input file: $f" >&5 @@ -11243,15 +9797,14 @@ test -f "$f" || { { echo "$as_me:$LINENO: error: cannot find input file: $f" >&5 echo "$as_me: error: cannot find input file: $f" >&2;} { (exit 1); exit 1; }; } - # Do quote $f, to prevent DOS paths from being IFS'd. - echo "$f";; + echo $f;; *) # Relative if test -f "$f"; then # Build tree - echo "$f" + echo $f elif test -f "$srcdir/$f"; then # Source tree - echo "$srcdir/$f" + echo $srcdir/$f else # /dev/null tree { { echo "$as_me:$LINENO: error: cannot find input file: $f" >&5 @@ -11306,7 +9859,7 @@ # Break up conftest.defines because some shells have a limit on the size # of here documents, and old seds have small limits too (100 cmds). echo ' # Handle all the #define templates only if necessary.' 
>>$CONFIG_STATUS -echo ' if grep "^[ ]*#[ ]*define" $tmp/in >/dev/null; then' >>$CONFIG_STATUS +echo ' if egrep "^[ ]*#[ ]*define" $tmp/in >/dev/null; then' >>$CONFIG_STATUS echo ' # If there are no defines, we may have an empty if/fi' >>$CONFIG_STATUS echo ' :' >>$CONFIG_STATUS rm -f conftest.tail @@ -11330,7 +9883,7 @@ mv conftest.tail conftest.defines done rm -f conftest.defines -echo ' fi # grep' >>$CONFIG_STATUS +echo ' fi # egrep' >>$CONFIG_STATUS echo >>$CONFIG_STATUS # Break up conftest.undefs because some shells have a limit on the size @@ -11370,7 +9923,7 @@ cat $tmp/in >>$tmp/config.h rm -f $tmp/in if test x"$ac_file" != x-; then - if diff $ac_file $tmp/config.h >/dev/null 2>&1; then + if cmp -s $ac_file $tmp/config.h 2>/dev/null; then { echo "$as_me:$LINENO: $ac_file is unchanged" >&5 echo "$as_me: $ac_file is unchanged" >&6;} else @@ -11386,30 +9939,25 @@ /^X\(\/\/\)$/{ s//\1/; q; } /^X\(\/\).*/{ s//\1/; q; } s/.*/./; q'` - { if $as_mkdir_p; then - mkdir -p "$ac_dir" - else - as_dir="$ac_dir" - as_dirs= - while test ! -d "$as_dir"; do - as_dirs="$as_dir $as_dirs" - as_dir=`(dirname "$as_dir") 2>/dev/null || -$as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ - X"$as_dir" : 'X\(//\)[^/]' \| \ - X"$as_dir" : 'X\(//\)$' \| \ - X"$as_dir" : 'X\(/\)' \| \ - . : '\(.\)' 2>/dev/null || -echo X"$as_dir" | - sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; } - /^X\(\/\/\)[^/].*/{ s//\1/; q; } - /^X\(\/\/\)$/{ s//\1/; q; } - /^X\(\/\).*/{ s//\1/; q; } - s/.*/./; q'` - done - test ! 
-n "$as_dirs" || mkdir $as_dirs - fi || { { echo "$as_me:$LINENO: error: cannot create directory \"$ac_dir\"" >&5 -echo "$as_me: error: cannot create directory \"$ac_dir\"" >&2;} - { (exit 1); exit 1; }; }; } + { case "$ac_dir" in + [\\/]* | ?:[\\/]* ) as_incr_dir=;; + *) as_incr_dir=.;; +esac +as_dummy="$ac_dir" +for as_mkdir_dir in `IFS='/\\'; set X $as_dummy; shift; echo "$@"`; do + case $as_mkdir_dir in + # Skip DOS drivespec + ?:) as_incr_dir=$as_mkdir_dir ;; + *) + as_incr_dir=$as_incr_dir/$as_mkdir_dir + test -d "$as_incr_dir" || + mkdir "$as_incr_dir" || + { { echo "$as_me:$LINENO: error: cannot create \"$ac_dir\"" >&5 +echo "$as_me: error: cannot create \"$ac_dir\"" >&2;} + { (exit 1); exit 1; }; } + ;; + esac +done; } rm -f $ac_file mv $tmp/config.h $ac_file @@ -11439,11 +9987,8 @@ # need to make the FD available again. if test "$no_create" != yes; then ac_cs_success=: - ac_config_status_args= - test "$silent" = yes && - ac_config_status_args="$ac_config_status_args --quiet" exec 5>/dev/null - $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false + $SHELL $CONFIG_STATUS || ac_cs_success=false exec 5>>config.log # Use ||, not &&, to avoid exiting from the if with $? = 1, which # would make configure fail if this is the last instruction. 
Index: acx_libpq.m4 =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/config/acx_libpq.m4,v retrieving revision 1.21 retrieving revision 1.22 diff -Lconfig/acx_libpq.m4 -Lconfig/acx_libpq.m4 -u -w -r1.21 -r1.22 --- config/acx_libpq.m4 +++ config/acx_libpq.m4 @@ -361,6 +361,16 @@ AC_MSG_RESULT([yes, and it takes $ac_cv_typenameTypeId_args arguments]) fi +AC_MSG_CHECKING(for standard_conforming_strings) +if test -z "$ac_cv_standard_conforming_strings"; then + AC_EGREP_HEADER(standard_conforming_strings, + parser/gramparse.h, + [AC_MSG_RESULT(yes) + AC_DEFINE(HAVE_STANDARDCONFORMINGSTRINGS)], + AC_MSG_RESULT(no) + ) +fi + AC_CHECK_DECLS([GetTopTransactionId],[],[],[ #include "postgres.h" #include "access/xact.h" From cvsuser Tue Jul 18 10:59:59 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Remove second index on sl_log_[12]; it's dangerous... Message-ID: <20060718175959.DC46011BF022@gborg.postgresql.org> Log Message: ----------- Remove second index on sl_log_[12]; it's dangerous... 
Modified Files: -------------- slony1-engine/src/backend: slony1_base.sql (r1.31 -> r1.32) -------------- next part -------------- Index: slony1_base.sql =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/backend/slony1_base.sql,v retrieving revision 1.31 retrieving revision 1.32 diff -Lsrc/backend/slony1_base.sql -Lsrc/backend/slony1_base.sql -u -w -r1.31 -r1.32 --- src/backend/slony1_base.sql +++ src/backend/slony1_base.sql @@ -407,8 +407,8 @@ (log_origin, log_xid @NAMESPACE at .xxid_ops, log_actionseq); -- Add in an additional index as sometimes log_origin isn't a useful discriminant -create index sl_log_1_idx2 on @NAMESPACE at .sl_log_1 - (log_xid @NAMESPACE at .xxid_ops); +-- create index sl_log_1_idx2 on @NAMESPACE at .sl_log_1 +-- (log_xid @NAMESPACE at .xxid_ops); comment on table @NAMESPACE at .sl_log_1 is 'Stores each change to be propagated to subscriber nodes'; comment on column @NAMESPACE at .sl_log_1.log_origin is 'Origin node from which the change came'; @@ -438,8 +438,8 @@ (log_origin, log_xid @NAMESPACE at .xxid_ops, log_actionseq); -- Add in an additional index as sometimes log_origin isn't a useful discriminant -create index sl_log_2_idx2 on @NAMESPACE at .sl_log_2 - (log_xid @NAMESPACE at .xxid_ops); +-- create index sl_log_2_idx2 on @NAMESPACE at .sl_log_2 +-- (log_xid @NAMESPACE at .xxid_ops); -- ---------------------------------------------------------------------- From cvsuser Tue Jul 18 11:25:29 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Partial sl_log_? indices support Add a function, Message-ID: <20060718182530.25B7511BF022@gborg.postgresql.org> Log Message: ----------- Partial sl_log_? indices support Add a function, addPartialLogIndices(), which adds missing partial indices against the unused sl_log_? table, and drops any that are no longer needed. 
(Needed ==> "Node # is an origin for a set") This function is run in various places that touch set origins so that the indexes will, over time, be available on both tables, as log switches take place. Modified Files: -------------- slony1-engine/src/backend: slony1_funcs.sql (r1.91 -> r1.92) -------------- next part -------------- Index: slony1_funcs.sql =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/backend/slony1_funcs.sql,v retrieving revision 1.91 retrieving revision 1.92 diff -Lsrc/backend/slony1_funcs.sql -Lsrc/backend/slony1_funcs.sql -u -w -r1.91 -r1.92 --- src/backend/slony1_funcs.sql +++ src/backend/slony1_funcs.sql @@ -9,7 +9,6 @@ -- $Id$ -- ---------------------------------------------------------------------- - -- ********************************************************************** -- * C functions in src/backend/slony1_base.c -- ********************************************************************** @@ -1271,6 +1270,9 @@ -- Rewrite sl_listen table perform @NAMESPACE at .RebuildListenEntries(); + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform @NAMESPACE at .addPartialLogIndices(); + -- ---- -- Make sure the node daemon will restart -- ---- @@ -1915,6 +1917,9 @@ (p_set_id, p_set_origin, p_set_comment); end if; + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform @NAMESPACE at .addPartialLogIndices(); + return p_set_id; end; ' language plpgsql; @@ -2337,6 +2342,9 @@ -- Regenerate sl_listen since we revised the subscriptions perform @NAMESPACE at .RebuildListenEntries(); + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform @NAMESPACE at .addPartialLogIndices(); + -- ---- -- If we are the new or old origin, we have to -- put all the tables into altered state again. 
@@ -2446,6 +2454,9 @@ -- Regenerate sl_listen since we revised the subscriptions perform @NAMESPACE at .RebuildListenEntries(); + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform @NAMESPACE at .addPartialLogIndices(); + return p_set_id; end; ' language plpgsql; @@ -2730,6 +2741,13 @@ raise exception ''Slony-I: setAddTable_int: table % not replicable!'', p_fqname; end if; + select * into v_prec from @NAMESPACE at .sl_table where tab_id = p_tab_id; + if not found then + v_pkcand_nn := ''t''; -- No-op -- All is well + else + raise exception ''Slony-I: setAddTable_int: table id % has already been assigned!'', p_tab_id; + end if; + -- ---- -- Add the table to sl_table and create the trigger on it. -- ---- @@ -2904,7 +2922,7 @@ raise exception ''Slony-I: setAddSequence(): set % not found'', p_set_id; end if; if v_set_origin != @NAMESPACE at .getLocalNodeId(''_ at CLUSTERNAME@'') then - raise exception ''Slony-I: setAddSequence(): set % has remote origin'', p_set_id; + raise exception ''Slony-I: setAddSequence(): set % has remote origin - submit to origin node'', p_set_id; end if; if exists (select true from @NAMESPACE at .sl_subscribe @@ -2996,6 +3014,13 @@ p_fqname; end if; + select 1 into v_sync_row from @NAMESPACE at .sl_sequence where seq_id = p_seq_id; + if not found then + v_sync_row := NULL; -- all is OK + else + raise exception ''Slony-I: setAddSequence_int(): sequence ID % has already been assigned'', p_seq_id; + end if; + -- ---- -- Add the sequence to sl_sequence -- ---- @@ -3069,7 +3094,7 @@ raise exception ''Slony-I: setDropSequence(): set % not found'', v_set_id; end if; if v_set_origin != @NAMESPACE at .getLocalNodeId(''_ at CLUSTERNAME@'') then - raise exception ''Slony-I: setDropSequence(): set % has remote origin'', v_set_id; + raise exception ''Slony-I: setDropSequence(): set % has origin at another node - submit this to that node'', v_set_id; end if; -- ---- @@ -5529,6 +5554,9 @@ raise notice ''Slony-I: log switch 
to sl_log_1 complete - truncate sl_log_2''; truncate @NAMESPACE at .sl_log_2; perform "pg_catalog".setval(''@NAMESPACE at .sl_log_status'', 0); + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform @NAMESPACE at .addPartialLogIndices(); + return 1; end if; @@ -5552,6 +5580,8 @@ raise notice ''Slony-I: log switch to sl_log_2 complete - truncate sl_log_1''; truncate @NAMESPACE at .sl_log_1; perform "pg_catalog".setval(''@NAMESPACE at .sl_log_status'', 1); + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform @NAMESPACE at .addPartialLogIndices(); return 2; end if; END; @@ -5563,6 +5593,69 @@ -- ---------------------------------------------------------------------- +-- FUNCTION addPartialLogIndices () +-- Add partial indices to sl_log_? tables that aren't currently in use +-- ---------------------------------------------------------------------- + +create or replace function @NAMESPACE at .addPartialLogIndices () returns integer as ' +DECLARE + v_current_status int4; + v_log int4; + v_dummy record; + idef text; + v_count int4; +BEGIN + v_count := 0; + select last_value into v_current_status from @NAMESPACE at .sl_log_status; + + -- If status is 2 or 3 --> in process of cleanup --> unsafe to create indices + if v_current_status in (2, 3) then + return 0; + end if; + + if v_current_status = 0 then -- Which log should get indices? + v_log := 2; + else + v_log := 1; + end if; + + -- Add missing indices... 
+ for v_dummy in select distinct set_origin from @NAMESPACE at .sl_set + where not exists + (select * from pg_catalog.pg_indexes where schemaname = ''@NAMESPACE@'' + and tablename = ''sl_log_'' || v_log and + indexname = ''PartInd_ at CLUSTERNAME@_sl_log_'' || v_log || ''-node-'' || set_origin) loop + idef := ''create index "PartInd_ at CLUSTERNAME@_sl_log_'' || v_log || ''-node-'' || v_dummy.set_origin || + ''" on @NAMESPACE at .sl_log_'' || v_log || '' USING btree(log_xid @NAMESPACE at .xxid_ops) where (log_origin = '' || v_dummy.set_origin || '');''; + execute idef; + v_count := v_count + 1; + end loop; + + -- Remove unneeded indices... + for v_dummy in select indexname from pg_catalog.pg_indexes i where i.schemaname = ''@NAMESPACE'' + and i.tablename = ''sl_log_'' || v_log and + not exists (select 1 from @NAMESPACE at .sl_set where + i.indexname = ''PartInd_ at CLUSTERNAME@_sl_log_'' || v_log || ''-node-'' || set_origin) + loop + idef := ''drop index "@NAMESPACE@"."'' || v_dummy.indexname || ''";''; + execute idef; + v_count := v_count - 1; + end loop; + return v_count; +END +' language plpgsql; + + +comment on function @NAMESPACE at .addPartialLogIndices () is +'Add partial indexes, if possible, to the unused sl_log_? table for +all origin nodes, and drop any that are no longer needed. 
+ +This function presently gets run any time set origins are manipulated +(FAILOVER, STORE SET, MOVE SET, DROP SET), as well as each time the +system switches between sl_log_1 and sl_log_2.'; + + +-- ---------------------------------------------------------------------- -- FUNCTION upgradeSchema(old_version) -- upgrade sl_node -- From cvsuser Wed Jul 19 13:31:37 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Add documentation about error messages that can be returned Message-ID: <20060719203137.4C21011BF03E@gborg.postgresql.org> Log Message: ----------- Add documentation about error messages that can be returned by SET ADD TABLE Modified Files: -------------- slony1-engine/doc/adminguide: slonik_ref.sgml (r1.53 -> r1.54) -------------- next part -------------- Index: slonik_ref.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v retrieving revision 1.53 retrieving revision 1.54 diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.53 -r1.54 --- doc/adminguide/slonik_ref.sgml +++ doc/adminguide/slonik_ref.sgml @@ -1349,6 +1349,74 @@ ); + Error Messages + + Here are some of the error messages you may encounter if + adding tables incorrectly: + + + Slony-I: setAddTable_int: table public.my_table PK column id nullable + + Primary keys (or candidates thereof) are + required to have all columns defined as NOT + NULL. If you have a PK candidate that has columns + that are not thus restricted, &slony1; will reject the table + with this message. + + Slony-I: setAddTable_int: table id 14 has already been assigned! + + The table id, stored in + sl_table.tab_id, is required to be unique + across all tables/nodes/sets. Apparently you have tried to + reuse a table ID. 
+ + Slony-I: setAddTable_int(): table public.my_table has no index mt_idx_14 + + This will normally occur with candidate + primary keys; apparently the index specified is not available + on this node. + + Slony-I: setAddTable_int(): table public.my_table not found + + Worse than an index missing, the whole table + is missing. Apparently whatever process you were using to get + the schema into place everywhere didn't work properly. + + + Slony-I: setAddTable_int(): public.my_view is not a regular table + + You can only replicate (at least, using + SET ADD TABLE) objects that are ordinary + tables. That doesn't include views or indexes. (Indexes can + come along for the ride, but you don't ask to replicate an + index...) + + Slony-I: setAddTable_int(): set 4 not found + + You need to define a replication set before + assigning tables to it. + + Slony-I: setAddTable(): set 4 has remote origin + + This will occur if set 4 is configured with, + as origin, node 1, and then you submit a SET ADD + TABLE request involving that set to some other node + than node 1. This would be expected to occur if there was + some confusion in the admin conninfo + configuration in the slonik script preamble... + + + + Slony-I: cannot add table to currently subscribed set 1 + + &slony1; does not support adding tables to + sets that are already participating in subscriptions. + Probably you need to define a new set to associate additional + tables to. 
+ + + + Locking Behaviour On the origin node, this operation requires a brief From cvsuser Mon Jul 24 09:04:12 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Add a description to DDL docs as to how the Message-ID: <20060724160412.9568711BF02D@gborg.postgresql.org> Log Message: ----------- Add a description to DDL docs as to how the removal/readdition of log triggers addresses table alterations Modified Files: -------------- slony1-engine/doc/adminguide: ddlchanges.sgml (r1.22 -> r1.23) -------------- next part -------------- Index: ddlchanges.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v retrieving revision 1.22 retrieving revision 1.23 diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.22 -r1.23 --- doc/adminguide/ddlchanges.sgml +++ doc/adminguide/ddlchanges.sgml @@ -23,9 +23,9 @@ transactions are still winding their way through your systems, this is necessary. -It is also necessary to use EXECUTE SCRIPT if -you alter tables so as to change their schemas. If you do not, then -you may run into the problems described +It is essential to use EXECUTE SCRIPT if you +alter tables so as to change their schemas. If you do not, then you +may run into the problems described here where triggers on modified tables do not take account of the schema change. This has the potential to corrupt data on subscriber nodes. @@ -94,6 +94,14 @@ COMMIT; +Note that this is what allows &slony1; to take notice of +alterations to tables: before +that SYNC, &slony1; has been replicating tuples +based on the old +schema; after the DDL_SCRIPT, +tuples are being replicated based on the new +schema. + On a system which is busily taking updates, it may be troublesome to get in edgewise to actually successfully engage all the required locks. The locks may run into deadlocks. 
From cvsuser Tue Jul 25 17:36:04 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Documentation augmented... Message-ID: <20060726003604.C719711BF036@gborg.postgresql.org> Log Message: ----------- Documentation augmented... elein pointed out the good question "When is it OK / NOT OK to kill off slons?" I've added some comments on this to the best practices and FAQ. And added a link to the "generate_sync.sh" function... Modified Files: -------------- slony1-engine/doc/adminguide: bestpractices.sgml (r1.21 -> r1.22) faq.sgml (r1.59 -> r1.60) maintenance.sgml (r1.22 -> r1.23) -------------- next part -------------- Index: maintenance.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v retrieving revision 1.22 retrieving revision 1.23 diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.22 -r1.23 --- doc/adminguide/maintenance.sgml +++ doc/adminguide/maintenance.sgml @@ -82,7 +82,8 @@ thereby your whole day. -Parallel to Watchdog: generate_syncs.sh + +Parallel to Watchdog: generate_syncs.sh A new script for &slony1; 1.1 is generate_syncs.sh, which addresses the following kind of Index: bestpractices.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/bestpractices.sgml,v retrieving revision 1.21 retrieving revision 1.22 diff -Ldoc/adminguide/bestpractices.sgml -Ldoc/adminguide/bestpractices.sgml -u -w -r1.21 -r1.22 --- doc/adminguide/bestpractices.sgml +++ doc/adminguide/bestpractices.sgml @@ -154,7 +154,6 @@ In practice, strewing &lslon; processes and configuration across a dozen servers turns out to be inconvenient to manage. - &lslon; processes should run in the same @@ -175,6 +174,31 @@ condition. 
+ Before getting too excited about having fallen into +some big problem, consider killing and restarting all the &lslon; +processes. Historically, this has frequently been able to +resolve stickiness. + + With a very few exceptions, it is generally not a big deal to +kill off and restart the &lslon; processes. Each &lslon; connects to +one database for which it is the manager, and then connects to other +databases as needed to draw in events. If you kill off a &lslon;, all +you do is to interrupt those connections. If +a SYNC or other event is sitting there +half-processed, there's no problem: the transaction will roll back, +and when the &lslon; restarts, it will restart that event from +scratch. + + The exception, where it is undesirable to restart a &lslon;, is +where a COPY_SET is running on a large replication +set, such that stopping the &lslon; may discard several hours' worth of +load work. + + In early versions of &slony1;, it was frequently the case that +connections could get a bit deranged, which restarting +&lslon;s would clean up. This has become much more rare, but it has +occasionally proven useful to restart the &lslon;. + The Database Schema Changes section outlines some practices that have been found useful for Index: faq.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v retrieving revision 1.59 retrieving revision 1.60 diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.59 -r1.60 --- doc/adminguide/faq.sgml +++ doc/adminguide/faq.sgml @@ -462,6 +462,63 @@ threaten the entire server. + + When can I shut down &lslon; processes? + + Are there risks to doing so? How about +benefits? + + Generally, it's no big deal to shut down a &lslon; +process. Each one is merely a &postgres; client, +managing one node, which spawns threads to manage receiving events +from other nodes. 
+ +The event listening threads are no big deal; they +are doing nothing fancier than periodically checking remote nodes to +see if they have work to be done on this node. If you kill off the +&lslon; these threads will be closed, which should have little or no +impact on much of anything. Events generated while the &lslon; is +down will be picked up when it is restarted. + + The node managing thread is a bit more +interesting; most of the time, you can expect, on a subscriber, for +this thread to be processing SYNC events. If you +shut off the &lslon; during an event, the transaction +will fail, and be rolled back, so that when the &lslon; restarts, it +will have to go back and reprocess the event. + + The only situation where this will +cause particular heartburn is if +the event being processed was one which takes a long time to process, +such as COPY_SET for a large replication +set. + + The other thing that might cause trouble +is if the &lslon; runs fairly distant from nodes that it connects to; +you could discover that database connections are left idle in +transaction. This would normally only occur if the network +connection is destroyed without either &lslon; or database being made +aware of it. In that case, you may discover +that zombied connections are left around for as long as +two hours if you don't go in by hand and kill off the &postgres; +backends. + + There is one other case that could cause trouble; when the +&lslon; managing the origin node is not running, +no SYNC events run against that node. If the +&lslon; stays down for an extended period of time, and something +like isn't running, you could be left +with one big SYNC to process +when it comes back up. But that is only a concern if that &lslon; is +down for an extended period of time; shutting it down for a few +seconds shouldn't cause any great problem. 
+ + In short, if you don't have something like an 18 +hour COPY_SET under way, it's normally not at all a +big deal to take a &lslon; down for a little while, or perhaps even +cycle all the &lslon;s. + + &slony1; FAQ: Configuration Issues From cvsuser Wed Jul 26 05:32:09 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By darcyb: SET standard_conforming_strings to 'off' for pg 8.2 and Message-ID: <20060726123210.0FCFE11BF02A@gborg.postgresql.org> Log Message: ----------- SET standard_conforming_strings to 'off' for pg 8.2 and newer. Modified Files: -------------- slony1-engine/src/slon: dbutils.c (r1.22 -> r1.23) -------------- next part -------------- Index: dbutils.c =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/slon/dbutils.c,v retrieving revision 1.22 retrieving revision 1.23 diff -Lsrc/slon/dbutils.c -Lsrc/slon/dbutils.c -u -w -r1.22 -r1.23 --- src/slon/dbutils.c +++ src/slon/dbutils.c @@ -125,9 +125,20 @@ PQfinish(dbconn); return NULL; } + slon_log(SLON_DEBUG4, "version for \"%s\" is %d\n", conninfo, conn->pg_version); + if (conn->pg_version >= 80200) + { + slon_mkquery(&query, "set standard_conforming_strings to 'off'"); + res = PQexec(dbconn, dstring_data(&query)); + if (!(PQresultStatus(res) == PGRES_COMMAND_OK)) + { + slon_log(SLON_ERROR, "Unable to set the standard_conforming_strings to off\n"); + } + PQclear(res); + } return conn; } From cvsuser Wed Jul 26 11:29:08 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Check return codes for remaining log shipping logic points Message-ID: <20060726182908.C16D011BF03B@gborg.postgresql.org> Log Message: ----------- Check return codes for remaining log shipping logic points which weren't evaluating file access return codes... 
Modified Files: -------------- slony1-engine/src/slon: remote_worker.c (r1.116 -> r1.117) -------------- next part -------------- Index: remote_worker.c =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/slon/remote_worker.c,v retrieving revision 1.116 retrieving revision 1.117 diff -Lsrc/slon/remote_worker.c -Lsrc/slon/remote_worker.c -u -w -r1.116 -r1.117 --- src/slon/remote_worker.c +++ src/slon/remote_worker.c @@ -3477,6 +3477,17 @@ if (archive_dir) { rc = submit_string_to_archive("\\."); + if (rc < 0) { + PQclear(res2); + PQclear(res1); + slon_disconnectdb(pro_conn); + dstring_free(&query1); + dstring_free(&query2); + dstring_free(&query3); + dstring_free(&indexregenquery); + terminate_log_archive(); + return -1; + } } #else /* ! HAVE_PQPUTCOPYDATA */ copydone = false; @@ -3500,13 +3511,38 @@ case 0: PQputline(loc_dbconn, copybuf); PQputline(loc_dbconn, "\n"); - if (archive_dir) - submit_string_to_archive(copybuf); + if (archive_dir) { + rc = submit_string_to_archive(copybuf); + if (rc < 0) { + PQclear(res2); + PQclear(res1); + slon_disconnectdb(pro_conn); + dstring_free(&query1); + dstring_free(&query2); + dstring_free(&query3); + dstring_free(&indexregenquery); + terminate_log_archive(); + return -1; + } + } break; case 1: PQputline(loc_dbconn, copybuf); - if (archive_dir) - submit_raw_data_to_archive(copybuf); + if (archive_dir) { + rc = submit_raw_data_to_archive(copybuf); + if (rc < 0) { + PQclear(res2); + PQclear(res1); + slon_disconnectdb(pro_conn); + dstring_free(&query1); + dstring_free(&query2); + dstring_free(&query3); + dstring_free(&indexregenquery); + terminate_log_archive(); + return -1; + } + + } break; } @@ -3516,6 +3552,17 @@ if (archive_dir) { rc = submit_string_to_archive("\\."); + if (rc < 0) { + PQclear(res2); + PQclear(res1); + slon_disconnectdb(pro_conn); + dstring_free(&query1); + dstring_free(&query2); + dstring_free(&query3); + dstring_free(&indexregenquery); + 
terminate_log_archive(); + return -1; + } } /* @@ -3590,7 +3637,10 @@ } if (archive_dir) { - submit_query_to_archive(&query1); + rc = submit_query_to_archive(&query1); + if (rc < 0) { + return -1; + } } gettimeofday(&tv_now, NULL); @@ -3663,7 +3713,17 @@ if (archive_dir) { - submit_query_to_archive(&query1); + rc = submit_query_to_archive(&query1); + if (rc < 0) { + PQclear(res1); + slon_disconnectdb(pro_conn); + dstring_free(&query1); + dstring_free(&query2); + dstring_free(&query3); + dstring_free(&indexregenquery); + terminate_log_archive(); + return -1; + } } } else @@ -6074,5 +6134,3 @@ } slon_log(SLON_DEBUG3, " compressed actionseq subquery... %s\n", dstring_data(action_subquery)); } - - From cvsuser Wed Jul 26 11:58:47 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Problem with sequence handling introduced when Chris added Message-ID: <20060726185847.D5E3711BF03B@gborg.postgresql.org> Log Message: ----------- Problem with sequence handling introduced when Chris added error checking... 
Modified Files: -------------- slony1-engine/src/backend: slony1_funcs.sql (r1.92 -> r1.93) -------------- next part -------------- Index: slony1_funcs.sql =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/backend/slony1_funcs.sql,v retrieving revision 1.92 retrieving revision 1.93 diff -Lsrc/backend/slony1_funcs.sql -Lsrc/backend/slony1_funcs.sql -u -w -r1.92 -r1.93 --- src/backend/slony1_funcs.sql +++ src/backend/slony1_funcs.sql @@ -3016,7 +3016,7 @@ select 1 into v_sync_row from @NAMESPACE at .sl_sequence where seq_id = p_seq_id; if not found then - v_sync_row := NULL; -- all is OK + v_relkind := ''O''; -- all is OK else raise exception ''Slony-I: setAddSequence_int(): sequence ID % has already been assigned'', p_seq_id; end if; From cvsuser Thu Jul 27 12:52:20 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By xfade: Fix faq.sgml error. Message-ID: <20060727195220.492C511BF089@gborg.postgresql.org> Log Message: ----------- Fix faq.sgml error. It seems you can not have two questions in the same qandaentry? Modified Files: -------------- slony1-engine/doc/adminguide: faq.sgml (r1.60 -> r1.61) -------------- next part -------------- Index: faq.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v retrieving revision 1.60 retrieving revision 1.61 diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.60 -r1.61 --- doc/adminguide/faq.sgml +++ doc/adminguide/faq.sgml @@ -465,9 +465,6 @@ When can I shut down &lslon; processes? - Are there risks to doing so? How about -benefits? - Generally, it's no big deal to shut down a &lslon; process. Each one is merely a &postgres; client, managing one node, which spawns threads to manage receiving events @@ -512,6 +509,11 @@ when it comes back up. 
But that is only a concern if that &lslon; is down for an extended period of time; shutting it down for a few seconds shouldn't cause any great problem. + + + + Are there risks to doing so? How about +benefits? In short, if you don't have something like an 18 hour COPY_SET under way, it's normally not at all a From cvsuser Thu Jul 27 12:55:01 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By xfade: Add closing tags and remove some warnings. Message-ID: <20060727195501.A7A8F11BF089@gborg.postgresql.org> Log Message: ----------- Add closing tags and remove some warnings. Modified Files: -------------- slony1-engine/doc/adminguide: ddlchanges.sgml (r1.23 -> r1.24) -------------- next part -------------- Index: ddlchanges.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v retrieving revision 1.23 retrieving revision 1.24 diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.23 -r1.24 --- doc/adminguide/ddlchanges.sgml +++ doc/adminguide/ddlchanges.sgml @@ -99,15 +99,14 @@ that SYNC, &slony1; has been replicating tuples based on the old schema; after the DDL_SCRIPT, -tuples are being replicated based on the new -schema. +tuples are being replicated based on the new +schema. On a system which is busily taking updates, it may be troublesome to get in edgewise to actually successfully engage all the required locks. The locks may run into deadlocks. This points to two ways to address this: - - + You may be able to define replication sets that consist of smaller sets of tables so @@ -123,8 +122,6 @@ will conflict with the ones you need to take in order to update the database schema. - - In &slony1; versions 1.0 thru 1.1.5, the script is processed as a single query request, which can cause problems if you are making complex changes. 
In version 1.2, the script is parsed into From cvsuser Thu Jul 27 16:56:26 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: monitoring.sgml - major addition of description of Message-ID: <20060727235626.4498011BF040@gborg.postgresql.org> Log Message: ----------- monitoring.sgml - major addition of description of error/warning/info messages based on walks through source code. Modified Files: -------------- slony1-engine/doc/adminguide: monitoring.sgml (r1.25 -> r1.26) -------------- next part -------------- Index: monitoring.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v retrieving revision 1.25 retrieving revision 1.26 diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.25 -r1.26 --- doc/adminguide/monitoring.sgml +++ doc/adminguide/monitoring.sgml @@ -162,6 +162,632 @@ + Errors and Implications + + +remoteWorkerThread_%d: log archive failed %s - %s\n + + This indicates that an error was encountered trying to write a +log shipping file. Normally the &lslon; will retry, and hopefully +succeed. + +remoteWorkerThread_%d: DDL preparation failed - set %d - only on node % + Something broke when applying a DDL script on one of the nodes. +This quite likely indicates that the node's schema differed from +that on the origin; you may need to apply a change manually to the +node to allow the event to proceed. The scary, scary alternative +might be to delete the offending event, assuming it can't possibly +work... +SLON_CONFIG: remoteWorkerThread_%d: DDL request with %d statements + This is informational, indicating how many SQL statements were processed. +SLON_ERROR: remoteWorkerThread_%d: DDL had invalid number of statements - %d + + Occurs if there were < 0 statements (which should be impossible) or > MAXSTATEMENTS statements. Probably the script was bad... 
+ +ERROR: remoteWorkerThread_%d: malloc() +failure in DDL_SCRIPT - could not allocate %d bytes of +memory + + This should only occur if you submit some extraordinarily large +DDL script that makes a &lslon; run out of memory + +CONFIG: remoteWorkerThread_%d: DDL Statement %d: [%s] + + This lists each DDL statement as it is submitted. +ERROR: DDL Statement failed - %s + + Oh, dear, one of the DDL statements that worked on the origin +failed on this remote node... + +CONFIG: DDL Statement success - %s + + All's well... + +ERROR: remoteWorkerThread_%d: Could not generate DDL archive tracker %s - %s + + Apparently the DDL script couldn't be written to a log shipping file... + +ERROR: remoteWorkerThread_%d: Could not submit DDL script %s - %s + +Couldn't write the script to a log shipping file. + +ERROR: remoteWorkerThread_%d: Could not close DDL script %s - %s + +Couldn't close a log shipping file for a DDL script. + +FATAL: remoteWorkerThread_%d: pthread_create() - %s + + Couldn't create a new remote worker thread. + +DEBUG1 remoteWorkerThread_%d: helper thread for provider %d created + + This normally happens when the &lslon; starts: a thread is created for each node to which the local node should be listening for events. + +DEBUG4: remoteWorkerThread_%d: added active set %d to provider %d + + Indicates that this set is being provided by this +provider. + +DEBUG1: remoteWorkerThread_%d: helper thread for provider %d terminated + + If subscriptions reshape such that a node no longer provides a +subscription, then the thread that works on that node can be +dropped. + +DEBUG1: remoteWorkerThread_%d: disconnecting +from data provider %d + + A no-longer-used data provider may be dropped; if connection +information is changed, the &lslon; needs to disconnect and +reconnect. 
+ +DEBUG2: remoteWorkerThread_%d: ignore new events due to shutdown + + If the &lslon; is shutting down, it is futile to process more events. +DEBUG2: remoteWorkerThread_%d: event %d ignored - unknown origin + + Probably happens if events arrive before +the STORE_NODE event that tells that the new node +now exists... + +WARN: remoteWorkerThread_%d: event %d ignored - origin inactive + This shouldn't occur now (2006) as we don't support the notion of deactivating a node... + +DEBUG2: remoteWorkerThread_%d: event %d ignored - duplicate + + This might be expected to happen if the event notification +comes in concurrently from two sources... + +DEBUG2: remoteWorkerThread_%d: unknown node %d + + Happens if the &lslon; is unaware of this node; probably a sign +of STORE_NODE requests not +propagating... + +DEBUG1: remoteWorkerThread_%d: node %d - no worker thread + + Curious: we can't wake up the worker thread; there probably +should already be one... + +DEBUG2: remoteWorkerThread_%d: forward confirm %d,%s received by %d + + These events should occur frequently and routinely as nodes report confirmations of the events they receive. + +DEBUG1: copy_set %d + + This indicates the beginning of copying data for a new subscription. + +ERROR: remoteWorkerThread_%d: set %d not found in runtime configuration + + &lslon; tried starting up a subscription; it couldn't find conninfo for the data source. Perhaps paths are not properly propagated? + +ERROR: remoteWorkerThread_%d: node %d has no pa_conninfo + + Apparently the conninfo configuration +was wrong... + +ERROR: copy set %d cannot connect to provider DB node %d + + &lslon; couldn't connect to the provider. Is the conninfo +wrong? Or perhaps authentication is misconfigured? Or perhaps the +database is down? 
+ +DEBUG1: remoteWorkerThread_%d: connected to provider DB + + Excellent: the copy set has a connection to its provider +ERROR: remoteWorkerThread_%d: Could not open COPY SET archive file %s - %s + + Seems pretty self-explanatory... +ERROR: remoteWorkerThread_%d: Could not generate COPY SET archive header %s - %s + + Probably means that we just filled up a filesystem... + +WARN: remoteWorkerThread_%d: transactions +earlier than XID %s are still in progress + + This indicates that some old transaction is in progress from before the earliest available SYNC on the provider. &slony1; cannot start replicating until that transaction completes. This will repeat until the transaction completes... + + +DEBUG2: remoteWorkerThread_%d: prepare to copy table %s + + This indicates that &lslon; is beginning preparations to set up a subscription for a table. +DEBUG1: remoteWorkerThread_%d: table %s will require Slony-I serial key + + Evidently this is a table defined with where &slony1; has to add a surrogate primary key. +ERROR: remoteWorkerThread_%d: Could not lock table %s on subscriber + + For whatever reason, the table could not be locked, so the +subscription needs to be restarted. If the problem was something like +a deadlock, retrying may help. If the problem was otherwise, you may +need to intervene... + +DEBUG2: remoteWorkerThread_%d: all tables for set %d found on subscriber + + An informational message indicating that the first pass through the tables found no problems... +DEBUG2: remoteWorkerThread_%d: copy sequence %s + + Processing some sequence... +DEBUG2: remoteWorkerThread_%d: copy table %s + + &lslon; is starting to copy a table... +DEBUG3: remoteWorkerThread_%d: table %s Slony-I serial key added local + + Just added a new column to the table to provide a surrogate primary key. +DEBUG3: remoteWorkerThread_%d: local table %s already has Slony-I serial key + + Did not need to add serial key; apparently it was already there. 
+DEBUG3: remoteWorkerThread_%d: table %s does not require Slony-I serial key + + Apparently this table didn't require a special serial key... + +DEBUG3: remoteWorkerThread_%d: table %s Slony-I serial key added local +DEBUG2: remoteWorkerThread_%d: Begin COPY of table %s + + &lslon; is about to start the COPY on both sides to copy a table... +ERROR: remoteWorkerThread_%d: Could not generate copy_set request for %s - %s + + This indicates that the delete/copy requests +failed on the subscriber. The &lslon; will repeat +the COPY_SET attempt; it will probably continue to +fail... + +ERROR: remoteWorkerThread_%d: copy to stdout on provider - %s %s + + Evidently something about the COPY to stdout on the provider node broke... The event will be retried... + +ERROR: remoteWorkerThread_%d: copy from stdin on local node - %s %s + + Evidently something about the COPY into the table on the +subscriber node broke... The event will be +retried... + +DEBUG2: remoteWorkerThread_%d: %d bytes copied for table %s + + This message indicates that the COPY of the table has +completed. This is followed by running ANALYZE and +reindexing the table on the subscriber. + +DEBUG2: remoteWorkerThread_%d: %.3f seconds +to copy table %s + + After this message, copying and reindexing and analyzing the table on the subscriber is complete. + +DEBUG2: remoteWorkerThread_%d: set last_value of sequence %s (%s) to %s + + As should be no surprise, this indicates that a sequence has been processed on the subscriber. + +DEBUG2: remoteWorkerThread_%d: %.3f seconds to copy sequences + + Summarizing the time spent processing sequences in the COPY_SET event. + +ERROR: remoteWorkerThread_%d: query %s did not return a result + + This indicates that the query, as part of final processing of COPY_SET, failed. The copy will restart... 
+ +DEBUG2: remoteWorkerThread_%d: copy_set no previous SYNC found, use enable event + + This takes place if no past SYNC event was found; the current +event gets set to the event point of +the ENABLE_SUBSCRIPTION event. + + +DEBUG2: remoteWorkerThread_%d: copy_set SYNC found, use event seqno %s + + This takes place if a SYNC event was found; the current +event gets set as shown. + +ERROR: remoteWorkerThread_%d: sl_setsync entry for set %d not found on provider + + SYNC synchronization information was expected to be drawn from +an existing subscriber, but wasn't found. Something +replication-breakingly-bad has probably +happened... +DEBUG1: remoteWorkerThread_%d: could not insert to sl_setsync_offline + + Oh, dear. After setting up a subscriber, and getting pretty +well everything ready, some writes to a log shipping file failed. +Perhaps the disk filled up... + +DEBUG1: remoteWorkerThread_%d: %.3f seconds to build initial setsync status + + Indicates the total time required to get the copy_set event finalized... + + +DEBUG1: remoteWorkerThread_%d: disconnected from provider DB + + At the end of a subscribe set event, the subscriber's &lslon; +will disconnect from the provider, clearing out +connections... + +DEBUG1: remoteWorkerThread_%d: copy_set %d done in %.3f seconds + + Indicates the total time required to complete copy_set... This indicates a successful subscription! + + +DEBUG1: remoteWorkerThread_%d: SYNC %d processing + + This indicates the start of processing of a SYNC. + +ERROR: remoteWorkerThread_%d: No pa_conninfo +for data provider %d + + Oh dear, we have no connection information to connect to the +data provider. That shouldn't be possible, +normally... + +ERROR: remoteWorkerThread_%d: cannot connect to data provider %d on 'dsn' + + Oh dear, we haven't got correct connection +information to connect to the data provider. + +DEBUG1: remoteWorkerThread_%d: connected to data provider %d on 'dsn' + + Excellent; the &lslon; has connected to the provider. 
+ +WARN: remoteWorkerThread_%d: don't know what ev_seqno node %d confirmed for ev_origin %d + + There's no confirmation information available for this node's provider; need to abort the SYNC and wait a bit in hopes that that information will emerge soon... +DEBUG1: remoteWorkerThread_%d: data provider %d only confirmed up to ev_seqno %d for ev_origin %d + + The provider for this node is a subscriber, and apparently that subscriber is a bit behind. The &lslon; will need to wait for the provider to catch up until it has new data. +DEBUG2: remoteWorkerThread_%d: data provider %d confirmed up to ev_seqno %s for ev_origin %d - OK + + All's well; the provider should have the data that the subscriber needs... + +DEBUG2: remoteWorkerThread_%d: syncing set %d with %d table(s) from provider %d + This is declaring the plans for a SYNC: we have a set with some tables to process. +DEBUG2: remoteWorkerThread_%d: ssy_action_list value: %s length: %d + + This portion of the query to collect log data to be applied has been known to bloat up; this shows how it has gotten compressed... + +DEBUG2: remoteWorkerThread_%d: writing archive log... + + This indicates that a log shipping archive log is being written for a particular SYNC set. +DEBUG2: remoteWorkerThread_%d: Didn't add OR to provider + + This indicates that there wasn't anything in a provider clause in the query to collect log data to be applied, which shouldn't be. Things are quite likely to go bad at this point... +DEBUG2: remoteWorkerThread_%d: no sets need syncing for this event + +This will be the case for all SYNC events generated on nodes that are not originating replication sets. You can expect to see these messages reasonably frequently. +ERROR: remoteWorkerThread_%d: cannot determine current log status + + The attempt to read from sl_log_status, which determines +whether we're working on sl_log_1 +or sl_log_2 got no results; that can't be a good thing, +as there certainly should be data here... 
Replication is likely about +to halt... + +DEBUG2: remoteWorkerThread_%d: current local log_status is %d + This indicates which of sl_log_1 and sl_log_2 are being used to store replication data. + +DEBUG3: remoteWorkerThread_%d: activate helper %d + + We're about to kick off a thread to help process SYNC data... + +DEBUG4: remoteWorkerThread_%d: waiting for log data + + The thread is waiting to get data to consume (e.g. - apply to the replica). + +ERROR: remoteWorkerThread_%d: %s %s - qualification was %s + + Apparently an application of replication data to the subscriber failed... This quite likely indicates some sort of serious corruption. +ERROR: remoteWorkerThread_%d: replication query did not affect one row (cmdTuples = %s) - query was: %s qualification was: %s + + If SLON_CHECK_CMDTUPLES is set, &lslon; applies +changes one tuple at a time, and verifies that each change affects +exactly one tuple. Apparently that wasn't the case here, which +suggests a corruption of replication. That's a rather bad +thing... + +ERROR: remoteWorkerThread_%d: SYNC aborted + + If any errors have been encountered that haven't already aborted the SYNC, this catches and aborts it. + +DEBUG2: remoteWorkerThread_%d: new sl_rowid_seq value: %s + + This marks the progression of this internal &slony1; sequence. +INFO: remoteWorkerThread_%d: Run Archive Command %s + + If &lslon; has been configured to run a command after generating each log shipping archive log, this reports when that process is spawned using system(). +DEBUG2: remoteWorkerThread_%d: SYNC %d done in %.3f seconds + + This indicates the successful completion of a SYNC. Hurray! + +DEBUG1: remoteWorkerThread_%d_d:%.3f seconds delay for first row + + This indicates how long it took to get the first row from the LOG cursor that pulls in data from the sl_log tables. 
+ERROR: remoteWorkerThread_%d_d: large log_cmddata for actionseq %s not found + + &lslon; could not find the data for one of the very large sl_log table tuples that are pulled individually. This shouldn't happen. +DEBUG2: remoteWorkerThread_%d_d:%.3f seconds until close cursor + + This indicates how long it took to complete reading data from the LOG cursor that pulls in data from the sl_log tables. +DEBUG2: remoteWorkerThread_%d_d: inserts=%d updates=%d deletes=%d + + This reports how much activity was recorded in the current SYNC set. + +DEBUG3: remoteWorkerThread_%d: compress_actionseq(list,subquery) Action list: %s + + This indicates a portion of the LOG cursor query that is about to be compressed. (In some cases, this could grow to enormous size, blowing up the query parser...) +DEBUG3: remoteWorkerThread_%d: compressed actionseq subquery %s + + This indicates what that subquery compressed into. +DEBUG1: remoteWorkerThread_%d: +DEBUG1: remoteWorkerThread_%d: +DEBUG1: remoteWorkerThread_%d: + +ERROR: Slonik version: @MODULEVERSION@ != Slony-I version in PG build % + + This is raised in checkmoduleversion() if there is a mismatch between the version of &slony1; as reported by &lslonik; versus what the &postgres; build has. +ERROR: Slony-I: registry key % is not an int4 value + + Raised in registry_get_int4(), this complains if a requested value turns out to be NULL. +ERROR: registry key % is not a text value + + Raised in registry_get_text(), this complains if a requested value turns out to be NULL. +ERROR: registry key % is not a timestamp value + + Raised in registry_get_timestamp(), this complains if a requested value turns out to be NULL. +NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=% + + This will occur when a &lslon; starts up after another has crashed; this is routine cleanup. +ERROR: Slony-I: This node is already initialized + + This would typically happen if you submit against a node that has already been set up with the &slony1; schema. 
+ERROR: Slony-I: node % not found + + An attempt to mark a node not listed locally as enabled should fail... +ERROR: Slony-I: node % is already active + + An attempt to mark a node already marked as active as active should fail... +ERROR: Slony-I: DROP_NODE cannot initiate on the dropped node + + You need to have an EVENT NODE other than the node that is to be dropped.... + +ERROR: Slony-I: Node % is still configured as a data provider + + You cannot drop a node that is in use as a data provider; you +need to reshape subscriptions so no nodes are dependent on it first. +ERROR: Slony-I: Node % is still origin of one or more sets + + You can't drop a node if it is the origin for a set! Use or first. + +ERROR: Slony-I: cannot failover - node % has no path to the backup node + + You cannot failover to a node that isn't connected to all the subscribers, at least indirectly. +ERROR: Slony-I: cannot failover - node % is not subscribed to set % + + You can't failover to a node that doesn't subscribe to all the relevant sets. + +ERROR: Slony-I: cannot failover - subscription for set % is not active If the subscription has been set up, but isn't yet active, that's still no good. + +ERROR: Slony-I: cannot failover - node % is not a forwarder of set % + + You can only failover or move a set to a node that has +forwarding turned on. + +NOTICE: failedNode: set % has no other direct receivers - move now + + If the backup node is the only direct subscriber, then life is a bit simplified... No need to reshape any subscriptions! +NOTICE: failedNode set % has other direct receivers - change providers only + In this case, all direct subscribers are pointed to the backup node, and the backup node is pointed to receive from another node so it can get caught up. +NOTICE: Slony-I: Please drop schema _ at CLUSTERNAME@ + + A node has been uninstalled; you may need to drop the schema... 
+ERROR: Slony-I: setAddTable_int(): table % has no index % + + Apparently a PK index was specified that is absent on this node... +ERROR: Slony-I setAddTable_int(): table % not found + + Table wasn't found on this node; did you load the schema in properly?. +ERROR: Slony-I setAddTable_int(): table % is not a regular table + + You tried to replicate something that isn't a table; you can't do that! +NOTICE: Slony-I setAddTable_int(): table % PK column % nullable + + You tried to replicate a table where one of the columns in the would-be primary key is allowed to be null. All PK columns must be NOT NULL. This request is about to fail. +ERROR: Slony-I setAddTable_int(): table % not replicable! + + +This happens because of the NULLable PK column. +ERROR: Slony-I setAddTable_int(): table id % has already been assigned! + + The table ID value needs to be assigned uniquely +in ; apparently you requested a value +already in use. + + +ERROR: Slony-I setDropTable(): table % not found + + Table wasn't found on this node; are you sure you had the ID right? +ERROR: Slony-I setDropTable(): set % not found + + The replication set wasn't found on this node; are you sure you had the ID right? +ERROR: Slony-I setDropTable(): set % has remote origin + + The replication set doesn't originate on this node; you probably need to specify an EVENT NODE in the command. + +ERROR: Slony-I setAddSequence(): set % not found + Apparently the set you requested is not available... + +ERROR: Slony-I setAddSequence(): set % has remote origin + You may only add things at the origin node. + +ERROR: Slony-I setAddSequence(): cannot add sequence to currently subscribed set % + Apparently the set you requested has already been subscribed. You cannot add tables/sequences to an already-subscribed set. You will need to create a new set, add the objects to that new set, and set up subscriptions to that. 
+ERROR: Slony-I setAddSequence_int(): set % not found + Apparently the set you requested is not available... + +ERROR: Slony-I setAddSequence_int(): sequence % not found + Apparently the sequence you requested is not available on this node. How did you set up the schemas on the subscribers??? + +ERROR: Slony-I setAddSequence_int(): % is not a sequence + Seems pretty obvious :-). + +ERROR: Slony-I setAddSequence_int(): sequence ID % has already been assigned + Each sequence ID added must be unique; apparently you have reused an ID. + +ERROR: Slony-I setDropSequence_int(): sequence % not found + Could this sequence be in another set? + +ERROR: Slony-I setDropSequence_int(): set % not found + Could you have gotten the set ID wrong? + +ERROR: Slony-I setDropSequence_int(): set % has origin at another node - submit this to that node + + This message seems fairly self-explanatory... + +ERROR: Slony-I setMoveTable_int(): table % not found + + Table wasn't found on this node; you probably gave the wrong ID number... + +ERROR: Slony-I setMoveTable_int(): set ids cannot be identical + + Does it make sense to move a table from a set into the very same set? + +ERROR: Slony-I setMoveTable_int(): set % not found + + Set wasn't found on this node; you probably gave the wrong ID number... + +ERROR: Slony-I setMoveTable_int(): set % does not originate on local node + + Set wasn't found to have origin on this node; you probably gave the wrong EVENT NODE... + +ERROR: Slony-I setMoveTable_int(): subscriber lists of set % and % are different + + You can only move objects between sets that have identical subscriber lists. + +ERROR: Slony-I setMoveSequence_int(): sequence % not found + + Sequence wasn't found on this node; you probably gave the wrong ID number... + +ERROR: Slony-I setMoveSequence_int(): set ids cannot be identical + + Does it make sense to move a sequence from a set into the very same set? 
+ +ERROR: Slony-I setMoveSequence_int(): set % not found + + Set wasn't found on this node; you probably gave the wrong ID number... + +ERROR: Slony-I setMoveSequence_int(): set % does not originate on local node + + Set wasn't found to have origin on this node; you probably gave the wrong EVENT NODE... + +ERROR: Slony-I setMoveSequence_int(): subscriber lists of set % and % are different + + You can only move objects between sets that have identical subscriber lists. + +Slony-I: sequenceSetValue(): sequence % not found + + Curious; the sequence object is missing. Could someone have dropped it from the schema by hand (e.g. - not using )? + +ERROR: Slony-I ddlScript_prepare(): set % not found + + Set wasn't found on this node; you probably gave the wrong ID number... + +ERROR: Slony-I ddlScript_prepare_int(): set % not found + + Set wasn't found on this node; you probably gave the wrong ID number... + +ERROR: Slony-I: alterTableForReplication(): Table with id % not found + Apparently the table wasn't found; could the schema be messed up? + +ERROR: Slony-I: alterTableForReplication(): Table % is already in altered state + + Curious... We're trying to set a table up for replication +a second time? + +NOTICE: Slony-I: alterTableForReplication(): multiple instances of trigger % on table %'', + + This normally happens if you have a table that had a trigger attached to it that replication hid due to this being a subscriber node, and then you added a trigger by the same name back to replication. Now, when trying to "fix up" triggers, those two triggers conflict. + + The DDL script will keep running and rerunning, or the UNINSTALL NODE will keep failing, until you drop the visible trigger, by hand, much as you must have added it, by hand, earlier. + +ERROR: Slony-I: Unable to disable triggers + This is the error that follows the multiple triggers problem. 
+ERROR: Slony-I: alterTableRestore(): Table with id % not found + + This runs when a table is being restored to non-replicated state; apparently the replicated table wasn't found. + +ERROR: Slony-I: alterTableRestore(): Table % is not in altered state + + Hmm. The table isn't in altered replicated state. That shouldn't be, if replication had been working properly... + +ERROR: Slony-I: subscribeSet() must be called on provider + This function should only get called on the provider node. &slonik; normally handles this right, unless one had wrong DSNs in a &slonik; script... + + +ERROR: Slony-I: subscribeSet(): set % not found + Hmm. The provider node isn't aware of this set. Wrong parms to a &slonik; script? + +ERROR: Slony-I: subscribeSet(): set origin and receiver cannot be identical + Duh, an origin node can't subscribe to itself. + +ERROR: Slony-I: subscribeSet(): set provider and receiver cannot be identical + A receiver must subscribe to a different node... +Slony-I: subscribeSet(): provider % is not an active forwarding node for replication set % + + You can only use a live, active, forwarding provider as a data +source. + +Slony-I: subscribeSet_int(): set % is not active, cannot change provider + You can't change the provider just yet... +Slony-I: subscribeSet_int(): set % not found + This node isn't aware of the set... Perhaps you submitted wrong parms? +Slony-I: unsubscribeSet() must be called on receiver + Seems obvious... This probably indicates a bad &slonik; admin DSN... +Slony-I: Cannot unsubscribe set % while being provider + + This should seem obvious; will fail if a node has dependent subscribers for which it is the provider + +Slony-I: cleanupEvent(): Single node - deleting events < % + If there's only one node, the cleanup event will delete old events so that you don't get build-up of crud. +Slony-I: tableAddKey(): table % not found + Perhaps you didn't copy the schema over properly? 
+Slony-I: tableDropKey(): table with ID% not found + Seems curious; you were presumably replicating to this table, so for this to be gone seems rather odd... +Slony-I: determineIdxnameUnique(): table % not found + +Did you properly copy over the schema to a new node??? +Slony-I: table % has no primary key + + This likely signifies a bad loading of schema... + +Slony-I: table % has no unique index % + + This likely signifies a bad loading of schema... + +Slony-I: Logswitch to sl_log_2 initiated' + Indicates that &lslon; is in the process of switching over to this log table. +Slony-I: Logswitch to sl_log_1 initiated' + Indicates that &lslon; is in the process of switching over to this log table. +Previous logswitch still in progress + + An attempt was made to do a log switch while one was in progress... + + + + &nagios; Replication Checks &nagios; for monitoring replication @@ -321,13 +947,4 @@ mode:sgml sgml-omittag:nil sgml-shorttag:t -sgml-minimize-attributes:nil -sgml-always-quote-attributes:t -sgml-indent-step:1 -sgml-indent-data:t -sgml-parent-document:"book.sgml" -sgml-exposed-tags:nil -sgml-local-catalogs:("/usr/lib/sgml/catalog") -sgml-local-ecat-files:nil -End: ---> +sgm From cvsuser Thu Jul 27 17:12:16 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Modified numerous RAISE requests in stored functions to Message-ID: <20060728001216.239B811BF040@gborg.postgresql.org> Log Message: ----------- Modified numerous RAISE requests in stored functions to more clearly detail what function the failure occurs in. 
Modified Files: -------------- slony1-engine/src/backend: slony1_funcs.sql (r1.93 -> r1.94) -------------- next part -------------- Index: slony1_funcs.sql =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/src/backend/slony1_funcs.sql,v retrieving revision 1.93 retrieving revision 1.94 diff -Lsrc/backend/slony1_funcs.sql -Lsrc/backend/slony1_funcs.sql -u -w -r1.93 -r1.94 --- src/backend/slony1_funcs.sql +++ src/backend/slony1_funcs.sql @@ -561,7 +561,7 @@ end if; else if v_value is null then - raise exception ''Slony-I: registry key % is not an text value'', + raise exception ''Slony-I: registry key % is not a text value'', p_key; end if; end if; @@ -3016,7 +3016,7 @@ select 1 into v_sync_row from @NAMESPACE at .sl_sequence where seq_id = p_seq_id; if not found then - v_relkind := ''O''; -- all is OK + v_relkind := ''o''; -- all is OK else raise exception ''Slony-I: setAddSequence_int(): sequence ID % has already been assigned'', p_seq_id; end if; @@ -3333,22 +3333,22 @@ select seq_set into v_old_set_id from @NAMESPACE at .sl_sequence where seq_id = p_seq_id; if not found then - raise exception ''Slony-I: sequence %d not found'', p_seq_id; + raise exception ''Slony-I: setMoveSequence(): sequence %d not found'', p_seq_id; end if; -- ---- -- Check that both sets exist and originate here -- ---- if p_new_set_id = v_old_set_id then - raise exception ''Slony-I: set ids cannot be identical''; + raise exception ''Slony-I: setMoveSequence(): set ids cannot be identical''; end if; select set_origin into v_origin from @NAMESPACE at .sl_set where set_id = p_new_set_id; if not found then - raise exception ''Slony-I: set % not found'', p_new_set_id; + raise exception ''Slony-I: setMoveSequence(): set % not found'', p_new_set_id; end if; if v_origin != @NAMESPACE at .getLocalNodeId(''_ at CLUSTERNAME@'') then - raise exception ''Slony-I: set % does not originate on local node'', + raise exception ''Slony-I: 
setMoveSequence(): set % does not originate on local node'', p_new_set_id; end if; @@ -3456,7 +3456,7 @@ and SQ.seq_reloid = PGC.oid and PGC.relnamespace = PGN.oid; if not found then - raise exception ''Slony-I: sequence % not found'', p_seq_id; + raise exception ''Slony-I: sequenceSetValue(): sequence % not found'', p_seq_id; end if; -- ---- @@ -3859,11 +3859,11 @@ and PGXC.relname = T.tab_idxname for update; if not found then - raise exception ''Slony-I: Table with id % not found'', p_tab_id; + raise exception ''Slony-I: alterTableForReplication(): Table with id % not found'', p_tab_id; end if; v_tab_fqname = v_tab_row.tab_fqname; if v_tab_row.tab_altered then - raise exception ''Slony-I: Table % is already in altered state'', + raise exception ''Slony-I: alterTableForReplication(): Table % is already in altered state'', v_tab_fqname; end if; @@ -3910,7 +3910,7 @@ pi.indrelid = tab.tab_reloid and -- indexes table is this table pc.oid = tab.tab_reloid loop - raise notice ''Slony-I: multiple instances of trigger % on table %'', + raise notice ''Slony-I: alterTableForReplication(): multiple instances of trigger % on table %'', v_trec.tgname, v_trec.relname; v_tgbad := ''true''; end loop; @@ -4023,11 +4023,11 @@ and PGXC.relname = T.tab_idxname for update; if not found then - raise exception ''Slony-I: Table with id % not found'', p_tab_id; + raise exception ''Slony-I: alterTableRestore(): Table with id % not found'', p_tab_id; end if; v_tab_fqname = v_tab_row.tab_fqname; if not v_tab_row.tab_altered then - raise exception ''Slony-I: Table % is not in altered state'', + raise exception ''Slony-I: alterTableRestore(): Table % is not in altered state'', v_tab_fqname; end if; @@ -4130,15 +4130,15 @@ from @NAMESPACE at .sl_set where set_id = p_sub_set; if not found then - raise exception ''Slony-I: set % not found'', p_sub_set; + raise exception ''Slony-I: subscribeSet(): set % not found'', p_sub_set; end if; if v_set_origin = p_sub_receiver then raise exception - 
''Slony-I: set origin and receiver cannot be identical''; + ''Slony-I: subscribeSet(): set origin and receiver cannot be identical''; end if; if p_sub_receiver = p_sub_provider then raise exception - ''Slony-I: set provider and receiver cannot be identical''; + ''Slony-I: subscribeSet(): set provider and receiver cannot be identical''; end if; -- --- @@ -4150,7 +4150,7 @@ where sub_set = p_sub_set and sub_receiver = p_sub_provider and sub_forward and sub_active) then - raise exception ''Slony-I: provider % is not an active forwarding node for replication set %'', p_sub_provider, p_sub_set; + raise exception ''Slony-I: subscribeSet(): provider % is not an active forwarding node for replication set %'', p_sub_provider, p_sub_set; end if; end if; @@ -4204,7 +4204,7 @@ and sub_receiver = p_sub_receiver; if found then if not v_sub_row.sub_active then - raise exception ''Slony-I: set % is not active, cannot change provider'', + raise exception ''Slony-I: subscribeSet_int(): set % is not active, cannot change provider'', p_sub_set; end if; end if; @@ -4252,7 +4252,7 @@ from @NAMESPACE at .sl_set where set_id = p_sub_set; if not found then - raise exception ''Slony-I: set % not found'', p_sub_set; + raise exception ''Slony-I: subscribeSet_int(): set % not found'', p_sub_set; end if; if v_set_origin = @NAMESPACE at .getLocalNodeId(''_ at CLUSTERNAME@'') then @@ -4599,7 +4599,7 @@ select ev_origin, ev_seqno into v_min_row from @NAMESPACE at .sl_event where ev_origin = @NAMESPACE at .getLocalNodeId(''_ at CLUSTERNAME@'') order by ev_origin desc, ev_seqno desc limit 1; - raise notice ''Single node - deleting events < %'', v_min_row.ev_seqno; + raise notice ''Slony-I: cleanupEvent(): Single node - deleting events < %'', v_min_row.ev_seqno; delete from @NAMESPACE at .sl_event where ev_origin = v_min_row.ev_origin and @@ -4668,7 +4668,7 @@ -- anything means the table does not exist. 
-- if not found then - raise exception ''Slony-I: table % not found'', v_tab_fqname_quoted; + raise exception ''Slony-I: tableAddKey(): table % not found'', v_tab_fqname_quoted; end if; -- @@ -4734,7 +4734,7 @@ and T.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid; if not found then - raise exception ''Slony-I: table with ID % not found'', p_tab_id; + raise exception ''Slony-I: tableDropKey(): table with ID % not found'', p_tab_id; end if; -- ---- @@ -4784,7 +4784,7 @@ where @NAMESPACE at .slon_quote_brute(PGN.nspname) || ''.'' || @NAMESPACE at .slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace) is null then - raise exception ''Slony-I: table % not found'', v_tab_fqname_quoted; + raise exception ''Slony-I: determineIdxnameUnique(): table % not found'', v_tab_fqname_quoted; end if; -- From cvsuser Thu Jul 27 17:43:46 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:36 2007 Subject: [Slony1-commit] By cbbrowne: Fix up tagging problems to admin guide... Message-ID: <20060728004346.621CE11BF040@gborg.postgresql.org> Log Message: ----------- Fix up tagging problems to admin guide... Modified Files: -------------- slony1-engine/doc/adminguide: monitoring.sgml (r1.26 -> r1.27) slony.sgml (r1.31 -> r1.32) -------------- next part -------------- Index: monitoring.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v retrieving revision 1.26 retrieving revision 1.27 diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.26 -r1.27 --- doc/adminguide/monitoring.sgml +++ doc/adminguide/monitoring.sgml @@ -734,11 +734,11 @@ Hmm. The table isn't in altered replicated state. That shouldn't be, if replication had been working properly... ERROR: Slony-I: subscribeSet() must be called on provider - This function should only get called on the provider node. 
&slonik; normally handles this right, unless one had wrong DSNs in a &slonik; script... + This function should only get called on the provider node. &lslonik; normally handles this right, unless one had wrong DSNs in a &lslonik; script... ERROR: Slony-I: subscribeSet(): set % not found - Hmm. The provider node isn't aware of this set. Wrong parms to a &slonik; script? + Hmm. The provider node isn't aware of this set. Wrong parms to a &lslonik; script? ERROR: Slony-I: subscribeSet(): set origin and receiver cannot be identical Duh, an origin node can't subscribe to itself. @@ -755,7 +755,7 @@ Slony-I: subscribeSet_int(): set % not found This node isn't aware of the set... Perhaps you submitted wrong parms? Slony-I: unsubscribeSet() must be called on receiver - Seems obvious... This probably indicates a bad &slonik; admin DSN... + Seems obvious... This probably indicates a bad &lslonik; admin DSN... Slony-I: Cannot unsubscribe set % while being provider This should seem obvious; will fail if a node has dependent subscribers for which it is the provider @@ -790,7 +790,8 @@ &nagios; Replication Checks -&nagios; for monitoring replication +&nagios; for monitoring +replication The script in the tools directory called pgsql_replication_check.pl represents some of the @@ -947,4 +948,13 @@ mode:sgml sgml-omittag:nil sgml-shorttag:t -sgm +sgml-minimize-attributes:nil +sgml-always-quote-attributes:t +sgml-indent-step:1 +sgml-indent-data:t +sgml-parent-document:"book.sgml" +sgml-exposed-tags:nil +sgml-local-catalogs:"/usr/lib/sgml/catalog" +sgml-local-ecat-files:nil +End: +--> Index: slony.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v retrieving revision 1.31 retrieving revision 1.32 diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.31 -r1.32 --- doc/adminguide/slony.sgml +++ doc/adminguide/slony.sgml @@ -49,6 +49,7 @@ "> pg_listener"> "> +"> ]> From 
cvsuser Sat Jul 29 04:30:07 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:37 2007 Subject: [Slony1-commit] By cbbrowne: Reorganized log documentation, and linked it into various Message-ID: <20060729113007.C569811BF031@gborg.postgresql.org> Log Message: ----------- Reorganized log documentation, and linked it into various places that should point to it. Modified Files: -------------- slony1-engine/doc/adminguide: addthings.sgml (r1.17 -> r1.18) ddlchanges.sgml (r1.24 -> r1.25) filelist.sgml (r1.16 -> r1.17) logshipping.sgml (r1.14 -> r1.15) monitoring.sgml (r1.27 -> r1.28) slony.sgml (r1.32 -> r1.33) -------------- next part -------------- Index: addthings.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v retrieving revision 1.17 retrieving revision 1.18 diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.17 -r1.18 --- doc/adminguide/addthings.sgml +++ doc/adminguide/addthings.sgml @@ -39,9 +39,9 @@ things. For instance, submitting multiple subscription requests for a particular set in one script often turns out quite badly. If it is truly necessary to -automate this, you'll probably want to submit requests in between subscription requests in -order that the script wait for one +automate this, you'll probably want to +submit requests in between subscription +requests in order that the script wait for one subscription to complete processing before requesting the next one. @@ -111,7 +111,12 @@ There are a number of sharp edges to note... - You absolutely must not include transaction control commands, particularly BEGIN and COMMIT, inside these DDL scripts. 
&slony1; wraps DDL scripts with a BEGIN/COMMIT pair; adding extra transaction control will mean that parts of the DDL will commit outside the control of &slony1; + You absolutely must not include +transaction control commands, particularly BEGIN +and COMMIT, inside these DDL scripts. &slony1; +wraps DDL scripts with a BEGIN/COMMIT +pair; adding extra transaction control will mean that parts of the DDL +will commit outside the control of &slony1; Before version 1.2, it was necessary to be exceedingly restrictive about what you tried to process using @@ -137,7 +142,8 @@ How to remove replication for a node - You will want to remove the various &slony1; components connected to the database(s). + You will want to remove the various &slony1; components +connected to the database(s). We will just consider, for now, doing this to one node. If you have multiple nodes, you will have to repeat this as many times as @@ -149,13 +155,15 @@ Log Triggers / Update Denial Triggers - The "cluster" schema containing &slony1; tables indicating the state of the node as well as various stored functions - - - process that manages the node - - - Optionally, the SQL and pl/pgsql scripts and &slony1; binaries that are part of the &postgres; build. (Of course, this would make it challenging to restart replication; it is unlikely that you truly need to do this...) + The cluster schema containing &slony1; +tables indicating the state of the node as well as various stored +functions + + &lslon; process that manages the node + Optionally, the SQL and pl/pgsql scripts and &slony1; +binaries that are part of the &postgres; build. (Of course, this would +make it challenging to restart replication; it is unlikely that you +truly need to do this...) @@ -186,8 +194,8 @@ CASCADE;, which will drop out &slony1; functions, tables, and triggers alike. That is generally less suitable than , because that command not only -drops the schema and its contents, but also removes any columns added -in using . 
+drops the schema and its contents, but also removes any columns +previously added in using . @@ -306,15 +314,31 @@ + How do I upgrade +&slony1; to a newer version? + What happens when I fail over? - To be written... + Some of this is described under but +more of a procedure should be written... How do I move master to a new node? - Obviously, use ; more details -should be added... - + You must first pick a node that is connected to the former +origin (otherwise it is not straightforward to reverse connections in +the move to keep everything connected). + + Second, you must run a &lslonik; script with the +command to lock the set on the origin +node. Note that at this point you have an application outage under +way, as what this does is to put triggers on the origin that rejects +updates. + + Now, submit the &lslonik; request. +It's perfectly reasonable to submit both requests in the same +&lslonik; script. Now, the origin gets switched over to the new +origin node. If the new node is a few events behind, it may take a +little while for this to take place. Index: monitoring.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v retrieving revision 1.27 retrieving revision 1.28 diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.27 -r1.28 --- doc/adminguide/monitoring.sgml +++ doc/adminguide/monitoring.sgml @@ -4,794 +4,9 @@ monitoring &slony1; -Here are some of things that you may find in your &slony1; logs, -and explanations of what they mean. - -CONFIG notices - -These entries are pretty straightforward. They are informative -messages about your configuration. 
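The MOVE SET procedure just described has two preconditions: the new origin should be directly connected to the former origin, and if it is a few events behind, the move simply takes longer while the backlog drains. A rough Python sketch of that readiness check (the connectivity map and confirmation bookkeeping are invented for illustration, not Slony-I internals):

```python
# Hypothetical sketch of the MOVE SET preconditions discussed above.
# 'paths' and 'confirmed' are stand-ins for sl_path / sl_confirm data.

def can_move_set(old_origin, new_origin, paths, confirmed, last_event):
    """paths: set of (server, client) node-id pairs;
    confirmed[node]: highest event seqno from old_origin that node
    has processed; last_event: latest seqno on the old origin."""
    # The new origin must connect directly to the former origin, or
    # reversing the subscription direction is not straightforward.
    if (old_origin, new_origin) not in paths:
        return False, "no direct path from old origin"
    lag = last_event - confirmed.get(new_origin, 0)
    if lag > 0:
        # Not fatal: the move just takes a while as events catch up.
        return True, "new origin is %d events behind; expect a delay" % lag
    return True, "caught up"
```

This mirrors the advice above: pick a directly connected node, lock the set, then submit the move; lag only delays the switchover.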
- -Here are some typical entries that you will probably run into in -your logs: - - -CONFIG main: local node id = 1 -CONFIG main: loading current cluster configuration -CONFIG storeNode: no_id=3 no_comment='Node 3' -CONFIG storePath: pa_server=5 pa_client=1 pa_conninfo="host=127.0.0.1 dbname=foo user=postgres port=6132" pa_connretry=10 -CONFIG storeListen: li_origin=3 li_receiver=1 li_provider=3 -CONFIG storeSet: set_id=1 set_origin=1 set_comment='Set 1' -CONFIG main: configuration complete - starting threads - - -DEBUG Notices - -Debug notices are always prefaced by the name of the thread that -the notice originates from. You will see messages from the following -threads: - - -localListenThread - - This is the local thread that listens for events on -the local node. - -remoteWorkerThread-X - - The thread processing remote events. You can expect -to see one of these for each node that this node communicates -with. - -remoteListenThread-X - -Listens for events on a remote node database. You may -expect to see one of these for each node in the -cluster. - -cleanupThread Takes care -of things like vacuuming, cleaning out the confirm and event tables, -and deleting old data. - -syncThread Generates SYNC -events. - - - - - - How to read &slony1; logs - - Note that as far as slon is concerned, there is no -master or slave. They are just -nodes. - -What you can expect, initially, is to see, on both nodes, some -events propagating back and forth. Firstly, there should be some -events published to indicate creation of the nodes and paths. If you -don't see those, then the nodes aren't likely to be able to -communicate with one another, and nothing else will happen... - - - -Create the two nodes. - - No slons are running yet, so there are no logs to look -at. - - - - Start the two slons - - The logs for each will start out very quiet, as neither node -has much to say, and neither node knows how to talk to another -node. - - - - Do the to set up -communications paths. 
That will allow the nodes to start to become -aware of one another. - - The slon logs should now start to receive events from -foreign nodes. - - In version 1.0, is not set up -automatically, so things still remain quiet until you explicitly -submit STORE LISTEN requests. In version 1.1, the -listen paths are set up automatically, which will much -more quickly get the communications network up and running. - - If you look at the contents of the tables and and , on each node, that should give a good idea -as to where things stand. Until the starts, -each node may only be partly configured. If there are two nodes, -there should be two entries in all three of these tables once the -communications configuration is set up properly. If there are fewer -entries than that, well, that should give you some idea of what is -missing. - - - If needed (e.g. - before version -1.1), submit requests to indicate how -the nodes will use the communications paths. - - Once this has been done, the nodes' logs should show a greater -level of activity, with events periodically being initiated on one -node or the other, and propagating to the other. - - - You'll set up the set (), add tables (), and sequences (), and will see relevant events only on -the origin node for the set. - - Then, when you submit the request, the event should go to both -nodes. - - The origin node has little more to do, after that... The -subscriber will then have a COPY_SET event, which -will lead to logging information about adding each table and copying -its data. - - - -After that, you'll mainly see two sorts of behaviour: - - - - On the origin, there won't be too terribly much -logged, just indication that some SYNC events are -being generated and confirmed by other nodes. - - On the subscriber, there will be reports of -SYNC events, and that the subscriber pulls data -from the provider for the relevant set(s). 
This will happen -infrequently if there are no updates going to the origin node; it will -happen frequently when the origin sees heavy updates. - - - - - WriteMe: I can't decide the format for the rest of this. I -think maybe there should be a "how it works" page, explaining more -about how the threads work, what to expect in the logs after you run a - - - - Errors and Implications - - -remoteWorkerThread_%d: log archive failed %s - %s\n - - This indicates that an error was encountered trying to write a -log shipping file. Normally the &lslon; will retry, and hopefully -succeed. - -remoteWorkerThread_%d: DDL preparation failed - set %d - only on node % - Something broke when applying a DDL script on one of the nodes. -This is quite likely indicates that the node's schema differed from -that on the origin; you may need to apply a change manually to the -node to allow the event to proceed. The scary, scary alternative -might be to delete the offending event, assuming it can't possibly -work... -SLON_CONFIG: remoteWorkerThread_%d: DDL request with %d statements - This is informational, indicating how many SQL statements were processed. -SLON_ERROR: remoteWorkerThread_%d: DDL had invalid number of statements - %d - - Occurs if there were < 0 statements (which should be impossible) or > MAXSTATEMENTS statements. Probably the script was bad... - -ERROR: remoteWorkerThread_%d: malloc() -failure in DDL_SCRIPT - could not allocate %d bytes of -memory - - This should only occur if you submit some extraordinarily large -DDL script that makes a &lslon; run out of memory - -CONFIG: remoteWorkerThread_%d: DDL Statement %d: [%s] - - This lists each DDL statement as it is submitted. -ERROR: DDL Statement failed - %s - - Oh, dear, one of the DDL statements that worked on the origin -failed on this remote node... - -CONFIG: DDL Statement success - %s - - All's well... 
- -ERROR: remoteWorkerThread_%d: Could not generate DDL archive tracker %s - %s - - Apparently the DDL script couldn't be written to a log shipping file... - -ERROR: remoteWorkerThread_%d: Could not submit DDL script %s - %s - -Couldn't write the script to a log shipping file. - -ERROR: remoteWorkerThread_%d: Could not close DDL script %s - %s - -Couldn't close a log shipping file for a DDL script. - -FATAL: remoteWorkerThread_%d: pthread_create() - %s - - Couldn't create a new remote worker thread. - -DEBUG1 remoteWorkerThread_%d: helper thread for provider %d created - - This normally happens when the &lslon; starts: a thread is created for each node to which the local node should be listening for events. - -DEBUG4: remoteWorkerThread_%d: added active set %d to provider %d - - Indicates that this set is being provided by this -provider. - -DEBUG1: remoteWorkerThread_%d: helper thread for provider %d terminated - - If subscriptions reshape such that a node no longer provides a -subscription, then the thread that works on that node can be -dropped. - -DEBUG1: remoteWorkerThread_%d: disconnecting -from data provider %d - - A no-longer-used data provider may be dropped; if connection -information is changed, the &lslon; needs to disconnect and -reconnect. - -DEBUG2: remoteWorkerThread_%d: ignore new events due to shutdown - - If the &lslon; is shutting down, it is futile to process more events -DEBUG2: remoteWorkerThread_%d: event %d ignored - unknown origin - - Probably happens if events arrive before -the STORE_NODE event that tells that the new node -now exists... - -WARN: remoteWorkerThread_%d: event %d ignored - origin inactive - This shouldn't occur now (2006) as we don't support the notion of deactivating a node... - -DEBUG2: remoteWorkerThread_%d: event %d ignored - duplicate - - This might be expected to happen if the event notification -comes in concurrently from two sources... 
- -DEBUG2: remoteWorkerThread_%d: unknown node %d - - Happens if the &lslon; is unaware of this node; probably a sign -of STORE_NODE requests not -propagating... - -DEBUG1: remoteWorkerThread_%d: node %d - no worker thread - - Curious: we can't wake up the worker thread; there probably -should already be one... - -DEBUG2: remoteWorkerThread_%d: forward confirm %d,%s received by %d - - These events should occur frequently and routinely as nodes report confirmations of the events they receive. - -DEBUG1: copy_set %d - - This indicates the beginning of copying data for a new subscription. - -ERROR: remoteWorkerThread_%d: set %d not found in runtime configuration - - &lslon; tried starting up a subscription; it couldn't find conninfo for the data source. Perhaps paths are not properly propagated? - -ERROR: remoteWorkerThread_%d: node %d has no pa_conninfo - - Apparently the conninfo configuration -was wrong... - -ERROR: copy set %d cannot connect to provider DB node %d - - &lslon; couldn't connect to the provider. Is the conninfo -wrong? Or perhaps authentication is misconfigured? Or perhaps the -database is down? - -DEBUG1: remoteWorkerThread_%d: connected to provider DB - - Excellent: the copy set has a connection to its provider -ERROR: remoteWorkerThread_%d: Could not open COPY SET archive file %s - %s - - Seems pretty self-explanatory... -ERROR: remoteWorkerThread_%d: Could not generate COPY SET archive header %s - %s - - Probably means that we just filled up a filesystem... - -WARN: remoteWorkerThread_%d: transactions -earlier than XID %s are still in progress - - This indicates that some old transaction is in progress from before the earliest available SYNC on the provider. &slony1; cannot start replicating until that transaction completes. This will repeat until the transaction completes... - - -DEBUG2: remoteWorkerThread_%d: prepare to copy table %s - - This indicates that &lslon; is beginning preparations to set up subscription for a table. 
-DEBUG1: remoteWorkerThread_%d: table %s will require Slony-I serial key - - Evidently this is a table defined with where &slony1; has to add a surrogate primary key. -ERROR: remoteWorkerThread_%d: Could not lock table %s on subscriber - - For whatever reason, the table could not be locked, so the -subscription needs to be restarted. If the problem was something like -a deadlock, retrying may help. If the problem was otherwise, you may -need to intervene... - -DEBUG2: remoteWorkerThread_%d: all tables for set %d found on subscriber - - An informational message indicating that the first pass through the tables found no problems... -DEBUG2: remoteWorkerThread_%d: copy sequence %s - - Processing some sequence... -DEBUG2: remoteWorkerThread_%d: copy table %s - - &lslon; is starting to copy a table... -DEBUG3: remoteWorkerThread_%d: table %s Slony-I serial key added local - - Just added new column to the table to provide surrogate primary key. -DEBUG3: remoteWorkerThread_%d: local table %s already has Slony-I serial key - - Did not need to add serial key; apparently it was already there. -DEBUG3: remoteWorkerThread_%d: table %s does not require Slony-I serial key - - Apparently this table didn't require a special serial key... - -DEBUG3: remoteWorkerThread_%d: table %s Slony-I serial key added local -DEBUG2: remoteWorkerThread_%d: Begin COPY of table %s - - &lslon; is about to start the COPY on both sides to copy a table... -ERROR: remoteWorkerThread_%d: Could not generate copy_set request for %s - %s - - This indicates that the delete/copy requests -failed on the subscriber. The &lslon; will repeat -the COPY_SET attempt; it will probably continue to -fail.. - -ERROR: remoteWorkerThread_%d: copy to stdout on provider - %s %s - - Evidently something about the COPY to stdout on the provider node broke... The event will be retried... 
- -ERROR: remoteWorkerThread_%d: copy from stdin on local node - %s %s - - Evidently something about the COPY into the table on the -subscriber node broke... The event will be -retried... - -DEBUG2: remoteWorkerThread_%d: %d bytes copied for table %s - - This message indicates that the COPY of the table has -completed. This is followed by running ANALYZE and -reindexing the table on the subscriber. - -DEBUG2: remoteWorkerThread_%d: %.3f seconds -to copy table %s - - After this message, copying and reindexing and analyzing the table on the subscriber is complete. - -DEBUG2: remoteWorkerThread_%d: set last_value of sequence %s (%s) to %s - - As should be no surprise, this indicates that a sequence has been processed on the subscriber. - -DEBUG2: remoteWorkerThread_%d: %.3f seconds to copy sequences - - Summarizing the time spent processing sequences in the COPY_SET event. - -ERROR: remoteWorkerThread_%d: query %s did not return a result - - This indicates that the query, as part of final processing of COPY_SET, failed. The copy will restart... - -DEBUG2: remoteWorkerThread_%d: copy_set no previous SYNC found, use enable event - - This takes place if no past SYNC event was found; the current -event gets set to the event point of -the ENABLE_SUBSCRIPTION event. - - -DEBUG2: remoteWorkerThread_%d: copy_set SYNC found, use event seqno %s - - This takes place if a SYNC event was found; the current -event gets set as shown. - -ERROR: remoteWorkerThread_%d: sl_setsync entry for set %d not found on provider - - SYNC synchronization information was expected to be drawn from -an existing subscriber, but wasn't found. Something -replication-breakingly-bad has probably -happened... -DEBUG1: remoteWorkerThread_%d: could not insert to sl_setsync_offline - - Oh, dear. After setting up a subscriber, and getting pretty -well everything ready, some writes to a log shipping file failed. -Perhaps disk filled up... 
- -DEBUG1: remoteWorkerThread_%d: %.3f seconds to build initial setsync status - - Indicates the total time required to get the copy_set event finalized... - - -DEBUG1: remoteWorkerThread_%d: disconnected from provider DB - - At the end of a subscribe set event, the subscriber's &lslon; -will disconnect from the provider, clearing out -connections... - -DEBUG1: remoteWorkerThread_%d: copy_set %d done in %.3f seconds - - Indicates the total time required to complete copy_set... This indicates a successful subscription! - - -DEBUG1: remoteWorkerThread_%d: SYNC %d processing - - This indicates the start of processing of a SYNC - -ERROR: remoteWorkerThread_%d: No pa_conninfo -for data provider %d - - Oh dear, we haven't connection information to connect to the -data provider. That shouldn't be possible, -normally... - -ERROR: remoteWorkerThread_%d: cannot connect to data provider %d on 'dsn' - - Oh dear, we haven't got correct connection -information to connect to the data provider. - -DEBUG1: remoteWorkerThread_%d: connected to data provider %d on 'dsn' - - Excellent; the &lslon; has connected to the provider. - -WARN: remoteWorkerThread_%d: don't know what ev_seqno node %d confirmed for ev_origin %d - - There's no confirmation information available for this node's provider; need to abort the SYNC and wait a bit in hopes that that information will emerge soon... -DEBUG1: remoteWorkerThread_%d: data provider %d only confirmed up to ev_seqno %d for ev_origin %d - - The provider for this node is a subscriber, and apparently that subscriber is a bit behind. The &lslon; will need to wait for the provider to catch up until it has new data. -DEBUG2: remoteWorkerThread_%d: data provider %d confirmed up to ev_seqno %s for ev_origin %d - OK - - All's well; the provider should have the data that the subscriber needs... 
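The three confirmation messages just above boil down to comparing the SYNC the subscriber needs against what its data provider has confirmed from the set origin. A hedged sketch of that comparison (the function and argument names are illustrative, not Slony-I's):

```python
# Illustrative sketch of the provider-confirmation check whose WARN/DEBUG
# messages are listed above.  Not Slony-I's actual implementation.

def provider_status(wanted_seqno, confirmed_seqno):
    """wanted_seqno: the SYNC the subscriber wants to process;
    confirmed_seqno: highest ev_seqno the data provider has confirmed
    for the set origin (None if no confirmation data is available)."""
    if confirmed_seqno is None:
        # WARN: don't know what ev_seqno the provider confirmed
        return "abort SYNC and wait for confirmation data"
    if confirmed_seqno < wanted_seqno:
        # DEBUG1: provider only confirmed up to ev_seqno ...
        return "wait for provider to catch up"
    # DEBUG2: provider confirmed up to ev_seqno ... - OK
    return "OK"
```

This is why a subscriber feeding from another subscriber can stall: the middle node must confirm an event before nodes downstream of it can process it.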
- -DEBUG2: remoteWorkerThread_%d: syncing set %d with %d table(s) from provider %d - This is declaring the plans for a SYNC: we have a set with some tables to process. -DEBUG2: remoteWorkerThread_%d: ssy_action_list value: %s length: %d - - This portion of the query to collect log data to be applied has been known to bloat up; this shows how it has gotten compressed... - -DEBUG2: remoteWorkerThread_%d: writing archive log... - - This indicates that a log shipping archive log is being written for a particular SYNC set. -DEBUG2: remoteWorkerThread_%d: Didn't add OR to provider - - This indicates that there wasn't anything in a provider clause in the query to collect log data to be applied, which shouldn't be. Things are quite likely to go bad at this point... -DEBUG2: remoteWorkerThread_%d: no sets need syncing for this event - -This will be the case for all SYNC events generated on nodes that are not originating replication sets. You can expect to see these messages reasonably frequently. -ERROR: remoteWorkerThread_%d: cannot determine current log status - - The attempt to read from sl_log_status, which determines -whether we're working on sl_log_1 -or sl_log_2 got no results; that can't be a good thing, -as there certainly should be data here... Replication is likely about -to halt... - -DEBUG2: remoteWorkerThread_%d: current local log_status is %d - This indicates which of sl_log_1 and sl_log_2 are being used to store replication data. - -DEBUG3: remoteWorkerThread_%d: activate helper %d - - We're about to kick off a thread to help process SYNC data... - -DEBUG4: remoteWorkerThread_%d: waiting for log data - - The thread is waiting to get data to consume (e.g. - apply to the replica). - -ERROR: remoteWorkerThread_%d: %s %s - qualification was %s - - Apparently an application of replication data to the subscriber failed... This quite likely indicates some sort of serious corruption. 
-ERROR: remoteWorkerThread_%d: replication query did not affect one row (cmdTuples = %s) - query was: %s qualification was: %s
-
- If SLON_CHECK_CMDTUPLES is set, &lslon; applies
-changes one tuple at a time, and verifies that each change affects
-exactly one tuple. Apparently that wasn't the case here, which
-suggests a corruption of replication. That's a rather bad
-thing...
-
-ERROR: remoteWorkerThread_%d: SYNC aborted
-
- If any errors have been encountered that haven't already aborted the SYNC, this catches and aborts it.
-
-DEBUG2: remoteWorkerThread_%d: new sl_rowid_seq value: %s
-
- This marks the progression of this internal &slony1; sequence.
-
-INFO: remoteWorkerThread_%d: Run Archive Command %s
-
- If &lslon; has been configured to run a command after generating each log shipping archive log, this reports when that process is spawned using system().
-
-DEBUG2: remoteWorkerThread_%d: SYNC %d done in %.3f seconds
-
- This indicates the successful completion of a SYNC. Hurray!
-
-DEBUG1: remoteWorkerThread_%d_d:%.3f seconds delay for first row
-
- This indicates how long it took to get the first row from the LOG cursor that pulls in data from the sl_log tables.
-
-ERROR: remoteWorkerThread_%d_d: large log_cmddata for actionseq %s not found
-
- &lslon; could not find the data for one of the very large sl_log table tuples that are pulled individually. This shouldn't happen.
-
-DEBUG2: remoteWorkerThread_%d_d:%.3f seconds until close cursor
-
- This indicates how long it took to complete reading data from the LOG cursor that pulls in data from the sl_log tables.
-
-DEBUG2: remoteWorkerThread_%d_d: inserts=%d updates=%d deletes=%d
-
- This reports how much activity was recorded in the current SYNC set.
-
-DEBUG3: remoteWorkerThread_%d: compress_actionseq(list,subquery) Action list: %s
-
- This indicates a portion of the LOG cursor query that is about to be compressed. (In some cases, this could grow to enormous size, blowing up the query parser...)
-DEBUG3: remoteWorkerThread_%d: compressed actionseq subquery %s
-
- This indicates what that subquery compressed into.
-
-DEBUG1: remoteWorkerThread_%d:
-DEBUG1: remoteWorkerThread_%d:
-DEBUG1: remoteWorkerThread_%d:
-
-ERROR: Slonik version: @MODULEVERSION@ != Slony-I version in PG build %
-
- This is raised in checkmoduleversion() if there is a mismatch between the version of &slony1; as reported by &lslonik; versus what the &postgres; build has.
-
-ERROR: Slony-I: registry key % is not an int4 value
-
- Raised in registry_get_int4(), this complains if a requested value turns out to be NULL.
-
-ERROR: registry key % is not a text value
-
- Raised in registry_get_text(), this complains if a requested value turns out to be NULL.
-
-ERROR: registry key % is not a timestamp value
-
- Raised in registry_get_timestamp(), this complains if a requested value turns out to be NULL.
-
-NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=%
-
- This will occur when a &lslon; starts up after another has crashed; this is routine cleanup.
-
-ERROR: Slony-I: This node is already initialized
-
- This would typically happen if you submit a request against a node that has already been set up with the &slony1; schema.
-
-ERROR: Slony-I: node % not found
-
- An attempt to mark a node not listed locally as enabled should fail...
-
-ERROR: Slony-I: node % is already active
-
- An attempt to mark a node that is already active as active should fail...
-
-ERROR: Slony-I: DROP_NODE cannot initiate on the dropped node
-
- You need to have an EVENT NODE other than the node that is to be dropped...
-
-ERROR: Slony-I: Node % is still configured as a data provider
-
- You cannot drop a node that is in use as a data provider; you
-need to reshape subscriptions so no nodes are dependent on it first.
-
-ERROR: Slony-I: Node % is still origin of one or more sets
-
- You can't drop a node if it is the origin for a set! Use MOVE SET or FAILOVER first.
-
-ERROR: Slony-I: cannot failover - node % has no path to the backup node
-
- You cannot fail over to a node that isn't connected to all the subscribers, at least indirectly.
-
-ERROR: Slony-I: cannot failover - node % is not subscribed to set %
-
- You can't fail over to a node that doesn't subscribe to all the relevant sets.
-
-ERROR: Slony-I: cannot failover - subscription for set % is not active
-
- If the subscription has been set up, but isn't yet active, that's still no good.
-
-ERROR: Slony-I: cannot failover - node % is not a forwarder of set %
-
- You can only fail over or move a set to a node that has
-forwarding turned on.
-
-NOTICE: failedNode: set % has no other direct receivers - move now
-
- If the backup node is the only direct subscriber, then life is a bit simplified... No need to reshape any subscriptions!
-
-NOTICE: failedNode set % has other direct receivers - change providers only
-
- In this case, all direct subscribers are pointed to the backup node, and the backup node is pointed to receive from another node so it can get caught up.
-
-NOTICE: Slony-I: Please drop schema _@CLUSTERNAME@
-
- A node has been uninstalled; you may need to drop the schema...
-
-ERROR: Slony-I: setAddTable_int(): table % has no index %
-
- Apparently a PK index was specified that is absent on this node...
-
-ERROR: Slony-I setAddTable_int(): table % not found
-
- Table wasn't found on this node; did you load the schema in properly?
-
-ERROR: Slony-I setAddTable_int(): table % is not a regular table
-
- You tried to replicate something that isn't a table; you can't do that!
-
-NOTICE: Slony-I setAddTable_int(): table % PK column % nullable
-
- You tried to replicate a table where one of the columns in the would-be primary key is allowed to be null. All PK columns must be NOT NULL. This request is about to fail.
-
-ERROR: Slony-I setAddTable_int(): table % not replicable!
-
- This happens because of the NULLable PK column.
-ERROR: Slony-I setAddTable_int(): table id % has already been assigned!
-
- The table ID value needs to be assigned uniquely
-in SET ADD TABLE; apparently you requested a value
-already in use.
-
-ERROR: Slony-I setDropTable(): table % not found
-
- Table wasn't found on this node; are you sure you had the ID right?
-
-ERROR: Slony-I setDropTable(): set % not found
-
- The replication set wasn't found on this node; are you sure you had the ID right?
-
-ERROR: Slony-I setDropTable(): set % has remote origin
-
- The replication set doesn't originate on this node; you probably need to specify an EVENT NODE in the command.
-
-ERROR: Slony-I setAddSequence(): set % not found
-
- Apparently the set you requested is not available...
-
-ERROR: Slony-I setAddSequence(): set % has remote origin
-
- You may only add things at the origin node.
-
-ERROR: Slony-I setAddSequence(): cannot add sequence to currently subscribed set %
-
- Apparently the set you requested has already been subscribed. You cannot add tables/sequences to an already-subscribed set. You will need to create a new set, add the objects to that new set, and set up subscriptions to that.
-
-ERROR: Slony-I setAddSequence_int(): set % not found
-
- Apparently the set you requested is not available...
-
-ERROR: Slony-I setAddSequence_int(): sequence % not found
-
- Apparently the sequence you requested is not available on this node. How did you set up the schemas on the subscribers?
-
-ERROR: Slony-I setAddSequence_int(): % is not a sequence
-
- Seems pretty obvious :-).
-
-ERROR: Slony-I setAddSequence_int(): sequence ID % has already been assigned
-
- Each sequence ID added must be unique; apparently you have reused an ID.
-
-ERROR: Slony-I setDropSequence_int(): sequence % not found
-
- Could this sequence be in another set?
-
-ERROR: Slony-I setDropSequence_int(): set % not found
-
- Could you have gotten the set ID wrong?
-
-ERROR: Slony-I setDropSequence_int(): set % has origin at another node - submit this to that node
-
- This message seems fairly self-explanatory...
-
-ERROR: Slony-I setMoveTable_int(): table % not found
-
- Table wasn't found on this node; you probably gave the wrong ID number...
-
-ERROR: Slony-I setMoveTable_int(): set ids cannot be identical
-
- Does it make sense to move a table from a set into the very same set?
-
-ERROR: Slony-I setMoveTable_int(): set % not found
-
- Set wasn't found on this node; you probably gave the wrong ID number...
-
-ERROR: Slony-I setMoveTable_int(): set % does not originate on local node
-
- Set wasn't found to have its origin on this node; you probably gave the wrong EVENT NODE...
-
-ERROR: Slony-I setMoveTable_int(): subscriber lists of set % and % are different
-
- You can only move objects between sets that have identical subscriber lists.
-
-ERROR: Slony-I setMoveSequence_int(): sequence % not found
-
- Sequence wasn't found on this node; you probably gave the wrong ID number...
-
-ERROR: Slony-I setMoveSequence_int(): set ids cannot be identical
-
- Does it make sense to move a sequence from a set into the very same set?
-
-ERROR: Slony-I setMoveSequence_int(): set % not found
-
- Set wasn't found on this node; you probably gave the wrong ID number...
-
-ERROR: Slony-I setMoveSequence_int(): set % does not originate on local node
-
- Set wasn't found to have its origin on this node; you probably gave the wrong EVENT NODE...
-
-ERROR: Slony-I setMoveSequence_int(): subscriber lists of set % and % are different
-
- You can only move objects between sets that have identical subscriber lists.
-
-Slony-I: sequenceSetValue(): sequence % not found
-
- Curious; the sequence object is missing. Could someone have dropped it from the schema by hand (e.g. - not using slonik)?
-
-ERROR: Slony-I ddlScript_prepare(): set % not found
-
- Set wasn't found on this node; you probably gave the wrong ID number...
-
-ERROR: Slony-I ddlScript_prepare_int(): set % not found
-
- Set wasn't found on this node; you probably gave the wrong ID number...
-
-ERROR: Slony-I: alterTableForReplication(): Table with id % not found
-
- Apparently the table wasn't found; could the schema be messed up?
-
-ERROR: Slony-I: alterTableForReplication(): Table % is already in altered state
-
- Curious... We're trying to set a table up for replication
-a second time?
-
-NOTICE: Slony-I: alterTableForReplication(): multiple instances of trigger % on table %
-
- This normally happens if you have a table that had a trigger attached to it that replication hid due to this being a subscriber node, and then you added a trigger by the same name back to replication. Now, when trying to "fix up" triggers, those two triggers conflict.
-
- The DDL script will keep running and rerunning, or the UNINSTALL NODE will keep failing, until you drop the visible trigger, by hand, much as you must have added it, by hand, earlier.
-
-ERROR: Slony-I: Unable to disable triggers
-
- This is the error that follows the multiple triggers problem.
-
-ERROR: Slony-I: alterTableRestore(): Table with id % not found
-
- This runs when a table is being restored to non-replicated state; apparently the replicated table wasn't found.
-
-ERROR: Slony-I: alterTableRestore(): Table % is not in altered state
-
- Hmm. The table isn't in altered replicated state. That shouldn't be, if replication had been working properly...
-
-ERROR: Slony-I: subscribeSet() must be called on provider
-
- This function should only get called on the provider node. &lslonik; normally handles this right, unless one had wrong DSNs in a &lslonik; script...
-
-ERROR: Slony-I: subscribeSet(): set % not found
-
- Hmm. The provider node isn't aware of this set. Wrong parms to a &lslonik; script?
-
-ERROR: Slony-I: subscribeSet(): set origin and receiver cannot be identical
-
- Duh, an origin node can't subscribe to itself.
-
-ERROR: Slony-I: subscribeSet(): set provider and receiver cannot be identical
-
- A receiver must subscribe to a different node...
-
-Slony-I: subscribeSet(): provider % is not an active forwarding node for replication set %
-
- You can only use a live, active, forwarding provider as a data
-source.
-
-Slony-I: subscribeSet_int(): set % is not active, cannot change provider
-
- You can't change the provider just yet...
-
-Slony-I: subscribeSet_int(): set % not found
-
- This node isn't aware of the set... Perhaps you submitted wrong parms?
-
-Slony-I: unsubscribeSet() must be called on receiver
-
- Seems obvious... This probably indicates a bad &lslonik; admin DSN...
-
-Slony-I: Cannot unsubscribe set % while being provider
-
- This should seem obvious; this will fail if a node has dependent subscribers for which it is the provider.
-
-Slony-I: cleanupEvent(): Single node - deleting events < %
-
- If there's only one node, the cleanup event will delete old events so that you don't get a build-up of crud.
-
-Slony-I: tableAddKey(): table % not found
-
- Perhaps you didn't copy the schema over properly?
-
-Slony-I: tableDropKey(): table with ID % not found
-
- Seems curious; you were presumably replicating to this table, so for this to be gone seems rather odd...
-
-Slony-I: determineIdxnameUnique(): table % not found
-
-Did you properly copy over the schema to a new node?
-
-Slony-I: table % has no primary key
-
- This likely signifies a bad loading of the schema...
-
-Slony-I: table % has no unique index %
-
- This likely signifies a bad loading of the schema...
-
-Slony-I: Logswitch to sl_log_2 initiated
-
- Indicates that &lslon; is in the process of switching over to this log table.
-
-Slony-I: Logswitch to sl_log_1 initiated
-
- Indicates that &lslon; is in the process of switching over to this log table.
-
-Previous logswitch still in progress
-
- An attempt was made to do a log switch while one was in progress...
-
-
-
-
- &nagios; Replication Checks
-&nagios; for monitoring
-replication
+&nagios; for monitoring replication

The script in the tools directory called
pgsql_replication_check.pl represents some of the

@@ -883,6 +98,8 @@
test_slony_state

+script test_slony_state to test replication state
+
This script is in preliminary stages, and may be used to do some
analysis of the state of a &slony1; cluster.

Index: ddlchanges.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v
retrieving revision 1.24
retrieving revision 1.25
diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.24 -r1.25
--- doc/adminguide/ddlchanges.sgml
+++ doc/adminguide/ddlchanges.sgml
@@ -40,42 +40,45 @@
statements, as the script is already executed inside a
transaction. In &postgres; version 8, the introduction of nested
transactions changes this somewhat, but you must still remain aware that the
-actions in the script are wrapped inside a single
-transaction.
+actions in the script are processed within the scope of a single
+transaction whose BEGIN and END
+you do not control.

If there is anything broken about the script, or about how it
executes on a particular node, this will cause the daemon for that node to panic and
-crash. If you restart the node, it will, more likely than not, try to
+crash. You may see various expected messages (positive and negative)
+in the log analysis material. If you restart the &lslon;, it will,
+more likely than not, try to
repeat the DDL script, which will, almost
-certainly, fail the second time just as it did the first time. I have
-found this scenario to lead to a need to go to the
-master node to delete the event to stop it from
-continuing to fail.
+certainly, fail the second time in the same way it did the first time.
+I have found this scenario to lead to a need to go to the
+master node to delete the event from the
+table sl_event in order to stop it from continuing to
+fail.

The implication of this is that it is vital that
modifications not be made in a haphazard way on one node or
another. The schemas must always stay in sync.

- For slon to, at that
-point, panic is probably the
+ For &lslon; to, at that point, panic
+is probably the
correct answer, as it allows the DBA to head over to the database
node that is broken, and manually fix things before cleaning out the
defective event and restarting
-slon. You can be certain that the updates
+&lslon;. You can be certain that the updates
made after the DDL change on the provider node are queued up, waiting
to head to the subscriber. You don't run the risk of there being
updates made that depended on the DDL changes in order to be
correct.

When you run EXECUTE SCRIPT, this
-causes the slonik to request, for
-each table in the specified set, an exclusive table
-lock.
+causes the &lslonik; to request, for each table in the
+specified set, an exclusive table lock.

- It starts by requesting the lock, and altering the table to
-remove &slony1; triggers:
+ It starts by requesting the lock, altering the table to remove
+&slony1; triggers, and restoring any triggers that had been hidden:

BEGIN;

@@ -115,7 +118,14 @@

If a particular DDL script only affects one table, it should
be unnecessary to lock all application
-tables.
+tables.
+
+ Actually, as of version 1.1.5 and later, this
+is NOT TRUE. The danger of someone making DDL
+changes that cross replication sets seems sufficiently palpable that
+&lslon; has been changed to lock ALL replicated
+tables, whether they are in the specified replication set or
+not.
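Deleting a defective DDL event from sl_event on the master, as described above, amounts to something like the following sketch; the cluster name mycluster and the ev_origin/ev_seqno values are hypothetical and must be taken from the failing event reported in the &lslon; log:

```sql
-- Remove the offending DDL_SCRIPT event so the slons stop retrying it.
-- Double-check the origin and sequence number before running this!
DELETE FROM "_mycluster".sl_event
 WHERE ev_origin = 1
   AND ev_seqno = 12345
   AND ev_type = 'DDL_SCRIPT';
```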
You may need to take a brief application outage in order to
ensure that your applications are not demanding locks that

@@ -124,18 +134,18 @@

In &slony1; versions 1.0 through 1.1.5, the
script is processed as a single query request, which can cause problems if you
-are making complex changes. In version 1.2, the script is parsed into
-individual SQL statements, and each statement is submitted separately,
-which is a preferable handling of this.
+are making complex changes. Starting in version 1.2, the script is
+properly parsed into individual SQL statements, and each statement is
+submitted separately, which is a preferable handling of this.

The trouble with one query processing a compound
-statement is that the SQL parser does its processing for that
+statement is that the SQL parser does its planning for that
entire set of queries based on the state of the database at the
beginning of the query.

This causes no particular trouble if those statements are
-independent of one another, such as if you add two columns to a
-table.
+independent of one another, such as if you have two statements to add
+two columns to a table.

alter table t1 add column c1 integer;
alter table t1 add column c2 integer;

@@ -206,10 +216,9 @@

If it does matter that the object be
propagated at the same location in the transaction stream on all the
-nodes, then you but no tables need to be locked, then you might create
-a replication set that contains no tables,
-subscribe all the appropriate nodes to it, and use EXECUTE
-SCRIPT, specifying that empty set.
+nodes, but no tables need to be locked, then you need to use
+EXECUTE SCRIPT, locking challenges or
+no.

You may want an extra index on some replicated node(s) in
order to improve performance there.

@@ -264,7 +273,7 @@

at the end, so that the would-be changes roll back.

If this script works OK on all of the nodes, that suggests that
-it should work fine everywhere if executed via Slonik.
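The test-run advice above (run the DDL script on each node with a rollback at the end) can be sketched as follows, reusing the document's own example statements; table and column names are illustrative only:

```sql
BEGIN;

alter table t1 add column c1 integer;
alter table t1 add column c2 integer;

-- inspect the results here, then discard the would-be changes:
ROLLBACK;
```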
If problems +it should work fine everywhere if executed via &lslonik;. If problems are encountered on some nodes, that will hopefully allow you to fix the state of affairs on those nodes so that the script will run without error. Index: logshipping.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/logshipping.sgml,v retrieving revision 1.14 retrieving revision 1.15 diff -Ldoc/adminguide/logshipping.sgml -Ldoc/adminguide/logshipping.sgml -u -w -r1.14 -r1.15 --- doc/adminguide/logshipping.sgml +++ doc/adminguide/logshipping.sgml @@ -294,6 +294,10 @@ represent the time you will wish to have apply to all of the data in the given log shipping transaction log. + + You may find information on how relevant activity is +logged in . + Index: slony.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v retrieving revision 1.32 retrieving revision 1.33 diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.32 -r1.33 --- doc/adminguide/slony.sgml +++ doc/adminguide/slony.sgml @@ -83,6 +83,7 @@ The PostgreSQL Global Development Group Christopher Browne + &bestpractices; &firstdb; &startslons; &subscribenodes; @@ -99,9 +100,10 @@ &ddlchanges; &usingslonik; &adminscripts; + &slonyupgrade; &versionupgrade; - &bestpractices; &testbed; + &loganalysis; &help; Index: filelist.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/filelist.sgml,v retrieving revision 1.16 retrieving revision 1.17 diff -Ldoc/adminguide/filelist.sgml -Ldoc/adminguide/filelist.sgml -u -w -r1.16 -r1.17 --- doc/adminguide/filelist.sgml +++ doc/adminguide/filelist.sgml @@ -41,6 +41,8 @@ + + From cvsuser Sat Jul 29 05:26:55 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:37 2007 Subject: [Slony1-commit] By cbbrowne: Doc 
Updates: A new file has been added (already referenced)
Message-ID: <20060729122655.A285511BF031@gborg.postgresql.org>

Log Message:
-----------
Doc Updates: A new file has been added (already referenced)
that documents what log messages you can expect to see.

This contains log-related material that used to be in
monitoring.sgml

Added Files:
-----------
    slony1-engine/doc/adminguide:
        loganalysis.sgml (r1.1)
-------------- next part --------------
--- /dev/null
+++ doc/adminguide/loganalysis.sgml
@@ -0,0 +1,945 @@
+
+
+Log Analysis
+
+Log analysis
+
+Here are some of the things that you may find in your &slony1; logs,
+and explanations of what they mean.
+
+CONFIG notices
+
+These entries are pretty straightforward. They are informative
+messages about your configuration.
+
+Here are some typical entries that you will probably run into in
+your logs:
+
+
+CONFIG main: local node id = 1
+CONFIG main: loading current cluster configuration
+CONFIG storeNode: no_id=3 no_comment='Node 3'
+CONFIG storePath: pa_server=5 pa_client=1 pa_conninfo="host=127.0.0.1 dbname=foo user=postgres port=6132" pa_connretry=10
+CONFIG storeListen: li_origin=3 li_receiver=1 li_provider=3
+CONFIG storeSet: set_id=1 set_origin=1 set_comment='Set 1'
+CONFIG main: configuration complete - starting threads
+
+
+DEBUG Notices
+
+Debug notices are always prefaced by the name of the thread that
+the notice originates from. You will see messages from the following
+threads:
+
+
+localListenThread
+
+ This is the local thread that listens for events on
+the local node.
+
+remoteWorkerThread-X
+
+ The thread processing remote events. You can expect
+to see one of these for each node that this node communicates
+with.
+
+remoteListenThread-X
+
+Listens for events on a remote node database. You may
+expect to see one of these for each node in the
+cluster.
+
+cleanupThread
+
+ Takes care
+of things like vacuuming, cleaning out the confirm and event tables,
+and deleting old data.
+
+syncThread
+
+ Generates SYNC
+events.
+
+
+
+
+
+ How to read &slony1; logs
+
+ Note that as far as slon is concerned, there is no
+master or slave. They are just
+nodes.
+
+What you can expect, initially, is to see, on both nodes, some
+events propagating back and forth. Firstly, there should be some
+events published to indicate creation of the nodes and paths. If you
+don't see those, then the nodes aren't properly communicating with one
+another, and nothing else will happen...
+
+
+
+Create the two nodes.
+
+ No slons are running yet, so there are no logs to look
+at.
+
+
+
+ Start the two slons
+
+ The logs for each will start out very quiet, as neither node
+has much to say, and neither node knows how to talk to another node.
+Each node will periodically generate a SYNC event,
+but recognize nothing about what is going on on
+other nodes.
+
+
+
+ Submit the STORE PATH requests to set up
+communications paths. That will allow the nodes to start to become
+aware of one another.
+
+ The slon logs should now start to receive events from
+foreign nodes.
+
+ In version 1.0, the listen configuration is not set up
+automatically, so things still remain quiet until you explicitly
+submit STORE LISTEN requests. In version 1.1, the
+listen paths are set up automatically, which will much
+more quickly get the communications network up and running.
+
+ If you look at the contents of the tables sl_node,
+sl_path, and sl_listen, on each node, that should give a good idea
+as to where things stand. Until the communications configuration is
+complete, each node may only be partly configured. If there are two nodes,
+there should be two entries in all three of these tables once the
+communications configuration is set up properly. If there are fewer
+entries than that, well, that should give you some idea of what is
+missing.
+
+ If needed (e.g. - before version
+1.1), submit STORE LISTEN requests to indicate how
+the nodes will use the communications paths.
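Inspecting these configuration tables can be done per node; a sketch, assuming a hypothetical cluster named mycluster with two nodes, in which case each query should eventually return two rows (the column names match the CONFIG storeNode/storePath/storeListen log lines shown earlier):

```sql
SELECT no_id, no_comment FROM "_mycluster".sl_node;
SELECT pa_server, pa_client, pa_conninfo FROM "_mycluster".sl_path;
SELECT li_origin, li_provider, li_receiver FROM "_mycluster".sl_listen;
```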
+
+
+ Once this has been done, the nodes' logs should show a greater
+level of activity, with events periodically being initiated on one
+node or the other, and propagating to the other.
+
+
+ You'll set up the set (CREATE SET), add tables
+(SET ADD TABLE), and sequences
+(SET ADD SEQUENCE), and will see relevant
+events only in the logs for the origin
+node for the set.
+
+ Then, when you submit the SUBSCRIBE SET
+request, the event should go to both
+nodes.
+
+ The origin node has little more to do, after that... The
+subscriber will then have a COPY_SET event, which
+will lead to logging information about adding each table and copying
+its data. See the subscription-time log entries below for more
+details.
+
+
+
+After that, you'll mainly see two sorts of behaviour:
+
+
+
+ On the origin, there won't be too terribly much
+logged, just indication that some SYNC events are
+being generated and confirmed by other nodes.
+See the message listings below for the sorts of log entries to
+expect.
+
+ On the subscriber, there will be reports of
+SYNC events, and that the subscriber pulls data
+from the provider for the relevant set(s). This will happen
+infrequently if there are no updates going to the origin node; it will
+happen frequently when the origin sees heavy updates.
+
+
+
+
+
+ Log Messages and Implications
+
+ This section lists many of the error messages found in
+&slony1;, along with a brief explanation of their implications. It is a
+fairly comprehensive list, leaving out mostly some of
+the DEBUG4 messages that are generally
+uninteresting.
+
+ Log Messages Associated with Log
+Shipping </title>
+
+<para> Most of these represent errors that come up if
+the <xref linkend="logshipping"> functionality breaks. You may expect
+things to break if the filesystem being used for log shipping fills,
+or if permissions on that directory are wrongly set. </para>
+
+<itemizedlist>
+<listitem><para><command>ERROR: remoteWorkerThread_%d: log archive failed %s - %s\n</command> </para>
+
+<para> This indicates that an error was encountered trying to write a
+log shipping file.
Normally the &lslon; will retry, and hopefully
+succeed. </para> </listitem>
+
+<listitem><para><command>DEBUG2: remoteWorkerThread_%d: writing archive log...</command></para>
+
+<para> This indicates that a log shipping archive log is being written for a particular <command>SYNC</command> set. </para></listitem>
+
+<listitem><para><command>INFO: remoteWorkerThread_%d: Run Archive Command %s</command></para>
+
+<para> If &lslon; has been configured (<option>-x</option>
+aka <envar>command_on_logarchive</envar>) to run a command after
+generating each log shipping archive log, this reports when that
+process is spawned using <function>system()</function>. </para> </listitem>
+
+<listitem><para><command>ERROR: remoteWorkerThread_%d: Could not open
+COPY SET archive file %s - %s</command></para>
+
+<para> Seems pretty self-explanatory... </para></listitem>
+<listitem><para><command>ERROR: remoteWorkerThread_%d: Could not generate COPY SET archive header %s - %s</command></para>
+
+<para> Probably means that we just filled up a filesystem... </para></listitem>
+</itemizedlist>
+</sect3>
+
+<sect3 id="ddllogs"><title> Log Messages - DDL scripts </title>
+
+ The handling of DDL is somewhat fragile, as described
+in the DDL changes material; here are both informational and error
+messages that may occur in the progress of
+an EXECUTE SCRIPT request.
+
+
+
+ERROR: remoteWorkerThread_%d: DDL preparation
+failed - set %d - only on node %
+
+ Something broke when applying a DDL script on one of the nodes.
+This quite likely indicates that the node's schema differed from
+that on the origin; you may need to apply a change manually to the
+node to allow the event to proceed. The scary, scary alternative
+might be to delete the offending event, assuming it can't possibly
+work...
+
+SLON_CONFIG: remoteWorkerThread_%d: DDL request with %d statements
+
+ This is informational, indicating how many SQL statements were processed.
+
+SLON_ERROR: remoteWorkerThread_%d: DDL had invalid number of statements - %d
+
+ Occurs if there were < 0 statements (which should be impossible) or > MAXSTATEMENTS statements. Probably the script was bad...
+
+ERROR: remoteWorkerThread_%d: malloc()
+failure in DDL_SCRIPT - could not allocate %d bytes of
+memory
+
+ This should only occur if you submit some extraordinarily large
+DDL script that makes a &lslon; run out of memory.
+
+CONFIG: remoteWorkerThread_%d: DDL Statement %d: [%s]
+
+ This lists each DDL statement as it is submitted.
+
+ERROR: DDL Statement failed - %s
+
+ Oh, dear, one of the DDL statements that worked on the origin
+failed on this remote node...
+
+CONFIG: DDL Statement success - %s
+
+ All's well...
+
+ERROR: remoteWorkerThread_%d: Could not generate DDL archive tracker %s - %s
+
+ Apparently the DDL script couldn't be written to a log shipping file...
+
+ERROR: remoteWorkerThread_%d: Could not submit DDL script %s - %s
+
+ Couldn't write the script to a log shipping file.
+
+ERROR: remoteWorkerThread_%d: Could not close DDL script %s - %s
+
+ Couldn't close a log shipping file for a DDL script.
+
+ERROR: Slony-I ddlScript_prepare(): set % not found
+
+ Set wasn't found on this node; you probably gave the wrong ID number...
+
+ERROR: Slony-I ddlScript_prepare_int(): set % not found
+
+ Set wasn't found on this node; you probably gave the wrong ID number...
+
+ERROR: Slony-I: alterTableForReplication(): Table with id % not found
+
+ Apparently the table wasn't found; could the schema be messed up?
+
+ERROR: Slony-I: alterTableForReplication(): Table % is already in altered state
+
+ Curious... We're trying to set a table up for replication
+a second time?
+
+ERROR: Slony-I: alterTableRestore(): Table with id % not found
+
+ This runs when a table is being restored to non-replicated state; apparently the replicated table wasn't found.
+
+ERROR: Slony-I: alterTableRestore(): Table % is not in altered state
+
+ Hmm.
The table isn't in altered replicated state. That shouldn't be, if
+replication had been working properly...
+
+NOTICE: Slony-I: alterTableForReplication(): multiple instances of trigger % on table %
+
+ This normally happens if you have a table that had a trigger attached to it that replication hid due to this being a subscriber node, and then you added a trigger by the same name back to replication. Now, when trying to "fix up" triggers, those two triggers conflict.
+
+ The DDL script will keep running and rerunning, or the UNINSTALL NODE will keep failing, until you drop the visible trigger, by hand, much as you must have added it, by hand, earlier.
+
+ERROR: Slony-I: Unable to disable triggers
+
+ This is the error that follows the multiple triggers problem.
+
+
+
+ Threading Issues
+
+ There should not be any user-serviceable aspects
+to the &slony1; threading model; each &lslon; creates a quite
+well-specified set of helper threads to manage the various database
+connections that it requires. The only way that anything should break
+on the threading side is if you have not compiled &postgres; libraries
+to play well with threading, in which case you will be
+unable to compile &slony1; in the first place.
+
+
+FATAL: remoteWorkerThread_%d: pthread_create() - %s
+
+ Couldn't create a new remote worker thread.
+
+DEBUG1 remoteWorkerThread_%d: helper thread for provider %d created
+
+ This normally happens when the &lslon; starts: a thread is created for each node to which the local node should be listening for events.
+
+DEBUG1: remoteWorkerThread_%d: helper thread for provider %d terminated
+
+ If subscriptions reshape such that a node no longer provides a
+subscription, then the thread that works on that node can be
+dropped.
+
+DEBUG1: remoteWorkerThread_%d: disconnecting
+from data provider %d
+
+ A no-longer-used data provider may be dropped; if connection
+information is changed, the &lslon; needs to disconnect and
+reconnect.
+ +DEBUG2: remoteWorkerThread_%d: ignore new events due to shutdown + + If the &lslon; is shutting down, it is futile to process more events +DEBUG1: remoteWorkerThread_%d: node %d - no worker thread + + Curious: we can't wake up the worker thread; there probably +should already be one... + + + + + Log Entries At Subscription Time + + Subscription time is quite a special time in &slony1;. If you +have a large amount of data to be copied to subscribers, this may take +a considerable period of time. &slony1; logs a fairly considerable +amount of information about its progress, which is sure to be useful +to the gentle reader. In particular, it generates log output every +time it starts and finishes copying data for a given table as well as +when it completes reindexing the table. That may not make a 28 hour +subscription go any faster, but at least helps you have some idea of +how it is progressing. + + +DEBUG1: copy_set %d + + This indicates the beginning of copying data for a new subscription. +ERROR: remoteWorkerThread_%d: set %d not found in runtime configuration + + &lslon; tried starting up a subscription; it couldn't find conninfo for the data source. Perhaps paths are not properly propagated? + +ERROR: remoteWorkerThread_%d: node %d has no pa_conninfo + + Apparently the conninfo configuration +was wrong... + +ERROR: copy set %d cannot connect to provider DB node %d + + &lslon; couldn't connect to the provider. Is the conninfo +wrong? Or perhaps authentication is misconfigured? Or perhaps the +database is down? + +DEBUG1: remoteWorkerThread_%d: connected to provider DB + + Excellent: the copy set has a connection to its provider +ERROR: Slony-I: sequenceSetValue(): sequence % not found + Curious; the sequence object is missing. Could someone have dropped it from the schema by hand (e.g. - not using )? + +ERROR: Slony-I: subscribeSet() must be called on provider + This function should only get called on the provider node. 
&lslonik; normally handles this right, unless one had wrong DSNs in a &lslonik; script... + + +ERROR: Slony-I: subscribeSet(): set % not found + Hmm. The provider node isn't aware of this set. Wrong parms to a &lslonik; script? + +ERROR: Slony-I: subscribeSet(): set origin and receiver cannot be identical + Duh, an origin node can't subscribe to itself. + +ERROR: Slony-I: subscribeSet(): set provider and receiver cannot be identical + A receiver must subscribe to a different node... +Slony-I: subscribeSet(): provider % is not an active forwarding node for replication set % + + You can only use a live, active, forwarding provider as a data +source. + +Slony-I: subscribeSet_int(): set % is not active, cannot change provider + You can't change the provider just yet... +Slony-I: subscribeSet_int(): set % not found + This node isn't aware of the set... Perhaps you submitted wrong parms? +Slony-I: unsubscribeSet() must be called on receiver + Seems obvious... This probably indicates a bad &lslonik; admin DSN... +Slony-I: Cannot unsubscribe set % while being provider + + This should seem obvious; will fail if a node has dependent subscribers for which it is the provider + +Slony-I: cleanupEvent(): Single node - deleting events < % + If there's only one node, the cleanup event will delete old events so that you don't get build-up of crud. +Slony-I: tableAddKey(): table % not found + Perhaps you didn't copy the schema over properly? +Slony-I: tableDropKey(): table with ID% not found + Seems curious; you were presumably replicating to this table, so for this to be gone seems rather odd... +Slony-I: determineIdxnameUnique(): table % not found + +Did you properly copy over the schema to a new node??? +Slony-I: table % has no primary key + + This likely signifies a bad loading of schema... + +Slony-I: table % has no unique index % + + This likely signifies a bad loading of schema... 
+WARN: remoteWorkerThread_%d: transactions +earlier than XID %s are still in progress + + This indicates that some old transaction is in progress from before the earliest available SYNC on the provider. &slony1; cannot start replicating until that transaction completes. This will repeat until the transaction completes... + + +DEBUG2: remoteWorkerThread_%d: prepare to copy table %s + + This indicates that &lslon; is beginning preparations to set up subscription for a table. +DEBUG1: remoteWorkerThread_%d: table %s will require Slony-I serial key + + Evidently this is a table defined with TABLE ADD KEY, where &slony1; has to add a surrogate primary key. +ERROR: remoteWorkerThread_%d: Could not lock table %s on subscriber + + For whatever reason, the table could not be locked, so the +subscription needs to be restarted. If the problem was something like +a deadlock, retrying may help. If the problem was otherwise, you may +need to intervene... + +DEBUG2: remoteWorkerThread_%d: all tables for set %d found on subscriber + + An informational message indicating that the first pass through the tables found no problems... +DEBUG2: remoteWorkerThread_%d: copy sequence %s + + Processing some sequence... +DEBUG2: remoteWorkerThread_%d: copy table %s + + &lslon; is starting to copy a table... +DEBUG3: remoteWorkerThread_%d: table %s Slony-I serial key added local + + Just added a new column to the table to provide a surrogate primary key. +DEBUG3: remoteWorkerThread_%d: local table %s already has Slony-I serial key + + Did not need to add a serial key; apparently it was already there. +DEBUG3: remoteWorkerThread_%d: table %s does not require Slony-I serial key + + Apparently this table didn't require a special serial key... + +DEBUG2: remoteWorkerThread_%d: Begin COPY of table %s + + &lslon; is about to start the COPY on both sides to copy a table... 
+ERROR: remoteWorkerThread_%d: Could not generate copy_set request for %s - %s + + This indicates that the delete/copy requests +failed on the subscriber. The &lslon; will repeat +the COPY_SET attempt; it will probably continue to +fail.. + +ERROR: remoteWorkerThread_%d: copy to stdout on provider - %s %s + + Evidently something about the COPY to stdout on the provider node broke... The event will be retried... + +ERROR: remoteWorkerThread_%d: copy from stdin on local node - %s %s + + Evidently something about the COPY into the table on the +subscriber node broke... The event will be +retried... + +DEBUG2: remoteWorkerThread_%d: %d bytes copied for table %s + + This message indicates that the COPY of the table has +completed. This is followed by running ANALYZE and +reindexing the table on the subscriber. + +DEBUG2: remoteWorkerThread_%d: %.3f seconds +to copy table %s + + After this message, copying and reindexing and analyzing the table on the subscriber is complete. + +DEBUG2: remoteWorkerThread_%d: set last_value of sequence %s (%s) to %s + + As should be no surprise, this indicates that a sequence has been processed on the subscriber. + +DEBUG2: remoteWorkerThread_%d: %.3 seconds to copy sequences + + Summarizing the time spent processing sequences in the COPY_SET event. + +ERROR: remoteWorkerThread_%d: query %s did not return a result + + This indicates that the query, as part of final processing of COPY_SET, failed. The copy will restart... + +DEBUG2: remoteWorkerThread_%d: copy_set no previous SYNC found, use enable event + + This takes place if no past SYNC event was found; the current +event gets set to the event point of +the ENABLE_SUBSCRIPTION event. + + +DEBUG2: remoteWorkerThread_%d: copy_set SYNC found, use event seqno %s + + This takes place if a SYNC event was found; the current +event gets set as shown. 
+ +ERROR: remoteWorkerThread_%d: sl_setsync entry for set %d not found on provider + + SYNC synchronization information was expected to be drawn from +an existing subscriber, but wasn't found. Something bad enough to +break replication has probably +happened... +DEBUG1: remoteWorkerThread_%d: could not insert to sl_setsync_offline + + Oh, dear. After setting up a subscriber, and getting pretty +well everything ready, some writes to a log shipping file failed. +Perhaps the disk filled up... + +DEBUG1: remoteWorkerThread_%d: %.3f seconds to build initial setsync status + + Indicates the total time required to get the copy_set event finalized... + + +DEBUG1: remoteWorkerThread_%d: disconnected from provider DB + + At the end of a subscribe set event, the subscriber's &lslon; +will disconnect from the provider, clearing out +connections... + +DEBUG1: remoteWorkerThread_%d: copy_set %d done in %.3f seconds + + Indicates the total time required to complete copy_set... This indicates a successful subscription! + + + + + Log Entries Associated With Normal SYNC activity + + Some of these messages indicate exceptions, but +the normal stuff represents what you should expect to +see most of the time when replication is just plain working. + + + +DEBUG2: remoteWorkerThread_%d: forward confirm %d,%s received by %d + + These events should occur frequently and routinely as nodes report confirmations of the events they receive. + +DEBUG1: remoteWorkerThread_%d: SYNC %d processing + + This indicates the start of processing of a SYNC. + +ERROR: remoteWorkerThread_%d: No pa_conninfo +for data provider %d + + Oh dear, we have no connection information for the +data provider. That shouldn't be possible, +normally... + +ERROR: remoteWorkerThread_%d: cannot connect to data provider %d on 'dsn' + + Oh dear, we haven't got correct connection +information to connect to the data provider. 
+ +DEBUG1: remoteWorkerThread_%d: connected to data provider %d on 'dsn' + + Excellent; the &lslon; has connected to the provider. + +WARN: remoteWorkerThread_%d: don't know what ev_seqno node %d confirmed for ev_origin %d + + There's no confirmation information available for this node's provider; need to abort the SYNC and wait a bit in hopes that that information will emerge soon... + +DEBUG1: remoteWorkerThread_%d: data provider %d only confirmed up to ev_seqno %d for ev_origin %d + + The provider for this node is a subscriber, and apparently that subscriber is a bit behind. The &lslon; will need to wait for the provider to catch up until it has new data. + +DEBUG2: remoteWorkerThread_%d: data provider %d confirmed up to ev_seqno %s for ev_origin %d - OK + + All's well; the provider should have the data that the subscriber needs... + +DEBUG2: remoteWorkerThread_%d: syncing set %d with %d table(s) from provider %d + + This is declaring the plans for a SYNC: we have a set with some tables to process. + +DEBUG2: remoteWorkerThread_%d: ssy_action_list value: %s length: %d + + This portion of the query to collect log data to be applied has been known to bloat up; this shows how it has gotten compressed... + +DEBUG2: remoteWorkerThread_%d: Didn't add OR to provider + + This indicates that there wasn't anything in a provider clause in the query to collect log data to be applied, which shouldn't be. Things are quite likely to go bad at this point... + +DEBUG2: remoteWorkerThread_%d: no sets need syncing for this event + +This will be the case for all SYNC events generated on nodes that are not originating replication sets. You can expect to see these messages reasonably frequently. +DEBUG3: remoteWorkerThread_%d: activate helper %d + + We're about to kick off a thread to help process SYNC data... + +DEBUG4: remoteWorkerThread_%d: waiting for log data + + The thread is waiting to get data to consume (e.g. - apply to the replica). 
+ +ERROR: remoteWorkerThread_%d: %s %s - qualification was %s + + Apparently an application of replication data to the subscriber failed... This quite likely indicates some sort of serious corruption. + +ERROR: remoteWorkerThread_%d: replication query did not affect one row (cmdTuples = %s) - query was: %s qualification was: %s + + If SLON_CHECK_CMDTUPLES is set, &lslon; applies +changes one tuple at a time, and verifies that each change affects +exactly one tuple. Apparently that wasn't the case here, which +suggests a corruption of replication. That's a rather bad +thing... + +ERROR: remoteWorkerThread_%d: SYNC aborted + + If any errors have been encountered that haven't already aborted the SYNC, this catches and aborts it. + +DEBUG2: remoteWorkerThread_%d: new sl_rowid_seq value: %s + + This marks the progression of this internal &slony1; sequence. +DEBUG2: remoteWorkerThread_%d: SYNC %d done in %.3f seconds + + This indicates the successful completion of a SYNC. Hurray! + +DEBUG1: remoteWorkerThread_%d_d:%.3f seconds delay for first row + + This indicates how long it took to get the first row from the LOG cursor that pulls in data from the sl_log tables. + +ERROR: remoteWorkerThread_%d_d: large log_cmddata for actionseq %s not found + + &lslon; could not find the data for one of the very large sl_log table tuples that are pulled individually. This shouldn't happen. +DEBUG2: remoteWorkerThread_%d_d:%.3f seconds until close cursor + + This indicates how long it took to complete reading data from the LOG cursor that pulls in data from the sl_log tables. +DEBUG2: remoteWorkerThread_%d_d: inserts=%d updates=%d deletes=%d + + This reports how much activity was recorded in the current SYNC set. + +DEBUG3: remoteWorkerThread_%d: compress_actionseq(list,subquery) Action list: %s + + This indicates a portion of the LOG cursor query that is about to be compressed. (In some cases, this could grow to enormous size, blowing up the query parser...) 
+DEBUG3: remoteWorkerThread_%d: compressed actionseq subquery %s + + This indicates what that subquery compressed into. + + + + + Log Entries - Adding Objects to Sets + + These entries will be seen on an origin node's logs at the time +you are configuring a replication set; some of them will be seen on +subscribers at subscription time. + + +ERROR: Slony-I: setAddTable_int(): table % has no index % + + Apparently a PK index was specified that is absent on this node... +ERROR: Slony-I setAddTable_int(): table % not found + + Table wasn't found on this node; did you load the schema in properly?. +ERROR: Slony-I setAddTable_int(): table % is not a regular table + + You tried to replicate something that isn't a table; you can't do that! +NOTICE: Slony-I setAddTable_int(): table % PK column % nullable + + You tried to replicate a table where one of the columns in the would-be primary key is allowed to be null. All PK columns must be NOT NULL. This request is about to fail. +ERROR: Slony-I setAddTable_int(): table % not replicable! + + +This happens because of the NULLable PK column. +ERROR: Slony-I setAddTable_int(): table id % has already been assigned! + + The table ID value needs to be assigned uniquely +in ; apparently you requested a value +already in use. + +ERROR: Slony-I setAddSequence(): set % not found + Apparently the set you requested is not available... + +ERROR: Slony-I setAddSequence(): set % has remote origin + You may only add things at the origin node. + +ERROR: Slony-I setAddSequence(): cannot add sequence to currently subscribed set % + Apparently the set you requested has already been subscribed. You cannot add tables/sequences to an already-subscribed set. You will need to create a new set, add the objects to that new set, and set up subscriptions to that. +ERROR: Slony-I setAddSequence_int(): set % not found + Apparently the set you requested is not available... 
+ +ERROR: Slony-I setAddSequence_int(): sequence % not found + Apparently the sequence you requested is not available on this node. How did you set up the schemas on the subscribers??? + +ERROR: Slony-I setAddSequence_int(): % is not a sequence + Seems pretty obvious :-). + +ERROR: Slony-I setAddSequence_int(): sequence ID % has already been assigned + Each sequence ID added must be unique; apparently you have reused an ID. + + + + + Logging When Moving Objects Between Sets + + +ERROR: Slony-I setMoveTable_int(): table % not found + + Table wasn't found on this node; you probably gave the wrong ID number... + +ERROR: Slony-I setMoveTable_int(): set ids cannot be identical + + Does it make sense to move a table from a set into the very same set? + +ERROR: Slony-I setMoveTable_int(): set % not found + + Set wasn't found on this node; you probably gave the wrong ID number... + +ERROR: Slony-I setMoveTable_int(): set % does not originate on local node + + Set wasn't found to have origin on this node; you probably gave the wrong EVENT NODE... + +ERROR: Slony-I setMoveTable_int(): subscriber lists of set % and % are different + + You can only move objects between sets that have identical subscriber lists. + +ERROR: Slony-I setMoveSequence_int(): sequence % not found + + Sequence wasn't found on this node; you probably gave the wrong ID number... + +ERROR: Slony-I setMoveSequence_int(): set ids cannot be identical + + Does it make sense to move a sequence from a set into the very same set? + +ERROR: Slony-I setMoveSequence_int(): set % not found + + Set wasn't found on this node; you probably gave the wrong ID number... + +ERROR: Slony-I setMoveSequence_int(): set % does not originate on local node + + Set wasn't found to have origin on this node; you probably gave the wrong EVENT NODE... + +ERROR: Slony-I setMoveSequence_int(): subscriber lists of set % and % are different + + You can only move objects between sets that have identical subscriber lists. 
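The errors above come from the SET MOVE TABLE and SET MOVE SEQUENCE paths; as a sketch, moving objects between two sets that originate on the same node and have identical subscriber lists might look like this in a &lslonik; script (cluster name, conninfo, and ID numbers are all hypothetical):

```
cluster name = testcluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';

# Both set 1 and set 2 must originate on node 1 and have identical
# subscriber lists, or setMoveTable_int()/setMoveSequence_int() will fail
# with the "subscriber lists ... are different" error described above.
set move table (origin = 1, id = 4, new set = 2);
set move sequence (origin = 1, id = 7, new set = 2);
```

If the subscriber lists differ, first reshape subscriptions so the two sets match, then retry the move.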
+ + + + Issues with Dropping Objects + + +ERROR: Slony-I setDropTable(): table % not found + + Table wasn't found on this node; are you sure you had the ID right? +ERROR: Slony-I setDropTable(): set % not found + + The replication set wasn't found on this node; are you sure you had the ID right? +ERROR: Slony-I setDropTable(): set % has remote origin + + The replication set doesn't originate on this node; you probably need to specify an EVENT NODE in the command. + +ERROR: Slony-I setDropSequence_int(): sequence % not found + Could this sequence be in another set? + +ERROR: Slony-I setDropSequence_int(): set % not found + Could you have gotten the set ID wrong? + +ERROR: Slony-I setDropSequence_int(): set % has origin at another node - submit this to that node + + This message seems fairly self-explanatory... + + + + + Issues with MOVE SET, FAILOVER, DROP NODE + + Many of these errors will occur if you submit a &lslonik; +script that describes a reconfiguration incompatible with your +cluster's current configuration. Those will lead to the +feeling: Whew, I'm glad &lslonik; caught that for me! + + Some of the others lead to a &lslon; telling itself to fall +over; all should be well when you restart it, as +it will read in the revised, newly-correct configuration when it +starts up. + + Alas, a few indicate that something bad +happened, for which the resolution may not necessarily be +easy. Nobody said that replication was easy, alas... + + +ERROR: Slony-I: DROP_NODE cannot initiate on the dropped node + + You need to have an EVENT NODE other than the node that is to be dropped... + +ERROR: Slony-I: Node % is still configured as a data provider + + You cannot drop a node that is in use as a data provider; you +need to reshape subscriptions so no nodes are dependent on it first. +ERROR: Slony-I: Node % is still origin of one or more sets + + You can't drop a node if it is the origin for a set! Use MOVE SET or DROP SET first. 
+ +ERROR: Slony-I: cannot failover - node % has no path to the backup node + + You cannot failover to a node that isn't connected to all the subscribers, at least indirectly. +ERROR: Slony-I: cannot failover - node % is not subscribed to set % + + You can't failover to a node that doesn't subscribe to all the relevant sets. + +ERROR: Slony-I: cannot failover - subscription for set % is not active + + If the subscription has been set up, but isn't yet active, that's still no good. + +ERROR: Slony-I: cannot failover - node % is not a forwarder of set % + + You can only failover or move a set to a node that has +forwarding turned on. + +NOTICE: failedNode: set % has no other direct receivers - move now + + If the backup node is the only direct subscriber, then life is a bit simplified... No need to reshape any subscriptions! +NOTICE: failedNode set % has other direct receivers - change providers only + In this case, all direct subscribers are pointed to the backup node, and the backup node is pointed to receive from another node so it can get caught up. +NOTICE: Slony-I: Please drop schema _@CLUSTERNAME@ + + A node has been uninstalled; you may need to drop the schema... + + + + + + Log Switching + + These messages relate to the new-in-1.2 facility whereby +&slony1; periodically switches back and forth between storing data +in sl_log_1 and sl_log_2. + + +Slony-I: Logswitch to sl_log_2 initiated + Indicates that &lslon; is in the process of switching over to this log table. +Slony-I: Logswitch to sl_log_1 initiated + Indicates that &lslon; is in the process of switching over to this log table. +Previous logswitch still in progress + + An attempt was made to do a log switch while one was in progress... + +ERROR: remoteWorkerThread_%d: cannot determine current log status + + The attempt to read from sl_log_status, which determines +whether we're working on sl_log_1 +or sl_log_2 got no results; that can't be a good thing, +as there certainly should be data here... 
Replication is likely about +to halt... + +DEBUG2: remoteWorkerThread_%d: current local log_status is %d + This indicates which of sl_log_1 and sl_log_2 are being used to store replication data. + + + + + Miscellanea + + Perhaps these messages should be categorized further; that +remains a task for the documenters. + + + +ERROR: Slonik version: @MODULEVERSION@ != Slony-I version in PG build % + + This is raised in checkmoduleversion() if there is a mismatch between the version of &slony1; as reported by &lslonik; versus what the &postgres; build has. +ERROR: Slony-I: registry key % is not an int4 value + + Raised in registry_get_int4(), this complains if a requested value turns out to be NULL. +ERROR: registry key % is not a text value + + Raised in registry_get_text(), this complains if a requested value turns out to be NULL. +ERROR: registry key % is not a timestamp value + + Raised in registry_get_timestamp(), this complains if a requested value turns out to be NULL. +NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=% + + This will occur when a &lslon; starts up after another has crashed; this is routine cleanup. +ERROR: Slony-I: This node is already initialized + + This would typically happen if you submit INIT CLUSTER against a node that has already been set up with the &slony1; schema. +ERROR: Slony-I: node % not found + + An attempt to mark a node not listed locally as enabled should fail... +ERROR: Slony-I: node % is already active + + An attempt to activate a node that is already marked as active should fail... +DEBUG4: remoteWorkerThread_%d: added active set %d to provider %d + + Indicates that this set is being provided by this +provider. + +DEBUG2: remoteWorkerThread_%d: event %d +ignored - unknown origin + + Probably happens if events arrive before +the STORE_NODE event that tells that the new node +now exists... 
+ +WARN: remoteWorkerThread_%d: event %d ignored - origin inactive + + This shouldn't occur now (2006) as we don't support the notion +of deactivating a node... + + +DEBUG2: remoteWorkerThread_%d: event %d ignored - duplicate + + This might be expected to happen if the event notification +comes in concurrently from two sources... + +DEBUG2: remoteWorkerThread_%d: unknown node %d + + Happens if the &lslon; is unaware of this node; probably a sign +of STORE_NODE requests not +propagating... + + + + + From cvsuser Mon Jul 31 07:48:46 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:37 2007 Subject: [Slony1-commit] By cbbrowne: Add in document on how to perform Slony-I version upgrade Message-ID: <20060731144846.2246311BF03B@gborg.postgresql.org> Log Message: ----------- Add in document on how to perform Slony-I version upgrade Added Files: ----------- slony1-engine/doc/adminguide: slonyupgrade.sgml (r1.1) -------------- next part -------------- --- /dev/null +++ doc/adminguide/slonyupgrade.sgml @@ -0,0 +1,55 @@ + + + &slony1; Upgrade +upgrading &slony1; to a newer version + + When upgrading &slony1;, the installation on all nodes in a +cluster must be upgraded at once, using the &lslonik; +command . + + While this requires temporarily stopping replication, it does +not forcibly require an outage for applications that submit +updates. + +The proper upgrade procedure is thus: + + Stop the &lslon; processes on all nodes. +(e.g. - old version of &lslon;) + Install the new version of &lslon; software on all +nodes. + Execute a &lslonik; script containing the +command update functions (id = [whatever]); for +each node in the cluster. + Start all slons. 
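As a sketch, the update functions step of the procedure above amounts to a short &lslonik; script run once per cluster, with one UPDATE FUNCTIONS per node (cluster name and conninfo strings here are hypothetical):

```
cluster name = testcluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';
node 3 admin conninfo = 'dbname=mydb host=node3 user=slony';

# Run only after all slon processes are stopped and the new
# Slony-I software (including the shared library) is installed
# on every node.
update functions (id = 1);
update functions (id = 2);
update functions (id = 3);
```

Once the script has completed against every node, restart the &lslon; processes to resume replication on the upgraded schema.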
+ + + The trickiest part of this is ensuring that the C library +containing SPI functions is copied into place in the &postgres; build; +the easiest and safest way to handle this is to have two separate +&postgres; builds, one for each &slony1; version, where the postmaster +is shut down and then restarted against the new build; +that approach requires a brief database outage on each node. + + While that approach has been found to be easier and safer, +nothing prevents one from carefully copying &slony1; components for +the new version into place to overwrite the old version as +the install step. That might not +work on Windows if it locks library files that +are in use. + + + From cvsuser Mon Jul 31 11:49:53 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:37 2007 Subject: [Slony1-commit] By cbbrowne: Add in note that we now have partial indexes on the Message-ID: <20060731184953.1839911BF03A@gborg.postgresql.org> Log Message: ----------- Add in note that we now have partial indexes on the sl_log_? tables Modified Files: -------------- slony1-engine: RELEASE-1.2.0 (r1.6 -> r1.7) -------------- next part -------------- Index: RELEASE-1.2.0 =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/RELEASE-1.2.0,v retrieving revision 1.6 retrieving revision 1.7 diff -LRELEASE-1.2.0 -LRELEASE-1.2.0 -u -w -r1.6 -r1.7 --- RELEASE-1.2.0 +++ RELEASE-1.2.0 @@ -54,6 +54,13 @@ UPGRADE FUNCTIONS will remove OIDs from Slony-I tables in existing schemas, too. +- When possible (based on log switching functionality), partial + indexes on sl_log_1 and sl_log_2 are created on a per-origin-node + basis. This provides the performance boost of having an easily + recognizable index, but without the risk of having XIDs from + different nodes mixed together in one index, where rollover could + Cause Problems... 
+ These features are generally configurable, but the defaults ought to allow improved behaviour for all but the most "Extreme Uses." From cvsuser Mon Jul 31 11:50:56 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:37 2007 Subject: [Slony1-commit] By cbbrowne: Fix tagging errors Message-ID: <20060731185056.449E511BF09D@gborg.postgresql.org> Log Message: ----------- Fix tagging errors Modified Files: -------------- slony1-engine/doc/adminguide: addthings.sgml (r1.18 -> r1.19) adminscripts.sgml (r1.37 -> r1.38) ddlchanges.sgml (r1.25 -> r1.26) loganalysis.sgml (r1.1 -> r1.2) maintenance.sgml (r1.23 -> r1.24) slonyupgrade.sgml (r1.1 -> r1.2) -------------- next part -------------- Index: slonyupgrade.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonyupgrade.sgml,v retrieving revision 1.1 retrieving revision 1.2 diff -Ldoc/adminguide/slonyupgrade.sgml -Ldoc/adminguide/slonyupgrade.sgml -u -w -r1.1 -r1.2 --- doc/adminguide/slonyupgrade.sgml +++ doc/adminguide/slonyupgrade.sgml @@ -5,7 +5,7 @@ When upgrading &slony1;, the installation on all nodes in a cluster must be upgraded at once, using the &lslonik; -command . +command . While this requires temporarily stopping replication, it does not forcibly require an outage for applications that submit Index: addthings.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v retrieving revision 1.18 retrieving revision 1.19 diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.18 -r1.19 --- doc/adminguide/addthings.sgml +++ doc/adminguide/addthings.sgml @@ -315,7 +315,7 @@ How do I upgrade -&slony1; to a newer version? +&slony1; to a newer version? What happens when I fail over? 
Index: maintenance.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v retrieving revision 1.23 retrieving revision 1.24 diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.23 -r1.24 --- doc/adminguide/maintenance.sgml +++ doc/adminguide/maintenance.sgml @@ -50,7 +50,10 @@ storing data in &sllog1; and &sllog2; so that it may seek to TRUNCATE the elder data. - That means that on a regular basis, these tables are completely cleared out, so that you will not suffer from them having grown to some significant size, due to heavy load, after which they are incapable of shrinking back down + That means that on a regular basis, these tables are completely +cleared out, so that you will not suffer from them having grown to +some significant size, due to heavy load, after which they are +incapable of shrinking back down Index: adminscripts.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v retrieving revision 1.37 retrieving revision 1.38 diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.37 -r1.38 --- doc/adminguide/adminscripts.sgml +++ doc/adminguide/adminscripts.sgml @@ -119,6 +119,7 @@ slonik_build_env +slonik_build_env Queries a database, generating output hopefully suitable for slon_tools.conf consisting of: @@ -258,6 +259,8 @@ mkslonconf.sh +script - mkslonconf.sh + This is a shell script designed to rummage through a &slony1; cluster and generate a set of slon.conf files that &lslon; accesses via the slon -f slon.conf @@ -352,6 +355,9 @@ launch_clusters.sh +script - launch_clusters.sh + + This is another shell script which uses the configuration as set up by mkslonconf.sh and is intended to either be run at system boot time, as an addition to the @@ -398,6 +404,9 @@ slony-cluster-analysis 
+script - slony-cluster-analysis + + If you are running a lot of replicated databases, where there are numerous &slony1; clusters, it can get painful to track and document this. The following tools may be of some assistance in this. Index: ddlchanges.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v retrieving revision 1.25 retrieving revision 1.26 diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.25 -r1.26 --- doc/adminguide/ddlchanges.sgml +++ doc/adminguide/ddlchanges.sgml @@ -218,7 +218,7 @@ propagated at the same location in the transaction stream on all the nodes, then you but no tables need to be locked, then you need to use EXECUTE SCRIPT, locking challenges or -no. +no. You may want an extra index on some replicated node(s) in order to improve performance there. Index: loganalysis.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/loganalysis.sgml,v retrieving revision 1.1 retrieving revision 1.2 diff -Ldoc/adminguide/loganalysis.sgml -Ldoc/adminguide/loganalysis.sgml -u -w -r1.1 -r1.2 --- doc/adminguide/loganalysis.sgml +++ doc/adminguide/loganalysis.sgml @@ -62,6 +62,8 @@ How to read &slony1; logs +reading and understanding &slony1; logs + Note that as far as slon is concerned, there is no master or slave. They are just nodes. @@ -172,8 +174,7 @@ the DEBUG4 messages that are generally uninteresting. - Log Messages Associated with Log -Shipping <title> +<sect3 id="logshiplog"><title> Log Messages Associated with Log Shipping Most of these represent errors that come up if the functionality breaks. You may expect @@ -785,7 +786,7 @@ Many of these errors will occur if you submit a &lslonik; script that describes a reconfiguration incompatible with your cluster's current configuration. 
Those will lead to the -feeling: Whew, I'm glad &lslonik; caught that for me! +feeling: Whew, I'm glad &lslonik; caught that for me! Some of the others lead to a &lslon; telling itself to fall over; all should be well when you restart it, as From cvsuser Mon Jul 31 11:56:24 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:37 2007 Subject: [Slony1-commit] By cbbrowne: Schema documentation - per autodoc - for 1.2 RC Message-ID: <20060731185624.DF74D11BF03A@gborg.postgresql.org> Log Message: ----------- Schema documentation - per autodoc - for 1.2 RC Modified Files: -------------- slony1-engine/doc/adminguide: schemadoc.xml (r1.6 -> r1.7) -------------- next part -------------- Index: schemadoc.xml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/schemadoc.xml,v retrieving revision 1.6 retrieving revision 1.7 diff -Ldoc/adminguide/schemadoc.xml -Ldoc/adminguide/schemadoc.xml -u -w -r1.6 -r1.7 --- doc/adminguide/schemadoc.xml +++ doc/adminguide/schemadoc.xml @@ -385,6 +385,7 @@ STORE_TRIGGER = DROP_TRIGGER = MOVE_SET = + ACCEPT_SET = SET_DROP_TABLE = SET_DROP_SEQUENCE = SET_MOVE_TABLE = @@ -1127,6 +1128,10 @@ + + Is the node being used for log shipping? + + @@ -1183,6 +1188,119 @@ +
+ + Table: + + <structname>sl_nodelock</structname> + + + + + Used to prevent multiple slon instances and to identify the backends to kill in terminateNodeConnections(). + + + + + + + Structure of <structname>sl_nodelock</structname> + + + + + nl_nodeid + + integer + + + PRIMARY KEY + + + + + + + + + + + + + + + + + Clients node_id + + + + + + + nl_conncnt + + serial + + + PRIMARY KEY + + + + + + + + + + + + + + + + + Clients connection number + + + + + + + nl_backendpid + + integer + + + + + + + + + + + PID of database backend owning this lock + + + + + + + + + + + + + + + + + +
+
@@ -1357,6 +1475,130 @@ </para> </section> + <section id="table.sl-registry" + xreflabel="sl_registry"> + <title id="table.sl-registry-title"> + Table: + + <structname>sl_registry</structname> + + + + + Stores miscellaneous runtime data + + + + + + + Structure of <structname>sl_registry</structname> + + + + + reg_key + + text + + + PRIMARY KEY + + + + + + + + + + + + + + + + + Unique key of the runtime option + + + + + + + reg_int4 + + integer + + + + + + + + + + + Option value if type int4 + + + + + + + reg_text + + text + + + + + + + + + + + Option value if type text + + + + + + + reg_timestamp + + timestamp without time zone + + + + + + + + + + + Option value if type timestamp + + + + + + + + + + + + + + + + + +
+
@@ -2342,7 +2584,7 @@ </para> <para> - Has this subscription been activated? This is not set until the subscriber has received COPY data from the provider + Has this subscription been activated? This is not set on the subscriber until AFTER the subscriber has received COPY data from the provider </para> </listitem> @@ -2761,6 +3003,84 @@ </para> </section> +<!-- Function addpartiallogindices( ) --> + <section id="function.addpartiallogindices" + xreflabel="schemadocaddpartiallogindices( )"> + <title id="function.addpartiallogindices-title"> + addpartiallogindices( ) + + + addpartiallogindices( ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + integer + + + + Add partial indexes, if possible, to the unused sl_log_? table for +all origin nodes, and drop any that are no longer needed. + +This function presently gets run any time set origins are manipulated +(FAILOVER, STORE SET, MOVE SET, DROP SET), as well as each time the +system switches between sl_log_1 and sl_log_2. + +DECLARE + v_current_status int4; + v_log int4; + v_dummy record; + idef text; + v_count int4; +BEGIN + v_count := 0; + select last_value into v_current_status from sl_log_status; + + -- If status is 2 or 3 --> in process of cleanup --> unsafe to create indices + if v_current_status in (2, 3) then + return 0; + end if; + + if v_current_status = 0 then -- Which log should get indices? + v_log := 2; + else + v_log := 1; + end if; + + -- Add missing indices... 
+ for v_dummy in select distinct set_origin from sl_set + where not exists + (select * from pg_catalog.pg_indexes where schemaname = 'schemadoc' + and tablename = 'sl_log_' || v_log and + indexname = 'PartInd_schemadoc_sl_log_' || v_log || '-node-' || set_origin) loop + idef := 'create index "PartInd_schemadoc_sl_log_' || v_log || '-node-' || v_dummy.set_origin || + '" on sl_log_' || v_log || ' USING btree(log_xid xxid_ops) where (log_origin = ' || v_dummy.set_origin || ');'; + execute idef; + v_count := v_count + 1; + end loop; + + -- Remove unneeded indices... + for v_dummy in select indexname from pg_catalog.pg_indexes i where i.schemaname = '@NAMESPACE' + and i.tablename = 'sl_log_' || v_log and + not exists (select 1 from sl_set where + i.indexname = 'PartInd_schemadoc_sl_log_' || v_log || '-node-' || set_origin) + loop + idef := 'drop index "schemadoc"."' || v_dummy.indexname || '";'; + execute idef; + v_count := v_count - 1; + end loop; + return v_count; +END + + +
+
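The addPartialLogIndices() listing above picks whichever of sl_log_1/sl_log_2 is currently idle and reconciles its per-origin partial indexes against the set origins. As an illustration only (this is not Slony-I code, and the function and parameter names are invented for the example), the same decision logic and DDL-string construction can be sketched in Python:

```python
def partial_log_index_ddl(log_status, origins, existing, namespace="schemadoc"):
    """Sketch of addPartialLogIndices(): decide which sl_log_? table is
    idle and emit the CREATE/DROP INDEX statements needed to give it one
    partial index per set origin (and no others)."""
    if log_status in (2, 3):             # log-switch cleanup in progress: unsafe
        return []
    idle = 2 if log_status == 0 else 1   # status 0 -> sl_log_1 active -> index sl_log_2
    want = {o: 'PartInd_%s_sl_log_%d-node-%d' % (namespace, idle, o)
            for o in origins}
    # Add indexes that are missing for current origins...
    ddl = ['create index "%s" on sl_log_%d USING btree(log_xid xxid_ops) '
           'where (log_origin = %d);' % (name, idle, o)
           for o, name in want.items() if name not in existing]
    # ...and drop the ones no origin needs any more.
    ddl += ['drop index "%s"."%s";' % (namespace, name)
            for name in existing if name not in want.values()]
    return ddl
```

Note that, like the PL/pgSQL original, the sketch refuses to act while a log switch cleanup (status 2 or 3) is running.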
@@ -2799,6 +3119,8 @@ v_tab_fqname text; v_tab_attkind text; v_n int4; + v_trec record; + v_tgbad boolean; begin -- ---- -- Grab the central configuration lock @@ -2831,11 +3153,11 @@ and PGXC.relname = T.tab_idxname for update; if not found then - raise exception 'Slony-I: Table with id % not found', p_tab_id; + raise exception 'Slony-I: alterTableForReplication(): Table with id % not found', p_tab_id; end if; v_tab_fqname = v_tab_row.tab_fqname; if v_tab_row.tab_altered then - raise exception 'Slony-I: Table % is already in altered state', + raise exception 'Slony-I: alterTableForReplication(): Table % is already in altered state', v_tab_fqname; end if; @@ -2865,6 +3187,32 @@ -- ---- + -- Check to see if there are any trigger conflicts... + -- ---- + v_tgbad := 'false'; + for v_trec in + select pc.relname, tg1.tgname from + "pg_catalog".pg_trigger tg1, + "pg_catalog".pg_trigger tg2, + "pg_catalog".pg_class pc, + "pg_catalog".pg_index pi, + sl_table tab + where + tg1.tgname = tg2.tgname and -- Trigger names match + tg1.tgrelid = tab.tab_reloid and -- trigger 1 is on the table + pi.indexrelid = tg2.tgrelid and -- trigger 2 is on the index + pi.indrelid = tab.tab_reloid and -- indexes table is this table + pc.oid = tab.tab_reloid + loop + raise notice 'Slony-I: alterTableForReplication(): multiple instances of trigger % on table %', + v_trec.tgname, v_trec.relname; + v_tgbad := 'true'; + end loop; + if v_tgbad then + raise exception 'Slony-I: Unable to disable triggers'; + end if; + + -- ---- -- Disable all existing triggers -- ---- update "pg_catalog".pg_trigger @@ -2987,11 +3335,11 @@ and PGXC.relname = T.tab_idxname for update; if not found then - raise exception 'Slony-I: Table with id % not found', p_tab_id; + raise exception 'Slony-I: alterTableRestore(): Table with id % not found', p_tab_id; end if; v_tab_fqname = v_tab_row.tab_fqname; if not v_tab_row.tab_altered then - raise exception 'Slony-I: Table % is not in altered state', + raise exception 'Slony-I: 
alterTableRestore(): Table % is not in altered state', v_tab_fqname; end if; @@ -3053,10 +3401,49 @@
- -
- +<!-- Function checkmoduleversion( ) --> + <section id="function.checkmoduleversion" + xreflabel="schemadoccheckmoduleversion( )"> + <title id="function.checkmoduleversion-title"> + checkmoduleversion( ) + + + checkmoduleversion( ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + text + + + + Inline test function that verifies that slonik request for STORE +NODE/INIT CLUSTER is being run against a conformant set of +schema/functions. + +declare + moduleversion text; +begin + select into moduleversion getModuleVersion(); + if moduleversion <> '@MODULEVERSION@' then + raise exception 'Slonik version: @MODULEVERSION@ != Slony-I version in PG build %', + moduleversion; + end if; + return null; +end; + +
+ + +
+ cleanupevent( ) @@ -3130,20 +3517,45 @@ end if; end loop; + -- ---- + -- If cluster has only one node, then remove all events up to + -- the last SYNC - Bug #1538 + -- http://gborg.postgresql.org/project/slony1/bugs/bugupdate.php?1538 + -- ---- + + select * into v_min_row from sl_node where + no_id <> getLocalNodeId('_schemadoc') limit 1; + if not found then + select ev_origin, ev_seqno into v_min_row from sl_event + where ev_origin = getLocalNodeId('_schemadoc') + order by ev_origin desc, ev_seqno desc limit 1; + raise notice 'Slony-I: cleanupEvent(): Single node - deleting events < %', v_min_row.ev_seqno; + delete from sl_event + where + ev_origin = v_min_row.ev_origin and + ev_seqno < v_min_row.ev_seqno; + + end if; + + -- ---- + -- Also remove stale entries from the nodelock table. + -- ---- + perform cleanupNodelock(); + return 0; end;
- -
- - cleanuplistener( ) +<!-- Function cleanupnodelock( ) --> + <section id="function.cleanupnodelock" + xreflabel="schemadoccleanupnodelock( )"> + <title id="function.cleanupnodelock-title"> + cleanupnodelock( ) - - cleanuplistener( ) + + cleanupnodelock( ) @@ -3153,13 +3565,79 @@ Language Return Type - C + PLPGSQL integer - look for stale pg_listener entries and submit Async_Unlisten() to them - _Slony_I_cleanupListener + Clean up stale entries when restarting slon + +declare + v_row record; +begin + for v_row in select nl_nodeid, nl_conncnt, nl_backendpid + from sl_nodelock + for update + loop + if killBackend(v_row.nl_backendpid, 'NULL') < 0 then + raise notice 'Slony-I: cleanup stale sl_nodelock entry for pid=%', + v_row.nl_backendpid; + delete from sl_nodelock where + nl_nodeid = v_row.nl_nodeid and + nl_conncnt = v_row.nl_conncnt; + end if; + end loop; + + return 0; +end; + + +
+ + +
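The cleanupNodelock() body above walks sl_nodelock and discards rows whose backend PID is dead, using killBackend(pid, 'NULL') purely as an existence probe. A rough Python analogue of that probe-and-prune loop (illustrative only; os.kill with signal 0 stands in for killBackend, and the function name is invented):

```python
import os

def cleanup_node_locks(locks):
    """Keep only sl_nodelock-style rows whose backend PID still exists;
    stale rows (dead backends) are discarded, mirroring cleanupNodelock()."""
    live = []
    for nodeid, conncnt, pid in locks:
        try:
            os.kill(pid, 0)                      # signal 0: existence check only
            live.append((nodeid, conncnt, pid))  # backend alive, keep the lock
        except ProcessLookupError:
            pass                                 # stale entry: the backend is gone
        except PermissionError:
            live.append((nodeid, conncnt, pid))  # exists, but owned by another user
    return live
```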
+ + copyfields( integer ) + + + copyfields( integer ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + text + + + + Return a string consisting of what should be appended to a COPY statement +to specify fields for the passed-in tab_id. + +In PG versions > 7.3, this looks like (field1,field2,...fieldn) + +declare + result text; + prefix text; + prec record; +begin + result := ''; + prefix := '('; -- Initially, prefix is the opening paren + + for prec in select slon_quote_input(a.attname) as column from sl_table t, pg_catalog.pg_attribute a where t.tab_id = $1 and t.tab_reloid = a.attrelid and a.attnum > 0 and a.attisdropped = false order by attnum + loop + result := result || prefix || prec.column; + prefix := ','; -- Subsequently, prepend columns with commas + end loop; + result := result || ')'; + return result; +end; +
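copyFields() is pure string assembly: the prefix starts as the opening paren and becomes a comma after the first column. That trick is easy to mirror in a few lines of Python (a sketch, not Slony-I code):

```python
def copy_fields(columns):
    """Build the '(field1,field2,...)' list appended to a COPY statement,
    using the same prefix trick as copyFields()."""
    result, prefix = '', '('   # prefix opens the list...
    for col in columns:
        result += prefix + col
        prefix = ','           # ...then separates subsequent columns
    return result + ')'
```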
@@ -3424,14 +3902,14 @@ - -
- - ddlscript( integer, text, integer ) +<!-- Function ddlscript_complete( integer, text, integer ) --> + <section id="function.ddlscript-complete-integer-text-integer" + xreflabel="schemadocddlscript_complete( integer, text, integer )"> + <title id="function.ddlscript-complete-integer-text-integer-title"> + ddlscript_complete( integer, text, integer ) - - ddlscript( integer, text, integer ) + + ddlscript_complete( integer, text, integer ) @@ -3442,15 +3920,15 @@ Return Type PLPGSQL - bigint + integer - ddlScript(set_id, script, only_on_node) + ddlScript_complete(set_id, script, only_on_node) -Generates a SYNC event, runs the script on the origin, and then -generates a DDL_SCRIPT event to request it to be run on replicated -slaves. +After script has run on origin, this fixes up relnames, restores +triggers, and generates a DDL_SCRIPT event to request it to be run on +replicated slaves. declare p_set_id alias for $1; @@ -3458,6 +3936,89 @@ p_only_on_node alias for $3; v_set_origin int4; begin + perform updateRelname(p_set_id, p_only_on_node); + return createEvent('_schemadoc', 'DDL_SCRIPT', + p_set_id, p_script, p_only_on_node); +end; + + +
+ + +
+ + ddlscript_complete_int( integer, integer ) + + + ddlscript_complete_int( integer, integer ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + integer + + + + ddlScript_complete_int(set_id, script, only_on_node) + +Complete processing the DDL_SCRIPT event. This puts tables back into +replicated mode. + +declare + p_set_id alias for $1; + p_only_on_node alias for $2; + v_row record; +begin + -- ---- + -- Put all tables back into replicated mode + -- ---- + for v_row in select * from sl_table + loop + perform alterTableForReplication(v_row.tab_id); + end loop; + + return p_set_id; +end; + + +
+ + +
+ + ddlscript_prepare( integer, integer ) + + + ddlscript_prepare( integer, integer ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + integer + + + + Prepare for DDL script execution on origin + +declare + p_set_id alias for $1; + p_only_on_node alias for $2; + v_set_origin int4; +begin -- ---- -- Grab the central configuration lock -- ---- @@ -3482,23 +4043,20 @@ -- Create a SYNC event, run the script and generate the DDL_SCRIPT event -- ---- perform createEvent('_schemadoc', 'SYNC', NULL); - perform ddlScript_int(p_set_id, p_script, p_only_on_node); - perform updateRelname(p_set_id, p_only_on_node); - return createEvent('_schemadoc', 'DDL_SCRIPT', - p_set_id, p_script, p_only_on_node); + return 1; end;
- -
- - ddlscript_int( integer, text, integer ) +<!-- Function ddlscript_prepare_int( integer, integer ) --> + <section id="function.ddlscript-prepare-int-integer-integer" + xreflabel="schemadocddlscript_prepare_int( integer, integer )"> + <title id="function.ddlscript-prepare-int-integer-integer-title"> + ddlscript_prepare_int( integer, integer ) - - ddlscript_int( integer, text, integer ) + + ddlscript_prepare_int( integer, integer ) @@ -3513,16 +4071,14 @@ - ddlScript_int(set_id, script, only_on_node) + ddlScript_prepare_int (set_id, only_on_node) -Processes the DDL_SCRIPT event. On slave nodes, this restores -original triggers/rules, runs the script, and then puts tables back -into replicated mode. +Do preparatory work for a DDL script, restoring +triggers/rules to original state. declare p_set_id alias for $1; - p_script alias for $2; - p_only_on_node alias for $3; + p_only_on_node alias for $2; v_set_origin int4; v_no_id int4; v_row record; @@ -3561,28 +4117,12 @@ end if; -- ---- - -- Restore all original triggers and rules + -- Restore all original triggers and rules of all sets -- ---- for v_row in select * from sl_table - where tab_set = p_set_id loop perform alterTableRestore(v_row.tab_id); end loop; - - -- ---- - -- Run the script - -- ---- - execute p_script; - - -- ---- - -- Put all tables back into replicated mode - -- ---- - for v_row in select * from sl_table - where tab_set = p_set_id - loop - perform alterTableForReplication(v_row.tab_id); - end loop; - return p_set_id; end; @@ -3941,7 +4481,7 @@ where slon_quote_brute(PGN.nspname) || '.' 
|| slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace) is null then - raise exception 'Slony-I: table % not found', v_tab_fqname_quoted; + raise exception 'Slony-I: determineIdxnameUnique(): table % not found', v_tab_fqname_quoted; end if; -- @@ -4094,8 +4634,6 @@ p_li_provider alias for $2; p_li_receiver alias for $3; begin - return -1; - perform dropListen_int(p_li_origin, p_li_provider, p_li_receiver); @@ -4138,8 +4676,6 @@ p_li_provider alias for $2; p_li_receiver alias for $3; begin - return -1; - -- ---- -- Grab the central configuration lock -- ---- @@ -4212,7 +4748,7 @@ if exists (select true from sl_subscribe where sub_provider = p_no_id) then - raise exception 'Slony-I: Node % is still configured as data provider', + raise exception 'Slony-I: Node % is still configured as a data provider', p_no_id; end if; @@ -4570,6 +5106,9 @@ -- Regenerate sl_listen since we revised the subscriptions perform RebuildListenEntries(); + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform addPartialLogIndices(); + return p_set_id; end; @@ -5009,7 +5548,7 @@ -- ---- -- All consistency checks first - -- Check that every system that has a path to the failed node + -- Check that every node that has a path to the failed node -- also has a path to the backup node. -- ---- for v_row in select P.pa_client @@ -5033,7 +5572,7 @@ loop -- ---- -- Check that the backup node is subscribed to all sets - -- that origin on the failed node + -- that originate on the failed node -- ---- select into v_row2 sub_forward, sub_active from sl_subscribe @@ -5069,46 +5608,7 @@ -- ---- -- Terminate all connections of the failed node the hard way -- ---- - perform terminateNodeConnections( - '_schemadoc_Node_' || p_failed_node); - --- Note that the following code should all become obsolete in the wake --- of the availability of RebuildListenEntries()... 
- -if false then - -- ---- - -- Let every node that listens for something on the failed node - -- listen for that on the backup node instead. - -- ---- - for v_row in select * from sl_listen - where li_provider = p_failed_node - and li_receiver <> p_backup_node - loop - perform storeListen_int(v_row.li_origin, - p_backup_node, v_row.li_receiver); - end loop; - - -- ---- - -- Let the backup node listen for all events where the - -- failed node did listen for it. - -- ---- - for v_row in select li_origin, li_provider - from sl_listen - where li_receiver = p_failed_node - and li_provider <> p_backup_node - loop - perform storeListen_int(v_row.li_origin, - v_row.li_provider, p_backup_node); - end loop; - - -- ---- - -- Remove all sl_listen entries that receive anything from the - -- failed node. - -- ---- - delete from sl_listen - where li_provider = p_failed_node - or li_receiver = p_failed_node; -end if; + perform terminateNodeConnections(p_failed_node); -- ---- -- Move the sets @@ -5141,12 +5641,10 @@ loop perform alterTableRestore(v_row2.tab_id); end loop; - end if; update sl_set set set_origin = p_backup_node where set_id = v_row.set_id; - if p_backup_node = getLocalNodeId('_schemadoc') then delete from sl_setsync where ssy_setid = v_row.set_id; @@ -5192,6 +5690,9 @@ -- Rewrite sl_listen table perform RebuildListenEntries(); + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform addPartialLogIndices(); + -- ---- -- Make sure the node daemon will restart -- ---- @@ -5231,7 +5732,7 @@ FUNCTION failedNode2 (failed_node, backup_node, set_id, ev_seqno, ev_seqfake) On the node that has the highest sequence number of the failed node, -fake the FAILED_NODE event. +fake the FAILOVER_SET event. 
declare p_failed_node alias for $1; @@ -5346,6 +5847,15 @@ loop perform alterTableForReplication(v_row.tab_id); end loop; + insert into sl_event + (ev_origin, ev_seqno, ev_timestamp, + ev_minxid, ev_maxxid, ev_xip, + ev_type, ev_data1, ev_data2, ev_data3) + values + (p_backup_node, "pg_catalog".nextval('sl_event_seq'), CURRENT_TIMESTAMP, + '0', '0', '', + 'ACCEPT_SET', p_set_id::text, + p_failed_node::text, p_backup_node::text); else delete from sl_subscribe where sub_set = p_set_id @@ -5475,7 +5985,7 @@ - Generate a sync event if there has not been one in 30 seconds. + Generate a sync event if there has not been one in the requested interval. declare p_interval alias for $1; @@ -5640,6 +6150,33 @@
+ +
+ + killbackend( integer, text ) + + + killbackend( integer, text ) + + + + + Function Properties + + Language + Return Type + + C + integer + + + + Send a signal to a postgres process. Requires superuser rights + _Slony_I_killBackend + +
+
@@ -5757,6 +6294,268 @@
+ +
+ + logswitch_finish( ) + + + logswitch_finish( ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + integer + + + + logswitch_finish() + +Attempt to finalize a log table switch in progress + +DECLARE + v_current_status int4; + v_dummy record; +BEGIN + -- ---- + -- Grab the central configuration lock to prevent race conditions + -- while changing the sl_log_status sequence value. + -- ---- + lock table sl_config_lock; + + -- ---- + -- Get the current log status. + -- ---- + select last_value into v_current_status from sl_log_status; + + -- ---- + -- status value 0 or 1 means that there is no log switch in progress + -- ---- + if v_current_status = 0 or v_current_status = 1 then + return 0; + end if; + + -- ---- + -- status = 2: sl_log_1 active, cleanup sl_log_2 + -- ---- + if v_current_status = 2 then + -- ---- + -- The cleanup thread calls us after it did the delete and + -- vacuum of both log tables. If sl_log_2 is empty now, we + -- can truncate it and the log switch is done. + -- ---- + for v_dummy in select 1 from sl_log_2 loop + -- ---- + -- Found a row ... log switch is still in progress. + -- ---- + raise notice 'Slony-I: log switch to sl_log_1 still in progress - sl_log_2 not truncated'; + return -1; + end loop; + + raise notice 'Slony-I: log switch to sl_log_1 complete - truncate sl_log_2'; + truncate sl_log_2; + perform "pg_catalog".setval('sl_log_status', 0); + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform addPartialLogIndices(); + + return 1; + end if; + + -- ---- + -- status = 3: sl_log_2 active, cleanup sl_log_1 + -- ---- + if v_current_status = 3 then + -- ---- + -- The cleanup thread calls us after it did the delete and + -- vacuum of both log tables. If sl_log_2 is empty now, we + -- can truncate it and the log switch is done. + -- ---- + for v_dummy in select 1 from sl_log_1 loop + -- ---- + -- Found a row ... log switch is still in progress. 
+ -- ---- + raise notice 'Slony-I: log switch to sl_log_2 still in progress - sl_log_1 not truncated'; + return -1; + end loop; + + raise notice 'Slony-I: log switch to sl_log_2 complete - truncate sl_log_1'; + truncate sl_log_1; + perform "pg_catalog".setval('sl_log_status', 1); + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform addPartialLogIndices(); + return 2; + end if; +END; + + +
+ + +
+ + logswitch_start( ) + + + logswitch_start( ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + integer + + + + logswitch_start() + +Initiate a log table switch if none is in progress + +DECLARE + v_current_status int4; +BEGIN + -- ---- + -- Grab the central configuration lock to prevent race conditions + -- while changing the sl_log_status sequence value. + -- ---- + lock table sl_config_lock; + + -- ---- + -- Get the current log status. + -- ---- + select last_value into v_current_status from sl_log_status; + + -- ---- + -- status = 0: sl_log_1 active, sl_log_2 clean + -- Initiate a switch to sl_log_2. + -- ---- + if v_current_status = 0 then + perform "pg_catalog".setval('sl_log_status', 3); + perform registry_set_timestamp( + 'logswitch.laststart', now()::timestamp); + raise notice 'Slony-I: Logswitch to sl_log_2 initiated'; + return 2; + end if; + + -- ---- + -- status = 1: sl_log_2 active, sl_log_1 clean + -- Initiate a switch to sl_log_1. + -- ---- + if v_current_status = 1 then + perform "pg_catalog".setval('sl_log_status', 2); + perform registry_set_timestamp( + 'logswitch.laststart', now()::timestamp); + raise notice 'Slony-I: Logswitch to sl_log_1 initiated'; + return 1; + end if; + + raise exception 'Previous logswitch still in progress'; +END; + + +
+ + +
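Taken together, logswitch_start() and logswitch_finish() drive a four-state machine on the sl_log_status sequence: 0 (sl_log_1 active, sl_log_2 clean), 1 (sl_log_2 active, sl_log_1 clean), 2 (sl_log_1 active, draining sl_log_2), 3 (sl_log_2 active, draining sl_log_1). A toy in-memory model of that state machine, with the same transitions and return codes (illustrative only; the class name is invented):

```python
class LogSwitch:
    """Toy model of the sl_log_status state machine:
       0: sl_log_1 active, sl_log_2 clean     1: sl_log_2 active, sl_log_1 clean
       2: sl_log_1 active, draining sl_log_2  3: sl_log_2 active, draining sl_log_1"""
    def __init__(self):
        self.status = 0
        self.logs = {1: [], 2: []}       # stand-ins for sl_log_1 / sl_log_2

    def start(self):
        if self.status == 0:
            self.status = 3              # switch writers over to sl_log_2
            return 2
        if self.status == 1:
            self.status = 2              # switch writers over to sl_log_1
            return 1
        raise RuntimeError('Previous logswitch still in progress')

    def finish(self):
        if self.status in (0, 1):
            return 0                     # no switch in progress
        old = 2 if self.status == 2 else 1
        if self.logs[old]:
            return -1                    # rows remain: switch still in progress
        self.logs[old] = []              # "truncate" the drained table
        self.status = 0 if self.status == 2 else 1
        return 1 if self.status == 0 else 2
```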
+ + logswitch_weekly( ) + + + logswitch_weekly( ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + integer + + + + logswitch_weekly() + +Ensure a logswitch is done at least weekly + +DECLARE + v_now timestamp; + v_now_dow int4; + v_auto_dow int4; + v_auto_time time; + v_auto_ts timestamp; + v_lastrun timestamp; + v_laststart timestamp; + v_days_since int4; +BEGIN + -- ---- + -- Check that today is the day to run at all + -- ---- + v_auto_dow := registry_get_int4( + 'logswitch_weekly.dow', 0); + v_now := "pg_catalog".now(); + v_now_dow := extract (DOW from v_now); + if v_now_dow <> v_auto_dow then + perform registry_set_timestamp( + 'logswitch_weekly.lastrun', v_now); + return 0; + end if; + + -- ---- + -- Check that the last run of this procedure was before and now is + -- after the time we should automatically switch logs. + -- ---- + v_auto_time := registry_get_text( + 'logswitch_weekly.time', '02:00'); + v_auto_ts := current_date + v_auto_time; + v_lastrun := registry_get_timestamp( + 'logswitch_weekly.lastrun', 'epoch'); + if v_lastrun >= v_auto_ts or v_now < v_auto_ts then + perform registry_set_timestamp( + 'logswitch_weekly.lastrun', v_now); + return 0; + end if; + + -- ---- + -- This is the moment configured in dow+time. Check that the + -- last logswitch was done more than 2 days ago. + -- ---- + v_laststart := registry_get_timestamp( + 'logswitch.laststart', 'epoch'); + v_days_since := extract (days from (v_now - v_laststart)); + if v_days_since < 2 then + perform registry_set_timestamp( + 'logswitch_weekly.lastrun', v_now); + return 0; + end if; + + -- ---- + -- Fire off an automatic logswitch + -- ---- + perform logswitch_start(); + perform registry_set_timestamp( + 'logswitch_weekly.lastrun', v_now); + return 1; +END; + + +
+
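The scheduling test in logswitch_weekly() boils down to three gates: today must be the configured weekday, the configured time-of-day must have passed since the last check, and the last switch must have started at least two days ago. As a pure function this looks roughly like the following (a sketch under stated assumptions — note that PostgreSQL's extract(DOW) counts 0=Sunday while Python's weekday() counts 0=Monday, so the example simply adopts Python's convention):

```python
from datetime import datetime, timedelta

def weekly_switch_due(now, auto_dow, auto_time, lastrun, laststart):
    """Decision logic of logswitch_weekly() as a pure function.
    auto_dow uses Python's weekday() convention (0=Monday)."""
    if now.weekday() != auto_dow:
        return False                              # not the configured day
    auto_ts = datetime.combine(now.date(), auto_time)
    if lastrun >= auto_ts or now < auto_ts:
        return False                              # already ran, or too early today
    return (now - laststart) >= timedelta(days=2) # last switch > 2 days ago
```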
@@ -6113,17 +6912,9 @@ -- On the new origin, raise an event - ACCEPT_SET if v_local_node_id = p_new_origin then - -- Find the event number from the origin - select max(ev_seqno) as seqno into v_sub_row - from sl_event - where ev_type = 'MOVE_SET' and - ev_data1 = p_set_id and - ev_data2 = p_old_origin and - ev_data3 = p_new_origin and - ev_origin = p_old_origin; perform createEvent('_schemadoc', 'ACCEPT_SET', - p_set_id, p_old_origin, p_new_origin, v_sub_row.seqno); + p_set_id, p_old_origin, p_new_origin); end if; -- ---- @@ -6239,40 +7030,308 @@ end if; end if; end if; - delete from sl_subscribe - where sub_set = p_set_id - and sub_receiver = p_new_origin; + delete from sl_subscribe + where sub_set = p_set_id + and sub_receiver = p_new_origin; + + -- Regenerate sl_listen since we revised the subscriptions + perform RebuildListenEntries(); + + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table + perform addPartialLogIndices(); + + -- ---- + -- If we are the new or old origin, we have to + -- put all the tables into altered state again. + -- ---- + if v_local_node_id = p_old_origin or v_local_node_id = p_new_origin then + for v_tab_row in select tab_id from sl_table + where tab_set = p_set_id + order by tab_id + loop + perform alterTableForReplication(v_tab_row.tab_id); + end loop; + end if; + + return p_set_id; +end; + + +
+ + +
+ + reachablefromnode( integer, integer[] ) + + + reachablefromnode( integer, integer[] ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + SET OF integer + + + + ReachableFromNode(receiver, blacklist) + +Find all nodes that <receiver> can receive events from without +using nodes in <blacklist> as a relay. + +declare + v_node alias for $1 ; + v_blacklist alias for $2 ; + v_ignore int4[] ; + v_reachable_edge_last int4[] ; + v_reachable_edge_new int4[] default '{}' ; + v_server record ; +begin + v_reachable_edge_last := array[v_node] ; + v_ignore := v_blacklist || array[v_node] ; + return next v_node ; + while v_reachable_edge_last != '{}' loop + v_reachable_edge_new := '{}' ; + for v_server in select pa_server as no_id + from sl_path + where pa_client = ANY(v_reachable_edge_last) and pa_server != ALL(v_ignore) + loop + if v_server.no_id != ALL(v_ignore) then + v_ignore := v_ignore || array[v_server.no_id] ; + v_reachable_edge_new := v_reachable_edge_new || array[v_server.no_id] ; + return next v_server.no_id ; + end if ; + end loop ; + v_reachable_edge_last := v_reachable_edge_new ; + end loop ; + return ; +end ; + + +
+ + +
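ReachableFromNode() is a breadth-first traversal over sl_path edges, expanding one "edge set" per iteration while refusing to relay through blacklisted nodes. The same traversal in Python, for illustration (paths are modelled as (pa_server, pa_client) pairs; the function name is invented):

```python
def reachable_from_node(node, blacklist, paths):
    """All nodes that <node> can receive events from without using any
    node in <blacklist> as a relay, mirroring ReachableFromNode()."""
    ignore = set(blacklist) | {node}
    edge, reachable = [node], [node]
    while edge:
        new_edge = []
        for server, client in paths:
            # follow pa_client -> pa_server, skipping anything already seen
            if client in edge and server not in ignore:
                ignore.add(server)
                new_edge.append(server)
                reachable.append(server)
        edge = new_edge            # next BFS frontier
    return reachable
```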
+ + rebuildlistenentries( ) + + + rebuildlistenentries( ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + integer + + + + RebuildListenEntries() + +Invoked by various subscription and path modifying functions, this +rewrites the sl_listen entries, adding in all the ones required to +allow communications between nodes in the Slony-I cluster. + +declare + v_receiver record ; + v_provider record ; + v_origin record ; + v_reachable int4[] ; +begin + -- First remove the entire configuration + delete from sl_listen; + + -- Loop over every possible pair of receiver and provider + for v_receiver in select no_id from sl_node loop + for v_provider in select pa_server as no_id from sl_path where pa_client = v_receiver.no_id loop + + -- Find all nodes that v_provider.no_id can receiver events from without using v_receiver.no_id + for v_origin in select * from ReachableFromNode(v_provider.no_id, array[v_receiver.no_id]) as r(no_id) loop + + -- If v_receiver.no_id subscribes a set from v_provider.no_id, events have to travel the same + -- path as the data. Ignore possible sl_listen that would break that rule. + perform 1 from sl_subscribe + join sl_set on sl_set.set_id = sl_subscribe.sub_set + where + sub_receiver = v_receiver.no_id and + sub_provider != v_provider.no_id and + set_origin = v_origin.no_id ; + if not found then + insert into sl_listen (li_receiver, li_provider, li_origin) + values (v_receiver.no_id, v_provider.no_id, v_origin.no_id) ; + end if ; + + + end loop ; + + end loop ; + end loop ; + + return null ; +end ; + + +
+ + +
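The triple loop in RebuildListenEntries() can be hard to follow: for every receiver, for every direct provider of that receiver, it lists the origins the provider can reach without going through the receiver, then suppresses any candidate that would let events take a different path than the subscribed data. A self-contained Python sketch of that logic (illustrative only — the data shapes and names are invented for the example):

```python
def rebuild_listen(nodes, paths, subscribes, set_origins):
    """Sketch of RebuildListenEntries().
    paths:       set of (pa_server, pa_client) pairs
    subscribes:  (sub_set, sub_provider, sub_receiver) triples
    set_origins: {set_id: origin node}
    Returns sl_listen-style rows (li_origin, li_provider, li_receiver)."""
    def reachable(node, blocked):
        ignore, edge, out = {node, blocked}, [node], [node]
        while edge:
            nxt = list(dict.fromkeys(s for s, c in paths
                                     if c in edge and s not in ignore))
            ignore.update(nxt)
            out.extend(nxt)
            edge = nxt
        return out

    listen = []
    for receiver in nodes:
        for provider in {s for s, c in paths if c == receiver}:
            for origin in reachable(provider, receiver):
                # If the receiver subscribes a set originating here from a
                # *different* provider, events must follow the data path.
                conflict = any(r == receiver and p != provider
                               and set_origins[st] == origin
                               for st, p, r in subscribes)
                if not conflict:
                    listen.append((origin, provider, receiver))
    return listen
```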
+ + registernodeconnection( integer ) + + + registernodeconnection( integer ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + integer + + + + Register (uniquely) the node connection so that only one slon can service the node + +declare + p_nodeid alias for $1; +begin + insert into sl_nodelock + (nl_nodeid, nl_backendpid) + values + (p_nodeid, pg_backend_pid()); + + return 0; +end; + + +
+ + +
+ + registry_get_int4( text, integer ) + + + registry_get_int4( text, integer ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + integer + + + + registry_get_int4(key, value) + +Get a registry value. If not present, set and return the default. + +DECLARE + p_key alias for $1; + p_default alias for $2; + v_value int4; +BEGIN + select reg_int4 into v_value from sl_registry + where reg_key = p_key; + if not found then + v_value = p_default; + if p_default notnull then + perform registry_set_int4(p_key, p_default); + end if; + else + if v_value is null then + raise exception 'Slony-I: registry key % is not an int4 value', + p_key; + end if; + end if; + return v_value; +END; + + +
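The registry_get_*()/registry_set_*() family implements a small typed key/value store with "get with default" semantics: a miss stores and returns the supplied default, a hit of the wrong type raises, and setting NULL deletes the row. An in-memory Python analogue of that contract (a sketch only — sl_registry's three typed columns are approximated with a stored type tag):

```python
class Registry:
    """In-memory analogue of sl_registry with the registry_get_int4()
    contract: a miss stores and returns the default; a hit of the wrong
    type raises; a NULL value deletes the key."""
    def __init__(self):
        self._rows = {}                     # reg_key -> (type_name, value)

    def set(self, key, value, type_name):
        if value is None:
            self._rows.pop(key, None)       # NULL deletes the row
        else:
            self._rows[key] = (type_name, value)
        return value

    def get(self, key, default, type_name):
        row = self._rows.get(key)
        if row is None:
            if default is not None:
                self.set(key, default, type_name)  # remember the default
            return default
        stored_type, value = row
        if stored_type != type_name:
            raise ValueError('registry key %s is not a %s value'
                             % (key, type_name))
        return value
```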
- -- Regenerate sl_listen since we revised the subscriptions - perform RebuildListenEntries(); + +
+ + registry_get_text( text, text ) + + + registry_get_text( text, text ) + - -- ---- - -- If we are the new or old origin, we have to - -- put all the tables into altered state again. - -- ---- - if v_local_node_id = p_old_origin or v_local_node_id = p_new_origin then - for v_tab_row in select tab_id from sl_table - where tab_set = p_set_id - order by tab_id - loop - perform alterTableForReplication(v_tab_row.tab_id); - end loop; - end if; + + + Function Properties + + Language + Return Type + + PLPGSQL + text + + - return p_set_id; -end; + registry_get_text(key, value) + +Get a registry value. If not present, set and return the default. + +DECLARE + p_key alias for $1; + p_default alias for $2; + v_value text; +BEGIN + select reg_text into v_value from sl_registry + where reg_key = p_key; + if not found then + v_value = p_default; + if p_default notnull then + perform registry_set_text(p_key, p_default); + end if; + else + if v_value is null then + raise exception 'Slony-I: registry key % is not a text value', + p_key; + end if; + end if; + return v_value; +END;
- -
- - rebuildlistenentries( ) +<!-- Function registry_get_timestamp( text, timestamp without time zone ) --> + <section id="function.registry-get-timestamp-text-timestamp-without-time-zone" + xreflabel="schemadocregistry_get_timestamp( text, timestamp without time zone )"> + <title id="function.registry-get-timestamp-text-timestamp-without-time-zone-title"> + registry_get_timestamp( text, timestamp without time zone ) - - rebuildlistenentries( ) + + registry_get_timestamp( text, timestamp without time zone ) @@ -6283,44 +7342,46 @@ Return Type PLPGSQL - integer + timestamp without time zone - RebuildListenEntries(p_provider, p_receiver) + registry_get_timestamp(key, value) -Invoked by various subscription and path modifying functions, this -rewrites the sl_listen entries, adding in all the ones required to -allow communications between nodes in the Slony-I cluster. +Get a registry value. If not present, set and return the default. -declare - v_row record; -begin - -- First remove the entire configuration - delete from sl_listen; - - -- The loop over every possible pair of origin, receiver - for v_row in select N1.no_id as origin, N2.no_id as receiver - from sl_node N1, sl_node N2 - where N1.no_id <> N2.no_id - loop - perform RebuildListenEntriesOne(v_row.origin, v_row.receiver); - end loop; - - return 0; -end; +DECLARE + p_key alias for $1; + p_default alias for $2; + v_value timestamp; +BEGIN + select reg_timestamp into v_value from sl_registry + where reg_key = p_key; + if not found then + v_value = p_default; + if p_default notnull then + perform registry_set_timestamp(p_key, p_default); + end if; + else + if v_value is null then + raise exception 'Slony-I: registry key % is not an timestamp value', + p_key; + end if; + end if; + return v_value; +END;
- -
- - rebuildlistenentriesone( integer, integer ) +<!-- Function registry_set_int4( text, integer ) --> + <section id="function.registry-set-int4-text-integer" + xreflabel="schemadocregistry_set_int4( text, integer )"> + <title id="function.registry-set-int4-text-integer-title"> + registry_set_int4( text, integer ) - - rebuildlistenentriesone( integer, integer ) + + registry_set_int4( text, integer ) @@ -6335,85 +7396,127 @@ - RebuildListenEntriesOne(p_origin, p_receiver) + registry_set_int4(key, value) -Rebuilding of sl_listen entries for one origin, receiver pair. +Set or delete a registry value -declare - p_origin alias for $1; - p_receiver alias for $2; - v_row record; -begin - -- 1. If the receiver is subscribed to any set from the origin, - -- listen on the same provider(s). - for v_row in select distinct sub_provider - from sl_subscribe, sl_set, - sl_path - where sub_set = set_id - and set_origin = p_origin - and sub_receiver = p_receiver - and sub_provider = pa_server - and sub_receiver = pa_client - loop - perform storeListen_int(p_origin, - v_row.sub_provider, p_receiver); - end loop; - if found then - return 1; +DECLARE + p_key alias for $1; + p_value alias for $2; +BEGIN + if p_value is null then + delete from sl_registry + where reg_key = p_key; + else + lock table sl_registry; + update sl_registry + set reg_int4 = p_value + where reg_key = p_key; + if not found then + insert into sl_registry (reg_key, reg_int4) + values (p_key, p_value); end if; - - -- 2. If the receiver has a direct path to the provider, - -- use that. - if exists (select true - from sl_path - where pa_server = p_origin - and pa_client = p_receiver) - then - perform storeListen_int(p_origin, p_origin, p_receiver); - return 1; end if; + return p_value; +END; + + +
- -- 3. Listen on every node that is either provider for the - -- receiver or is using the receiver as provider (follow the - -- normal subscription routes). - for v_row in select distinct provider from ( - select sub_provider as provider - from sl_subscribe - where sub_receiver = p_receiver - union - select sub_receiver as provider - from sl_subscribe - where sub_provider = p_receiver - and exists (select true from sl_path - where pa_server = sub_receiver - and pa_client = sub_provider) - ) as S - loop - perform storeListen_int(p_origin, - v_row.provider, p_receiver); - end loop; - if found then - return 1; - end if; + +
+ + registry_set_text( text, text ) + + + registry_set_text( text, text ) + - -- 4. If all else fails - meaning there are no subscriptions to - -- guide us to the right path - use every node we have a path - -- to as provider. This normally only happens when the cluster - -- is built or a new node added. This brute force fallback - -- ensures that events will propagate if possible at all. - for v_row in select pa_server as provider - from sl_path - where pa_client = p_receiver - loop - perform storeListen_int(p_origin, - v_row.provider, p_receiver); - end loop; - if found then - return 1; + + + Function Properties + + Language + Return Type + + PLPGSQL + text + + + + registry_set_text(key, value) + +Set or delete a registry value + +DECLARE + p_key alias for $1; + p_value alias for $2; +BEGIN + if p_value is null then + delete from sl_registry + where reg_key = p_key; + else + lock table sl_registry; + update sl_registry + set reg_text = p_value + where reg_key = p_key; + if not found then + insert into sl_registry (reg_key, reg_text) + values (p_key, p_value); + end if; end if; + return p_value; +END; + + +
- return 0; -end; + +
+ + registry_set_timestamp( text, timestamp without time zone ) + + + registry_set_timestamp( text, timestamp without time zone ) + + + + + Function Properties + + Language + Return Type + + PLPGSQL + timestamp without time zone + + + + registry_set_timestamp(key, value) + +Set or delete a registry value + +DECLARE + p_key alias for $1; + p_value alias for $2; +BEGIN + if p_value is null then + delete from sl_registry + where reg_key = p_key; + else + lock table sl_registry; + update sl_registry + set reg_timestamp = p_value + where reg_key = p_key; + if not found then + insert into sl_registry (reg_key, reg_timestamp) + values (p_key, p_value); + end if; + end if; + return p_value; +END;
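The four registry functions added in this commit (get/set for int4, text, and timestamp) all follow one contract: a typed get that installs and returns the supplied default on first read, and a set that deletes the key when handed NULL. A minimal in-memory Python sketch of that contract — the dict stands in for the sl_registry table, and the names are illustrative, not part of Slony-I:

```python
# Illustrative sketch of the sl_registry get/set contract from the
# PL/pgSQL functions above.  The dict stands in for the sl_registry
# table; this is not Slony-I code.

_registry = {}

def registry_set(key, value):
    """Set a registry value; passing None deletes the key, mirroring
    the PL/pgSQL versions where a NULL value deletes the row."""
    if value is None:
        _registry.pop(key, None)
    else:
        _registry[key] = value
    return value

def registry_get(key, default=None):
    """Get a registry value.  If the key is absent, store the default
    (only when it is itself non-None) and return it."""
    if key not in _registry:
        if default is not None:
            registry_set(key, default)
        return default
    return _registry[key]
```

The first read installs the default, so a later read returns the stored value even when called with a different default — the same behaviour the PL/pgSQL bodies implement with `select ... if not found then ... perform registry_set_*`.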
@@ -6440,13 +7543,16 @@ + sequenceLastValue(p_seqname) +Utility function used in sl_seqlastvalue view to compactly get the +last value from the requested sequence. declare p_seqname alias for $1; v_seq_row record; begin - for v_seq_row in execute 'select last_value from ' || p_seqname + for v_seq_row in execute 'select last_value from ' || slon_quote_input(p_seqname) loop return v_seq_row.last_value; end loop; @@ -6501,7 +7607,7 @@ and SQ.seq_reloid = PGC.oid and PGC.relnamespace = PGN.oid; if not found then - raise exception 'Slony-I: sequence % not found', p_seq_id; + raise exception 'Slony-I: sequenceSetValue(): sequence % not found', p_seq_id; end if; -- ---- @@ -6570,7 +7676,7 @@ raise exception 'Slony-I: setAddSequence(): set % not found', p_set_id; end if; if v_set_origin != getLocalNodeId('_schemadoc') then - raise exception 'Slony-I: setAddSequence(): set % has remote origin', p_set_id; + raise exception 'Slony-I: setAddSequence(): set % has remote origin - submit to origin node', p_set_id; end if; if exists (select true from sl_subscribe @@ -6679,6 +7785,13 @@ p_fqname; end if; + select 1 into v_sync_row from sl_sequence where seq_id = p_seq_id; + if not found then + v_relkind := 'o'; -- all is OK + else + raise exception 'Slony-I: setAddSequence_int(): sequence ID % has already been assigned', p_seq_id; + end if; + -- ---- -- Add the sequence to sl_sequence -- ---- @@ -6827,6 +7940,8 @@ v_sub_provider int4; v_relkind char; v_tab_reloid oid; + v_pkcand_nn boolean; + v_prec record; begin -- ---- -- Grab the central configuration lock @@ -6865,11 +7980,11 @@ and slon_quote_input(p_fqname) = slon_quote_brute(PGN.nspname) || '.' 
|| slon_quote_brute(PGC.relname); if not found then - raise exception 'Slony-I: setAddTable(): table % not found', + raise exception 'Slony-I: setAddTable_int(): table % not found', p_fqname; end if; if v_relkind != 'r' then - raise exception 'Slony-I: setAddTable(): % is not a regular table', + raise exception 'Slony-I: setAddTable_int(): % is not a regular table', p_fqname; end if; @@ -6879,11 +7994,38 @@ and PGX.indexrelid = PGC.oid and PGC.relname = p_tab_idxname) then - raise exception 'Slony-I: setAddTable(): table % has no index %', + raise exception 'Slony-I: setAddTable_int(): table % has no index %', p_fqname, p_tab_idxname; end if; -- ---- + -- Verify that the columns in the PK (or candidate) are not NULLABLE + -- ---- + + v_pkcand_nn := 'f'; + for v_prec in select attname from "pg_catalog".pg_attribute where attrelid = + (select oid from "pg_catalog".pg_class where oid = v_tab_reloid) + and attname in (select attname from "pg_catalog".pg_attribute where + attrelid = (select oid from "pg_catalog".pg_class PGC, + "pg_catalog".pg_index PGX where + PGC.relname = p_tab_idxname and PGX.indexrelid=PGC.oid and + PGX.indrelid = v_tab_reloid)) and attnotnull <> 't' + loop + raise notice 'Slony-I: setAddTable_int: table % PK column % nullable', p_fqname, v_prec.attname; + v_pkcand_nn := 't'; + end loop; + if v_pkcand_nn then + raise exception 'Slony-I: setAddTable_int: table % not replicable!', p_fqname; + end if; + + select * into v_prec from sl_table where tab_id = p_tab_id; + if not found then + v_pkcand_nn := 't'; -- No-op -- All is well + else + raise exception 'Slony-I: setAddTable_int: table id % has already been assigned!', p_tab_id; + end if; + + -- ---- -- Add the table to sl_table and create the trigger on it. 
-- ---- insert into sl_table @@ -6961,7 +8103,7 @@ raise exception 'Slony-I: setDropSequence(): set % not found', v_set_id; end if; if v_set_origin != getLocalNodeId('_schemadoc') then - raise exception 'Slony-I: setDropSequence(): set % has remote origin', v_set_id; + raise exception 'Slony-I: setDropSequence(): set % has origin at another node - submit this to that node', v_set_id; end if; -- ---- @@ -7247,7 +8389,9 @@ - + setMoveSequence(p_seq_id, p_new_set_id) - This generates the +SET_MOVE_SEQUENCE event, after validation, notably that both sets +exist, are distinct, and have exactly the same subscription lists declare p_seq_id alias for $1; @@ -7266,22 +8410,22 @@ select seq_set into v_old_set_id from sl_sequence where seq_id = p_seq_id; if not found then - raise exception 'Slony-I: sequence %d not found', p_seq_id; + raise exception 'Slony-I: setMoveSequence(): sequence %d not found', p_seq_id; end if; -- ---- -- Check that both sets exist and originate here -- ---- if p_new_set_id = v_old_set_id then - raise exception 'Slony-I: set ids cannot be identical'; + raise exception 'Slony-I: setMoveSequence(): set ids cannot be identical'; end if; select set_origin into v_origin from sl_set where set_id = p_new_set_id; if not found then - raise exception 'Slony-I: set % not found', p_new_set_id; + raise exception 'Slony-I: setMoveSequence(): set % not found', p_new_set_id; end if; if v_origin != getLocalNodeId('_schemadoc') then - raise exception 'Slony-I: set % does not originate on local node', + raise exception 'Slony-I: setMoveSequence(): set % does not originate on local node', p_new_set_id; end if; @@ -7351,7 +8495,9 @@ - + setMoveSequence_int(p_seq_id, p_new_set_id) - processes the +SET_MOVE_SEQUENCE event, moving a sequence to another replication +set. declare p_seq_id alias for $1; @@ -7549,7 +8695,11 @@ - not yet documented + setSessionRole(username, role) - set role for session. 
+ +role can be "normal" or "slon"; setting the latter is necessary, on +subscriber nodes, in order to override the denyaccess() trigger +attached to subscribing tables. _Slony_I_setSessionRole @@ -7582,7 +8732,7 @@ p_tab_fqname alias for $1; v_fqname text default ''; begin - v_fqname := '"' || replace(p_tab_fqname,'"','\\"') || '"'; + v_fqname := '"' || replace(p_tab_fqname,'"','""') || '"'; return v_fqname; end; @@ -7615,48 +8765,68 @@ declare p_tab_fqname alias for $1; - v_temp_fqname text default ''; - v_pre_quoted text[] default '{}'; - v_pre_quote_counter smallint default 0; - v_count_fqname smallint default 0; - v_fqname_split text[]; - v_quoted_fqname text default ''; -begin - v_temp_fqname := p_tab_fqname; - - LOOP - v_pre_quote_counter := v_pre_quote_counter + 1; - v_pre_quoted[v_pre_quote_counter] := - substring(v_temp_fqname from '%#""%"#"%' for '#'); - IF v_pre_quoted[v_pre_quote_counter] <> '' THEN - v_temp_fqname := replace(v_temp_fqname, - v_pre_quoted[v_pre_quote_counter], '@' || - v_pre_quote_counter); - ELSE - EXIT; - END IF; - END LOOP; - - v_fqname_split := string_to_array(v_temp_fqname , '.'); - v_count_fqname := array_upper (v_fqname_split, 1); - - FOR i in 1..v_count_fqname LOOP - IF substring(v_fqname_split[i],1,1) = '@' THEN - v_quoted_fqname := v_quoted_fqname || - v_pre_quoted[substring (v_fqname_split[i] from 2)::int]; - ELSE - v_quoted_fqname := v_quoted_fqname || '"' || - v_fqname_split[i] || '"'; - END IF; - - IF i < v_count_fqname THEN - v_quoted_fqname := v_quoted_fqname || '.' ; - END IF; - END LOOP; + v_nsp_name text; + v_tab_name text; + v_i integer; + v_l integer; + v_pq2 integer; +begin + v_l := length(p_tab_fqname); - return v_quoted_fqname; -end; - + -- Let us search for the dot + if p_tab_fqname like '"%' then + -- if the first part of the ident starts with a double quote, search + -- for the closing double quote, skipping over double double quotes. 
+ v_i := 2; + while v_i <= v_l loop + if substr(p_tab_fqname, v_i, 1) != '"' then + v_i := v_i + 1; + else + v_i := v_i + 1; + if substr(p_tab_fqname, v_i, 1) != '"' then + exit; + end if; + v_i := v_i + 1; + end if; + end loop; + else + -- first part of ident is not quoted, search for the dot directly + v_i := 1; + while v_i <= v_l loop + if substr(p_tab_fqname, v_i, 1) = '.' then + exit; + end if; + v_i := v_i + 1; + end loop; + end if; + + -- v_i now points at the dot or behind the string. + + if substr(p_tab_fqname, v_i, 1) = '.' then + -- There is a dot now, so split the ident into its namespace + -- and objname parts and make sure each is quoted + v_nsp_name := substr(p_tab_fqname, 1, v_i - 1); + v_tab_name := substr(p_tab_fqname, v_i + 1); + if v_nsp_name not like '"%' then + v_nsp_name := '"' || replace(v_nsp_name, '"', '""') || + '"'; + end if; + if v_tab_name not like '"%' then + v_tab_name := '"' || replace(v_tab_name, '"', '""') || + '"'; + end if; + + return v_nsp_name || '.' || v_tab_name; + else + -- No dot ... must be just an ident without schema + if p_tab_fqname like '"%' then + return p_tab_fqname; + else + return '"' || replace(p_tab_fqname, '"', '""') || '"'; + end if; + end if; + +end; @@ -7749,7 +8919,7 @@ Returns the minor version number of the slony schema begin - return 1; + return 2; end; @@ -7819,8 +8989,6 @@ p_provider alias for $2; p_receiver alias for $3; begin - return -1; - perform storeListen_int (p_origin, p_provider, p_receiver); return createEvent ('_schemadoc', 'STORE_LISTEN', p_origin, p_provider, p_receiver); @@ -8250,6 +9418,9 @@ (p_set_id, p_set_origin, p_set_comment); end if; + -- Run addPartialLogIndices() to try to add indices to unused sl_log_? 
table + perform addPartialLogIndices(); + return p_set_id; end; @@ -8411,6 +9582,7 @@ p_sub_forward alias for $4; v_set_origin int4; v_ev_seqno int8; + v_rec record; begin -- ---- -- Grab the central configuration lock @@ -8418,7 +9590,7 @@ lock table sl_config_lock; -- ---- - -- Check that this is called on the receiver node + -- Check that this is called on the provider node -- ---- if p_sub_provider != getLocalNodeId('_schemadoc') then raise exception 'Slony-I: subscribeSet() must be called on provider'; @@ -8431,26 +9603,28 @@ from sl_set where set_id = p_sub_set; if not found then - raise exception 'Slony-I: set % not found', p_sub_set; + raise exception 'Slony-I: subscribeSet(): set % not found', p_sub_set; end if; if v_set_origin = p_sub_receiver then raise exception - 'Slony-I: set origin and receiver cannot be identical'; + 'Slony-I: subscribeSet(): set origin and receiver cannot be identical'; end if; if p_sub_receiver = p_sub_provider then raise exception - 'Slony-I: set provider and receiver cannot be identical'; + 'Slony-I: subscribeSet(): set provider and receiver cannot be identical'; end if; - -- --- - -- Check to see if the set contains any tables - gripe if not - bug #1226 + -- Verify that the provider is either the origin or an active subscriber + -- Bug report #1362 -- --- - if not exists (select true - from sl_table - where tab_set = p_sub_set) then - raise notice 'subscribeSet:: set % has no tables - risk of problems - see bug 1226', p_sub_set; - raise notice 'http://gborg.postgresql.org/project/slony1/bugs/bugupdate.php?1226'; + if v_set_origin <> p_sub_provider then + if not exists (select 1 from sl_subscribe + where sub_set = p_sub_set and + sub_receiver = p_sub_provider and + sub_forward and sub_active) then + raise exception 'Slony-I: subscribeSet(): provider % is not an active forwarding node for replication set %', p_sub_provider, p_sub_set; + end if; end if; -- ---- @@ -8466,11 +9640,6 @@ perform subscribeSet_int(p_sub_set, 
p_sub_provider, p_sub_receiver, p_sub_forward); - -- ---- - -- Submit listen management events - -- ---- - perform RebuildListenEntries(); - return v_ev_seqno; end; @@ -8526,7 +9695,7 @@ and sub_receiver = p_sub_receiver; if found then if not v_sub_row.sub_active then - raise exception 'Slony-I: set % is not active, cannot change provider', + raise exception 'Slony-I: subscribeSet_int(): set % is not active, cannot change provider', p_sub_set; end if; end if; @@ -8541,6 +9710,11 @@ where sub_set = p_sub_set and sub_receiver = p_sub_receiver; if found then + -- ---- + -- Rewrite sl_listen table + -- ---- + perform RebuildListenEntries(); + return p_sub_set; end if; @@ -8569,7 +9743,7 @@ from sl_set where set_id = p_sub_set; if not found then - raise exception 'Slony-I: set % not found', p_sub_set; + raise exception 'Slony-I: subscribeSet_int(): set % not found', p_sub_set; end if; if v_set_origin = getLocalNodeId('_schemadoc') then @@ -8580,7 +9754,9 @@ p_sub_provider, p_sub_receiver); end if; + -- ---- -- Rewrite sl_listen table + -- ---- perform RebuildListenEntries(); return p_sub_set; @@ -8653,7 +9829,7 @@ -- anything means the table does not exist. -- if not found then - raise exception 'Slony-I: table % not found', v_tab_fqname_quoted; + raise exception 'Slony-I: tableAddKey(): table % not found', v_tab_fqname_quoted; end if; -- @@ -8734,7 +9910,7 @@ and T.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid; if not found then - raise exception 'Slony-I: table with ID % not found', p_tab_id; + raise exception 'Slony-I: tableDropKey(): table with ID % not found', p_tab_id; end if; -- ---- @@ -8805,14 +9981,14 @@ - -
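The rewritten slon_quote_input shown earlier in this commit scans a possibly-quoted `schema.object` name for the separating dot — skipping doubled `""` inside a leading quoted identifier — and then quotes whichever half is still bare, doubling embedded quotes as slon_quote_brute now does (`"` becomes `""` rather than `\"`). A Python sketch of that scanning logic, for illustration only:

```python
def quote_ident(name):
    """Quote one identifier, doubling embedded double quotes
    (the new slon_quote_brute behaviour)."""
    return '"' + name.replace('"', '""') + '"'

def quote_fqname(fqname):
    """Split a possibly-quoted schema.object name at the separating dot
    and quote any bare half -- a sketch of the slon_quote_input logic
    above, not the shipped implementation."""
    n = len(fqname)
    if fqname.startswith('"'):
        # Quoted first part: find the closing quote, skipping over
        # doubled double quotes.
        i = 1
        while i < n:
            if fqname[i] == '"':
                i += 1
                if i >= n or fqname[i] != '"':
                    break  # closing quote found; i is at the dot (or past end)
        
            i += 1
    else:
        # Unquoted first part: search for the dot directly.
        i = fqname.find('.')
        if i == -1:
            i = n
    if i < n and fqname[i] == '.':
        nsp, tab = fqname[:i], fqname[i + 1:]
        if not nsp.startswith('"'):
            nsp = quote_ident(nsp)
        if not tab.startswith('"'):
            tab = quote_ident(tab)
        return nsp + '.' + tab
    # No dot: just an identifier without a schema.
    return fqname if fqname.startswith('"') else quote_ident(fqname)
```

So `public.foo` becomes `"public"."foo"`, while an already-quoted part such as `"My Schema".foo` keeps its quoting and only the bare half gains quotes.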
- - terminatenodeconnections( name ) +<!-- Function terminatenodeconnections( integer ) --> + <section id="function.terminatenodeconnections-integer" + xreflabel="schemadocterminatenodeconnections( integer )"> + <title id="function.terminatenodeconnections-integer-title"> + terminatenodeconnections( integer ) - - terminatenodeconnections( name ) + + terminatenodeconnections( integer ) @@ -8822,13 +9998,30 @@ Language Return Type - C + PLPGSQL integer - terminates connections to the node and terminates the process - _Slony_I_terminateNodeConnections + terminates all backends that have registered to be from the given node + +declare + p_failed_node alias for $1; + v_row record; +begin + for v_row in select nl_nodeid, nl_conncnt, + nl_backendpid from sl_nodelock + where nl_nodeid = p_failed_node for update + loop + perform killBackend(v_row.nl_backendpid, 'TERM'); + delete from sl_nodelock + where nl_nodeid = v_row.nl_nodeid + and nl_conncnt = v_row.nl_conncnt; + end loop; + + return 0; +end; +
@@ -9020,7 +10213,7 @@ where sub_set = p_sub_set and sub_provider = p_sub_receiver) then - raise exception 'Slony-I: Cannot unsubscibe set % while being provider', + raise exception 'Slony-I: Cannot unsubscribe set % while being provider', p_sub_set; end if; @@ -9319,7 +10512,7 @@ p_old alias for $1; begin -- upgrade sl_table - if p_old = '1.0.2' or p_old = '1.0.5' then + if p_old IN ('1.0.2', '1.0.5', '1.0.6') then -- Add new column(s) sl_table.tab_relname, sl_table.tab_nspname execute 'alter table sl_table add column tab_relname name'; execute 'alter table sl_table add column tab_nspname name'; @@ -9338,7 +10531,7 @@ end if; -- upgrade sl_sequence - if p_old = '1.0.2' or p_old = '1.0.5' then + if p_old IN ('1.0.2', '1.0.5', '1.0.6') then -- Add new column(s) sl_sequence.seq_relname, sl_sequence.seq_nspname execute 'alter table sl_sequence add column seq_relname name'; execute 'alter table sl_sequence add column seq_nspname name'; @@ -9358,11 +10551,60 @@ -- ---- -- Changes from 1.0.x to 1.1.0 -- ---- - if p_old = '1.0.2' or p_old = '1.0.5' then + if p_old IN ('1.0.2', '1.0.5', '1.0.6') then -- Add new column sl_node.no_spool for virtual spool nodes execute 'alter table sl_node add column no_spool boolean'; update sl_node set no_spool = false; end if; + + -- ---- + -- Changes for 1.1.3 + -- ---- + if p_old IN ('1.0.2', '1.0.5', '1.0.6', '1.1.0', '1.1.1', '1.1.2') then + -- Add new table sl_nodelock + execute 'create table sl_nodelock ( + nl_nodeid int4, + nl_conncnt serial, + nl_backendpid int4, + + CONSTRAINT "sl_nodelock-pkey" + PRIMARY KEY (nl_nodeid, nl_conncnt) + )'; + -- Drop obsolete functions + execute 'drop function terminateNodeConnections(name)'; + execute 'drop function cleanupListener()'; + execute 'drop function truncateTable(text)'; + end if; + + -- ---- + -- Changes for 1.2 + -- ---- + if p_old IN ('1.0.2', '1.0.5', '1.0.6', '1.1.0', '1.1.1', '1.1.2', '1.1.3') then + -- Add new table sl_registry + execute 'create table sl_registry ( + reg_key text 
primary key, + reg_int4 int4, + reg_text text, + reg_timestamp timestamp + ) without oids'; + execute 'alter table sl_config_lock set without oids;'; + execute 'alter table sl_confirm set without oids;'; + execute 'alter table sl_event set without oids;'; + execute 'alter table sl_listen set without oids;'; + execute 'alter table sl_log_1 set without oids;'; + execute 'alter table sl_log_2 set without oids;'; + execute 'alter table sl_node set without oids;'; + execute 'alter table sl_nodelock set without oids;'; + execute 'alter table sl_path set without oids;'; + execute 'alter table sl_seqlog set without oids;'; + execute 'alter table sl_sequence set without oids;'; + execute 'alter table sl_set set without oids;'; + execute 'alter table sl_setsync set without oids;'; + execute 'alter table sl_subscribe set without oids;'; + execute 'alter table sl_table set without oids;'; + execute 'alter table sl_trigger set without oids;'; + end if; + return p_old; end; From cvsuser Mon Jul 31 12:04:33 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:37 2007 Subject: [Slony1-commit] By darcyb: Fix spelling mistakes, and Add a note about Message-ID: <20060731190433.3B3A711BF03A@gborg.postgresql.org> Log Message: ----------- Fix spelling mistakes, and Add a note about standards_conforming_strings being set to off for 8.2 Modified Files: -------------- slony1-engine: RELEASE-1.2.0 (r1.7 -> r1.8) -------------- next part -------------- Index: RELEASE-1.2.0 =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/RELEASE-1.2.0,v retrieving revision 1.7 retrieving revision 1.8 diff -LRELEASE-1.2.0 -LRELEASE-1.2.0 -u -w -r1.7 -r1.8 --- RELEASE-1.2.0 +++ RELEASE-1.2.0 @@ -58,7 +58,7 @@ indexes on sl_log_1 and sl_log_2 are created on a per-origin-node basis. 
This provides the performance boost of having an easily recognizable index, but without the risk of having XIDs from - different nodes mixed together in one index, where rollover could + different nodes mixed together in one index, where roll-over could Cause Problems... These features are generally configurable, but the defaults ought to @@ -83,7 +83,7 @@ versions. This may revise how you want to start it up... In the past, slon processes tended to fall over easily, mandating having some form of "watchdog." The new behaviour points more towards - "init" / "rc.d"-like handling, where, upon bootup, one "rc.d" script + "init" / "rc.d"-like handling, where, upon boot-up, one "rc.d" script might start up PostgreSQL, another one starts pgpool, and a third (which must run third!) starts up a slon. @@ -136,6 +136,10 @@ there is just one node. That then allows the cleanup thread to clear sl_log_1 etc. -- Bug 1566 - Force all replication to occure in the ISO datestyle. +- Bug 1566 - Force all replication to occur in the ISO datestyle. This ensures that we can apply date/timestamps regardless of the datestyle they were entered in. + +- Force all replication to occur with standards_conforming_strings set to off. + This ensures we can replicate a node running on 8.2 without extra escape chars + showing up in the data. 
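The release note about forcing the ISO datestyle (Bug 1566) addresses a real ambiguity: a string like 02/03/2006 names different dates under the MDY and DMY conventions, while an ISO 8601 timestamp parses only one way, so replicated date/timestamps apply identically regardless of how they were entered. A small plain-Python illustration of the ambiguity (not Slony-I code):

```python
from datetime import datetime

raw = "02/03/2006"
mdy = datetime.strptime(raw, "%m/%d/%Y")  # February 3 under MDY
dmy = datetime.strptime(raw, "%d/%m/%Y")  # March 2 under DMY
assert mdy != dmy  # same text, two different dates

# An ISO 8601 timestamp has exactly one reading:
iso = datetime.strptime("2006-02-03 12:00:00", "%Y-%m-%d %H:%M:%S")
```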
From cvsuser Mon Jul 31 12:42:10 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:37 2007 Subject: [Slony1-commit] By cbbrowne: Remap DDL script tagging to use common ENTITY Message-ID: <20060731194210.1B32311BF03A@gborg.postgresql.org> Log Message: ----------- Remap DDL script tagging to use common ENTITY Modified Files: -------------- slony1-engine/doc/adminguide: ddlchanges.sgml (r1.26 -> r1.27) dropthings.sgml (r1.14 -> r1.15) faq.sgml (r1.61 -> r1.62) slony.sgml (r1.33 -> r1.34) -------------- next part -------------- Index: ddlchanges.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v retrieving revision 1.26 retrieving revision 1.27 diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.26 -r1.27 --- doc/adminguide/ddlchanges.sgml +++ doc/adminguide/ddlchanges.sgml @@ -14,8 +14,7 @@ built. If you pass the changes through &slony1; via (slonik) / (stored function), +linkend="stmtddlscript"> (slonik) / &funddlscript; (stored function), this allows you to be certain that the changes take effect at the same point in the transaction streams on all of the nodes. That may not be so important if you can take something of an outage to do schema Index: faq.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v retrieving revision 1.61 retrieving revision 1.62 diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.61 -r1.62 --- doc/adminguide/faq.sgml +++ doc/adminguide/faq.sgml @@ -1439,7 +1439,7 @@ Those two queries could be submitted to all of the nodes via - / , thus eliminating the sequence everywhere at once. Or they may be applied by hand to each of the nodes. 
Index: dropthings.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/dropthings.sgml,v retrieving revision 1.14 retrieving revision 1.15 diff -Ldoc/adminguide/dropthings.sgml -Ldoc/adminguide/dropthings.sgml -u -w -r1.14 -r1.15 --- doc/adminguide/dropthings.sgml +++ doc/adminguide/dropthings.sgml @@ -138,7 +138,7 @@ Those two queries could be submitted to all of the nodes via - / , thus eliminating the sequence everywhere at once. Or they may be applied by hand to each of the nodes. Index: slony.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v retrieving revision 1.33 retrieving revision 1.34 diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.33 -r1.34 --- doc/adminguide/slony.sgml +++ doc/adminguide/slony.sgml @@ -26,7 +26,7 @@ "> "> "> -"> +"> "> "> "> From cvsuser Mon Jul 31 13:43:05 2006 From: cvsuser (CVS User Account) Date: Tue Feb 13 08:58:37 2007 Subject: [Slony1-commit] By cbbrowne: Update to release notes (doc only) Message-ID: <20060731204305.4D36E11BF03A@gborg.postgresql.org> Log Message: ----------- Update to release notes (doc only) Modified Files: -------------- slony1-engine: RELEASE-1.2.0 (r1.8 -> r1.9) -------------- next part -------------- Index: RELEASE-1.2.0 =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/RELEASE-1.2.0,v retrieving revision 1.8 retrieving revision 1.9 diff -LRELEASE-1.2.0 -LRELEASE-1.2.0 -u -w -r1.8 -r1.9 --- RELEASE-1.2.0 +++ RELEASE-1.2.0 @@ -100,6 +100,15 @@ (The slon will automatically try again; the irritation is that you may have been depending on that getting done by Monday morning...) +- As has been the case for fairly much each release that has been + made, the documentation has been significantly extended. 
The "admin + guide" has been augmented and cleaned up. + + Notable additions include a listing of "Best Practices" (due in + great part to discoveries by the oft-unsung heroes of Afilias' Data + Services department) and a fairly comprehensive listing of log + messages you may expect to see in your Slony-I logs. + - A lot of fixes to the build environment (this needs to be tested on lots of platforms)