Tue Feb 28 14:51:39 PST 2006
- Previous message: [Slony1-commit] By cbbrowne: Fix tagging of <command>
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Log Message:
-----------
1. change pg_listener references into a custom entity
2. notes for v1.2, where pg_listener and sl_log_n tables get 'abused' less...
Modified Files:
--------------
slony1-engine/doc/adminguide:
bestpractices.sgml (r1.14 -> r1.15)
faq.sgml (r1.51 -> r1.52)
maintenance.sgml (r1.15 -> r1.16)
man.sgml (r1.5 -> r1.6)
slon.sgml (r1.24 -> r1.25)
slony.sgml (r1.27 -> r1.28)
-------------- next part --------------
Index: slon.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slon.sgml,v
retrieving revision 1.24
retrieving revision 1.25
diff -Ldoc/adminguide/slon.sgml -Ldoc/adminguide/slon.sgml -u -w -r1.24 -r1.25
--- doc/adminguide/slon.sgml
+++ doc/adminguide/slon.sgml
@@ -241,7 +241,7 @@
itself. If you are not, there are some tables
<productname>Slony-I</productname> uses that collect a
<emphasis>lot</emphasis> of dead tuples that should be vacuumed
- frequently, notably <envar>pg_listener</envar>.
+ frequently, notably &pglistener;.
</para>
<para> In &slony1; version 1.1, this changes a little; the
Index: man.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/man.sgml,v
retrieving revision 1.5
retrieving revision 1.6
diff -Ldoc/adminguide/man.sgml -Ldoc/adminguide/man.sgml -u -w -r1.5 -r1.6
--- doc/adminguide/man.sgml
+++ doc/adminguide/man.sgml
@@ -44,6 +44,7 @@
<!ENTITY slnode "<envar>sl_node</envar>">
<!ENTITY slconfirm "<envar>sl_confirm</envar>">
<!ENTITY bestpracticelink "Best Practice">
+ <!ENTITY pglistener "<envar>pg_listener</envar>">
]>
<book id="slony">
Index: maintenance.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v
retrieving revision 1.15
retrieving revision 1.16
diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.15 -r1.16
--- doc/adminguide/maintenance.sgml
+++ doc/adminguide/maintenance.sgml
@@ -13,7 +13,7 @@
linkend="table.sl-seqlog">.</para></listitem>
<listitem><para> Vacuum certain tables used by &slony1;. As of 1.0.5,
-this includes pg_listener; in earlier versions, you must vacuum that
+this includes &pglistener;; in earlier versions, you must vacuum that
table heavily, otherwise you'll find replication slowing down because
&slony1; raises plenty of events, which leads to that table having
plenty of dead tuples.</para>
@@ -24,13 +24,13 @@
vacuuming of these tables. Unfortunately, it has been quite possible
for <application>pg_autovacuum</application> to not vacuum quite
frequently enough, so you probably want to use the internal vacuums.
-Vacuuming <envar>pg_listener</envar> <quote>too often</quote> isn't
-nearly as hazardous as not vacuuming it frequently enough.</para>
+Vacuuming &pglistener; <quote>too often</quote> isn't nearly as
+hazardous as not vacuuming it frequently enough.</para>
<para>Unfortunately, if you have long-running transactions, vacuums
cannot clear out dead tuples that are newer than the eldest
transaction that is still running. This will most notably lead to
-<envar>pg_listener</envar> growing large and will slow
+&pglistener; growing large and will slow
replication.</para></listitem>
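The maintenance.sgml hunks above tell admins of pre-1.0.5 installs to vacuum pg_listener by hand, frequently. As a rough illustration of what that maintenance might look like (the table name comes from the patch; the scheduling and the use of VACUUM FULL for an already-bloated table are our assumptions, not part of this commit):

```sql
-- Hypothetical sketch for pre-1.0.5 installs, where slon's cleanup
-- thread does not yet vacuum pg_listener itself; run every few minutes.
-- Plain VACUUM reclaims the dead tuples left behind by NOTIFY traffic:
VACUUM pg_catalog.pg_listener;

-- If the table has already bloated badly, VACUUM FULL compacts it,
-- but takes an exclusive lock, so schedule it for a quiet window:
VACUUM FULL pg_catalog.pg_listener;
```

As the patched text notes, vacuuming this table "too often" is far less hazardous than not vacuuming it often enough.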
Index: bestpractices.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/bestpractices.sgml,v
retrieving revision 1.14
retrieving revision 1.15
diff -Ldoc/adminguide/bestpractices.sgml -Ldoc/adminguide/bestpractices.sgml -u -w -r1.14 -r1.15
--- doc/adminguide/bestpractices.sgml
+++ doc/adminguide/bestpractices.sgml
@@ -64,11 +64,28 @@
<para> Principle: Long running transactions are Evil </para>
<para> The FAQ has an entry on <link linkend="pglistenerfull"> growth
-of <envar>pg_listener</envar> </link> which discusses this in a fair
-bit of detail; the long and short is that long running transactions
-have numerous ill effects. They are particularly troublesome on an
+of &pglistener; </link> which discusses this in a fair bit of detail;
+the long and short is that long running transactions have numerous ill
+effects. They are particularly troublesome on an
<quote>origin</quote> node, holding onto locks, preventing vacuums
from taking effect, and the like.</para>
+
+<para> In version 1.2, some of the <quote>evils</quote> should be
+lessened, because:</para>
+
+<itemizedlist>
+
+<listitem><para> Events in &pglistener; are only generated when
+replication updates are relatively infrequent, which should mean that
+busy systems won't generate many dead tuples in that table
+</para></listitem>
+
+<listitem><para> The system will periodically rotate (using
+<command>TRUNCATE</command> to clean out the old table) between the
+two log tables, <xref linkend="table.sl-log-1"> and <xref
+linkend="table.sl-log-2">. </para></listitem>
+</itemizedlist>
+
</listitem>
<listitem>
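The bestpractices.sgml addition above describes the v1.2 log-switching scheme: rotating between sl_log_1 and sl_log_2 and emptying the retired table with TRUNCATE. A loose sketch of the idea follows; the schema name `_mycluster` and the hand-run SQL are purely illustrative (the real switchover is driven internally by slon, not by an administrator):

```sql
-- Illustrative only: assumes a cluster named "mycluster", whose
-- namespace is "_mycluster". New replication changes are directed to
-- the other log table (here, sl_log_2) first; then, once every event
-- recorded in sl_log_1 has been confirmed by all subscribers, the
-- retired table can be emptied cheaply:
TRUNCATE TABLE _mycluster.sl_log_1;
-- TRUNCATE reclaims space immediately, avoiding the dead-tuple
-- buildup that DELETE plus VACUUM would leave behind.
```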
Index: faq.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v
retrieving revision 1.51
retrieving revision 1.52
diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.51 -r1.52
--- doc/adminguide/faq.sgml
+++ doc/adminguide/faq.sgml
@@ -121,7 +121,7 @@
crash</para>
<para> After an immediate stop of &postgres; (simulation of system
-crash) in <envar>pg_catalog.pg_listener</envar> a tuple with <command>
+crash) in &pglistener; a tuple with <command>
relname='_${cluster_name}_Restart'</command> exists. slon doesn't
start because it thinks another process is serving the cluster on this
node. What can I do? The tuples can't be dropped from this
@@ -130,13 +130,13 @@
<para> The logs claim that <blockquote><para>Another slon daemon is
serving this node already</para></blockquote></para></question>
-<answer><para> The problem is that the system table
-<envar>pg_catalog.pg_listener</envar>, used by &postgres; to manage
-event notifications, contains some entries that are pointing to
-backends that no longer exist. The new <xref linkend="slon"> instance
-connects to the database, and is convinced, by the presence of these
-entries, that an old <application>slon</application> is still
-servicing this &slony1; node.</para>
+<answer><para> The problem is that the system table &pglistener;, used
+by &postgres; to manage event notifications, contains some entries
+that are pointing to backends that no longer exist. The new <xref
+linkend="slon"> instance connects to the database, and is convinced,
+by the presence of these entries, that an old
+<application>slon</application> is still servicing this &slony1;
+node.</para>
<para> The <quote>trash</quote> in that table needs to be thrown
away.</para>
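For the crash scenario in the hunk above, one way the <quote>trash</quote> has traditionally been thrown away on older combinations (Slony-I before 1.1.2 on PostgreSQL before 8.1, where pg_listener still carries the interlock) is a superuser DELETE. This is a hedged sketch, not the FAQ's prescribed procedure; `_mycluster` is a placeholder for the actual cluster name:

```sql
-- Run as superuser on the affected node, with slon stopped.
-- '_mycluster' stands in for your cluster name; adjust before use.
-- Remove the stale restart interlock left by the crashed backend:
DELETE FROM pg_catalog.pg_listener
 WHERE relname = '_mycluster_Restart';
```

As the following hunk notes, newer slon versions detect this condition and clean it up automatically, so manual surgery should rarely be needed.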
@@ -164,8 +164,8 @@
condition, and automatically cleans it up.</para>
<para> As of version 8.1 of &postgres;, the functions that manipulate
-<envar>pg_listener</envar> do not support this usage, so for &slony1;
-versions after 1.1.2 (<emphasis>e.g. - </emphasis> 1.1.5), this
+&pglistener; do not support this usage, so for &slony1; versions after
+1.1.2 (<emphasis>e.g. - </emphasis> 1.1.5), this
<quote>interlock</quote> behaviour is handled via a new table, and the
issue should be transparently <quote>gone.</quote> </para>
@@ -427,21 +427,21 @@
fetch 100 from LOG;
</screen></para></question>
-<answer><para> This can be characteristic of pg_listener (which is the
-table containing <command>NOTIFY</command> data) having plenty of dead
-tuples in it. That makes <command>NOTIFY</command> events take a long
-time, and causes the affected node to gradually fall further and
+<answer><para> This can be characteristic of &pglistener; (which is
+the table containing <command>NOTIFY</command> data) having plenty of
+dead tuples in it. That makes <command>NOTIFY</command> events take a
+long time, and causes the affected node to gradually fall further and
further behind.</para>
<para>You quite likely need to do a <command>VACUUM FULL</command> on
-<envar>pg_listener</envar>, to vigorously clean it out, and need to
-vacuum <envar>pg_listener</envar> really frequently. Once every five
-minutes would likely be AOK.</para>
+&pglistener;, to vigorously clean it out, and need to vacuum
+&pglistener; really frequently. Once every five minutes would likely
+be AOK.</para>
<para> Slon daemons already vacuum a bunch of tables, and
<filename>cleanup_thread.c</filename> contains a list of tables that
are frequently vacuumed automatically. In &slony1; 1.0.2,
-<envar>pg_listener</envar> is not included. In 1.0.5 and later, it is
+&pglistener; is not included. In 1.0.5 and later, it is
regularly vacuumed, so this should cease to be a direct issue.</para>
<para>There is, however, still a scenario where this will still
@@ -503,8 +503,10 @@
schema dumped using <option>--schema=whatever</option>, and don't try
dumping the cluster's schema.</para></listitem>
-<listitem><para> It would be nice to add an <option>--exclude-schema</option>
-option to pg_dump to exclude the Slony cluster schema. Maybe in 8.1...</para></listitem>
+<listitem><para> It would be nice to add an
+<option>--exclude-schema</option> option to
+<application>pg_dump</application> to exclude the &slony1; cluster
+schema. Maybe in 8.2...</para></listitem>
<listitem><para>Note that 1.0.5 uses a more precise lock that is less
exclusive that alleviates this problem.</para></listitem>
@@ -973,8 +975,8 @@
<answer> <para> There are actually a number of possible causes for
this sort of thing. There is a question involving similar pathology
-where the problem is that <link linkend="pglistenerfull"> <envar>
-pg_listener </envar> grows because it is not vacuumed. </link>
+where the problem is that <link linkend="pglistenerfull">
+&pglistener; grows because it is not vacuumed. </link>
</para>
<para> Another <quote> proximate cause </quote> for this growth is for
@@ -986,9 +988,9 @@
<itemizedlist>
-<listitem><para> Vacuums on all tables, including <envar> pg_listener
-</envar>, will not clear out dead tuples from before the start of the
-idle transaction. </para> </listitem>
+<listitem><para> Vacuums on all tables, including &pglistener;, will
+not clear out dead tuples from before the start of the idle
+transaction. </para> </listitem>
<listitem><para> The cleanup thread will be unable to clean out
entries in <xref linkend="table.sl-log-1"> and <xref
Index: slony.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v
retrieving revision 1.27
retrieving revision 1.28
diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.27 -r1.28
--- doc/adminguide/slony.sgml
+++ doc/adminguide/slony.sgml
@@ -45,6 +45,7 @@
<!ENTITY slconfirm "<xref linkend=table.sl-confirm>">
<!ENTITY rplainpaths "<xref linkend=plainpaths>">
<!ENTITY rlistenpaths "<xref linkend=listenpaths>">
+ <!ENTITY pglistener "<envar>pg_listener</envar>">
]>