CVS User Account cvsuser
Wed Aug 25 19:06:15 PDT 2004
Log Message:
-----------
Added a new file, "randomfacts.txt", to contain facts for which we
don't yet have a better place to document them.

Also modified existing documentation to document some
previously-undocumented features.

Modified Files:
--------------
    slony1-engine/doc/howto:
        helpitsbroken.txt (r1.2 -> r1.3)
        slonik_commands.html (r1.4 -> r1.5)

Added Files:
-----------
    slony1-engine/doc/howto:
        randomfacts.txt (r1.1)

Index: helpitsbroken.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/howto/helpitsbroken.txt,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/howto/helpitsbroken.txt -Ldoc/howto/helpitsbroken.txt -u -w -r1.2 -r1.3
--- doc/howto/helpitsbroken.txt
+++ doc/howto/helpitsbroken.txt
@@ -4,8 +4,8 @@
 You're having trouble getting it to work, and are scratching your head
 as to what might be wrong.
 
-Here are some things other people have stumbled over that might help
-you to "stumble more quickly."
+Here are some idiosyncrasies that other people have stumbled over that
+might help you to "stumble more quickly."
 
 1.  I looked for the _clustername namespace, and it wasn't there.
 
@@ -33,6 +33,18 @@
 use different memory locations for errno, thereby leading to the
 request failing.
 
+Problems like this crop up with distressing regularity on AIX and
+Solaris; it may take something of an "object code audit" to make sure
+that ALL of the necessary components have been compiled and linked
+with --enable-thread-safety.
+
+For instance, I once ran into the problem that LD_LIBRARY_PATH had been
+set, on Solaris, to point to libraries from an old PostgreSQL compile.
+That meant that even though the database had been compiled with
+--enable-thread-safety, and slon had been compiled against that, slon
+was being dynamically linked to the "bad old thread-unsafe version,"
+so slon didn't work.  It wasn't clear until I ran "ldd" against slon.
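+
+A quick audit might look like this (the paths here are examples;
+substitute those from your own installation):
+
+# Which libpq is slon actually dynamically linked against?
+ldd /usr/local/pgsql/bin/slon | grep libpq
+# Where did the --enable-thread-safety build install its libraries?
+pg_config --libdir
+# Is anything overriding the runtime search path?
+echo $LD_LIBRARY_PATH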
+
 3.  I tried creating a CLUSTER NAME with a "-" in it.  That didn't work.
 
 Slony-I uses the same rules for unquoted identifiers as the PostgreSQL
@@ -42,17 +54,16 @@
 You may be able to defeat this by putting "quotes" around identifier
 names, but it's liable to bite you somewhere...
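+
+For example (the names here are hypothetical):
+
+cluster name = my_cluster;    # fine as an unquoted identifier
+cluster name = my-cluster;    # rejected; "-" is not legal in an
+                              # unquoted identifier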
 
-
 4.  After an immediate stop of postgresql (simulation of system crash)
 in pg_catalog.pg_listener a tuple with
 relname='_${cluster_name}_Restart' exists. slon doesn't start cause it
 thinks another process is serving the cluster on this node.  What can
-I do? The tuples can't be droped from this relation.
+I do? The tuples can't be dropped from this relation.
 
 Answer:  
 
 Before starting slon, do a 'restart node'. PostgreSQL tries to notify
-the listeners and drop those are not anwsering. Slon then starts
+the listeners and drop those that are not answering. Slon then starts
 cleanly.
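+
+For example (the cluster name and conninfo here are placeholders for
+your own):
+
+slonik <<_EOF_
+cluster name = testcluster;
+node 1 admin conninfo = 'dbname=sampledb host=localhost user=postgres';
+restart node 1;
+_EOF_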
 
 5.  If I run a "ps" command, I, and everyone else, can see passwords
@@ -69,8 +80,8 @@
 
 Evidently, you haven't got the xxid.so library in the $libdir
 directory that the PostgreSQL instance is using.  Note that the Slony
-components need to be installed on ALL of the nodes, not just on the
-"master."
+components need to be installed on EACH ONE of the nodes, not just on
+the "master."
 
 This may also point to there being some other mismatch between the
 PostgreSQL binary instance and the Slony-I instance.  If you compiled
@@ -82,7 +93,7 @@
 Long and short: This points to a need to "audit" what installations of
 PostgreSQL and Slony you have in place on the machine(s).
 Unfortunately, just about any mismatch will cause things not to link
-up quite right.
+up quite right.  Look back at #2...
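+
+A quick check (this assumes the pg_config on your PATH belongs to the
+PostgreSQL instance in question):
+
+# Where does this PostgreSQL instance look for loadable modules?
+pg_config --pkglibdir
+# Is xxid.so actually installed there?
+ls -l `pg_config --pkglibdir`/xxid.so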
 
 7.  An oddity - no need for Fully Qualified Name for table keys...
 
@@ -117,7 +128,6 @@
 transaction blocking Slony-I from processing the sync.  You might want
 to take a look at pg_locks to see what's up:
 
-
 sampledb=# select * from pg_locks where transaction is not null order by transaction;
  relation | database | transaction |   pid   |     mode      | granted 
 ----------+----------+-------------+---------+---------------+---------
@@ -131,4 +141,6 @@
 postgres 2605100  205018   0 18:53:43  pts/3  3:13 postgres: postgres sampledb localhost COPY 
 
 This happens to be a COPY transaction involved in setting up the
-subscription for one of the nodes.
+subscription for one of the nodes.  All is well; the system is busy
+setting up the first subscriber; it won't start on the second one
+until the first one has completed subscribing.
--- /dev/null
+++ doc/howto/randomfacts.txt
@@ -0,0 +1,49 @@
+Random Things That Ought To Be Documented Somewhere
+
+1.  Yes, you CAN "kill -9" slon processes
+
+2.  You can subscribe a node without having started the "slon" process
+for that node.
+
+Nothing will start replicating until the "slon" starts up.
+
+3.  No, you don't really need a "node 1".  
+
+In many places, slonik defaults kind of assume that there is one, but
+it doesn't HAVE to be there.
+
+4.  A little more about primary keys.
+
+Slony-I NEEDS to have a primary key candidate to work with in order to
+uniquely specify tuples that are to be replicated.  This can work out
+three ways:
+
+ - If the table has a primary key defined, then you can do a SET ADD
+   TABLE on the table, and it'll "just replicate."
+
+ - If the table has NO "unique, not NULL" key, you need to add one.
+
+   There's a slonik command to do that: TABLE ADD KEY (see the
+   sketch after this list).
+
+ - The _third_ case is where the table has one or more _candidate_
+   primary keys, none of which are formally defined to be THE primary
+   key.
+
+   In that case, you must pick one of them, and specify it in the SET
+   ADD TABLE statement.
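+
+A sketch of the last two cases (the cluster, table, and index names
+here are made up for illustration):
+
+slonik <<_EOF_
+cluster name = testcluster;
+node 1 admin conninfo = 'dbname=sampledb host=localhost';
+# No candidate key at all: have Slony-I add a key for the table
+table add key (node id = 1, fully qualified name = 'public.history');
+# Several candidate keys: name the index to use via "key"
+set add table (set id = 1, origin = 1, id = 2,
+               fully qualified name = 'public.accounts',
+               key = 'accounts_accountid_key',
+               comment = 'accounts table');
+_EOF_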
+
+5.  I want to update data on any of my servers, and have it propagate
+  around.
+
+  That case is specifically NOT addressed by Slony-I.  Slony-I _locks_
+  all the replicated tables on the subscribers; updates are only
+  permitted on the "master" node.
+
+  There are plans for a later Slony-II project to address distributed
+  updates; part of the point of Slony-I is to provide the
+  "bootstrapping" system needed to get multiple databases "in sync,"
+  which is a prerequisite for being able to do distributed updates.
+
+  That still means that distributed updates (that is, doing updates
+  anywhere other than the One Single Master Server Node) are NOT part
+  of the Slony-I design.
Index: slonik_commands.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/howto/slonik_commands.html,v
retrieving revision 1.4
retrieving revision 1.5
diff -Ldoc/howto/slonik_commands.html -Ldoc/howto/slonik_commands.html -u -w -r1.4 -r1.5
--- doc/howto/slonik_commands.html
+++ doc/howto/slonik_commands.html
@@ -169,13 +169,18 @@
 	procedures, tables and sequences are defined. The namespace
 	name is built by prefixing the given string literal with an
 	underscore. This namespace will be identical in all databases
-	that participate in the same replication group. No user
-	objects are supposed to live in this namespace and the
-	namespace is not allowed to exist prior to adding a database
-	to the replication system.  Thus, if you add a new node using
-	<tt> pg_dump -s </tt> on a database that is already in the
-	cluster, you will need to drop the namespace via the SQL
-	command <tt> DROP SCHEMA _testcluster CASCADE; </tt>.
+	that participate in the same replication group. 
+</p>
+
+<p>
+	No user objects are supposed to live in this namespace and the
+	namespace is not allowed to exist prior to adding a database to
+	the replication system.  Thus, if you add a new node using
+	<tt> pg_dump -s </tt> on a database that is already in the
+	cluster of replicated databases, you will need to drop the
+	namespace via the SQL command <tt> DROP SCHEMA _testcluster
+	CASCADE; </tt>.
 </p>
 <h3>Example:</h3>
 <p>
@@ -193,13 +198,13 @@
 	NODE &lt;ival&gt; ADMIN CONNINFO = &lt;string&gt;;
 <h3>Description:</h3>
 <p>
-	Describes how the slonik utility can reach a nodes database in the cluster
-	from where it is run (usually the DBA's workstation). The conninfo
-	string is the string agrument given to the PQconnectdb() libpq
-	function. The user as to connect must be the special replication
-	superuser, as some of the actions performed later may include
-	operations that are strictly reserved for database superusers by
-	PostgreSQL.
+	Describes how the slonik utility can reach a node's database
+	in the cluster from where it is run (likely the DBA's
+	workstation). The conninfo string is the string argument
+	given to the PQconnectdb() libpq function. The user to
+	connect as must be the special replication superuser, as some
+	of the actions performed later may include operations that
+	are strictly reserved for database superusers by PostgreSQL.
 </p>
 <p>
 	The slonik utility will not try to connect to the databases
@@ -211,7 +216,8 @@
 	throughout the entire development that the database servers and
 	administrative workstations involved in replication and/or setup
 	and configuration activities can use simple authentication schemes
-	like trust. 
+	like <tt>trust</tt>.  Alternatively, libpq can read passwords
+	from <tt> .pgpass </tt>.
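+	(Each line of <tt> .pgpass </tt> takes the form
+	<tt> hostname:port:database:username:password </tt>.)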
 </p>
 
 <p>
@@ -219,9 +225,8 @@
 	would happen if the IP address for a host were to change, you
 	may submit the new information using this command, and that
 	configuration will be propagated.  Existing <tt> slon </tt>
-	processes <i>may </i> need to be restarted in order for them
-	to become aware of the configuration change.
-
+	processes will need to be restarted in order to become aware
+	of the configuration change.
 </p>
 <h3>Example:</h3>
 <p>
@@ -1134,6 +1139,18 @@
 	on the subscriber using triggers against accidental updates by
 	the application.
 </p>
+
+<p>
+	Note: If you need to revise subscription information for a
+	node, you may submit the new information using this command,
+	and the new configuration will be propagated throughout the
+	replication network.  The normal reason to revise this
+	information is that you want a node to subscribe to a <i>
+	different </i> provider node, or for a node to become a
+	"forwarding" subscriber so it may later become the provider
+	for a later subscriber.
+
+</p>
 <table border="0" cellpadding="10">
 <tr>
 	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>

