Christopher Browne cbbrowne
Wed Nov 24 22:58:01 PST 2004
We ran into the "obsolete sl_confirm" entries problem again today, as
some entries lingering from a week or so ago led to pretty stellar
expansion of sl_log_1 and sl_seqlog...  (The cleanup thread won't trim
log data past the oldest confirmation recorded in sl_confirm, so stale
entries for long-gone nodes keep old log and sequence data around
indefinitely.)

I beefed up FAQ #17 as a result...
<http://cbbrowne.dyndns.info:8741/cgi-bin/twiki/view/Sandbox/SlonyFAQ17>

The "clever" query that came up was thus:

  select * from @NAMESPACE@.sl_confirm
   where con_origin not in (select no_id from @NAMESPACE@.sl_node)
      or con_received not in (select no_id from @NAMESPACE@.sl_node);
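
(@NAMESPACE@ above is just the placeholder for the cluster schema; for
a cluster named "mycluster", a made-up name purely for illustration,
the schema is _mycluster, so the check run against each node comes out
as:)

  select * from _mycluster.sl_confirm
   where con_origin not in (select no_id from _mycluster.sl_node)
      or con_received not in (select no_id from _mycluster.sl_node);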

I can see several directions to take this:

 1.  It would seem worthwhile to have a script that looks at node
     configuration to find anomalies.

     One of my coworkers has started drafting such a script...

 2.  Have slon run this query at startup time, and if it finds
     entries, report the fact, and terminate itself.

 3.  Have slon run this query as part of the cleanup thread, and
     automatically purge out the offending entries (sketched below).
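
For option 3, a first cut (untested, just a sketch of the idea) would
be to have the cleanup thread turn the same predicate into a delete:

  delete from @NAMESPACE@.sl_confirm
   where con_origin not in (select no_id from @NAMESPACE@.sl_node)
      or con_received not in (select no_id from @NAMESPACE@.sl_node);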

In any case, if a node were to be dropped and later recreated, I'd
think the orphaned sl_confirm entries would be a Very Bad Thing,
right?  Those entries could convince the node that it was synced to a
later set of data than it was truly synced to, right?

The fact that 1.0.5 makes "drop node" clean up after dropped nodes
doesn't strike me as being quite good enough.  Nodes dropped under
1.0.5 get cleaned up, but we evidently did some node dropping that
preceded 1.0.5, and we can't guarantee that someone won't have such
"sludge" lying around...
-- 
"cbbrowne","@","ca.afilias.info"
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 673-4124 (land)

