Brian Fehrle brianf at consistentstate.com
Fri Aug 13 07:56:11 PDT 2010
I never did find a way to prevent this from happening; however, I found
what I believe to be a safe way to fix it after it happens.

http://slony.info/documentation/function.cleanupevent-interval-boolean.html

Running this function on the clusters with the "crumbs" left over will 
clean them out. I believe that this gets executed automatically during 
the drop node process, but for whatever reason it doesn't always clean 
everything up (possibly a race condition between the dropped node 
confirming the last event and the cleanupevent() execution).

Example sql statement"
" select  _slonyclustername.cleanupevent() "

I ran multiple tests of a script I wrote that performs the whole drop
sequence, including this cleanup call (a sketch is below), and each time
everything went smoothly; Slony replication continued to work without
problems.
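
For reference, a minimal slonik script for the drop step itself looks
roughly like this (the cluster name, conninfo strings, and node ids are
placeholders, not my actual configuration):

    cluster name = slonyclustername;
    node 1 admin conninfo = 'dbname=mydb host=master user=slony';
    node 3 admin conninfo = 'dbname=mydb host=slave2 user=slony';

    # drop the slave, with node 1 recording the event
    drop node (id = 3, event node = 1);

After the slonik script finishes, the cleanupevent() call above is run
against the remaining nodes via psql.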

- Brian Fehrle

Jaime Casanova wrote:
> On Mon, May 10, 2010 at 5:07 PM, Brian Fehrle
> <brianf at consistentstate.com> wrote:
>   
>> Hi all,
>>    I've been running into a problem with dropping a node from the slony
>> cluster, in which the slony system catalogs aren't getting fully cleaned
>> up upon the dropping of the node.
>>
>>    I have a three node cluster, one master and two slaves. I have a
>> script that will generate the slonik command that will drop one of the
>> slaves (in this case node three) from the slony cluster and it executes
>> without problem. However, after preforming the drop node a few dozen
>> times, there have been several instances in which the data in
>> _slony.sl_status still refers to a third node, and the st_lag_num_events
>> climb and climb (since there's no node to sync with, it will never drop
>> to 0).
>>
>>     
>
> I have this same problem right now with slony 1.2.20
> what could i do to recover? STORE NODE?
>
>   


