Jaime Casanova jaime at 2ndquadrant.com
Thu Aug 12 22:34:04 PDT 2010
On Mon, May 10, 2010 at 5:07 PM, Brian Fehrle
<brianf at consistentstate.com> wrote:
> Hi all,
>    I've been running into a problem with dropping a node from the slony
> cluster, in which the slony system catalogs aren't getting fully cleaned
> up upon the dropping of the node.
>
>    I have a three node cluster, one master and two slaves. I have a
> script that will generate the slonik command that will drop one of the
> slaves (in this case node three) from the slony cluster and it executes
> without problem. However, after performing the drop node a few dozen
> times, there have been several instances in which the data in
> _slony.sl_status still refers to a third node, and the st_lag_num_events
> value climbs and climbs (since there's no node to sync with, it will
> never drop to 0).
>

I have this same problem right now with Slony 1.2.20.
What could I do to recover? STORE NODE?
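One possible recovery path, sketched below, is to re-issue DROP NODE so the surviving nodes replay the cleanup event. This is a hedged example, not a confirmed fix: the cluster name (`slony`), node IDs, and connection strings are placeholders, and you should verify against your own configuration before running it.

```
# hypothetical slonik script -- adjust cluster name, node ids, conninfo
cluster name = slony;

node 1 admin conninfo = 'dbname=mydb host=master user=slony';
node 2 admin conninfo = 'dbname=mydb host=slave1 user=slony';

# Re-issue the drop for the stale node 3; event node must be a
# surviving node (here the origin, node 1).
drop node (id = 3, event node = 1);
```

If the node is already gone from sl_node and only sl_status rows linger, DROP NODE may error out; in that case the remaining option is usually manual cleanup of the stale rows in the _slony schema on each surviving node, which is best done only after taking a backup. STORE NODE would re-register the node rather than clean it up, so it is unlikely to help here.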

-- 
Jaime Casanova         www.2ndQuadrant.com
Soporte y capacitación de PostgreSQL


More information about the Slony1-general mailing list