Chris Newland chrisn
Mon Mar 7 17:01:41 PST 2005
Hi all,

I understand the Slony failover mechanism and realise that taking
pg_dump backups from a Slony cluster is the "wrong" way to protect
against failures.

I'd like to ask what the Slony experts think is the best way to protect
against a catastrophic loss of origin and all subscriber nodes if they
are located in a single data centre and some physical disaster destroys
the entire cluster.

It is currently not possible for me to have a subscriber node at an
external location for performance reasons, so I would like a way to
dump only the application data (no Slony tables). This would allow me
to build a fresh PostgreSQL server and reload the saved application
data (a step I hope never to need).
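Concretely, something like the following is what I have in mind (a sketch only: "mydb" is a placeholder database name, and I'm assuming all application tables live in the public schema while Slony-I keeps its own objects in a separate "_cluster" schema, so restricting pg_dump to the application schema should skip the replication tables):

```shell
# Sketch: dump only the application data by restricting pg_dump to the
# schema(s) the application tables live in. Slony-I puts its tables,
# sequences, and functions in its own schema named after the cluster
# (e.g. "_mycluster"), so a schema-limited dump never touches them.
# "mydb" and "public" are placeholders for my setup.
DB=mydb
APP_SCHEMA=public

# Dry run: build and print the command rather than hitting a live server.
CMD="pg_dump --data-only -n $APP_SCHEMA -f ${DB}_appdata.sql $DB"
echo "$CMD"
```

(Against a real server I'd drop the echo and run the command directly, once per application schema if there are several.)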

I imagine the upcoming log-shipping feature would be an alternative:
I could keep a remote subscriber node updated using batch updates that
are out-of-band of the Slony transactions involving the origin node and
the local subscriber cluster.
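As I understand the log-shipping design, the archives would arrive at the remote site as a sequence of numbered SQL files to be replayed in order with psql. A rough sketch of what the apply side might look like (the file names, directory, and database name here are my own guesses, and the psql step is shown as a dry run):

```shell
# Sketch: replay Slony log-shipping archive files in order at the
# remote site. Uses a temporary directory with two dummy archive files
# to stand in for the real archive drop point; file names are guesses
# at the log-shipping naming scheme, not confirmed.
ARCHIVE_DIR=$(mktemp -d)
DB=standbydb
touch "$ARCHIVE_DIR/slony1_log_1_0000000001.sql" \
      "$ARCHIVE_DIR/slony1_log_1_0000000002.sql"

APPLIED=""
# Shell globs expand in sorted order, so the files replay in sequence.
for f in "$ARCHIVE_DIR"/*.sql; do
    echo "would apply: psql -d $DB -f $f"   # dry run; drop echo to execute
    APPLIED="$APPLIED $(basename "$f")"
done
```

The key property I'd rely on is that the files apply strictly in sequence, so the remote database is always at a consistent (if slightly stale) point.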

Can you give me any indication of how close log shipping is to a
production-ready state?

If it's a long way off, is there any quick way to dump the application
data from the master without any of the Slony information?

Thanks for all your hard work on Slony.

Regards,

Chris Newland

More information about the Slony1-general mailing list