Darcy Buskermolen
Tue Mar 8 16:49:27 PST 2005
On Tuesday 08 March 2005 02:42, Chris Newland wrote:
> Thanks to all for the info.
>
> I'll keep an eye out for announcements as I'd like to test log shipping
> in Slony 1.1 in my dev environment.

You can always track CVS HEAD if you want a preview of what's upcoming in 
1.1 (as well as provide feedback on anything that may not work).
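
For example, a checkout along these lines (the CVSROOT below is a 
placeholder; use the project's published anonymous CVS address, and the 
module name "slony1" is an assumption):

    # one-time checkout of the development tree
    cvs -d :pserver:anoncvs@<cvs-host>:/cvsroot login
    cvs -d :pserver:anoncvs@<cvs-host>:/cvsroot checkout slony1
    # later, pull the latest HEAD (-A clears sticky branches/tags,
    # -d picks up new directories, -P prunes empty ones)
    cd slony1 && cvs update -APd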

>
> I'll go with the pg_dump + clean method for taking snapshots of
> production.
>
> Regards,
>
> Chris Newland
>
> -----Original Message-----
> From: Christopher Browne [mailto:cbbrowne at ca.afilias.info]
> Sent: 07 March 2005 19:13
> To: Chris Newland
> Cc: Slony1-general at gborg.postgresql.org
> Subject: Re: [Slony1-general] Taking backups from a Slony cluster
>
> Chris Newland wrote:
> >Hi all,
> >
> >I understand the Slony failover mechanism and realise that taking
> >pg_dump backups from a Slony cluster is the "wrong" way to protect
> >against failures.
> >
> >I'd like to ask what the Slony experts think is the best way to protect
> >against a catastrophic loss of origin and all subscriber nodes if they
> >are located in a single data centre and some physical disaster destroys
> >the entire cluster.
> >
> >It is currently not possible for me to have a subscriber node at an
> >external location for performance reasons, so I would like to have
> >a way to dump only the application data (no Slony tables). This would
> >allow me to build a fresh PostgreSQL server and reload the saved
> >application data (which will hopefully never happen).
> >
> >I imagine the upcoming log shipping feature would be an alternative and
> >I would keep a remote subscriber node updated using batch updates that
> >are out-of-band of the Slony transactions involving the origin node and
> >local subscriber cluster.
> >
> >Can you give me any indication on how close log-shipping is to a
> >production-ready state?
> >
> >If it's a long way then is there any quick way to dump the application
> >data from the master without all of the Slony information?
>
> I've been adding in various patches today, sort of putting off work on
> debugging the SUBSCRIBE_SET event for log shipping.  :-)
>
> Log shipping is close to being ready to unleash on at least the
> unprepared part of the world.  I'm hoping we could let a 1.1 release
> candidate out of the bag this week or next.  That would make for a
> "1.1.0", which has rather a lot of new features.  I'd be reluctant to
> put that into production; I'd rather wait for a 1.1.2...
>
> That means log shipping isn't quite an immediate answer...  I'll try to
> touch on some immediate answers...
>
> The problem with doing a straight pg_dump anywhere other than the origin
> node is that Slony-I does some fiddling with triggers on tables so as to
> hide them on subscriber nodes.  This means that a dump of the schema
> isn't going to be quite 100% what you want it to be.
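>
> For illustration, you can see that fiddling on a subscriber with a query
> along these lines (the database name "replica" and cluster name
> "mycluster" are hypothetical, and the trigger-name prefix is from memory):
>
>     # list the triggers Slony-I has installed on subscriber tables
>     psql -d replica -c \
>       "SELECT tgname, tgrelid::regclass FROM pg_trigger WHERE tgname ~ '^_mycluster';"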
>
> But a dump of just data, e.g. via "pg_dump --data-only", will provide
> consistent (if not _completely_ up to date) results on pretty well any
> node.  (And note that if there's a lot of data, so that pg_dump runs for
> 20 minutes, and you get updates every 10 seconds, then there's no way
> for _any_ pg_dump to remain up to date when it ends...)
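>
> For example (database and output file names hypothetical):
>
>     # application data only; no schema, so the subscriber-side
>     # trigger fiddling doesn't leak into the dump
>     pg_dump --data-only mydb > mydb-data.sql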
>
> On our system, all of the tables we want backed up are in the "public"
> schema, so our pg_dump indeed just dumps that schema.
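>
> Something along these lines (database name hypothetical; -n is pg_dump's
> schema-selection switch):
>
>     # dump only the application schema, leaving the Slony-I cluster
>     # schema (e.g. "_mycluster") out entirely
>     pg_dump -n public mydb > mydb-public.sql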
>
> I expect that you'd like log shipping, which means you probably should
> start testing it out when the code is released.  More testers are always
> better.  But for now, using modified pg_dumps is probably the best
> answer available.

