Brad Nicholson bnichols at ca.afilias.info
Wed Nov 21 07:14:48 PST 2007
On Tue, 2007-11-20 at 17:21 -0500, Robert Landrum wrote: 
> I'm working with a very large database.  300 tables and about 350 
> sequences.  Dumped, it's about 8GB of data.
> 
> I've read several posts that seem to indicate that I need to let slony 
> sync my data between the master and the slave when I subscribe the slave 
> to the master.  I would prefer to avoid this, as it'll take quite a 
> while for the sync to take place.  After about 2 hours and 30 minutes, 
> I'm only about 5% complete.
> 
> Is this just a limitation of slony? Or is there a workaround I've missed?


It's certainly not a limitation of Slony with a data set of that size.
We've subscribed databases 10 times the size of yours in a few hours,
albeit on very high-end hardware.

First question - do your slon logs show that the tables you are copying
have been successfully truncated before copying?  If not, you may be
trying to write to a table with a lot of dead tuples left behind.  This
was a problem prior to PG 8.0 (or 8.1, I forget) where Slony had to
delete the data in the target tables instead of truncating it.  If it's
not truncating, then stop the copy, truncate the target tables on the
subscriber, and try again.
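As a minimal sketch of that log check, the snippet below runs against a
sample log; the log path and the exact message wording are assumptions,
so match whatever your slon actually writes at the start of the initial
COPY:

```shell
# Sketch: does the slon log show a TRUNCATE before the COPY starts?
# The sample log below stands in for your real slon log; its path and
# message wording are assumptions -- adapt both to your installation.
cat > /tmp/slon_sample.log <<'EOF'
CONFIG remoteWorkerThread_1: truncate "public"."big_table"
CONFIG remoteWorkerThread_1: copy "public"."big_table"
EOF

if grep -qi 'truncate' /tmp/slon_sample.log; then
    echo "tables are truncated before copy"
else
    echo "WARNING: no truncate seen -- likely deleting, expect dead tuples"
fi
```

If the real log shows deletes instead, stop the slon, TRUNCATE each
replicated table on the subscriber (psql will do), and resubscribe.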

If the tables are being truncated properly, then it could be a
limitation of your hardware, your Postgres configuration, or your usage
patterns.

Do some poking around your environment.  How are the IO, memory, and CPU
usage?  Check all points - the server that is the provider, the server
that is the subscriber, and the server that the slons are running on.
How is your network?  Is it stable?  Is the bandwidth being saturated?
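One way to organize that poking around is a per-host checklist, written
here to a scratch file.  Nothing in it is Slony-specific; the hostnames
are placeholders, and iostat/vmstat assume the usual Linux
sysstat/procps tools:

```shell
# Sketch of a per-host checklist for a slow subscription. The tools and
# hostname are assumptions; run each command on the host named at left.
tee /tmp/slony_health_checklist.txt <<'EOF'
provider:    iostat -x 5    # disk busy% and await while the COPY reads
subscriber:  vmstat 5       # swap activity and IO wait while it writes
slon host:   top            # is the slon or postgres process CPU-bound?
network:     ping -c 60 <subscriber-host>   # latency spikes, packet loss
EOF
```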

Is your slon running, or is it dying and starting the copy again?
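A quick way to spot a dying-and-retrying slon is to count how many times
the initial copy has started in its log.  Again this runs against a
sample log; the "copy_set" wording is an assumption about your slon's
log messages:

```shell
# Sketch: has the initial COPY been restarted? More than one start line
# means an earlier attempt died partway through. The sample log and the
# 'copy_set' message text are assumptions -- match your real log.
cat > /tmp/slon_copy.log <<'EOF'
CONFIG remoteWorkerThread_1: copy_set 1
CONFIG remoteWorkerThread_1: copy_set 1
EOF

starts=$(grep -c 'copy_set' /tmp/slon_copy.log)
if [ "$starts" -gt 1 ]; then
    echo "COPY started $starts times: slon is dying and retrying"
else
    echo "single COPY attempt in progress"
fi
```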


-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.




More information about the Slony1-general mailing list