Hannu Krosing hannu
Wed Sep 28 23:09:37 PDT 2005
On Wed, 2005-09-28 at 12:37 -0500, Joe Markwardt wrote:
> On Tue, 2005-09-27 at 21:08 -0400, Christopher Browne wrote:
> > Joe Markwardt wrote:
> > 
> > >On Wed, 2005-09-28 at 01:20 +0300, Hannu Krosing wrote:
> > >  
> > >
> > >>On Tue, 2005-09-27 at 16:37 -0500, Joe Markwardt wrote:
> > >>
> > >>    
> > >>
> > >>>Yep.  And I restarted the slons as well, so they should be using the
> > >>>index as long as the planner thinks it will help.  Is there any way I
> > >>>can see the query that slon is generating to create the cursor? If I
> > >>>could see that query, I could run some EXPLAIN ANALYZEs and see what
> > >>>they say...
> > >>>      
> > >>>
> > >>I usually look at the queries in postgres log.
> > >>
> > >>In your case you could set up postgres to log only long queries, say
> > >>over 50 sec, and then just tail -f the log.
> > >>
> > >>When analysing, note that slon explicitly sets enable_seqscan and
> > >>enable_indexscan before running the query on sl_log_1.
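[The long-query logging mentioned above can be sketched as follows; paths
and the data directory are placeholders for your install, and
log_min_duration_statement takes milliseconds, so 50 seconds is 50000:]

```shell
# Assumed paths; adjust PGDATA and the log location for your install.
# 50000 ms = 50 seconds, matching the threshold suggested above.
echo "log_min_duration_statement = 50000" >> "$PGDATA/postgresql.conf"
pg_ctl reload -D "$PGDATA"
tail -f "$PGDATA/pg_log/"*.log
```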
> > >>
> > >>    
> > >>
> > >Thanks for the tip, but the only queries that are being logged are the
> > >FETCH statements...I suppose I could set it up to log all queries (and
> > >plans) and then restart slon, then turn off the query logging.
> > >  
> > >
> > Right you are...
> > 
> > The interesting query is the one that sets up the cursor, and it doesn't
> > take long :-(.
> > 
> > You can tell how many SYNCs are being processed together by looking at
> > two consecutive lines that say "SYNC XXXX done in Y.xyz seconds"
> > 
> > (They'll have lines in between...)
> > 
> > If it's "SYNC 1610644 done in 119 seconds" followed (2 minutes later) by
> > "SYNC 1610724 done in 119 seconds", then you know that it's doing groups
> > of 80, and even if it's not catching up quickly, that's at least the
> > right direction to go in.
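[The SYNC-difference arithmetic described above can be checked with a quick
awk filter; the log lines here are simplified, since real slon output also
carries timestamps and debug levels:]

```shell
# Simplified log excerpt (real slon lines include timestamps and DEBUG levels).
printf '%s\n' \
  'SYNC 1610644 done in 119.123 seconds' \
  'SYNC 1610724 done in 119.456 seconds' |
awk '/SYNC [0-9]+ done/ { if (prev) print $2 - prev, "SYNCs per group"; prev = $2 }'
```

For the pair of lines above this prints "80 SYNCs per group".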
> > 
> 
> Chris, going through the logs from yesterday afternoon it appears that
> it is varying the number of SYNCs processed each time.  Isn't the -o0
> option supposed to force it to always do 80?
> 
> I keep getting log entries like this:
> 
> 2005-09-28 11:06:29 CDT DEBUG3 calc sync size - last time: 1 last
> length: 127436 ideal: 80 proposed size: 2
> 
> with varying values for proposed size...
> 
> Attached you'll find the EXPLAIN ANALYZE output from the cursor
> declaration I got from the logs.  It appears that it is using the new
> index, and the new index is indeed faster than a sequential scan on the
> table.  I'm not sure if there is anything else that could be done to
> make this query run faster while it's catching up, but I'll play around
> some.
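[Reproducing slon's plan by hand looks roughly like this; the database
name, cluster schema, and WHERE clause are placeholders, and the off/on
choice for the enable_* settings is an assumption for illustration, since
slon toggles enable_seqscan/enable_indexscan before running this query:]

```shell
# Placeholders throughout: database name, cluster schema, filter values.
psql -d mydb <<'EOF'
SET enable_seqscan = off;    -- assumption: mirror what slon sets
SET enable_indexscan = on;
EXPLAIN ANALYZE
SELECT log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata
  FROM "_mycluster".sl_log_1
 WHERE log_origin = 1
 ORDER BY log_actionseq;
EOF
```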

So it takes 114 seconds for an index scan over ~300k XIDs' worth of rows
that returns nothing.  What hardware are you running on (RAM/disks/CPU)?

Depending on how big sl_log_1 is, you may want to do a REINDEX on
sl_log_1.

Or maybe even:

  CLUSTER sl_log_1_idx2 ON sl_log_1;

OTOH, if your client-side processing needs allow it, you could just
re-subscribe the biggest tables (preferably dropping all indexes except
the PK or UNIQUE index used by Slony before doing it).  That is often
faster than catching up.
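[A rough sketch of that re-subscribe route; every identifier here (cluster
name, node IDs, set ID, conninfo strings, index name) is a placeholder for
your actual setup, not taken from this thread:]

```shell
# On the subscriber, drop secondary indexes on the big table first
# (keep the PK / UNIQUE index that Slony uses).
psql -h slave -d mydb -c 'DROP INDEX big_table_idx1;'

# Then resubscribe the set with slonik (IDs are assumptions):
slonik <<'EOF'
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=master';
node 2 admin conninfo = 'dbname=mydb host=slave';
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
EOF
```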

-- 
Hannu Krosing <hannu at skype.net>



More information about the Slony1-general mailing list