Wed Apr 21 22:42:55 PDT 2010
- Previous message: [Slony1-general] logswitch_finish( )
- Next message: [Slony1-general] Slony 2.0.3 RPMs for RHEL5 are released
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Wed, Apr 21, 2010 at 8:23 PM, Scott Marlowe <scott.marlowe at gmail.com> wrote:
> On Wed, Apr 21, 2010 at 2:28 PM, Christopher Browne
> <cbbrowne at ca.afilias.info> wrote:
>> Scott Marlowe <scott.marlowe at gmail.com> writes:
>>> On Wed, Apr 21, 2010 at 2:04 PM, Jan Wieck <JanWieck at yahoo.com> wrote:
>>>> On 4/21/2010 2:38 PM, Scott Marlowe wrote:
>>>>>
>>>>> So, I had a query that blocked all updates going out of the sl_log_2
>>>>> table, and it's 13Gig. sl_log_1 is empty.
>>>>>
>>>>> Is the logswitch_finish() command an acceptable method for forcing the
>>>>> replication engine to switch from 2 to 1 so I can vacuum full 2?
>>>>
>>>> It is completely safe to call logswitch_finish() at any time. It may or
>>>> may not actually do something.
>>>>
>>>> In your case, I presume the value of sl_log_status is 2. This means it is
>>>> waiting for sl_log_2 to become empty; once that happens, it will truncate
>>>> it and set sl_log_status to 0.
>>>
>>> So, if I let the system just sit quiescent for a while, it should
>>> straighten things out?
>>
>> The other thing that could be useful to run would be the stored function
>> cleanupevent().
>>
>> That clears out old events that have been confirmed by all nodes in the
>> cluster, which is the prerequisite for logswitch_finish() doing
>> anything useful.
>>
>> It would probably be a useful idea for cleanupevent() to log a little
>> bit of information about how much work it does (e.g. how many tuples
>> it trims out of sl_confirm, sl_event, sl_seqlog).

So, after the load fell off this evening, slony caught up and both log
tables showed as being small, in the < 100 MB range all the time.
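For anyone hitting this thread later, a sketch of the checks discussed above. This assumes a cluster named "mycluster" (so its schema is "_mycluster") and that sl_log_status is readable as a sequence in that schema; the cleanupevent() interval argument may differ between Slony versions, so treat this as illustrative, not authoritative:

```sql
-- Check log-switch state (per Jan's explanation: 2 means the switch is
-- waiting for sl_log_2 to empty so it can be truncated; 0 means it is done
-- and sl_log_1 is active). "_mycluster" is a placeholder schema name.
SELECT last_value FROM _mycluster.sl_log_status;

-- Trim events already confirmed by all nodes (sl_confirm, sl_event,
-- sl_seqlog) -- the prerequisite for logswitch_finish() to make progress.
-- The interval argument is an assumption; check your version's signature.
SELECT _mycluster.cleanupevent('0 seconds'::interval);

-- Safe to call at any time; it simply returns without doing anything
-- if the outgoing log table is not yet empty.
SELECT _mycluster.logswitch_finish();
```

Repeating the sl_log_status query after a quiescent period should show it return to 0 once the old log table has been truncated.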