Tue Apr 4 08:06:59 PDT 2006
- Previous message: [Slony1-general] sl_log_1 not getting cleaned out?
- Next message: [Slony1-general] sl_log_1 not getting cleaned out?
Christopher Browne wrote:
> Gavin Hamill <gdh at laterooms.com> writes:
>
> Ideally, there should be two indexes on sl_log_1 and sl_log_2...
>
> create index sl_log_2_idx1 on @NAMESPACE@.sl_log_2
>     (log_origin, log_xid @NAMESPACE@.xxid_ops, log_actionseq);
>
> -- Add in an additional index as sometimes log_origin isn't a useful discriminant
> create index sl_log_2_idx2 on @NAMESPACE@.sl_log_2
>     (log_xid @NAMESPACE@.xxid_ops);

OK, two indexes on each of sl_log_1 and sl_log_2, giving four indexes on each node?

> I'd suggest you run either test_slony_state.pl or
> test_slony_state-dbi.pl (depending on whether you like Pg or DBI);
> those scripts rummage through the cluster looking for some common
> problems.

Anyway, as of this morning, Slony does appear to be maintaining itself: the number of rows in sl_log_1 has dropped right back to 30000, so the half-million rows must simply have been due to the large number of db updates we do during our normal daily churn - I'd no idea we'd be doing that much traffic :) Plus the fact that the db hadn't seen a VACUUM ANALYZE in over a week due to a broken cronjob won't have helped - whoops :)

Anyway, I'll certainly give the Perl state-tester a go - thank you kindly {:-)

Cheers,
Gavin.
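For anyone reading along: @NAMESPACE@ is the placeholder Slony substitutes with the cluster's schema name when the scripts are installed. As a sketch only, assuming a hypothetical cluster named "mycluster" (whose objects live in the schema _mycluster), the mirrored pair of indexes for sl_log_1 would look something like:

```
-- Sketch under an assumed cluster name; "_mycluster" and the index names
-- are illustrative, not taken from the thread above.
-- Composite index: origin, transaction id (via the xxid operator class),
-- and action sequence.
create index sl_log_1_idx1 on _mycluster.sl_log_1
    (log_origin, log_xid _mycluster.xxid_ops, log_actionseq);

-- log_origin alone is sometimes a poor discriminant, so also index
-- log_xid on its own.
create index sl_log_1_idx2 on _mycluster.sl_log_1
    (log_xid _mycluster.xxid_ops);
```

The same pattern, with sl_log_2 in place of sl_log_1, gives the two indexes quoted above - hence four log-table indexes per node in total.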