Mon Oct 9 12:16:19 PDT 2006
- Previous message: [Slony1-general] "Fetch 100 from LOG" too slow, Slony way behind
- Next message: [Slony1-general] Using slony with many schema's
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Oct 5, 2006, at 1:48 PM, Hanks, Dan wrote:

> Is there anything else we can be doing to help it along? sl_log_1
> has 15M rows in it currently. Are we at the point of being
> hopelessly behind?

You might be. What's the rate of change on the db? I.e., how fast is
sl_log_1 growing?

If you can take the hit of locking the table for a bit, try re-indexing
the sl_log_1 indexes. Also, depending on your other systems' usage, try
re-indexing large and frequently updated tables on those boxes, too.

Even with pg 8.1, we're finding some usage patterns result in big table
bloat, and reindex is the only way out (we shaved some tables down by
over 50% dead space in the indexes). In our case, the reindex helped
speed up the replication significantly.
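The reindex advice above could be sketched roughly as follows. Note this is a hedged sketch, not from the original thread: the Slony cluster name ("mycluster", giving the schema "_mycluster") is an assumption, and you would substitute your own cluster name. REINDEX holds an exclusive lock, so plan for the replication stall it causes.

```sql
-- Assumption: the Slony-I cluster is named "mycluster", so its
-- objects live in the "_mycluster" schema. Adjust to your setup.

-- First, get a sense of how large the log table and its indexes are.
SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = '_mycluster'
  AND c.relname LIKE 'sl_log_1%';

-- Rebuild the table's indexes. This takes an exclusive lock on
-- sl_log_1 for the duration, blocking replication activity on it.
REINDEX TABLE _mycluster.sl_log_1;
```

On the subscribers, the same REINDEX (or a targeted REINDEX INDEX on the most-churned indexes) can be applied to large, frequently updated replicated tables, as the post suggests.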
More information about the Slony1-general mailing list