Mon Jun 14 13:25:36 PDT 2010
- Previous message: [Slony1-general] Huge lagging time
- Next message: [Slony1-general] Huge lagging time
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Mon, Jun 14, 2010 at 5:01 PM, Scott Marlowe <scott.marlowe at gmail.com> wrote:
> On Mon, Jun 14, 2010 at 9:56 AM, Dhaval Jaiswal
> <bablu_postgres at yahoo.com> wrote:
> >
> > I am working on PostgreSQL 8.0.2 with Slony-I.
> >
> > Whenever an update, insert, or delete happens on the primary, it takes
> > some time to replicate to the slave. We learned about this from the
> > sl_status table, where the lag time shows 1 or 2 hours. However, the
> > sl_confirm table shows the last replicated event was 5 minutes ago.
> > We have also seen a vacuum analyze running on the replication schema.
>
> Happens to me when there's too much IO for my hardware (which is quite
> a bit on my hardware).

How can I measure how much use of my hardware is too much when Slony is in place? I can measure CPU, memory, disk IO, and network use, but how much headroom does Slony need to work well? Is there any way to calculate this on a transaction-count and transaction-size basis?

Thanks!

> > Can someone point me to where I should look and how to improve
> > replication performance.
>
> More / faster drives and controllers.
>
> > As of now there is no chance of upgrading the version.
>
> That would be the first thing I'd recommend. Since you can't do it,
> you're going to have to have faster hardware, specifically the IO
> subsystem.

--
HeCSa
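There is no single threshold for "too much IO", but one practical measurement is how close a disk sits to 100% busy over an interval (this is what `iostat -x` reports as %util). A minimal sketch of the idea, assuming Linux's /proc/diskstats layout (the tenth stat field is total milliseconds spent doing I/O); the two snapshots below are made-up illustrative data, not real output:

```python
def busy_fraction(sample_a: str, sample_b: str, device: str, interval_s: float) -> float:
    """Fraction of interval_s the device spent doing I/O (0.0 = idle, 1.0 = saturated).

    sample_a and sample_b are /proc/diskstats snapshots taken interval_s apart.
    """
    def io_ms(sample: str) -> int:
        for line in sample.splitlines():
            fields = line.split()
            # /proc/diskstats lines: major minor device-name + stat fields;
            # fields[12] is cumulative milliseconds spent doing I/O
            if len(fields) > 12 and fields[2] == device:
                return int(fields[12])
        raise ValueError(f"device {device!r} not found in sample")

    delta_ms = io_ms(sample_b) - io_ms(sample_a)
    return min(delta_ms / (interval_s * 1000.0), 1.0)

# Made-up snapshots 10 seconds apart: sda accumulated 5000 ms of I/O time,
# i.e. the disk was busy half the interval.
before = "8 0 sda 100 0 800 40 50 0 400 30 0 4000 70"
after  = "8 0 sda 220 0 1760 90 110 0 880 65 0 9000 155"
print(busy_fraction(before, after, "sda", 10.0))  # → 0.5
```

In practice you would read /proc/diskstats twice a few seconds apart (or just watch `iostat -x 5`); sustained values near 1.0 on the drive holding the database, while the sl_status lag keeps growing, point at the IO subsystem Scott describes as the bottleneck.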