Will Langford unfies at gmail.com
Fri Sep 28 17:00:26 PDT 2007
Hi,

I work for a company that installs network services in many locations, each of
which is completely isolated/localized.  These services include moderate to
heavy PostgreSQL traffic, with either a balanced select/update ratio or one
leaning slightly towards more updates.

Due to the nature of our products, we typically only need a single mid- to
low-range server to handle our loads.  Times are changing, though: our products
are becoming more demanding, and our needs are changing with them.

We've always had a heartbeat/DRBD failover system on location so that if
our system goes down, a backup takes over.  We've had insanely great success
with this setup.  The small amount of downtime is acceptable, although our
system is generally a 24/7 grinder.
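For context, this is roughly the classic heartbeat v1 arrangement; a minimal
sketch of what I mean (hostnames, device names, and the mount point here are
all made up for illustration):

```
# /etc/ha.d/haresources (heartbeat v1 style) -- all names hypothetical.
# node1 normally owns the virtual IP, the DRBD device, the replicated
# filesystem, and the postgres init script; on failure, node2 takes the
# whole resource chain over in this order.
node1 IPaddr::192.168.0.50 drbddisk::r0 Filesystem::/dev/drbd0::/data::reiserfs postgresql
```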

If we keep our current setup where the two primary nodes have the database
living on DRBD, and there's a third report-crunching box, how tolerant is
Slony-I to a heartbeat/DRBD switchover to a new system?  Or is there some
kind of multi-master replication system that's effective for a two- or
three-system setup?
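To make the question concrete, what I imagine is something like the following
slonik setup, where the master conninfo points at the heartbeat-managed virtual
IP rather than a physical node, so the slons would (I hope) keep working across
a DRBD switchover.  This is only a sketch -- cluster, database, host names, and
the table are all invented:

```
# minimal slonik sketch -- all names/addresses hypothetical
cluster name = ourcluster;

# node 1 is "whichever box currently holds the DRBD mount",
# reached through the heartbeat virtual IP
node 1 admin conninfo = 'host=192.168.0.50 dbname=ourdb user=slony';
node 2 admin conninfo = 'host=reportbox dbname=ourdb user=slony';

init cluster (id = 1, comment = 'DRBD-backed master');
store node (id = 2, comment = 'report-crunching slave');

store path (server = 1, client = 2,
            conninfo = 'host=192.168.0.50 dbname=ourdb user=slony');
store path (server = 2, client = 1,
            conninfo = 'host=reportbox dbname=ourdb user=slony');

create set (id = 1, origin = 1, comment = 'replicated tables');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.some_table');
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
```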

Lately, I've been looking at several bottlenecks in our system.  I've had
good results moving to a newer version of postgres, and also with changing
the underlying filesystem from ext3 to reiser.  I've done about all I can to
tune the configuration settings relating to wal/fsm/buffers/etc.  I've also
done what I can to get the proper columns indexed in the myriad of tables
used.  The last couple of things left to try for optimization would be to
generate diagrammed output of our software's query patterns, and possibly to
move as many queries as possible to function calls rather than plain strings
passed to the C library back end (ie: to avoid constant re-parsing/tokenizing).
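What I mean by that last point, roughly (the table and function names are
invented for illustration): instead of sending the same query text over and
over, fold it into a server-side function, or at the very least PREPARE it
once per connection so the plan gets reused:

```
-- hypothetical example: fold a hot query into a server-side function
CREATE FUNCTION orders_for_customer(cust integer)
RETURNS SETOF orders AS $$
    SELECT * FROM orders WHERE customer_id = $1;
$$ LANGUAGE sql STABLE;

-- or, at minimum, parse once per connection and reuse the statement:
PREPARE get_orders (integer) AS
    SELECT * FROM orders WHERE customer_id = $1;
EXECUTE get_orders (42);
```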

Given that we typically have only two servers on location, it seems that
many of the multi-master replication/clustering add-ons would end up with a
negative speed benefit, especially given the 'bad' update/select ratio.  That
said, we do have some client software that only runs some mean ole huge nasty
selects (no updates), and there are daily report crunchings on non-volatile
(ie: the previous day's) data.

We're a small software house, and many of the enterprise solutions are
beyond our budget... so... make do with what ya got, etc.  We could
probably easily justify an additional low-end server to handle the
read-only client software, as long as it ran solely on that box.


So..... if the non-volatile read-only queries are being executed on the
slave system, I can see that being a fairly decent performance boon
during peak report generation etc.  Anything that needs write access
would either connect to the master system directly, or generate some .sql
files to be copied over and run on the master.
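The ".sql file" path would be something like the sketch below (the spool
directory, batch contents, host, and db name are all made up; the actual
scp/psql step is left in comments so nothing here touches a live server):

```shell
#!/bin/sh
# sketch of shipping a write batch from the report box to the master;
# /tmp/outbox and the UPDATE statement are hypothetical
OUTBOX=/tmp/outbox
mkdir -p "$OUTBOX"
BATCH="$OUTBOX/batch-$(date +%Y%m%d).sql"

# wrap the whole batch in one transaction so a half-copied file
# can never half-apply on the master
{
  echo "BEGIN;"
  echo "UPDATE report_totals SET processed = true WHERE day = current_date - 1;"
  echo "COMMIT;"
} > "$BATCH"

# then, on a real system, copy and apply it on the master:
#   scp "$BATCH" master:/tmp/ && \
#     ssh master "psql -f /tmp/$(basename "$BATCH") ourdb"
```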

Our current DRBD/failover arrangement is for the database files, and all our
other software-specific files, to live on the DRBD-replicated mount point.
We've not had any problems with crash / tripped-over-power-cord failovers:
postgres fires up on the backup server after heartbeat discovers the first
node is dead.  And I actually kind of prefer this system to application-level
replication, because any kind of lock/stall caused by an application
replication subsystem is detrimental, and our client software is a bit
volatile at times (a 5-year-old work in progress.... feature creep etc).
So... my question becomes (just to copy/paste):

If we keep our current setup where the two primary nodes have the database
living on DRBD, and there's a third report-crunching box, how tolerant is
Slony-I to a heartbeat/DRBD switchover to a new system?  Or is there some
kind of multi-master replication system that's effective for a two- or
three-system setup?

-Will