Tim Bowden tim.bowden at westnet.com.au
Sun Sep 9 18:23:33 PDT 2007
On Sun, 2007-09-09 at 20:46 -0400, Jan Wieck wrote:
> On 9/9/2007 1:58 PM, Filip Rembiałkowski wrote:
> > 2007/9/9, Tim Bowden <tim.bowden at westnet.com.au>:
> > 
> >>  Hopefully that will scale well as I've
> >> got 100+ master db's.
> > 
> > Over 100 partitions? You should be careful, as Slony1 has some
> > communication overhead which grows quadratically with the number of
> > nodes in a cluster.
> > 
> > google for "slony communication costs"
> > 
> > I'm not sure if it applies to your situation, though... I've never
> > tested such a setup; maybe we should wait until someone more
> > experienced weighs in on it.
> > 
> > Maybe I should not say this here... but at worst, you will have to
> > drop Slony in favour of another (simpler) solution - like a home-made
> > trigger-based tool or another replication engine (Londiste?).
> 
> No problem with saying that here. It is a valid concern and points to a 
> known weakness in Slony, so Tim knows right away what to test first 
> (now, having a 100+ node test cluster is of course a different story).
> 
Don't I know it.  Still haven't worked out how I'm going to build a test
system with each node carrying a reasonable load.
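To put Filip's quadratic point in some perspective, here's a quick
back-of-the-envelope in Python. It assumes the overhead is roughly
proportional to the number of pairwise node-to-node paths, n * (n - 1);
I haven't verified that against the actual sl_listen / event routing
model, so treat the numbers as an order-of-magnitude guess only:

# Rough sketch: assumed pairwise node-to-node paths, n * (n - 1).
# The exact Slony event-routing cost model may differ.
for n in (10, 50, 100, 120):
    print("%4d nodes -> %6d node-to-node paths" % (n, n * (n - 1)))

At 100 nodes that's already ~9,900 paths versus ~90 at 10 nodes, which is
why I want to test this before committing to it.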

> The new logshipper tool I just added to the CVS tree (not released yet) 
> might be of help here. It would certainly be possible to divide the 
> whole bunch of locations into several regions and replicate each of them 
> into a region-specific central slave. All of those would use Slony 
> archives and the logshipper to consolidate into one central database.
> 
> 

Ah, now there's a good idea.  Worst case, I was going to revert to
log shipping for as many nodes as required to get it going, so anything
better is sweet candy.
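Just so I'm picturing the layout right, something like the sketch below?
(Region size, node ids and the "region slave" numbering are made up for
illustration - this is Python rather than a real slonik config.)

# Hypothetical sketch of Jan's layout: masters grouped into regions,
# each region feeding a region slave, all consolidated into one central db.
masters = list(range(1, 101))      # 100+ master node ids (made up)
region_size = 10                   # masters per region (made up)

regions = [masters[i:i + region_size]
           for i in range(0, len(masters), region_size)]

for r, nodes in enumerate(regions, 1):
    print("region %2d: masters %3d..%3d -> region slave %d (normal Slony subscription)"
          % (r, nodes[0], nodes[-1], 1000 + r))

print("region slaves 1001..%d -> one central db (Slony archives + logshipper)"
      % (1000 + len(regions)))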

> Jan
> 

Thanks,
Tim Bowden


