Christopher Browne cbbrowne at ca.afilias.info
Fri Feb 26 14:54:41 PST 2010
Dave Stevenson wrote:
> SELECT pg_size_pretty(pg_database_size(current_database()))
> -> 1198MB
>  
> > Making Slony scale like this is a non-starter, I would think.
>  
> Can it cascade?:
>  
> Master
>     |-Regional Master 1
>     |    |-Slave 1.1
>     |    |-Slave 1.2
>     |    \-Slave 1.3
>     |-Regional Master 2
>     |    |-Slave 2.1
>     |    |-Slave 2.2
> ...
It'll cascade, but the word "master" can only fit in one place.

For any given table, there's Only One Origin.  Updates must always go to 
the One True Master node.
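A cascade like the one sketched above is set up in slonik, Slony-I's
admin scripting language, by subscribing the regional node with
"forward = yes" so it can feed its own downstream slaves.  A minimal
sketch; the cluster name, node numbers, and conninfo strings here are
made-up placeholders:

```
cluster name = mycluster;

node 1 admin conninfo = 'dbname=app host=master';
node 2 admin conninfo = 'dbname=app host=regional1';
node 3 admin conninfo = 'dbname=app host=slave11';

# Node 2 subscribes from the origin with forward = yes, so it keeps
# the log data needed to feed downstream slaves.
subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);

# Node 3 then subscribes with node 2 (not the origin) as its provider.
subscribe set (id = 1, provider = 2, receiver = 3, forward = no);
```

Note that even in this topology, node 1 remains the only place where
updates to the replicated tables may be applied.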

You could build into your application some sort of queueing mechanism to 
queue updates and push them to the master, but that's definitely extra 
infrastructure, and some application redesign.  And if you're heading in 
that direction, my suspicion is that you'll find Slony-I looks mighty 
fragile, given how fragile the connections between the pieces of that 
infrastructure will be.
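To make the shape of that extra infrastructure concrete, here is a
hedged sketch of such a queueing layer, in Python for illustration.
The UpdateForwarder class and its send callback are hypothetical names,
not anything Slony-I provides; the point is that the application buffers
writes locally and pushes them, in order, to the one origin node:

```python
import queue


class UpdateForwarder:
    """Hypothetical sketch: buffer updates locally, flush to the origin.

    send_to_origin is a callable that applies one update at the origin
    node (e.g. runs an INSERT/UPDATE over a connection to the master).
    """

    def __init__(self, send_to_origin):
        self.pending = queue.Queue()
        self.send = send_to_origin

    def submit(self, update):
        # The application enqueues instead of writing to its local replica;
        # replicated tables are read-only everywhere but the origin.
        self.pending.put(update)

    def flush(self):
        # Push queued updates to the origin in order.  On a connection
        # failure, stop and leave the remainder queued for a later retry.
        applied = 0
        while not self.pending.empty():
            update = self.pending.queue[0]  # peek without dequeuing
            try:
                self.send(update)
            except OSError:
                break  # origin unreachable; keep the update queued
            self.pending.get()  # only dequeue once applied
            applied += 1
        return applied
```

Every failure mode of this layer (the queue host dying, the link to the
origin flapping, ordering across multiple queues) is something you now
own yourself, which is the fragility being described above.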

Look at the front page: <http://www.slony.info/>

"The /big picture/ for the development of Slony-I is that it is a 
master-slave replication system that includes all features and 
capabilities needed to replicate large databases to a reasonably limited 
number of slave systems.

Slony-I is a system designed for use at data centers and backup sites, 
where the normal mode of operation is that all nodes are available."

It won't *completely* fail to function instantly if:
a) There are a few more slave systems, or
b) There are moments when some nodes aren't available,

but once those become part of your operating assumptions, you're 
diverging from what Slony-I was intended to do, and you'll have to 
temper your expectations, probably ultimately to the point of saying 
"that's not a particularly excellent fit."



More information about the Slony1-general mailing list