Tue Feb 8 16:43:47 PST 2005
- Previous message: [Slony1-general] lurking on a master
- Next message: [Slony1-general] Figuring out replication is finished / replicas are same
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On February 8, 2005 08:28 am, David Parker wrote:
> A few weeks ago I was looking for a way to get around Slony scalability
> issues for a large network of machines that need a replication
> relationship. The log shipping approach Christopher mentioned sounds
> promising, but I have to deliver something, hmmm, a couple of weeks from
> now....(sigh)
>
> The approach I'm thinking of involves leveraging the existing Slony
> infrastructure I have, but implementing my own "push" process. This process
> maintains its own little schema to manage subscribers, etc., but it listens
> on sync events and uses the Slony sl_log table(s) to capture the updates.
> Then it pushes these updates out to wherever it needs to. The requirements
> for synchronization here are somewhat looser, so I'm just tracking the
> latest transaction id on every subscriber node.
>
> The main reason for this approach is to avoid the overhead of the n-1
> connections to every node in the system, and also to avoid de-stabilizing
> our existing Slony-based clustering/failover infrastructure, which is
> working fine as it is.
>
> One thing I'm wondering about is the logic in the cleanup thread that
> cleans out the sl_log_1 table: the logic appears to be keying off of the
> min(ev_seqno) in the sl_event table to determine what the floor on the xid
> in the log table should be, but it's not clear to me where the events
> get managed.

Event cleanup itself is managed through the cleanupEvent plpgsql function;
see http://www.dbitech.ca/slony/book/function.cleanupevent.html

Also, in -HEAD the sl_log cleanup has been made a bit more aggressive, to
help accommodate servers with large transactional volume.

> Obviously it's all right there in front of me in SQL or C
> code, but I keep getting confused by what's happening in what thread....
>
> Also, at what point does the switch get made from sl_log_1 to sl_log_2?

Currently there is no switch happening.
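For illustration, the per-subscriber bookkeeping of the "push" process described above could be sketched roughly as follows. This is a hypothetical, simplified model, not actual Slony-I code: only the xid and cmddata fields echo the real sl_log_1 columns (log_xid, log_cmddata), the rest of the names are invented, and real transaction ids are not plain monotonically increasing integers (xid wraparound would need handling).

```python
# Hypothetical sketch of a "push" process: poll captured log rows, send
# each subscriber everything past its high-water mark (latest applied
# transaction id), then advance the mark. Not actual Slony-I code.

from dataclasses import dataclass

@dataclass
class LogRow:
    xid: int       # originating transaction id (cf. sl_log_1.log_xid)
    table: str     # replicated table the change applies to
    cmddata: str   # captured statement (cf. sl_log_1.log_cmddata)

def rows_to_push(log, last_xid):
    """Rows a subscriber still needs: everything beyond its latest xid.
    Assumes xids compare as simple integers (real xids wrap around)."""
    return [r for r in log if r.xid > last_xid]

def push_sync(log, subscribers):
    """One sync cycle over a {name: last_xid} subscriber map.
    Returns the rows sent to each subscriber and advances each mark."""
    outbound = {}
    for name, last_xid in subscribers.items():
        pending = rows_to_push(log, last_xid)
        outbound[name] = pending
        if pending:
            subscribers[name] = max(r.xid for r in pending)
    return outbound
```

Tracking only a single watermark per subscriber is what makes the looser synchronization cheap: the push process never needs the n-1 node-to-node connections, just this map and the shared log.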
The mechanics and process for this are still under debate.

> TIA for any suggestions.
>
> - DAP
> ----------------------------------------------------------------------------------
> David Parker    Tazz Networks    (401) 709-5130
>
> _______________________________________________
> Slony1-general mailing list
> Slony1-general at gborg.postgresql.org
> http://gborg.postgresql.org/mailman/listinfo/slony1-general

-- 
Darcy Buskermolen
Wavefire Technologies Corp.
ph: 250.717.0200  fx: 250.763.1759
http://www.wavefire.com