Tue May 30 14:48:28 PDT 2006
- Previous message: [Slony1-general] stack depth limit exceeded
- Next message: [Slony1-general] stack depth limit exceeded
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Tue, May 30, 2006 at 05:26:53PM -0400, Rod Taylor wrote:
> I run with 64MB of stack specifically for Slony.
>
> It seems that with each set you want to replicate you get a duplication
> of the restriction clauses for transaction boundaries, including the
> large IN clause.
>
> If you run with a large number of sets and fall significantly behind due
> to a long running transaction, the stack size required can grow quite
> quickly.
>
> Try dropping, merging or unsubscribing from a few sets then rejoin to
> them after it catches up again.

Rod,

Sorry for the double-reply. Forgot to use "reply to all." Again.

Thanks for the advice about stack size. One Postgres workaround for the stack depth limit on "IN" clauses is to insert the values into a temporary table and then join against that table, instead of writing an "IN" clause with a gazillion values. Would it make sense for Slony to do that when there are a large number of values?

I looked again at the scripts I used to set up Slony. I claim (but, given my inexperience with Slony, cannot prove) that I have just one set. That set contains all of the tables and sequences from my database. There are only two databases in the cluster.

Should I start out with just a few tables in the set, sync up, and then add more? I can add tables to the set on the fly, right?
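For what it's worth, a minimal sketch of the temporary-table workaround described above. This uses Python's sqlite3 module purely as a self-contained stand-in for Postgres (the SQL is nearly identical in both); the table and column names are made up for illustration:

```python
import sqlite3

# In-memory database standing in for the real Postgres instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(10000)])

# The values that would otherwise end up in a huge IN (...) clause.
wanted = list(range(0, 10000, 7))

# Instead of "SELECT ... WHERE id IN (0, 7, 14, ...)", load the values
# into a temporary table and join against it.
conn.execute("CREATE TEMP TABLE wanted_ids (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO wanted_ids VALUES (?)",
                 [(i,) for i in wanted])

rows = conn.execute(
    "SELECT e.id, e.payload FROM events e "
    "JOIN wanted_ids w ON w.id = e.id"
).fetchall()
print(len(rows))  # one row per wanted id
```

The join touches the same rows the IN clause would, but the query text stays a fixed size no matter how many values there are, which is what sidesteps the stack-depth problem.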