Tue May 30 14:26:53 PDT 2006
- Previous message: [Slony1-general] stack depth limit exceeded
- Next message: [Slony1-general] stack depth limit exceeded
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Tue, 2006-05-30 at 12:51 -0700, Wayne Conrad wrote:
> On Tue, May 30, 2006 at 03:40:28PM -0400, Andrew Sullivan wrote:
> > On Tue, May 30, 2006 at 12:29:41PM -0700, Wayne Conrad wrote:
> > > has a "where log_actionseq not in (...)" clause with about 320,000
> > > numbers in it, which is what is making postgres cranky.
> >
> > Sort of. If I'm not mistaken, that setting is in the postgresql.conf
> > file, and it's measured in KB. You can increase it. I believe it
> > requires a restart of the postmaster to take effect, but check
> > the docs.
>
> Thanks for mentioning that, since I forgot to. That's
> max_stack_depth, which defaults to 2048KB. I increased it to 4096KB,
> then 8192KB, and so on. When I got to 1048576KB with no success, I
> decided that dog wasn't gonna hunt. Does it make any sense to try
> more than 1GB of stack?

I run with 64MB of stack specifically for Slony. It seems that with each set you replicate, you get a duplication of the restriction clauses for transaction boundaries, including the large IN clause. If you run with a large number of sets and fall significantly behind due to a long-running transaction, the required stack size can grow quite quickly. Try dropping, merging, or unsubscribing from a few sets, then rejoin them after replication catches up.

--
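For reference, a minimal sketch of raising max_stack_depth in postgresql.conf, as discussed above. The data directory here is a temporary stand-in (real paths vary by installation), and the value is in kilobytes; it must stay below the operating system's stack limit (see `ulimit -s`):

```shell
# Stand-in for the real PostgreSQL data directory (hypothetical path).
PGDATA=$(mktemp -d)
echo '#max_stack_depth = 2048' > "$PGDATA/postgresql.conf"

# Uncomment the setting and raise it to 64MB (65536 kB), as the
# poster above runs for Slony.
sed -i 's|^#*max_stack_depth.*|max_stack_depth = 65536|' "$PGDATA/postgresql.conf"

grep max_stack_depth "$PGDATA/postgresql.conf"

# A postmaster restart (e.g. pg_ctl -D "$PGDATA" restart) is then
# needed for the new value to take effect.
```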