Tue Sep 20 16:15:14 PDT 2005
- Previous message: [Slony1-general] Buffering problem - a patch?
- Next message: [Slony1-general] Buffering problem - a patch?
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Jan Wieck wrote:

> What you could do to keep it simple is to go with a free() approach.
> free() buffers that are over a certain size after they are applied.
> And have the fetch thread wait if the buffered amount exceeds your
> limit. In addition, you probably want to make the initial fetch size a
> config parameter and also make the actual number of fetched rows
> depend on the buffer's fill level, so to speak. The larger the
> buffer is, the fewer rows to fetch, in order to avoid "fetching 100
> 50M rows at once" by surprise.

Thanks for the replies; to summarize the plan (please say yay or nay!):

- modify remote_worker.c to make the initial fetch size a config parameter
- modify remote_worker.c (and related) to alloc/free large blocks (say > 1MB, or a user-settable value)
- add a config parameter 'fetch buffer limit'
- modify remote_worker.c to do fetches <= 'initial fetch size', based on currently used memory and 'fetch buffer limit'. Minimum 1.
- modify remote_worker.c to pause after completing one full fetch cycle if exceeding 'fetch buffer limit', and automagically wake up again... hmmm. (Should we just skip that last one?)