Mon Sep 19 19:12:15 PDT 2005
Jan Wieck wrote:

> You didn't misread the code. It indeed buffers based on a compiled in
> number of rows only and doesn't take the size into account at all. So
> yes, the fetching thread needs to stop if the buffer grows too large.
> Since it does block if all buffers are filled, that part wouldn't be
> too complicated.
>
> What gets complicated is the fact that the buffer never shrinks! All
> the buffer lines stay allocated and eventually get enlarged until slon
> exits. So even if you stop fetching after you hit large rows, slowly
> over time all buffer lines will get adjusted to that huge size. On
> some operating systems (libc implementations to be precise) free()
> isn't a solution here as it never returns memory to the OS, but keeps
> the pages for future alloc()s.

Well, it would help, wouldn't it? If in one pass row(1) had 37MB
allocated, and in another pass row(2) wanted 37MB, at least another
37MB would not be grabbed from the OS -- the freed block would be
available for reuse.

> The best way to tackle that would IMHO be to allow only certain buffer
> lines to be used for huge rows and block if none of them is available.

Wouldn't this lead to ordering problems?

What about defining a MAX_ROW_BUFFER which represents the maximum size
allowed to be permanently allocated to command data fetched from the
log? Then only fetch cmddata for log rows up to this size. For rows
larger than this, retrieve just the PK and store it in the list. When
the item is to be processed, retrieve the cmddata directly using the
PK. A rough sketch of that idea follows below.
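To make the proposal concrete, here is a minimal C sketch of how the fetch
and apply paths could treat oversized rows, assuming a size threshold and a
PK-based re-fetch. All of the names below (log_row, buffer_row, apply_row,
fetch_cmddata_by_pk, MAX_ROW_BUFFER and its value) are hypothetical
placeholders, not the actual slon data structures; a real implementation
would go through libpq and the log table's primary key instead of the stubs
shown here.

/*
 * Sketch of the MAX_ROW_BUFFER idea.  All names are placeholders,
 * not the real slon code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_ROW_BUFFER  (64 * 1024)   /* largest cmddata kept resident */

typedef struct log_row
{
    long    log_origin;               /* PK of the log row */
    long    log_xid;
    long    log_actionseq;
    char   *cmddata;                  /* NULL when the row was deferred */
    size_t  cmdsize;
} log_row;

/* Hypothetical stand-in for "SELECT log_cmddata ... WHERE <PK>". */
static char *
fetch_cmddata_by_pk(const log_row *row)
{
    char *data = malloc(row->cmdsize + 1);

    if (data != NULL)
        snprintf(data, row->cmdsize + 1, "<cmddata for actionseq %ld>",
                 row->log_actionseq);
    return data;
}

/*
 * Fetch phase: small rows are copied into the buffer line; for rows
 * whose cmddata exceeds MAX_ROW_BUFFER only the PK and size are kept
 * (cmddata may be passed as NULL because it was never fetched).
 */
static void
buffer_row(log_row *row, const char *cmddata, size_t cmdsize)
{
    row->cmdsize = cmdsize;
    if (cmdsize <= MAX_ROW_BUFFER)
    {
        row->cmddata = malloc(cmdsize + 1);
        if (row->cmddata != NULL)
        {
            memcpy(row->cmddata, cmddata, cmdsize);
            row->cmddata[cmdsize] = '\0';
        }
    }
    else
        row->cmddata = NULL;          /* defer: re-read by PK later */
}

/*
 * Apply phase: deferred rows are re-fetched individually, applied,
 * and freed immediately, so the huge allocation is only transient.
 */
static void
apply_row(log_row *row)
{
    char *data = row->cmddata;
    int   deferred = (data == NULL);

    if (deferred)
        data = fetch_cmddata_by_pk(row);

    printf("applying %zu bytes%s\n", row->cmdsize,
           deferred ? " (fetched by PK)" : "");

    if (deferred)
        free(data);
}

int
main(void)
{
    const char *sql   = "INSERT INTO t VALUES (1);";
    log_row     little = {1, 1000, 1, NULL, 0};
    log_row     huge   = {1, 1000, 2, NULL, 0};

    buffer_row(&little, sql, strlen(sql));
    buffer_row(&huge, NULL, (size_t) 37 * 1024 * 1024);  /* the 37MB case */

    apply_row(&little);
    apply_row(&huge);

    free(little.cmddata);
    return 0;
}

The point of the sketch is that the large allocation only exists inside
apply_row() while that one row is being applied; the buffer lines that stay
resident for the life of slon never grow past MAX_ROW_BUFFER.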