Scott Marlowe scott.marlowe at gmail.com
Thu Jun 10 00:07:19 PDT 2010
2010/6/10 Alexander V Openkin <open at immo.ru>:
> 10.06.2010 10:07, Scott Marlowe writes:
>>
>> 2010/6/9 Alexander V Openkin<open at immo.ru>:
>>
>>>
>>> 09.06.2010 18:27, Scott Marlowe writes:
>>>
>>>>
>>>> Oh whoa, I thought you were talking about the postgres backend that
>>>> slony connects to using up that much memory.
>>>>
>>>
>>> no, we are talking about the slon processes, not about the postgres backend.
>>>
>>>>
>>>> I wonder if there's some accounting difference in how your vps works
>>>> versus running right on the server.
>>>>
>>>
>>> I have ~five replication clusters on slony1-1.2.14 and postgresql-8.3.9 on
>>> i686 architecture and we have never seen such a problem...
>>> I think there is no difference between running a slony cluster on a
>>> hardware server or a VPS.
>>>
>>
>> Is this the same OS as on hardware?  The accounting seems all kinds of
>> wrong to me.  I just can't see slony asking for and getting 4G or 8G
>> of RAM.
>>
>
> The same Linux kernel. On an OpenVZ hardware server we can run different
> OSes (different Linux distributions), but the kernel will be the same.
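>
> (This is easy to verify from inside a container; for example:
>
>   $ uname -r
>
> reports the running kernel release, which in OpenVZ is always the host's
> kernel.)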
>
> I ran ps auxf on the hardware server:
>
> [root at vz19 ~]# ps auxf |grep slon |grep cms
> postgres  6973  0.0  0.0  40636  1836 ?        S    09:58   0:00  \_ /usr/bin/slon -f /etc/slony1.d/blabla
> postgres  6974  0.0  0.0 4108420 1544 ?        Sl   09:58   0:00  |   \_ /usr/bin/slon -f /etc/slony1.d/blabla
> postgres  7016  0.0  0.0  40640  1836 ?        S    09:58   0:00  \_ /usr/bin/slon -f /etc/slony1.d/blabla2
> postgres  7017  0.0  0.0 4108424 1544 ?        Sl   09:58   0:00      \_ /usr/bin/slon -f /etc/slony1.d/blabla2
> [root at vz19 ~]#
>
> the fifth column is VSZ (in KB); it shows us two 4 GB segments...
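>
> (For a more direct check, a sketch assuming a procps-style ps:
>
>   $ ps -C slon -o pid,vsz,rss,nlwp,args
>
> compares virtual size, resident size, and thread count per slon process;
> a huge VSZ with a small RSS usually means reserved address space, e.g.
> per-thread stacks, rather than memory that is actually resident.)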
>
>>>>> Besides, OpenVZ accounts shared memory and resident memory separately
>>>>>
>>>>> [root at vps6147 /]# cat /proc/user_beancounters |grep -E
>>>>> 'privvmpages|shmpages'
>>>>> privvmpages 2029830 2033439 2621440 2621440 5
>>>>> shmpages 17632 17632 412000 412000 0
>>>>> [root at vps6147 /]#
>>>>>
>>>>> the first column is the current value in 4 KB pages; it indicates a very
>>>>> small shared segment and a huge resident segment,
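>>>>>
>>>>> (As a worked check: 2029830 pages * 4 KB ≈ 7.7 GB of private address
>>>>> space, which lines up with the two ~4 GB slon VSZ figures above.)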
>>>>>
>>>>
>>>> Yeah, that's different from what I was thinking was going on.
>>>>
>>>>
>>>>>
>>>>> Do you have any experience using slony1 on x86_64 servers?
>>>>>
>>>>
>>>> Quite a bit actually.
>>>>
>>>>>
>>>>> We have been using slony1 replication for about 3 years on i686
>>>>> architecture and we hav`t
>>>>>
>>>>
>>>> Is that a "have" or "haven't" ?
>>>>
>>>
>>> I mean haven't.
>>> I have 3 years' experience with slon replication and postgresql 8.{0,1,2,3}
>>> on i686 architecture, and I have never seen this before.
>>>
>>
>> My experience is mostly on x86_64 / AMD64 hardware.  A little in the past
>> on 32-bit Pentium, but that was back in the slony 1.0 days.
>>
>>>
>>> Yesterday I read a news item on slony.info: "Slony-I 2.0.3 is not usable
>>> in its current state."
>>>
>>
>> Correct.  It looks like 2.0.4 will be close.  I tried it last year and it
>> blew up twice.  Luckily, switching between 1.2.latest and 2.0.x is pretty
>> easy to do.
>>
>>>>>
>>>>> similar problem....
>>>>>
>>>>> PS: we use the same OpenVZ template for our application servers, so the
>>>>> probability of an error in the template or in this particular VPS is
>>>>> minimal.
>>>>>
>>>>
>>>> I've never run DBs inside VMs before (it seems counterproductive to me).
>>>>
>>>
>>> PS: sorry for my awful English, I am Russian )
>>>
>>
>> Your English is much better than my Russian, no need to apologize.
>>
>> Have you tried switching it out for slony 1.2.latest?  I'm thinking it
>> won't help this memory usage issue, but if you're in production you should
>> really be on 1.2.latest, not 2.0.x.
>>
>
> I'll try 1.2.latest and show the results.

Just wondering, is there anything strange about your setup, like 10,000
tables in replication or 50,000 schemas in your db, something like that
(a quick way to check is sketched below)?  I keep wondering if you're
running into some strange, out-of-the-ordinary corner case.
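
If you want to check quickly, something along these lines should work (a
sketch only; "_clustername" stands in for your real Slony cluster schema
and "yourdb" for your database):

  # how many tables slony is replicating
  psql -d yourdb -c "select count(*) from _clustername.sl_table;"

  # how many schemas exist in the database
  psql -d yourdb -c "select count(*) from pg_namespace;"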

