Alexander V Openkin open at immo.ru
Wed Jun 9 22:39:13 PDT 2010
On 09.06.2010 18:27, Scott Marlowe wrote:
> On Wed, Jun 9, 2010 at 3:12 AM, Alexander V Openkin<open at immo.ru>  wrote:
>    
>> On 09.06.2010 11:54, Scott Marlowe wrote:
>>      
>>> On Wed, Jun 9, 2010 at 1:37 AM, Alexander V Openkin<open at immo.ru>    wrote:
>>>
>>>        
>>>> Thanks for the quick answer,
>>>> but if you run all the slon processes (for your replication cluster),
>>>> must you really have a standalone server with 32G?
>>>> I think this is unnatural )
>>>>
>>>>          
>>> No, definitely not.  VIRT is everything the process has ever touched,
>>> including shared memory and all libs, whether or not they've actually
>>> been loaded.  It's not uncommon to have literally a hundred
>>> processes with 8G+ VIRT on my 32Gig db servers, because most of that 8G
>>> is shared buffers.  No process is using that much memory individually;
>>> each is using shared memory, and each process reports that it has access
>>> to and has touched that 8G.
>>>
>>>
>>>        
>> A small experiment; a second slon instance for another DB was added...
>>
>> [root@vpsXXXX /]# /etc/init.d/slon stop
>> Stopping slon service: [ DONE ]
>> Stopping slon service: [ DONE ]
>> [root@vpsXXXX /]# ps axuf |grep slon
>> root     28419  0.0  0.0    6044  580 pts/0 S+  Jun08  0:00  \_ grep slon
>> [root@vps6147 /]# free
>>              total       used       free     shared    buffers     cached
>> Mem:      10485760      99192   10386568          0          0          0
>> -/+ buffers/cache:      99192   10386568
>> Swap:            0          0          0
>> [root@vpsXXX /]# /etc/init.d/slon start
>> Starting slon service: [ DONE ]
>> [root@vpsXXX /]# ps axuf |grep slon
>> root     29737  0.0  0.0    6040  584 pts/0 S+  Jun08  0:00  \_ grep slon
>> postgres 28555  0.0  0.0   40636 1832 pts/0 S   Jun08  0:00 /usr/bin/slon
>> postgres 28556  0.0  0.0 4042880 1508 pts/0 Sl  Jun08  0:00  \_ /usr/bin/slon
>> postgres 28592  0.0  0.0   40636 1836 pts/0 S   Jun08  0:00 /usr/bin/slon
>> postgres 28594  0.0  0.0 4042880 1512 pts/0 Sl  Jun08  0:00  \_ /usr/bin/slon
>> [root@vpsXXXX /]# free
>>              total       used       free     shared    buffers     cached
>> Mem:      10485760    8119136    2366624          0          0          0
>> -/+ buffers/cache:    8119136    2366624
>> Swap:            0          0          0
>> [root@vpsXXXX /]#
>>
>> The difference between "no slon processes" and "2 pairs of slon processes"
>> is ~8G; that is where we have a problem,
>> because it's not a shared segment.....
>>      
> Oh whoa, I thought you were talking about the postgres backend that
> slony connects to using up that much memory.
>    
No, we are talking about the slon processes, not about the postgres backends.
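
For what it's worth, a rough way to check how much of each slon process is
really private (as opposed to shared mappings it has merely touched) would be
to sum the Private_*/Shared_* fields of /proc/<pid>/smaps (assuming the OpenVZ
kernel exposes smaps inside the VE; it may not):

    # sum private vs shared memory for every slon process, in kB
    for pid in $(pgrep slon); do
        awk -v pid="$pid" '
            /^Private_/ { priv += $2 }
            /^Shared_/  { shr  += $2 }
            END { printf "pid %s: private %d kB, shared %d kB\n", pid, priv, shr }
        ' /proc/"$pid"/smaps
    done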

> I wonder if there's some accounting difference in how your vps works
> versus running right on the server.
>    

I have ~five replication clusters on slony1-1.2.14 and postgresql-8.3.9
on i686 architecture and
we have never seen such a problem...
I think there is no difference between running a slony cluster on a hardware
server or on a VPS.
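
To rule out an accounting difference, the same numbers could be captured on
one of the i686 hardware boxes and on the VPS and compared side by side, for
example (assuming the daemon is also named slon there):

    # per-process virtual size vs resident size, in kB
    ps -o pid,vsz,rss,args -C slon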

>    
>> Besides, OpenVZ accounts for shared memory and resident memory separately:
>>
>> [root@vps6147 /]# cat /proc/user_beancounters |grep -E 'privvmpages|shmpages'
>> privvmpages     2029830   2033439   2621440   2621440        5
>> shmpages          17632     17632    412000    412000        0
>> [root@vps6147 /]#
>>
>> The first numeric column is the current value in 4k pages; it indicates a
>> very small shared segment and a huge resident segment,
>>      
> Yeah, that's different from what I was thinking was going on.
>
>    
>> Do you have any experience using slony1 on x86_64 servers?
>>      
> Quite a bit actually.
>
>    
>> We using slony1 replication about 3 year on i686 architecture and we hav`t
>>      
> Is that a "have" or "haven't" ?
>
>    

I mean haven't.
I have 3 years of experience with slon replication and postgresql 8.{0,1,2,3}
on i686 architecture, and I have never seen this before.
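
If I convert the beancounter values quoted above correctly: privvmpages
2029830 pages * 4 kB is roughly 7.7 GB of private address space, while
shmpages 17632 * 4 kB is only about 69 MB of shared memory, which matches the
~8G jump reported by free.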

Yesterday I read the news on slony.info: "Slony-I 2.0.3 is not usable in its current state."



>> similar problem....
>>
>> PS: we are using the same OpenVZ template for our application servers, so
>> the probability of an error in the template or in this particular VPS is minimal.
>>      
> I've never run dbs inside vms before (seems counterproductive to me)
>
>    
PS: Sorry for my awful English, I am Russian )

