Alexander V Openkin open at immo.ru
Wed Jun 9 02:12:55 PDT 2010
On 09.06.2010 11:54, Scott Marlowe wrote:
> On Wed, Jun 9, 2010 at 1:37 AM, Alexander V Openkin <open at immo.ru> wrote:
>> Thanks for the quick answer,
>> but if you run all the slon processes (for your replication cluster),
>> do you need a standalone server with 32G?
>> I think this is unnatural )
> No, definitely not.  VIRT is everything the process has ever touched,
> including shared memory and all libs, whether or not they've actually
> been loaded.  It's not uncommon to have literally a hundred processes
> with 8G+ VIRT on my 32G db servers, because most of that 8G is shared
> buffers.  Each process is not using that much memory individually; it's
> using shared memory, and each one reports that it has access to and has
> touched that 8G.
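For reference, one way to see how much of a single process's footprint is
actually shared rather than private is to sum the Shared_*/Private_* counters
in /proc/<pid>/smaps (a sketch; <pid> is a placeholder for a slon process ID):

    # sum the shared vs private pages of one process
    awk '/^Shared/{s+=$2} /^Private/{p+=$2} END{print "shared:", s, "kB  private:", p, "kB"}' /proc/<pid>/smaps

If the big VIRT really came from shared_buffers, the shared sum would dominate.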
A small experiment: a second slon instance for another DB was added...

[root@vpsXXXX /]# /etc/init.d/slon stop
Stopping slon service: [ DONE ]
Stopping slon service: [ DONE ]
[root@vpsXXXX /]# ps axuf |grep slon
root 28419 0.0 0.0 6044 580 pts/0 S+ Jun08 0:00 \_ grep slon
[root@vps6147 /]# free
             total       used       free     shared    buffers     cached
Mem:      10485760      99192   10386568          0          0          0
-/+ buffers/cache:      99192   10386568
Swap:            0          0          0
[root@vpsXXX /]# /etc/init.d/slon start
Starting slon service: [ DONE ]
[root@vpsXXX /]# ps axuf |grep slon
root 29737 0.0 0.0 6040 584 pts/0 S+ Jun08 0:00 \_ grep slon
postgres 28555 0.0 0.0 40636 1832 pts/0 S Jun08 0:00 /usr/bin/slon
postgres 28556 0.0 0.0 4042880 1508 pts/0 Sl Jun08 0:00 \_ /usr/bin/slon
postgres 28592 0.0 0.0 40636 1836 pts/0 S Jun08 0:00 /usr/bin/slon
postgres 28594 0.0 0.0 4042880 1512 pts/0 Sl Jun08 0:00 \_ /usr/bin/slon
[root@vpsXXXX /]# free
             total       used       free     shared    buffers     cached
Mem:      10485760    8119136    2366624          0          0          0
-/+ buffers/cache:    8119136    2366624
Swap:            0          0          0
[root@vpsXXXX /]#
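To see where the ~4G of VIRT in each threaded slon process actually goes, one
could dump its mappings with pmap (a sketch; PID 28556 is taken from the ps
output above):

    # show the five largest mappings of the threaded slon process
    pmap -x 28556 | sort -n -k2 | tail -n 5

On x86_64, large anonymous mappings here would suggest per-thread stacks or
allocator arenas rather than a shared segment.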

The difference between "no slon processes" and "two pairs of slon processes"
is ~8G, so we have a problem,
because it is not a shared segment...
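A quick way to confirm that is to list the SysV shared memory segments visible
in the container (a sketch):

    # any large shared segment (e.g. PostgreSQL shared_buffers, if a
    # postmaster runs in this container) would be listed here; slon
    # itself should own nothing comparable
    ipcs -m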

Besides, OpenVZ accounts shared memory and resident memory separately:

[root@vps6147 /]# cat /proc/user_beancounters |grep -E 'privvmpages|shmpages'
            privvmpages    2029830    2033439    2621440    2621440    5
            shmpages         17632      17632     412000     412000    0
[root@vps6147 /]#

The first column is the current value in 4k pages; it indicates a very small
shared segment and a huge resident segment.
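For readability, the held values can be converted from 4k pages to megabytes
(a sketch):

    awk '/privvmpages|shmpages/ {printf "%s: %.0f MB\n", $1, $2*4/1024}' /proc/user_beancounters

Here that gives roughly 7930 MB of private pages against about 69 MB of shared
pages; 2029830 pages x 4 kB is about 8119320 kB, which matches the "used"
figure from free above almost exactly.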

Do you have any experience using slony1 on x86_64 servers?
We have been using slony1 replication for about 3 years on the i686
architecture and we haven't seen a similar problem...

PS: We use the same OpenVZ template for our application servers, so the
probability of an error in the template or in this particular VPS is minimal.


