Sébastien Marchand smarchand at sgo.fr
Fri Jul 20 06:50:10 PDT 2018
In fact I have only 2 servers in the datacenter; all the others (16) are in agencies scattered around the country (low bandwidth).
So no server other than that one will ever become master.
I noticed quite quickly that past a dozen nodes the servers consume too many resources: CPU, disk and network.

So, if I want the sl_listen table to no longer be filled with every combination, can I modify the RebuildListenEntries() function?
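For reference, here is a minimal sketch of the kind of hand cleanup I have been doing, assuming a cluster named "replication" (so the schema is _replication); the cluster name and the exact DELETE condition are only illustrative for my single-master topology:

    -- _replication is an assumed cluster schema name; adjust to your own.
    -- Re-run the stock generator, then prune the listen paths I consider useless.
    SELECT _replication.rebuildlistenentries();

    -- Keep only "master broadcasts to slaves" (origin 1 via provider 1)
    -- and "each slave reports directly to the master" (receiver 1, origin = provider).
    DELETE FROM _replication.sl_listen
     WHERE NOT (   (li_origin = 1 AND li_provider = 1)
                OR (li_receiver = 1 AND li_origin = li_provider));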

Thank you for your time.

-----Original Message-----
From: Steve Singer [mailto:ssinger at afilias.info]
Sent: Thursday, July 19, 2018 15:33
To: Sébastien Marchand
Cc: slony1-general at lists.slony.info
Subject: Re: [Slony1-general] Problem with sl_listen and too many nodes

On 07/19/2018 04:05 AM, Sébastien Marchand wrote:

RebuildListenEntries() is the function that populates sl_listen.

With slony, any node can be the source of an event, possibly because 
that event was submitted to it via slonik.

This means that node 2 must be listening for events with origin=node 1 
and also with origin=node 130.

After your 'cleanup', node 2 actually has no way to receive events 
from node 130.

Node 2 could receive events directly from 130

origin;provider;receiver
130;130;2

or via node 1:
130;1;2

but there must be at least one row for each (origin, receiver) pair.
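
As a quick sanity check, something like this should come back empty (a 
sketch; _replication stands in for your actual cluster schema):

    -- List (origin, receiver) pairs of distinct nodes with no sl_listen row at all.
    SELECT o.no_id AS origin, r.no_id AS receiver
      FROM _replication.sl_node o
     CROSS JOIN _replication.sl_node r
     WHERE o.no_id <> r.no_id
       AND NOT EXISTS (SELECT 1
                         FROM _replication.sl_listen l
                        WHERE l.li_origin = o.no_id
                          AND l.li_receiver = r.no_id);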

If there is an sl_subscribe row connecting origin=130 to receiver=2, then 
that is what the sl_listen row should also look like.

However, if there is no subscription between those nodes, then 
sl_listen is built with all possibilities. I think my reasoning for 
this was that if nodes fail, events can still propagate (in particular 
the events used in the failover process).
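
As a sketch of what "all possibilities" means in practice (again, 
_replication is a placeholder for your cluster schema), some 
(origin, receiver) pairs end up with more than one candidate provider, 
which is the redundancy that keeps events flowing when a node is down:

    -- Count how many alternative providers each (origin, receiver) pair has.
    SELECT li_origin, li_receiver, count(*) AS providers
      FROM _replication.sl_listen
     GROUP BY li_origin, li_receiver
     ORDER BY providers DESC, li_origin, li_receiver;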

Is there a particular problem the larger listen network is causing?




> sl_subscribe from my test:
> sub_set;sub_provider;sub_receiver;sub_forward;sub_active
> 1;1;2;FALSE;TRUE
> 1;1;130;FALSE;TRUE
> 
> Sl_path :
> pa_server;pa_client;pa_conninfo;pa_connretry
> 1;2;dbname=DB host=192.168.0.29 port=5432 user=slony password=123;10
> 1;130;dbname=DB host=192.168.0.29 port=5432 user=slony password=123;10
> 2;1;dbname=DB host=192.168.0.3 port=5432 user=slony password=123;10
> 130;1;dbname=DB host=192.168.0.230 port=5432 user=slony password=123;10
> 
> Setting sub_forward to true or false changes nothing...
> 
> Thx for your help.
> 
> -----Original Message-----
> From: Steve Singer [mailto:steve at ssinger.info]
> Sent: Thursday, July 19, 2018 05:42
> To: Sébastien Marchand
> Cc: slony1-general at lists.slony.info
> Subject: Re: [Slony1-general] Problem with sl_listen and too many nodes
> 
> On Wed, 18 Jul 2018, Sébastien Marchand wrote:
> 
> What is in sl_subscribe?
> 
> (I assume sl_path has paths between each node)
> 
> 
> 
> 
>>
>> Hello,
>>
>>   
>>
>> I have a problem with the sl_listen table.
>>
>> I have a replication setup that has been running for a very long time, from
>> a master to X slaves (1 master node for 18 nodes).
>>
>> My concern is that the sl_listen table, instead of containing just what it
>> needs, creates all the possible combinations.
>>
>> For example, in a test with 1 master and 2 slaves, I have 4 extra lines that
>> are useless for me:
>>
>>   
>>
>> SL_LISTEN table:
>>
>> origin; provider; receiver
>> 1; 1; 2
>> 1; 1; 130
>> 2; 1; 130
>> 2; 2; 1
>> 2; 130; 1
>> 130; 1; 2
>> 130; 2; 1
>> 130; 130; 1
>>
>>   
>>
>> Here is what it should look like after cleanup:
>>
>>   
>>
>> origin; provider; receiver
>> 1; 1; 2
>> 1; 1; 130
>> 2; 2; 1
>> 130; 130; 1
>>
>>   
>>
>> The problem is that with each add/delete of tables or nodes the table is
>> re-filled, and I have to redo the cleanup.
>>
>>   
>>
>> My final question is: Is it normal for all nodes to talk to each other?
>>
>>   
>>
>> Thank you.
>>
>>
>>
> 
> _______________________________________________
> Slony1-general mailing list
> Slony1-general at lists.slony.info
> http://lists.slony.info/mailman/listinfo/slony1-general
> 



