Tory M Blue tmblue at gmail.com
Wed Feb 4 12:54:35 PST 2015
On Wed, Feb 4, 2015 at 12:07 PM, Christopher Browne <cbbrowne at afilias.info>
wrote:

> There's a health check script that's kind of recommended...
>
> http://slony.info/documentation/2.2/deploymentconcerns.html#TESTSLONYSTATE
>
> I'd suggest the one, "test_slony_state.sh"; it looks at quite a few
> issues, and I should think it would have noticed something going on.  I
> can't predict offhand what it would have first complained about, but
> doubtless something of some use.
>

Thanks Christopher.

Does anyone know more about this? Why does it seem to be looking for a node 0,
and why does it report so many tuples on my master node?

My existing graphs show the following for the log table sizes:

count sl_log_1: 260.8K
size  sl_log_1: 99.81K
count sl_log_2: 8.54K
size  sl_log_2: 3.57K
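
If it helps anyone compare, those numbers can also be pulled straight from the
database; a minimal sketch, assuming the cluster schema is "_cls" as in the
script output below:

     -- row counts for the two Slony-I log tables
     select count(*) from "_cls".sl_log_1;
     select count(*) from "_cls".sl_log_2;

     -- on-disk sizes, via standard PostgreSQL size functions
     select pg_size_pretty(pg_total_relation_size('"_cls".sl_log_1'));
     select pg_size_pretty(pg_total_relation_size('"_cls".sl_log_2'));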

I only have 4 nodes, node 1 through node 4.


Thanks, some of these will definitely help, if they're reporting the right
data :)

Tory

Node: 0 threads seem stuck
================================================
Slony-I components have not reported into sl_components in interval 00:05:00

Perhaps slon is not running properly?

Query:
     select co_actor, co_pid, co_node, co_connection_pid, co_activity,
            co_starttime, now() - co_starttime, co_event, co_eventtype
     from "_cls".sl_components
     where (now() - co_starttime) > '00:05:00'::interval
     order by co_starttime;
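
To see whether the slons are reporting in at all, rather than only the stale
rows, the age filter can be dropped; a minimal variant of the script's query
above, same "_cls" schema assumed:

     -- every component that has reported in, oldest first
     select co_actor, co_node, co_pid, now() - co_starttime as age
     from "_cls".sl_components
     order by co_starttime;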



Node: 1 sl_log_1 tuples = 241823 > 200000
================================================
Number of tuples in Slony-I table sl_log_1 is 241823 which
exceeds 200000.

You may wish to investigate whether or not a node is down, or perhaps
if sl_confirm entries have not been propagating properly.
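
One way to check the sl_confirm angle is to look at the newest confirmation
each receiver has recorded for each origin; a sketch, again assuming the
"_cls" schema:

     -- latest confirmed event per (origin, receiver) pair; a stale
     -- con_timestamp suggests confirmations are not propagating
     select con_origin, con_received,
            max(con_seqno)     as last_confirmed_event,
            max(con_timestamp) as last_confirmed_at
     from "_cls".sl_confirm
     group by con_origin, con_received
     order by con_origin, con_received;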