Steve Singer ssinger at ca.afilias.info
Wed Jun 25 08:25:00 PDT 2014
On 06/25/2014 11:21 AM, Soni M wrote:

Find out why Slony can't truncate sl_log_2.

Slony does
lock table @NAMESPACE@.sl_log_2 in access exclusive mode nowait;

so why can't it get the access exclusive lock on sl_log_2?  (Hint: query
pg_locks to see which process/transaction holds a lock on sl_log_2.)
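
For example (a rough sketch; it assumes PostgreSQL 9.2 or later, where
pg_stat_activity exposes pid/state/query, and that your cluster schema really
is _slony_example as in the function quoted below):

-- which backends hold, or are waiting for, locks on sl_log_2?
SELECT l.pid, l.mode, l.granted, a.state, a.xact_start, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = '_slony_example.sl_log_2'::regclass;

Rows with granted = true from other sessions are what block the cleanup
thread; remember that even an idle-in-transaction session that merely read
sl_log_2 still holds ACCESS SHARE and will block the truncate.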



> This is what I found in the slony log:
>
> postgres at myserver:~$ tail -f /var/log/slony1/slon-myserver.log | grep -i
> -e lock -e "log switch"
> NOTICE:  Slony-I: log switch to sl_log_2 complete - truncate sl_log_1
> NOTICE:  Slony-I: could not lock sl_log_2 - sl_log_2 not truncated
> NOTICE:  Slony-I: could not lock sl_log_2 - sl_log_2 not truncated
> NOTICE:  Slony-I: log switch to sl_log_1 still in progress - sl_log_2
> not truncated
> NOTICE:  Slony-I: could not lock sl_log_2 - sl_log_2 not truncated
> NOTICE:  Slony-I: could not lock sl_log_2 - sl_log_2 not truncated
> NOTICE:  Slony-I: could not lock sl_log_2 - sl_log_2 not truncated
> NOTICE:  Slony-I: could not lock sl_log_2 - sl_log_2 not truncated
> NOTICE:  Slony-I: log switch to sl_log_1 still in progress - sl_log_2
> not truncated
> NOTICE:  Slony-I: could not lock sl_log_2 - sl_log_2 not truncated
> NOTICE:  Slony-I: could not lock sl_log_2 - sl_log_2 not truncated
> NOTICE:  Slony-I: could not lock sl_log_2 - sl_log_2 not truncated
>
> Truncating sl_log_2 always fails, and yes, the slave is falling behind
> the master. Usually the slave catches up again later, after the busy
> hours. Maybe this is due to the high transactional load. But is there a
> way we can keep the slave caught up with the master?
>
> Thanks....
>
>
>
> On Wed, Jun 25, 2014 at 10:04 PM, Steve Singer <ssinger at ca.afilias.info> wrote:
>
>     On 06/25/2014 11:00 AM, Soni M wrote:
>
>         Hello everyone,
>
>         Is it OK to delete log entries in sl_log_x that have already been
>         replicated?
>
>
>
>     No, you shouldn't do that.
>
>     You should instead figure out why the Slony cleanup event isn't
>     performing a log switch.  Perhaps a node is behind, or confirmations
>     are not making it back to the origin?
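>
>     One way to check both is the sl_status view on the origin (just a
>     sketch, assuming the cluster schema is _slony_example as in your
>     function):
>
>     -- per-node lag; a node with a large st_lag_num_events / st_lag_time,
>     -- or one that stops confirming, keeps the old log table from being
>     -- truncated
>     SELECT st_origin, st_received, st_lag_num_events, st_lag_time
>     FROM "_slony_example".sl_status;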
>
>
>
>         Here's the function I created :
>
>         CREATE OR REPLACE FUNCTION _slony_example.delete_sl_log()
>           RETURNS void
>           LANGUAGE plpgsql
>         AS $function$
>         DECLARE
>           v_origin  int8;
>           v_seqno   int8;
>           v_xmin    bigint;
>         BEGIN
>           -- for each origin, take its oldest SYNC event and the xmin of
>           -- that event's snapshot
>           for v_origin, v_seqno, v_xmin in
>             select ev_origin, ev_seqno,
>                    "pg_catalog".txid_snapshot_xmin(ev_snapshot)
>               from "_slony_example".sl_event
>              where (ev_origin, ev_seqno) in
>                    (select ev_origin, min(ev_seqno)
>                       from "_slony_example".sl_event
>                      where ev_type = 'SYNC'
>                      group by ev_origin)
>           loop
>             -- delete log rows older than that snapshot
>             delete from _slony_example.sl_log_1
>              where log_origin = v_origin and log_txid < v_xmin;
>             delete from _slony_example.sl_log_2
>              where log_origin = v_origin and log_txid < v_xmin;
>           end loop;
>         END;
>         $function$;
>
>         Slony failed to switch sl_log for several hours, and log entries
>         kept accumulating in one table.
>
>         Will this action help reduce the Slony lag when replication falls
>         far behind?
>
>         Thanks all.
>
>         --
>         Regards,
>
>         Soni Maula Harriz
>
>
>         _________________________________________________
>         Slony1-general mailing list
>         Slony1-general at lists.slony.info
>         http://lists.slony.info/mailman/listinfo/slony1-general
>
>
>
>
>
> --
> Regards,
>
> Soni Maula Harriz


