cbbrowne at ca.afilias.info cbbrowne
Sun Oct 30 22:02:52 PST 2005
> Oh my god!....
>
> DB is pg 7.4.6 on linux
>
> 2005-10-27 05:55:55 WARNING:  some databases have not been vacuumed in
> 2129225822 transactions
> HINT:  Better vacuum them within 18257825 transactions, or you may have
> a wraparound failure.
>
>
> 2005-10-28 05:56:58 WARNING:  some databases have not been vacuumed in
> over 2 billion transactions
> DETAIL:  You may have already suffered transaction-wraparound data loss.
>
> We have cronscripts that perform FULL vacuums
>
> # vacuum template1 every sunday
> 35 2 * * 7 /usr/local/pgsql/bin/vacuumdb --analyze --verbose template1
>
> # vacuum live DB every day
> 35 5 * * * /usr/local/bin/psql -c "vacuum verbose analyze" -d bp_live -U
> postgres --output /home/postgres/cronscripts/live/vacuumfull.log

There were some details in pg_autovacuum's output that should tell you
which database needs vacuuming.
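Failing that, you can ask the catalogs directly. A sketch for 7.4 — the
column should be pg_database.datfrozenxid on that version, but check
your catalog before relying on it:

```shell
# Show transaction-ID age per database; anything approaching ~2 billion
# is in wraparound danger. (Sketch for PG 7.4; column names differ in
# later versions -- inspect pg_database on your installation.)
psql -U postgres -d template1 -c \
  "SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;"
```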

Are template1 and bp_live the only databases on that backend?  If not,
then the "possible data loss" could apply (in principle) to any of the
others.

If there are virtually unused databases, then you should do a "vacuum
freeze" on them, and become happy :-).
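Something like this, per idle database ("old_app" here is a hypothetical
name standing in for whatever unused database turns up):

```shell
# VACUUM FREEZE marks every tuple in the database as permanently
# visible, so it stops aging and no longer needs routine vacuuming
# for wraparound purposes (until it is modified again).
psql -U postgres -d old_app -c "VACUUM FREEZE;"
```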

> Questions:
>
> 1) Why do we have data corruption? I thought we were doing everything we
> needed to stop any wraparound... Are the pg docs inadequate, or did I
> misunderstand what needed to be done?

I presume that there's another DB on the backend that you're not vacuuming...
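The simplest guard against that is to stop naming databases individually
and vacuum everything on the backend. A sketch of a replacement cron
entry, reusing the paths from the entries quoted above:

```shell
# Vacuum and analyze ALL databases on this backend nightly -- including
# template1 and any database the per-database entries missed.
35 5 * * * /usr/local/pgsql/bin/vacuumdb --all --analyze
```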

> 2) What can I do to recover the data?

If it's an unused database, then there's a good chance you don't need to.

If it *is* used, the only thing I can think of is to do a VACUUM FREEZE
on it, and then pummel that DB with transactions: once you push through
another couple of billion transactions, the now-invisible tuples come
back into the visibility window, and then you VACUUM FREEZE again...

But you might want to bounce that off the likes of Tom Lane, first :-).



More information about the Slony1-general mailing list