Christopher Browne cbbrowne at ca.afilias.info
Mon Feb 14 14:27:52 PST 2011
Aleksey Tsalolikhin <atsaloli.tech at gmail.com> writes:
> Dear Melvin,
>
>   Thanks for answering my questions.  
>
>   My slave has same hardware configuration as the master.
>
>   I ran the SQL command you sent, and it reports database size 
> 100 GB on the slave.   Same SQL command reports 58 GB on 
> the master.
>
>   I tried VACUUM FULL on the database, but the size remains
> 100 GB.
>
>   The following command tells me that 97 GB is used by my large
> table and its TOAST and index:
>
>    SELECT relname AS "Table",
>           pg_size_pretty(pg_total_relation_size(relid)) AS "Size"
>      FROM pg_catalog.pg_statio_user_tables
>     ORDER BY pg_total_relation_size(relid) DESC;
>
>
>   My disk on the slave is nearly full (only a couple of GB free) so
> when I get a maintenance window, I will try dropping my large table 
> (which is 45 GB on production) from replication, and then add it back,
> maybe it will come back smaller?  
>
>   I would like to understand why it's larger on the slave.

If anything, I'd expect it to be smaller on the replica, because the
replica won't get any of the
   BEGIN; UPDATE; oops, something conflicted...  ROLLBACK;
traffic that would be expected to generate dead space on the "master"
node.

What I expect is that there was some other problem that caused the
attempt to populate the big table to fail on the replica, so it has
filled up with dead tuples.
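
One way to check for that, assuming a reasonably recent PostgreSQL
(8.3 or later, where these statistics columns exist) and that the
statistics collector is enabled:

   -- Compare live vs. dead tuple estimates per table on the replica.
   SELECT relname, n_live_tup, n_dead_tup
     FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC;

If the big table shows a large n_dead_tup relative to n_live_tup,
that points at a failed load rather than ordinary bloat.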

I'd suggest that you run TRUNCATE against that table, on the replica,
which should not have any bad side effects, because that's exactly what
Slony is going to do as part of the subscription process.
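
On the replica, that would look something like the following, where
my_big_table stands in for whatever your table is actually called:

   -- Run this on the REPLICA only; it discards all rows in the table
   -- (and its TOAST data) and returns the space to the filesystem,
   -- which is what Slony does anyway at the start of a subscription.
   TRUNCATE TABLE my_big_table;

Note that TRUNCATE reclaims the space immediately, unlike DELETE plus
VACUUM, which is why your VACUUM FULL may have left the size unchanged
if the dead space was from a failed bulk load.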

If you clear it out yourself, via TRUNCATE, you can verify that things
are in good order, and have greater confidence that the subscription
process will succeed.
-- 
output = reverse("ofni.sailifa.ac" "@" "enworbbc")
Christopher Browne
"Bother,"  said Pooh,  "Eeyore, ready  two photon  torpedoes  and lock
phasers on the Heffalump, Piglet, meet me in transporter room three"

