Mon Feb 14 15:07:03 PST 2011
- Previous message: [Slony1-general] unable to set up replication: slony1-1.2.21: sync of large table (45 GB) fails: sequence maxed out
- Next message: [Slony1-general] switchover slony question
On 2/14/2011 2:39 PM, Melvin Davidson wrote:
> Aleksey,
>
> To answer your questions:
>
> 1. So does the database get bigger as it is transferred by Slony?
>
> It doesn't get bigger, but PostgreSQL will write WAL files for all the
> inserts. I believe it is the WAL files (16MB each) that are starting to
> eat up your disk space.

I find this unlikely. Slony does nothing but COPY the table content from
the data provider to the new subscriber. That should not increase the size
of the pg_xlog directory beyond what is allowed by checkpoint_segments and
checkpoint_timeout anyway. A COPY that fills WAL segments too fast should
only increase the checkpoint frequency, eventually to the point where
Postgres warns about checkpoints occurring too frequently.

> 2. How large should the slave's disk be compared to the master's disk?
>
> At least as big as the master. If you intend to use the slave as a
> failover, it is essential that the hardware and O/S be as identical as
> possible.

It only needs to be as powerful. Slony was specifically designed to allow
cross-platform and even cross-Postgres-version replication.

> 3. However my database size on the slave is 100 GB, and on the master is
> 58 GB.
>
> How are you determining this? Are you using df -h?
> Or are you doing:
>
> SELECT datname,
>        rolname AS owner,
>        pg_size_pretty(pg_database_size(datname)) AS size_pretty,
>        pg_database_size(datname) AS size,
>        (SELECT pg_size_pretty(SUM(pg_database_size(datname))::bigint)
>           FROM pg_database) AS total,
>        ((pg_database_size(datname) / (SELECT SUM(pg_database_size(datname))
>           FROM pg_database)) * 100)::numeric(6,3) AS pct
>   FROM pg_database d
>   JOIN pg_authid a ON a.oid = datdba
>  ORDER BY datname;
>
> Also, make sure autovacuum is enabled on the slave.

It would be interesting to see the part of the subscriber's log where it
starts preparing to copy, especially whether or not it succeeds in
truncating the target table. A previously failed attempt to replicate (for
whatever reason) could well trigger this odd behavior.

Jan

--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin
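
A minimal sketch, assuming a PostgreSQL release of that era (one that still
has checkpoint_segments), of how the settings and relation sizes discussed
above could be inspected directly on the subscriber; both queries only read
configuration and catalog data:

-- Show the checkpoint and autovacuum settings Melvin and Jan refer to.
SELECT name, setting, unit
  FROM pg_settings
 WHERE name IN ('checkpoint_segments', 'checkpoint_timeout', 'autovacuum');

-- List the largest relations on the subscriber, to see whether the copied
-- table (or its indexes and TOAST data) is carrying dead space left over
-- from an earlier failed subscription attempt.
SELECT n.nspname AS schema,
       c.relname AS relation,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
 ORDER BY pg_total_relation_size(c.oid) DESC
 LIMIT 10;

If the replicated table shows up far larger on the subscriber than on the
master, bloat from an earlier aborted copy is a plausible explanation, which
matches Jan's suggestion to check whether the TRUNCATE in the subscriber's
log succeeded.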