Jeff Davis pgsql
Fri Oct 6 15:23:37 PDT 2006
On Fri, 2006-10-06 at 14:48 -0700, snacktime wrote:
> > That's a lot of schemas and a lot of tables.
> >
> > Consider doing something like the following:
> 
> > Have the same set of tables for all 5000 clients. Add a "client_id"
> > field to each table. For each client, create their own schema, and then
> > use rules and views inside that schema to make it seem like each client
> > has their own set of tables.
> 
> You know, I never thought of using the schema like that; that's a great
> idea.  Fortunately the table structure is already designed to work
> with just one set of tables.  The trick is pulling off a change like
> that without any downtime to speak of.  I've done it before, and it's
> not fun.

Well, first develop the rules and views and make sure they work (in a
test schema) on the physical tables in the allclients schema.
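For example, a minimal sketch of that setup (the table and column names
here are placeholders; your real schema will differ):

CREATE TABLE allclients.table1 (
    client_id integer NOT NULL,
    attr1     text,
    attr2     text
);

-- a throwaway schema to try the view approach before touching clients
CREATE SCHEMA test524;
CREATE VIEW test524.table1 AS
    SELECT attr1, attr2
    FROM allclients.table1
    WHERE client_id = 524;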

Then, the transfer procedure for one client would go like:

BEGIN;
LOCK TABLE allclients.table1 IN ACCESS EXCLUSIVE MODE;
INSERT INTO allclients.table1 (client_id, attr1, attr2)
    SELECT 524, attr1, attr2 FROM client524.table1;
DROP TABLE client524.table1;
CREATE VIEW client524.table1 AS
    SELECT attr1, attr2 FROM allclients.table1 WHERE client_id = 524;
CREATE RULE ... -- insert
CREATE RULE ... -- update
CREATE RULE ... -- delete
-- do the same for the other tables in the schema ...
COMMIT;
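
To spell out the elided CREATE RULE lines, the rewrite rules for one
table might look roughly like this. This is only a sketch: it assumes
attr1 uniquely identifies a row within a client, so adjust the WHERE
clauses to your real keys.

CREATE RULE table1_ins AS ON INSERT TO client524.table1
    DO INSTEAD
    INSERT INTO allclients.table1 (client_id, attr1, attr2)
    VALUES (524, NEW.attr1, NEW.attr2);

CREATE RULE table1_upd AS ON UPDATE TO client524.table1
    DO INSTEAD
    UPDATE allclients.table1 SET attr1 = NEW.attr1, attr2 = NEW.attr2
    WHERE client_id = 524 AND attr1 = OLD.attr1;

CREATE RULE table1_del AS ON DELETE TO client524.table1
    DO INSTEAD
    DELETE FROM allclients.table1
    WHERE client_id = 524 AND attr1 = OLD.attr1;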

or something like that. DDL changes are transactional in PostgreSQL, so
this procedure is safe: a client sees either their old table or the new
view, never a half-finished state. There is no real downtime; the only
cost is the exclusive lock held during the data move (plus the I/O of
the move itself).
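
You can convince yourself of the transactional-DDL point with a quick
test on a scratch table:

BEGIN;
DROP TABLE client524.table1;
ROLLBACK;
-- the table still exists; the DROP was rolled back
SELECT count(*) FROM client524.table1;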

Write a script to automate the process. Then migrate one client at a
time over a long period so the data moves don't overwhelm your I/O
system. Then replicate away.
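
As a starting point for that script, a PL/pgSQL function can do the
whole move for one client; called on its own, it runs inside a single
transaction, so the move stays atomic. Again just a sketch (table and
column names are assumptions, and the CREATE RULE statements are
omitted, as above):

CREATE FUNCTION migrate_client(cid integer) RETURNS void AS $$
BEGIN
    EXECUTE 'LOCK TABLE allclients.table1 IN ACCESS EXCLUSIVE MODE';
    EXECUTE 'INSERT INTO allclients.table1 (client_id, attr1, attr2) '
         || 'SELECT ' || cid || ', attr1, attr2 '
         || 'FROM client' || cid || '.table1';
    EXECUTE 'DROP TABLE client' || cid || '.table1';
    EXECUTE 'CREATE VIEW client' || cid || '.table1 AS '
         || 'SELECT attr1, attr2 FROM allclients.table1 '
         || 'WHERE client_id = ' || cid;
    -- ... EXECUTE the three CREATE RULE statements here as well ...
END;
$$ LANGUAGE plpgsql;

-- one client at a time:
SELECT migrate_client(524);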

Regards,
	Jeff Davis



