cbbrowne at ca.afilias.info cbbrowne
Sat Dec 9 21:42:33 PST 2006
> Ideally Slony would take a set of DDL statements, 'figure out' what tables
> will be locked in executing the DDL, get those locks on all the nodes,
> perform the DDL on each node, and release the locks.  The above sequence
> might not quite be right; I haven't thought about it in enough detail.
>
> I don't think there is a way to 'figure out' from outside the backend
> what locks are needed.  Would it even be possible to add an SPI function
> that returned this information?  If so, how far would that go toward
> helping things?
> Are there other uses for this type of thing outside of replication?

You'd need to introduce a full SQL parser that would determine what tables
were affected by any given script.

One way that comes to mind as a *POSSIBLE* way of evaluating things would
be to do the following:

Step 1.  Submit BEGIN, alter all replicated tables so that all
INSERT/UPDATE/DELETE ops become no-ops.

Step 2.  Submit the contents of the DDL script

Step 3.  Query pg_catalog.pg_locks, matching against the connection PID
and any replicated tables with AccessExclusiveLock

Step 4.  ROLLBACK

Step 5.  Based on the list found in Step 3, remove replication triggers
and lock those tables, apply the DDL script, then repair triggers.
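The heart of step 3 is filtering pg_locks down to our own backend's
AccessExclusiveLocks on replicated tables.  A minimal sketch in Python;
the table names, PID, and the pg_locks/pg_class join in the comment are
illustrative assumptions, not actual Slony code:

```python
# Sketch of step 3: given rows from pg_catalog.pg_locks, find which
# replicated tables this connection holds AccessExclusiveLock on.
# The query it mirrors (hypothetical shape) would be something like:
#
#   SELECT c.relname, l.pid, l.mode
#     FROM pg_catalog.pg_locks l
#     JOIN pg_catalog.pg_class c ON c.oid = l.relation
#    WHERE l.granted;

def affected_replicated_tables(lock_rows, my_pid, replicated):
    """lock_rows: iterable of (relname, pid, mode) tuples."""
    return sorted(
        relname
        for relname, pid, mode in lock_rows
        if pid == my_pid
        and mode == "AccessExclusiveLock"
        and relname in replicated
    )

# Example: locks as they might look after submitting the DDL script
# inside the probe transaction (PID 4242 is our own backend).
locks = [
    ("accounts",  4242, "AccessExclusiveLock"),  # ALTER TABLE target
    ("accounts",  4242, "RowExclusiveLock"),     # weaker lock, ignored
    ("audit_log", 4242, "AccessExclusiveLock"),
    ("orders",    5555, "AccessExclusiveLock"),  # someone else's backend
    ("scratch",   4242, "AccessExclusiveLock"),  # not a replicated table
]
replicated = {"accounts", "audit_log", "orders"}

print(affected_replicated_tables(locks, 4242, replicated))
# ['accounts', 'audit_log']
```

The filter deliberately matches only AccessExclusiveLock: weaker lock
modes taken by ordinary DML don't indicate that the DDL restructured
the table.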

Unfortunately, there are two drawbacks:

1.  It still has to lock all of the tables, and worse, it locks some
tables twice :-(

2.  It applies the script twice, which, if the script does substantial
data modification, could be very time consuming.

The trouble is that you need some "magic method" to figure out the
equivalent to what's in step #3.

The only possibility that leaps out would be to create a temporary sandbox
database, set up the master schema there, and apply the DDL in a
transaction so you could pull a list of affected tables via
pg_catalog.pg_locks.
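A sketch of what that sandbox probe might look like.  This only
composes the SQL text; the surrounding machinery (createdb, loading
the master schema with pg_dump -s, running the probe with psql) is
left out, and the lock query's shape is an assumption, not Slony code:

```python
# Hypothetical query to report this backend's AccessExclusiveLocks,
# run just before rolling back; pg_backend_pid() identifies our own
# connection so other sessions' locks are excluded.
LOCK_QUERY = """SELECT c.relname
  FROM pg_catalog.pg_locks l
  JOIN pg_catalog.pg_class c ON c.oid = l.relation
 WHERE l.pid = pg_backend_pid()
   AND l.mode = 'AccessExclusiveLock';"""

def build_probe(ddl_script: str) -> str:
    """Wrap a DDL script so it runs, reports its locks, then rolls back,
    leaving the sandbox schema untouched."""
    return "\n".join(["BEGIN;", ddl_script.strip(), LOCK_QUERY, "ROLLBACK;"])

probe = build_probe("ALTER TABLE accounts ADD COLUMN note text;")
print(probe)
```

Note this shares drawback #2 from above: the DDL script still runs in
full against the sandbox, so a script doing heavy data modification
pays that cost twice.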

That seems as though it could be mighty expensive as a general mechanism...




More information about the Slony1-general mailing list