Bugzilla – Bug 181
Bulk Adding Tables
Last modified: 2011-03-08 11:21:53 PST
Slonik would support the following extensions:
set add table (..., tables='public.*', ...);
set add table (..., tables='billing.(!password)', ...);
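As an illustration of what these patterns might mean, here is a small Python sketch that expands a pattern against a list of fully qualified table names. The `(!name)` exclusion semantics and the helper itself are assumptions for illustration only, not Slony-I's actual implementation (which does its matching server-side):

```python
import re

def expand_pattern(pattern, catalog):
    # Assumed semantics: the pattern is a regular expression matched
    # against the fully qualified table name, and a trailing '(!name)'
    # excludes tables whose bare name matches 'name'.
    m = re.fullmatch(r"(.*)\(!(.*)\)", pattern)
    if m:
        prefix, excluded = m.group(1), m.group(2)
        return [t for t in catalog
                if re.match(prefix, t)
                and not re.fullmatch(excluded, t.split(".", 1)[1])]
    return [t for t in catalog if re.fullmatch(pattern, t)]

catalog = ["public.users", "public.orders",
           "billing.invoices", "billing.password"]
print(expand_pattern("public.*", catalog))
print(expand_pattern("billing.(!password)", catalog))
```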
I have an initial implementation of this + bug 182 (this one depends on 182)
*** Bug 182 has been marked as a duplicate of this bug. ***
I've started taking a peek...
I have set up a branch based on yours, which adds in, thus far:
a) Minor typo fixes to docs
b) A regression test that doesn't work just yet.
It looks like the 'TABLES' option isn't yet documented akin to 'SEQUENCES'...
In trying to run the regression test, I'm running into some problems: when the regression test script tries to add the tables, node #2 hasn't yet been initialized, and so the function slonik_get_next_tab_id() fails.
-> % cat /tmp/slony-regress.DbP0kt/slonik.script
init cluster (id=1, comment = 'Regress test node');
echo 'update functions on node 1 after initializing it';
update functions (id=1);
create set (id=1, origin=1, comment='All test1 tables');
set add table (set id = 1, tables='public.*');
-> % cat /tmp/slony-regress.DbP0kt/slonik.log
<stdin>:4: WARNING: This node is running PostgreSQL 8.3 - cannot apply
CONTEXT: SQL statement "SELECT "_slony_regress1".add_truncate_triggers()"
PL/pgSQL function "upgradeschema" line 12 at PERFORM
<stdin>:7: PGRES_FATAL_ERROR select max(tab_id) FROM "_slony_regress1".sl_table
- ERROR: schema "_slony_regress1" does not exist
LINE 1: select max(tab_id) FROM "_slony_regress1".sl_table
<stdin>:2: Possible unsupported PostgreSQL version (90100) 9.1, defaulting to
<stdin>:3: update functions on node 1 after initializing it
<stdin>:4: Possible unsupported PostgreSQL version (90100) 9.1, defaulting to
<stdin>:7: Error: could not query node 2 for next table id
I think we need to treat nodes that aren't accessible as not being a cause for
failure, but rather as nodes to be ignored.
The same almost certainly applies to slonik_get_next_sequence_id()
Clarifying a bit...
- If a node reports back that it's not a Slony node yet, then it may be ignored
for the purposes of determining highest table/sequence IDs.
- If a node is inaccessible, then it is quite likely appropriate to report back
to the user that there's a problem.
Merely reporting a PSQL error seems unkind.
Instead, the interpretation to pass back is along the lines of:
"This node you told me to check isn't reporting back. I don't know if
it's got some higher table ID, so, human, you'd best intervene."
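That policy (skip nodes that aren't Slony nodes yet, but fail loudly when a node can't be reached at all) could be sketched like this; the function and status values are hypothetical, not the real slonik internals:

```python
class NodeUnreachable(Exception):
    """Raised when a node named in the preamble cannot be contacted."""

def next_table_id(nodes):
    """Compute the next free table id across a set of nodes.

    Each element of `nodes` is a (node_id, status, max_tab_id) tuple,
    where status is 'ok', 'uninitialized' (Slony schema not installed
    yet), or 'unreachable'.  Illustration of the policy discussed
    above, not Slony-I's actual code.
    """
    highest = 0
    for node_id, status, max_tab_id in nodes:
        if status == "uninitialized":
            continue  # not a Slony node yet: safe to ignore
        if status == "unreachable":
            # We cannot know whether this node holds a higher table id,
            # so the human had best intervene.
            raise NodeUnreachable(
                f"could not query node {node_id} for next table id")
        highest = max(highest, max_tab_id)
    return highest + 1

print(next_table_id([(1, "ok", 5), (2, "uninitialized", 0)]))
```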
I noticed some error conditions on this where we should declare explicitly what
we intend to happen...
The addition of tables will fail, for ALL the tables, if any of the following
problems are encountered:
a) A table does not have a primary key. Note that a candidate primary key
is not good enough; since we do not have an option here to specify
candidate primary keys, using this requires an explicit primary key.
b) A table is already replicated.
Both of those conditions seem apropos to me; if you want to use regular
expressions to specify a set of tables, you're going to have to have the tables
suitably defined so we don't have to jump thru too many hoops to ascertain
replication configuration for them.
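A sketch of that all-or-nothing check, with made-up table metadata in place of real catalog queries:

```python
def check_bulk_add(tables):
    """All-or-nothing validation for bulk table addition.

    `tables` maps table name -> dict with 'has_pk' and 'replicated'
    flags.  Mirrors the stated policy: if ANY table lacks a true
    primary key (candidate keys don't count) or is already replicated,
    the whole addition fails.  Illustration only, not Slony-I's code.
    """
    errors = []
    for name, info in sorted(tables.items()):
        if not info["has_pk"]:
            errors.append(f"{name}: no primary key")
        if info["replicated"]:
            errors.append(f"{name}: already replicated")
    if errors:
        raise ValueError("; ".join(errors))
    return sorted(tables)

print(check_bulk_add({"public.a": {"has_pk": True, "replicated": False}}))
```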
And by the way, this should be added to the man pages for SET ADD TABLE.
I suggest the following positive and negative tests; they probably should get
implemented in the "cluster test" framework in the other branch...
1. Put a series of nice, replicable tables in one schema, complete with
primary keys and regular names so they're readily matched.
Should work, no problems.
2. Negative test: Redo 1.
The attempt to re-add them should fail. "Kaboom!"
3. Have a node in the preamble that hasn't had STORE NODE called on it yet.
Run against a set of tables not added yet.
4. Have a node that's dead.
I have some sample slonik for relevant tests - see:
This test works fine, and covers all but the "scenario #4" mentioned nearby
(i.e., if a node seems well and truly dead, that's a problem).
I guess I'd also suggest doing a bit of torture testing relating to nasty
regular expressions. I went a *little* further afield with:
But it's probably a good idea to try weirder ones :-).
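A few torture-test patterns worth trying, using Python's re module here as a stand-in for the server-side regex engine (dialects differ, so treat these as illustrative):

```python
import re

catalog = ["public.t1", "public.t2", "public.t_odd-name", "public.t10"]

# Anchored character classes: matches t1 and t2 only, not t10
assert [t for t in catalog if re.fullmatch(r"public\.t[12]", t)] \
    == ["public.t1", "public.t2"]
# Metacharacters inside table names need escaping or exact handling
assert any(re.fullmatch(r"public\.t_odd-name", t) for t in catalog)
# Full anchoring matters: 'public\.t1' alone should NOT match t10...
assert [t for t in catalog if re.fullmatch(r"public\.t1", t)] == ["public.t1"]
# ...but a trailing '.*' greedily pulls in t10 as well
assert [t for t in catalog if re.fullmatch(r"public\.t1.*", t)] \
    == ["public.t1", "public.t10"]
print("pattern torture tests passed")
```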
I have added further documentation on error conditions for bulk adding of tables.
Perhaps more should be said, but this seems a good start.
Given that documentation and some portion of test cases are added in, I'd think
it reasonable to consider merging this into master. At that point, it should
be merged into the cluster test branch to make it easier to implement the
relevant tests.
This got merged into MASTER on 2011-01-28:
Is there more to do on this, or can this be closed out?
This can be marked as fixed.
It should show up in the 2.1 release