Andrew Ruthven andrew.ruthven
Fri Sep 17 04:51:24 PDT 2004
On Fri, 2004-09-17 at 15:41, cbbrowne at ca.afilias.info wrote:
> > Hi,
> >
> > I'm currently writing some inhouse DR instructions for a database using
> > slony.  These are aimed at the altperl scripts for maintaining slony.
> > I've noticed that some of the scripts expect their arguments in different
> > orders, in particular move_set.pl and failover.pl.  Any chance of having
> > these consistent?
> 
> I happily haven't had call to run the failover one; making the arguments
> more consistent there is an excellent idea.

Ah, fair enough.  I've attached a patch which does so.  The other patch
makes the move_set.pl script work.  At the moment it tries to unlock the
set on the new master, which then complains that the set doesn't
originate on the local node.

I've changed the slonik commands to drop the unlock and add some wait
events (as in the docs), and now things work okay.
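
For reference, the documented MOVE SET sequence this boils down to looks
roughly like the following -- node IDs, the set ID and the conninfo
strings are placeholders rather than our actual config, and the real
preamble is generated by the altperl tools:

  cluster name = example;
  node 1 admin conninfo = 'dbname=mydb host=old-origin user=slony';
  node 2 admin conninfo = 'dbname=mydb host=new-origin user=slony';

  # Lock the set on the current origin, wait for that to be confirmed
  # on the new origin, then move the set and wait again.
  lock set (id = 1, origin = 1);
  wait for event (origin = 1, confirmed = 2);
  move set (id = 1, old origin = 1, new origin = 2);
  wait for event (origin = 1, confirmed = 2);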

> I haven't heard any hint of anyone having any 'metascripts' that use these
> scripts, so nothing prevents improving the rationality of the arguments. 
> Please point out any places where consistency could be improved.

Okay, I'll continue to look around.  One script that jumps out at me as
being needed is an "add a node to the cluster" type of thing.  At the
moment my DR manual calls for dropping the cluster and rebuilding it.
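
Just to sketch what I mean, such a script would basically have to wrap
the usual slonik steps for bringing up a new subscriber -- this is only
illustrative, with placeholder node/set IDs and conninfo strings, and it
glosses over the preamble and starting a slon daemon for the new node:

  # Register node 3, give it paths to and from the origin, set up the
  # listen network, and subscribe it to set 1.
  store node (id = 3, comment = 'New subscriber');
  store path (server = 1, client = 3, conninfo = 'dbname=mydb host=origin');
  store path (server = 3, client = 1, conninfo = 'dbname=mydb host=newnode');
  store listen (origin = 1, provider = 1, receiver = 3);
  store listen (origin = 3, provider = 3, receiver = 1);
  subscribe set (id = 1, provider = 1, receiver = 3, forward = no);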

> I have generated some inhouse instructions that live alongside the
> scripts; I could easily see it being useful to try to turn this into
> boilerplate that would be generally usable.  If you contributed some of
> your instructions (suitably redacted), that could surely be helpful here.

I'll need to check with management, but I'll see what I can do.

> We have noticed something today of this sort in terms of, oh, call it
> "anti-documentation" vis-a-vis node numbering.  I set up a whole series of
> nodes on a set of 4 servers for different replication instances.  I
> numbered things 1 = master, 2 = main slave, 3 = server "db3", 4 = server at
> another site.  Nodes #3 and #4 are always pointing to two particular
> servers.  Unfortunately, #1 and #2 aren't consistent that way.
> 
> In retrospect, since internal server numbers for these four servers are
> 003, 004, 005, and 501, it would likely be most intuitive to have nodes 3,
> 4, 5, and 501, which, while not consecutive, will be easy to guess right
> about.
> 
> I have to kick myself on that a bit because I remember ruing someone
> else's choice, with ERServe, of the same sort of scheme where, for each set
> of nodes:
>  1 = the master node,
>  2 = the first slave we set up,
>  3 = the second slave we set up,
> and so forth, which were anything but easy to intuit about.
> 
> It's no big deal for people that just have 2 nodes.  But eventually, we're
> likely to add more to the 4.  And if the node numbering isn't pretty
> coherent, that way lies madness.

Yes, totally agreed.  Especially as I'm thinking that we may want to
swap our master database over on a semi-regular basis to make sure that
failover does in fact continue to work!  Suddenly node 1 may not be the
master for months at a time.  Perhaps I might use the IP address in some
manner...

It's still possible to become confused with only two servers!  When
should I have 1 and when should I have 2?  :)

Cheers!

-- 
Andrew Ruthven, Wellington, New Zealand
Senior Developer, Catalyst IT Limited --> http://www.catalyst.net.nz
At work: andrew.ruthven at catalyst.net.nz
At home: andrew at etc.gen.nz
GPG fpr: 34CA 12A3 C6F8 B156 72C2  D0D7 D286 CE0C 0C62 B791
-------------- next part --------------
A non-text attachment was scrubbed...
Name: failover.puck.patch
Type: text/x-patch
Size: 1346 bytes
Desc: not available
Url : http://gborg.postgresql.org/pipermail/slony1-general/attachments/20040917/bee9ce37/failover.puck-0001.bin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: move_set.puck.patch
Type: text/x-patch
Size: 930 bytes
Desc: not available
Url : http://gborg.postgresql.org/pipermail/slony1-general/attachments/20040917/bee9ce37/move_set.puck-0001.bin

