Steve Singer ssinger at ca.afilias.info
Wed Oct 17 17:19:26 PDT 2012
On 12-10-17 06:45 PM, Joe Conway wrote:
> On 10/17/2012 03:35 PM, Jan Wieck wrote:
>> Please elaborate on those constraints. Does that mean you cannot deploy
>> any binaries on an existing, running master?
>
> not my choice, but yes
>
>> If that is the case, you could deploy the 2.1.2 binaries but not use
>> them yet on all replicas. Switch over to one of them (still using 2.1.0)
>> to deploy the 2.1.2 on the previous master (now replica). Then use the
>> regular Slony upgrade mechanism from there.
>
> The environment is strictly controlled, and binaries only deployed
> through the original rpms in the repo when the machine was provisioned.
> This situation might force a reevaluation of that.
>
> But in any case this is the least of our problems unless you can tell me
> that 2.1.2 won't have the same problem when we failover.
>
>>> At the moment we are testing with clusters that are all running 2.1.0.
>>> It is in this configuration where failover is failing.
>>
>> People need to stop using FAILOVER when there is actually no physical
>> problem with the existing master node. What you probably want to do is a
>> controlled MOVE SET instead.
>
> Not possible. We MUST failover because when we are all done the original
> master will be taken out of service. If we do a move set we cannot take
> out the old node from service.
>

After your MOVE SET completes you can issue SUBSCRIBE SET to make any 
other nodes use your new master as a provider.  Then you should be able 
to issue DROP NODE to drop the old master.  I don't see why you 
can't take the old node out of service after the DROP NODE.
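
As a rough sketch (the cluster name, node ids, set id and conninfo 
strings below are made up for illustration and will need to be adjusted 
for your cluster), the whole sequence in slonik might look something like:

  # illustrative only -- substitute your own cluster name, nodes and sets
  cluster name = mycluster;
  node 1 admin conninfo = 'dbname=app host=oldmaster user=slony';
  node 2 admin conninfo = 'dbname=app host=newmaster user=slony';
  node 3 admin conninfo = 'dbname=app host=replica3 user=slony';

  # controlled switchover: move the set origin from node 1 to node 2
  lock set (id = 1, origin = 1);
  move set (id = 1, old origin = 1, new origin = 2);
  wait for event (origin = 1, confirmed = all, wait on = 1);

  # repoint the remaining subscriber at the new origin
  subscribe set (id = 1, provider = 2, receiver = 3, forward = yes);

  # once everything is confirmed, remove the old master from the cluster
  drop node (id = 1, event node = 2);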

This of course doesn't help if you want slony to handle the case of a 
real failover.  I thought you were trying to test your failover 
scenarios.  If people are planning on using slony for failover I 
*strongly* encourage them to test their scripts beforehand.
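
For reference, the failover path in slonik is only a couple of commands; 
a minimal test script (node ids and conninfo again made up) would be 
roughly:

  # illustrative only
  cluster name = mycluster;
  node 1 admin conninfo = 'dbname=app host=oldmaster user=slony';
  node 2 admin conninfo = 'dbname=app host=newmaster user=slony';

  # promote node 2; node 1 is assumed to be dead/unreachable
  failover (id = 1, backup node = 2);

  # the failed node then has to be dropped from the configuration
  drop node (id = 1, event node = 2);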

If you're going to move forward with Jan's idea of provisioning a box with 
both slony 2.1.0 and slony 2.1.2 (I am not convinced that the failover 
bug you hit is fixed in 2.1.2 / is bug #260), you will need to put two 
versions of slony on the same machine.  A 2.2 feature we added puts the 
slony version number inside the slony_funcs.so filename to make this 
work nicely.  We have back-ported this to the 2.1 branch here:
https://github.com/ssinger/slony1-engine/tree/REL_2_1_STABLE_multiversion

I've built 2.1.1 and 2.1.2 versions from the multiversion branch; these 
should install on the same system as your existing 2.1.0 binaries.  I 
also have RPM spec files that allow installing multiple slony versions 
at the same time.  Your policies probably also prevent you from deploying 
code on a new machine from a random github branch, so this might not be 
much help.

Steve

