Fri Jun 22 14:50:49 PDT 2007
- Previous message: [Slony1-general] Can't drop/recreate slave node
- Next message: [Slony1-general] Huge database remote sync issue. Ideas?
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Andrew Hammond wrote:
> On 6/22/07, Bill Moran <wmoran at collaborativefusion.com> wrote:
>> In response to Craig James <craig_james at emolecules.com>:
>>
>> > When you create a cluster, it appears that "Node 1" is special
>>
>> It isn't.  Sure seems that way, though, doesn't it.  We went round and
>> round with this.  One thing that creates this illusion is the fact that
>> many commands assume that node 1 is the master if you don't specify
>> otherwise.
>
> Node 1 is the default for commands that need a node.  I've often
> thought that it'd be better to have no default, since there's no reason
> to believe that node 1 is the correct node to connect to in many
> cases.  If we _must_ have a default, then the lowest node number should
> be the default, since there's no reason to believe that node 1 will
> always exist.
>
> I actually think that this is a good argument for avoiding having a
> node 1 in clusters in general.

I tend to agree.  The behaviour tends to be that events are submitted
to node #1 if there is no obvious default for that command and no
event node is specified.

I make sure that tests include cases where there is no node #1.

>> > -- there doesn't seem to be a "store node 1" command to create
>> > "node 1" in the first place, whereas for subsequent nodes, you have
>> > to issue "store node N".  Now, suppose node 1 happens to crash and
>> > burn, and I use "failover" to make Node 2 the master.  Questions:
>>
>> The first node has an implied "store node", so that command isn't
>> explicitly used.  You can make the first node be any valid # though.
>>
>> > Does Node 2 stay node 2, or does it become node 1?  (I'm pretty sure
>> > it stays node 2, but I want to be certain.)
>>
>> Node #s never change.
>
> Unless you're dangerously crazy.  Speaking of dangerously crazy, has
> anyone written a script to change node numbers yet?
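For anyone following along, the failover scenario being discussed looks
roughly like this in a slonik script.  This is only an illustrative
sketch, not something from the thread: the cluster name, conninfo
strings, and node IDs are made up.

    # Hypothetical two-node cluster; names and conninfo are
    # illustrative only.
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=master user=slony';
    node 2 admin conninfo = 'dbname=mydb host=slave user=slony';

    # Promote node 2 after node 1 has crashed and burned.  Note that
    # node 2 keeps its number; failover does not renumber anything.
    failover (id = 1, backup node = 2);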
I thought I had sent something over to the list quite a while back in
support of a "CLONE NODE" command; Jan thinks he'll try to have
something like that in Slony-I 2.x...

>> > If I get node 1's server back online, discard its database, and
>> > recreate the schema, can I add it to the cluster as "node 1" again,
>> > or do I need to pick a different node number?
>>
>> AFAIK, as long as you've dropped that node from all other nodes, you
>> can add it back in with the same #.
>
> While this should theoretically be possible, why would you want to?
> Being lazy about defaults to node 1 seems a poor reason.  I would not
> recommend doing it in practice.  It can cause very big, nasty problems
> if the previously dropped node's cruft is not completely purged before
> you create the new node.  If your scripts are so fragile that they
> hard-code some node number, then fix your scripts.

I'd not be keen on recreating a node number previously used within the
same day as its deletion...
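The safer path described above -- purge the dead node, then register the
rebuilt server under a fresh number rather than reusing 1 -- can be
sketched in slonik like so.  Again, this is a hypothetical example; the
cluster name, conninfo strings, and node IDs are invented for
illustration.

    # Assumes node 2 is the surviving master after failover; names
    # and conninfo are illustrative only.
    cluster name = mycluster;
    node 2 admin conninfo = 'dbname=mydb host=slave user=slony';
    node 3 admin conninfo = 'dbname=mydb host=oldmaster user=slony';

    # First, purge the old node 1 from the surviving cluster ...
    drop node (id = 1, event node = 2);

    # ... then bring the rebuilt server back as node 3, avoiding the
    # cruft problems that can come with reusing a recently dropped id.
    store node (id = 3, comment = 'rebuilt former master',
                event node = 2);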