Andreas Pflug pgadmin
Mon Oct 3 11:50:37 PDT 2005
Dave Page wrote:

>>We already discussed including the creation scripts in pgAdmin 
>>installations to make administrators' lives easier, but apparently we 
>>*must* do that to have pgAdmin working on Slony 1.1.
> 
> 
> No, that is a very *bad* idea.

At least my proposal finally provoked an answer...

> The Slony team have told us that they
> don't think the proposed changes are appropriate,

I wrote a lengthy mail about this timezone stuff. The issue is fixed in 
slonik, but not in the db. Don't they want other tools to run? Why do 
they insist on relying on the server's display formatting settings, if 
things can easily be coded to avoid that?
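For illustration, a minimal sketch of what I mean (the "_mycluster" 
schema name is just a placeholder): instead of parsing the default text 
output of ev_timestamp, which depends on the server's DateStyle, a tool 
could fetch it in a fixed representation:

  -- sketch, not Slony code: read event timestamps independently of DateStyle
  SELECT ev_origin, ev_seqno,
         to_char(ev_timestamp, 'YYYY-MM-DD HH24:MI:SS.US') AS ev_ts_text,
         extract(epoch FROM ev_timestamp)                  AS ev_ts_epoch
    FROM "_mycluster".sl_event;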

> so all that will end
> up happening is that any users running pgAdmin will get told to come and
> see us for support, or stop running a non-standard version of Slony.


I did hope to get feedback on what exactly they think happens to 
non-enabled nodes, or what the no_active flag is good for if a node with 
the flag set to false is assumed to screw up everything.

Imagine a standard node (freshly created with slonik, no_active=true) 
that has no path information to other nodes. Will it interfere? If not, 
will the existence (not the creation) of path information interfere? 
AFAICS the interference starts when creating listens, not earlier.
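To make the question concrete, this is roughly what I'd look at (node id 
10 and the "_mycluster" schema are just examples):

  -- what does the node itself look like?
  SELECT no_id, no_active, no_comment
    FROM "_mycluster".sl_node   WHERE no_id = 10;
  -- is there any path information involving it?
  SELECT pa_server, pa_client, pa_conninfo
    FROM "_mycluster".sl_path   WHERE pa_server = 10 OR pa_client = 10;
  -- and only here, AFAICS, does interference start:
  SELECT li_origin, li_provider, li_receiver
    FROM "_mycluster".sl_listen WHERE li_provider = 10 OR li_receiver = 10;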

>  do not want us to have to deal with any Slony support beyond our own
> code in pgAdmin 

We'll be dealing with it anyway. Several tasks are implemented deep down 
in slonik and aren't really documented well enough to be executed from a 
different tool.

> 
> What exactly do we need the admin nodes for? I'm assuming that you're
> setting up the maintenance DB for the server as the admin node so we can
> easily find all sets on that server - is that right?

It's not needed to locate stuff in the current db's cluster, but to 
perform actions that have to be executed on the local db as well as on 
the remote server.

> Perhaps for older
> versions (well 1.1) we need to just find sets as we connect to any
> individual databases, and use your original plan for 1.2 in which we use
> a separate table as Jan/Chris have suggested?

While I'm not particularly fond of doing things differently in 1.2+ than 
in 1.0-1.1, this is certainly not a real problem.
What *is* a problem is scanning all servers and databases to find a 
suitable connection; this won't work for several reasons:
- We might hit a cluster with the same name which is actually a 
different one.
- It may take a very long time. I have several remote servers 
registered which aren't accessible until I connect via VPN. It would 
take several minutes for all the timeouts to elapse.



> Also, what has Chris K-L done in phpPgAdmin? I thought he was going to
> use the admin nodes idea as well, but perhaps not...

He might not have got to the point where he needs multiple connections 
yet. Joining a cluster needs them, but that is trivial because the 
server/db/cluster to join is selected manually. Failover does require 
them (failednode has to be called on all nodes).
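Roughly speaking (schema name and the exact failednode(failed_id, 
backup_id) signature are how I remember the 1.x stored procedures, so 
treat this as a sketch), the tool has to issue something like

  -- on every remaining node: node 1 failed, node 2 becomes the backup
  SELECT "_mycluster".failednode(1, 2);

against each node it can still reach, which is exactly why it needs 
connection info for all of them up front.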

I'd consider it highly error-prone if entering the connection 
information is deferred until failover really has to happen. Failover is 
a last-resort action, taken after everything else has failed. In that 
situation human errors tend to happen, and they usually have far more 
dramatic consequences (Murphy...).

The second use case is monitoring. When working on a cluster, it's 
desirable to work cluster-centrically, i.e. to do everything from a 
single point, instead of jumping from one database to another to locate 
the provider node or to find out what's happening on a particular node.
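A small example of what I mean (again, "_mycluster" is just a 
placeholder, and this assumes the sl_status view as it looks in 1.1): 
sl_status only shows lag relative to the node you are connected to, so a 
cluster-centric view has to run something like

  -- replication lag as seen from the node we're connected to
  SELECT st_origin, st_received, st_lag_num_events, st_lag_time
    FROM "_mycluster".sl_status;

on every node of the cluster.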

While omitting the second point is merely (a lot) less user-friendly, 
failover support can't be reliably implemented without persistent 
connection info.

