Mirko Vogt lists at nanl.de
Tue Apr 24 07:40:08 PDT 2012
Hey all!

I successfully set up my first Slony replication - however, quite a few
conceptual questions arose along the way:

I have the following setup (taken from slon_tools.conf):

add_node(node     => 1,
         host     => 'master.foo',
         dbname   => 'foo',
         port     => 5432,
         user     => 'slony',
         password => 'XXX');

add_node(node     => 11,
         host     => 'slave1.foo',
         dbname   => 'foo',
         port     => 6254,
         user     => 'slony',
         password => 'XXX');

add_node(node     => 12,
         host     => 'slave2.foo',
         dbname   => 'foo',
         port     => 2254,
         user     => 'slony',
         password => 'XXX');

This config was deployed on every node, and every pg_hba.conf file
contained a line allowing all the other servers to connect as user slony.
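For reference, the pg_hba.conf entries looked roughly like this (a sketch from memory - hostnames instead of CIDR ranges, and the md5 auth method, are details of my setup):

```
# allow the other cluster nodes to connect as the replication user
host    foo    slony    master.foo    md5
host    foo    slony    slave1.foo    md5
host    foo    slony    slave2.foo    md5
```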

Everything was working - until I tried to optimize.

I thought: well, there needs to be a connection between the master and
the slaves, but no direct connection between the slaves - so on each
slave I dropped the pg_hba.conf line that allowed the respective other
slave to connect.

The setup seemed to still work, however on the slaves I noticed error
messages like:
  FATAL:  no pg_hba.conf entry for host "slave1.foo", user "slony",
database "foo"

Okay, fine, obviously they try to connect to each other: so I purged the
respective node definitions from the slon_tools.conf file on the slaves
(on node 11 I deleted the definition of node 12, and vice versa).

After restarting the slon daemons, the slaves still tried to connect to
each other. Where do they get the connection information from? And why
are they trying to connect to each other at all?
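If I understand the catalog right, the conninfo strings might be kept in the sl_path table of the cluster schema on each node - assuming the cluster is named 'foo' (so the schema is "_foo"), something like this should show them (column names taken from my reading of the docs, so possibly off):

```sql
-- inspect the stored node-to-node connection paths (sketch)
SELECT pa_server, pa_client, pa_conninfo
  FROM _foo.sl_path;
```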

Anyway, next thought: if one node gets hacked, the attacker shouldn't be
able to access the databases on the other nodes. The idea was: the
slaves do not need to access the master with a user who has write access
to that database (slony). That's why I created a read-only user on the
master (slony_ro) and tried to tell the slaves to connect as 'slony_ro'
by changing the user 'slony' to 'slony_ro' in their
slon_tools.conf files. However, that change also had no effect after
restarting slony.
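For completeness, the read-only user was created roughly like this on the master (a sketch - role name and grants are from my setup, and the actual statements may have differed):

```sql
-- create a login role without write privileges
CREATE ROLE slony_ro LOGIN PASSWORD 'XXX';
GRANT CONNECT ON DATABASE foo TO slony_ro;
GRANT USAGE ON SCHEMA public TO slony_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO slony_ro;
```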

It seems to me that by initializing the cluster and by creating and
subscribing the to-be-replicated sets, this information got pushed from
the master to the slaves.

That raises two (sub-)questions:
a) Where is this information stored?
b) Why is there a need for a slon_tools.conf file at all if its data is
not used anyway (at least on the slaves)?

Maybe somebody could enlighten me here? I haven't found any information
that clears up my confusion about this yet :/

Cheers, thanks a lot in advance and have a nice week!

  mirko

