- Table of Contents
- 4.1. Events & Confirmations
- 4.2. Slony-I Listen Paths
- 4.3. Slony-I Trigger Handling
- 4.4. Locking Issues
- 4.5. Log Shipping - Slony-I with Files
Slony-I transfers configuration changes and application data through events. Events in Slony-I have an origin, a type, and some parameters. When an event is created, it is inserted into the event queue (the sl_event table) on the node where it originates. The remoteListener threads of the remote slon processes then pick up that event (by querying the sl_event table) and pass it to their slon's remoteWorker thread for processing.
An event is uniquely identified via the combination of the node id of the node the event originates on and the event sequence number for that node. For example, (1,5000001) identifies event 5000001 originating from node 1. In contrast, (3,5000001) identifies a different event that originated on a different node.
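The event queue and its identity rule can be sketched as a table definition. This is illustrative only: the real sl_event table carries additional columns (timestamps, snapshot information, event payload fields), so treat the shape below as a simplification.

```sql
-- Simplified sketch of the event queue; the real sl_event has more columns.
CREATE TABLE sl_event (
    ev_origin  int4,   -- id of the node the event originated on
    ev_seqno   int8,   -- per-origin event sequence number
    ev_type    text,   -- e.g. 'SYNC' or a configuration event type
    -- (1,5000001) and (3,5000001) are distinct events:
    PRIMARY KEY (ev_origin, ev_seqno)
);
```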
SYNC events are used to transfer application data from one node to the next. When data in a replicated table changes, a trigger fires that records information about the change in the sl_log_1 or sl_log_2 table. The localListener thread of the slon process then periodically generates a SYNC event. When the SYNC event is created, Slony-I determines the highest log_seqid assigned so far, along with the list of log_seqid values that were assigned to transactions that had not yet committed. All of this information is stored as part of the SYNC event.
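The bookkeeping a SYNC carries (a high-water mark plus a list of still-in-flight ids) is conceptually the same as a PostgreSQL transaction snapshot. As a loose analogy only, not what Slony-I literally executes:

```sql
-- Illustrative analogy: a PostgreSQL snapshot has the same shape as the
-- information recorded in a SYNC event.
SELECT txid_current_snapshot();
-- Result has the form xmin:xmax:xip_list, i.e. a lower bound, an upper
-- bound, and the transactions that were still in progress in between.
```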
When the remoteWorker thread of a slon processes a SYNC, it queries from sl_log_1 and sl_log_2 the rows covered by the SYNC (i.e., rows whose log_seqid had been committed by the time the SYNC was generated). The data modifications described by these log rows are then applied to the subscriber.
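The selection of covered rows can be sketched as follows. This is a simplification, assuming the column names used here: the real query is generated by slon, also bounds the range by the previous SYNC, and handles the sl_log_1/sl_log_2 switch.

```sql
-- Sketch of the rows one SYNC covers (column names simplified).
SELECT log_origin, log_tableid, log_cmdtype, log_cmddata
  FROM sl_log_1
 WHERE log_origin = 1                       -- data originating on node 1
   AND log_seqid <= 5000123                 -- highest seqid the SYNC recorded
   AND log_seqid NOT IN (5000117, 5000121)  -- seqids uncommitted at SYNC time
 ORDER BY log_seqid;
```

Rows excluded here because their transactions had not yet committed are picked up by a later SYNC.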
When an event is processed by the slon process for a remote node, a CONFIRM message is generated by inserting a tuple into the sl_confirm table. This tuple indicates that a particular event has been confirmed by a particular receiver node. Confirmation messages are then transferred back to all other nodes in the cluster.
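A confirmation can be sketched as a single insert. The column names match the usual sl_confirm layout, though the real table also records a timestamp:

```sql
-- Receiver node 2 confirms that it has processed event 5000123
-- originating on node 1.
INSERT INTO sl_confirm (con_origin, con_received, con_seqno)
VALUES (1, 2, 5000123);
```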
The slon cleanupThread periodically runs the cleanupEvent(p_interval interval) database function, which deletes all but the most recently confirmed event for each origin/receiver pair. (This is safe because if an event has been confirmed by a receiver, then all older events from that origin must also have been confirmed by that receiver.) The function then deletes all SYNC events that are older than the oldest row left in sl_confirm for each origin. The data for the deleted events is also removed from the sl_log_1 and sl_log_2 tables.
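The two deletion steps can be sketched in SQL. This is a conceptual outline of what cleanupEvent does, not its actual body; the real function is driven by the interval parameter and does additional per-origin bookkeeping.

```sql
-- 1. Keep only the newest confirmation per (origin, receiver) pair.
DELETE FROM sl_confirm c
 WHERE con_seqno < (SELECT max(con_seqno) FROM sl_confirm
                     WHERE con_origin   = c.con_origin
                       AND con_received = c.con_received);

-- 2. Drop SYNC events older than the oldest remaining confirmation
--    for their origin.
DELETE FROM sl_event e
 WHERE ev_type = 'SYNC'
   AND ev_seqno < (SELECT min(con_seqno) FROM sl_confirm
                    WHERE con_origin = e.ev_origin);
```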
When Slony-I is first enabled, it logs the data to be replicated in the sl_log_1 table. After a while it stops logging to sl_log_1 and switches to sl_log_2. Once all the data in sl_log_1 is known to have been replicated to every other node, Slony-I TRUNCATEs the sl_log_1 table, clearing out the now-obsolete replication data. It then stops logging to sl_log_2 and switches back to the freshly truncated sl_log_1. This cycle repeats for as long as Slony-I runs, keeping the log tables from growing without bound. Using TRUNCATE guarantees that the tables are properly emptied out.
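The reclaim step of the cycle amounts to the following. slon performs this automatically; the statement is shown only to make the choice of TRUNCATE over DELETE concrete:

```sql
-- Once every node is known to have replicated all of sl_log_1's contents:
TRUNCATE sl_log_1;
-- Unlike DELETE, TRUNCATE reclaims the table's space immediately and
-- leaves no dead rows behind; new changes are meanwhile logged to sl_log_2.
```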
slonik can submit configuration commands to different event nodes, as controlled by the parameters of each slonik command. If two commands are submitted to different nodes, it can be important to ensure that they are processed by the other nodes in a consistent order. The slonik WAIT FOR EVENT command may be used to accomplish this, but as of Slony-I 2.1 slonik handles this consistency automatically in a number of circumstances:
- Before slonik submits an event to a node, it waits until that node has confirmed the last configuration event from the previous event node.
- Before slonik submits a SUBSCRIBE SET command, it verifies that the provider node has confirmed all configuration events from all other nodes.
- Before slonik submits a CLONE PREPARE command, it verifies that the node being cloned is caught up with all other nodes in the cluster.
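For situations the automatic handling does not cover (or on versions before 2.1), the ordering can be forced explicitly with WAIT FOR EVENT. A sketch of such a script, with cluster name, conninfo strings, and node numbers invented for illustration:

```
cluster name = mycluster;
node 1 admin conninfo = 'dbname=app host=db1';
node 2 admin conninfo = 'dbname=app host=db2';

# Submit a configuration change with node 1 as the event node ...
create set (id = 1, origin = 1, comment = 'application tables');

# ... and make sure node 2 has confirmed it before submitting
# the next command there.
wait for event (origin = 1, confirmed = 2, wait on = 2);
```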
When slonik starts up, it contacts all nodes for which it has ADMIN CONNINFO information to find the last non-SYNC event from each node. Submitting commands from multiple slonik instances at the same time will confuse slonik and is not recommended. Whenever slonik is waiting for an event confirmation, it displays a message every 10 seconds indicating which events are still outstanding. Any command that might require slonik to wait for event confirmations may not validly be executed within a try block, for the same reason that the WAIT FOR EVENT command may not be used within a try block: it is not reasonable to ask Slony-I to roll back events.