|Slony-I 2.1.4 Documentation|
5.2. Component Monitoring
There are several ways to see what Slony-I processes are up to:
5.2.1. Looking at pg_stat_activity view
The standard PostgreSQL view pg_stat_activity indicates what the various database connections are up to.
On recent versions of PostgreSQL, this view includes an attribute, application_name, which Slony-I components populate based on the names of their respective threads.
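For example, a query along these lines shows what each connection reports. Note that the columns of pg_stat_activity vary between PostgreSQL versions: state and query exist in 9.2 and later, while earlier versions expose current_query instead; adjust the column list accordingly.

```sql
-- Show each connection's self-reported name and activity.
-- "state" and "query" assume PostgreSQL 9.2 or later; on older
-- versions, select "current_query" instead.
SELECT application_name, state, backend_start, query
  FROM pg_stat_activity
 WHERE application_name <> ''
 ORDER BY application_name;
```

Connections opened by slon threads will show up with their thread names in the application_name column.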
5.2.2. Looking at sl_components view
Slony-I has a table, sl_components, introduced in Slony-I 2.1, which captures Slony-I activity for each node.
slonyregress1@localhost-> select * from _slony_regress1.sl_components order by co_actor;
       co_actor       | co_pid | co_node | co_connection_pid |   co_activity    |      co_starttime      | co_event | co_eventtype
----------------------+--------+---------+-------------------+------------------+------------------------+----------+--------------
 local_cleanup        |  24586 |       0 |             24907 | thread main loop | 2011-02-24 17:02:55+00 |          | n/a
 local_listen         |  24896 |       1 |             24900 | thread main loop | 2011-02-24 17:03:07+00 |          | n/a
 local_monitor        |  24586 |       0 |             24909 | thread main loop | 2011-02-24 17:02:55+00 |          | n/a
 local_sync           |  24517 |       0 |             24906 | thread main loop | 2011-02-24 17:03:09+00 |          | n/a
 remote listener      |  24586 |       2 |             24910 | thread main loop | 2011-02-24 17:03:03+00 |          | n/a
 remoteWorkerThread_2 |  24586 |       2 |             24908 | thread main loop | 2011-02-24 17:02:55+00 |          | n/a
(6 rows)
This example indicates the various Slony-I threads that are typically running as part of a slon process:
local_cleanup: This thread periodically wakes up to trim obsolete data and (optionally) vacuum Slony-I tables.

local_listen: This thread listens for events taking place on the local node, and changes the slon's configuration as needed.

local_monitor: This thread is rather self-referential, here; it manages the queue of events to be published to the sl_components table.

local_sync: This thread generates SYNC events on the local database. If the local database is the origin for a replication set, those SYNC events are used to propagate changes to other nodes in the cluster.

remote listener: This thread listens for events on a remote node database, and queues them into the remote worker thread for that node.

remoteWorkerThread_n: This thread waits for events (from the remote listener thread), and takes action. This is the thread that does most of the visible work of replication.
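As an illustration, a query along these lines can highlight threads that are doing more than idling. The WHERE clause and ordering are merely one way to slice the data; substitute your own cluster's schema name for _slony_regress1 from the example above.

```sql
-- List components doing something other than waiting in their main
-- loop, oldest activity first.  The schema name _slony_regress1 is
-- taken from the example session; replace it with _<your cluster>.
SELECT co_actor, co_activity, co_starttime, co_event, co_eventtype
  FROM _slony_regress1.sl_components
 WHERE co_activity <> 'thread main loop'
 ORDER BY co_starttime;
```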
5.2.3. Notes On Interpreting Component Activity
Many of these threads will typically report thread main loop as their activity, which indicates that the thread exists and is simply executing its main loop, waiting for work to do.
Most threads will never indicate an event or event type, as they do not process Slony-I events.
The local_monitor thread never reports any activity.

It might seem desirable for this thread, which manages sl_components, to report on its own work. Unfortunately, logging its own events would keep it perpetually busy: processing the queue would add a monitoring entry, which would in turn need to be processed, making the monitoring recursive and never-ending.

It does report in when it starts up, so its entry may be taken to indicate the time at which the slon process began.
Timestamps are based on the clock time of the slon process.
For the timestamps to be accurate, it is important to use NTP or similar technology to keep servers' clocks synchronized, as recommended in Section 1.2. If the host where a slon runs has its clock significantly out of sync with the database that it manages, queries against sl_components may produce results that will confuse the reader.
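One rough way to spot such drift is to compare each component's reported start time (taken from the slon host's clock) against the database server's clock. This is only a sanity check, not a precise measurement; the schema name _slony_regress1 is taken from the example session above.

```sql
-- Compare slon-reported start times against the database clock.
-- Implausible ages (e.g. negative, or far larger than the slon's
-- actual uptime) suggest clock drift between the slon host and the
-- database server.  Replace _slony_regress1 with your schema name.
SELECT co_actor, co_starttime,
       now() - co_starttime AS age_by_db_clock
  FROM _slony_regress1.sl_components
 ORDER BY co_starttime;
```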