Fri Oct 8 11:14:59 PDT 2010
- Previous message: [Slony1-hackers] Docs not building just now
- Next message: [Slony1-hackers] Slonik uninstall node
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Slony version 2.0.3

I have a 2-node Slony cluster, and I wish to clean everything up (manually) if one of the nodes is detected to have failed. Is just stopping the slon daemons and then executing 'uninstall node' for the remaining node enough to clean up everything?

    $ ./slon_kill
    $ ( ./slonik_print_preamble && echo 'uninstall node ( id = 2 );' ) | slonik

I looked at slonik_uninstall_node() in slonik.c and at the _cluster_name.uninstallNode() plpgsql function. All I could see is that slonik_uninstall_node() calls the plpgsql function and then issues 'drop schema _cluster_name cascade;', while the plpgsql function itself just issues a 'lock table _cluster_name.sl_config_lock'. So I don't see a problem with performing the above two commands to clean up, and then configuring the replication setup from scratch.

Any objections?

Regards,
-- 
gurjeet.singh @ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurjeet@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device
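For reference, here is a sketch of the slonik script that the pipeline above ends up feeding to slonik. The preamble values (cluster name 'mycluster', the conninfo string) are placeholders for illustration, not taken from the original setup; slonik_print_preamble would emit the real ones from the cluster's configuration.

```shell
# Sketch only: the 'cluster name' and 'admin conninfo' values below are
# hypothetical placeholders standing in for slonik_print_preamble's output.
preamble='cluster name = mycluster;
node 2 admin conninfo = '\''dbname=mydb host=node2'\'';'

# Append the uninstall command, as the echo in the pipeline does.
script="$preamble
uninstall node ( id = 2 );"

# Print the assembled script; in the real pipeline this text is piped
# straight into slonik, which would then drop the _mycluster schema.
printf '%s\n' "$script"
```

Piping this into slonik (instead of printing it) reproduces the second command from the pipeline above.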