Csaba Nagy
Thu Feb 17 17:34:30 PST 2005
Hi all,

It's just marginally relevant to the issue at hand: in our code we drop
the connection on any error we cannot map to an expected condition. On
such unexpected problems this eventually recycles all connections, after
just one error per connection. Of course, if the error surfaces as a
constraint violation, which is a recognized condition, this won't help
either; and if the problem is persistent, the connections will keep
recycling, which negates the benefit of the pooling. In the latter case
the pooling code could also alert an admin to take a look...

HTH,
Csaba.

On Tue, 2005-02-15 at 13:03, John Sidney-Woollett wrote:
> Thanks for the info, Richard.
> 
> I didn't think that it was a slony issue per se, but that a note should 
> be added to the slony docs warning users to recycle connections after 
> making substantive changes to the schema.
> 
> You're right, we use both (java) prepared statements and pl/pgsql functions.
> 
> The data loss aspect is not so clear cut (for us). We definitely got 
> records that failed to insert (missing sequence numbers in tables) while 
> others did insert correctly into the same tables (using the same 
> pl/pgsql functions). So we can't figure out what the pattern is.
> 
> We do have our connection pool recycle idle connections, though - it 
> could be that things worked when a newly recycled connection was used 
> by the web app, and failed when one of the "old/original" connections 
> was used. This is probably what masked the errors for us...
> 
> It would be great if postgres could "recompile" pl/pgsql functions 
> whenever it found that an object referenced within the function was 
> missing - chances are that it would recompile cleanly (in this case) 
> and could then be executed without error.
> 
> something along the lines of
> 
> execute function
>   OK        -> return result
>   OID error -> recompile function, and re-execute
>     OK        -> return result
>     OID error -> report error
> 
> This would also help get rid of the workaround for temp tables in 
> functions, where you have to use an execute statement.
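> 
> For reference, that workaround looks roughly like this (just a sketch, 
> the function and table names are made up) - the temp table has to be 
> touched through EXECUTE so that no plan caching its OID survives 
> between calls:
> 
>   CREATE OR REPLACE FUNCTION temp_table_demo() RETURNS integer AS '
>   DECLARE
>       n integer;
>       r record;
>   BEGIN
>       -- re-create the temp table on every call
>       EXECUTE ''CREATE TEMP TABLE tmp_work (id integer)'';
>       EXECUTE ''INSERT INTO tmp_work VALUES (1)'';
> 
>       -- read it through EXECUTE as well; a plain
>       --   SELECT INTO n count(*) FROM tmp_work;
>       -- fails on the second call in the same session, because the
>       -- cached plan still points at the OID of the first temp table
>       FOR r IN EXECUTE ''SELECT count(*) AS c FROM tmp_work'' LOOP
>           n := r.c;
>       END LOOP;
> 
>       EXECUTE ''DROP TABLE tmp_work'';
>       RETURN n;
>   END;
>   ' LANGUAGE plpgsql;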
> 
> Thanks for your help and feedback.
> 
> John Sidney-Woollett
> 
> Richard Huxton wrote:
> 
> > John Sidney-Woollett wrote:
> > 
> >> Hopefully this will prevent data loss or problems for others using 
> >> slony 1.0.5 and pg 7.4.6...
> >>
> >> We just got bitten by something we didn't foresee when completely 
> >> uninstalling a slony replication cluster from the master and slave...
> >>
> >> MAKE SURE YOU STOP YOUR APPLICATION RUNNING AGAINST YOUR MASTER 
> >> DATABASE WHEN REMOVING THE WHOLE SLONY CLUSTER, or at least recycle 
> >> all your open connections after the event!
> >>
> >> The connections appear to "remember" or refer to objects which are 
> >> removed by the uninstall node script. And you get lots of errors as a 
> >> result (and possible data loss??)...
> >>
> >> Question: Why do our database objects still refer to removed slony 
> >> objects after they are removed?
> > 
> > 
> > Well, there are two areas I know of that cache plans/OIDs:
> >  1. Prepared statements
> >  2. plpgsql functions
> > 
> > At a guess, since you mention Java the first almost certainly applies to 
> > you.
> > 
> > This isn't a slony issue so much as a cached plan issue. I'm guessing 
> > the same problems would occur if you were manually changing the database 
> > schema.
> > 
> > I don't think you could get data loss (unless the application ignores 
> > errors). You will, however, get to see a wide range of OID-related errors.
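> > 
> > As a quick illustration of the second case (a sketch only, with 
> > made-up names) - in a single 7.4 session:
> > 
> >   CREATE TABLE demo_t (id integer);
> > 
> >   CREATE FUNCTION count_demo() RETURNS bigint AS '
> >   DECLARE
> >       n bigint;
> >   BEGIN
> >       SELECT INTO n count(*) FROM demo_t;
> >       RETURN n;
> >   END;
> >   ' LANGUAGE plpgsql;
> > 
> >   SELECT count_demo();   -- first call plans the SELECT and caches the
> >                          -- OID of demo_t along with the plan
> > 
> >   DROP TABLE demo_t;
> >   CREATE TABLE demo_t (id integer);
> > 
> >   SELECT count_demo();   -- same session: fails with a "relation with
> >                          -- OID ... does not exist" style error, because
> >                          -- the cached plan still refers to the old table
> > 
> > Reconnecting (or re-creating the function) clears it, which is why 
> > recycling the connections makes the errors go away.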
> > 
> > -- 
> >   Richard Huxton
> >   Archonet Ltd
> 


