Martin Eriksson m.eriksson at albourne.com
Fri Aug 22 09:06:09 PDT 2008
I don't know fully how the cache server works, as I'm not really a 
developer as such and I've never looked closely at the code, but I know 
that it's storing objects and not rows. And no, we don't use Hibernate.

Shahaf Abileah wrote:
> Do you use Hibernate or some other O-R mapping layer?  Do you cache your data as domain objects, or as some Record representation?
>
> --S
>
> -----Original Message-----
> From: slony1-general-bounces at lists.slony.info [mailto:slony1-general-bounces at lists.slony.info] On Behalf Of Martin Eriksson
> Sent: Friday, August 22, 2008 8:13 AM
> Cc: slony1-general at lists.slony.info
> Subject: Re: [Slony1-general] Slony over a WAN?
>
> Our cache server is written entirely in-house, specifically for our 
> application/database (not all data is cached, only things likely to be 
> accessed often, etc.). But we do use Java as our language of choice, 
> and the cache server uses the Sun Java rmiregistry.
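>
> For anyone curious, the basic mechanics look something like the sketch 
> below. It is just a minimal illustration of binding a cache service in 
> the rmiregistry; the interface and class names are hypothetical, not 
> our actual code.
>
>     import java.io.Serializable;
>     import java.rmi.Remote;
>     import java.rmi.RemoteException;
>     import java.rmi.registry.LocateRegistry;
>     import java.rmi.registry.Registry;
>     import java.rmi.server.UnicastRemoteObject;
>     import java.util.Map;
>     import java.util.concurrent.ConcurrentHashMap;
>
>     // Hypothetical remote interface; cached values must be Serializable
>     // so RMI can marshal them across the wire.
>     interface ObjectCache extends Remote {
>         Serializable get(String key) throws RemoteException;
>         void put(String key, Serializable value) throws RemoteException;
>     }
>
>     class ObjectCacheImpl extends UnicastRemoteObject implements ObjectCache {
>         private final Map<String, Serializable> store =
>             new ConcurrentHashMap<String, Serializable>();
>
>         ObjectCacheImpl() throws RemoteException { super(); }
>
>         public Serializable get(String key) { return store.get(key); }
>         public void put(String key, Serializable value) { store.put(key, value); }
>     }
>
>     public class CacheServer {
>         public static void main(String[] args) throws Exception {
>             // Start an rmiregistry in-process on the default port (1099)
>             // and bind the cache under a well-known name.
>             Registry registry = LocateRegistry.createRegistry(Registry.REGISTRY_PORT);
>             registry.rebind("ObjectCache", new ObjectCacheImpl());
>             System.out.println("Cache server bound in rmiregistry");
>         }
>     }
>
> A client in the same office then grabs a remote handle with 
> LocateRegistry.getRegistry(cacheHost) followed by 
> registry.lookup("ObjectCache").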
>
> The cache server is updated as soon as it picks up changes coming in 
> from the replication. It's not perfect, but it works pretty well: worst 
> case, a user gets an error message when trying to write to the master 
> DB, but the application then refreshes the cache, and by the time the 
> cache is updated the write will go through.
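>
> In code, that worst-case flow is roughly the sketch below. It assumes a 
> plain JDBC connection to the master and some cache handle with a 
> refresh operation; the class and method names are made up for 
> illustration, not taken from our real code.
>
>     import java.sql.Connection;
>     import java.sql.DriverManager;
>     import java.sql.PreparedStatement;
>     import java.sql.SQLException;
>
>     // Hypothetical cache handle; refresh() re-reads the affected
>     // objects so the cache reflects the latest replicated state.
>     interface CacheHandle {
>         void refresh();
>     }
>
>     public class MasterWriter {
>         // Assumed connection string; writes always target the Slony origin.
>         private static final String MASTER_URL =
>             "jdbc:postgresql://master.example.com/appdb";
>
>         public void writeWithRetry(String sql, CacheHandle cache) throws SQLException {
>             try {
>                 execute(sql);
>             } catch (SQLException firstAttemptFailed) {
>                 // The write was rejected, e.g. because it conflicted with
>                 // a change the cache had not seen yet. Refresh the cache
>                 // and retry; by then the replicated change has usually
>                 // arrived and the write goes through.
>                 cache.refresh();
>                 execute(sql);
>             }
>         }
>
>         private void execute(String sql) throws SQLException {
>             try (Connection conn = DriverManager.getConnection(MASTER_URL);
>                  PreparedStatement stmt = conn.prepareStatement(sql)) {
>                 stmt.executeUpdate();
>             }
>         }
>     }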
>
>
>
>
> Shahaf Abileah wrote:
>> Thanks for the great info Martin.
>>
>> If you don't mind me asking, what technology do you use for your cache?  How do you combine the data in a local site's cache with the (potentially different) data in that local site's slave DB?
>>
>> Thanks,
>>
>> --S
>>
>> -----Original Message-----
>> From: slony1-general-bounces at lists.slony.info [mailto:slony1-general-bounces at lists.slony.info] On Behalf Of Martin Eriksson
>> Sent: Friday, August 22, 2008 12:54 AM
>> Cc: slony1-general at lists.slony.info
>> Subject: Re: [Slony1-general] Slony over a WAN?
>>
>> Hi, we do Slony over a WAN where W = World, hehe.
>>
>> Our master database is in London; then we have one node in Cyprus, one 
>> node in San Francisco, one node in Frankfurt, and we're about to add 
>> another node in Hong Kong. Our database is only around 8 GB, though.
>>
>> We have a bit of a special setup in terms of the applications using 
>> the DBs: we want our application to read locally but write to the 
>> master, so we have a pretty advanced cache system to handle it. If 
>> Bill, sitting in San Francisco, writes to the DB and then looks at the 
>> data, he sees the data he just wrote, and so does everyone else in 
>> that office, even though it might not have been replicated to his 
>> local DB yet. This works pretty well; we never see more than 5-15 
>> seconds of delay on a Slony event across the globe, so in the worst 
>> case, if someone tries to change something that has already been 
>> changed, the write fails and they just reset their cache (which takes 
>> 2 minutes) and then continue working.
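>>
>> The routing half of that is simple in principle. A minimal sketch, 
>> with hostnames and class names invented for illustration rather than 
>> taken from our setup:
>>
>>     import java.sql.Connection;
>>     import java.sql.DriverManager;
>>     import java.sql.SQLException;
>>
>>     // Reads go to the nearby Slony subscriber; writes go to the origin,
>>     // since subscribers are read-only under Slony. The cache layer sits
>>     // in front of forRead() so an office sees its own recent writes
>>     // before replication catches up.
>>     public class ConnectionRouter {
>>         private static final String LOCAL_REPLICA =
>>             "jdbc:postgresql://db-local-office.example.com/appdb"; // assumed host
>>         private static final String MASTER =
>>             "jdbc:postgresql://db-london.example.com/appdb";       // assumed host
>>
>>         public Connection forRead() throws SQLException {
>>             return DriverManager.getConnection(LOCAL_REPLICA);
>>         }
>>
>>         public Connection forWrite() throws SQLException {
>>             return DriverManager.getConnection(MASTER);
>>         }
>>     }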
>>
>> And we do have an AWFUL line London <-> Cyprus that on average does 
>> 40 kbytes/s, which is horribly low. But it still works 100 times 
>> better than having both reads and writes going to London all the time 
>> from across the world.
>>
>> Of course, when the shit hits the fan, so to say, when we do DDL 
>> changes (which happens every 3 months for a release) and we can't 
>> recover, it will take up to 72 hours to replicate to all nodes (which 
>> is not really an option).
>>
>> But if you have good bandwidth between your nodes, it's not a problem.
>>
>> Though if you have an 80 GB DB, you might want to consider not 
>> replicating it across the WAN, as the initial sync will take quite a 
>> long time and might cost you a lot depending on your ISP, I guess. 
>> (Back of the envelope: 80 GB at 40 kbytes/s is roughly 24 days of 
>> solid transfer; on a saturated gigabit line it's more like 11 
>> minutes.) Well, if you have a gigabit line and don't mind using it, 
>> then I guess you are fine :) But if your bandwidth is limited, you 
>> could do what we sometimes do:
>>
>> Set up a second DB instance on your master node (assuming the node 
>> and the master are on the same hardware architecture) and replicate 
>> to that local instance. Once it's done, shut that instance down, move 
>> its whole /data directory onto some removable disk, and drive down to 
>> the other node. Load it up, update the path to the node using the 
>> slonik STORE PATH command, fire it up, and let it catch up on the 
>> last hour or so of new data.
>>
>>  slon processes should run in the same "network context" as the node that
>> each is responsible for managing so that the connection to that node is a
>> "local" one. 
>>      Do not run such links across a WAN.
>>
>>
>>
>> It's already been covered, but I'll add to it. Yes, you should run 
>> the slon process on the node in question; do not run them all on the 
>> master node. Not only will it work better, but with multiple slon 
>> processes running on one machine it can get VERY confusing to figure 
>> out which one goes where and which one has connections open. Life is 
>> much easier if one slon process runs on each node's machine.
>>
>> Good luck!
>> Martin
>>
>>
>> Mark Steben wrote:
>>> Looking for advice on how to proceed.  We are running Postgres 8.2.5 in Lee,
>>> Massachusetts.  We will be installing the same in Norfolk, Virginia in the
>>> not too distant future.  Our need is to replicate roughly 60-70 percent
>>> of our 80 GB database in Lee over a WAN to Norfolk for reporting purposes.
>>> In reading 'Slony-1 Best Practices' on the website I came across the
>>> following statement:
>>>      
>>>   slon processes should run in the same "network context" as the node that
>>> each is responsible for managing so that the connection to that node is a
>>> "local" one. 
>>>      Do not run such links across a WAN.
>>>
>>> Does this still hold true?  If not, I would like to hear the experiences
>>> of anyone engaging in a 'Slony-1 long distance relationship'.
>>> Any other alternatives to consider?  I do run Slony-1.2.14 in development
>>> with everything encompassed in Lee.  However, we will be opening another
>>> office 35 miles west in West Springfield that I will be operating out of.
>>> I plan on employing Slony-1 to provide replication between these two
>>> 'not so long distance' locations.
>>>
>>> Any comments welcome.  Thanks
>>>
>>>
>>> Mark Steben │ Database Administrator │ @utoRevenue™
>>> 480 Pleasant Street, Suite B200, Lee, MA 01238 
>>> 413-243-4800 x1512 (Phone) │ 413-243-4809 (Fax)
>>> A Division of Dominion Enterprises
>>>


