[storm] Freeze while dropping multiple rows spanning multiple tables

akira nhytro-python at web.de
Fri Oct 5 03:35:41 BST 2007


I just found out that a form was being submitted multiple times, which is
what was holding all those locks. Thanks for the info and tips, guys!

Stuart Bishop wrote:
> akira wrote:
>   
>> Hi Gustavo, you are right about the locks, there were many locks
>> being held. Is there some way I can handle this better? I'm doomed :-((
>>     
>
> Usually this is caused by some other process being badly behaved, and that
> is probably the best place to start if it is under your control. Badly
> behaved in this case means running for a long time without committing -
> maybe it is necessary because there are a lot of changes that need to be made
> and they need to be done in a single transaction to ensure integrity, or
> maybe the developer doesn't fully understand the concept of transactions and
> transaction isolation.
>
> If these other processes are legitimately holding locks open for extended
> periods of time or you just have to deal with it, then you need to decide
> what the correct behaviour should be. You have the choice of blocking,
> failing and returning an error, or detecting the outstanding lock and
> skipping steps that need that resource.
>
> Blocking is the default behaviour in general, and what you probably have now.
> There is also a chance, if your process and the competing process are trying
> to access resources in different orders, of ending up with a deadlock. Under
> PostgreSQL this will raise an exception. (What backend are you using?)
>
> If you want either of the other two options or handle deadlocks more
> gracefully, it starts getting backend specific. Under PostgreSQL you use the
> LOCK statement to explicitly obtain locks on the resources you need. To
> detect a competing lock, you start a SAVEPOINT and issue a LOCK TABLE ...
> NOWAIT statement, which will raise an exception instead of blocking if the
> lock cannot be obtained, at which point you can roll back to the savepoint
> and continue on or report a pretty error to the user.
>
> (None of this is Storm-specific, of course.)
>
>   
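
Two rough sketches in case they help anyone else following the thread.
First, retrying a transaction that PostgreSQL aborts to resolve a deadlock,
as Stuart mentions above. This assumes the backend's deadlock error
surfaces as storm.exceptions.OperationalError (an assumption -- check what
your backend's wrapper actually raises), and work() is a made-up callable:

    import time

    from storm.exceptions import OperationalError

    def run_with_retry(store, work, attempts=3):
        """Run work(store) and commit, retrying when the transaction is
        aborted (for example because it was picked as the deadlock victim)."""
        last_error = None
        for attempt in range(attempts):
            try:
                work(store)
                store.commit()
                return
            except OperationalError, e:
                last_error = e
                store.rollback()  # throw away the aborted transaction
                time.sleep(0.5)   # let the competing process get clear
        raise last_error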

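Second, the savepoint/NOWAIT trick Stuart describes, done with raw SQL
through a Storm store against PostgreSQL. The table name, lock mode,
connection URI and the surrounding delete are all illustrative, not from
anyone's real schema:

    from storm.locals import create_database, Store
    from storm.exceptions import OperationalError

    database = create_database("postgres://user:password@localhost/mydb")
    store = Store(database)

    def try_lock(store, table):
        """Return True if we obtained an exclusive lock on `table`,
        False if a competing transaction already holds one."""
        store.execute("SAVEPOINT before_lock")
        try:
            # NOWAIT raises an error instead of blocking on a held lock.
            store.execute(
                "LOCK TABLE %s IN ACCESS EXCLUSIVE MODE NOWAIT" % table)
        except OperationalError:
            # Undo the failed LOCK so the transaction stays usable.
            store.execute("ROLLBACK TO SAVEPOINT before_lock")
            return False
        store.execute("RELEASE SAVEPOINT before_lock")
        return True

    if try_lock(store, "some_table"):
        store.execute("DELETE FROM some_table WHERE batch_id = 42")
        store.commit()
    else:
        store.rollback()
        print "some_table is locked elsewhere; skipping the delete."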





