win32 file locking

Andrew Bennetts andrew at
Wed May 25 07:23:42 BST 2005

On Wed, May 25, 2005 at 12:49:38AM -0500, John A Meinel wrote:
> I agree that resources should be released explicitly, that is what
> .commit() is for. But in the case that there is an exception that wasn't
> caught properly, it seems prudent to have the resource cleaned up
> eventually.
> I suppose the alternative is to wrap your AtomicFile in a try: finally:
> block. That takes more effort, and is easy to leak resources accidentally.
> Take your pick. I'll probably always try to do the try/finally. And it
> is probably the recommended method since that does force the release of
> resources. But that doesn't mean having __del__ around as a cleanup is a
> terrible idea.

The problem with adding a __del__ just-in-case is that it's usually a poor
tradeoff -- in exchange for having a cleanup that might (but might not, so
you can't rely on it) trigger when you forget, you also get the possibility
of memory leaks: CPython's cycle collector refuses to collect a reference
cycle that contains an object with a __del__ method, so such cycles just sit
in gc.garbage forever.
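To make the cycle problem concrete, here is a small demonstration.  (One
caveat: on a modern CPython, 3.4 and later, PEP 442 changed this so that
gc.collect() *can* finalise and collect such cycles; on the interpreters of
this era they would never be collected at all.  The part that holds either
way is that reference counting alone cannot free a cycle.)

```python
import gc
import weakref


class Holder:
    def __del__(self):
        # pretend this releases a lock or a file handle
        pass


a = Holder()
a.me = a                      # a reference cycle: the object refers to itself
probe = weakref.ref(a)        # lets us observe whether it is still alive

del a                         # refcounting alone cannot free a cycle...
assert probe() is not None    # ...so the object is still alive here

gc.collect()                  # only the cycle collector can reclaim it;
                              # pre-3.4 CPython would refuse because of __del__
```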

There's a compromise that might be helpful.  Twisted recently removed the
__del__ method on its Deferred objects, and instead moved the __del__ onto a
sub-object that, unless someone is being naughty, should never be in a
cycle, by virtue of never being referenced from anywhere but the Deferred.
i.e. it went from this:

    class Deferred:
        def __del__(self):
            # log a warning if it wasn't explicitly cleaned up
            pass

to this (simplified):

    class Deferred:
        _debugInfo = None

        def foo(self):
            if self.isDirty():
                if self._debugInfo is None:
                    self._debugInfo = DebugInfo()
                self._debugInfo.dirty = True
                # add some debug strings to the DebugInfo, to be logged if
                # necessary, e.g. the string of a traceback of the caller:
                self._debugInfo.msg = "foo called, but cleanup wasn't"

    class DebugInfo:
        dirty = False
        msg = ''

        def __del__(self):
            if self.dirty:
                print self.msg

I think this is a reasonable compromise: don't rely on __del__ to actually
perform cleanup, but at least warn the user that it didn't happen, and
therefore that there's a bug that needs fixing.  Note that DebugInfo cannot
have a reference back to the Deferred that created it, so if you want to
print anything useful in DebugInfo.__del__, the owner must set it on the
DebugInfo explicitly (preferably as simple strings, to minimise the chance
of an unexpected reference cycle).
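A self-contained version of the pattern, written in modern Python and with a
module-level list standing in for Twisted's real logging machinery (both of
those are my substitutions, not Twisted's code), shows the warning firing
only when cleanup is forgotten:

```python
warning_log = []   # stand-in for a real logging mechanism


class DebugInfo:
    dirty = False
    msg = ''

    def __del__(self):
        # runs reliably because DebugInfo itself is never in a cycle
        if self.dirty:
            warning_log.append(self.msg)


class Resource:
    _debugInfo = None

    def acquire(self):
        # becoming "dirty": cleanup is now owed
        if self._debugInfo is None:
            self._debugInfo = DebugInfo()
        self._debugInfo.dirty = True
        # only plain strings go in, to avoid creating a back-reference cycle
        self._debugInfo.msg = "acquire() called, but release() wasn't"

    def release(self):
        if self._debugInfo is not None:
            self._debugInfo.dirty = False


# forgetting to release: the DebugInfo's __del__ logs a warning
r = Resource()
r.acquire()
del r

# releasing properly: no warning
r = Resource()
r.acquire()
r.release()
del r
```

Because Resource holds the only reference to its DebugInfo and neither is in
a cycle, CPython's refcounting frees both promptly on `del r`, so the
warning appears immediately after the first deletion and never after the
second.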

This works because Deferred isn't part of a reference cycle with a __del__
(assuming the code everywhere else is sane!) and it holds the only reference
to its DebugInfo, so when it is collected you know (in the CPython
implementation, anyway) that the corresponding DebugInfo will also be
collected, and its __del__ run.
Another possibility is to use weakref callbacks, although they are more
limited by design -- by the time the weakref callback is fired, the object
has already been collected (but this is what allows them to avoid the
problems of __del__ in the first place).  Also beware of segfault bugs in
the weakref module in Pythons older than 2.3.5.
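A weakref-based equivalent might look like the following sketch (names and
structure are mine, not from any particular library).  The constraint
mentioned above shows up directly: by the time the callback fires the object
is already gone, so any warning text must be captured up front as plain
data, never read back from the object:

```python
import weakref

warning_log = []
_live_refs = []          # the weakref objects must stay alive to fire


class Resource:
    def __init__(self):
        # capture the warning state now, as plain data the callback can see
        # without touching the (already-collected) Resource
        state = {'cleaned': False,
                 'msg': "Resource collected without explicit cleanup"}
        self._state = state

        def callback(ref, state=state):
            # invoked after the Resource has been collected
            if not state['cleaned']:
                warning_log.append(state['msg'])

        _live_refs.append(weakref.ref(self, callback))

    def cleanup(self):
        # disarm the warning; the callback still fires but stays quiet
        self._state['cleaned'] = True


r = Resource()
del r                    # forgot cleanup: the callback logs a warning

r = Resource()
r.cleanup()
del r                    # cleaned up first: no warning
```

Note that the closure only captures the state dict, which never refers back
to the Resource, so no cycle is introduced -- which is exactly what lets
weakref callbacks sidestep the __del__ problems.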


More information about the bazaar mailing list