[storm] connection pooling

Drew Smathers drew.smathers at gmail.com
Tue Jul 31 17:38:24 BST 2007


> Not right now.  It'd be somewhat easy to support it, since we can just
> put connections back in the database once they're close()d, and return
> them on new connection requests.  We have plans to support this in the
> base database class, which would immediately offer pooling for all backends.

Yeah, a backend-agnostic approach would make sense - rather than tying
in messy logic to enable each backend's own pooling support where
available.  psycopg2 has a pool module, for example.
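
Roughly what I mean, using psycopg2's pool module (a minimal sketch;
the DSN is made up):

from psycopg2.pool import ThreadedConnectionPool

# One pool per process, capped at 20 connections.
pool = ThreadedConnectionPool(1, 20, "dbname=example user=example")

conn = pool.getconn()        # check a connection out
try:
    cur = conn.cursor()
    cur.execute("SELECT 1")
    cur.close()
finally:
    pool.putconn(conn)       # check it back in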

> OTOH, we've never felt the need for it, because of how zstorm works.
> Stores are cached per-thread, and they keep a reference to the
> connection, so in practice we have not only connection pooling, but
> objects that may stay in memory across transactions and requests.
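
If I'm reading that right, the zstorm pattern is roughly the following
(a sketch assuming storm.zope.zstorm's global ZStorm registry; the
store name and URI are made up):

from storm.zope.zstorm import global_zstorm

def get_store():
    # Returns the calling thread's cached Store, creating it (and its
    # connection) on first use, so each thread keeps reusing the same
    # connection across transactions.
    return global_zstorm.get("main", "sqlite:")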

Reasonable.  However, this doesn't solve the problem where you have
many concurrent sessions and your angry Oracle DBA only lets you keep
a max of 20 open connections per process - so you need to share
connections via a checkin/checkout pool.
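
To be concrete, the kind of checkin/checkout pool I mean is roughly
this (a minimal sketch; connection_factory is just whatever creates a
raw backend connection):

import threading
from Queue import Queue, Empty

class ConnectionPool(object):
    """Share at most max_connections connections between sessions."""

    def __init__(self, connection_factory, max_connections=20):
        self._factory = connection_factory
        self._idle = Queue()
        # One slot per allowed connection; taking a slot either reuses
        # an idle connection or permits lazily creating a new one.
        self._slots = threading.Semaphore(max_connections)

    def checkout(self):
        # Blocks here whenever all max_connections are checked out.
        self._slots.acquire()
        try:
            return self._idle.get_nowait()
        except Empty:
            return self._factory()

    def checkin(self, connection):
        self._idle.put(connection)
        self._slots.release()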

> If you ever feel the need for pooling in practice, please let us know.

I actually don't right now ;)  I'm forced to use java/jdbc/oracle for
my day job.  I have some projects outside of work using zope3.  I'd
actually be more interested in zope integration right now - I've
started down the path of implementing my own.  So far it's been easy,
based on a simple thread-local stores model (rough sketch below).
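
In case it's useful to anyone, the model is roughly this (a sketch;
the database URI is just an example):

import threading
from storm.locals import create_database, Store

_database = create_database("sqlite:")
_local = threading.local()

def get_store():
    # Each thread lazily creates one Store (and so one connection) and
    # reuses it for the rest of the thread's lifetime.
    store = getattr(_local, "store", None)
    if store is None:
        store = _local.store = Store(_database)
    return store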


