[storm] question on zstorm (aggressive caching)
Roy Mathew
rmathew8 at gmail.com
Thu Mar 10 22:51:11 UTC 2011
Hi Gustavo, Thanks for your reply.
My assumption was that the goal of caching was to avoid making the
same query more than once in a given transaction.
However, after a bit of digging, my (perhaps flawed) analysis suggests
that this is not the case: the caching model used in the Storm
container is more about managing the in-memory objects that correspond
to table rows, and doing the right thing with their state when they
are dirty, and so on.
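If I am reading it right, the distinction looks like this (a minimal
sketch with a hypothetical Item class and table; store.get() by
primary key reuses the live object, while store.find() always issues
a SELECT):

from storm.locals import Store, Int, Unicode, create_database

class Item(object):
    __storm_table__ = "item"  # hypothetical table
    id = Int(primary=True)
    name = Unicode()

database = create_database("postgres://user@localhost/app")
store = Store(database)

a = store.get(Item, 1)  # issues a SELECT; object enters the alive set
b = store.get(Item, 1)  # same transaction: served from memory, no SQL
assert a is b

# find() is not memoized; each call runs a query, even with the same
# criteria, though it returns the same live object for the same row.
r1 = store.find(Item, Item.name == u"key1").one()
r2 = store.find(Item, Item.name == u"key1").one()  # second SELECT
assert r1 is r2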
Our application uses a SQL container model. I turned on query logging
on the Postgres backend and saw that doing something like this inside
a transaction:
obj1 = C['key1']
obj1 = C['key1'] # and again
(that is to say, invoking __getitem__ twice) causes the same SQL query
to run twice. Could you confirm whether I am understanding this
correctly? Thanks!
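To illustrate what I mean: if the container's __getitem__ is built on
store.find() against a non-primary-key column (this container is just
a hypothetical sketch, not our actual code, reusing the Item class and
store from above), then every lookup hits the database:

class SQLContainer(object):
    def __init__(self, store):
        self.store = store

    def __getitem__(self, key):
        # find() consults the database on every call; only the
        # resulting row objects are cached, not the query itself
        item = self.store.find(Item, Item.name == unicode(key)).one()
        if item is None:
            raise KeyError(key)
        return item

C = SQLContainer(store)
obj1 = C['key1']  # SELECT ... WHERE name = 'key1'
obj1 = C['key1']  # the identical SELECT runs again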
On Thu, Mar 10, 2011 at 9:30 AM, Gustavo Niemeyer <gustavo at niemeyer.net> wrote:
> Hi Roy,
>
> > I would like to override the default cache behavior, so that I can
> > explicitly invalidate the cache, rather than have it be cleared on
> > each transaction commit. My use case is that I have a largely
> > read-only database, and don't want to pay the penalty for a query each
> > time. Has anyone worked on this problem? Is this a reasonable thing to
> > do?
>
> This should be easy to handle by simply not committing/rolling back
> the store. Just allow it to stay within the same transaction for the
> period you don't care about flushing the cache, and I believe it
> should all work well.
>
> Does that work for you?
>
> --
> Gustavo Niemeyer
> http://niemeyer.net
> http://niemeyer.net/blog
> http://niemeyer.net/twitter
>
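For reference, here is how I understand that suggestion (again just a
sketch, reusing the hypothetical Item class and store from above); it
does help get()-style lookups, but as noted above our find()-based
container lookups seem to re-query even inside a single transaction:

item = store.get(Item, 1)  # first access in the transaction: one SELECT
item = store.get(Item, 1)  # transaction still open: no SQL issued

store.commit()             # ends the transaction and invalidates the
                           # cached objects
item = store.get(Item, 1)  # hits the database again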
--
Roy.