[storm] question on zstorm (aggressive caching)
Roy Mathew
rmathew8 at gmail.com
Sun Mar 13 14:31:06 UTC 2011
Thank you, James. That certainly helps me understand why...
On Sun, Mar 13, 2011 at 10:01 AM, james at jamesh.id.au <james at jamesh.id.au> wrote:
> On Fri, Mar 11, 2011 at 6:51 AM, Roy Mathew <rmathew8 at gmail.com> wrote:
> > Hi Gustavo, Thanks for your reply.
> > My assumption was that the goal of caching was to avoid making the
> > same query more than once in a given transaction.
> > However, my (perhaps flawed) analysis after a bit of digging suggests
> > that this is not the case, and the caching model used in the storm
> > container is more about managing objects in memory that correspond to
> > table rows, and doing the right thing as far as their state is concerned
> > when dirty, etc...
> > Our application uses a SQL container model. I turned on logging on
> > the postgres backend, and saw that doing something like this inside
> > a transaction:
> > obj1 = C['key1']
> > obj1 = C['key1'] # and again
> > (that is to say, it seems that invoking __getitem__ twice) causes the
> > same SQL query to run twice. Please help me confirm whether I have
> > understood this correctly. Thanks!
>
> Storm's object cache works off of the table's primary key. It doesn't
> have any knowledge of any alternative keys for the table, so queries
> that rely on those keys won't benefit from the cache.
>
> As a general rule, calls to Store.get() (and code that calls it, such
> as References to the primary key of a table) may avoid a query if
> there is a cache hit, while calls to Store.find() will always issue a
> query.
>
> James.
>
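The rule above can be illustrated with a self-contained sketch. This is not Storm's actual implementation; the class and method names below are placeholders that merely mirror the shape of Store.get() and Store.find(), to show why a primary-key lookup can be answered from the cache while an arbitrary find() must always hit the database:

```python
# Illustrative sketch only (not Storm internals): a minimal identity map
# keyed on the primary key. get() can skip SQL on a cache hit; find()
# cannot, because the store has no way to evaluate an arbitrary SQL
# condition against cached objects.

class SketchStore:
    def __init__(self):
        self._cache = {}   # primary key -> cached row object
        self.queries = 0   # count of simulated SQL round-trips

    def _run_query(self, sql):
        # Stand-in for a real round-trip to the database backend.
        self.queries += 1
        return {"id": 1, "name": "key1"}

    def get(self, primary_key):
        # Analogue of Store.get(): consult the cache first,
        # issue a query only on a miss.
        if primary_key in self._cache:
            return self._cache[primary_key]
        row = self._run_query("SELECT ... WHERE id = %r" % primary_key)
        self._cache[primary_key] = row
        return row

    def find(self, **criteria):
        # Analogue of Store.find(): the condition may involve any
        # column, so the store always issues a query.
        return self._run_query("SELECT ... WHERE %r" % (criteria,))


store = SketchStore()
store.get(1)             # cache miss: one query
store.get(1)             # cache hit: no new query
assert store.queries == 1
store.find(name="key1")  # always queries, even for an already-cached row
store.find(name="key1")  # and queries again on the second call
assert store.queries == 3
```

This matches the behaviour observed in the postgres log: a container whose __getitem__ resolves keys via find() on a non-primary-key column will re-issue the same SELECT on every lookup within the transaction.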
--
Roy.