<div dir="ltr">Hi, @Free.<br><div><br>I was just looking for a way to reduce the number of DB queries. Of course, there may be better solutions. My patch was designed around the principle of least interference with the internal API, and of simplicity of maintenance and upgrading.<br><br>> And you can see the result of this complexity in the increased complexity of the APIs you propose (for example the "exists" parameter), which all of a sudden become more sophisticated and hence difficult to understand.<br><br>Thank you. I agree with you, and I've just removed the "exists" param and extended the invalidate method instead:<br><br>    def invalidate(self, obj=None):<br>        if type(obj) is tuple:<br>            self.nonexistent_cache.remove(obj)<br>        else:<br>            StoreOrig.invalidate(self, obj)<br>            if obj is None:<br>                del self.nonexistent_cache[:]<br><br>So now I can invalidate even nonexistent objects, and the API is not modified.<br><br>> (because the minimum 100% "storm-safe" isolation level would become serializable, when it's now repeatable read)<br><br>I'm not sure about this. It's not a phantom read in its purest form:<br><br>"A phantom read occurs when, in the course of a transaction, two identical queries are executed, and the collection of rows returned by the second query is different from the first."<br><br>My patch does not affect collections, only the "get" of a certain row. So the safe isolation level for my patch is also repeatable read.<br><br>> Let's not add more built-in complexity, instead I suggest that you implement this additional caching mechanism in your application (and I'd personally create a separate API built on top of Store, instead of subclassing Store).<br><br></div><div>-- In my case the overhead was fully absorbed by the decrease in the number of DB queries. It's a real problem for non-auto-incremental or composite primary keys, especially for models of social profiles. In any case, thank you.<br></div><div><br><br><div class="gmail_extra"><br>
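As an aside, the bounded negative-cache idea discussed in this thread can be sketched as a standalone structure. This is only an illustrative sketch, not Storm's API: the class name, the OrderedDict-based eviction, and the tuple keys are my own assumptions.<br><br>

```python
from collections import OrderedDict

class NegativeCache:
    """Bounded cache of primary keys known NOT to exist in the DB.

    Hypothetical sketch: names mirror the patch in this thread,
    but this is not Storm code.
    """

    def __init__(self, size=1000):
        self._size = size
        self._keys = OrderedDict()  # insertion-ordered: oldest entry first

    def add(self, key):
        # Re-adding a key moves it to the "most recent" end.
        self._keys.pop(key, None)
        self._keys[key] = True
        if len(self._keys) > self._size:
            self._keys.popitem(last=False)  # evict the oldest entry

    def __contains__(self, key):
        return key in self._keys

    def invalidate(self, key=None):
        # key=None clears everything, mirroring Store.invalidate(None).
        if key is None:
            self._keys.clear()
        else:
            self._keys.pop(key, None)

cache = NegativeCache(size=2)
cache.add(("User", (10,)))
cache.add(("User", (11,)))
cache.add(("User", (12,)))   # evicts ("User", (10,))
assert ("User", (10,)) not in cache
assert ("User", (12,)) in cache
cache.invalidate(("User", (12,)))
assert ("User", (12,)) not in cache
```

One design note: an OrderedDict gives O(1) membership tests and eviction, whereas the list-based cache quoted below in this thread pays an O(n) scan on every lookup and add.<br><br>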
<br><div class="gmail_quote">2015-01-17 12:00 GMT+02:00 Free Ekanayaka <span dir="ltr"><<a href="mailto:free@64studio.com" target="_blank">free@64studio.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sat, Jan 17, 2015 at 12:12 AM, Ivan Zakrevskyi <span dir="ltr"><<a href="mailto:ivan.zakrevskyi@rebelmouse.com" target="_blank">ivan.zakrevskyi@rebelmouse.com</a>></span> wrote:</div><div class="gmail_quote"><br></div><div class="gmail_quote">[...]<span class=""><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
On the other hand, suppose that an object exists, and you have already<br>
fetched it in the current transaction. Suppose the object is then<br>
changed in the DB by a concurrent thread. Those changes will not affect<br>
your object. I think in this case it does not matter what type of<br>
object it is, None or a Model instance: once the object has been read,<br>
it will not change even if it has been modified by a parallel process.<br></blockquote><div><br></div></span><div>Yes, for objects already in the cache, that's a tradeoff of the existing cache mechanism, and it's why Store.invalidate exists. The original Storm developers considered this an acceptable tradeoff, one that requires a bit more care from people but brings some immediate performance benefit.</div><div><br></div><div>You're proposing to extend the behavior, but this will inevitably make reasoning about code more difficult and require even more care (because the minimum 100% "storm-safe" isolation level would become serializable, when it's now repeatable read), and at that point my feeling is that the tradeoff stops being worth it.</div><div><br></div><div>Caches ARE hard:</div><div><br></div><div><a href="http://martinfowler.com/bliki/TwoHardThings.html" target="_blank">http://martinfowler.com/bliki/TwoHardThings.html</a><br></div><div><br></div><div>because they are subtle. And you can see the result of this complexity in the increased complexity of the APIs you propose (for example the "exists" parameter), which all of a sudden become more sophisticated and hence difficult to understand.</div><div> </div><div>One of the design goals of Storm is to be simple, and I agree with that goal, since the very idea of an ORM is probably questionable: every abstraction layer has a cost, especially in the case of ORMs, where the abstraction layer can't be mapped cleanly onto the underlying model due to the object-relational impedance mismatch problem.</div><div><br></div><div>Let's not add more built-in complexity; instead I suggest that you implement this additional caching mechanism in your application (and I'd personally create a separate API built on top of Store, instead of subclassing Store).</div><div><br></div><div>Cheers,<br></div><div><br>Free</div><div><div class="h5"><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
My patch does not affect store.find(), and hence selections. I'm not<br>
sure that phantom reads are possible here, except perhaps in<br>
store.get_multi(). This is rather a "non-repeatable read" than a<br>
"phantom read", because it can hide changes to a certain row (with a<br>
specified primary key), but not to a selection.<br>
<br>
So, for "Repeatable Read" and "Serializable" my patch is safe (I only<br>
need to add a reset of store.nonexistent_cache on commit).<br>
<br>
For "Read Committed" and "Read Uncommitted" my patch is not safe,<br>
because at these levels a second read is expected to see newly<br>
committed data, which the cache would hide. But for existent objects<br>
Storm also cannot provide repeatable reads. So it does not matter<br>
whether the "non-repeatable" behavior applies to an existent object or<br>
to a nonexistent one.<br>
<br>
Of course, my patch is a temporary solution. There could be more<br>
elegant solutions at the library level. But it really does save many<br>
DB queries for nonexistent primary keys.<br>
<div><div><br>
<br>
<br>
2015-01-16 23:20 GMT+02:00 Free Ekanayaka <<a href="mailto:free@64studio.com" target="_blank">free@64studio.com</a>>:<br>
><br>
> See:<br>
><br>
> <a href="http://en.wikipedia.org/wiki/Isolation_%28database_systems%29" target="_blank">http://en.wikipedia.org/wiki/Isolation_%28database_systems%29</a><br>
><br>
> for reference.<br>
><br>
> On Fri, Jan 16, 2015 at 10:19 PM, Free Ekanayaka <<a href="mailto:free@64studio.com" target="_blank">free@64studio.com</a>> wrote:<br>
>><br>
>> Hi Ivan,<br>
>><br>
>> it feels like what you suggest would work safely only for transactions set to the serializable isolation level, not repeatable read down to read uncommitted (since phantom reads could occur there, and the non-existent cache would hide new results).<br>
>><br>
>> Cheers<br>
>><br>
>> On Fri, Jan 16, 2015 at 5:55 PM, Ivan Zakrevskyi <<a href="mailto:ivan.zakrevskyi@rebelmouse.com" target="_blank">ivan.zakrevskyi@rebelmouse.com</a>> wrote:<br>
>>><br>
>>> Hi, all. Thanks for the answer. I'll try to explain.<br>
>>><br>
>>> Try to get an existing object:<br>
>>><br>
>>> In [2]: store.get(StTwitterProfile, (1,3))<br>
>>> base.py:50 =><br>
>>> u'(0.001) SELECT ... FROM twitterprofile WHERE twitterprofile.context_id = %s AND twitterprofile.user_id = %s LIMIT 1; args=(1, 3)'<br>
>>> Out[2]: <users.orm.TwitterProfile at 0x7f1e93b6d450><br>
>>><br>
>>> In [3]: store.get(StTwitterProfile, (1,3))<br>
>>> Out[3]: <users.orm.TwitterProfile at 0x7f1e93b6d450><br>
>>><br>
>>> In [4]: store.get(StTwitterProfile, (1,3))<br>
>>> Out[4]: <users.orm.TwitterProfile at 0x7f1e93b6d450><br>
>>><br>
>>> You can see that Storm made only one query.<br>
>>><br>
>>> Ok, now try to get a nonexistent twitter profile for the given context:<br>
>>><br>
>>> In [5]: store.get(StTwitterProfile, (10,3))<br>
>>> base.py:50 =><br>
>>> u'(0.001) SELECT ... FROM twitterprofile WHERE twitterprofile.context_id = %s AND twitterprofile.user_id = %s LIMIT 1; args=(1, 10)'<br>
>>><br>
>>> In [6]: store.get(StTwitterProfile, (10,3))<br>
>>> base.py:50 =><br>
>>> u'(0.001) SELECT ... FROM twitterprofile WHERE twitterprofile.context_id = %s AND twitterprofile.user_id = %s LIMIT 1; args=(1, 10)'<br>
>>><br>
>>> In [7]: store.get(StTwitterProfile, (10,3))<br>
>>> base.py:50 =><br>
>>> u'(0.001) SELECT ... FROM twitterprofile WHERE twitterprofile.context_id = %s AND twitterprofile.user_id = %s LIMIT 1; args=(1, 10)'<br>
>>><br>
>>> Storm sends a query to the database each time.<br>
>>><br>
>>> For example, we have a some util:<br>
>>><br>
>>> def myutil(user_id, *args, **kwargs):<br>
>>>     context_id = get_context_from_mongodb_redis_memcache_environment_etc(user_id, *args, **kwargs)<br>
>>>     twitter_profile = store.get(TwitterProfile, (context_id, user_id))<br>
>>>     return twitter_profile.some_attr<br>
>>><br>
>>> In this case, Storm will send a query to the database each time.<br>
>>><br>
>>> The situation is similar for a non-existent relation.<br>
>>><br>
>>> In [20]: u = store.get(StUser, 10)<br>
>>> base.py:50 =><br>
>>> u'(0.001) SELECT ... FROM user WHERE <a href="http://user.id" target="_blank">user.id</a> = %s LIMIT 1; args=(10,)'<br>
>>><br>
>>><br>
>>> In [22]: u.profile<br>
>>> base.py:50 =><br>
>>> u'(0.001) SELECT ... FROM userprofile WHERE userprofile.user_id = %s LIMIT 1; args=(10,)'<br>
>>><br>
>>> In [23]: u.profile<br>
>>> base.py:50 =><br>
>>> u'(0.001) SELECT ... FROM userprofile WHERE userprofile.user_id = %s LIMIT 1; args=(10,)'<br>
>>><br>
>>> In [24]: u.profile<br>
>>> base.py:50 =><br>
>>> u'(0.001) SELECT ... FROM userprofile WHERE userprofile.user_id = %s LIMIT 1; args=(10,)'<br>
>>><br>
>>> I've created a temporary patch to reduce the number of DB queries (see below). But I am sure that a solution could be more elegant (at the library level).<br>
>>><br>
>>><br>
>>> class NonexistentCache(list):<br>
>>><br>
>>>     _size = 1000<br>
>>><br>
>>>     def add(self, val):<br>
>>>         if val in self:<br>
>>>             self.remove(val)<br>
>>>         self.insert(0, val)<br>
>>>         if len(self) > self._size:<br>
>>>             self.pop()<br>
>>><br>
>>><br>
>>> class Store(StoreOrig):<br>
>>><br>
>>>     def __init__(self, database, cache=None):<br>
>>>         StoreOrig.__init__(self, database, cache)<br>
>>>         self.nonexistent_cache = NonexistentCache()<br>
>>><br>
>>>     def get(self, cls, key, exists=False):<br>
>>>         """Get object of type cls with the given primary key from the database.<br>
>>><br>
>>>         This method is patched to cache nonexistent values, to reduce the number of DB queries.<br>
>>>         If the object is alive the database won't be touched.<br>
>>><br>
>>>         @param cls: Class of the object to be retrieved.<br>
>>>         @param key: Primary key of object. May be a tuple for composed keys.<br>
>>><br>
>>>         @return: The object found with the given primary key, or None<br>
>>>             if no object is found.<br>
>>>         """<br>
>>>         if self._implicit_flush_block_count == 0:<br>
>>>             self.flush()<br>
>>><br>
>>>         if type(key) != tuple:<br>
>>>             key = (key,)<br>
>>><br>
>>>         cls_info = get_cls_info(cls)<br>
>>><br>
>>>         assert len(key) == len(cls_info.primary_key)<br>
>>><br>
>>>         primary_vars = []<br>
>>>         for column, variable in zip(cls_info.primary_key, key):<br>
>>>             if not isinstance(variable, Variable):<br>
>>>                 variable = column.variable_factory(value=variable)<br>
>>>             primary_vars.append(variable)<br>
>>><br>
>>>         primary_values = tuple(var.get(to_db=True) for var in primary_vars)<br>
>>><br>
>>>         # Patched<br>
>>>         alive_key = (cls_info.cls, primary_values)<br>
>>>         obj_info = self._alive.get(alive_key)<br>
>>>         if obj_info is not None and not obj_info.get("invalidated"):<br>
>>>             return self._get_object(obj_info)<br>
>>><br>
>>>         if obj_info is None and not exists and alive_key in self.nonexistent_cache:<br>
>>>             return None<br>
>>>         # End of patch<br>
>>><br>
>>>         where = compare_columns(cls_info.primary_key, primary_vars)<br>
>>><br>
>>>         select = Select(cls_info.columns, where,<br>
>>>                         default_tables=cls_info.table, limit=1)<br>
>>><br>
>>>         result = self._connection.execute(select)<br>
>>>         values = result.get_one()<br>
>>>         if values is None:<br>
>>>             # Patched<br>
>>>             self.nonexistent_cache.add(alive_key)<br>
>>>             # End of patch<br>
>>>             return None<br>
>>>         return self._load_object(cls_info, result, values)<br>
>>><br>
>>>     def get_multi(self, cls, keys, exists=False):<br>
>>>         """Get objects of type cls with the given primary keys from the database.<br>
>>><br>
>>>         If an object is alive the database won't be touched.<br>
>>><br>
>>>         @param cls: Class of the objects to be retrieved.<br>
>>>         @param keys: Collection of primary keys of objects (each may be a tuple for composed keys).<br>
>>><br>
>>>         @return: A dict mapping each given key to the object found, or to<br>
>>>             None if no object is found.<br>
>>>         """<br>
>>>         result = {}<br>
>>>         missing = {}<br>
>>>         if self._implicit_flush_block_count == 0:<br>
>>>             self.flush()<br>
>>><br>
>>>         for key in keys:<br>
>>>             key_orig = key<br>
>>>             if type(key) != tuple:<br>
>>>                 key = (key,)<br>
>>><br>
>>>             cls_info = get_cls_info(cls)<br>
>>><br>
>>>             assert len(key) == len(cls_info.primary_key)<br>
>>><br>
>>>             primary_vars = []<br>
>>>             for column, variable in zip(cls_info.primary_key, key):<br>
>>>                 if not isinstance(variable, Variable):<br>
>>>                     variable = column.variable_factory(value=variable)<br>
>>>                 primary_vars.append(variable)<br>
>>><br>
>>>             primary_values = tuple(var.get(to_db=True) for var in primary_vars)<br>
>>><br>
>>>             alive_key = (cls_info.cls, primary_values)<br>
>>>             obj_info = self._alive.get(alive_key)<br>
>>>             if obj_info is not None and not obj_info.get("invalidated"):<br>
>>>                 result[key_orig] = self._get_object(obj_info)<br>
>>>                 continue<br>
>>><br>
>>>             if obj_info is None and not exists and alive_key in self.nonexistent_cache:<br>
>>>                 result[key_orig] = None<br>
>>>                 continue<br>
>>><br>
>>>             missing[primary_values] = key_orig<br>
>>><br>
>>>         if not missing:<br>
>>>             return result<br>
>>><br>
>>>         wheres = []<br>
>>>         for i, column in enumerate(cls_info.primary_key):<br>
>>>             wheres.append(In(cls_info.primary_key[i], tuple(v[i] for v in missing)))<br>
>>>         where = And(*wheres) if len(wheres) > 1 else wheres[0]<br>
>>><br>
>>>         for obj in self.find(cls, where):<br>
>>>             key_orig = missing.pop(tuple(var.get(to_db=True) for var in get_obj_info(obj).get("primary_vars")))<br>
>>>             result[key_orig] = obj<br>
>>><br>
>>>         for primary_values, key_orig in missing.items():<br>
>>>             self.nonexistent_cache.add((cls, primary_values))<br>
>>>             result[key_orig] = None<br>
>>><br>
>>>         return result<br>
>>><br>
>>>     def reset(self):<br>
>>>         StoreOrig.reset(self)<br>
>>>         del self.nonexistent_cache[:]<br>
>>><br>
>>><br>
>>><br>
>>> 2015-01-16 9:03 GMT+02:00 Free Ekanayaka <<a href="mailto:free@64studio.com" target="_blank">free@64studio.com</a>>:<br>
>>>><br>
>>>> Hi Ivan<br>
>>>><br>
>>>> On Thu, Jan 15, 2015 at 10:23 PM, Ivan Zakrevskyi <<a href="mailto:ivan.zakrevskyi@rebelmouse.com" target="_blank">ivan.zakrevskyi@rebelmouse.com</a>> wrote:<br>
>>>>><br>
>>>>> Hi all.<br>
>>>>><br>
>>>>> Storm has excellent caching behavior, but stores only existing objects in Store._alive. If no object exists for some key, Storm makes the DB query again and again.<br>
>>>>><br>
>>>>> Are you planning to add caching for keys of nonexistent objects to prevent these DB queries?<br>
>>>><br>
>>>><br>
>>>> If an object doesn't exist in the cache it means that either it was not yet loaded at all, or it was loaded but is now marked as "invalidated" (for example the transaction in which it was loaded has terminated).<br>
>>>><br>
>>>> So I'm not sure what you mean in your question, but I don't think there is anything more that could be cached (in terms of key->object values).<br>
>>>><br>
>>>> Cheers<br>
>>>><br>
>>><br>
>>><br>
>>> --<br>
>>> storm mailing list<br>
>>> <a href="mailto:storm@lists.canonical.com" target="_blank">storm@lists.canonical.com</a><br>
>>> Modify settings or unsubscribe at: <a href="https://lists.ubuntu.com/mailman/listinfo/storm" target="_blank">https://lists.ubuntu.com/mailman/listinfo/storm</a><br>
>>><br>
>><br>
><br>
<br>
</div></div></blockquote></div></div></div><br></div></div>
</blockquote></div><br></div></div></div>