Rev 4710: do a bit of profiling of the time it takes to hash(). in http://bazaar.launchpad.net/~jameinel/bzr/2.1-memory-consumption
John Arbash Meinel
john at arbash-meinel.com
Tue Sep 29 22:51:36 BST 2009
At http://bazaar.launchpad.net/~jameinel/bzr/2.1-memory-consumption
------------------------------------------------------------
revno: 4710
revision-id: john at arbash-meinel.com-20090929215131-4ddlo63wgvgnjpic
parent: john at arbash-meinel.com-20090929153659-arh13xituaerkzpm
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: 2.1-memory-consumption
timestamp: Tue 2009-09-29 16:51:31 -0500
message:
do a bit of profiling of the time it takes to hash().
-------------- next part --------------
=== modified file 'bzrlib/_keys_type_c.h'
--- a/bzrlib/_keys_type_c.h 2009-09-29 15:25:12 +0000
+++ b/bzrlib/_keys_type_c.h 2009-09-29 21:51:31 +0000
@@ -26,6 +26,17 @@
#endif
#define KEY_HAS_HASH 0
+/* Caching the hash adds memory, but allows us to save a little time during
+ * lookups. TIMEIT hash(key) shows it as
+ * 0.108usec w/ hash
+ * 0.160usec w/o hash
+ * Note that the entries themselves are strings, which already cache their
+ * hashes. So while there is a 1.5:1 difference in the time for hash(), it is
+ * already quite fast. Probably the only reason we might want to cache the
+ * hash is if we implement a KeyIntern dict that assumes it is available,
+ * and can then drop the 'hash' value from the item pointers. Then
+ * again, if Key_hash() is fast enough, we may not even care about that.
+ */
/* This defines a single variable-width key.
* It is basically the same as a tuple, but
More information about the bazaar-commits mailing list