[RFC] Ways to make initial knit creation faster (both push and commit)

John Arbash Meinel john at arbash-meinel.com
Fri Aug 18 20:02:48 BST 2006

I spent a little time today doing an --lsprof of why 'bzr push' is so
slow when creating a new remote branch. Here are a few notes I found:

1) Total push time is around 190s when the latency is 30ms.

2) We spend 187s in 'paramiko._sftp_client._read_response()'
   So it really is the round trip latency that is being tested.

   Of the 187s, 162s is spent in _request, and 26s is spent in _write.

   So it seems that we are wasting a lot of latency in requesting
   information about the remote location, rather than actually writing
   out data. People have often complained about this. But now we have a
   benchmark that shows us where it is happening.
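   To put those numbers in perspective, here is my back-of-envelope
   arithmetic, assuming every request is a synchronous round trip:

   ```python
   # 187s of waiting at 30ms per round trip implies a very large
   # number of synchronous requests.
   total_wait_s = 187.0   # time spent in paramiko's _read_response()
   latency_s = 0.030      # measured round-trip latency

   round_trips = total_wait_s / latency_s
   print(int(round_trips))   # on the order of 6000 requests
   ```

   That is thousands of serialized requests to push a 100-file tree.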

3) Opening a file in append mode requires a round trip.
   I'm not sure if we can do much better than this, but because
   sftp.write() is actually a pwrite() call (it includes the offset
   to write at), it has to do a stat of the remote file to figure out
   where the append should start.

   We spend 24s just doing a stat() on every file that we try to write.
   Because of that, we might think about switching from opening new
   files in 'append' mode, and instead open new files with more of a
   put_new_file(). We don't want to use a plain 'put()' because that
   copies data into a temporary file, and then does an atomic rename,
   which can be quite slow over sftp. But this would save ~10% of our
   latency: when we know the file is missing, we just write it.
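   A toy model of the difference (not real paramiko calls, just
   counting the synchronous requests each strategy implies):

   ```python
   class FakeSFTP:
       """Stand-in that only counts synchronous round trips."""
       def __init__(self):
           self.round_trips = 0
       def stat(self, path):        # find current size -> append offset
           self.round_trips += 1
       def open_write(self, path):  # SSH_FXP_OPEN
           self.round_trips += 1
       def pwrite(self, path, offset, data):  # SSH_FXP_WRITE at offset
           self.round_trips += 1

   def append(sftp, path, data):
       sftp.stat(path)              # extra trip just to learn the offset
       sftp.open_write(path)
       sftp.pwrite(path, 0, data)

   def put_new_file(sftp, path, data):
       sftp.open_write(path)        # file is known new: offset is just 0
       sftp.pwrite(path, 0, data)

   s1 = FakeSFTP(); append(s1, 'a.knit', b'...')
   s2 = FakeSFTP(); put_new_file(s2, 'a.knit', b'...')
   print(s1.round_trips, s2.round_trips)
   ```

   One round trip saved per file, across every new file we push.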

4) In Transport.append() we make 350 calls (90s), and instantiate 350
   sftp file objects (which takes 60s). We only actually call _pump() on
   314 of those files (25s) which I assume is because we have to create
   the hash prefix directory for the other 36 files, which requires a
   NoSuchFile error to be raised, and another round trip. Another 8s is
   spent calling chmod() on 100 of those files.

   Now, the tree has 100 files, so it should have 200 knit+index files.
   So this tells me we have a separate bug: we don't create all of the
   files/directories with the right permissions.

   We also call SFTPClient.file() 615 times for ~200 real knit files.

   I think the overhead is something like:

   1) KnitIndex() calls get() to read an index that is not there.
   2) This raises NoSuchFile, so we probably call append() to create
      the knit index with just the header.
   3) For some fraction, this gives us NoSuchFile, because the hash
      prefix directory does not exist yet. We then have a mkdir() call.
      And I *believe* these exceptions are caught by the store not by
      KnitIndex, so all of these cases require another get() to
      determine that the target isn't there, and then an append() to
      actually create the target file.
   4) Then once the file is created, we have to make 2 more calls to
      append(), once to upload the data into the .knit file, and another
      to fill the .kndx file.
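   If that reading is right, the worst case for one new knit in a
   missing hash-prefix directory looks something like this (a
   hypothetical trace based on the steps above, not a measured one):

   ```python
   # Hypothetical request trace for one brand-new knit whose hash
   # prefix directory does not exist yet:
   worst_case = [
       'get(foo.kndx)',      # 1) KnitIndex probes the index -> NoSuchFile
       'append(foo.kndx)',   # 2) try to create index with just the header
       'mkdir(prefix/)',     # 3) -> NoSuchFile again, so create the dir
       'get(foo.kndx)',      #    the store re-probes the target...
       'append(foo.kndx)',   #    ...and finally creates it
       'append(foo.knit)',   # 4) upload the actual data
       'append(foo.kndx)',   #    and record it in the index
   ]
   print(len(worst_case))   # 7 requests, before counting the stat()
                            # hidden inside each append()
   ```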

So here are my small proposals to reduce all of these round trips. I
also think this will help local performance, because a lot of this is
statting files that are not there, etc.

1) Change the VersionedFile interface, so that the file knows whether or
not it might need to create the parent directory if it cannot directly
create the file. This saves us the extra get() request since we already
know the file doesn't exist.
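A minimal sketch of what I mean, using a fake in-memory transport and
a hypothetical put_new_file() call (neither is existing bzrlib API):

```python
class NoSuchFile(Exception):
    pass

class FakeTransport:
    """In-memory stand-in: files live in a dict, dirs in a set."""
    def __init__(self):
        self.dirs = {'.'}
        self.files = {}
    def mkdir(self, relpath):
        self.dirs.add(relpath)
    def put_new_file(self, relpath, data):   # hypothetical API
        parent = relpath.rsplit('/', 1)[0] if '/' in relpath else '.'
        if parent not in self.dirs:
            raise NoSuchFile(relpath)
        self.files[relpath] = data

def create_knit(transport, relpath, data, create_parent_dir=False):
    """Try the create directly; on NoSuchFile make the parent and
    retry, instead of probing with an extra get() first."""
    try:
        transport.put_new_file(relpath, data)
    except NoSuchFile:
        if not create_parent_dir:
            raise
        transport.mkdir(relpath.rsplit('/', 1)[0])
        transport.put_new_file(relpath, data)   # retry: one extra trip

t = FakeTransport()
create_knit(t, 'ab/foo.knit', b'header', create_parent_dir=True)
```

Only the files that actually hit a missing directory pay the extra
mkdir()+retry; everything else is a single create.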

2) Create a new API on Transport, something like put_non_atomic(),
which assumes the remote file doesn't exist, opens the target file for
writing, and blats the bytes. We can add warnings to only use this
when you *know* that the target file doesn't exist, because it isn't a
strictly safe function to call.
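A local-filesystem analogy of the two strategies (put_non_atomic() is
the proposed name, not an existing API):

```python
import os

def put(base, relpath, data):
    """Today's safe put(): write a temporary file, then rename it
    into place atomically. Over sftp the rename is extra round trips."""
    tmp = os.path.join(base, relpath + '.tmp')
    with open(tmp, 'wb') as f:
        f.write(data)
    os.rename(tmp, os.path.join(base, relpath))

def put_non_atomic(base, relpath, data):
    """Proposed put_non_atomic(): assume the target doesn't exist and
    just write it directly. Only safe when the caller *knows* that."""
    with open(os.path.join(base, relpath), 'wb') as f:
        f.write(data)
```

For brand-new knits during push, we do know the target isn't there, so
skipping the temp-file-plus-rename dance costs us nothing in safety.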

3) Don't write out the knit header until you are ready to write data.
Instead just keep a flag that the file needs to be created, and the
header needs to be written when we start writing data.
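Roughly what I have in mind, sketched with a fake transport (the
header bytes here are illustrative):

```python
HEADER = b'# bzr knit index 8\n'   # illustrative header bytes

class FakeTransport:
    """In-memory stand-in: each append() would be one round trip."""
    def __init__(self):
        self.files = {}
    def append(self, relpath, data):
        self.files[relpath] = self.files.get(relpath, b'') + data

class LazyKnitIndex:
    """Don't write the header at creation time; set a flag and emit
    the header together with the first real record."""
    def __init__(self, transport, relpath):
        self._transport = transport
        self._relpath = relpath
        self._need_header = True    # flag instead of an append() now

    def add_record(self, line):
        if self._need_header:
            line = HEADER + line    # one write: file + header + record
            self._need_header = False
        self._transport.append(self._relpath, line)

t = FakeTransport()
idx = LazyKnitIndex(t, 'foo.kndx')
idx.add_record(b'record-1\n')       # first append also creates the file
```

An index that never receives a record never touches the network at all.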

I think these 3 changes could drastically reduce our round trips when
creating new knits, which helps with the first commit, and whenever
pushing files that don't exist yet.


