[PATCH] Branch and pull -- now with remote

David Allouche david at allouche.net
Tue Jun 7 14:35:02 BST 2005


On Mon, 2005-06-06 at 09:24 -0400, Aaron Bentley wrote:
> Martin Pool wrote:
> | On 29 May 2005, Aaron Bentley <aaron.bentley at utoronto.ca> wrote:
> |
> | I'm thinking of some kind of producer/consumer or blackboard model
> | where the main program adds URLs to a queue that it wants, and then
> | one or more HTTP threads pull them down and eventually mark them as
> | either complete or failed.
> 
> Yes.  I've done some thinking about this for Arch, and what I'd like is
> an interface where "get" produces file-like objects.  Each file-like
> object will asynchronously download.  If there is not enough data to
> satisfy the read() request, we do select(), and buffer data for other
> get requests, until select() says there's data for the file we're
> actually interested in.  Lather, rinse, repeat.
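
For the record, the queue model Martin describes above can be sketched
with nothing but the standard library. All the names below are made up
and error handling is minimal:

    import Queue
    import threading
    import urllib2

    def fetch_worker(todo, done):
        # Pull URLs off the shared queue; mark each one as either
        # complete (with its data) or failed (with the error).
        while True:
            url = todo.get()
            if url is None:              # sentinel: no more work
                return
            try:
                done.put((url, 'complete', urllib2.urlopen(url).read()))
            except urllib2.URLError, e:
                done.put((url, 'failed', e))

    wanted_urls = ['http://example.com/a', 'http://example.com/b']
    todo, done = Queue.Queue(), Queue.Queue()
    workers = [threading.Thread(target=fetch_worker, args=(todo, done))
               for i in range(2)]        # one or more HTTP threads
    for w in workers:
        w.start()
    for url in wanted_urls:              # the main program adds the URLs it wants
        todo.put(url)
    for url in wanted_urls:
        print done.get()                 # (url, status, payload or error)
    for w in workers:
        todo.put(None)                   # shut the workers down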
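
And the read()/select() interface you describe would presumably look
something like this skeleton. Again, every name is invented, and HTTP
response parsing, EOF and errors are ignored completely:

    import select
    import socket

    class AsyncFile:
        # The file-like object returned by get(); it only holds a buffer.
        def __init__(self, transport, sock):
            self._transport = transport
            self._sock = sock
            self._buffer = ''

        def _feed(self, data):
            self._buffer += data

        def read(self, size):
            # Keep select()ing until *this* file has enough data,
            # buffering whatever arrives for the other pending files.
            while len(self._buffer) < size:
                ready, _, _ = select.select(self._transport.sockets(), [], [])
                for s in ready:
                    self._transport.file_for(s)._feed(s.recv(8192))
            result, self._buffer = self._buffer[:size], self._buffer[size:]
            return result

    class Transport:
        def __init__(self):
            self._files = {}             # socket -> AsyncFile

        def get(self, host, path):
            # Fire off the request and return at once; data is only
            # drained from the sockets inside AsyncFile.read().
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.connect((host, 80))
            sock.sendall('GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' % (path, host))
            f = AsyncFile(self, sock)
            self._files[sock] = f
            return f

        def sockets(self):
            return self._files.keys()

        def file_for(self, sock):
            return self._files[sock]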

That looks like it stems from the process-handling discussion we had
about pybaz. I think that approach is hackish.

If the point is doing efficient asynchronous networking, which it
appears to be, why not do it properly and use Twisted?

Because of the "no mandatory external dependency" goal?
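
To make the comparison concrete: the same job -- queue up some URLs,
fetch them, note which ones completed or failed -- is a handful of
lines with Twisted. A rough, untested sketch (getPage and DeferredList
are the real APIs, the rest is made up):

    from twisted.internet import reactor, defer
    from twisted.web.client import getPage

    def fetch_all(urls):
        # getPage() returns a Deferred for each URL; DeferredList fires
        # once every download has either completed or failed.
        return defer.DeferredList([getPage(url) for url in urls],
                                  consumeErrors=True)

    def report(results):
        # results is a list of (success, page_or_failure) pairs
        for success, value in results:
            if success:
                print 'complete:', len(value), 'bytes'
            else:
                print 'failed:', value.getErrorMessage()
        reactor.stop()

    fetch_all(['http://example.com/a', 'http://example.com/b']).addCallback(report)
    reactor.run()

The event loop replaces the select() bookkeeping entirely, which is
exactly the part I would rather not maintain by hand.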

-- 
                                                            -- ddaa

