Is urlgrabber a premature optimization?

Martin Pool mbp at sourcefrog.net
Fri Jun 24 01:09:52 BST 2005


On 23 Jun 2005, Aaron Bentley <aaron.bentley at utoronto.ca> wrote:
> Hi all,
> 
> AIUI, the key advantage of urlgrabber over urllib is that it supports
> keepalive, which makes batch downloading somewhat faster.  But it
> doesn't support pipelining or parallel downloading, so for batch
> downloading, it's not really adequate.
> 
> If we switch to effbot.org or twisted, urlgrabber will not provide any
> advantages, because it will only download one file per session.
> 
> Currently, support for urlgrabber is on by default.  When urlgrabber is
> not available, bzr cannot use RemoteBranches.  It has been suggested
> that bzr should fall back to urllib if urlgrabber is not available.
> 
> However, if urlgrabber is really a premature optimization, perhaps we
> should simply disable support for urlgrabber completely.  Insert
> standard justifications about maintainability, testing, dependencies,
> etc here.

It looks like the best thing to do is switch to effbot's library, and
remove urlgrabber.  I'll pull in those patches.
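
For context, the keepalive win Aaron describes comes from reusing one
HTTP/1.1 connection for a whole batch of files rather than opening a
fresh TCP connection per request, which is what plain urllib.urlopen()
does.  A minimal sketch using only the standard library (httplib at the
time, http.client in Python 3), not bzr's or effbot's actual code:

    import http.client

    def fetch_many(host, paths):
        conn = http.client.HTTPConnection(host)   # one connection, kept alive
        bodies = []
        for path in paths:
            conn.request("GET", path)
            resp = conn.getresponse()
            # the response must be read fully before the connection can be reused
            bodies.append(resp.read())
        conn.close()
        return bodies

Note that the sketch still waits for each response before issuing the
next request, so it gets nothing from pipelining or parallel downloads;
that is Aaron's point that keepalive alone is not adequate for batch
downloading.  Real code also has to cope with the server closing the
connection partway through, which is the sort of bookkeeping a keepalive
library exists to handle.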

I'm inclined to move a copy of it under bzrlib so packagers can just
install bzr without clobbering any other libraries.
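
A hypothetical illustration of that bundling arrangement at import time;
the module name http_client and the bzrlib.util location are assumptions
for the sketch, not the actual layout:

    try:
        # prefer the copy shipped inside bzr's own package tree
        from bzrlib.util import http_client
    except ImportError:
        # fall back to a system-wide install, if a packager provides one
        import http_client

That way a distro package of bzr needs no extra dependency, and a
separately installed copy of the library is never clobbered.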

> Urlgrabber isn't currently available as part of Debian or Ubuntu, which is
> blocking bzr from entering those distros.  Rob Weir is willing to
> package urllib, but if it's not needed, we can save him the trouble.

Doesn't urllib come with python?

-- 
Martin
