[step 11] request: 12 steps towards a high performance server

Andrew Bennetts andrew at canonical.com
Fri Sep 15 07:33:41 BST 2006


On Wed, Sep 13, 2006 at 02:46:30PM -0500, John Arbash Meinel wrote:
> John Arbash Meinel wrote:
> >>    http://people.ubuntu.com/~andrew/bzr/add-get_smart_client/
> >>      Adds support for bzr:// and bzr+ssh:// urls, adds a new 'bzr serve'
> >>      command, and adds a Transport.get_smart_client method.  This includes a
> >>      SmartTransport, that performs file operations over the RPC added in this
> >>      branch. 
> > 
> 
> Let me start by saying I think it is going to be really fun to have a
> real smart server. And Launchpad is probably going to be a great place
> to deploy it. I'm still hoping that we will be able to keep reasonably
> good performance over standard protocols. (Obviously I don't expect them
> to be up to par with a real server, but I think it is a good thing that
> bzr supports them).

My thoughts exactly :)

> > +class TestBzrServe(TestCaseWithTransport):
> > +    
> > +    def test_bzr_serve_port(self):
[...]
> > +        # Shutdown the server
> > +        result = self.finish_bzr_subprocess(process, retcode=3,
> > +                                            send_signal=signal.SIGINT)
> > +        self.assertEqual('', result[0])
> > +        self.assertEqual('bzr: interrupted\n', result[1])
> 
> ^- The first thing I notice is that you are using SIGINT to kill 'bzr
> serve'. Which you can't do on windows. I don't really know what to do.
> Though maybe we just need to include a 'shutdown' command to the RPC
> code. Perhaps only enabled in the test server, so that people can't
> randomly shutdown a real bzr server.

I'm now using the skip_if_plan_to_signal option on start_bzr_subprocess in this
test, so the test will be skipped on platforms without os.kill.

> 
> ...
> 
> > +"""Tests for smart transport"""
> > +
> > +# all of this deals with byte strings so this is safe
> > +from cStringIO import StringIO
> > +import subprocess
> > +import sys
> > +
> > +import bzrlib
> > +from bzrlib import tests, errors, bzrdir
> > +from bzrlib.transport import local, memory, smart, get_transport
> 
> ^- Both of these lines are incorrect. At the minimum, they need to be in
> lexicographically sorted order. But it is also nicer to put each one on
> a new line, because it makes better diffs later. So I suggest:
> 
> from bzrlib import (
>    bzrdir,
>    errors,
>    tests,
>    )
> 
> And similarly for the .transport line.

Done, thanks.

> v- This is commented out, and not 2 spaces from the import statements.
> At a minimum, add a newline, or possibly just delete the whole thing.
> 
> > +
> > +## class SmartURLTests(tests.TestCase):
> > +##     """Tests for handling of URLs and detection of smart servers"""
> > +## 
> > +##     def test_bzr_url_is_smart(self):
> 

Deleted, thanks.

> ...
> 
> v- Are we positive we want to use 'bzr://' as our url scheme. It seems
> right to me, but I don't believe there has been any discussion of it.
> 
> > +    def test_plausible_url(self):
> > +        self.assert_(self.get_url().startswith('bzr://'))

I think so.  Robert and Martin are both happy with it (as am I), and it has a
direct parallel with svn:// so I think it's what users will tend to expect.

> 
> ...
> 
> > +class BasicSmartTests(tests.TestCase):
> > +    
> > +    def test_smart_query_version(self):
> > +        """Feed a canned query version to a server"""
> > +        to_server = StringIO('hello\n')
> > +        from_server = StringIO()
> > +        server = smart.SmartStreamServer(to_server, from_server, local.LocalTransport('file:///'))
> > +        server._serve_one_request()
> > +        self.assertEqual('ok\0011\n',
> > +                         from_server.getvalue())
> 
> ^- So the opening request is a plain 'hello' in order to get the 'ok +
> version number'? I would rather avoid things that other protocols might
> want to use.

In general, servers and clients have to cope with being accidentally connected to
something speaking another protocol.  The worst that will typically happen is
that both sides wait indefinitely for each other to say something, and eventually
time out.  Often they'll both immediately notice that the other end is speaking
gibberish, and give up.

So I don't think there's much need to worry about this.  Do you have a specific
problem in mind?

> Also, do we want to be more like SSH, which sends the version string at
> connect time, rather than HTTP that requires an initial query? I
> probably prefer the initial query, because it makes it easier to
> multiplex on the same port. (So if there is a GET request, you just act
> like an HTTP server, but if there is a HELLO SMART SERVER, then you act
> like a bzr smart server).

It's a request-response protocol like HTTP.  As Robert has replied, we want to
keep it stateless, so each request is independent.  An initial greeting from the
server doesn't really fit with that.

> It also seems weird to create the SmartStreamServer with the text that
> you are going to be passing it. Though maybe you are just passing
> file-like objects, and filling one to start with. So I suppose it is okay.

Yeah, that's what's happening.

> > +
> > +    def test_canned_get_response(self):
> > +        transport = memory.MemoryTransport('memory:///')
> > +        transport.put('testfile', StringIO('contents\nof\nfile\n'))
> > +        to_server = StringIO('get\001./testfile\n')
> > +        from_server = StringIO()
> > +        server = smart.SmartStreamServer(to_server, from_server, transport)
> > +        server._serve_one_request()
> > +        self.assertEqual('ok\n'
> > +                         '17\n'
> > +                         'contents\nof\nfile\n'
> > +                         'done\n',
> > +                         from_server.getvalue())
> 
> ^- for 'hello' you returned a null terminated string, here you are
> returning newline separated stuff. It seems like it would be a lot
> better to be consistent.
> 
> Is the smart server intended to be more connectionless? (I think RPC
> generally works that way). If it is, I guess that is okay. Though I
> think I would rather it be more of a conversational server. Such that
> you start with a handshake, and then talk back and forth from there.
> 
> Also, shouldn't requests have length prefixed strings, rather than
> having the server read till a newline? Maybe this is just the way RPC
> works. But I really prefer SSH's method which tells you how long the
> next message is going to be. Since then the server can be a little bit
> smarter about oversized requests/invalid data, etc.
> 

As Robert said, the exact protocol can change later.  The more urgent issue is
getting the infrastructure for it where people can play with it.

> ...
> 
> > +    def test_server_subprocess(self):
> > +        """Talk to a server started as a subprocess
> > +        
> > +        This is similar to running it over ssh, except that it runs in the same machine 
> > +        without ssh intermediating.
> > +        """
> > +        args = [sys.executable, sys.argv[0], 'serve', '--inet']
> > +        child = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
> > +                                 close_fds=True)
> > +        conn = smart.SmartStreamClient(lambda: (child.stdout, child.stdin))
> > +        conn.query_version()
> > +        conn.query_version()
> > +        conn.disconnect()
> > +        returncode = child.wait()
> > +        self.assertEquals(0, returncode)
> 
> ^- Shouldn't you be checking the output of 'query_version'? That it at
> least looks something like a version string? Maybe even just:
> val1 = conn.query_version()
> val2 = conn.query_version()
> self.assertEqual(val1, val2)
> self.assertNotEqual('', val1)
> self.assertNotEqual(None, val1)

We've turned this into a blackbox test.  It's about checking that we can make a
request successfully, rather than about what query_version returns, and in fact
the new test gets the revision graph off the branch object.

> ...
> 
> > +    def test_start_tcp_server(self):
> > +        url = self.server.get_url()
> > +        self.assertContainsRe(url, r'^bzr://127\.0\.0\.1:[0-9]{2,}/')
> 
> ^- Will it always be at 127.0.0.1?

For the test server, yes.  So I think it's fine.

> > +
> > +    def test_smart_transport_has(self):
> > +        """Checking for file existence over smart."""
> 
> v- It is a bit cleaner to use put_bytes('foo', 'contents of foo\n')
> Though I realize that didn't exist yet. But all of the 'put()' routines
> are supposed to be deprecated.
> 
> > +        self.backing_transport.put("foo", StringIO("contents of foo\n"))
> > +        self.assertTrue(self.transport.has("foo"))
> > +        self.assertFalse(self.transport.has("non-foo"))
> > +
> > +    def test_smart_transport_get(self):
> > +        """Read back a file over smart."""
> > +        self.backing_transport.put("foo", StringIO("contents\nof\nfoo\n"))
> > +        fp = self.transport.get("foo")
> > +        self.assertEqual('contents\nof\nfoo\n', fp.read())
> 
> ^- there is also get_bytes() but this is a reasonable start.

Fair enough.  This now uses put_bytes.

> ...
> 
> > +    def test_get_error_enoent(self):
> > +        """Error reported from server getting nonexistent file."""
> > +        # The path in a raised NoSuchFile exception should be the precise path
> > +        # asked for by the client. This gives meaningful and unsurprising errors
> > +        # for users.
> 
> v-- This brings up something that I've wanted to see for a while. And
> that is an 'assertRaises' that somehow checks the string representation
> of the exception. Because we check that exceptions are raised, but not
> that they will be meaningful to the user when its done.
> 
> > +        try:
> > +            self.transport.get('not%20a%20file')
> > +        except errors.NoSuchFile, e:
> > +            self.assertEqual('not%20a%20file', e.path)
> > +        else:
> > +            self.fail("get did not raise expected error")

Actually, what is better is an assertRaises that returns the exception caught,
so you can not only check the string representation, but inspect the attributes
and distinguish subclasses.  Maybe we should make a new catchException helper,
or just extend the existing assertRaises.  The Twisted test suite does the
latter, fwiw.
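In sketch form, such a helper might look like this (the name and the modern
`except ... as` spelling are ours, not an existing bzrlib API):

```python
def assert_raises_and_return(exc_class, callable_obj, *args, **kwargs):
    """Like assertRaises, but hand back the caught exception so the
    caller can inspect its attributes, subclass, and string form."""
    try:
        callable_obj(*args, **kwargs)
    except exc_class as e:
        return e
    raise AssertionError('%s was not raised' % exc_class.__name__)
```

The NoSuchFile test above would then shrink to two lines: catch the
exception with the helper, then assertEqual on its .path attribute.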

> > +
> > +    def test_simple_clone_conn(self):
> > +        """Test that cloning reuses the same connection."""
> > +        # we create a real connection not a loopback one, but it will use the
> > +        # same server and pipes
> > +        conn2 = self.transport.clone('.')
> > +        # XXX: shouldn't this assert something?
> > +        # assertIdentical(self.transport._client, conn2._client), perhaps?
> 
> ^- I think there should be some sort of test that the thing returned is
> a real object, and not None, etc. Perhaps just doing a 'get()' on it.

Yeah, this test is clearly not complete.  I've added:

    self.assertTrue(self.transport._client is conn2._client)

> 
> v- For consistency, Robert usually names stuff like this:
> test__remote_path(self). Since test_ is the prefix, and the function
> name is _remote_path().
> 
> > +
> > +    def test_remote_path(self):
> > +        self.assertEquals('/foo/bar',
> > +                          self.transport._remote_path('foo/bar'))
> > +

Fair enough.  Done.

> 
> ...
> v- We generally avoid has() on directories. But I suppose it is okay here.
> 
> I would also like to see:
> self.assertFalse(sub_conn.has('toffee'))
> and
> self.assertFalse(transport.has('apple'))
> 
> Because that ensures that the directories aren't getting mashed together
> in a weird way.
> > +    def test_open_dir(self):
> > +        """Test changing directory"""
> > +        transport = self.transport
> > +        self.backing_transport.mkdir('toffee')
> > +        self.backing_transport.mkdir('toffee/apple')
> > +        self.assertEquals('/toffee', transport._remote_path('toffee'))
> > +        self.assertTrue(transport.has('toffee'))
> > +        sub_conn = transport.clone('toffee')
> > +        self.assertTrue(sub_conn.has('apple'))
> > +

We've added those assertions, and a comment explaining why it's worth checking.

> ...
> 
> v- minor thing, we generally avoid file(...).write().
> Also, you don't use a newline, so it doesn't really matter, but 'wb'
> instead of 'w'.
> 
> And finally, I think we avoid 'self.assert_()' either use
> 'self.assertTrue()' or 'self.failUnless()'.
> 
> > +    def test_get_bundle(self):
> > +        from bzrlib.bundle import serializer
> > +        wt = self.make_branch_and_tree('.')
> > +        b = wt.branch
> > +        file('hello', 'w').write('hello world')
> > +        wt.add('hello')
> > +        wt.commit(message='add hello', rev_id='rev-1')
> > +        
> > +        server = smart.SmartServer(self.get_transport())
> > +        response = server.dispatch_command('get_bundle', ('.', 'rev-1'))
> > +        self.assert_(response.body.startswith('# Bazaar revision bundle '),
> > +                     "doesn't look like a bundle: %r" % response.body)
> > +        bundle = serializer.read_bundle(StringIO(response.body))
> 
> ^- This command asserts the format of a bundle (that it at least starts
> with '# Bazaar revision bundle'). Which is true for current formats, do
> we want to assert it always?
> 
> (For a smart server, it seems to make sense to talk in mostly raw-binary
> bundles, though having a plain text header would probably be a good thing)

We replaced the file(...).write() with self.build_tree_contents, which is
clearer, shorter, and avoids your concerns.

The assert seems to be redundant with the call to read_bundle.  I'll just remove
it.

> ...
> 
> v- I think it would be good for the next Bundle code to look more like a
> Branch internally. So we could specify a '--revision', etc.
> Also, I'm not sure that this TODO is in the right place.
> 
> > +#
> > +# Getting a bundle from a smart server is a bit different from reading a
> > +# bundle from a URL:
> > +#
> > +#  - we can reasonably remember the URL we last read from 
> > +#  - you can specify a revision number to pull, and we need to pass it across
> > +#    to the server as a limit on what will be requested
> > +#
> > +# TODO: Given a URL, determine whether it is a smart server or not (or perhaps
> > +# otherwise whether it's a bundle?)  Should this be a property or method of
> > +# the transport?  For the ssh protocol, we always know it's a smart server.
> > +# For http, we potentially need to probe.  But if we're explicitly given
> > +# bzr+http:// then we can skip that for now. 
> > 

These are the tests of the smart server, and it's reasonable to have TODOs here
to drive the development.

> ...
> 
> > +"""Smart-server protocol, client and server.
> > +
> > +Requests are sent as a command and list of arguments, followed by optional
> > +bulk body data.  Responses are similarly a response and list of arguments,
> > +followed by bulk body data. ::
> > +
> > +  SEP := '\001'
> > +    Fields are separated by Ctrl-A.
> > +  BULK_DATA := CHUNK+ TRAILER
> > +    Chunks can be repeated as many times as necessary.
> > +  CHUNK := CHUNK_LEN CHUNK_BODY
> > +  CHUNK_LEN := DIGIT+ NEWLINE
> > +    Gives the number of bytes in the following chunk.
> > +  CHUNK_BODY := BYTE[chunk_len]
> > +  TRAILER := SUCCESS_TRAILER | ERROR_TRAILER
> > +  SUCCESS_TRAILER := 'done' NEWLINE
> > +  ERROR_TRAILER := 
> > +
> 
> ^- You define SEP, but it is never used.
> Shouldn't there be a total data length at the beginning, rather than
> just having the length of each chunk? It would at least be nicer for
> clients, since they can know how much space they need to reserve ahead
> of time, rather than always adding X bytes more everytime they get a new
> chunk.
> 
> I think it also helps for pipelining, since you know how much you need
> to read, before switching to the next response. But I don't see any
> effort in this protocol to support pipelining, so maybe that isn't until
> a future version.
> 
> SSH handles this (IIRC) by having distinct channels, so each message
> comes back on a different channel, and could thus even be multiplexed.
> HTTP does it by having a Content-Length field. I suppose it might just
> depend on how the layering is done.
> 
> We should also really consider having a Tranport.close()/disconnect().
> Since stuff like spawning a remote bzr should really have a nicer
> cleanup. (Though I guess it needs to handle random disconnects anyway).

Robert says: There are two different things to consider here.

The chunking of each block of data is important for streaming from the server:
you can only send data when you know how long it is (to avoid bugs on network
disconnection), so prefixing the entire response with its length forces the
server to buffer the entire response -- not reasonable for a 4GB repository in
a single bundle, for instance.  Pipelining is entirely orthogonal to this; see
HTTP's TE header for an example.  HTTP's Content-Length field is a holdover
from 1.0 and should be very rarely seen in the field today except for
completely static content where the length is conveniently available, and only
if the content has not been gzipped or otherwise transfer-encoded.

Transport.close/disconnect is a problematic issue Andrew and I are talking
about at the moment; we'll probably have a spec for 0.12 about this.  Consider
that we want Transport to behave as a proxy for a remote directory.  For
testability and model-compatibility with HTTP it's easier if this is
essentially stateless, and close/disconnect are most definitely not stateless.
Consider what happens when we have three or four transports sharing a single
stream-based client -- i.e. SFTP transports, FTP transports, HTTP with
connection sharing, or SmartTransport with its client: does disconnect mean
that ONE Transport is finished, or that they should all be disconnected, with
an implicit reconnection possibly occurring later?  Consider the impact of
that ambiguity given the requirement that we only prompt for SSH passwords
etc. once during a single session with a single remote server.  (And that we
don't ever get a copy of the SSH passphrase either!)
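Concretely, the chunked framing lets the server emit each piece as soon as its
length is known, without buffering the whole response.  A minimal sketch of a
writer following the CHUNK_LEN/CHUNK_BODY/'done' grammar quoted earlier (the
helper name is illustrative, not part of the patch):

```python
def write_chunked(out, chunks):
    """Write an iterable of byte chunks using length-prefixed framing.

    Each chunk is preceded by its decimal length and a newline
    (CHUNK_LEN), followed by the raw bytes (CHUNK_BODY); the stream
    ends with the 'done' success trailer.  Only one chunk's worth of
    data ever needs to be in memory at a time.
    """
    for chunk in chunks:
        out.write(b'%d\n' % len(chunk))  # CHUNK_LEN
        out.write(chunk)                 # CHUNK_BODY
    out.write(b'done\n')                 # SUCCESS_TRAILER
```

Feeding it the file contents from the canned-get test above reproduces the
expected '17\ncontents\nof\nfile\ndone\n' wire bytes.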

> > +Paths are passed across the network.  The client needs to see a namespace that
> > +includes any repository that might need to be referenced, and the client needs
> > +to know about a root directory beyond which it cannot ascend.
> > +
> > +Servers run over ssh will typically want to be able to access any path the user 
> > +can access.  Public servers on the other hand (which might be over http, ssh
> > +or tcp) will typically want to restrict access to only a particular directory 
> > +and its children, so will want to do a software virtual root at that level.
> > +In other words they'll want to rewrite incoming paths to be under that level
> > +(and prevent escaping using ../ tricks.)
> > +
> > +URLs that include ~ should probably be passed across to the server verbatim
> > +and the server can expand them.  This will proably not be meaningful when 
> > +limited to a directory?
> > +"""
> > +
> 
> ^- Are we including '~', or are we just passing a relative path like
> sftp does? (If we want to support ~user/foo, then we probably need to
> pass the string)

As the "probably" and "?" indicate, this is future work.

> > +
> > +
> > +# TODO: A plain integer from query_version is too simple; should give some
> > +# capabilities too?
> 
> ^- definitely there should be a way to query for capabilities. But since
> you seem to be defining a connectionless communication, it can just be
> another command.

We have plenty to discuss with this protocol as we develop it :)

> ...
> 
> > +# TODO: Client and server warnings perhaps should contain some non-ascii bytes
> > +# to make sure the channel can carry them without trouble?  Test for this?
> 
> ^- Should 'warnings' contain them, or handshaking stuff (like 'hello')

This is about making sure the protocol blows up if the channel is not suitable
for the rpc-like protocol we are implementing -- that is, that we do not get
silent integrity failures.  Unfortunately we think that just throwing some
non-ascii bytes on the wire is insufficient; a cheap checksum in the header of
each chunk of data, and possibly at the tuple level too, would provide much
more complete integrity checking, with less chance of missing some imaginable
form of accidental corruption.  So we've updated the TODO.
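A cheap per-chunk checksum could look something like this -- a sketch only,
since the actual header format is still under discussion (CRC32 chosen here
just as an example of a cheap check):

```python
import zlib


def write_chunk(out, chunk):
    """Prefix the chunk with its length and a CRC32 of the body."""
    out.write(b'%d %08x\n' % (len(chunk), zlib.crc32(chunk) & 0xffffffff))
    out.write(chunk)


def read_chunk(inf):
    """Read one chunk back, raising if the checksum does not match,
    so corruption on an unsuitable channel fails loudly rather than
    silently."""
    length_field, crc_field = inf.readline().split()
    body = inf.read(int(length_field))
    if zlib.crc32(body) & 0xffffffff != int(crc_field, 16):
        raise ValueError('chunk checksum mismatch')
    return body
```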

> ...
> 
> > +# TODO: is it useful to allow multiple chunks in the bulk data?
> > +#
> > +# TODO: If we get an exception during transmission of bulk data we can't just
> > +# emit the exception because it won't be seen.
> 
> ^- I think it would be worthwhile to have a header on each chunk, that
> indicates it is another chunk. Then you can send an 'error' chunk as
> long as you finish the previous chunk.

We've made a note of this in that TODO.

> v- At least these are sorted, but they should probably be on separate
> lines. So adding a new one is obvious, and causes fewer conflicts.
> 
> > +from bzrlib import bzrdir, errors, revision, transport, trace, urlutils
> > +from bzrlib.transport import sftp, local
> > +from bzrlib.bundle.serializer import write_bundle
> > +from bzrlib.trace import mutter

Done.

> > +
> > +# must do this otherwise urllib can't parse the urls properly :(
> > +for scheme in ['ssh', 'bzr', 'bzr+loopback', 'bzr+ssh']:
> > +    transport.register_urlparse_netloc_protocol(scheme)
> > +del scheme
> > +
> 
> v- Errors are rarely defined outside of 'bzrlib/errors.py'
> 
> > +
> > +class BzrProtocolError(errors.TransportError):
> > +    """Generic bzr smart protocol error: %(details)s"""
> > +
> > +    def __init__(self, details):
> > +        self.details = details
> > +

Renamed to SmartProtocolError and moved.

> ...
> 
> v- If we used '\0' as the separator, we could do:
> ('\0'.join(args)).encode('utf-8')
> 
> Which is quite a bit more efficient. Are you thinking you need \0 as a
> valid character in your requests?
> 
> > +def _send_tuple(to_file, args):
> > +    to_file.write('\1'.join((a.encode('utf-8') for a in args)) + '\n')
> > +    to_file.flush()
> > +
> > +

This protocol doesn't make enough requests for the performance of this
encoding ever to be an issue.
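For comparison, the two encodings under discussion look like this (the
function names are just labels for the two variants, and the bytes/text split
is spelled in modern Python where the original was Python 2):

```python
def send_tuple_as_reviewed(args):
    """Encode each argument separately, join with \\x01 (the patch)."""
    return b'\x01'.join(a.encode('utf-8') for a in args) + b'\n'


def send_tuple_single_encode(args):
    """John's suggestion: join with NUL first, then encode the whole
    line in a single pass."""
    return ('\x00'.join(args)).encode('utf-8') + b'\n'
```

The single-encode form saves one encode call per argument; the open question
is only whether \x00 is ever needed as a valid byte inside a request field.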

> v-- I didn't see any tests with a LocalTransport as the backing server.
> It seems reasonable to have one that checks to make sure files show up
> on the filesystem.
> 
[...]

There are already tests that check that files turn up on the backing transport.
I think the coverage of this aspect is already fine.

> ...
> 
> v- You have a very odd mix of a 'stateless' protocol, versus a serve()
> connection that quits when it gets an empty message.
> 
> > +    def _serve_one_request(self):
[...]

While the protocol is stateless, this channel (stdin/stdout) is
connection-based.  As such, the channel needs a termination mechanism.
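In sketch form (hypothetical names; the real _serve_one_request does more),
the shape is a plain line-oriented serve loop where EOF on the channel is the
shutdown signal:

```python
def serve(in_file, out_file, dispatch):
    """Handle independent requests until the channel terminates.

    Each request is stateless, but stdin/stdout form a connection,
    so an empty read (EOF) is the natural termination mechanism.
    """
    while True:
        line = in_file.readline()
        if line == b'':  # EOF: the client hung up
            break
        out_file.write(dispatch(line.rstrip(b'\n')))
        out_file.flush()
```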

> v- Incomplete sentence.
> 
> > +class SmartServer(object):
> > +    """Protocol logic for smart.
> > +    

Fixed.

> v- All of these should probably be using the _bytes() forms, at least
> for now.
> 
> > +    def __init__(self, backing_transport):
> > +        self._backing_transport = backing_transport
> > +        
> > +    def do_hello(self):
> > +        """Answer a version request with my version."""
> > +        return SmartServerResponse(('ok', '1'))
> > +
> > +    def do_has(self, relpath):
> > +        r = self._backing_transport.has(relpath) and 'yes' or 'no'
> > +        return SmartServerResponse((r,))
> > +
> > +    def do_get(self, relpath):
> > +        backing_file = self._backing_transport.get(relpath)
> > +        return SmartServerResponse(('ok',), backing_file.read())
> > +

Fixed.

> 
> ...
> 
> v- I realize this is all we strictly need. But it might be nice if
> 'stat' had a few more fields. (Like nlinks would let us handle remote
> hardlinked files).
> 
> > +    def do_stat(self, relpath):
> > +        stat = self._backing_transport.stat(relpath)
> > +        return SmartServerResponse(('stat', str(stat.st_size), oct(stat.st_mode)))
> > +        

Sure.  We can do this in future.

> v- I guess my confusion is because you are calling the 'handler' a
> Server, but it isn't really *the* server. It is just a subroutine that
> is run per connection.
> So maybe SmartStreamHandler is a better name.
> 
> > +    def accept_and_serve(self):
> > +        conn, client_addr = self._server_socket.accept()
> > +        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
> > +        from_client = conn.makefile('r')
> > +        to_client = conn.makefile('w')
> > +        handler = SmartStreamServer(from_client, to_client,
> > +                self.backing_transport)
> > +        connection_thread = threading.Thread(None, handler.serve, name='smart-server-child')
> > +        connection_thread.setDaemon(True)
> > +        connection_thread.start()
> > +
> 
> ...

It's the thing that takes bytes in, dispatches to a handler method, and returns
bytes.  Calling it a "server" seems accurate to us.

> > +    def _optional_mode(self, mode):
> > +        if mode is None:
> > +            return ''
> > +        else:
> > +            return '%d' % mode
> 
> ^- Is mode supposed to be in '%d' or is it supposed to be 'oct()'.
> I saw earlier that you return Stat() information in octal, it seems like
> we should make requests in octal as well.

This is wire level serialisation and deserialisation.  It has nothing to do with
the stat information.

> v- All of these need to be switched over to the *_file and *_bytes
> alternatives.
> 
> > +
> > +    def mkdir(self, relpath, mode=None):
> > +        resp = self._client._call('mkdir', 
> > +                                  self._remote_path(relpath), 
> > +                                  self._optional_mode(mode))
> > +        self._translate_error(resp)
> > +
> > +    def put(self, relpath, upload_file, mode=None):
> > +        # FIXME: upload_file is probably not safe for non-ascii characters -
> > +        # should probably just pass all parameters as length-delimited
> > +        # strings?
> > +        # XXX: wrap strings in a StringIO.  There should be a better way of
> > +        # handling this.
> > +        if isinstance(upload_file, str):
> > +            upload_file = StringIO(upload_file)
> > +        resp = self._client._call_with_upload('put', 
> > +                                              (self._remote_path(relpath), 
> > +                                               self._optional_mode(mode)),
> > +                                              upload_file.read())
> > +        self._translate_error(resp)

It might be nice if Transports could choose to implement just put_bytes and
have put_file forward to it by default, or vice versa, rather than having to
implement both.  Robert suggests we could check in the base put_file whether
self.put is the same function as Transport.put, and if that's the case forward
to put_bytes rather than warning and forwarding to put().
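Robert's idea can be sketched roughly like this.  It's a standalone toy, not
bzrlib's actual base class, and the method-identity check is spelled for
modern Python (the Python 2 version of the day would compare im_func instead):

```python
class Transport(object):
    """Toy stand-in for the Transport base class."""

    def put_file(self, relpath, f, mode=None):
        # If a subclass still overrides the deprecated put(), honour
        # it; otherwise assume it implemented put_bytes and forward.
        if type(self).put is not Transport.put:
            return self.put(relpath, f, mode=mode)
        return self.put_bytes(relpath, f.read(), mode=mode)

    def put_bytes(self, relpath, data, mode=None):
        raise NotImplementedError

    def put(self, relpath, f, mode=None):
        raise NotImplementedError


class MemoryTransport(Transport):
    """Toy transport that only implements put_bytes."""

    def __init__(self):
        self.files = {}

    def put_bytes(self, relpath, data, mode=None):
        self.files[relpath] = data
```

A subclass implementing only put_bytes then gets a working put_file for free.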
 
> ...
> 
> v- I thought we wanted to avoid __del__ members, and instead prefer
> try/finally blocks.
> 
> > +class SmartStreamClient(SmartProtocolBase):
> > +    """Connection to smart server over two streams"""
> > +
> > +    def __init__(self, connect_func):
> > +        self._connect_func = connect_func
> > +        self._connected = False
> > +
> > +    def __del__(self):
> > +        self.disconnect()
> > +

Yes. Robert and I are planning to overhaul this as part of the 0.12 work. It
works for now but may well give grief during heavy use -- we'll find out.

> ...
> 
> v- As mentioned earlier, I think '--port' is a bad name for this parameter.
> 
> > +class cmd_serve(Command):
> > +    """Run the bzr server.
> > +    """
> > +    takes_options = [
> > +        Option('inet',
> > +               help='serve on stdin/out for use from inetd or sshd'),
> > +        Option('port',
> > +               help='listen for connections on nominated port of the form '
> > +                    '[hostname:]portnumber. Passing 0 as the port number will '
> > +                    'result in a dynamically allocated port.',
> > +               type=str),
> > +        Option('directory',
> > +               help='serve contents of directory',
> > +               type=unicode),
> > +        ]

Got a better name? FWIW squid uses http_port to mean a field that accepts
'host:port', 'port' etc.

Also we should set a default port for the bzr:// url scheme.
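For what it's worth, parsing the '[hostname:]portnumber' form the help string
describes is straightforward -- a sketch, with an assumed loopback default
host rather than whatever the real option handler chooses:

```python
def parse_port_arg(value, default_host='127.0.0.1'):
    """Split a '[hostname:]portnumber' option value.

    Port 0 means 'let the OS pick a free port', matching the help
    text's dynamically-allocated-port behaviour.
    """
    if ':' in value:
        host, port = value.rsplit(':', 1)
    else:
        host, port = default_host, value
    return host, int(port)
```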

> ---
> Finally... That took a little while.
> 
> Anyway, I think there is a lot of promise here. But there are a lot of
> points of design that haven't been revealed until now, so I had a lot of
> comments to make. Some of it should block on getting it accepted, but
> not all of it.

I hope we've addressed this sufficiently to get a +1.

Thanks for the review!

-Andrew and Robert.


More information about the bazaar mailing list