[step 11] request: 12 steps towards a high performance server

John Arbash Meinel john at arbash-meinel.com
Wed Sep 13 20:46:30 BST 2006


John Arbash Meinel wrote:
>>    http://people.ubuntu.com/~andrew/bzr/add-get_smart_client/
>>      Adds support for bzr:// and bzr+ssh:// urls, adds a new 'bzr serve'
>>      command, and adds a Transport.get_smart_client method.  This includes a
>>      SmartTransport, that performs file operations over the RPC added in this
>>      branch. 
> 

Let me start by saying I think it is going to be really fun to have a
real smart server. And Launchpad is probably going to be a great place
to deploy it. I'm still hoping that we will be able to keep reasonably
good performance over standard protocols. (Obviously I don't expect them
to be up to par with a real server, but I think it is a good thing that
bzr supports them).

...

> +class TestBzrServe(TestCaseWithTransport):
> +    
> +    def test_bzr_serve_port(self):
> +        # Make a branch
> +        self.make_branch('.')
> +
> +        # Serve that branch from the current directory
> +        process = self.start_bzr_subprocess('serve', '--port', 'localhost:0')
> +        port_line = process.stdout.readline()
> +        prefix = 'listening on port: '
> +        self.assertStartsWith(port_line, prefix)
> +        port = int(port_line[len(prefix):])
> +
> +        # Connect to the server
> +        branch = Branch.open('bzr://localhost:%d/' % port)
> +
> +        # We get a working branch
> +        branch.repository.get_revision_graph()
> +        self.assertEqual(None, branch.last_revision())
> +
> +        # Shutdown the server
> +        result = self.finish_bzr_subprocess(process, retcode=3,
> +                                            send_signal=signal.SIGINT)
> +        self.assertEqual('', result[0])
> +        self.assertEqual('bzr: interrupted\n', result[1])

^- The first thing I notice is that you are using SIGINT to kill 'bzr
serve', which you can't do on Windows. I don't really know what to do
about that, though maybe we just need to add a 'shutdown' command to the
RPC code, perhaps only enabled in the test server so that people can't
randomly shut down a real bzr server.
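
Something along these lines, perhaps. The command name and the
_allow_shutdown/_should_terminate flags are all made up here; they just
follow the do_*() handler style that appears later in the patch:

    def do_shutdown(self):
        # hypothetical command: only honour it when the server was started
        # by the test suite, so a public server can't be shut down remotely
        if not self._allow_shutdown:
            raise errors.TransportError('shutdown not permitted')
        self._should_terminate = True
        return SmartServerResponse(('ok',))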

...

> +"""Tests for smart transport"""
> +
> +# all of this deals with byte strings so this is safe
> +from cStringIO import StringIO
> +import subprocess
> +import sys
> +
> +import bzrlib
> +from bzrlib import tests, errors, bzrdir
> +from bzrlib.transport import local, memory, smart, get_transport

^- Both of these lines are incorrect. At a minimum, they need to be in
lexicographically sorted order, but it is also nicer to put each one on
its own line, because it makes for better diffs later. So I suggest:

from bzrlib import (
    bzrdir,
    errors,
    tests,
    )

And similarly for the .transport line.
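
For that line it would be:

from bzrlib.transport import (
    get_transport,
    local,
    memory,
    smart,
    )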

v- This is commented out, and not separated by two blank lines from the
import statements. At a minimum, add a blank line, or possibly just
delete the whole thing.

> +
> +## class SmartURLTests(tests.TestCase):
> +##     """Tests for handling of URLs and detection of smart servers"""
> +## 
> +##     def test_bzr_url_is_smart(self):

...

v- Are we positive we want to use 'bzr://' as our url scheme? It seems
right to me, but I don't believe there has been any discussion of it.

> +    def test_plausible_url(self):
> +        self.assert_(self.get_url().startswith('bzr://'))

...

> +class BasicSmartTests(tests.TestCase):
> +    
> +    def test_smart_query_version(self):
> +        """Feed a canned query version to a server"""
> +        to_server = StringIO('hello\n')
> +        from_server = StringIO()
> +        server = smart.SmartStreamServer(to_server, from_server, local.LocalTransport('file:///'))
> +        server._serve_one_request()
> +        self.assertEqual('ok\0011\n',
> +                         from_server.getvalue())

^- So the opening request is a plain 'hello' in order to get the 'ok +
version number'? I would rather avoid things that other protocols might
want to use.

Also, do we want to be more like SSH, which sends the version string at
connect time, rather than HTTP, which requires an initial query? I
probably prefer the initial query, because it makes it easier to
multiplex on the same port. (So if there is a GET request, you just act
like an HTTP server, but if there is a HELLO SMART SERVER, then you act
like a bzr smart server.)

It also seems weird to create the SmartStreamServer with the text that
you are going to be passing it, though maybe you are just passing
file-like objects and pre-filling one to start with. So I suppose it is
okay.
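
On the multiplexing point, roughly what I have in mind is something like
this (all the names here are made up; serve_http() and serve_smart()
just stand in for whatever handles each protocol):

    def dispatch_connection(conn):
        # peek at the first request line and pick a protocol based on it
        first_line = conn.makefile('r').readline()
        if first_line.startswith('GET ') or first_line.startswith('POST '):
            serve_http(conn, first_line)     # plain dumb-server access
        elif first_line == 'hello\n':
            serve_smart(conn, first_line)    # bzr smart protocol
        else:
            conn.close()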


> +
> +    def test_canned_get_response(self):
> +        transport = memory.MemoryTransport('memory:///')
> +        transport.put('testfile', StringIO('contents\nof\nfile\n'))
> +        to_server = StringIO('get\001./testfile\n')
> +        from_server = StringIO()
> +        server = smart.SmartStreamServer(to_server, from_server, transport)
> +        server._serve_one_request()
> +        self.assertEqual('ok\n'
> +                         '17\n'
> +                         'contents\nof\nfile\n'
> +                         'done\n',
> +                         from_server.getvalue())

^- For 'hello' you returned a string with \001-separated fields, but
here you are returning newline-separated stuff. It seems like it would
be a lot better to be consistent.

Is the smart server intended to be more connectionless? (I think RPC
generally works that way.) If it is, I guess that is okay, though I
think I would rather it be more of a conversational server, such that
you start with a handshake and then talk back and forth from there.

Also, shouldn't requests use length-prefixed strings, rather than
having the server read until a newline? Maybe this is just the way RPC
works, but I really prefer SSH's method, which tells you how long the
next message is going to be, since then the server can be a little bit
smarter about oversized requests, invalid data, etc.
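
For comparison, a length-prefixed request could be framed roughly like
this (purely illustrative, not what the patch does; it reuses the
\001-join style from _send_tuple later in the patch):

    def send_request(to_file, args):
        # prefix the whole request line with its length in bytes
        data = '\1'.join(a.encode('utf-8') for a in args) + '\n'
        to_file.write('%d\n%s' % (len(data), data))
        to_file.flush()

    def read_request(from_file):
        length = int(from_file.readline())
        data = from_file.read(length)   # the server knows exactly how much to read
        return [f.decode('utf-8') for f in data.rstrip('\n').split('\1')]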

...

> +    def test_server_subprocess(self):
> +        """Talk to a server started as a subprocess
> +        
> +        This is similar to running it over ssh, except that it runs in the same machine 
> +        without ssh intermediating.
> +        """
> +        args = [sys.executable, sys.argv[0], 'serve', '--inet']
> +        child = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
> +                                 close_fds=True)
> +        conn = smart.SmartStreamClient(lambda: (child.stdout, child.stdin))
> +        conn.query_version()
> +        conn.query_version()
> +        conn.disconnect()
> +        returncode = child.wait()
> +        self.assertEquals(0, returncode)

^- Shouldn't you be checking the output of 'query_version', i.e. that it
at least looks something like a version string? Maybe even just:
val1 = conn.query_version()
val2 = conn.query_version()
self.assertEqual(val1, val2)
self.assertNotEqual('', val1)
self.assertNotEqual(None, val1)


...

> +    def test_start_tcp_server(self):
> +        url = self.server.get_url()
> +        self.assertContainsRe(url, r'^bzr://127\.0\.0\.1:[0-9]{2,}/')

^- Will it always be at 127.0.0.1?

> +
> +    def test_smart_transport_has(self):
> +        """Checking for file existence over smart."""

v- It is a bit cleaner to use put_bytes('foo', 'contents of foo\n').
I realize that doesn't exist yet, but all of the 'put()' routines are
supposed to be deprecated.

> +        self.backing_transport.put("foo", StringIO("contents of foo\n"))
> +        self.assertTrue(self.transport.has("foo"))
> +        self.assertFalse(self.transport.has("non-foo"))
> +
> +    def test_smart_transport_get(self):
> +        """Read back a file over smart."""
> +        self.backing_transport.put("foo", StringIO("contents\nof\nfoo\n"))
> +        fp = self.transport.get("foo")
> +        self.assertEqual('contents\nof\nfoo\n', fp.read())

^- There is also get_bytes(), but this is a reasonable start.

...

> +    def test_get_error_enoent(self):
> +        """Error reported from server getting nonexistent file."""
> +        # The path in a raised NoSuchFile exception should be the precise path
> +        # asked for by the client. This gives meaningful and unsurprising errors
> +        # for users.

v-- This brings up something that I've wanted to see for a while: an
'assertRaises' that somehow checks the string representation of the
exception. We check that exceptions are raised, but not that their
messages will be meaningful to the user.
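
For example, a helper along these lines (hypothetical, not in bzrlib)
would let the test below also check the message shown to the user:

    def assertRaisesWithMessage(self, exc_class, expected_str,
                                callable_obj, *args):
        # check both the exception type and its str()
        try:
            callable_obj(*args)
        except exc_class, e:
            self.assertEqual(expected_str, str(e))
        else:
            self.fail('%s not raised' % exc_class.__name__)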

> +        try:
> +            self.transport.get('not%20a%20file')
> +        except errors.NoSuchFile, e:
> +            self.assertEqual('not%20a%20file', e.path)
> +        else:
> +            self.fail("get did not raise expected error")
> +
> +    def test_simple_clone_conn(self):
> +        """Test that cloning reuses the same connection."""
> +        # we create a real connection not a loopback one, but it will use the
> +        # same server and pipes
> +        conn2 = self.transport.clone('.')
> +        # XXX: shouldn't this assert something?
> +        # assertIdentical(self.transport._client, conn2._client), perhaps?

^- I think there should be some sort of test that the thing returned is
a real object, and not None, etc. Perhaps just doing a 'get()' on it.

v- For consistency, Robert usually names tests like this
test__remote_path(self), since test_ is the prefix and the function
name is _remote_path().

> +
> +    def test_remote_path(self):
> +        self.assertEquals('/foo/bar',
> +                          self.transport._remote_path('foo/bar'))
> +

...
v- We generally avoid has() on directories, but I suppose it is okay here.

I would also like to see:
self.assertFalse(sub_conn.has('toffee'))
and
self.assertFalse(transport.has('apple'))

because that ensures that the directories aren't getting mashed together
in a weird way.

> +    def test_open_dir(self):
> +        """Test changing directory"""
> +        transport = self.transport
> +        self.backing_transport.mkdir('toffee')
> +        self.backing_transport.mkdir('toffee/apple')
> +        self.assertEquals('/toffee', transport._remote_path('toffee'))
> +        self.assertTrue(transport.has('toffee'))
> +        sub_conn = transport.clone('toffee')
> +        self.assertTrue(sub_conn.has('apple'))
> +

...

v- Minor thing: we generally avoid file(...).write(). Also, you don't
use a newline, so it doesn't really matter here, but use 'wb' instead
of 'w'.

And finally, I think we avoid 'self.assert_()'; use either
'self.assertTrue()' or 'self.failUnless()'.
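
For example (build_tree_contents may or may not exist at this point;
the explicit form always works):

    self.build_tree_contents([('hello', 'hello world')])

or

    f = open('hello', 'wb')
    try:
        f.write('hello world')
    finally:
        f.close()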

> +    def test_get_bundle(self):
> +        from bzrlib.bundle import serializer
> +        wt = self.make_branch_and_tree('.')
> +        b = wt.branch
> +        file('hello', 'w').write('hello world')
> +        wt.add('hello')
> +        wt.commit(message='add hello', rev_id='rev-1')
> +        
> +        server = smart.SmartServer(self.get_transport())
> +        response = server.dispatch_command('get_bundle', ('.', 'rev-1'))
> +        self.assert_(response.body.startswith('# Bazaar revision bundle '),
> +                     "doesn't look like a bundle: %r" % response.body)
> +        bundle = serializer.read_bundle(StringIO(response.body))

^- This command asserts the format of a bundle (that it at least starts
with '# Bazaar revision bundle'). That is true for current formats, but
do we want to assert it always?

(For a smart server, it seems to make sense to talk in mostly raw-binary
bundles, though having a plain text header would probably be a good thing)

...

v- I think it would be good for the next Bundle code to look more like a
Branch internally, so we could specify a '--revision', etc. Also, I'm
not sure that this TODO is in the right place.

> +#
> +# Getting a bundle from a smart server is a bit different from reading a
> +# bundle from a URL:
> +#
> +#  - we can reasonably remember the URL we last read from 
> +#  - you can specify a revision number to pull, and we need to pass it across
> +#    to the server as a limit on what will be requested
> +#
> +# TODO: Given a URL, determine whether it is a smart server or not (or perhaps
> +# otherwise whether it's a bundle?)  Should this be a property or method of
> +# the transport?  For the ssh protocol, we always know it's a smart server.
> +# For http, we potentially need to probe.  But if we're explicitly given
> +# bzr+http:// then we can skip that for now. 
> 


...

> +"""Smart-server protocol, client and server.
> +
> +Requests are sent as a command and list of arguments, followed by optional
> +bulk body data.  Responses are similarly a response and list of arguments,
> +followed by bulk body data. ::
> +
> +  SEP := '\001'
> +    Fields are separated by Ctrl-A.
> +  BULK_DATA := CHUNK+ TRAILER
> +    Chunks can be repeated as many times as necessary.
> +  CHUNK := CHUNK_LEN CHUNK_BODY
> +  CHUNK_LEN := DIGIT+ NEWLINE
> +    Gives the number of bytes in the following chunk.
> +  CHUNK_BODY := BYTE[chunk_len]
> +  TRAILER := SUCCESS_TRAILER | ERROR_TRAILER
> +  SUCCESS_TRAILER := 'done' NEWLINE
> +  ERROR_TRAILER := 
> +

^- You define SEP, but it is never used.
Shouldn't there be a total data length at the beginning, rather than
just having the length of each chunk? It would at least be nicer for
clients, since they can know how much space they need to reserve ahead
of time, rather than always adding X more bytes every time they get a
new chunk.

I think it also helps for pipelining, since you know how much you need
to read before switching to the next response. But I don't see any
effort in this protocol to support pipelining, so maybe that isn't until
a future version.

SSH handles this (IIRC) by having distinct channels, so each message
comes back on a different channel, and could thus even be multiplexed.
HTTP does it by having a Content-Length field. I suppose it might just
depend on how the layering is done.

We should also really consider having a Transport.close()/disconnect(),
since stuff like spawning a remote bzr should really have a nicer
cleanup. (Though I guess it needs to handle random disconnects anyway.)
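
To make the total-length point concrete: with one length up front, like
Content-Length, the client side of a pipelined read could be as simple
as this (illustrative only, not the patch's framing):

    def read_response_body(from_file):
        # one total length for the whole body, so a pipelining client
        # knows exactly where this response ends
        total_len = int(from_file.readline())
        body = from_file.read(total_len)
        trailer = from_file.readline()   # 'done\n' on success, or an error trailer
        return body, trailer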


> +Paths are passed across the network.  The client needs to see a namespace that
> +includes any repository that might need to be referenced, and the client needs
> +to know about a root directory beyond which it cannot ascend.
> +
> +Servers run over ssh will typically want to be able to access any path the user 
> +can access.  Public servers on the other hand (which might be over http, ssh
> +or tcp) will typically want to restrict access to only a particular directory 
> +and its children, so will want to do a software virtual root at that level.
> +In other words they'll want to rewrite incoming paths to be under that level
> +(and prevent escaping using ../ tricks.)
> +
> +URLs that include ~ should probably be passed across to the server verbatim
> +and the server can expand them.  This will proably not be meaningful when 
> +limited to a directory?
> +"""
> +

^- Are we including '~', or are we just passing a relative path like
sftp does? (If we want to support ~user/foo, then we probably need to
pass the string)


> +
> +
> +# TODO: A plain integer from query_version is too simple; should give some
> +# capabilities too?

^- Definitely there should be a way to query for capabilities. But since
you seem to be defining connectionless communication, it can just be
another command.
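
i.e. something like this, alongside do_hello (the command name and the
capability strings here are invented examples):

    def do_get_capabilities(self):
        # advertise optional features as a plain response tuple
        return SmartServerResponse(('ok', 'get_bundle', 'readv'))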

...

> +# TODO: Client and server warnings perhaps should contain some non-ascii bytes
> +# to make sure the channel can carry them without trouble?  Test for this?

^- Should 'warnings' contain them, or handshaking stuff (like 'hello')?

...

> +# TODO: is it useful to allow multiple chunks in the bulk data?
> +#
> +# TODO: If we get an exception during transmission of bulk data we can't just
> +# emit the exception because it won't be seen.

^- I think it would be worthwhile to have a header on each chunk that
indicates it is another chunk. Then you can send an 'error' chunk as
long as you finish the previous chunk.
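
Roughly what I mean (illustrative framing only, not what the patch
does): each chunk carries a one-byte status, so an error can still be
reported mid-stream:

    def send_chunk(to_file, data):
        # 'c' says another data chunk follows the length
        to_file.write('c%d\n%s' % (len(data), data))

    def send_error_chunk(to_file, message):
        # 'e' says this chunk is an error report; the client stops
        # reading bulk data after it
        to_file.write('e%d\n%s' % (len(message), message))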


v- Current dir is handled by the client. I think it can stay this way.
> +#
> +# TODO: Clone method on Transport; should work up towards parent directory;
> +# unclear how this should be stored or communicated to the server... maybe
> +# just pass it on all relevant requests?
> +#
> +# TODO: Better name than clone() for changing between directories.  How about
> +# open_dir or change_dir or chdir?
> +#
> +# TODO: Is it really good to have the notion of current directory within the
> +# connection?  Perhaps all Transports should factor out a common connection
> +# from the thing that has the directory context?
> +#

...

v- I think this is just where you need a separation between what the
server's root is and what the client's root is. The server doesn't
maintain a current working directory, and only deals in paths relative
to its transport.base, while the client keeps track of a 'base' and
only sends absolute paths to the server.

...

v- This is the standard 'cd /; cd ..' behavior, but I would be fine
with switching all transports to raise an exception.

> +# FIXME: This transport, with several others, has imperfect handling of paths
> +# within urls.  It'd probably be better for ".." from a root to raise an error
> +# rather than return the same directory as we do at present.
> +#

...

v- I think you already did this one. At least you seem to have a test
for it.

> +# TODO: Transport should probably not implicitly connect from its constructor;
> +# it's quite useful to be able to deal with Transports representing locations
> +# without actually opening it.
> +#

v- Because of latency, I think it is better to always pass the whole
file back. (Look at sftp, where we have a round trip to open the file
and another one just to read it.)

We also have 'readv()', which lets us specify ahead of time just the
small ranges we want to read.
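
If I remember the readv() signature right, it takes (offset, length)
pairs and yields (offset, data) back, so the client can name all the
ranges it needs in a single request, something like:

    # sketch: ask for two byte ranges of 'foo' at once
    for offset, data in transport.readv('foo', [(0, 100), (1000, 200)]):
        pass  # do something with each range here
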
> +# TODO: Perhaps support file-level readwrite operations over the transport
> +# too.
> +#
> +# TODO: SmartBzrDir class, proxying all Branch etc methods across to another
> +# branch doing file-level operations.
> +
> +
> +from cStringIO import StringIO
> +import errno
> +import os
> +import socket
> +import sys
> +import tempfile
> +import threading
> +import urllib
> +import urlparse
> +

v- At least these are sorted, but they should probably be on separate
lines, so that adding a new one is obvious and causes fewer conflicts.

> +from bzrlib import bzrdir, errors, revision, transport, trace, urlutils
> +from bzrlib.transport import sftp, local
> +from bzrlib.bundle.serializer import write_bundle
> +from bzrlib.trace import mutter
> +
> +# must do this otherwise urllib can't parse the urls properly :(
> +for scheme in ['ssh', 'bzr', 'bzr+loopback', 'bzr+ssh']:
> +    transport.register_urlparse_netloc_protocol(scheme)
> +del scheme
> +

v- Errors are rarely defined outside of 'bzrlib/errors.py'.

> +
> +class BzrProtocolError(errors.TransportError):
> +    """Generic bzr smart protocol error: %(details)s"""
> +
> +    def __init__(self, details):
> +        self.details = details
> +

...

v- If we used '\0' as the separator, we could do:
('\0'.join(args)).encode('utf-8')
which is quite a bit more efficient. Are you thinking you need \0 as a
valid character in your requests?

> +def _send_tuple(to_file, args):
> +    to_file.write('\1'.join((a.encode('utf-8') for a in args)) + '\n')
> +    to_file.flush()
> +
> +


v-- I didn't see any tests with a LocalTransport as the backing
transport. It seems reasonable to have one that checks that files show
up on the filesystem.
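
Something like this, perhaps (sketch only; it assumes a setUp like the
other smart transport tests, but with a LocalTransport backing, and
assumes the usual bzrlib test helpers are available):

    def test_put_shows_up_on_disk(self):
        # write through the smart transport, then check the real filesystem
        self.transport.put('foo', StringIO('contents of foo\n'))
        self.failUnlessExists('foo')
        self.assertFileEqual('contents of foo\n', 'foo')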

> +
> +    The server passes requests through to an underlying backing transport, 
> +    which will typically be a LocalTransport looking at the server's filesystem.
> +    """
> +
> +    def __init__(self, in_file, out_file, backing_transport):
> +        """Construct new server.
> +
> +        :param in_file: Python file from which requests can be read.
> +        :param out_file: Python file to write responses.
> +        :param backing_transport: Transport for the directory served.
> +        """
> +        self._in = in_file
> +        self._out = out_file
> +        self.smart_server = SmartServer(backing_transport)
> +        # server can call back to us to get bulk data - this is not really
> +        # ideal, they should get it per request instead
> +        self.smart_server._recv_body = self._recv_bulk

...

v- You have a very odd mix of a 'stateless' protocol and a serve()
connection that quits when it gets an empty message.

> +    def _serve_one_request(self):
> +        """Read one request from input, process, send back a response.
> +        
> +        :return: False if the server should terminate, otherwise None.
> +        """
> +        req_args = self._recv_tuple()
> +        if req_args == None:
> +            # client closed connection
> +            return False  # shutdown server
> +        try:
> +            response = self.smart_server.dispatch_command(req_args[0], req_args[1:])
> +            self._send_tuple(response.args)
> +            if response.body is not None:
> +                self._send_bulk_data(response.body)
> +        except KeyboardInterrupt:
> +            raise
> +        except Exception, e:
> +            # everything else: pass to client, flush, and quit
> +            self._send_error_and_disconnect(e)
> +            return False
> +
> +    def serve(self):
> +        """Serve requests until the client disconnects."""
> +        # Keep a reference to stderr because the sys module's globals get set to
> +        # None during interpreter shutdown.
> +        from sys import stderr
> +        try:
> +            while self._serve_one_request() != False:
> +                pass
> +        except Exception, e:
> +            stderr.write("%s terminating on exception %s\n" % (self, e))
> +            raise

v- Incomplete sentence.

> +class SmartServer(object):
> +    """Protocol logic for smart.
> +    
> +    This doesn't handle serialization at all, it just processes requests and
> +    creates responses.
> +    """
> +
> +    # TODO: Better way of representing the body for commands that take it,
> +    # and allow it to be streamed into the server.
> +    


v- All of these should probably be using the _bytes() forms, at least
for now.

> +    def __init__(self, backing_transport):
> +        self._backing_transport = backing_transport
> +        
> +    def do_hello(self):
> +        """Answer a version request with my version."""
> +        return SmartServerResponse(('ok', '1'))
> +
> +    def do_has(self, relpath):
> +        r = self._backing_transport.has(relpath) and 'yes' or 'no'
> +        return SmartServerResponse((r,))
> +
> +    def do_get(self, relpath):
> +        backing_file = self._backing_transport.get(relpath)
> +        return SmartServerResponse(('ok',), backing_file.read())
> +

...

v- I realize this is all we strictly need, but it might be nice if
'stat' had a few more fields. (For example, st_nlink would let us
handle hardlinked files on the remote side.)

> +    def do_stat(self, relpath):
> +        stat = self._backing_transport.stat(relpath)
> +        return SmartServerResponse(('stat', str(stat.st_size), oct(stat.st_mode)))
> +        
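
It could be as simple as extending the response tuple (sketch only; the
extra field and its position are just illustrative):

    def do_stat(self, relpath):
        st = self._backing_transport.stat(relpath)
        # also report the link count, so clients can notice hardlinks
        return SmartServerResponse(('stat', str(st.st_size), oct(st.st_mode),
                                    str(st.st_nlink)))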

v- get_bundle should almost definitely be defined in terms of 2 revision
ids (if not 3). Maybe this was just for testing, but I think clients are
more likely to want just the tip of a branch, not the whole thing. But
there is an 'exploration' phase that needs to happen first.

> +    def do_get_bundle(self, path, revision_id):
> +        # open transport relative to our base
> +        t = self._backing_transport.clone(path)
> +        control, extra_path = bzrdir.BzrDir.open_containing_from_transport(t)
> +        repo = control.open_repository()
> +        tmpf = tempfile.TemporaryFile()
> +        base_revision = revision.NULL_REVISION
> +        write_bundle(repo, revision_id, base_revision, tmpf)
> +        tmpf.seek(0)
> +        return SmartServerResponse((), tmpf.read())
> +
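
A two-revision form might look roughly like this (sketch only, reusing
the names from the hunk above; the base_revision_id parameter is the
addition):

    def do_get_bundle(self, path, revision_id, base_revision_id):
        # let the client bound the bundle instead of always starting
        # from NULL_REVISION
        t = self._backing_transport.clone(path)
        control, extra_path = bzrdir.BzrDir.open_containing_from_transport(t)
        repo = control.open_repository()
        tmpf = tempfile.TemporaryFile()
        write_bundle(repo, revision_id, base_revision_id, tmpf)
        tmpf.seek(0)
        return SmartServerResponse((), tmpf.read())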

...

v- I guess my confusion is because you are calling the 'handler' a
Server, but it isn't really *the* server; it is just a subroutine that
is run per connection. So maybe SmartStreamHandler is a better name.

> +    def accept_and_serve(self):
> +        conn, client_addr = self._server_socket.accept()
> +        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
> +        from_client = conn.makefile('r')
> +        to_client = conn.makefile('w')
> +        handler = SmartStreamServer(from_client, to_client,
> +                self.backing_transport)
> +        connection_thread = threading.Thread(None, handler.serve, name='smart-server-child')
> +        connection_thread.setDaemon(True)
> +        connection_thread.start()
> +

...
v- The name 'clone()' is partly used because we may have multiple
Transport objects connected to the same host, just based in different
directories (BzrDir.root_transport versus BzrDir.transport).

We could change this so that the current directory state is maintained
in a separate object, but then either that object becomes the primary
one carrying all of the get/put requests, or you have to pass it to
every get/put request.

It might be better to just pull out the concept of a connection and
make it an explicit thing that you use to say
'connection.get_rooted_transport()' or something like that.

But the goal is that Branch & BzrDir don't really have to track their
base location and make absolute requests on the Transport; the
Transport handles the relative path issues.

> +    def clone(self, relative_url):
> +        """Make a new SmartTransport related to me, sharing the same connection.
> +
> +        This essentially opens a handle on a different remote directory.
> +        """
> +        if relative_url is None:
> +            return self.__class__(self.base, self)
> +        else:
> +            return self.__class__(self.abspath(relative_url), self)
> +

...

> +    def _optional_mode(self, mode):
> +        if mode is None:
> +            return ''
> +        else:
> +            return '%d' % mode

^- Is mode supposed to be in '%d', or is it supposed to be oct()? I saw
earlier that you return stat() information in octal, so it seems like
we should make requests in octal as well.
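
i.e. perhaps:

    def _optional_mode(self, mode):
        if mode is None:
            return ''
        return oct(mode)   # e.g. '0755', matching the octal stat response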

v- All of these need to be switched over to the *_file and *_bytes
alternatives.

> +
> +    def mkdir(self, relpath, mode=None):
> +        resp = self._client._call('mkdir', 
> +                                  self._remote_path(relpath), 
> +                                  self._optional_mode(mode))
> +        self._translate_error(resp)
> +
> +    def put(self, relpath, upload_file, mode=None):
> +        # FIXME: upload_file is probably not safe for non-ascii characters -
> +        # should probably just pass all parameters as length-delimited
> +        # strings?
> +        # XXX: wrap strings in a StringIO.  There should be a better way of
> +        # handling this.
> +        if isinstance(upload_file, str):
> +            upload_file = StringIO(upload_file)
> +        resp = self._client._call_with_upload('put', 
> +                                              (self._remote_path(relpath), 
> +                                               self._optional_mode(mode)),
> +                                              upload_file.read())
> +        self._translate_error(resp)

...

v- I thought we wanted to avoid __del__ methods, and instead prefer
try/finally blocks.

> +class SmartStreamClient(SmartProtocolBase):
> +    """Connection to smart server over two streams"""
> +
> +    def __init__(self, connect_func):
> +        self._connect_func = connect_func
> +        self._connected = False
> +
> +    def __del__(self):
> +        self.disconnect()
> +
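
i.e. have callers do something like this instead of relying on __del__
(just a sketch of the calling pattern):

    client = SmartStreamClient(connect_func)
    try:
        client.query_version()
        # ... use the connection ...
    finally:
        client.disconnect()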


...

v- As mentioned earlier, I think '--port' is a bad name for this parameter.

> +class cmd_serve(Command):
> +    """Run the bzr server.
> +    """
> +    takes_options = [
> +        Option('inet',
> +               help='serve on stdin/out for use from inetd or sshd'),
> +        Option('port',
> +               help='listen for connections on nominated port of the form '
> +                    '[hostname:]portnumber. Passing 0 as the port number will '
> +                    'result in a dynamically allocated port.',
> +               type=str),
> +        Option('directory',
> +               help='serve contents of directory',
> +               type=unicode),
> +        ]

---
Finally... That took a little while.

Anyway, I think there is a lot of promise here. But there are a lot of
points of design that haven't been revealed until now, so I had a lot of
comments to make. Some of them should block the patch being accepted,
but not all of them.

John
=:->

