# Bazaar revision bundle v0.9
#
# message:
#   sanitize developers docs
# committer: Alexander Belchenko <bialix@ukr.net>
# date: Tue 2007-06-05 11:02:04.938999891 +0300

=== renamed file doc/developers/bundle-creation.rst // doc/developers/bundle-cr
... eation.txt
--- doc/developers/bundle-creation.rst
+++ doc/developers/bundle-creation.txt
@@ -26,7 +26,7 @@
 :i: length of stored diff
 
 Needs
-=====
+-----
 - Improved common ancestor algorithm
 - Access to partial revision graph proportional to relevant revisions
 - Access to changed files proportional to number of changed files and

=== renamed file doc/developers/initial-push-pull.rst // doc/developers/initial
... -push-pull.txt
--- doc/developers/initial-push-pull.rst
+++ doc/developers/initial-push-pull.txt
@@ -1,5 +1,5 @@
 Initial push / pull
-###################
+===================
 
 Optimal case
 ------------
@@ -43,12 +43,12 @@
 ------------------
 
 Phase 1
-=======
+~~~~~~~
 Push: ask if there is a repository, and if not, what formats are okay
 Pull: Nothing
 
 Phase 2
-=======
+~~~~~~~
 Push: send initial push command, streaming data in acceptable format, following
 disk case strategy
 Pull: receive initial pull command, specifying format

=== renamed file doc/developers/merge-scaling.rst // doc/developers/merge-scali
... ng.txt
--- doc/developers/merge-scaling.rst
+++ doc/developers/merge-scaling.txt
@@ -25,11 +25,11 @@
 :i: number of revisions between base and other
 
 Needs
-=====
+-----
 - Access to revision graph proportional to number of revisions read
 - Access to changed file metadata proportional to number of changes and number of intervening revisions.
 - O(1) access to fulltexts
 
 Notes
-=====
+-----
 Multiparent deltas may offer some nice properties for performance of annotation based merging.

=== modified file doc/developers/add.txt
--- doc/developers/add.txt
+++ doc/developers/add.txt
@@ -1,5 +1,5 @@
 Add
----
+===
 
 Add is used to recursively version some paths supplied by the user. Paths that
 match ignore rules are not versioned, and paths that become versioned are
@@ -7,7 +7,7 @@
 a single tree, but perhaps with nested trees this should change.
 
 Least work we can hope to perform
-=================================
+---------------------------------
 
 * Read a subset of the full versioned paths data for the tree matching the scope of the paths the user supplied.
 * Seek once to each directory within the scope and readdir its contents.
@@ -25,7 +25,7 @@
   (proportional to the number we actually calculate).
 
 Per file algorithm
-==================
+------------------
 
 #. If the path is versioned, and it is a directory, push onto the recurse stack.
 #. If the path is supplied by the user or is not ignored, version it, and if a 

=== modified file doc/developers/annotate.txt
--- doc/developers/annotate.txt
+++ doc/developers/annotate.txt
@@ -1,5 +1,5 @@
 Annotate
---------
+========
 
 Broadly tries to ascribe parts of the tree state to individual commits.
 

=== modified file doc/developers/gc.txt
--- doc/developers/gc.txt
+++ doc/developers/gc.txt
@@ -1,5 +1,5 @@
 Garbage Collection
-------------------
+==================
 
 Garbage collection is used to remove data from a repository that is no longer referenced.
 
@@ -7,7 +7,7 @@
 then generating a new repository with less data.
 
 Least work we can hope to perform
-=================================
+---------------------------------
 
 * Read all branches to get initial references - tips + tags.
 * Read through the revision graph to find unreferenced revisions. A cheap HEADS

=== modified file doc/developers/incremental-push-pull.txt
--- doc/developers/incremental-push-pull.txt
+++ doc/developers/incremental-push-pull.txt
@@ -1,5 +1,5 @@
 Incremental push/pull
----------------------
+=====================
 
 This use case covers pulling in or pushing out some number of revisions which
 is typically a small fraction of the number already present in the target
@@ -9,7 +9,7 @@
 responsibility of the Repository object.
 
 Functional Requirements
-=======================
+-----------------------
 
 A push or pull operation must:
  * Copy all the data to reconstruct the selected revisions in the target
@@ -18,7 +18,7 @@
    data, corrupted data should not be incorporated accidentally.
 
 Factors which should add work for push/pull
-===========================================
+-------------------------------------------
 
  * Baseline overhead: The time to connect to both branches.
  * Actual new data in the revisions being pulled (drives the amount of data to
@@ -27,7 +27,7 @@
    determination of what revisions to move around).
 
 Push/pull overview
-==================
+------------------
 
 1. New data is identified in the source repository.
 2. That data is read from the source repository.
@@ -35,7 +35,7 @@
    manner that it's not visible to readers until it's ready for use.
 
 New data identification
-+++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~
 
 We have a single top level data object: revisions. Everything else is
 subordinate to revisions, so determining the revisions to propagate should be
@@ -124,7 +124,7 @@
 
 
 Data reading
-++++++++++++
+~~~~~~~~~~~~
 
 When transferring information about a revision the graph of data for the
 revision is walked: revision -> inventory, revision -> matching signature,
@@ -156,7 +156,7 @@
 
 
 Data Verification and writing
-+++++++++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 New data written to a repository should be completed intact when it is made
 visible. This suggests that either all the data for a revision must be made
@@ -247,7 +247,7 @@
 avoid ending up with corrupt/bad data
 
 Notes from London
-=================
+-----------------
 
  #. setup
 
@@ -268,7 +268,7 @@
    #. validate the sha1 of the full text of each transmitted text.
    #. validate the sha1:name mapping in each newly referenced inventory item.
    #. validate the sha1 of the XML of each inventory against the revision.
-      *** this is proportional to tree size and must be fixed ***
+      **this is proportional to tree size and must be fixed**
 
  #. write the data to the local repo.
    The API should output the file texts needed by the merge as a by-product of the transmission

=== modified file doc/developers/performance-roadmap-rationale.txt
--- doc/developers/performance-roadmap-rationale.txt
+++ doc/developers/performance-roadmap-rationale.txt
@@ -1,5 +1,5 @@
 What should be in the roadmap?
-------------------------------
+==============================
 
 A good roadmap provides a place for contributors to look for tasks, it
 provides users with a sense of when we will fix things that are
@@ -28,7 +28,7 @@
 have learnt over the last years.
 
 What should the final system look like, how is it different to what we have today?
-----------------------------------------------------------------------------------
+==================================================================================
 
 One of the things I like the most about bzr is its rich library API, and
 I've heard this from numerous other folk. So anything that will remove
@@ -50,7 +50,7 @@
 on what we have learnt.
 
 What use cases should be covered?
----------------------------------
+=================================
 
 My list of use cases is probably not complete - it's just the ones I
 happen to see a lot :). I think each should be analysed comprehensively
@@ -88,8 +88,8 @@
  * update
  * cbranch
 
-how is development on the roadmap coordinated?
-----------------------------------------------
+How is development on the roadmap coordinated?
+==============================================
 
 I think we should hold regular get-togethers (on IRC) to coordinate on
 our progress, because this is a big task and it's a lot easier to start

=== modified file doc/developers/performance-roadmap.txt
--- doc/developers/performance-roadmap.txt
+++ doc/developers/performance-roadmap.txt
@@ -2,6 +2,15 @@
 Bazaar Performance Roadmap
 ==========================
 
+.. Mark sections in included files as following:
+..   level 1 ========
+..   level 2 --------
+..   level 3 ~~~~~~~~
+..   level 4 ^^^^^^^^ (avoid using levels deeper than 3!)
+
+.. contents::
+.. sectnum::
+
 About the performance roadmap
 #############################
 
@@ -12,7 +21,10 @@
 
 .. include:: performance-use-case-analysis.txt
 
-.. include:: initial-push-pull.rst
+Use cases
+#########
+
+.. include:: initial-push-pull.txt
 
 .. include:: incremental-push-pull.txt
 
@@ -24,6 +36,6 @@
 
 .. include:: annotate.txt
 
-.. include:: merge-scaling.rst
+.. include:: merge-scaling.txt
 
-.. include:: bundle-creation.rst
+.. include:: bundle-creation.txt

=== modified file doc/developers/performance-use-case-analysis.txt
--- doc/developers/performance-use-case-analysis.txt
+++ doc/developers/performance-use-case-analysis.txt
@@ -1,5 +1,5 @@
 Analysing a specific use case
------------------------------
+=============================
 
 The analysis of a use case needs to provide as outputs:
  * The functional requirements that the use case has to satisfy.
@@ -15,7 +15,7 @@
    should also be mentioned.
 
 Performing the analysis
------------------------
+=======================
 
 The analysis needs to be able to define the characteristics of the
 involved disk storage and APIs. That means we need to examine the data
@@ -38,7 +38,7 @@
 fine to post-process the annotated text to obtain dotted-revnos.'
 
 What factors should be considered?
-----------------------------------
+==================================
 
 Obviously, those that will make for an extremely fast system :). There
 are many possible factors, but the ones I think are most interesting to
@@ -49,7 +49,7 @@
    - The time to get bzr ready to begin the use case.
 
 - scaling: how does performance change when any of the following aspects
-   of the system are ratched massively up or down:
+  of the system are ratcheted massively up or down:
 
    - number of files/dirs/symlinks/subtrees in a tree (both working and 
      revision trees)
@@ -71,12 +71,13 @@
    - bandwidth to the disk storage
    - latency to perform semantic operations (hpss specific)
    - bandwidth when performing semantic operations.
+
 - locality of reference: If an operation requires data that is located
-   within a small region at any point, we often get better performance 
-   than with an implementation of the same operation that requires the
-   same amount of data but with a lower locality of reference. Its 
-   fairly tricky to add locality of reference after the fact, so I think
-   its worth considering up front.
+  within a small region at any point, we often get better performance
+  than with an implementation of the same operation that requires the
+  same amount of data but with a lower locality of reference. It's
+  fairly tricky to add locality of reference after the fact, so I think
+  it's worth considering up front.
 
 Using these factors, to the annotate example we can add that it's
 reasonable to do two 'semantic' round trips to the local tree, one to

=== modified file doc/developers/revert.txt
--- doc/developers/revert.txt
+++ doc/developers/revert.txt
@@ -1,11 +1,11 @@
 Revert
-------
+======
 
 Change users selected paths to be the same as those in a given revision making
 backups of any paths that bzr did not set the last contents itself.
 
 Least work we can hope to perform
-=================================
+---------------------------------
 
 We should be able to do work proportional to the scope the user is reverting
 and the amount of changes between the working tree and the revision being

=== modified directory  // last-changed:bialix@ukr.net-20070605080204-hvhqw69nj
... lpxcscb
# revision id: bialix@ukr.net-20070605080204-hvhqw69njlpxcscb
# sha1: 73b75363e8e9089a74ce10a43b9559e310498294
# inventory sha1: ff8f1310fdf7740ba20676500bccc81f08a7e848
# parent ids:
#   pqm@pqm.ubuntu.com-20070604194535-ihhpf84qp0icoj2t
# base id: pqm@pqm.ubuntu.com-20070604194535-ihhpf84qp0icoj2t
# properties:
#   branch-nick: installer-0.17
