Rev 5539: (gz) Documentation fixes for its uses of "its" when it's meaning "it's" in file:///home/pqm/archives/thelove/bzr/%2Btrunk/
Canonical.com Patch Queue Manager
pqm at pqm.ubuntu.com
Fri Nov 12 23:43:59 GMT 2010
At file:///home/pqm/archives/thelove/bzr/%2Btrunk/
------------------------------------------------------------
revno: 5539 [merge]
revision-id: pqm at pqm.ubuntu.com-20101112234358-l7ncso1pkmwpczup
parent: pqm at pqm.ubuntu.com-20101112002801-qb8mk6pt3pqhp444
parent: zearin at users.sourceforge.net-20101112142836-gk9dk10q6oxdcz37
committer: Canonical.com Patch Queue Manager <pqm at pqm.ubuntu.com>
branch nick: +trunk
timestamp: Fri 2010-11-12 23:43:58 +0000
message:
(gz) Documentation fixes for its uses of "its" when it's meaning "it's"
(Zearin)
modified:
doc/developers/groupcompress-design.txt design-20080705181503-ccbxd6xuy1bdnrpu-2
doc/developers/incremental-push-pull.txt incrementalpushpull.-20070508045640-zneiu1yzbci574c6-1
doc/developers/integration.txt integration.txt-20080404022341-2lorxocp1in07zij-1
doc/developers/performance-roadmap-rationale.txt performanceroadmapra-20070507174912-mwv3xv517cs4sisd-1
doc/developers/performance-use-case-analysis.txt performanceusecasean-20070508045640-zneiu1yzbci574c6-2
doc/developers/planned-change-integration.txt plannedchangeintegra-20070619004702-i1b3ccamjtfaoq6w-1
doc/developers/planned-performance-changes.txt plannedperformancech-20070604053752-bnjdhako613xfufb-1
doc/developers/repository.txt repository.txt-20070709152006-xkhlek456eclha4u-1
doc/developers/revert.txt revert.txt-20070515111013-grc9hgp21zxqbwbl-1
doc/developers/tortoise-strategy.txt tortoisestrategy.txt-20080403024510-2ahdqrvnwqrb5p5t-1
=== modified file 'doc/developers/groupcompress-design.txt'
--- a/doc/developers/groupcompress-design.txt 2009-04-08 16:33:19 +0000
+++ b/doc/developers/groupcompress-design.txt 2010-11-12 14:28:36 +0000
@@ -29,7 +29,7 @@
Reasonable sizes 'amount read' from remote machines to reconstruct an arbitrary
text: Reading 5MB for a 100K plain text is not a good trade off. Reading (say)
-500K is probably acceptable. Reading ~100K is ideal. However, its likely that
+500K is probably acceptable. Reading ~100K is ideal. However, it's likely that
some texts (e.g NEWS versions) can be stored for nearly-no space at all if we
are willing to have unbounded IO. Profiling to set a good heuristic will be
important. Also allowing users to choose to optimise for a server environment
=== modified file 'doc/developers/incremental-push-pull.txt'
--- a/doc/developers/incremental-push-pull.txt 2009-12-02 20:34:07 +0000
+++ b/doc/developers/incremental-push-pull.txt 2010-11-12 14:28:36 +0000
@@ -32,7 +32,7 @@
1. New data is identified in the source repository.
2. That data is read from the source repository.
3. The same data is verified and written to the target repository in such a
- manner that its not visible to readers until its ready for use.
+ manner that it's not visible to readers until it's ready for use.
New data identification
~~~~~~~~~~~~~~~~~~~~~~~
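The "not visible to readers until it's ready" requirement in step 3 above is what bzrlib's write groups provide: data staged inside a write group only becomes readable once the group is committed. A minimal sketch, assuming a reachable target branch at a placeholder path::

    from bzrlib.branch import Branch

    # Open the target repository via its branch ('/path/to/target' is a
    # placeholder for this sketch).
    branch = Branch.open('/path/to/target')
    repo = branch.repository

    repo.lock_write()
    try:
        # Anything inserted inside the write group stays invisible to
        # readers until commit_write_group() completes.
        repo.start_write_group()
        try:
            pass  # insert the new revisions, inventories and texts here
            repo.commit_write_group()
        except:
            repo.abort_write_group()
            raise
    finally:
        repo.unlock()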
@@ -119,7 +119,7 @@
ghost points from the target; plus a set difference search is needed on
signatures.
- * Semantic level can probably be tuned, but as its also complex I suggest
+ * Semantic level can probably be tuned, but as it's also complex I suggest
deferring analysis for optimal behaviour of this use case.
@@ -180,7 +180,7 @@
validate the contents of a revision we need the new texts in the inventory for
the revision - to check a fileid:revisionid we need to know the expected sha1
of the full text and thus also need to read the delta chain to construct the
-text as we accept it to determine if its valid. Providing separate validators
+text as we accept it to determine if it's valid. Providing separate validators
for the chosen representation would address this.
e.g: For an inventory entry FILEID:REVISIONID we store the validator of the
full text :SHA1:. If we also stored the validator of the chosen disk
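The full-text check described in this hunk amounts to comparing the SHA-1 of the text reconstructed from the delta chain against the recorded validator. A small illustration (the function and argument names are invented for this sketch, not bzrlib API)::

    import hashlib

    def text_matches_validator(fulltext_lines, expected_sha1):
        """Check a full text rebuilt from its delta chain against the
        hex SHA-1 digest recorded for FILEID:REVISIONID."""
        fulltext = ''.join(fulltext_lines)
        return hashlib.sha1(fulltext).hexdigest() == expected_sha1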
=== modified file 'doc/developers/integration.txt'
--- a/doc/developers/integration.txt 2010-05-16 09:29:44 +0000
+++ b/doc/developers/integration.txt 2010-11-12 14:28:36 +0000
@@ -43,7 +43,7 @@
This gives us a WorkingTree object, which has various methods spread over
-itself, and its parent classes MutableTree and Tree - its worth having a
+itself, and its parent classes MutableTree and Tree - it's worth having a
look through these three files (workingtree.py, mutabletree.py and tree.py)
to see which methods are available.
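For reference, the kind of usage integration.txt is describing looks roughly like the following (the path is a placeholder)::

    from bzrlib import workingtree

    # open_containing() walks up from the given path and returns the
    # enclosing WorkingTree plus the path relative to the tree root.
    wt, relpath = workingtree.WorkingTree.open_containing('/path/to/checkout')

    wt.lock_read()
    try:
        # Methods inherited from Tree and MutableTree are available on
        # the same object, e.g. the tree's last committed revision:
        print(wt.last_revision())
    finally:
        wt.unlock()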
=== modified file 'doc/developers/performance-roadmap-rationale.txt'
--- a/doc/developers/performance-roadmap-rationale.txt 2007-06-05 08:02:04 +0000
+++ b/doc/developers/performance-roadmap-rationale.txt 2010-11-12 14:28:36 +0000
@@ -109,7 +109,7 @@
of great assistance I think. We want to help everyone that wishes to
contribute to performance to do so effectively.
-Finally, its important to note that coding is not the only contribution
+Finally, it's important to note that coding is not the only contribution
- testing, giving feedback on current performance, helping with the
analysis are all extremely important tasks too and we probably want to
have clear markers of where that should be done to encourage such
=== modified file 'doc/developers/performance-use-case-analysis.txt'
--- a/doc/developers/performance-use-case-analysis.txt 2009-12-02 20:34:07 +0000
+++ b/doc/developers/performance-use-case-analysis.txt 2010-11-12 14:28:36 +0000
@@ -38,7 +38,7 @@
be constant time. Retrieval of the annotated text should be roughly
constant for any text of the same size regardless of the number of
revisions contributing to its content. Mapping of the revision ids to
-dotted revnos could be done as the text is retrieved, but its completely
+dotted revnos could be done as the text is retrieved, but it's completely
fine to post-process the annotated text to obtain dotted-revnos.'
What factors should be considered?
@@ -79,7 +79,7 @@
- locality of reference: If an operation requires data that is located
within a small region at any point, we often get better performance
than with an implementation of the same operation that requires the
- same amount of data but with a lower locality of reference. Its
+ same amount of data but with a lower locality of reference. It's
fairly tricky to add locality of reference after the fact, so I think
its worth considering up front.
@@ -102,8 +102,8 @@
file-level operation latency considerations.
Many performance problems only become visible when changing the scaling knobs
-upwards to large trees. On small trees its our baseline performance that drives
-incremental improvements; on large trees its the amount of processing per item
+upwards to large trees. On small trees it's our baseline performance that drives
+incremental improvements; on large trees it's the amount of processing per item
that drives performance. A significant goal therefore is to keep the amount of
data to be processed under control. Ideally we can scale in a sublinear fashion
for all operations, but we MUST NOT scale even linearly for operations that
=== modified file 'doc/developers/planned-change-integration.txt'
--- a/doc/developers/planned-change-integration.txt 2010-06-02 05:03:31 +0000
+++ b/doc/developers/planned-change-integration.txt 2010-11-12 14:28:36 +0000
@@ -66,7 +66,7 @@
* Network-efficient revision graph API: This influences what questions we will
want to ask a local repository very quickly; as such it's a driver for the
- new repository format and should be in place first if possible. Its probably
+ new repository format and should be in place first if possible. It's probably
not sufficiently different to local operations to make this a hard ordering
though.
@@ -91,14 +91,14 @@
* Repository stacking API: The key dependency/change required for this is that
repositories must individually be happy with having partial data - e.g. many
ghosts. However the way the API needs to be used should be driven from the
- command layer in, because its unclear at the moment what will work best.
+ command layer in, because it's unclear at the moment what will work best.
* Revision stream API: This API will become clear as we streamline commands.
On the data insertion side commit will want to generate new data. The
commands pull, bundle, merge, push, possibly uncommit will want to copy
existing data in a streaming fashion.
- * New container format: Its hard to tell what the right way to structure the
+ * New container format: It's hard to tell what the right way to structure the
layering is. Probably having smooth layering down to the point that code
wants to operate on the containers directly will make this more clear. As
bundles will become a read-only branch & repository, the smart server wants
@@ -135,7 +135,7 @@
today, it is likely able to be implemented immediately, but we are not sure
that its needed anymore, so it is being shelved.
- * Removing derivable data: Its very hard to do this while the derived data is
+ * Removing derivable data: It's very hard to do this while the derived data is
exposed in API's but not used by commands. Implemented the targeted API's
for our core use cases should allow use to remove accidental use of derived
data, making only explicit uses of it visible, and isolating the impact of
=== modified file 'doc/developers/planned-performance-changes.txt'
--- a/doc/developers/planned-performance-changes.txt 2010-08-13 19:08:57 +0000
+++ b/doc/developers/planned-performance-changes.txt 2010-11-12 14:28:36 +0000
@@ -43,7 +43,7 @@
without changing disk, possibly with a performance hit until the disk
formats match the validatory logic. It will be hard to tell if we have the
right routine for that until all the disk changes are complete, so while
- this is a library only change, its likely one that will be delayed to near
+ this is a library only change, it's likely one that will be delayed to near
the end of the process.
* Add an explicit API for managing cached annotations. While annotations are
@@ -68,7 +68,7 @@
through review.
* Stop requiring full memory copies of files. Currently bzr requires that it
- can hold 3 copies of any file its versioning in memory. Solving this is
+ can hold 3 copies of any file it's versioning in memory. Solving this is
tricky, particularly without performance regressions on small files, but
without solving it versioning of .iso and other large objects will continue
to be extremely painful.
@@ -84,7 +84,7 @@
* Revision data manipulation API. We need a single streaming API for adding
data to or getting it from a repository. This will need to allow hints such
as 'optimise for size', or 'optimise for fast-addition' to meet the various
- users planned, but it is a core part of the library today, and its not
+ users planned, but it is a core part of the library today, and it's not
sufficiently clean to let us simplify/remove a lot of related code today.
Interoperable disk changes
@@ -104,7 +104,7 @@
* Repository disk operation ordering. The order that tasks access data within
the repository and the layout of the data should be harmonised. This will
- require disk format changes but does not inherently alter the model, so its
+ require disk format changes but does not inherently alter the model, so it's
straight forward to export from a repository that has been optimised in this
way to a 0.16 based repository.
=== modified file 'doc/developers/repository.txt'
--- a/doc/developers/repository.txt 2009-12-02 20:34:07 +0000
+++ b/doc/developers/repository.txt 2010-11-12 14:28:36 +0000
@@ -246,7 +246,7 @@
offset for the data record in the knit, the byte length for the data
record in the knit and the no-end-of-line flag.
-Its important to note that knit repositories cannot be regenerated by
+It's important to note that knit repositories cannot be regenerated by
scanning .knits, so a mapped index is still irreplaceable and must be
transmitted on push/pull.
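The per-record fields named above (byte offset, byte length, no-end-of-line flag) are enough to locate one raw record in the .knit; a small sketch, with names invented for illustration and the surrounding index format deliberately not modelled::

    from collections import namedtuple

    # Only the three fields mentioned above; the rest of the index
    # record format is out of scope for this sketch.
    KnitRecordPointer = namedtuple(
        'KnitRecordPointer', ['offset', 'length', 'no_eol'])

    def read_record_bytes(knit_file, pointer):
        """Read one data record's raw bytes from an open .knit file."""
        knit_file.seek(pointer.offset)
        return knit_file.read(pointer.length)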
=== modified file 'doc/developers/revert.txt'
--- a/doc/developers/revert.txt 2009-12-02 20:34:07 +0000
+++ b/doc/developers/revert.txt 2010-11-12 14:28:36 +0000
@@ -17,7 +17,7 @@
for the selected scopes, for each element in the wt:
1. get hash tree data for that scope.
- 1. get 'new enough' hash data for the siblings of the scope: it can be out of date as long as its not older than the last move or rename out of that siblings scope.
+ 1. get 'new enough' hash data for the siblings of the scope: it can be out of date as long as it's not older than the last move or rename out of that siblings scope.
1. Use the hash tree data to tune the work done in finding matching paths/ids which are different in the two trees.
For each thing that needs to change - group by target directory?
=== modified file 'doc/developers/tortoise-strategy.txt'
--- a/doc/developers/tortoise-strategy.txt 2009-12-02 20:34:07 +0000
+++ b/doc/developers/tortoise-strategy.txt 2010-11-12 14:28:36 +0000
@@ -437,7 +437,7 @@
* Implement property pages and context menus in C++. Expand RPC server as
necessary.
-* Create binary for alpha releases, then go round-and-round until its baked.
+* Create binary for alpha releases, then go round-and-round until it's baked.
Alternative Implementation Strategies
-------------------------------------